[error_logger:info,2017-10-01T10:13:43.748-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.139.0>}, {name,timer_server}, {mfargs,{timer,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:44.341-07:00,nonode@nohost:<0.89.0>:ns_server:init_logging:138]Started & configured logging
[ns_server:info,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]Static config terms: [{error_logger_mf_dir,"@DATA_PREFIX@/var/lib/couchbase/logs"}, {path_config_bindir,"@BIN_PREFIX@/bin"}, {path_config_etcdir,"@BIN_PREFIX@/etc/couchbase"}, {path_config_libdir,"@BIN_PREFIX@/lib"}, {path_config_datadir,"@DATA_PREFIX@/var/lib/couchbase"}, {path_config_tmpdir,"@DATA_PREFIX@/var/lib/couchbase/tmp"}, {path_config_secdir,"@BIN_PREFIX@/etc/security"}, {nodefile,"@DATA_PREFIX@/var/lib/couchbase/couchbase-server.node"}, {loglevel_default,debug}, {loglevel_couchdb,info}, {loglevel_ns_server,debug}, {loglevel_error_logger,debug}, {loglevel_user,debug}, {loglevel_menelaus,debug}, {loglevel_ns_doctor,debug}, {loglevel_stats,debug}, {loglevel_rebalance,debug}, {loglevel_cluster,debug}, {loglevel_views,debug}, {loglevel_mapreduce_errors,debug}, {loglevel_xdcr,debug}, {loglevel_xdcr_trace,error}, {loglevel_access,info}, {disk_sink_opts, [{rotation, [{compress,true}, {size,41943040}, {num_files,10}, {buffer_size_max,52428800}]}]}, {disk_sink_opts_json_rpc, [{rotation, [{compress,true}, {size,41943040}, {num_files,2}, {buffer_size_max,52428800}]}]}, {disk_sink_opts_xdcr_trace, [{rotation,[{compress,false},{size,83886080},{num_files,5}]}]}, {net_kernel_verbosity,10}]
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter error_logger_mf_dir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_bindir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_etcdir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_libdir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_datadir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_tmpdir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter path_config_secdir, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.345-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter nodefile, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_default, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_couchdb, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_server, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_error_logger, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_user, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_menelaus, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_doctor, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_stats, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_rebalance, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_cluster, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_views, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_mapreduce_errors, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr_trace, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter loglevel_access, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts_json_rpc, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts_xdcr_trace, which is given from command line
[ns_server:warn,2017-10-01T10:13:44.346-07:00,nonode@nohost:<0.89.0>:ns_server:log_pending:32]not overriding parameter net_kernel_verbosity, which is given from command line
[error_logger:info,2017-10-01T10:13:44.352-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.147.0>}, {name,local_tasks}, {mfargs,{local_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:44.372-07:00,nonode@nohost:ns_server_cluster_sup<0.146.0>:log_os_info:start_link:25]OS type: {unix,linux} Version: {3,13,0} Runtime info: [{otp_release,"R16B03-1"}, {erl_version,"5.10.4.0.0.1"}, {erl_version_long, "Erlang R16B03-1 (erts-5.10.4.0.0.1) [source-3ea8397] [64-bit] [smp:4:4] [async-threads:16] [kernel-poll:true]\n"}, {system_arch_raw,"x86_64-unknown-linux-gnu"}, {system_arch,"x86_64-unknown-linux-gnu"}, {localtime,{{2017,10,1},{10,13,44}}}, {memory, [{total,111416576}, {processes,10173232}, {processes_used,10171560}, {system,101243344}, {atom,339441}, {atom_used,320141}, {binary,53264}, {code,7756655}, {ets,2413152}]}, {loaded, [ns_info,log_os_info,local_tasks,restartable, ns_server_cluster_sup,'ale_logger-metakv', 'ale_logger-rebalance','ale_logger-xdcr_trace', 'ale_logger-menelaus','ale_logger-stats', 'ale_logger-json_rpc','ale_logger-access',calendar, ale_default_formatter,io_lib_fread,'ale_logger-ns_server', 'ale_logger-user','ale_logger-ns_doctor', 'ale_logger-cluster','ale_logger-xdcr',otp_internal, ale_stderr_sink,ns_log_sink,ale_disk_sink,misc,couch_util, ns_server,filelib,cpu_sup,memsup,disksup,os_mon,io, release_handler,overload,alarm_handler,sasl,timer, tftp_sup,httpd_sup,httpc_handler_sup,httpc_cookie, inets_trace,httpc_manager,httpc,httpc_profile_sup, httpc_sup,ftp_sup,inets_sup,inets_app,ssl,lhttpc_manager, lhttpc_sup,lhttpc,tls_connection_sup,ssl_session_cache, ssl_pkix_db,ssl_manager,ssl_sup,ssl_app,crypto_server, crypto_sup,crypto_app,ale_error_logger_handler, 'ale_logger-ale_logger','ale_logger-error_logger', beam_opcodes,beam_dict,beam_asm,beam_validator,beam_z, beam_flatten,beam_trim,beam_receive,beam_bsm,beam_peep, beam_dead,beam_split,beam_type,beam_bool,beam_except, beam_clean,beam_utils,beam_block,beam_jump,beam_a, v3_codegen,v3_life,v3_kernel,sys_core_dsetel,erl_bifs, sys_core_fold,cerl_trees,sys_core_inline,core_lib,cerl, v3_core,erl_bits,erl_expand_records,sys_pre_expand,sofs, erl_internal,sets,ordsets,erl_lint,compile, dynamic_compile,ale_utils,io_lib_pretty,io_lib_format, io_lib,ale_codegen,dict,ale,ale_dynamic_sup,ale_sup, ale_app,epp,ns_bootstrap,child_erlang,file_io_server, orddict,erl_eval,file,c,kernel_config,user_io,user_sup, supervisor_bridge,standard_error,code_server,unicode, hipe_unified_loader,gb_sets,ets,binary,code,file_server, net_kernel,global_group,erl_distribution,filename, inet_gethost_native,os,inet_parse,inet,inet_udp, inet_config,inet_db,global,gb_trees,rpc,supervisor,kernel, application_master,sys,application,gen_server,erl_parse, proplists,erl_scan,lists,application_controller,proc_lib, gen,gen_event,error_logger,heart,error_handler, erts_internal,erlang,erl_prim_loader,prim_zip,zlib, prim_file,prim_inet,prim_eval,init,otp_ring0]}, {applications, [{lhttpc,"Lightweight HTTP Client","1.3.0"}, {os_mon,"CPO CXC 138 46","2.2.14"}, {public_key,"Public key infrastructure","0.21"}, {asn1,"The Erlang ASN1 compiler version 2.0.4","2.0.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}, {ale,"Another Logger for Erlang","5.0.0-0000-enterprise"}, {inets,"INETS CXC 138 49","5.9.8"}, {ns_server,"Couchbase server","5.0.0-0000-enterprise"}, {crypto,"CRYPTO version 2","3.2"}, {ssl,"Erlang/OTP SSL application","5.3.3"}, {sasl,"SASL CXC 138 11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}]}, {pre_loaded, [erts_internal,erlang,erl_prim_loader,prim_zip,zlib, prim_file,prim_inet,prim_eval,init,otp_ring0]}, {process_count,113}, {node,nonode@nohost}, {nodes,[]}, {registered, [inets_sup,'sink-ns_log',lhttpc_sup,code_server, ale_stats_events,lhttpc_manager,application_controller, ale,httpd_sup,release_handler,'sink-disk_json_rpc', kernel_safe_sup,'sink-disk_metakv',standard_error,ale_sup, overload,error_logger,'sink-disk_access_int', alarm_handler,'sink-disk_access',ale_dynamic_sup, timer_server,'sink-xdcr_trace',standard_error_sup, 'sink-disk_reports','sink-disk_stats',crypto_server, 'sink-disk_xdcr_errors',ns_server_cluster_sup,crypto_sup, sasl_safe_sup,'sink-disk_xdcr',init,tftp_sup, 'sink-disk_debug',inet_db,os_mon_sup,rex,user, 'sink-disk_error',tls_connection_sup,ssl_sup,kernel_sup, cpu_sup,'sink-disk_default',global_name_server,memsup, disksup,httpc_sup,file_server_2,ssl_manager,global_group, httpc_profile_sup,httpc_manager,httpc_handler_sup,ftp_sup, local_tasks,sasl_sup,'sink-stderr',erl_prim_loader]}, {cookie,nocookie}, {wordsize,8}, {wall_clock,3}]
[ns_server:info,2017-10-01T10:13:44.390-07:00,nonode@nohost:ns_server_cluster_sup<0.146.0>:log_os_info:start_link:27]Manifest: []
[error_logger:info,2017-10-01T10:13:44.403-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.148.0>}, {name,timeout_diag_logger}, {mfargs,{timeout_diag_logger,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:44.406-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ip_start"
[ns_server:info,2017-10-01T10:13:44.406-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ip"
[ns_server:info,2017-10-01T10:13:44.406-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:init:163]ip config not found. Looks like we're brand new node
[error_logger:info,2017-10-01T10:13:44.410-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,inet_gethost_native_sup} started: [{pid,<0.151.0>},{mfa,{inet_gethost_native,init,[[]]}}]
[error_logger:info,2017-10-01T10:13:44.411-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.150.0>}, {name,inet_gethost_native_sup}, {mfargs,{inet_gethost_native,start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:46.066-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:bringup:214]Attempting to bring up net_kernel with name 'n_0@127.0.0.1'
[error_logger:info,2017-10-01T10:13:46.097-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.153.0>}, {name,erl_epmd}, {mfargs,{erl_epmd,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.097-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.154.0>}, {name,auth}, {mfargs,{auth,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.103-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',local_nodeup,{node,'n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:46.103-07:00,n_0@127.0.0.1:dist_manager<0.149.0>:dist_manager:configure_net_kernel:258]Set net_kernel vebosity to 10 -> 0
[error_logger:info,2017-10-01T10:13:46.103-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.155.0>}, {name,net_kernel}, {mfargs, {net_kernel,start_link, [['n_0@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.103-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_sup} started: [{pid,<0.152.0>}, {name,net_sup_dynamic}, {mfargs, {erl_distribution,start_link, [['n_0@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}]
[ns_server:info,2017-10-01T10:13:46.107-07:00,n_0@127.0.0.1:dist_manager<0.149.0>:dist_manager:save_node:147]saving node to "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/nodefile"
[ns_server:debug,2017-10-01T10:13:46.122-07:00,n_0@127.0.0.1:dist_manager<0.149.0>:dist_manager:bringup:228]Attempted to save node name to disk: ok
[ns_server:debug,2017-10-01T10:13:46.122-07:00,n_0@127.0.0.1:dist_manager<0.149.0>:dist_manager:wait_for_node:235]Waiting for connection to node 'babysitter_of_n_0@127.0.0.1' to be established
[error_logger:info,2017-10-01T10:13:46.123-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'babysitter_of_n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:46.140-07:00,n_0@127.0.0.1:dist_manager<0.149.0>:dist_manager:wait_for_node:247]Observed node 'babysitter_of_n_0@127.0.0.1' to come up
[error_logger:info,2017-10-01T10:13:46.146-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.149.0>}, {name,dist_manager}, {mfargs,{dist_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.154-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.160.0>}, {name,ns_cookie_manager}, {mfargs,{ns_cookie_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.155-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.161.0>}, {name,ns_cluster}, {mfargs,{ns_cluster,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:46.160-07:00,n_0@127.0.0.1:ns_config_sup<0.162.0>:ns_config_sup:init:32]loading static ns_config from "priv/config"
[error_logger:info,2017-10-01T10:13:46.161-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.163.0>}, {name,ns_config_events}, {mfargs, {gen_event,start_link,[{local,ns_config_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.161-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.164.0>}, {name,ns_config_events_local}, {mfargs, {gen_event,start_link, [{local,ns_config_events_local}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:46.280-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:load_config:1078]Loading static config from "priv/config"
[ns_server:info,2017-10-01T10:13:46.292-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:load_config:1092]Loading dynamic config from "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/config.dat"
[ns_server:info,2017-10-01T10:13:46.302-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:load_config:1097]No dynamic config file found. Assuming we're brand new node
[ns_server:debug,2017-10-01T10:13:46.313-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:load_config:1100]Here's full dynamic config we loaded: [[]]
[ns_server:info,2017-10-01T10:13:46.318-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:load_config:1121]Here's full dynamic config we loaded + static & default config: [{password_policy,[{min_length,6},{must_present,[]}]}, {drop_request_memory_threshold_mib,undefined}, {{request_limit,capi},undefined}, {{request_limit,rest},undefined}, {auto_reprovision_cfg,[{enabled,true},{max_nodes,1},{count,0}]}, {auto_failover_cfg,[{enabled,false},{timeout,120},{max_nodes,1},{count,0}]}, {replication,[{enabled,true}]}, {alert_limits, [{max_overhead_perc,50},{max_disk_used,90},{max_indexer_ram,75}]}, {email_alerts, [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server, [{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]}, {alerts, [auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events,indexer_ram_max_usage, ep_clock_cas_drift_threshold_exceeded,communication_issue]}]}, {{node,'n_0@127.0.0.1',ns_log}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {filename, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}]}, {{node,'n_0@127.0.0.1',port_servers}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}]}, {{node,'n_0@127.0.0.1',moxi}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12001}, {verbosity,[]}]}, {secure_headers,[]}, {buckets,[{configs,[]}]}, {cbas_memory_quota,3190}, {fts_memory_quota,319}, {memory_quota,3190}, {{node,'n_0@127.0.0.1',memcached_config}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]}]}, {{node,'n_0@127.0.0.1',memcached}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12000}, {dedicated_port,11999}, {ssl_port,11996}, {admin_user,"@ns_server"}, {other_users, ["@cbq-engine","@projector","@goxdcr","@index","@fts","@cbas"]}, {admin_pass,"*****"}, {engines, [{membase, [{engine, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached, [{engine, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}]}, {{node,'n_0@127.0.0.1',memcached_defaults}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}]}, {memcached,[]}, {{node,'n_0@127.0.0.1',audit}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {log_path,"logs/n_0"}]}, {audit, [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}]}, {{node,'n_0@127.0.0.1',isasl}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}]}, {remote_clusters,[]}, {read_only_user_creds,null}, {rest_creds,null}, {{node,'n_0@127.0.0.1',ssl_proxy_upstream_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 11997]}, {{node,'n_0@127.0.0.1',ssl_proxy_downstream_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 11998]}, {{node,'n_0@127.0.0.1',cbas_ssl_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19300]}, {{node,'n_0@127.0.0.1',cbas_auth_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9310]}, {{node,'n_0@127.0.0.1',cbas_debug_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9309]}, {{node,'n_0@127.0.0.1',cbas_messaging_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9308]}, {{node,'n_0@127.0.0.1',cbas_result_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9307]}, {{node,'n_0@127.0.0.1',cbas_data_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9306]}, {{node,'n_0@127.0.0.1',cbas_cluster_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9305]}, {{node,'n_0@127.0.0.1',cbas_hyracks_console_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9304]}, {{node,'n_0@127.0.0.1',cbas_cc_client_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9303]}, {{node,'n_0@127.0.0.1',cbas_cc_cluster_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9302]}, {{node,'n_0@127.0.0.1',cbas_cc_http_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9301]}, {{node,'n_0@127.0.0.1',cbas_http_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9300]}, {{node,'n_0@127.0.0.1',fts_ssl_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19200]}, {{node,'n_0@127.0.0.1',fts_http_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9200]}, {{node,'n_0@127.0.0.1',indexer_stmaint_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9105]}, {{node,'n_0@127.0.0.1',indexer_stcatchup_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9104]}, {{node,'n_0@127.0.0.1',indexer_stinit_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9103]}, {{node,'n_0@127.0.0.1',indexer_https_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19102]}, {{node,'n_0@127.0.0.1',indexer_http_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9102]}, {{node,'n_0@127.0.0.1',indexer_scan_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9101]}, {{node,'n_0@127.0.0.1',indexer_admin_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9100]}, {{node,'n_0@127.0.0.1',xdcr_rest_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 13000]}, {{node,'n_0@127.0.0.1',projector_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 10000]}, {{node,'n_0@127.0.0.1',ssl_query_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19499]}, {{node,'n_0@127.0.0.1',query_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9499]}, {{node,'n_0@127.0.0.1',ssl_capi_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19500]}, {{node,'n_0@127.0.0.1',capi_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 9500]}, {{node,'n_0@127.0.0.1',ssl_rest_port}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| 19000]}, {{node,'n_0@127.0.0.1',rest}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,9000}, {port_meta,local}]}, {{couchdb,max_parallel_replica_indexers},2}, {{couchdb,max_parallel_indexers},4}, {rest,[{port,8091}]}, {{node,'n_0@127.0.0.1',membership}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| active]}, {server_groups, [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@127.0.0.1']}]]}, {nodes_wanted,['n_0@127.0.0.1']}, {{node,'n_0@127.0.0.1',compaction_daemon}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}]}, {set_view_update_daemon, [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}]}, {autocompaction, [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}]}, {max_bucket_count,10}, {index_aware_rebalance_disabled,false}, {{node,'n_0@127.0.0.1',ldap_enabled}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| true]}, {{node,'n_0@127.0.0.1',is_enterprise}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| true]}, {{node,'n_0@127.0.0.1',config_version}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| {5,0}]}, {{node,'n_0@127.0.0.1',uuid}, [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>]}]
[error_logger:info,2017-10-01T10:13:46.357-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.165.0>}, {name,ns_config}, {mfargs, {ns_config,start_link, ["priv/config",ns_config_default]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.359-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.168.0>}, {name,ns_config_remote}, {mfargs,{ns_config_replica,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.363-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.169.0>}, {name,ns_config_log}, {mfargs,{ns_config_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.363-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.162.0>}, {name,ns_config_sup}, {mfargs,{ns_config_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2017-10-01T10:13:46.366-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.171.0>}, {name,vbucket_filter_changes_registry}, {mfargs, {ns_process_registry,start_link, [vbucket_filter_changes_registry, [{terminate_command,shutdown}]]}}, {restart_type,permanent}, {shutdown,100}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.368-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.172.0>}, {name,json_rpc_connection_sup}, {mfargs,{json_rpc_connection_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2017-10-01T10:13:46.399-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.175.0>}, {name,remote_monitors}, {mfargs,{remote_monitors,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:46.401-07:00,n_0@127.0.0.1:menelaus_barrier<0.176.0>:one_shot_barrier:barrier_body:58]Barrier menelaus_barrier has started
[error_logger:info,2017-10-01T10:13:46.401-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.176.0>}, {name,menelaus_barrier}, {mfargs,{menelaus_sup,barrier_start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.401-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.177.0>}, {name,rest_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,rest_lhttpc_pool}, {connection_timeout,120000}, {pool_size,20}]]}}, {restart_type,{permanent,1}}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.415-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.178.0>}, {name,memcached_refresh}, {mfargs,{memcached_refresh,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:46.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.180.0>}, {name,ssl_service_events}, {mfargs, {gen_event,start_link, [{local,ssl_service_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:46.444-07:00,n_0@127.0.0.1:ns_ssl_services_setup<0.181.0>:ns_ssl_services_setup:init:388]Used ssl options: [{keyfile,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/ssl-cert-key.pem"}, {certfile,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/ssl-cert-key.pem"}, {versions,[tlsv1,'tlsv1.1','tlsv1.2']}, {cacertfile,undefined}, {dh,<<48,130,1,8,2,130,1,1,0,152,202,99,248,92,201,35,238,246,5,77,93,120,10, 118,129,36,52,111,193,167,220,49,229,106,105,152,133,121,157,73,158, 232,153,197,197,21,171,140,30,207,52,165,45,8,221,162,21,199,183,66, 211,247,51,224,102,214,190,130,96,253,218,193,35,43,139,145,89,200,250, 145,92,50,80,134,135,188,205,254,148,122,136,237,220,186,147,187,104, 159,36,147,217,117,74,35,163,145,249,175,242,18,221,124,54,140,16,246, 169,84,252,45,47,99,136,30,60,189,203,61,86,225,117,255,4,91,46,110, 167,173,106,51,65,10,248,94,225,223,73,40,232,140,26,11,67,170,118,190, 67,31,127,233,39,68,88,132,171,224,62,187,207,160,189,209,101,74,8,205, 174,146,173,80,105,144,246,25,153,86,36,24,178,163,64,202,221,95,184, 110,244,32,226,217,34,55,188,230,55,16,216,247,173,246,139,76,187,66, 211,159,17,46,20,18,48,80,27,250,96,189,29,214,234,241,34,69,254,147, 103,220,133,40,164,84,8,44,241,61,164,151,9,135,41,60,75,4,202,133,173, 72,6,69,167,89,112,174,40,229,171,2,1,2>>}, {ciphers,[{dhe_rsa,aes_256_cbc,sha256}, {dhe_dss,aes_256_cbc,sha256}, {rsa,aes_256_cbc,sha256}, {dhe_rsa,aes_128_cbc,sha256}, {dhe_dss,aes_128_cbc,sha256}, {rsa,aes_128_cbc,sha256}, {dhe_rsa,aes_256_cbc,sha}, {dhe_dss,aes_256_cbc,sha}, {rsa,aes_256_cbc,sha}, {dhe_rsa,'3des_ede_cbc',sha}, {dhe_dss,'3des_ede_cbc',sha}, {rsa,'3des_ede_cbc',sha}, {dhe_rsa,aes_128_cbc,sha}, {dhe_dss,aes_128_cbc,sha}, {rsa,aes_128_cbc,sha}]}]
[ns_server:debug,2017-10-01T10:13:47.665-07:00,n_0@127.0.0.1:ns_ssl_services_setup<0.181.0>:ns_server_cert:generate_cert_and_pkey:78]Generated certificate and private key in 1217150 us
[ns_server:debug,2017-10-01T10:13:47.665-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cert_and_pkey -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097227}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIB/TCCAWagAwIBAgIIFOmBlRUDHnQwDQYJKoZIhvcNAQELBQAwJDEiMCAGA1UE\nAxMZQ291Y2hiYXNlIFNlcnZlciAzODAyZTUzNTAeFw0xMzAxMDEwMDAwMDBaFw00\nOTEyMzEyMzU5NTlaMCQxIjAgBgNVBAMTGUNvdWNoYmFzZSBTZXJ2ZXIgMzgwMmU1\nMzUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMxj2iFzf5TxfmT0Q61Jd2cM\nNDHmKB8FjpZWy2CI9iIKeM8oSrLwpq1himl3y7umd2vaUVE9gg9P5TTCGSgYkwNu\nqY5UC88wScAB4/aCx/CAfze8ON/h983"...>>, <<"*****">>}]
[ns_server:debug,2017-10-01T10:13:47.665-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097227}}]}]
[ns_server:info,2017-10-01T10:13:47.666-07:00,n_0@127.0.0.1:ns_ssl_services_setup<0.181.0>:ns_ssl_services_setup:maybe_generate_local_cert:557]Failed to read node certificate. Perhaps it wasn't created yet. Error: {error, {badmatch, {error, enoent}}}
[ns_server:info,2017-10-01T10:13:48.523-07:00,n_0@127.0.0.1:ns_ssl_services_setup<0.181.0>:ns_ssl_services_setup:do_generate_local_cert:545]Saved local cert for node 'n_0@127.0.0.1'
[error_logger:info,2017-10-01T10:13:48.531-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.181.0>}, {name,ns_ssl_services_setup}, {mfargs,{ns_ssl_services_setup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:48.562-07:00,n_0@127.0.0.1:<0.188.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for n1ql
[ns_server:info,2017-10-01T10:13:48.562-07:00,n_0@127.0.0.1:<0.188.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for cbas
[ns_server:info,2017-10-01T10:13:48.562-07:00,n_0@127.0.0.1:<0.188.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for fts
[ns_server:debug,2017-10-01T10:13:48.585-07:00,n_0@127.0.0.1:<0.188.0>:restartable:start_child:98]Started child process <0.190.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]}
[error_logger:info,2017-10-01T10:13:48.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.188.0>}, {name,ns_rest_ssl_service}, {mfargs, {restartable,start_link, [{ns_ssl_services_setup, start_link_rest_service,[]}, 1000]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:48.586-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.179.0>}, {name,ns_ssl_services_sup}, {mfargs,{ns_ssl_services_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2017-10-01T10:13:48.588-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.208.0>}, {name,user_storage_events}, {mfargs, {gen_event,start_link, [{local,user_storage_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:48.597-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_storage_sup} started: [{pid,<0.210.0>}, {name,users_replicator}, {mfargs,{menelaus_users,start_replicator,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:48.600-07:00,n_0@127.0.0.1:users_replicator<0.210.0>:replicated_storage:wait_for_startup:54]Start waiting for startup
[ns_server:debug,2017-10-01T10:13:48.605-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_storage:anounce_startup:68]Announce my startup to <0.210.0>
[ns_server:debug,2017-10-01T10:13:48.605-07:00,n_0@127.0.0.1:users_replicator<0.210.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <0.211.0>
[ns_server:debug,2017-10-01T10:13:48.614-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_dets:open:170]Opening file "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/users.dets"
[error_logger:info,2017-10-01T10:13:48.615-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_storage_sup} started: [{pid,<0.211.0>}, {name,users_storage}, {mfargs,{menelaus_users,start_storage,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:48.615-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.209.0>}, {name,users_storage_sup}, {mfargs,{users_storage_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[ns_server:debug,2017-10-01T10:13:48.633-07:00,n_0@127.0.0.1:compiled_roles_cache<0.213.0>:versioned_cache:init:44]Starting versioned cache compiled_roles_cache
[error_logger:info,2017-10-01T10:13:48.634-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.213.0>}, {name,compiled_roles_cache}, {mfargs,{menelaus_roles,start_compiled_roles_cache,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:48.634-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.207.0>}, {name,users_sup}, {mfargs,{users_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2017-10-01T10:13:48.651-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.217.0>}, {name,dets_sup}, {mfargs,{dets_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}]
[error_logger:info,2017-10-01T10:13:48.651-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.218.0>}, {name,dets}, {mfargs,{dets_server,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:48.670-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,'$1','$2','_','_'}, [], [{{'$1','$2'}}]}], 100}
[ns_server:debug,2017-10-01T10:13:48.670-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_dets:init_after_ack:162]Loading 0 items, 299 words took 0ms
[ns_server:debug,2017-10-01T10:13:48.670-07:00,n_0@127.0.0.1:users_replicator<0.210.0>:doc_replicator:loop:58]doing replicate_newnodes_docs
[ns_server:debug,2017-10-01T10:13:48.674-07:00,n_0@127.0.0.1:wait_link_to_couchdb_node<0.221.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:133]Waiting for ns_couchdb node to start
[error_logger:info,2017-10-01T10:13:48.674-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.216.0>}, {name,start_couchdb_node}, {mfargs,{ns_server_nodes_sup,start_couchdb_node,[]}}, {restart_type,{permanent,5}}, {shutdown,86400000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:48.675-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:48.675-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:48.675-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.224.0>,shutdown}}
[error_logger:info,2017-10-01T10:13:48.675-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:48.876-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:48.876-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.227.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:48.877-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:48.877-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.078-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.079-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.230.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:49.079-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:49.079-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.280-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.280-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.233.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:49.280-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:49.280-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.482-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:49.482-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:49.482-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.236.0>,shutdown}}
[error_logger:info,2017-10-01T10:13:49.482-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.683-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.684-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.239.0>,shutdown}}
[error_logger:info,2017-10-01T10:13:49.684-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:49.684-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:49.885-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:49.886-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.242.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:49.886-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:49.886-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.087-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.087-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.245.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:50.088-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:50.088-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.289-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.289-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.248.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:50.290-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:50.290-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.491-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.492-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.251.0>,shutdown}}
[ns_server:debug,2017-10-01T10:13:50.492-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2017-10-01T10:13:50.492-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}}
[error_logger:info,2017-10-01T10:13:50.693-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}}
[ns_server:debug,2017-10-01T10:13:50.731-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:50.932-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:51.133-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:51.334-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:51.535-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:51.737-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:51.938-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[ns_server:debug,2017-10-01T10:13:52.139-07:00,n_0@127.0.0.1:<0.222.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false
[error_logger:info,2017-10-01T10:13:53.861-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.221.0>}, {name,wait_for_couchdb_node}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:53.867-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.174.0>:ns_storage_conf:setup_db_and_ix_paths:52]Initialize db_and_ix_paths variable with [{db_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/data"}, {index_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/data"}]
[error_logger:info,2017-10-01T10:13:53.894-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.266.0>}, {name,ns_disksup}, {mfargs,{ns_disksup,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:53.897-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.267.0>}, {name,diag_handler_worker}, {mfargs,{work_queue,start_link,[diag_handler_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2017-10-01T10:13:53.899-07:00,n_0@127.0.0.1:ns_server_sup<0.265.0>:dir_size:start_link:39]Starting quick version of dir_size with program name: godu
[error_logger:info,2017-10-01T10:13:53.915-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.268.0>}, {name,dir_size}, {mfargs,{dir_size,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:53.920-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.269.0>}, {name,request_throttler}, {mfargs,{request_throttler,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:53.933-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.271.0>}, {name,timer2_server}, {mfargs,{timer2,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:warn,2017-10-01T10:13:53.934-07:00,n_0@127.0.0.1:ns_log<0.270.0>:ns_log:read_logs:128]Couldn't load logs from "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log" (perhaps it's first startup): {error, enoent}
[error_logger:info,2017-10-01T10:13:53.935-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.270.0>}, {name,ns_log}, {mfargs,{ns_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2017-10-01T10:13:53.936-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.272.0>}, {name,ns_crash_log_consumer}, {mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:53.983-07:00,n_0@127.0.0.1:memcached_passwords<0.273.0>:memcached_cfg:init:62]Init config writer for memcached_passwords, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"
[ns_server:debug,2017-10-01T10:13:53.993-07:00,n_0@127.0.0.1:memcached_passwords<0.273.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"
[ns_server:info,2017-10-01T10:13:57.665-07:00,n_0@127.0.0.1:<0.276.0>:goport:handle_port_os_exit:458]Port exited with status 0
[ns_server:debug,2017-10-01T10:13:57.665-07:00,n_0@127.0.0.1:<0.276.0>:goport:handle_port_erlang_exit:474]Port terminated
[ns_server:debug,2017-10-01T10:13:57.666-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested
[error_logger:info,2017-10-01T10:13:57.666-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.273.0>}, {name,memcached_passwords}, {mfargs,{memcached_passwords,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2017-10-01T10:13:57.669-07:00,n_0@127.0.0.1:memcached_permissions<0.277.0>:memcached_cfg:init:62]Init config writer for memcached_permissions, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"
[ns_server:debug,2017-10-01T10:13:57.670-07:00,n_0@127.0.0.1:memcached_permissions<0.277.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"
[error_logger:info,2017-10-01T10:13:57.670-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.277.0>}, {name,memcached_permissions}, {mfargs,{memcached_permissions,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.670-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.280.0>}, {name,ns_log_events}, {mfargs,{gen_event,start_link,[{local,ns_log_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2017-10-01T10:13:57.685-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:57.685-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [isasl] failed. Retry in 1000 ms. [ns_server:debug,2017-10-01T10:13:57.685-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_cast:55]Refresh of rbac requested [error_logger:info,2017-10-01T10:13:57.686-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.282.0>}, {name,ns_node_disco_events}, {mfargs, {gen_event,start_link, [{local,ns_node_disco_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.686-07:00,n_0@127.0.0.1:ns_node_disco<0.283.0>:ns_node_disco:init:138]Initting ns_node_disco with [] [ns_server:warn,2017-10-01T10:13:57.686-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:57.686-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:13:57.686-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. 
[user:info,2017-10-01T10:13:57.687-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_init:83]Initial otp cookie generated: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.687-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097237}}]}] [ns_server:debug,2017-10-01T10:13:57.687-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: otp -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097237}}]}, {cookie,{sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}}] [ns_server:debug,2017-10-01T10:13:57.687-07:00,n_0@127.0.0.1:<0.284.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.695-07:00,n_0@127.0.0.1:<0.284.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:13:57.696-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.283.0>}, {name,ns_node_disco}, {mfargs,{ns_node_disco,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.698-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.286.0>}, {name,ns_node_disco_log}, {mfargs,{ns_node_disco_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.700-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.287.0>}, {name,ns_node_disco_conf_events}, {mfargs,{ns_node_disco_conf_events,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.703-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.288.0>}, {name,ns_config_rep_merger}, {mfargs,{ns_config_rep,start_link_merger,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.703-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:init:68]init pulling [ns_server:debug,2017-10-01T10:13:57.703-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:init:70]init pushing [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:init:74]init reannouncing [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:50]ns_node_disco_conf_events config on otp [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync 
[ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:<0.297.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:<0.298.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:<0.298.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: otp -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097237}}]}, {cookie,{sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}}] [ns_server:debug,2017-10-01T10:13:57.704-07:00,n_0@127.0.0.1:<0.297.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cert_and_pkey -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097227}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIB/TCCAWagAwIBAgIIFOmBlRUDHnQwDQYJKoZIhvcNAQELBQAwJDEiMCAGA1UE\nAxMZQ291Y2hiYXNlIFNlcnZlciAzODAyZTUzNTAeFw0xMzAxMDEwMDAwMDBaFw00\nOTEyMzEyMzU5NTlaMCQxIjAgBgNVBAMTGUNvdWNoYmFzZSBTZXJ2ZXIgMzgwMmU1\nMzUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMxj2iFzf5TxfmT0Q61Jd2cM\nNDHmKB8FjpZWy2CI9iIKeM8oSrLwpq1himl3y7umd2vaUVE9gg9P5TTCGSgYkwNu\nqY5UC88wScAB4/aCx/CAfze8ON/h983"...>>, <<"*****">>}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:memcached_passwords<0.273.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: alert_limits -> [{max_overhead_perc,50},{max_disk_used,90},{max_indexer_ram,75}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: audit -> [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: auto_failover_cfg -> [{enabled,false},{timeout,120},{max_nodes,1},{count,0}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: auto_reprovision_cfg -> [{enabled,true},{max_nodes,1},{count,0}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: autocompaction -> [{database_fragmentation_threshold,{30,undefined}}, 
{view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[],{configs,[]}] [ns_server:debug,2017-10-01T10:13:57.705-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cbas_memory_quota -> 3190 [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: drop_request_memory_threshold_mib -> undefined [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: email_alerts -> [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server,[{user,[]}, {pass,"*****"}, {host,"localhost"}, {port,25}, {encrypt,false}]}, {alerts,[auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events,indexer_ram_max_usage, ep_clock_cas_drift_threshold_exceeded,communication_issue]}] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: fts_memory_quota -> 319 [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: index_aware_rebalance_disabled -> false [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: max_bucket_count -> 10 [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: memcached -> [] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: memory_quota -> 3190 [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: nodes_wanted -> ['n_0@127.0.0.1'] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: password_policy -> [{min_length,6},{must_present,[]}] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: read_only_user_creds -> null [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: remote_clusters -> [] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: replication -> [{enabled,true}] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest -> [{port,8091}] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest_creds -> null [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: secure_headers -> [] [ns_server:debug,2017-10-01T10:13:57.706-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: server_groups -> [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@127.0.0.1']}]] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: set_view_update_daemon -> 
[{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {couchdb,max_parallel_indexers} -> 4 [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {couchdb,max_parallel_replica_indexers} -> 2 [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {request_limit,capi} -> undefined [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {request_limit,rest} -> undefined [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',audit} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {log_path,"logs/n_0"}] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9500] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_auth_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9310] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_client_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9303] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9302] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9301] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9305] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_data_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9306] [ns_server:debug,2017-10-01T10:13:57.707-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_debug_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9309] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9300] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_hyracks_console_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9304] 
[ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_messaging_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9308] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_result_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9307] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19300] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',compaction_daemon} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',config_version} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|{5,0}] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',fts_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9200] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',fts_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19200] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_admin_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9100] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9102] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_https_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19102] [ns_server:debug,2017-10-01T10:13:57.708-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_scan_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9101] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stcatchup_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9104] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stinit_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9103] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stmaint_port} -> 
[{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9105] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',is_enterprise} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|true] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',isasl} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ldap_enabled} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|true] [ns_server:debug,2017-10-01T10:13:57.709-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| active] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12000}, {dedicated_port,11999}, {ssl_port,11996}, {admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts","@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached_config} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, 
<<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached_defaults} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',moxi} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12001}, {verbosity,[]}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ns_log} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',port_servers} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',projector_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|10000] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9499] [ns_server:debug,2017-10-01T10:13:57.710-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',rest} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,9000}, {port_meta,local}] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19500] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|11998] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|11997] 
[ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19499] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19000] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',uuid} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',xdcr_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|13000] [ns_server:debug,2017-10-01T10:13:57.711-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097237}}]}] [error_logger:info,2017-10-01T10:13:57.737-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.289.0>}, {name,ns_config_rep}, {mfargs,{ns_config_rep,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.737-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.281.0>}, {name,ns_node_disco_sup}, {mfargs,{ns_node_disco_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:57.737-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([alert_limits,audit,auto_failover_cfg, auto_reprovision_cfg,autocompaction,buckets, cbas_memory_quota,cert_and_pkey, drop_request_memory_threshold_mib,email_alerts, fts_memory_quota, index_aware_rebalance_disabled, max_bucket_count,memcached,memory_quota, nodes_wanted,otp,password_policy, read_only_user_creds,remote_clusters, replication,rest,rest_creds,secure_headers, server_groups,set_view_update_daemon, {couchdb,max_parallel_indexers}, {couchdb,max_parallel_replica_indexers}, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {request_limit,capi}, {request_limit,rest}, {node,'n_0@127.0.0.1',audit}, {node,'n_0@127.0.0.1',capi_port}, {node,'n_0@127.0.0.1',cbas_auth_port}, {node,'n_0@127.0.0.1',cbas_cc_client_port}, {node,'n_0@127.0.0.1',cbas_cc_cluster_port}, {node,'n_0@127.0.0.1',cbas_cc_http_port}, {node,'n_0@127.0.0.1',cbas_cluster_port}, {node,'n_0@127.0.0.1',cbas_data_port}, {node,'n_0@127.0.0.1',cbas_debug_port}, {node,'n_0@127.0.0.1',cbas_http_port}, {node,'n_0@127.0.0.1', cbas_hyracks_console_port}, {node,'n_0@127.0.0.1',cbas_messaging_port}, {node,'n_0@127.0.0.1',cbas_result_port}, {node,'n_0@127.0.0.1',cbas_ssl_port}, {node,'n_0@127.0.0.1',compaction_daemon}, {node,'n_0@127.0.0.1',config_version}, {node,'n_0@127.0.0.1',fts_http_port}, {node,'n_0@127.0.0.1',fts_ssl_port}, {node,'n_0@127.0.0.1',indexer_admin_port}, 
{node,'n_0@127.0.0.1',indexer_http_port}, {node,'n_0@127.0.0.1',indexer_https_port}, {node,'n_0@127.0.0.1',indexer_scan_port}, {node,'n_0@127.0.0.1',indexer_stcatchup_port}, {node,'n_0@127.0.0.1',indexer_stinit_port}, {node,'n_0@127.0.0.1',indexer_stmaint_port}, {node,'n_0@127.0.0.1',is_enterprise}, {node,'n_0@127.0.0.1',isasl}, {node,'n_0@127.0.0.1',ldap_enabled}, {node,'n_0@127.0.0.1',membership}, {node,'n_0@127.0.0.1',memcached}, {node,'n_0@127.0.0.1',memcached_config}, {node,'n_0@127.0.0.1',memcached_defaults}, {node,'n_0@127.0.0.1',moxi}]..) [ns_server:debug,2017-10-01T10:13:57.742-07:00,n_0@127.0.0.1:compiled_roles_cache<0.213.0>:versioned_cache:handle_info:89]Flushing cache compiled_roles_cache due to version change from undefined to {undefined, {0, 3720434207}, false, []} [error_logger:info,2017-10-01T10:13:57.746-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.306.0>}, {name,vbucket_map_mirror}, {mfargs,{vbucket_map_mirror,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.755-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.308.0>}, {name,bucket_info_cache}, {mfargs,{bucket_info_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.755-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.311.0>}, {name,ns_tick_event}, {mfargs,{gen_event,start_link,[{local,ns_tick_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.755-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.312.0>}, {name,buckets_events}, {mfargs, {gen_event,start_link,[{local,buckets_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.777-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_mail_sup} started: [{pid,<0.314.0>}, {name,ns_mail_log}, {mfargs,{ns_mail_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.777-07:00,n_0@127.0.0.1:ns_log_events<0.280.0>:ns_mail_log:init:44]ns_mail_log started up [error_logger:info,2017-10-01T10:13:57.777-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.313.0>}, {name,ns_mail_sup}, {mfargs,{ns_mail_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:57.778-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.315.0>}, {name,ns_stats_event}, {mfargs, {gen_event,start_link,[{local,ns_stats_event}]}}, 
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.781-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.316.0>}, {name,samples_loader_tasks}, {mfargs,{samples_loader_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.791-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.318.0>}, {name,ns_heart}, {mfargs,{ns_heart,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.793-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.324.0>}, {name,ns_heart_slow_updater}, {mfargs,{ns_heart,start_link_slow_updater,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.793-07:00,n_0@127.0.0.1:ns_heart<0.318.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,187}]}, {ns_heart,handle_info,2, [{file,"src/ns_heart.erl"},{line,118}]}]}} [ns_server:debug,2017-10-01T10:13:57.793-07:00,n_0@127.0.0.1:ns_heart<0.318.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,187}]}]}} [error_logger:info,2017-10-01T10:13:57.794-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.317.0>}, {name,ns_heart_sup}, {mfargs,{ns_heart_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:57.821-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.328.0>}, {name,ns_doctor_events}, {mfargs, {gen_event,start_link,[{local,ns_doctor_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2017-10-01T10:13:57.829-07:00,n_0@127.0.0.1:<0.325.0>:restartable:start_child:98]Started child process <0.327.0> MFA: {ns_doctor_sup,start_link,[]} [error_logger:info,2017-10-01T10:13:57.829-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.329.0>}, {name,ns_doctor}, {mfargs,{ns_doctor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.829-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.325.0>}, {name,ns_doctor_sup}, {mfargs, {restartable,start_link, [{ns_doctor_sup,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:57.831-07:00,n_0@127.0.0.1:ns_heart<0.318.0>:ns_heart:grab_local_xdcr_replications:461]Ignoring exception getting xdcr replication infos {exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}}, [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {xdc_replication_sup,all_local_replication_infos,0, [{file,"src/xdc_replication_sup.erl"},{line,58}]}, {ns_heart,grab_local_xdcr_replications,0, [{file,"src/ns_heart.erl"},{line,440}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,318}]}, {ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,187}]}, {ns_heart,handle_info,2,[{file,"src/ns_heart.erl"},{line,118}]}, {gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,604}]}]} [error_logger:info,2017-10-01T10:13:57.847-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.332.0>}, {name,remote_clusters_info}, {mfargs,{remote_clusters_info,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.848-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.333.0>}, {name,master_activity_events}, {mfargs, {gen_event,start_link, [{local,master_activity_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.850-07:00,n_0@127.0.0.1:ns_heart<0.318.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg [error_logger:info,2017-10-01T10:13:57.854-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.335.0>}, {name,xdcr_ckpt_store}, {mfargs,{simple_store,start_link,[xdcr_ckpt_data]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.854-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.336.0>}, {name,metakv_worker}, {mfargs,{work_queue,start_link,[metakv_worker]}}, 
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.854-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.337.0>}, {name,index_events}, {mfargs,{gen_event,start_link,[{local,index_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.869-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.338.0>}, {name,index_settings_manager}, {mfargs,{index_settings_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.872-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.341.0>}, {name,menelaus_ui_auth}, {mfargs,{menelaus_ui_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.874-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.343.0>}, {name,menelaus_local_auth}, {mfargs,{menelaus_local_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.876-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.324.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,244}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]}} [ns_server:debug,2017-10-01T10:13:57.877-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.324.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,244}]}]}} [error_logger:info,2017-10-01T10:13:57.879-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.355.0>}, {name,menelaus_web_cache}, {mfargs,{menelaus_web_cache,start_link,[]}}, 
{restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:57.879-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.324.0>:ns_heart:grab_local_xdcr_replications:461]Ignoring exception getting xdcr replication infos {exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}}, [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {xdc_replication_sup,all_local_replication_infos,0, [{file,"src/xdc_replication_sup.erl"},{line,58}]}, {ns_heart,grab_local_xdcr_replications,0, [{file,"src/ns_heart.erl"},{line,440}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,318}]}, {ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,slow_updater_loop,0,[{file,"src/ns_heart.erl"},{line,244}]}, {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]} [ns_server:debug,2017-10-01T10:13:57.879-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.324.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg [error_logger:info,2017-10-01T10:13:57.883-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.357.0>}, {name,menelaus_stats_gatherer}, {mfargs,{menelaus_stats_gatherer,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.883-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.358.0>}, {name,json_rpc_events}, {mfargs, {gen_event,start_link,[{local,json_rpc_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2017-10-01T10:13:57.885-07:00,n_0@127.0.0.1:menelaus_sup<0.340.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for n1ql [ns_server:info,2017-10-01T10:13:57.885-07:00,n_0@127.0.0.1:menelaus_sup<0.340.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for cbas [ns_server:info,2017-10-01T10:13:57.891-07:00,n_0@127.0.0.1:menelaus_sup<0.340.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for fts [error_logger:info,2017-10-01T10:13:57.892-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.359.0>}, {name,menelaus_web}, {mfargs,{menelaus_web,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.897-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.376.0>}, {name,menelaus_event}, {mfargs,{menelaus_event,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.901-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.377.0>}, {name,hot_keys_keeper}, {mfargs,{hot_keys_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:13:57.933-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.378.0>}, {name,menelaus_web_alerts_srv}, {mfargs,{menelaus_web_alerts_srv,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.940-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.379.0>}, {name,menelaus_cbauth}, {mfargs,{menelaus_cbauth,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [user:info,2017-10-01T10:13:57.941-07:00,n_0@127.0.0.1:ns_server_sup<0.265.0>:menelaus_sup:start_link:46]Couchbase Server has started on web port 9000 on node 'n_0@127.0.0.1'. Version: "5.0.0-0000-enterprise". [error_logger:info,2017-10-01T10:13:57.941-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.340.0>}, {name,menelaus}, {mfargs,{menelaus_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:57.942-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.385.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.949-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_sup} started: [{pid,<0.389.0>}, {name,service_agent_children_sup}, {mfargs, {supervisor,start_link, [{local,service_agent_children_sup}, service_agent_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:57.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_sup} started: [{pid,<0.390.0>}, {name,service_agent_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.388.0>}, {name,service_agent_sup}, {mfargs,{service_agent_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:57.969-07:00,n_0@127.0.0.1:ns_ports_setup<0.385.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,xdcr_proxy] [error_logger:info,2017-10-01T10:13:57.971-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.392.0>}, {name,ns_memcached_sockets_pool}, {mfargs,{ns_memcached_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2017-10-01T10:13:57.971-07:00,n_0@127.0.0.1:ns_audit_cfg<0.393.0>:ns_audit_cfg:write_audit_json:158]Writing new content to "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json" : [{auditd_enabled, false}, {disabled, []}, {log_path, "logs/n_0"}, {rotate_interval, 86400}, {rotate_size, 20971520}, {sync, []}, {version, 1}, {descriptors_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/etc/security"}] [ns_server:debug,2017-10-01T10:13:57.986-07:00,n_0@127.0.0.1:ns_audit_cfg<0.393.0>:ns_audit_cfg:handle_info:107]Instruct memcached to reload audit config [error_logger:info,2017-10-01T10:13:57.986-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.393.0>}, {name,ns_audit_cfg}, {mfargs,{ns_audit_cfg,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.397.0>}, {name,ns_audit}, {mfargs,{ns_audit,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2017-10-01T10:13:57.992-07:00,n_0@127.0.0.1:<0.396.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:debug,2017-10-01T10:13:57.992-07:00,n_0@127.0.0.1:memcached_config_mgr<0.398.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:57.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.398.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2017-10-01T10:13:57.995-07:00,n_0@127.0.0.1:<0.399.0>:ns_memcached_log_rotator:init:28]Starting log rotator on "logs/n_0"/"memcached.log"* with an initial period of 39003ms [error_logger:info,2017-10-01T10:13:57.995-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.399.0>}, {name,ns_memcached_log_rotator}, {mfargs,{ns_memcached_log_rotator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:57.998-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.400.0>}, {name,memcached_clients_pool}, {mfargs,{memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.017-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.401.0>}, {name,proxied_memcached_clients_pool}, {mfargs,{proxied_memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:13:58.018-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.402.0>}, {name,xdc_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,xdc_lhttpc_pool}, {connection_timeout,120000}, {pool_size,200}]]}}, {restart_type,{permanent,1}}, {shutdown,10000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.029-07:00,n_0@127.0.0.1:ns_ports_setup<0.385.0>:ns_ports_setup:set_children:78]Monitor ns_child_ports_sup <11719.74.0> [ns_server:debug,2017-10-01T10:13:58.029-07:00,n_0@127.0.0.1:memcached_config_mgr<0.398.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:info,2017-10-01T10:13:58.031-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.404.0>}, {name,ns_null_connection_pool}, {mfargs, {ns_null_connection_pool,start_link, [ns_null_connection_pool]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.041-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<0.406.0>}, {name,xdc_stats_holder}, {mfargs, {proc_lib,start_link, [xdcr_sup,link_stats_holder_body,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.042-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<0.407.0>}, {name,xdc_replication_sup}, {mfargs,{xdc_replication_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:58.043-07:00,n_0@127.0.0.1:memcached_config_mgr<0.398.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [error_logger:info,2017-10-01T10:13:58.045-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<0.409.0>}, {name,xdc_rep_manager}, {mfargs,{xdc_rep_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,30000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.045-07:00,n_0@127.0.0.1:xdc_rep_manager<0.409.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:13:58.048-07:00,n_0@127.0.0.1:memcached_config_mgr<0.398.0>:memcached_config_mgr:init:78]wrote memcached config to /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json. 
Will activate memcached port server [ns_server:debug,2017-10-01T10:13:58.063-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.413.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:13:58.063-07:00,n_0@127.0.0.1:xdc_rdoc_replication_srv<0.414.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:13:58.064-07:00,n_0@127.0.0.1:<0.405.0>:xdc_rdoc_manager:start_link_remote:45]Starting xdc_rdoc_manager on 'couchdb_n_0@127.0.0.1' with following links: [<0.413.0>, <0.414.0>, <0.409.0>] [error_logger:info,2017-10-01T10:13:58.064-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<0.413.0>}, {name,xdc_rdoc_replicator}, {mfargs,{xdc_rdoc_manager,start_replicator,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.064-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<0.414.0>}, {name,xdc_rdoc_replication_srv}, {mfargs,{doc_replication_srv,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.081-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.413.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [ns_server:debug,2017-10-01T10:13:58.081-07:00,n_0@127.0.0.1:xdc_rdoc_replication_srv<0.414.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [error_logger:info,2017-10-01T10:13:58.081-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.405.0>,xdcr_sup} started: [{pid,<11720.267.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_n_0@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.081-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.405.0>}, {name,xdcr_sup}, {mfargs,{xdcr_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:58.080-07:00,n_0@127.0.0.1:xdc_rep_manager<0.409.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [error_logger:info,2017-10-01T10:13:58.089-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.416.0>}, {name,xdcr_dcp_sockets_pool}, {mfargs,{xdcr_dcp_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.090-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.417.0>}, {name,testconditions_store}, {mfargs,{simple_store,start_link,[testconditions]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2017-10-01T10:13:58.095-07:00,n_0@127.0.0.1:memcached_config_mgr<0.398.0>:memcached_config_mgr:init:81]activated memcached port server [ns_server:debug,2017-10-01T10:13:58.101-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.413.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [error_logger:info,2017-10-01T10:13:58.111-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.419.0>}, {name,ns_bucket_worker}, {mfargs,{work_queue,start_link,[ns_bucket_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.121-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_sup} started: [{pid,<0.421.0>}, {name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.121-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.420.0>}, {name,ns_bucket_sup}, {mfargs,{ns_bucket_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.122-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.418.0>}, {name,ns_bucket_worker_sup}, {mfargs,{ns_bucket_worker_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.149-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.422.0>}, {name,system_stats_collector}, {mfargs,{system_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.151-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.426.0>}, {name,{stats_archiver,"@system"}}, {mfargs,{stats_archiver,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.169-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.428.0>}, {name,{stats_reader,"@system"}}, {mfargs,{stats_reader,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.170-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.429.0>}, {name,{stats_archiver,"@system-processes"}}, {mfargs, {stats_archiver,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:13:58.170-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.431.0>}, {name,{stats_reader,"@system-processes"}}, {mfargs, {stats_reader,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.172-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.432.0>}, {name,{stats_archiver,"@query"}}, {mfargs,{stats_archiver,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.172-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.434.0>}, {name,{stats_reader,"@query"}}, {mfargs,{stats_reader,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.176-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.435.0>}, {name,query_stats_collector}, {mfargs,{query_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.179-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.437.0>}, {name,{stats_archiver,"@global"}}, {mfargs,{stats_archiver,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.179-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.439.0>}, {name,{stats_reader,"@global"}}, {mfargs,{stats_reader,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.182-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.440.0>}, {name,global_stats_collector}, {mfargs,{global_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.184-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.442.0>}, {name,goxdcr_status_keeper}, {mfargs,{goxdcr_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.193-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.444.0>}, {name,index_stats_children_sup}, {mfargs, {supervisor,start_link, [{local,index_stats_children_sup}, index_stats_sup,child]}}, {restart_type,permanent}, 
{shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.197-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.446.0>}, {name,index_status_keeper_worker}, {mfargs, {work_queue,start_link, [index_status_keeper_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.237-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.447.0>}, {name,index_status_keeper}, {mfargs,{indexer_gsi,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.240-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.450.0>}, {name,index_status_keeper_fts}, {mfargs,{indexer_fts,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.247-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.453.0>}, {name,index_status_keeper_cbas}, {mfargs,{indexer_cbas,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.247-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.445.0>}, {name,index_status_keeper_sup}, {mfargs,{index_status_keeper_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.247-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.456.0>}, {name,index_stats_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.247-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.443.0>}, {name,index_stats_sup}, {mfargs,{index_stats_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.253-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.458.0>}, {name,compaction_daemon}, {mfargs,{compaction_daemon,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.299-07:00,n_0@127.0.0.1:<0.461.0>:new_concurrency_throttle:init:113]init concurrent throttle process, pid: <0.461.0>, type: kv_throttle# of available token: 1 [ns_server:debug,2017-10-01T10:13:58.315-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact 
for compact_kv. Rescheduling compaction. [error_logger:info,2017-10-01T10:13:58.315-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.459.0>}, {name,compaction_new_daemon}, {mfargs,{compaction_new_daemon,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,86400000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.315-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2017-10-01T10:13:58.316-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2017-10-01T10:13:58.316-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2017-10-01T10:13:58.316-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact for compact_master. Rescheduling compaction. [ns_server:debug,2017-10-01T10:13:58.316-07:00,n_0@127.0.0.1:compaction_new_daemon<0.459.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_master too soon. Next run will be in 3600s [error_logger:info,2017-10-01T10:13:58.317-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,cluster_logs_sup} started: [{pid,<0.463.0>}, {name,ets_holder}, {mfargs, {cluster_logs_collection_task, start_link_ets_holder,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.318-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.462.0>}, {name,cluster_logs_sup}, {mfargs,{cluster_logs_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.320-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.464.0>}, {name,remote_api}, {mfargs,{remote_api,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.337-07:00,n_0@127.0.0.1:<0.465.0>:mb_master:check_master_takeover_needed:140]Sending master node question to the following nodes: [] [ns_server:debug,2017-10-01T10:13:58.337-07:00,n_0@127.0.0.1:<0.465.0>:mb_master:check_master_takeover_needed:142]Got replies: [] [ns_server:debug,2017-10-01T10:13:58.337-07:00,n_0@127.0.0.1:<0.465.0>:mb_master:check_master_takeover_needed:148]Was unable to discover master, not going to force mastership takeover [user:info,2017-10-01T10:13:58.337-07:00,n_0@127.0.0.1:mb_master<0.467.0>:mb_master:init:86]I'm the only node, so I'm the master. 
[error_logger:info,2017-10-01T10:13:58.387-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.470.0>,ns_tick,<0.470.0>,#Fun} [error_logger:info,2017-10-01T10:13:58.387-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.470.0>,#Ref<0.0.0.1948>}} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [ns_server:debug,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:mb_master_sup<0.469.0>:misc:start_singleton:855]start_singleton(gen_server, ns_tick, [], []): started as <0.470.0> on 'n_0@127.0.0.1' [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_tick},{pid,<0.470.0>}} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:13:58.388-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:13:58.389-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:13:58.389-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:13:58.389-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.470.0>}, {name,ns_tick}, {mfargs,{ns_tick,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.410-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.473.0>}, {name,ns_janitor_server}, {mfargs,{ns_janitor_server,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.474.0>,auto_reprovision,<0.474.0>,#Fun} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.474.0>,#Ref<0.0.0.1978>}} 
[ns_server:debug,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:ns_orchestrator_child_sup<0.472.0>:misc:start_singleton:855]start_singleton(gen_server, auto_reprovision, [], []): started as <0.474.0> on 'n_0@127.0.0.1' [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_reprovision},{pid,<0.474.0>}} [error_logger:info,2017-10-01T10:13:58.420-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.474.0>}, {name,auto_reprovision}, {mfargs,{auto_reprovision,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.475.0>,ns_orchestrator,<0.475.0>,#Fun} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.475.0>,#Ref<0.0.0.1995>}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, 
{nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_orchestrator},{pid,<0.475.0>}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:13:58.421-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:13:58.422-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:13:58.422-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [ns_server:debug,2017-10-01T10:13:58.442-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[3,0]}] [ns_server:info,2017-10-01T10:13:58.442-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_online_config_upgrader:upgrade_config_from_3_0_to_4_0:62]Performing online config upgrade to 4.0 version [ns_server:debug,2017-10-01T10:13:58.446-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[4,0]}, {delete,goxdcr_upgrade}, {set,{node,'n_0@127.0.0.1',stop_xdcr},true}, {set,{metakv,<<"/indexing/settings/config">>}, <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.log_level\":\"info\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":0,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":536870912}">>}] [ns_server:info,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_online_config_upgrader:upgrade_config_from_4_0_to_4_1:67]Performing online config upgrade to 4.1 version [ns_server:debug,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[4,1]}, {set,{service_map,n1ql},[]}, {set,{service_map,index},[]}] [ns_server:info,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_online_config_upgrader:upgrade_config_from_4_1_to_4_5:71]Performing online config upgrade to 4.5 version [ns_server:debug,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[4,5]}, {set,{metakv,<<"/indexing/settings/config">>}, <<"{\"indexer.settings.compaction.days_of_week\":\"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday\",\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.compaction.compaction_mode\":\"circular\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":0,\"indexer.settings.storage_mode\":\"\",\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":536870912,\"indexer.settings.compaction.abort_exceed_interval\":false}">>}, 
{set,{service_map,fts},[]}] [ns_server:debug,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[4,6]}] [ns_server:info,2017-10-01T10:13:58.447-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_online_config_upgrader:upgrade_config_from_4_6_to_spock:94]Performing online config upgrade to Spock version [ns_server:debug,2017-10-01T10:13:58.448-07:00,n_0@127.0.0.1:ns_config<0.165.0>:ns_config:do_upgrade_config:711]Upgrading config by changes: [{set,cluster_compat_version,[5,0]}, {delete,roles_definitions}, {delete,users_upgrade}, {delete,read_only_user_creds}, {set,buckets,[{configs,[]}]}] [ns_server:debug,2017-10-01T10:13:58.448-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets,cluster_compat_version,goxdcr_upgrade, read_only_user_creds,roles_definitions, users_upgrade, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/indexing/settings/config">>}, {service_map,fts}, {service_map,index}, {service_map,n1ql}, {node,'n_0@127.0.0.1',stop_xdcr}]..) [ns_server:debug,2017-10-01T10:13:58.448-07:00,n_0@127.0.0.1:compiled_roles_cache<0.213.0>:versioned_cache:handle_info:89]Flushing cache compiled_roles_cache due to version change from {undefined, {0,3720434207}, false,[]} to {[5, 0], {0, 3720434207}, false, []} [ns_server:debug,2017-10-01T10:13:58.448-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: users_upgrade -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:menelaus_ui_auth<0.341.0>:menelaus_ui_auth:handle_cast:194]Revoke tokens [] for role ro_admin [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: roles_definitions -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,fts} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,index} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,n1ql} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/indexing/settings/config">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097238}}]}| <<"{\"indexer.settings.compaction.days_of_week\":\"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday\",\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.compaction.compaction_mode\":\"circular\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\""...>>] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',stop_xdcr} -> 
[{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}|true] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: goxdcr_upgrade -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cluster_compat_version -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{6,63674097238}}]},5,0] [ns_server:debug,2017-10-01T10:13:58.449-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}],{configs,[]}] [ns_server:debug,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: read_only_user_creds -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:handle_call:115]Got full synchronization request from 'n_0@127.0.0.1' [ns_server:debug,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{3,63674097238}}]}] [ns_server:debug,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:handle_call:121]Fully synchronized config in 421 us [ns_server:debug,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:memcached_permissions<0.277.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac" [user:warn,2017-10-01T10:13:58.450-07:00,n_0@127.0.0.1:<0.475.0>:ns_orchestrator:consider_switching_compat_mode_dont_exit:1043]Changed cluster compat mode from undefined to [5,0] [ns_server:debug,2017-10-01T10:13:58.451-07:00,n_0@127.0.0.1:ns_orchestrator_child_sup<0.472.0>:misc:start_singleton:855]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.475.0> on 'n_0@127.0.0.1' [ns_server:debug,2017-10-01T10:13:58.451-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_dets:handle_call:251]Suspended by process <0.277.0> [ns_server:debug,2017-10-01T10:13:58.452-07:00,n_0@127.0.0.1:memcached_permissions<0.277.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{user,'_'},'_',false,'_'}, [], ['$_']}], 100} [error_logger:info,2017-10-01T10:13:58.452-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.475.0>}, {name,ns_orchestrator}, {mfargs,{ns_orchestrator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2017-10-01T10:13:58.452-07:00,n_0@127.0.0.1:<0.491.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[ns_server:debug,2017-10-01T10:13:58.452-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{4,63674097238}}]}] [ns_server:debug,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097238}}]}| '_deleted'] [error_logger:info,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.472.0>}, {name,ns_orchestrator_child_sup}, {mfargs,{ns_orchestrator_child_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:ns_config_rep<0.289.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {node,'n_0@127.0.0.1',stop_xdcr}]..) [ns_server:debug,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:<0.408.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.398.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [error_logger:error,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.398.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, 
[{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.454-07:00,n_0@127.0.0.1:users_storage<0.211.0>:replicated_dets:handle_call:258]Released by process <0.277.0> [ns_server:debug,2017-10-01T10:13:58.453-07:00,n_0@127.0.0.1:<0.411.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.398.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.463-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_cast:55]Refresh of rbac requested [ns_server:warn,2017-10-01T10:13:58.464-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.464-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:debug,2017-10-01T10:13:58.505-07:00,n_0@127.0.0.1:<0.501.0>:auto_failover:init:150]init auto_failover. [ns_server:debug,2017-10-01T10:13:58.505-07:00,n_0@127.0.0.1:ns_orchestrator_sup<0.471.0>:misc:start_singleton:855]start_singleton(gen_server, auto_failover, [], []): started as <0.501.0> on 'n_0@127.0.0.1' [ns_server:debug,2017-10-01T10:13:58.505-07:00,n_0@127.0.0.1:<0.465.0>:restartable:start_child:98]Started child process <0.467.0> MFA: {mb_master,start_link,[]} [error_logger:error,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.398.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.411.0>] dictionary: [] trap_exit: false status: running heap_size: 28690 stack_size: 27 reductions: 25307 neighbours: [error_logger:info,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.501.0>,auto_failover,<0.501.0>,#Fun} [error_logger:info,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.501.0>,#Ref<0.0.0.2219>}} [error_logger:info,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} 
[error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_failover},{pid,<0.501.0>}} [ns_server:debug,2017-10-01T10:13:58.535-07:00,n_0@127.0.0.1:ns_ports_setup<0.385.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,goxdcr,xdcr_proxy] [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.501.0>}, {name,auto_failover}, {mfargs,{auto_failover,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.471.0>}, {name,ns_orchestrator_sup}, {mfargs,{ns_orchestrator_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.536-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.465.0>}, {name,mb_master}, {mfargs, {restartable,start_link, [{mb_master,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.537-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.502.0>}, {name,master_activity_events_ingress}, {mfargs, {gen_event,start_link, [{local,master_activity_events_ingress}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.537-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.503.0>}, {name,master_activity_events_timestamper}, {mfargs, {master_activity_events,start_link_timestamper,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.553-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.504.0>}, {name,master_activity_events_pids_watcher}, {mfargs, {master_activity_events_pids_watcher,start_link, []}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.557-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.505.0>}, {name,master_activity_events_keeper}, {mfargs,{master_activity_events_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.571-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.508.0>}, {name,ns_server_monitor}, {mfargs,{ns_server_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.572-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.510.0>}, {name,service_monitor_children_sup}, {mfargs, {supervisor,start_link, [{local,service_monitor_children_sup}, health_monitor_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.574-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.511.0>}, {name,service_monitor_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.578-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.519.0>}, {name,node_monitor}, {mfargs,{node_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.584-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.527.0>}, {name,node_status_analyzer}, {mfargs,{node_status_analyzer,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.584-07:00,n_0@127.0.0.1:memcached_config_mgr<0.529.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:58.584-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.174.0>:one_shot_barrier:notify:27]Notifying on barrier menelaus_barrier [error_logger:info,2017-10-01T10:13:58.584-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.507.0>}, {name,health_monitor_sup}, {mfargs,{health_monitor_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] 
[ns_server:debug,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:menelaus_barrier<0.176.0>:one_shot_barrier:barrier_body:62]Barrier menelaus_barrier got notification from <0.174.0> [ns_server:debug,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.174.0>:one_shot_barrier:notify:32]Successfuly notified on barrier menelaus_barrier [error_logger:error,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.398.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:<0.173.0>:restartable:start_child:98]Started child process <0.174.0> MFA: {ns_server_nodes_sup,start_link,[]} [error_logger:error,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.398.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:<0.2.0>:child_erlang:child_loop:116]5832: Entered child_loop [error_logger:info,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.529.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.265.0>}, {name,ns_server_sup}, {mfargs,{ns_server_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.173.0>}, {name,ns_server_nodes_sup}, {mfargs, {restartable,start_link, [{ns_server_nodes_sup,start_link,[]}, infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:13:58.585-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= application: ns_server started_at: 'n_0@127.0.0.1' [ns_server:warn,2017-10-01T10:13:58.686-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.686-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. 
[ns_server:warn,2017-10-01T10:13:58.688-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.688-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:debug,2017-10-01T10:13:58.894-07:00,n_0@127.0.0.1:memcached_config_mgr<0.529.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:58.896-07:00,n_0@127.0.0.1:memcached_config_mgr<0.529.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.898-07:00,n_0@127.0.0.1:memcached_config_mgr<0.529.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.898-07:00,n_0@127.0.0.1:memcached_config_mgr<0.529.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.899-07:00,n_0@127.0.0.1:<0.533.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.899-07:00,n_0@127.0.0.1:<0.530.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.529.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:58.899-07:00,n_0@127.0.0.1:<0.531.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.529.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [error_logger:error,2017-10-01T10:13:58.899-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.529.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": 
\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.899-07:00,n_0@127.0.0.1:memcached_config_mgr<0.534.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:58.900-07:00,n_0@127.0.0.1:memcached_config_mgr<0.534.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.900-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.529.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.531.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17169 neighbours: [error_logger:error,2017-10-01T10:13:58.900-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.529.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.900-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.529.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.900-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.534.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.901-07:00,n_0@127.0.0.1:memcached_config_mgr<0.534.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> 
[ns_server:debug,2017-10-01T10:13:58.902-07:00,n_0@127.0.0.1:memcached_config_mgr<0.534.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.902-07:00,n_0@127.0.0.1:memcached_config_mgr<0.534.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.903-07:00,n_0@127.0.0.1:<0.538.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [error_logger:error,2017-10-01T10:13:58.903-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.534.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.534.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.536.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17173 neighbours: 
[error_logger:error,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.534.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:<0.536.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.534.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [error_logger:error,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.534.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:<0.535.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.534.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:58.904-07:00,n_0@127.0.0.1:memcached_config_mgr<0.539.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:58.905-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.539.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.905-07:00,n_0@127.0.0.1:memcached_config_mgr<0.539.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:58.906-07:00,n_0@127.0.0.1:memcached_config_mgr<0.539.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.907-07:00,n_0@127.0.0.1:memcached_config_mgr<0.539.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.908-07:00,n_0@127.0.0.1:memcached_config_mgr<0.539.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.909-07:00,n_0@127.0.0.1:<0.543.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[ns_server:debug,2017-10-01T10:13:58.909-07:00,n_0@127.0.0.1:<0.541.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.539.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:<0.540.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.539.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [error_logger:error,2017-10-01T10:13:58.909-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.539.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:memcached_config_mgr<0.544.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.539.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, 
[{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.541.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17169 neighbours: [ns_server:debug,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:memcached_config_mgr<0.544.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.539.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.539.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.910-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.544.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.913-07:00,n_0@127.0.0.1:memcached_config_mgr<0.544.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.913-07:00,n_0@127.0.0.1:memcached_config_mgr<0.544.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.914-07:00,n_0@127.0.0.1:memcached_config_mgr<0.544.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.915-07:00,n_0@127.0.0.1:<0.548.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.915-07:00,n_0@127.0.0.1:<0.545.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.544.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:58.915-07:00,n_0@127.0.0.1:<0.546.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.544.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [error_logger:error,2017-10-01T10:13:58.915-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.544.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.916-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.544.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.546.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17103 neighbours: [error_logger:error,2017-10-01T10:13:58.916-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.544.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, 
[{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:58.916-07:00,n_0@127.0.0.1:memcached_config_mgr<0.549.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:58.916-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.544.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.916-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.549.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.917-07:00,n_0@127.0.0.1:memcached_config_mgr<0.549.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:58.925-07:00,n_0@127.0.0.1:memcached_config_mgr<0.549.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.926-07:00,n_0@127.0.0.1:memcached_config_mgr<0.549.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.926-07:00,n_0@127.0.0.1:memcached_config_mgr<0.549.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.931-07:00,n_0@127.0.0.1:<0.553.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[error_logger:error,2017-10-01T10:13:58.931-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.549.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.932-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.549.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.551.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17103 neighbours: [error_logger:error,2017-10-01T10:13:58.932-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.549.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.932-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: 
child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.549.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.932-07:00,n_0@127.0.0.1:<0.551.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.549.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.932-07:00,n_0@127.0.0.1:<0.550.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.549.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:58.933-07:00,n_0@127.0.0.1:memcached_config_mgr<0.554.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:58.933-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.554.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.933-07:00,n_0@127.0.0.1:memcached_config_mgr<0.554.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:58.934-07:00,n_0@127.0.0.1:memcached_config_mgr<0.554.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.935-07:00,n_0@127.0.0.1:memcached_config_mgr<0.554.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.935-07:00,n_0@127.0.0.1:memcached_config_mgr<0.554.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.936-07:00,n_0@127.0.0.1:<0.558.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[error_logger:error,2017-10-01T10:13:58.936-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.554.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.554.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.556.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17092 neighbours: [error_logger:error,2017-10-01T10:13:58.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.554.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: 
child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.554.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.937-07:00,n_0@127.0.0.1:<0.556.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.554.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.938-07:00,n_0@127.0.0.1:<0.555.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.554.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:58.938-07:00,n_0@127.0.0.1:memcached_config_mgr<0.559.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:58.938-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.559.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.938-07:00,n_0@127.0.0.1:memcached_config_mgr<0.559.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:58.939-07:00,n_0@127.0.0.1:memcached_config_mgr<0.559.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.941-07:00,n_0@127.0.0.1:memcached_config_mgr<0.559.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.941-07:00,n_0@127.0.0.1:memcached_config_mgr<0.559.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:<0.563.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:<0.560.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.559.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:<0.561.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.559.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:memcached_config_mgr<0.564.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.559.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.949-07:00,n_0@127.0.0.1:memcached_config_mgr<0.564.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.559.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.561.0>] dictionary: [] 
trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17173 neighbours: [error_logger:error,2017-10-01T10:13:58.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.559.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:58.950-07:00,n_0@127.0.0.1:memcached_config_mgr<0.564.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [error_logger:error,2017-10-01T10:13:58.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.559.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.564.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.953-07:00,n_0@127.0.0.1:memcached_config_mgr<0.564.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.953-07:00,n_0@127.0.0.1:memcached_config_mgr<0.564.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.957-07:00,n_0@127.0.0.1:<0.568.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[ns_server:debug,2017-10-01T10:13:58.957-07:00,n_0@127.0.0.1:<0.566.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.564.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.957-07:00,n_0@127.0.0.1:memcached_config_mgr<0.569.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:58.957-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.564.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.957-07:00,n_0@127.0.0.1:<0.565.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.564.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:58.958-07:00,n_0@127.0.0.1:memcached_config_mgr<0.569.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.564.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.566.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17173 neighbours: [error_logger:error,2017-10-01T10:13:58.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.564.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.564.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.569.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.959-07:00,n_0@127.0.0.1:memcached_config_mgr<0.569.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.961-07:00,n_0@127.0.0.1:memcached_config_mgr<0.569.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.961-07:00,n_0@127.0.0.1:memcached_config_mgr<0.569.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.963-07:00,n_0@127.0.0.1:<0.573.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[error_logger:error,2017-10-01T10:13:58.964-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.569.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.964-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.569.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.571.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17169 neighbours: [ns_server:debug,2017-10-01T10:13:58.965-07:00,n_0@127.0.0.1:<0.571.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.569.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.965-07:00,n_0@127.0.0.1:<0.570.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.569.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, 
'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [error_logger:error,2017-10-01T10:13:58.965-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.569.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.965-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.569.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.966-07:00,n_0@127.0.0.1:memcached_config_mgr<0.574.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:58.966-07:00,n_0@127.0.0.1:memcached_config_mgr<0.574.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:info,2017-10-01T10:13:58.966-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.574.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.967-07:00,n_0@127.0.0.1:memcached_config_mgr<0.574.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.968-07:00,n_0@127.0.0.1:memcached_config_mgr<0.574.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.969-07:00,n_0@127.0.0.1:memcached_config_mgr<0.574.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.970-07:00,n_0@127.0.0.1:<0.578.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:58.971-07:00,n_0@127.0.0.1:<0.576.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.574.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.971-07:00,n_0@127.0.0.1:<0.575.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.574.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:58.971-07:00,n_0@127.0.0.1:memcached_config_mgr<0.579.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:58.972-07:00,n_0@127.0.0.1:memcached_config_mgr<0.579.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.970-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.574.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:58.973-07:00,n_0@127.0.0.1:memcached_config_mgr<0.579.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [error_logger:error,2017-10-01T10:13:58.974-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.574.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.576.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17173 neighbours: [error_logger:error,2017-10-01T10:13:58.974-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received 
unexpected message: {ack,<0.574.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:58.974-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.574.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.974-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.579.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.981-07:00,n_0@127.0.0.1:memcached_config_mgr<0.579.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:58.982-07:00,n_0@127.0.0.1:memcached_config_mgr<0.579.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:58.983-07:00,n_0@127.0.0.1:<0.583.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[error_logger:error,2017-10-01T10:13:58.983-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.579.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.579.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:<0.581.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.579.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:<0.580.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.579.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:memcached_config_mgr<0.584.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.579.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.581.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17103 neighbours: [ns_server:debug,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:memcached_config_mgr<0.584.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:58.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.579.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:58.985-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.584.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:58.987-07:00,n_0@127.0.0.1:memcached_config_mgr<0.584.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:58.988-07:00,n_0@127.0.0.1:memcached_config_mgr<0.584.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.002-07:00,n_0@127.0.0.1:memcached_config_mgr<0.584.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.003-07:00,n_0@127.0.0.1:<0.396.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:warn,2017-10-01T10:13:59.005-07:00,n_0@127.0.0.1:<0.588.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
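The server state dumped with each termination above embeds the rendered memcached.json, whose "interfaces" section asks memcached to listen on 12000, 11999 and 11996 (the last with ssl). When the validator cannot connect, the first thing worth confirming is exactly those port numbers; the helper below is a hypothetical sketch (module and function names are made up) that pulls them out of such a file with a regular expression rather than a JSON parser.

%% Hypothetical helper, not ns_server code: list the "port" values from a
%% memcached.json like the one embedded in the server state above.
-module(mc_config).
-export([ports/1]).

ports(Path) ->
    {ok, Bin} = file:read_file(Path),
    {match, Matches} =
        re:run(Bin, <<"\"port\":\\s*(\\d+)">>,
               [global, {capture, all_but_first, binary}]),
    [binary_to_integer(P) || [P] <- Matches].

Run against the config above, it would return [12000,11999,11996].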
[ns_server:debug,2017-10-01T10:13:59.005-07:00,n_0@127.0.0.1:<0.586.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.584.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.005-07:00,n_0@127.0.0.1:<0.585.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.584.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:59.005-07:00,n_0@127.0.0.1:memcached_config_mgr<0.589.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:59.005-07:00,n_0@127.0.0.1:memcached_config_mgr<0.589.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:59.006-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.584.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.007-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.584.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, 
[{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.586.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17171 neighbours: [ns_server:debug,2017-10-01T10:13:59.007-07:00,n_0@127.0.0.1:memcached_config_mgr<0.589.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [error_logger:error,2017-10-01T10:13:59.008-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.584.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.008-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.584.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:59.008-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.589.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.009-07:00,n_0@127.0.0.1:memcached_config_mgr<0.589.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.010-07:00,n_0@127.0.0.1:memcached_config_mgr<0.589.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.011-07:00,n_0@127.0.0.1:<0.593.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.012-07:00,n_0@127.0.0.1:<0.590.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.589.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [error_logger:error,2017-10-01T10:13:59.012-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.589.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.013-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.589.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.591.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17096 neighbours: [ns_server:debug,2017-10-01T10:13:59.013-07:00,n_0@127.0.0.1:<0.591.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.589.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.013-07:00,n_0@127.0.0.1:memcached_config_mgr<0.594.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round 
[error_logger:error,2017-10-01T10:13:59.013-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.589.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:59.014-07:00,n_0@127.0.0.1:memcached_config_mgr<0.594.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:59.014-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.589.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:59.014-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.594.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.022-07:00,n_0@127.0.0.1:memcached_config_mgr<0.594.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.029-07:00,n_0@127.0.0.1:memcached_config_mgr<0.594.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.029-07:00,n_0@127.0.0.1:memcached_config_mgr<0.594.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.030-07:00,n_0@127.0.0.1:<0.598.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
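Every cycle in this stretch ends the same way: memcached_config_mgr validates the new configuration by connecting to memcached, the connection is refused because memcached is not accepting connections yet, the badmatch on {error,couldnt_connect_to_memcached} inside the validation fun from ns_memcached:config_validate/1 kills the gen_server while it is still in init, and ns_server_sup immediately starts a replacement pid. A minimal standalone probe with the same connect-and-retry shape is sketched below; it is not ns_server code, and its module name, retry budget and 1000 ms sleep are illustrative choices mirroring the retries seen further down in this log.

%% Hypothetical standalone probe (not part of ns_server): wait until a
%% memcached port accepts TCP connections, retrying on econnrefused.
-module(mc_probe).
-export([wait_for_port/3]).

wait_for_port(_Host, _Port, 0) ->
    {error, couldnt_connect_to_memcached};
wait_for_port(Host, Port, Retries) when Retries > 0 ->
    case gen_tcp:connect(Host, Port, [binary, {active, false}], 1000) of
        {ok, Sock} ->
            gen_tcp:close(Sock),    % port is up; nothing more to check here
            ok;
        {error, econnrefused} ->
            timer:sleep(1000),      % not listening yet; same cadence as the retries below
            wait_for_port(Host, Port, Retries - 1);
        {error, _} = Other ->
            Other                   % any other failure is returned unchanged
    end.

For example, mc_probe:wait_for_port("127.0.0.1", 12000, 30) returns ok once the first interface from the config starts listening, or {error,couldnt_connect_to_memcached} after roughly 30 seconds of refusals.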
[error_logger:error,2017-10-01T10:13:59.030-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.594.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.594.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.596.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17166 neighbours: [error_logger:error,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.594.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: 
child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.594.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:<0.596.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.594.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:<0.595.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.594.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:debug,2017-10-01T10:13:59.031-07:00,n_0@127.0.0.1:memcached_config_mgr<0.599.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:59.032-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.599.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.032-07:00,n_0@127.0.0.1:memcached_config_mgr<0.599.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:59.033-07:00,n_0@127.0.0.1:memcached_config_mgr<0.599.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.034-07:00,n_0@127.0.0.1:memcached_config_mgr<0.599.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.034-07:00,n_0@127.0.0.1:memcached_config_mgr<0.599.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.035-07:00,n_0@127.0.0.1:<0.603.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.035-07:00,n_0@127.0.0.1:<0.601.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.599.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.035-07:00,n_0@127.0.0.1:<0.600.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.599.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [error_logger:error,2017-10-01T10:13:59.035-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.599.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.036-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.599.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.601.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17089 neighbours: [error_logger:error,2017-10-01T10:13:59.036-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.599.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.036-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: 
child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.599.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.036-07:00,n_0@127.0.0.1:memcached_config_mgr<0.604.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:info,2017-10-01T10:13:59.037-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.604.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.037-07:00,n_0@127.0.0.1:memcached_config_mgr<0.604.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:59.039-07:00,n_0@127.0.0.1:memcached_config_mgr<0.604.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.040-07:00,n_0@127.0.0.1:memcached_config_mgr<0.604.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.040-07:00,n_0@127.0.0.1:memcached_config_mgr<0.604.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.042-07:00,n_0@127.0.0.1:<0.608.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.042-07:00,n_0@127.0.0.1:<0.605.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.604.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [error_logger:error,2017-10-01T10:13:59.042-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.604.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.604.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.606.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17166 neighbours: [error_logger:error,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.604.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [ns_server:debug,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:memcached_config_mgr<0.609.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round 
[error_logger:error,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.604.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.609.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.043-07:00,n_0@127.0.0.1:<0.606.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.604.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.044-07:00,n_0@127.0.0.1:memcached_config_mgr<0.609.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:13:59.045-07:00,n_0@127.0.0.1:memcached_config_mgr<0.609.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.046-07:00,n_0@127.0.0.1:memcached_config_mgr<0.609.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.046-07:00,n_0@127.0.0.1:memcached_config_mgr<0.609.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.047-07:00,n_0@127.0.0.1:<0.613.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.047-07:00,n_0@127.0.0.1:<0.610.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.609.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:59.047-07:00,n_0@127.0.0.1:<0.611.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.609.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [error_logger:error,2017-10-01T10:13:59.047-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.609.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:memcached_config_mgr<0.614.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:memcached_config_mgr<0.614.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.609.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.611.0>] dictionary: [] 
trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17170 neighbours: [error_logger:error,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.609.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.609.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:59.048-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.614.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.049-07:00,n_0@127.0.0.1:memcached_config_mgr<0.614.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.050-07:00,n_0@127.0.0.1:memcached_config_mgr<0.614.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.051-07:00,n_0@127.0.0.1:memcached_config_mgr<0.614.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.069-07:00,n_0@127.0.0.1:<0.618.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.069-07:00,n_0@127.0.0.1:<0.616.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.614.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.069-07:00,n_0@127.0.0.1:<0.615.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.614.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:memcached_config_mgr<0.619.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [error_logger:error,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.614.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [ns_server:debug,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:memcached_config_mgr<0.619.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.614.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.616.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17170 neighbours: [error_logger:error,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.614.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, 
{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.070-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.614.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.071-07:00,n_0@127.0.0.1:memcached_config_mgr<0.619.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [error_logger:info,2017-10-01T10:13:59.071-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.619.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.072-07:00,n_0@127.0.0.1:memcached_config_mgr<0.619.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.072-07:00,n_0@127.0.0.1:memcached_config_mgr<0.619.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.081-07:00,n_0@127.0.0.1:<0.623.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.082-07:00,n_0@127.0.0.1:<0.620.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.619.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. 
Exiting [ns_server:debug,2017-10-01T10:13:59.082-07:00,n_0@127.0.0.1:<0.621.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.619.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.082-07:00,n_0@127.0.0.1:memcached_config_mgr<0.624.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:13:59.082-07:00,n_0@127.0.0.1:memcached_config_mgr<0.624.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [error_logger:error,2017-10-01T10:13:59.083-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.619.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.083-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.619.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.621.0>] dictionary: [] 
trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17100 neighbours: [error_logger:error,2017-10-01T10:13:59.083-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.619.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.084-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.619.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:13:59.084-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.624.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.085-07:00,n_0@127.0.0.1:memcached_config_mgr<0.624.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:13:59.086-07:00,n_0@127.0.0.1:memcached_config_mgr<0.624.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:13:59.086-07:00,n_0@127.0.0.1:memcached_config_mgr<0.624.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:warn,2017-10-01T10:13:59.097-07:00,n_0@127.0.0.1:<0.628.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
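Each failed cycle leaves the same trio from ns_server_sup in the log: a CRASH REPORT for the pid that died in init, a child_terminated SUPERVISOR REPORT naming memcached_config_mgr as the offender, and a PROGRESS REPORT for the replacement pid. The {restart_type,{permanent,4}} in those reports is not a stock OTP value; purely as a point of reference, a plain OTP child of the same shape (permanent worker, 1000 ms shutdown) would be declared as in the sketch below, where the module name, the one_for_one strategy and the 10-restarts-in-10-seconds intensity are assumptions rather than ns_server's actual settings.

%% Illustrative only: a plain OTP supervisor declaring a permanent worker of
%% the shape shown in the reports above. Strategy and intensity are assumed.
-module(example_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    Child = {memcached_config_mgr,                     % child id
             {memcached_config_mgr, start_link, []},   % mfargs, as reported above
             permanent,                                % always restart
             1000,                                     % shutdown timeout in ms
             worker,
             [memcached_config_mgr]},
    {ok, {{one_for_one, 10, 10}, [Child]}}.

In stock OTP a child crashing faster than the declared intensity eventually takes the supervisor down with it, so the exact restart policy in force here determines how long a loop like this can keep going.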
[error_logger:error,2017-10-01T10:13:59.097-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.624.0> terminating ** Last message in was do_check ** When Server state == {state,<11719.81.0>, <<"{\n \"admin\": \"@ns_server\",\n \"audit_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json\",\n \"breakpad\": {\n \"enabled\": true,\n \"minidump_dir\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash\"\n },\n \"client_cert_auth\": {\n \"state\": \"disable\"\n },\n \"connection_idle_time\": 0,\n \"dedupe_nmvb_maps\": false,\n \"extensions\": [\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so\",\n \"config\": \"\"\n },\n {\n \"module\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so\",\n \"config\": \"cyclesize=10485760;sleeptime=19;filename=logs/n_0/memcached.log\"\n }\n ],\n \"interfaces\": [\n {\n \"host\": \"*\",\n \"port\": 12000,\n \"maxconn\": 30000\n },\n {\n \"host\": \"*\",\n \"port\": 11999,\n \"maxconn\": 5000\n },\n {\n \"host\": \"*\",\n \"port\": 11996,\n \"maxconn\": 30000,\n \"ssl\": {\n \"key\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem\",\n \"cert\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem\"\n }\n }\n ],\n \"privilege_debug\": false,\n \"rbac_file\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac\",\n \"root\": \"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install\",\n \"ssl_cipher_list\": \"HIGH\",\n \"ssl_minimum_protocol\": \"tlsv1\",\n \"verbosity\": 0,\n \"xattr_enabled\": false\n}\n">>} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3,[{file,"src/async.erl"},{line,131}]}]} [error_logger:error,2017-10-01T10:13:59.097-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.624.0> registered_name: [] exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.626.0>] dictionary: [] trap_exit: false status: running heap_size: 17731 stack_size: 27 reductions: 17089 neighbours: [error_logger:error,2017-10-01T10:13:59.098-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.624.0>, {error, {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"}, {line,131}]}]}}} [error_logger:error,2017-10-01T10:13:59.098-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: 
child_terminated Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_memcached,'-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"},{line,1605}]}, {async,'-async_init/3-fun-0-',3, [{file,"src/async.erl"},{line,131}]}]} Offender: [{pid,<0.624.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:13:59.098-07:00,n_0@127.0.0.1:<0.626.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.624.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-', 1, [{file, "src/ns_memcached.erl"}, {line, 1605}]}, {async, '-async_init/3-fun-0-', 3, [{file, "src/async.erl"}, {line, 131}]}]} [ns_server:debug,2017-10-01T10:13:59.098-07:00,n_0@127.0.0.1:<0.625.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.624.0> died with {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_memcached, '-config_validate/1-fun-0-',1, [{file,"src/ns_memcached.erl"}, {line,1605}]}, {async,'-async_init/3-fun-0-', 3, [{file,"src/async.erl"}, {line,131}]}]}. Exiting [ns_server:warn,2017-10-01T10:13:59.390-07:00,n_0@127.0.0.1:<0.630.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:warn,2017-10-01T10:13:59.465-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.465-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:13:59.689-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.689-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:13:59.690-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:13:59.690-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:00.004-07:00,n_0@127.0.0.1:<0.396.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:warn,2017-10-01T10:14:00.392-07:00,n_0@127.0.0.1:<0.630.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:warn,2017-10-01T10:14:00.469-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:00.469-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:00.691-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:00.691-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. 
[ns_server:warn,2017-10-01T10:14:00.692-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:00.692-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:01.005-07:00,n_0@127.0.0.1:<0.396.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:debug,2017-10-01T10:14:01.386-07:00,n_0@127.0.0.1:compiled_roles_cache<0.213.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"couchbase",admin} [user:info,2017-10-01T10:14:01.388-07:00,n_0@127.0.0.1:<0.375.0>:ns_storage_conf:setup_disk_storage_conf:125]Setting database directory path to /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir and index directory path to /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir [ns_server:info,2017-10-01T10:14:01.390-07:00,n_0@127.0.0.1:<0.375.0>:ns_storage_conf:setup_disk_storage_conf:133]Removing all unused database files [ns_server:debug,2017-10-01T10:14:01.393-07:00,n_0@127.0.0.1:<0.375.0>:ns_storage_conf:setup_db_and_ix_paths:52]Initialize db_and_ix_paths variable with [{db_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir"}, {index_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir"}] [ns_server:debug,2017-10-01T10:14:01.393-07:00,n_0@127.0.0.1:ns_audit<0.397.0>:ns_audit:handle_call:104]Audit disk_storage_conf: [{index_path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {db_path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {node,'n_0@127.0.0.1'}, {real_userid, {[{source,ns_server},{user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34034}]}}, {timestamp,<<"2017-10-01T10:14:01.393-07:00">>}] [ns_server:debug,2017-10-01T10:14:01.393-07:00,n_0@127.0.0.1:<0.173.0>:restartable:loop:71]Restarting child <0.174.0> MFA: {ns_server_nodes_sup,start_link,[]} Shutdown policy: infinity Caller: {<0.375.0>,#Ref<0.0.0.3442>} [ns_server:debug,2017-10-01T10:14:01.394-07:00,n_0@127.0.0.1:<0.512.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.511.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.394-07:00,n_0@127.0.0.1:<0.528.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.527.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.394-07:00,n_0@127.0.0.1:<0.520.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.519.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.394-07:00,n_0@127.0.0.1:<0.509.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.508.0>} exited with reason shutdown [ns_server:info,2017-10-01T10:14:01.394-07:00,n_0@127.0.0.1:mb_master<0.467.0>:mb_master:terminate:298]Synchronously shutting down child mb_master_sup [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,auto_failover},{pid,<0.501.0>}} [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,auto_failover, {pid,<0.501.0>}, 
{'n_0@127.0.0.1',<0.501.0>}}} [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,ns_orchestrator},{pid,<0.475.0>}} [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,ns_orchestrator, {pid,<0.475.0>}, {'n_0@127.0.0.1',<0.475.0>}}} [ns_server:debug,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:<0.506.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {master_activity_events,<0.505.0>} exited with reason killed [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,auto_reprovision},{pid,<0.474.0>}} [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,auto_reprovision, {pid,<0.474.0>}, {'n_0@127.0.0.1',<0.474.0>}}} [error_logger:info,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,ns_tick},{pid,<0.470.0>}} [ns_server:debug,2017-10-01T10:14:01.395-07:00,n_0@127.0.0.1:<0.468.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.467.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.465.0>:restartable:shutdown_child:120]Successfully terminated process <0.467.0> [error_logger:info,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,ns_tick, {pid,<0.470.0>}, {'n_0@127.0.0.1',<0.470.0>}}} [ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.460.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.459.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.457.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.456.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.455.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.453.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.454.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.453.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.397-07:00,n_0@127.0.0.1:<0.451.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.450.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.397-07:00,n_0@127.0.0.1:<0.449.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.447.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.397-07:00,n_0@127.0.0.1:<0.448.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.447.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.397-07:00,n_0@127.0.0.1:<0.441.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.440.0>} exited with reason shutdown [ns_server:warn,2017-10-01T10:14:01.397-07:00,n_0@127.0.0.1:<0.664.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. 
[ns_server:debug,2017-10-01T10:14:01.396-07:00,n_0@127.0.0.1:<0.452.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.450.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.408-07:00,n_0@127.0.0.1:<0.438.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.437.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.409-07:00,n_0@127.0.0.1:<0.436.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.435.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.429-07:00,n_0@127.0.0.1:<0.433.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.432.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.437-07:00,n_0@127.0.0.1:<0.430.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.429.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.451-07:00,n_0@127.0.0.1:<0.427.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.426.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.451-07:00,n_0@127.0.0.1:<0.425.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.422.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.451-07:00,n_0@127.0.0.1:<0.424.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ale_stats_events,<0.422.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:01.451-07:00,n_0@127.0.0.1:<0.421.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.420.0>} exited with reason shutdown [error_logger:error,2017-10-01T10:14:01.452-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_bucket_sup} Context: shutdown_error Reason: normal Offender: [{pid,<0.421.0>}, {name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:01.452-07:00,n_0@127.0.0.1:<0.410.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.409.0>} exited with reason shutdown [ns_server:warn,2017-10-01T10:14:01.471-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:01.471-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:01.693-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:01.693-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:01.694-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:01.694-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. [ns_server:warn,2017-10-01T10:14:02.006-07:00,n_0@127.0.0.1:<0.396.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[error_logger:error,2017-10-01T10:14:02.006-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server ns_audit_cfg terminating ** Last message in was notify_memcached ** When Server state == {[{auditd_enabled,false}, {disabled,[]}, {rotate_interval,86400}, {rotate_size,20971520}, {sync,[]}], [{log_path,"logs/n_0"}]} ** Reason for termination == ** {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_audit_cfg,handle_info,2,[{file,"src/ns_audit_cfg.erl"},{line,108}]}, {gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,604}]}, {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]} [ns_server:debug,2017-10-01T10:14:02.006-07:00,n_0@127.0.0.1:<0.394.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.393.0>} exited with reason {{badmatch, {error, couldnt_connect_to_memcached}}, [{ns_audit_cfg, handle_info, 2, [{file, "src/ns_audit_cfg.erl"}, {line, 108}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [error_logger:error,2017-10-01T10:14:02.007-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: ns_audit_cfg:init/1 pid: <0.393.0> registered_name: ns_audit_cfg exception exit: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_audit_cfg,handle_info,2, [{file,"src/ns_audit_cfg.erl"}, {line,108}]}, {gen_server,handle_msg,5, [{file,"gen_server.erl"},{line,604}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]} in function gen_server:terminate/6 (gen_server.erl, line 744) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.265.0>,<0.394.0>] dictionary: [] trap_exit: false status: running heap_size: 4185 stack_size: 27 reductions: 6553 neighbours: [ns_server:warn,2017-10-01T10:14:02.399-07:00,n_0@127.0.0.1:<0.664.0>:ns_memcached:connect:1187]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. 
[error_logger:error,2017-10-01T10:14:02.454-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: shutdown_error Reason: killed Offender: [{pid,<0.397.0>}, {name,ns_audit}, {mfargs,{ns_audit,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:error,2017-10-01T10:14:02.454-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: shutdown_error Reason: {{badmatch,{error,couldnt_connect_to_memcached}}, [{ns_audit_cfg,handle_info,2, [{file,"src/ns_audit_cfg.erl"},{line,108}]}, {gen_server,handle_msg,5, [{file,"gen_server.erl"},{line,604}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]} Offender: [{pid,<0.393.0>}, {name,ns_audit_cfg}, {mfargs,{ns_audit_cfg,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:02.454-07:00,n_0@127.0.0.1:<0.391.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.390.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.454-07:00,n_0@127.0.0.1:<0.387.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {user_storage_events,<0.385.0>} exited with reason killed [ns_server:debug,2017-10-01T10:14:02.454-07:00,n_0@127.0.0.1:<0.386.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.385.0>} exited with reason killed [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.403.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.385.0> died with killed. 
Exiting [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.381.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.379.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.380.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {json_rpc_events,<0.379.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.384.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ssl_service_events,<0.379.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.383.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {user_storage_events,<0.379.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.455-07:00,n_0@127.0.0.1:<0.382.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.379.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.456-07:00,n_0@127.0.0.1:<0.339.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.338.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.456-07:00,n_0@127.0.0.1:<0.325.0>:restartable:shutdown_child:120]Successfully terminated process <0.327.0> [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.342.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.341.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.330.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.329.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.319.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {buckets_events,<0.318.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.307.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.306.0>} exited with reason killed [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.290.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events_local,<0.289.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.457-07:00,n_0@127.0.0.1:<0.310.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.308.0>} exited with reason killed [error_logger:error,2017-10-01T10:14:02.458-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: gen_event:init_it/6 pid: <0.309.0> registered_name: bucket_info_cache_invalidations exception exit: killed in function gen_event:terminate_server/4 (gen_event.erl, line 320) ancestors: [bucket_info_cache,ns_server_sup,ns_server_nodes_sup, <0.173.0>,ns_server_cluster_sup,<0.89.0>] messages: [] links: [] dictionary: [] trap_exit: true status: running heap_size: 376 stack_size: 27 reductions: 151 neighbours: [ns_server:debug,2017-10-01T10:14:02.458-07:00,n_0@127.0.0.1:<0.279.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {user_storage_events,<0.277.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.458-07:00,n_0@127.0.0.1:<0.278.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.277.0>} exited with reason shutdown 
[ns_server:debug,2017-10-01T10:14:02.458-07:00,n_0@127.0.0.1:<0.275.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {user_storage_events,<0.273.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.458-07:00,n_0@127.0.0.1:<0.274.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.273.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.459-07:00,n_0@127.0.0.1:<0.264.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.221.0> died with shutdown. Exiting [ns_server:debug,2017-10-01T10:14:02.459-07:00,n_0@127.0.0.1:ns_couchdb_port<0.216.0>:ns_port_server:terminate:195]Shutting down port ns_couchdb [ns_server:debug,2017-10-01T10:14:02.460-07:00,n_0@127.0.0.1:ns_couchdb_port<0.216.0>:ns_port_server:port_shutdown:296]Shutdown command: "shutdown" [error_logger:info,2017-10-01T10:14:02.470-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.254.0>,connection_closed}} [error_logger:info,2017-10-01T10:14:02.470-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:02.471-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.665.0>,shutdown}} [error_logger:info,2017-10-01T10:14:02.471-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:02.471-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.667.0>,shutdown}} [error_logger:info,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [ns_server:warn,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. [ns_server:debug,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:memcached_refresh<0.178.0>:memcached_refresh:handle_info:93]Refresh of [rbac,isasl] failed. Retry in 1000 ms. 
[ns_server:debug,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:ns_couchdb_port<0.216.0>:ns_port_server:terminate:198]ns_couchdb has exited [ns_server:debug,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:<0.215.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.213.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.472-07:00,n_0@127.0.0.1:<0.214.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {user_storage_events,<0.213.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.473-07:00,n_0@127.0.0.1:<0.188.0>:restartable:shutdown_child:120]Successfully terminated process <0.190.0> [ns_server:debug,2017-10-01T10:14:02.473-07:00,n_0@127.0.0.1:<0.182.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.181.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:02.474-07:00,n_0@127.0.0.1:<0.173.0>:restartable:shutdown_child:120]Successfully terminated process <0.174.0> [ns_server:debug,2017-10-01T10:14:02.474-07:00,n_0@127.0.0.1:menelaus_barrier<0.672.0>:one_shot_barrier:barrier_body:58]Barrier menelaus_barrier has started [error_logger:info,2017-10-01T10:14:02.474-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.671.0>}, {name,remote_monitors}, {mfargs,{remote_monitors,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.474-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.672.0>}, {name,menelaus_barrier}, {mfargs,{menelaus_sup,barrier_start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.475-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.673.0>}, {name,rest_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,rest_lhttpc_pool}, {connection_timeout,120000}, {pool_size,20}]]}}, {restart_type,{permanent,1}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:02.475-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:init:40]Starting during memcached lifetime. Try to refresh all files. [error_logger:info,2017-10-01T10:14:02.475-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.674.0>}, {name,memcached_refresh}, {mfargs,{memcached_refresh,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.476-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.676.0>}, {name,ssl_service_events}, {mfargs, {gen_event,start_link, [{local,ssl_service_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2017-10-01T10:14:02.476-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:ns_memcached:connect:1184]Unable to connect: {error,{badmatch,{error,econnrefused}}}. 
[ns_server:debug,2017-10-01T10:14:02.476-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:93]Refresh of [isasl,ssl_certs,rbac] failed. Retry in 1000 ms. [ns_server:info,2017-10-01T10:14:02.476-07:00,n_0@127.0.0.1:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:init:388]Used ssl options: [{keyfile,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/ssl-cert-key.pem"}, {certfile,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/ssl-cert-key.pem"}, {versions,[tlsv1,'tlsv1.1','tlsv1.2']}, {cacertfile,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/ssl-cert-key.pem-ca"}, {dh,<<48,130,1,8,2,130,1,1,0,152,202,99,248,92,201,35,238,246,5,77,93,120,10, 118,129,36,52,111,193,167,220,49,229,106,105,152,133,121,157,73,158, 232,153,197,197,21,171,140,30,207,52,165,45,8,221,162,21,199,183,66, 211,247,51,224,102,214,190,130,96,253,218,193,35,43,139,145,89,200,250, 145,92,50,80,134,135,188,205,254,148,122,136,237,220,186,147,187,104, 159,36,147,217,117,74,35,163,145,249,175,242,18,221,124,54,140,16,246, 169,84,252,45,47,99,136,30,60,189,203,61,86,225,117,255,4,91,46,110, 167,173,106,51,65,10,248,94,225,223,73,40,232,140,26,11,67,170,118,190, 67,31,127,233,39,68,88,132,171,224,62,187,207,160,189,209,101,74,8,205, 174,146,173,80,105,144,246,25,153,86,36,24,178,163,64,202,221,95,184, 110,244,32,226,217,34,55,188,230,55,16,216,247,173,246,139,76,187,66, 211,159,17,46,20,18,48,80,27,250,96,189,29,214,234,241,34,69,254,147, 103,220,133,40,164,84,8,44,241,61,164,151,9,135,41,60,75,4,202,133,173, 72,6,69,167,89,112,174,40,229,171,2,1,2>>}, {ciphers,[{dhe_rsa,aes_256_cbc,sha256}, {dhe_dss,aes_256_cbc,sha256}, {rsa,aes_256_cbc,sha256}, {dhe_rsa,aes_128_cbc,sha256}, {dhe_dss,aes_128_cbc,sha256}, {rsa,aes_128_cbc,sha256}, {dhe_rsa,aes_256_cbc,sha}, {dhe_dss,aes_256_cbc,sha}, {rsa,aes_256_cbc,sha}, {dhe_rsa,'3des_ede_cbc',sha}, {dhe_dss,'3des_ede_cbc',sha}, {rsa,'3des_ede_cbc',sha}, {dhe_rsa,aes_128_cbc,sha}, {dhe_dss,aes_128_cbc,sha}, {rsa,aes_128_cbc,sha}]}] [error_logger:info,2017-10-01T10:14:02.488-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.677.0>}, {name,ns_ssl_services_setup}, {mfargs,{ns_ssl_services_setup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2017-10-01T10:14:02.489-07:00,n_0@127.0.0.1:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for n1ql [ns_server:info,2017-10-01T10:14:02.493-07:00,n_0@127.0.0.1:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for cbas [ns_server:info,2017-10-01T10:14:02.493-07:00,n_0@127.0.0.1:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for fts [ns_server:debug,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:<0.679.0>:restartable:start_child:98]Started child process <0.680.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} [error_logger:info,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.679.0>}, {name,ns_rest_ssl_service}, {mfargs, {restartable,start_link, [{ns_ssl_services_setup, start_link_rest_service,[]}, 1000]}}, 
{restart_type,permanent}, {shutdown,infinity}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.675.0>}, {name,ns_ssl_services_sup}, {mfargs,{ns_ssl_services_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:users_replicator<0.700.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [error_logger:info,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.698.0>}, {name,user_storage_events}, {mfargs, {gen_event,start_link, [{local,user_storage_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_storage:anounce_startup:68]Announce my startup to <0.700.0> [error_logger:info,2017-10-01T10:14:02.494-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_storage_sup} started: [{pid,<0.700.0>}, {name,users_replicator}, {mfargs,{menelaus_users,start_replicator,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:users_replicator<0.700.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <0.701.0> [ns_server:debug,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:open:170]Opening file "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/users.dets" [error_logger:info,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_storage_sup} started: [{pid,<0.701.0>}, {name,users_storage}, {mfargs,{menelaus_users,start_storage,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:versioned_cache:init:44]Starting versioned cache compiled_roles_cache [error_logger:info,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.699.0>}, {name,users_storage_sup}, {mfargs,{users_storage_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,users_sup} started: [{pid,<0.703.0>}, {name,compiled_roles_cache}, {mfargs,{menelaus_roles,start_compiled_roles_cache,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.495-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.697.0>}, {name,users_sup}, 
{mfargs,{users_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:02.496-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,'$1','$2','_','_'}, [], [{{'$1','$2'}}]}], 100} [ns_server:debug,2017-10-01T10:14:02.496-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:init_after_ack:162]Loading 0 items, 299 words took 0ms [ns_server:debug,2017-10-01T10:14:02.496-07:00,n_0@127.0.0.1:users_replicator<0.700.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:02.502-07:00,n_0@127.0.0.1:wait_link_to_couchdb_node<0.709.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:133]Waiting for ns_couchdb node to start [error_logger:info,2017-10-01T10:14:02.502-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.707.0>}, {name,start_couchdb_node}, {mfargs,{ns_server_nodes_sup,start_couchdb_node,[]}}, {restart_type,{permanent,5}}, {shutdown,86400000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:02.502-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:02.504-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.712.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:02.504-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:02.504-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:02.707-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:02.709-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:02.709-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.715.0>,shutdown}} [error_logger:info,2017-10-01T10:14:02.709-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [ns_server:info,2017-10-01T10:14:02.731-07:00,n_0@127.0.0.1:<0.299.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:02.731-07:00,n_0@127.0.0.1:<0.299.0>:goport:handle_port_erlang_exit:474]Port terminated [error_logger:info,2017-10-01T10:14:02.910-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} 
[ns_server:debug,2017-10-01T10:14:02.911-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:02.911-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.718.0>,shutdown}} [error_logger:info,2017-10-01T10:14:02.911-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.112-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:03.113-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:03.113-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.721.0>,shutdown}} [error_logger:info,2017-10-01T10:14:03.113-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.314-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:03.315-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:03.315-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.724.0>,shutdown}} [error_logger:info,2017-10-01T10:14:03.315-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:03.496-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl,ssl_certs,rbac] succeeded [error_logger:info,2017-10-01T10:14:03.516-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:03.517-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:03.517-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.727.0>,shutdown}} [error_logger:info,2017-10-01T10:14:03.517-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.718-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.718-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.730.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:03.719-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:03.719-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.920-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:03.922-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.733.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:03.922-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:03.922-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.123-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:04.124-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:04.124-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.736.0>,shutdown}} [error_logger:info,2017-10-01T10:14:04.124-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.325-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:04.326-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:04.326-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.739.0>,shutdown}} [error_logger:info,2017-10-01T10:14:04.326-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.527-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= 
{net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:04.528-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:04.528-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.742.0>,shutdown}} [error_logger:info,2017-10-01T10:14:04.528-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.729-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.729-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.745.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:04.730-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:04.730-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.931-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:04.932-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.748.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:04.932-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:04.932-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.133-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.133-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.751.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:05.133-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:05.134-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.335-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} 
[error_logger:info,2017-10-01T10:14:05.336-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.754.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:05.336-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:05.336-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.537-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.538-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.757.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:05.538-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:05.538-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.739-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.740-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.760.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:05.740-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2017-10-01T10:14:05.740-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:05.941-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:05.964-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:06.165-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:06.366-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:06.567-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:06.768-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:06.969-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false 
[ns_server:debug,2017-10-01T10:14:07.171-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:07.372-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [ns_server:debug,2017-10-01T10:14:07.573-07:00,n_0@127.0.0.1:<0.710.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:147]ns_couchdb is not ready: false [error_logger:info,2017-10-01T10:14:08.514-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.709.0>}, {name,wait_for_couchdb_node}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:08.515-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.670.0>:ns_storage_conf:setup_db_and_ix_paths:52]Initialize db_and_ix_paths variable with [{db_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir"}, {index_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir"}] [ns_server:info,2017-10-01T10:14:08.521-07:00,n_0@127.0.0.1:ns_server_sup<0.775.0>:dir_size:start_link:39]Starting quick version of dir_size with program name: godu [error_logger:info,2017-10-01T10:14:08.521-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.776.0>}, {name,ns_disksup}, {mfargs,{ns_disksup,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:08.521-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.777.0>}, {name,diag_handler_worker}, {mfargs,{work_queue,start_link,[diag_handler_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:08.529-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.778.0>}, {name,dir_size}, {mfargs,{dir_size,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:08.529-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.779.0>}, {name,request_throttler}, {mfargs,{request_throttler,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:08.529-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.780.0>}, {name,ns_log}, {mfargs,{ns_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:08.529-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.781.0>}, {name,ns_crash_log_consumer}, 
{mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:08.530-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:init:62]Init config writer for memcached_passwords, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:debug,2017-10-01T10:14:08.530-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:info,2017-10-01T10:14:09.932-07:00,n_0@127.0.0.1:<0.785.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:09.932-07:00,n_0@127.0.0.1:<0.785.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:09.933-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.782.0> [ns_server:debug,2017-10-01T10:14:09.933-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:09.933-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.782.0> [ns_server:debug,2017-10-01T10:14:09.934-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested [error_logger:info,2017-10-01T10:14:09.934-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.782.0>}, {name,memcached_passwords}, {mfargs,{memcached_passwords,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.934-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:memcached_cfg:init:62]Init config writer for memcached_permissions, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac" [ns_server:debug,2017-10-01T10:14:09.934-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac" [ns_server:debug,2017-10-01T10:14:09.935-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.786.0> [ns_server:debug,2017-10-01T10:14:09.935-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{user,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:09.935-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl] succeeded [ns_server:debug,2017-10-01T10:14:09.935-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.786.0> [ns_server:debug,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of rbac requested [error_logger:info,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.786.0>}, {name,memcached_permissions}, {mfargs,{memcached_permissions,start_link,[]}}, {restart_type,permanent}, 
{shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.789.0>}, {name,ns_log_events}, {mfargs,{gen_event,start_link,[{local,ns_log_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:ns_node_disco<0.792.0>:ns_node_disco:init:138]Initting ns_node_disco with [] [ns_server:debug,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [error_logger:info,2017-10-01T10:14:09.936-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.791.0>}, {name,ns_node_disco_events}, {mfargs, {gen_event,start_link, [{local,ns_node_disco_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:<0.793.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:<0.793.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.792.0>}, {name,ns_node_disco}, {mfargs,{ns_node_disco,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.794.0>}, {name,ns_node_disco_log}, {mfargs,{ns_node_disco_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:init:68]init pulling [ns_server:debug,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:init:70]init pushing [error_logger:info,2017-10-01T10:14:09.937-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.795.0>}, {name,ns_node_disco_conf_events}, {mfargs,{ns_node_disco_conf_events,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.938-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.796.0>}, {name,ns_config_rep_merger}, {mfargs,{ns_config_rep,start_link_merger,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] 
[ns_server:debug,2017-10-01T10:14:09.938-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [rbac] succeeded [ns_server:debug,2017-10-01T10:14:09.938-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:init:74]init reannouncing [ns_server:debug,2017-10-01T10:14:09.939-07:00,n_0@127.0.0.1:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:50]ns_node_disco_conf_events config on otp [ns_server:debug,2017-10-01T10:14:09.939-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:versioned_cache:handle_info:89]Flushing cache compiled_roles_cache due to version change from undefined to {[5, 0], {0, 2904514097}, false, []} [ns_server:debug,2017-10-01T10:14:09.939-07:00,n_0@127.0.0.1:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted [ns_server:debug,2017-10-01T10:14:09.939-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [error_logger:info,2017-10-01T10:14:09.939-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.797.0>}, {name,ns_config_rep}, {mfargs,{ns_config_rep,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: users_upgrade -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([alert_limits,audit,auto_failover_cfg, auto_reprovision_cfg,autocompaction,buckets, cbas_memory_quota,cert_and_pkey, cluster_compat_version, drop_request_memory_threshold_mib,email_alerts, fts_memory_quota,goxdcr_upgrade, index_aware_rebalance_disabled, max_bucket_count,memcached,memory_quota, nodes_wanted,otp,password_policy, read_only_user_creds,remote_clusters, replication,rest,rest_creds,roles_definitions, secure_headers,server_groups, set_view_update_daemon,users_upgrade, {couchdb,max_parallel_indexers}, {couchdb,max_parallel_replica_indexers}, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/indexing/settings/config">>}, {request_limit,capi}, {request_limit,rest}, {service_map,fts}, {service_map,index}, {service_map,n1ql}, {node,'n_0@127.0.0.1',audit}, {node,'n_0@127.0.0.1',capi_port}, {node,'n_0@127.0.0.1',cbas_auth_port}, {node,'n_0@127.0.0.1',cbas_cc_client_port}, {node,'n_0@127.0.0.1',cbas_cc_cluster_port}, {node,'n_0@127.0.0.1',cbas_cc_http_port}, {node,'n_0@127.0.0.1',cbas_cluster_port}, {node,'n_0@127.0.0.1',cbas_data_port}, {node,'n_0@127.0.0.1',cbas_debug_port}, {node,'n_0@127.0.0.1',cbas_http_port}, {node,'n_0@127.0.0.1', cbas_hyracks_console_port}, {node,'n_0@127.0.0.1',cbas_messaging_port}, {node,'n_0@127.0.0.1',cbas_result_port}, {node,'n_0@127.0.0.1',cbas_ssl_port}, {node,'n_0@127.0.0.1',compaction_daemon}, {node,'n_0@127.0.0.1',config_version}, {node,'n_0@127.0.0.1',fts_http_port}, 
{node,'n_0@127.0.0.1',fts_ssl_port}, {node,'n_0@127.0.0.1',indexer_admin_port}, {node,'n_0@127.0.0.1',indexer_http_port}, {node,'n_0@127.0.0.1',indexer_https_port}, {node,'n_0@127.0.0.1',indexer_scan_port}, {node,'n_0@127.0.0.1',indexer_stcatchup_port}, {node,'n_0@127.0.0.1',indexer_stinit_port}, {node,'n_0@127.0.0.1',indexer_stmaint_port}]..) [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: roles_definitions -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,fts} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,index} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,n1ql} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/indexing/settings/config">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097238}}]}| <<"{\"indexer.settings.compaction.days_of_week\":\"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday\",\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.compaction.compaction_mode\":\"circular\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\""...>>] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: goxdcr_upgrade -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:09.940-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cluster_compat_version -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{6,63674097238}}]},5,0] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: otp -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097237}}]}, {cookie,{sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cert_and_pkey -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097227}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIB/TCCAWagAwIBAgIIFOmBlRUDHnQwDQYJKoZIhvcNAQELBQAwJDEiMCAGA1UE\nAxMZQ291Y2hiYXNlIFNlcnZlciAzODAyZTUzNTAeFw0xMzAxMDEwMDAwMDBaFw00\nOTEyMzEyMzU5NTlaMCQxIjAgBgNVBAMTGUNvdWNoYmFzZSBTZXJ2ZXIgMzgwMmU1\nMzUwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMxj2iFzf5TxfmT0Q61Jd2cM\nNDHmKB8FjpZWy2CI9iIKeM8oSrLwpq1himl3y7umd2vaUVE9gg9P5TTCGSgYkwNu\nqY5UC88wScAB4/aCx/CAfze8ON/h983"...>>, <<"*****">>}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: alert_limits -> [{max_overhead_perc,50},{max_disk_used,90},{max_indexer_ram,75}] 
[ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: audit -> [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: auto_failover_cfg -> [{enabled,false},{timeout,120},{max_nodes,1},{count,0}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: auto_reprovision_cfg -> [{enabled,true},{max_nodes,1},{count,0}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: autocompaction -> [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}],{configs,[]}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: cbas_memory_quota -> 3190 [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: drop_request_memory_threshold_mib -> undefined [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: email_alerts -> [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server,[{user,[]}, {pass,"*****"}, {host,"localhost"}, {port,25}, {encrypt,false}]}, {alerts,[auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events,indexer_ram_max_usage, ep_clock_cas_drift_threshold_exceeded,communication_issue]}] [ns_server:debug,2017-10-01T10:14:09.941-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: fts_memory_quota -> 319 [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: index_aware_rebalance_disabled -> false [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: max_bucket_count -> 10 [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: memcached -> [] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: memory_quota -> 3190 [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: nodes_wanted -> ['n_0@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: password_policy -> [{min_length,6},{must_present,[]}] 
[ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:<0.805.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.790.0>}, {name,ns_node_disco_sup}, {mfargs,{ns_node_disco_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: read_only_user_creds -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097238}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:<0.806.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.814.0>}, {name,vbucket_map_mirror}, {mfargs,{vbucket_map_mirror,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: remote_clusters -> [] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:<0.805.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:<0.806.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.816.0>}, {name,bucket_info_cache}, {mfargs,{bucket_info_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: replication -> [{enabled,true}] [error_logger:info,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.819.0>}, {name,ns_tick_event}, {mfargs,{gen_event,start_link,[{local,ns_tick_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.942-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest -> [{port,8091}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_log_events<0.789.0>:ns_mail_log:init:44]ns_mail_log started up [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest_creds -> null 
[error_logger:info,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.820.0>}, {name,buckets_events}, {mfargs, {gen_event,start_link,[{local,buckets_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: secure_headers -> [] [error_logger:info,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_mail_sup} started: [{pid,<0.822.0>}, {name,ns_mail_log}, {mfargs,{ns_mail_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: server_groups -> [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@127.0.0.1']}]] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: set_view_update_daemon -> [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}] [error_logger:info,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.821.0>}, {name,ns_mail_sup}, {mfargs,{ns_mail_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_heart<0.826.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,187}]}, {ns_heart,handle_info,2, [{file,"src/ns_heart.erl"},{line,118}]}]}} [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {couchdb,max_parallel_indexers} -> 4 [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {couchdb,max_parallel_replica_indexers} -> 2 [error_logger:info,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.823.0>}, {name,ns_stats_event}, {mfargs, {gen_event,start_link,[{local,ns_stats_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {request_limit,capi} -> undefined [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_heart<0.826.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system-processes" 
stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,117}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,256}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,277}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,250}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,187}]}]}} [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {request_limit,rest} -> undefined [error_logger:info,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.824.0>}, {name,samples_loader_tasks}, {mfargs,{samples_loader_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.943-07:00,n_0@127.0.0.1:<0.833.0>:restartable:start_child:98]Started child process <0.834.0> MFA: {ns_doctor_sup,start_link,[]} [ns_server:debug,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',audit} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {log_path,"logs/n_0"}] [error_logger:info,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.826.0>}, {name,ns_heart}, {mfargs,{ns_heart,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9500] [error_logger:info,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.828.0>}, {name,ns_heart_slow_updater}, {mfargs,{ns_heart,start_link_slow_updater,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_auth_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9310] [ns_server:debug,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_client_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9303] [error_logger:info,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.825.0>}, {name,ns_heart_sup}, {mfargs,{ns_heart_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] 
[ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9302] [error_logger:info,2017-10-01T10:14:09.944-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.835.0>}, {name,ns_doctor_events}, {mfargs, {gen_event,start_link,[{local,ns_doctor_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.836.0>}, {name,ns_doctor}, {mfargs,{ns_doctor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cc_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9301] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9305] [error_logger:info,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.833.0>}, {name,ns_doctor_sup}, {mfargs, {restartable,start_link, [{ns_doctor_sup,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_data_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9306] [error_logger:info,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.840.0>}, {name,remote_clusters_info}, {mfargs,{remote_clusters_info,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_debug_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9309] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9300] [error_logger:info,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.841.0>}, {name,master_activity_events}, {mfargs, {gen_event,start_link, [{local,master_activity_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] 
[ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_hyracks_console_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9304] [error_logger:info,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.842.0>}, {name,xdcr_ckpt_store}, {mfargs,{simple_store,start_link,[xdcr_ckpt_data]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.945-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_messaging_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9308] [error_logger:info,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.843.0>}, {name,metakv_worker}, {mfargs,{work_queue,start_link,[metakv_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_result_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9307] [error_logger:info,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.844.0>}, {name,index_events}, {mfargs,{gen_event,start_link,[{local,index_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',cbas_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19300] [ns_server:debug,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',compaction_daemon} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] [error_logger:info,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.845.0>}, {name,index_settings_manager}, {mfargs,{index_settings_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',config_version} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|{5,0}] [error_logger:info,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.848.0>}, {name,menelaus_ui_auth}, {mfargs,{menelaus_ui_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.850.0>}, {name,menelaus_local_auth}, {mfargs,{menelaus_local_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.946-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',fts_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9200] [error_logger:info,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.851.0>}, {name,menelaus_web_cache}, {mfargs,{menelaus_web_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',fts_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19200] [error_logger:info,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.852.0>}, {name,menelaus_stats_gatherer}, {mfargs,{menelaus_stats_gatherer,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_admin_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9100] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9102] [error_logger:info,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.853.0>}, {name,json_rpc_events}, {mfargs, {gen_event,start_link,[{local,json_rpc_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_https_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19102] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_scan_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9101] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stcatchup_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9104] [ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stinit_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9103] 
[ns_server:debug,2017-10-01T10:14:09.947-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',indexer_stmaint_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9105] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',is_enterprise} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|true] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',isasl} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ldap_enabled} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|true] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| active] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12000}, {dedicated_port,11999}, {ssl_port,11996}, {admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts","@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2017-10-01T10:14:09.948-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached_config} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, 
{[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',memcached_defaults} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',moxi} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,12001}, {verbosity,[]}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ns_log} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',port_servers} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',projector_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|10000] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|9499] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',rest} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}, {port,9000}, {port_meta,local}] [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19500] [ns_server:info,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:menelaus_sup<0.847.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for n1ql [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|11998] 
[ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|11997] [ns_server:info,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:menelaus_sup<0.847.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for cbas [ns_server:debug,2017-10-01T10:14:09.949-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19499] [ns_server:debug,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',ssl_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|19000] [ns_server:info,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:menelaus_sup<0.847.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for fts [ns_server:debug,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',uuid} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>] [ns_server:debug,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',xdcr_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097226}}]}|13000] [ns_server:debug,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{4,63674097238}}]}] [error_logger:info,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.854.0>}, {name,menelaus_web}, {mfargs,{menelaus_web,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.871.0>}, {name,menelaus_event}, {mfargs,{menelaus_event,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.950-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.872.0>}, {name,hot_keys_keeper}, {mfargs,{hot_keys_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.951-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.873.0>}, {name,menelaus_web_alerts_srv}, {mfargs,{menelaus_web_alerts_srv,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.952-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.874.0>}, {name,menelaus_cbauth}, {mfargs,{menelaus_cbauth,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [user:info,2017-10-01T10:14:09.954-07:00,n_0@127.0.0.1:ns_server_sup<0.775.0>:menelaus_sup:start_link:46]Couchbase Server has started on web port 9000 on node 'n_0@127.0.0.1'. Version: "5.0.0-0000-enterprise". [error_logger:info,2017-10-01T10:14:09.954-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.847.0>}, {name,menelaus}, {mfargs,{menelaus_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.954-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.880.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.954-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_sup} started: [{pid,<0.884.0>}, {name,service_agent_children_sup}, {mfargs, {supervisor,start_link, [{local,service_agent_children_sup}, service_agent_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.955-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_sup} started: [{pid,<0.885.0>}, {name,service_agent_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.955-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.883.0>}, {name,service_agent_sup}, {mfargs,{service_agent_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.955-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.887.0>}, {name,ns_memcached_sockets_pool}, {mfargs,{ns_memcached_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.956-07:00,n_0@127.0.0.1:ns_audit_cfg<0.888.0>:ns_audit_cfg:write_audit_json:158]Writing new content to "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json" : [{auditd_enabled, false}, {disabled, []}, {log_path, "logs/n_0"}, {rotate_interval, 86400}, {rotate_size, 20971520}, {sync, []}, {version, 1}, {descriptors_path, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/etc/security"}] [ns_server:debug,2017-10-01T10:14:09.958-07:00,n_0@127.0.0.1:ns_audit_cfg<0.888.0>:ns_audit_cfg:handle_info:107]Instruct memcached to reload audit config 
[error_logger:info,2017-10-01T10:14:09.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.888.0>}, {name,ns_audit_cfg}, {mfargs,{ns_audit_cfg,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.958-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.893.0>}, {name,ns_audit}, {mfargs,{ns_audit,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.958-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:info,2017-10-01T10:14:09.959-07:00,n_0@127.0.0.1:<0.895.0>:ns_memcached_log_rotator:init:28]Starting log rotator on "logs/n_0"/"memcached.log"* with an initial period of 39003ms [error_logger:info,2017-10-01T10:14:09.959-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.894.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.959-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.895.0>}, {name,ns_memcached_log_rotator}, {mfargs,{ns_memcached_log_rotator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.960-07:00,n_0@127.0.0.1:ns_ports_setup<0.880.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,goxdcr,xdcr_proxy] [error_logger:info,2017-10-01T10:14:09.960-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.896.0>}, {name,memcached_clients_pool}, {mfargs,{memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.960-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.897.0>}, {name,proxied_memcached_clients_pool}, {mfargs,{proxied_memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.898.0>}, {name,xdc_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,xdc_lhttpc_pool}, {connection_timeout,120000}, {pool_size,200}]]}}, {restart_type,{permanent,1}}, {shutdown,10000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} 
started: [{pid,<0.899.0>}, {name,ns_null_connection_pool}, {mfargs, {ns_null_connection_pool,start_link, [ns_null_connection_pool]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.905.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [error_logger:info,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.901.0>}, {name,xdc_stats_holder}, {mfargs, {proc_lib,start_link, [xdcr_sup,link_stats_holder_body,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:xdc_rdoc_replication_srv<0.906.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:<0.900.0>:xdc_rdoc_manager:start_link_remote:45]Starting xdc_rdoc_manager on 'couchdb_n_0@127.0.0.1' with following links: [<0.905.0>, <0.906.0>, <0.903.0>] [error_logger:info,2017-10-01T10:14:09.961-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.902.0>}, {name,xdc_replication_sup}, {mfargs,{xdc_replication_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:xdc_rep_manager<0.903.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [error_logger:info,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.903.0>}, {name,xdc_rep_manager}, {mfargs,{xdc_rep_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,30000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.905.0>}, {name,xdc_rdoc_replicator}, {mfargs,{xdc_rdoc_manager,start_replicator,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:ns_ports_setup<0.880.0>:ns_ports_setup:set_children:78]Monitor ns_child_ports_sup <11719.74.0> [error_logger:info,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.906.0>}, {name,xdc_rdoc_replication_srv}, {mfargs,{doc_replication_srv,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.962-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:debug,2017-10-01T10:14:09.963-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:14:09.964-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from 
/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:14:09.964-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:init:84]found memcached port to be already active [ns_server:debug,2017-10-01T10:14:09.966-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:apply_changed_memcached_config:161]New memcached config is hot-reloadable. [ns_server:debug,2017-10-01T10:14:09.967-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:14:09.970-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.905.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [ns_server:debug,2017-10-01T10:14:09.970-07:00,n_0@127.0.0.1:xdc_rdoc_replication_srv<0.906.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [ns_server:debug,2017-10-01T10:14:09.970-07:00,n_0@127.0.0.1:xdc_rep_manager<0.903.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.267.0> [error_logger:info,2017-10-01T10:14:09.970-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<11720.267.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_n_0@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.970-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.900.0>}, {name,xdcr_sup}, {mfargs,{xdcr_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.971-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.913.0>}, {name,xdcr_dcp_sockets_pool}, {mfargs,{xdcr_dcp_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.971-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.914.0>}, {name,testconditions_store}, {mfargs,{simple_store,start_link,[testconditions]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.972-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.916.0>}, {name,ns_bucket_worker}, {mfargs,{work_queue,start_link,[ns_bucket_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.972-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_sup} started: [{pid,<0.919.0>}, 
{name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.972-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.917.0>}, {name,ns_bucket_sup}, {mfargs,{ns_bucket_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.972-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.915.0>}, {name,ns_bucket_worker_sup}, {mfargs,{ns_bucket_worker_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [user:info,2017-10-01T10:14:09.978-07:00,n_0@127.0.0.1:memcached_config_mgr<0.894.0>:memcached_config_mgr:hot_reload_config:221]Hot-reloaded memcached.json for config change of the following keys: [<<"xattr_enabled">>] [error_logger:info,2017-10-01T10:14:09.979-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.921.0>}, {name,system_stats_collector}, {mfargs,{system_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.979-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.925.0>}, {name,{stats_archiver,"@system"}}, {mfargs,{stats_archiver,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.980-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.927.0>}, {name,{stats_reader,"@system"}}, {mfargs,{stats_reader,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.980-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.928.0>}, {name,{stats_archiver,"@system-processes"}}, {mfargs, {stats_archiver,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.980-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.930.0>}, {name,{stats_reader,"@system-processes"}}, {mfargs, {stats_reader,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.981-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.931.0>}, {name,{stats_archiver,"@query"}}, {mfargs,{stats_archiver,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, 
{child_type,worker}] [error_logger:info,2017-10-01T10:14:09.981-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.933.0>}, {name,{stats_reader,"@query"}}, {mfargs,{stats_reader,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.982-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.934.0>}, {name,query_stats_collector}, {mfargs,{query_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.983-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.936.0>}, {name,{stats_archiver,"@global"}}, {mfargs,{stats_archiver,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.983-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.938.0>}, {name,{stats_reader,"@global"}}, {mfargs,{stats_reader,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.939.0>}, {name,global_stats_collector}, {mfargs,{global_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.941.0>}, {name,goxdcr_status_keeper}, {mfargs,{goxdcr_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.944.0>}, {name,index_stats_children_sup}, {mfargs, {supervisor,start_link, [{local,index_stats_children_sup}, index_stats_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.984-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.946.0>}, {name,index_status_keeper_worker}, {mfargs, {work_queue,start_link, [index_status_keeper_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.985-07:00,n_0@127.0.0.1:xdcr_doc_replicator<0.905.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [error_logger:info,2017-10-01T10:14:09.985-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.947.0>}, {name,index_status_keeper}, {mfargs,{indexer_gsi,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.985-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.950.0>}, {name,index_status_keeper_fts}, {mfargs,{indexer_fts,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.986-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.953.0>}, {name,index_status_keeper_cbas}, {mfargs,{indexer_cbas,start_keeper,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.986-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.945.0>}, {name,index_status_keeper_sup}, {mfargs,{index_status_keeper_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.986-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.956.0>}, {name,index_stats_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.986-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.943.0>}, {name,index_stats_sup}, {mfargs,{index_stats_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.958.0>}, {name,compaction_daemon}, {mfargs,{compaction_daemon,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:<0.961.0>:new_concurrency_throttle:init:113]init concurrent throttle process, pid: <0.961.0>, type: kv_throttle# of available token: 1 [ns_server:debug,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact for compact_views. Rescheduling compaction. 
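The compaction_new_daemon entries just above show the scheduler finding nothing to compact and pushing each job out to its next slot ("Next run will be in 30s" for compact_kv and compact_views, "3600s" for compact_master). A minimal sketch of that reschedule pattern, using plain erlang:send_after and hypothetical names rather than the actual compaction_scheduler code:

```erlang
%% Minimal sketch, not the actual compaction_scheduler module: if the last run
%% finished sooner than its configured period ("Finished compaction ... too
%% soon"), sleep out the remainder before sending the next trigger message.
-module(reschedule_sketch).
-export([schedule_next/2]).

schedule_next(PeriodSeconds, ElapsedSeconds) ->
    DelayMs = max(0, PeriodSeconds - ElapsedSeconds) * 1000,
    erlang:send_after(DelayMs, self(), compact_again).
```

With a 30-second period and a run that finished almost immediately, the timer fires roughly 30 seconds later, which is what the compact_kv and compact_views lines report.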
[ns_server:debug,2017-10-01T10:14:09.987-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1309]No buckets to compact for compact_master. Rescheduling compaction. [ns_server:debug,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_master too soon. Next run will be in 3600s [ns_server:debug,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:<0.965.0>:mb_master:check_master_takeover_needed:140]Sending master node question to the following nodes: [] [ns_server:debug,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:<0.965.0>:mb_master:check_master_takeover_needed:142]Got replies: [] [ns_server:debug,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:<0.965.0>:mb_master:check_master_takeover_needed:148]Was unable to discover master, not going to force mastership takeover [error_logger:info,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.959.0>}, {name,compaction_new_daemon}, {mfargs,{compaction_new_daemon,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,86400000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,cluster_logs_sup} started: [{pid,<0.963.0>}, {name,ets_holder}, {mfargs, {cluster_logs_collection_task, start_link_ets_holder,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.962.0>}, {name,cluster_logs_sup}, {mfargs,{cluster_logs_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.988-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.964.0>}, {name,remote_api}, {mfargs,{remote_api,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [user:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:mb_master<0.967.0>:mb_master:init:86]I'm the only node, so I'm the master. 
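mb_master above polls an empty list of peers for an existing master, gets no replies, declines a takeover, and then concludes it is the master itself ("I'm the only node, so I'm the master."). A simplified sketch of that decision; AskFun is a hypothetical stand-in for mb_master's per-node query, not its real API:

```erlang
%% Simplified sketch of the decision logged above; AskFun is a hypothetical
%% stand-in for the "who is the master?" question sent to each peer node.
-module(master_sketch).
-export([decide_mastership/2]).

decide_mastership(OtherNodes, AskFun) ->
    Replies = [AskFun(Node) || Node <- OtherNodes],
    case [Master || {ok, Master} <- Replies] of
        []           -> become_master;          %% nobody answered: claim it
        [Master | _] -> {surrender_to, Master}  %% defer to an existing master
    end.
```

With OtherNodes = [] the reply list is empty, so the only possible outcome is become_master, matching the single-node case in the log.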
[error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.970.0>,ns_tick,<0.970.0>,#Fun} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.970.0>,#Ref<0.0.0.5275>}} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.989-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_tick},{pid,<0.970.0>}} [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [ns_server:debug,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:mb_master_sup<0.969.0>:misc:start_singleton:855]start_singleton(gen_server, ns_tick, [], []): started as <0.970.0> on 'n_0@127.0.0.1' [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.970.0>}, {name,ns_tick}, {mfargs,{ns_tick,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.973.0>}, {name,ns_janitor_server}, {mfargs,{ns_janitor_server,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.990-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.974.0>,auto_reprovision,<0.974.0>,#Fun} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.974.0>,#Ref<0.0.0.5304>}} 
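The global_trace lines above (register_name, set_lock, ins_name, del_lock) are the standard Erlang global registry doing its locking work while misc:start_singleton registers ns_tick cluster-wide. A self-contained illustration of a globally registered gen_server, not ns_server's misc module; registering under {global, Name} is what triggers exactly that trace:

```erlang
%% Illustrative module, not ns_server code: a gen_server registered through the
%% standard 'global' registry, which is what produces the set_lock / ins_name /
%% del_lock trace entries above.
-module(singleton_sketch).
-behaviour(gen_server).
-export([start_link/0, ping/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({global, ?MODULE}, ?MODULE, [], []).

%% Any node in the cluster can reach the singleton through its global name.
ping() ->
    gen_server:call({global, ?MODULE}, ping).

init([]) -> {ok, no_state}.
handle_call(ping, _From, State) -> {reply, pong, State}.
handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```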
[error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [ns_server:debug,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:ns_orchestrator_child_sup<0.972.0>:misc:start_singleton:855]start_singleton(gen_server, auto_reprovision, [], []): started as <0.974.0> on 'n_0@127.0.0.1' [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_reprovision},{pid,<0.974.0>}} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [ns_server:debug,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:ns_orchestrator_child_sup<0.972.0>:misc:start_singleton:855]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.975.0> on 'n_0@127.0.0.1' [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.974.0>}, {name,auto_reprovision}, {mfargs,{auto_reprovision,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.991-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.975.0>,ns_orchestrator,<0.975.0>,#Fun} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.975.0>,#Ref<0.0.0.5325>}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [ns_server:debug,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:<0.977.0>:auto_failover:init:150]init auto_failover. 
[error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [ns_server:debug,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:ns_orchestrator_sup<0.971.0>:misc:start_singleton:855]start_singleton(gen_server, auto_failover, [], []): started as <0.977.0> on 'n_0@127.0.0.1' [ns_server:debug,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:<0.965.0>:restartable:start_child:98]Started child process <0.967.0> MFA: {mb_master,start_link,[]} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_orchestrator},{pid,<0.975.0>}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.975.0>}, {name,ns_orchestrator}, {mfargs,{ns_orchestrator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.992-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.972.0>}, {name,ns_orchestrator_child_sup}, {mfargs,{ns_orchestrator_child_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.977.0>,auto_failover,<0.977.0>,#Fun} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.977.0>,#Ref<0.0.0.5354>}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, 
{replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@127.0.0.1']}, {replies,[{'n_0@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_failover},{pid,<0.977.0>}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.977.0>}, {name,auto_failover}, {mfargs,{auto_failover,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.993-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.971.0>}, {name,ns_orchestrator_sup}, {mfargs,{ns_orchestrator_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.965.0>}, {name,mb_master}, {mfargs, {restartable,start_link, [{mb_master,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.978.0>}, {name,master_activity_events_ingress}, {mfargs, {gen_event,start_link, [{local,master_activity_events_ingress}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.670.0>:one_shot_barrier:notify:27]Notifying on barrier menelaus_barrier [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.979.0>}, {name,master_activity_events_timestamper}, {mfargs, {master_activity_events,start_link_timestamper,[]}}, {restart_type,permanent}, 
{shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:menelaus_barrier<0.672.0>:one_shot_barrier:barrier_body:62]Barrier menelaus_barrier got notification from <0.670.0> [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.980.0>}, {name,master_activity_events_pids_watcher}, {mfargs, {master_activity_events_pids_watcher,start_link, []}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:ns_server_nodes_sup<0.670.0>:one_shot_barrier:notify:32]Successfuly notified on barrier menelaus_barrier [ns_server:debug,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:<0.173.0>:restartable:start_child:98]Started child process <0.670.0> MFA: {ns_server_nodes_sup,start_link,[]} [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.981.0>}, {name,master_activity_events_keeper}, {mfargs,{master_activity_events_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.984.0>}, {name,ns_server_monitor}, {mfargs,{ns_server_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.988.0>}, {name,service_monitor_children_sup}, {mfargs, {supervisor,start_link, [{local,service_monitor_children_sup}, health_monitor_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.994-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.992.0>}, {name,service_monitor_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.995-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.995.0>}, {name,node_monitor}, {mfargs,{node_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.995-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,health_monitor_sup} started: [{pid,<0.997.0>}, {name,node_status_analyzer}, {mfargs,{node_status_analyzer,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:09.995-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.983.0>}, {name,health_monitor_sup}, {mfargs,{health_monitor_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:09.995-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.775.0>}, {name,ns_server_sup}, {mfargs,{ns_server_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:10.000-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"couchbase",admin} [cluster:info,2017-10-01T10:14:10.000-07:00,n_0@127.0.0.1:ns_cluster<0.161.0>:ns_cluster:handle_call:203]Changing address to "127.0.0.1" due to client request [cluster:info,2017-10-01T10:14:10.000-07:00,n_0@127.0.0.1:ns_cluster<0.161.0>:ns_cluster:do_change_address:436]Change of address to "127.0.0.1" is requested. [cluster:debug,2017-10-01T10:14:10.000-07:00,n_0@127.0.0.1:<0.1007.0>:ns_cluster:maybe_rename:466]Not renaming node. [ns_server:debug,2017-10-01T10:14:10.006-07:00,n_0@127.0.0.1:goxdcr_status_keeper<0.941.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:10.006-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@",admin} [ns_server:debug,2017-10-01T10:14:10.007-07:00,n_0@127.0.0.1:ns_heart<0.826.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:10.007-07:00,n_0@127.0.0.1:goxdcr_status_keeper<0.941.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:10.010-07:00,n_0@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.1011.0>:json_rpc_connection:init:74]Observed revrpc connection: label "goxdcr-cbauth", handling process <0.1011.0> [ns_server:debug,2017-10-01T10:14:10.011-07:00,n_0@127.0.0.1:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.1011.0>} started [ns_server:debug,2017-10-01T10:14:10.020-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system" stats: {error,no_samples} [ns_server:debug,2017-10-01T10:14:10.020-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "@system-processes" stats: {error,no_samples} [ns_server:debug,2017-10-01T10:14:10.022-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:10.166-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {node,'n_0@127.0.0.1',services}]..) 
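Every PROGRESS REPORT in this log is an OTP supervisor reporting a child it just started; the name, mfargs, restart_type, shutdown and child_type fields map one-to-one onto the elements of a supervisor child spec. A minimal supervisor sketch with that mapping spelled out; example_worker is a placeholder callback module, not anything from ns_server:

```erlang
%% Minimal sketch, not ns_server code: the child spec fields below correspond
%% directly to the fields printed in the PROGRESS REPORTs above.
-module(sup_sketch).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% example_worker is a hypothetical worker module used for illustration.
    ChildSpec = {example_worker,                   %% -> {name, ...}
                 {example_worker, start_link, []}, %% -> {mfargs, ...}
                 permanent,                        %% -> {restart_type, ...}
                 1000,                             %% -> {shutdown, ...} (ms)
                 worker,                           %% -> {child_type, ...}
                 [example_worker]},
    {ok, {{one_for_one, 5, 10}, [ChildSpec]}}.
```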
[ns_server:debug,2017-10-01T10:14:10.166-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{5,63674097250}}]}] [ns_server:debug,2017-10-01T10:14:10.166-07:00,n_0@127.0.0.1:ns_audit<0.893.0>:ns_audit:handle_call:104]Audit setup_node_services: [{services,[cbas,kv]}, {node,'n_0@127.0.0.1'}, {real_userid, {[{source,ns_server},{user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34075}]}}, {timestamp,<<"2017-10-01T10:14:10.165-07:00">>}] [ns_server:debug,2017-10-01T10:14:10.166-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@127.0.0.1',services} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097250}}]}, cbas,kv] [ns_server:debug,2017-10-01T10:14:10.170-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{6,63674097250}}]}] [ns_server:debug,2017-10-01T10:14:10.170-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([settings, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:10.170-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: settings -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097250}}]}, {stats,[{send_stats,true}]}] [ns_server:debug,2017-10-01T10:14:10.176-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([rest, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
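The config change entries above print each value with a '_vclock' element consed onto the front of the stored value; for scalar values later in this log (uuid, the metakv keys) that leaves an improper list such as [{'_vclock', ...} | <<"1">>]. A hypothetical helper, not ns_config's API, that peels the vclock metadata off such a term:

```erlang
%% Hypothetical helper, not ns_config's API: drop the leading '_vclock'
%% metadata from a config value as printed in the entries above. The
%% [Head | Tail] pattern also matches the improper-list form used for
%% scalar values, e.g. [{'_vclock', ...} | <<"1">>].
-module(vclock_sketch).
-export([strip_vclock/1]).

strip_vclock([{'_vclock', _Clock} | Value]) -> Value;
strip_vclock(Value)                         -> Value.
```

Applied to the services value above it returns [cbas,kv]; applied to a vclock-tagged binary it returns the bare binary.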
[ns_server:debug,2017-10-01T10:14:10.176-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{7,63674097250}}]}] [ns_server:debug,2017-10-01T10:14:10.176-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097250}}]}, {port,9000}] [ns_server:debug,2017-10-01T10:14:10.777-07:00,n_0@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.1056.0>:json_rpc_connection:init:74]Observed revrpc connection: label "saslauthd-saslauthd-port", handling process <0.1056.0> [ns_server:info,2017-10-01T10:14:10.792-07:00,n_0@127.0.0.1:<0.807.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:10.793-07:00,n_0@127.0.0.1:<0.807.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:10.793-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.782.0> [ns_server:debug,2017-10-01T10:14:10.793-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:10.793-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.782.0> [ns_server:debug,2017-10-01T10:14:10.801-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested [ns_server:debug,2017-10-01T10:14:10.802-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:debug,2017-10-01T10:14:10.811-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl] succeeded [ns_server:debug,2017-10-01T10:14:11.004-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@goxdcr-cbauth",admin} [ns_server:info,2017-10-01T10:14:11.127-07:00,n_0@127.0.0.1:<0.1048.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:11.127-07:00,n_0@127.0.0.1:<0.1048.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:11.127-07:00,n_0@127.0.0.1:menelaus_ui_auth<0.848.0>:menelaus_ui_auth:handle_cast:194]Revoke tokens [] for role admin [ns_server:debug,2017-10-01T10:14:11.127-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:versioned_cache:handle_info:89]Flushing cache compiled_roles_cache due to version change from {[5,0], {0,2904514097}, false,[]} to {[5, 0], {0, 2904514097}, true, []} [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{8,63674097251}}]}] [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac" [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:ns_audit<0.893.0>:ns_audit:handle_call:104]Audit password_change: [{identity,{[{source,admin},{user,<<"couchbase">>}]}}, 
{real_userid,{[{source,ns_server}, {user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34079}]}}, {timestamp,<<"2017-10-01T10:14:11.128-07:00">>}] [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rest_creds -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097251}}]}| {"couchbase", {auth, [{<<"plain">>,<<"JnjZVVak6N+4JZ3tEIycsBXLGVUtbCsOJ6fXnOEq8w4FzD9u">>}, {<<"sha1">>, {[{<<"h">>,<<"/tKER3/CIgnWiA6a+6TmpfK5o4E=">>}, {<<"s">>,<<"CHE7/KFOh5JzvON6k/gjB9Ln2DU=">>}, {<<"i">>,4000}]}}, {<<"sha256">>, {[{<<"h">>,<<"he3E7dW6xoHMMHxjviKSpqncfmwI1iWE9mV3WO0HK8Y=">>}, {<<"s">>,<<"CZ9XnKGh/N2+Ijx8P6J66qx8WkHR+RMX6dW9WEovPTU=">>}, {<<"i">>,4000}]}}, {<<"sha512">>, {[{<<"h">>, <<"T94UTJZW/3ZHkZEpwKBtEveBiid8+W/iu6cLb3nDrsbJaRRHlgBb1V95bJcue6SVHcyw5gxh/Hlhi96YQgkq2Q==">>}, {<<"s">>, <<"TTqNI9XrNDUUzsiV5Z8hbsJjZU6Yb4CN1jp7De+hzUFwSvHQo90XfBF153BS8mvQlvU7WHr91rgUDwGryfSPuw==">>}, {<<"i">>,4000}]}}]}}] [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([rest_creds,uuid, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:11.128-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{9,63674097251}}]}] [ns_server:debug,2017-10-01T10:14:11.129-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: uuid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097251}}]}| <<"14a23d2e7fafd5b0238b233beb1f53e5">>] [ns_server:debug,2017-10-01T10:14:11.129-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.786.0> [ns_server:debug,2017-10-01T10:14:11.129-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{user,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:11.130-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.786.0> [ns_server:debug,2017-10-01T10:14:11.141-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of rbac requested [error_logger:info,2017-10-01T10:14:11.141-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_children_sup} started: [{pid,<0.1087.0>}, {name,{indexer_cbas,index_stats_collector}}, {mfargs, {index_stats_collector,start_link,[indexer_cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:11.142-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_children_sup} started: [{pid,<0.1090.0>}, {name,{indexer_cbas,stats_archiver,"@cbas"}}, {mfargs,{stats_archiver,start_link,["@cbas"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:11.142-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_children_sup} started: [{pid,<0.1092.0>}, 
{name,{indexer_cbas,stats_reader,"@cbas"}}, {mfargs,{stats_reader,start_link,["@cbas"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:11.145-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [rbac] succeeded [error_logger:info,2017-10-01T10:14:11.149-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_children_sup} started: [{pid,<0.1094.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:11.152-07:00,n_0@127.0.0.1:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.1011.0>} needs_update [error_logger:info,2017-10-01T10:14:11.154-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_monitor_children_sup} started: [{pid,<0.1098.0>}, {name,{kv,dcp_traffic_monitor}}, {mfargs,{dcp_traffic_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:11.157-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_monitor_children_sup} started: [{pid,<0.1100.0>}, {name,{kv,kv_monitor}}, {mfargs,{kv_monitor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:11.178-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:11.180-07:00,n_0@127.0.0.1:ns_ports_setup<0.880.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,projector,saslauthd_port,goxdcr,xdcr_proxy,cbas] [ns_server:info,2017-10-01T10:14:11.532-07:00,n_0@127.0.0.1:<0.1058.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:11.532-07:00,n_0@127.0.0.1:<0.1058.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:11.533-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.782.0> [ns_server:debug,2017-10-01T10:14:11.533-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:11.534-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.782.0> [ns_server:debug,2017-10-01T10:14:11.534-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested [ns_server:debug,2017-10-01T10:14:11.534-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:debug,2017-10-01T10:14:11.535-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl] succeeded [stats:error,2017-10-01T10:14:11.996-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: 
index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:12.022-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@goxdcr-cbauth",admin} [ns_server:debug,2017-10-01T10:14:12.112-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@",admin} [ns_server:debug,2017-10-01T10:14:12.113-07:00,n_0@127.0.0.1:json_rpc_connection-projector-cbauth<0.1127.0>:json_rpc_connection:init:74]Observed revrpc connection: label "projector-cbauth", handling process <0.1127.0> [ns_server:debug,2017-10-01T10:14:12.113-07:00,n_0@127.0.0.1:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"projector-cbauth",<0.1127.0>} started [ns_server:debug,2017-10-01T10:14:12.117-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@projector-cbauth",admin} [ns_server:debug,2017-10-01T10:14:12.147-07:00,n_0@127.0.0.1:json_rpc_connection-cbas-cbauth<0.1132.0>:json_rpc_connection:init:74]Observed revrpc connection: label "cbas-cbauth", handling process <0.1132.0> [ns_server:debug,2017-10-01T10:14:12.147-07:00,n_0@127.0.0.1:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"cbas-cbauth",<0.1132.0>} started [ns_server:debug,2017-10-01T10:14:12.149-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@cbas-cbauth",admin} [ns_server:debug,2017-10-01T10:14:12.153-07:00,n_0@127.0.0.1:json_rpc_connection-cbas-service_api<0.1138.0>:json_rpc_connection:init:74]Observed revrpc connection: label "cbas-service_api", handling process <0.1138.0> [ns_server:debug,2017-10-01T10:14:12.154-07:00,n_0@127.0.0.1:service_agent-cbas<0.1094.0>:service_agent:do_handle_connection:328]Observed new json rpc connection for cbas: <0.1138.0> [ns_server:debug,2017-10-01T10:14:12.154-07:00,n_0@127.0.0.1:<0.1097.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {json_rpc_events,<0.1095.0>} exited with reason normal [ns_server:info,2017-10-01T10:14:12.353-07:00,n_0@127.0.0.1:<0.1102.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:12.354-07:00,n_0@127.0.0.1:<0.1102.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:12.354-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.782.0> 
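The repeated "Exception in stats collector" reports are a badmatch on {error, {econnrefused, ...}}: the cbas stats collector pattern-matches a successful lhttpc response while the analytics HTTP endpoint it polls is not listening yet, so each poll crashes until the service comes up. A sketch of that failure shape plus a tolerant variant; the URL and port are assumptions, the log does not show the real endpoint:

```erlang
%% Hedged sketch of the failing shape seen above; the URL is hypothetical.
-module(stats_poll_sketch).
-export([grab_stats/0, grab_stats_safely/0]).

grab_stats() ->
    %% Crashes with {badmatch, {error, {econnrefused, _}}} while nothing is
    %% listening, which is the crash the collector keeps logging above.
    {ok, {{200, _Reason}, _Headers, Body}} =
        lhttpc:request("http://127.0.0.1:9300/stats", "GET", [], 5000),
    Body.

grab_stats_safely() ->
    case lhttpc:request("http://127.0.0.1:9300/stats", "GET", [], 5000) of
        {ok, {{200, _}, _Headers, Body}} -> {ok, Body};
        {ok, {{Status, _}, _Headers, _}} -> {error, {http_status, Status}};
        {error, Reason}                  -> {error, Reason}
    end.
```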
[ns_server:debug,2017-10-01T10:14:12.355-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:12.355-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.782.0> [ns_server:debug,2017-10-01T10:14:12.355-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested [ns_server:debug,2017-10-01T10:14:12.357-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl] succeeded [stats:error,2017-10-01T10:14:12.994-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [stats:error,2017-10-01T10:14:13.991-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:14.945-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. 
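The "Starting select with {users_storage, [{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100}" entries above run a match specification over the replicated users storage in batches of 100, returning whole non-deleted auth documents. A generic dets illustration of that select shape; the file name and the exact record layout are assumptions for the sketch, not replicated_dets internals:

```erlang
%% Generic illustration of the select shape logged above; this is not the
%% replicated_dets module, and the record layout is assumed for the sketch.
-module(select_sketch).
-export([auth_docs/1]).

auth_docs(DetsFile) ->
    {ok, Tab} = dets:open_file(users_sketch, [{file, DetsFile}]),
    %% Whole records ('$_') whose key is {auth, _} and whose fourth field
    %% (a deleted flag in this sketch) is false, streamed 100 at a time.
    MatchSpec = [{{doc, {auth, '_'}, '_', false, '_'}, [], ['$_']}],
    Docs = collect(dets:select(Tab, MatchSpec, 100), []),
    ok = dets:close(Tab),
    Docs.

collect('$end_of_table', Acc) ->
    Acc;
collect({Batch, Continuation}, Acc) ->
    collect(dets:select(Continuation), Acc ++ Batch).
```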
[ns_server:debug,2017-10-01T10:14:14.991-07:00,n_0@127.0.0.1:cleanup_process<0.1199.0>:service_janitor:maybe_init_topology_aware_service:77]Doing initial topology change for service `cbas' [stats:error,2017-10-01T10:14:14.992-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:14.995-07:00,n_0@127.0.0.1:service_rebalancer-cbas<0.1200.0>:service_agent:wait_for_agents:77]Waiting for the service agents for service cbas to come up on nodes: ['n_0@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:14.995-07:00,n_0@127.0.0.1:service_rebalancer-cbas<0.1200.0>:service_agent:wait_for_agents_loop:95]All service agents are ready for cbas [ns_server:debug,2017-10-01T10:14:14.996-07:00,n_0@127.0.0.1:service_rebalancer-cbas-worker<0.1214.0>:service_rebalancer:rebalance:98]Rebalancing service cbas. KeepNodes: ['n_0@127.0.0.1'] EjectNodes: [] DeltaNodes: [] [ns_server:debug,2017-10-01T10:14:14.998-07:00,n_0@127.0.0.1:service_rebalancer-cbas-worker<0.1214.0>:service_rebalancer:rebalance:102]Got node infos: [{'n_0@127.0.0.1',[{node_id,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {priority,0}, {opaque,{[{<<"cc-http-port">>,<<"9301">>}, {<<"host">>,<<"127.0.0.1">>}, {<<"num-iodevices">>,<<"1">>}]}}]}] [ns_server:debug,2017-10-01T10:14:14.998-07:00,n_0@127.0.0.1:service_rebalancer-cbas-worker<0.1214.0>:service_rebalancer:rebalance:105]Rebalance id is <<"2ab8edf54b8304853305273dfd12a160">> [ns_server:debug,2017-10-01T10:14:14.999-07:00,n_0@127.0.0.1:service_rebalancer-cbas-worker<0.1214.0>:service_rebalancer:rebalance:114]Using node 'n_0@127.0.0.1' as a leader [ns_server:debug,2017-10-01T10:14:15.004-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{10,63674097255}}]}] [ns_server:debug,2017-10-01T10:14:15.005-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/nextPartitionId">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097255}}]}| <<"1">>] [ns_server:debug,2017-10-01T10:14:15.005-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/cbas/nextPartitionId">>}]..) 
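Before rebalancing, service_rebalancer above waits for the cbas service agent to come up on every KeepNode and only proceeds once "All service agents are ready". A much-simplified polling loop with the same intent; the registered-name check and retry interval are assumptions, not service_agent's real handshake:

```erlang
%% Much-simplified sketch, not service_agent's real protocol: poll until a
%% process is registered under Name on every node, then report ok.
-module(wait_sketch).
-export([wait_for_agents/3]).

wait_for_agents(_Name, _Nodes, 0) ->
    {error, timeout};
wait_for_agents(Name, Nodes, Retries) ->
    Missing = [Node || Node <- Nodes,
                       not is_pid(rpc:call(Node, erlang, whereis, [Name]))],
    case Missing of
        [] -> ok;                     %% "All service agents are ready"
        _  -> timer:sleep(1000),
              wait_for_agents(Name, Missing, Retries - 1)
    end.
```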
[ns_server:debug,2017-10-01T10:14:15.007-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{11,63674097255}}]}] [ns_server:debug,2017-10-01T10:14:15.007-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097255}}]}| <<"{\"Command\":1,\"Extra\":\"2ab8edf54b8304853305273dfd12a160\"}">>] [ns_server:debug,2017-10-01T10:14:15.008-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv, <<"/cbas/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:15.009-07:00,n_0@127.0.0.1:goxdcr_status_keeper<0.941.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:15.010-07:00,n_0@127.0.0.1:goxdcr_status_keeper<0.941.0>:goxdcr_rest:get_from_goxdcr:164]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2017-10-01T10:14:15.010-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{12,63674097255}}]}] [ns_server:debug,2017-10-01T10:14:15.010-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/topology">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097255}}]}| <<"{\"nodes\":[{\"nodeId\":\"a7cadc9d6a7b1c5e2ac6210075d857d5\",\"priority\":0,\"opaque\":{\"cc-http-port\":\"9301\",\"host\":\"127.0.0.1\",\"master-node\":\"true\",\"num-iodevices\":\"1\",\"starting-partition-id\":\"0\"}}],\"id\":\"2ab8edf54b8304853305273dfd12a160\"}">>] [ns_server:debug,2017-10-01T10:14:15.010-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/cbas/topology">>}]..) 
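The rebalance records its node layout as a JSON document under {metakv,<<"/cbas/topology">>}, shown above with a nodeId, priority and opaque settings object per node. A sketch of reading the node ids back out of such a document; it assumes the mochijson2 decoder from the bundled couchdb/mochiweb sources referenced elsewhere in this log is on the code path:

```erlang
%% Hedged sketch: decode a topology document of the shape written to
%% {metakv,<<"/cbas/topology">>} above and list its node ids. Assumes
%% mochijson2 (from the bundled couchdb/mochiweb tree) is available.
-module(topology_sketch).
-export([node_ids/1]).

node_ids(TopologyJson) ->
    {struct, Props} = mochijson2:decode(TopologyJson),
    {<<"nodes">>, Nodes} = lists:keyfind(<<"nodes">>, 1, Props),
    [begin
         {<<"nodeId">>, Id} = lists:keyfind(<<"nodeId">>, 1, NodeProps),
         Id
     end || {struct, NodeProps} <- Nodes].
```

For the document above this yields [<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>], the same id used in the node infos and the other /cbas metakv keys.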
[ns_server:debug,2017-10-01T10:14:15.015-07:00,n_0@127.0.0.1:service_rebalancer-cbas<0.1200.0>:service_rebalancer:run_rebalance:69]Worker terminated: {'EXIT',<0.1214.0>,normal} [ns_server:debug,2017-10-01T10:14:15.016-07:00,n_0@127.0.0.1:<0.1265.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1158.0>} exited with reason normal [ns_server:debug,2017-10-01T10:14:15.018-07:00,n_0@127.0.0.1:service_agent-cbas<0.1094.0>:service_agent:cleanup_service:506]Cleaning up stale tasks: [[{<<"rev">>,<<"NA==">>}, {<<"id">>,<<"prepare/2ab8edf54b8304853305273dfd12a160">>}, {<<"type">>,<<"task-prepared">>}, {<<"status">>,<<"task-running">>}, {<<"isCancelable">>,true}, {<<"progress">>,0}, {<<"extra">>, {[{<<"rebalanceId">>,<<"2ab8edf54b8304853305273dfd12a160">>}]}}]] [ns_server:debug,2017-10-01T10:14:15.020-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{13,63674097255}}]}] [ns_server:debug,2017-10-01T10:14:15.020-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/config/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097255}}]}| <<"{\"address\":\"127.0.0.1\",\"analyticsCcHttpPort\":\"9301\",\"analyticsHttpListenPort\":\"9300\",\"authPort\":\"9310\",\"clusterAddress\":\"127.0.0.1\",\"defaultDir\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/@analytics\",\"initialRun\":false,\"iodevices\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/@analytics/iodevice\",\"logDir\":\"/ho"...>>] [ns_server:debug,2017-10-01T10:14:15.020-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv, <<"/cbas/config/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:15.023-07:00,n_0@127.0.0.1:cleanup_process<0.1199.0>:service_janitor:maybe_init_topology_aware_service:80]Initial rebalance for `cbas` finished successfully [ns_server:debug,2017-10-01T10:14:15.023-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{14,63674097255}}]}] [ns_server:debug,2017-10-01T10:14:15.024-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,cbas} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097255}}]}, 'n_0@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:15.024-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {service_map,cbas}]..) 
[stats:error,2017-10-01T10:14:15.991-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [stats:error,2017-10-01T10:14:16.993-07:00,n_0@127.0.0.1:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:18.519-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@cbas",admin} [ns_server:debug,2017-10-01T10:14:18.541-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"couchbase",admin} [ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:versioned_cache:handle_info:89]Flushing cache compiled_roles_cache due to version change from {[5,0], {0,2904514097}, true,[]} to {[5, 0], {0, 2904514097}, true, [{"beer-sample", <<"3a3b0a10eb5856c673f6293be848aab5">>}]} [ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{15,63674097258}}]}] [ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac" 
[ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:memcached_cfg:write_cfg:118]Writing config file for: "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw" [ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097258}}], {configs,[[{map,[]}, {fastForwardMap,[]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,[]}, {sasl_password,"*****"}]]}] [ns_server:debug,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:ns_audit<0.893.0>:ns_audit:handle_call:104]Audit create_bucket: [{props,{[{storage_mode,couchstore}, {conflict_resolution_type,seqno}, {eviction_policy,value_only}, {num_threads,3}, {flush_enabled,false}, {ram_quota,104857600}, {replica_index,true}]}}, {type,membase}, {bucket_name,<<"beer-sample">>}, {real_userid,{[{source,ns_server}, {user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34121}]}}, {timestamp,<<"2017-10-01T10:14:18.665-07:00">>}] [menelaus:info,2017-10-01T10:14:18.665-07:00,n_0@127.0.0.1:<0.866.0>:menelaus_web_buckets:do_bucket_create:636]Created bucket "beer-sample" of type: couchbase [{replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}] [ns_server:debug,2017-10-01T10:14:18.667-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
[error_logger:info,2017-10-01T10:14:18.670-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_children_sup} started: [{pid,<0.1360.0>}, {name,{indexer_cbas,stats_archiver,"beer-sample"}}, {mfargs, {stats_archiver,start_link,["@cbas-beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.670-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_children_sup} started: [{pid,<0.1369.0>}, {name,{indexer_cbas,stats_reader,"beer-sample"}}, {mfargs, {stats_reader,start_link,["@cbas-beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:18.671-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.786.0> [ns_server:debug,2017-10-01T10:14:18.672-07:00,n_0@127.0.0.1:memcached_permissions<0.786.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{user,'_'},'_',false,'_'}, [], ['$_']}], 100} [ns_server:debug,2017-10-01T10:14:18.672-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.786.0> [ns_server:debug,2017-10-01T10:14:18.673-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of rbac requested [ns_server:debug,2017-10-01T10:14:18.675-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [rbac] succeeded [ns_server:debug,2017-10-01T10:14:18.689-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:cleanup_with_membase_bucket_check_servers:51]janitor decided to update servers list [ns_server:debug,2017-10-01T10:14:18.690-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{16,63674097258}}]}] [ns_server:debug,2017-10-01T10:14:18.690-07:00,n_0@127.0.0.1:ns_bucket_worker<0.916.0>:ns_bucket_sup:update_children:108]Starting new child: {{single_bucket_kv_sup,"beer-sample"}, {single_bucket_kv_sup,start_link,["beer-sample"]}, permanent,infinity,supervisor, [single_bucket_kv_sup]} [ns_server:debug,2017-10-01T10:14:18.690-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
[ns_server:debug,2017-10-01T10:14:18.690-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{3,63674097258}}], {configs,[{"beer-sample", [{map,[]}, {fastForwardMap,[]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@127.0.0.1']}, {sasl_password,"*****"}]}]}] [ns_server:debug,2017-10-01T10:14:18.691-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"couchbase",admin} [ns_server:debug,2017-10-01T10:14:18.699-07:00,n_0@127.0.0.1:<0.1382.0>:janitor_agent:query_vbucket_states_loop:100]Exception from query_vbucket_states of "beer-sample":'n_0@127.0.0.1' {'EXIT',{noproc,{gen_server,call, [{'janitor_agent-beer-sample','n_0@127.0.0.1'}, query_vbucket_states,infinity]}}} [ns_server:debug,2017-10-01T10:14:18.699-07:00,n_0@127.0.0.1:<0.1382.0>:janitor_agent:query_vbucket_states_loop_next_step:111]Waiting for "beer-sample" on 'n_0@127.0.0.1' [ns_server:debug,2017-10-01T10:14:18.703-07:00,n_0@127.0.0.1:single_bucket_kv_sup-beer-sample<0.1383.0>:single_bucket_kv_sup:sync_config_to_couchdb_node:78]Syncing config to couchdb node [ns_server:debug,2017-10-01T10:14:18.709-07:00,n_0@127.0.0.1:single_bucket_kv_sup-beer-sample<0.1383.0>:single_bucket_kv_sup:sync_config_to_couchdb_node:83]Synced config to couchdb node successfully [stats:error,2017-10-01T10:14:18.710-07:00,n_0@127.0.0.1:<0.866.0>:stats_reader:log_bad_responses:233]Some nodes didn't respond: ['n_0@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:18.716-07:00,n_0@127.0.0.1:capi_doc_replicator-beer-sample<0.1393.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:18.717-07:00,n_0@127.0.0.1:capi_ddoc_replication_srv-beer-sample<0.1394.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [error_logger:info,2017-10-01T10:14:18.716-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<0.1393.0>}, {name,doc_replicator}, {mfargs, {capi_ddoc_manager,start_replicator, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.717-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<0.1394.0>}, {name,doc_replication_srv}, {mfargs, {doc_replication_srv,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:18.726-07:00,n_0@127.0.0.1:capi_doc_replicator-beer-sample<0.1393.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.293.0> [ns_server:debug,2017-10-01T10:14:18.726-07:00,n_0@127.0.0.1:capi_ddoc_replication_srv-beer-sample<0.1394.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.293.0> [error_logger:info,2017-10-01T10:14:18.726-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,'capi_ddoc_manager_sup-beer-sample'} started: [{pid,<11720.292.0>}, {name,capi_ddoc_manager_events}, {mfargs, {capi_ddoc_manager,start_link_event_manager, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.726-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'capi_ddoc_manager_sup-beer-sample'} started: [{pid,<11720.293.0>}, {name,capi_ddoc_manager}, {mfargs, {capi_ddoc_manager,start_link, ["beer-sample",<0.1393.0>,<0.1394.0>]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.726-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.291.0>}, {name,capi_ddoc_manager_sup}, {mfargs, {capi_ddoc_manager_sup,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:18.730-07:00,n_0@127.0.0.1:capi_doc_replicator-beer-sample<0.1393.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [error_logger:info,2017-10-01T10:14:18.746-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.302.0>}, {name,capi_set_view_manager}, {mfargs, {capi_set_view_manager,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.754-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.307.0>}, {name,couch_stats_reader}, {mfargs, {couch_stats_reader,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.754-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1392.0>}, {name,{docs_kv_sup,"beer-sample"}}, {mfargs,{docs_kv_sup,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:18.762-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:init:158]Starting ns_memcached [ns_server:debug,2017-10-01T10:14:18.763-07:00,n_0@127.0.0.1:<0.1398.0>:ns_memcached:run_connect_phase:181]Started 'connecting' phase of ns_memcached-beer-sample. 
Parent is <0.1397.0> [error_logger:info,2017-10-01T10:14:18.763-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1396.0>,ns_memcached_sup} started: [{pid,<0.1397.0>}, {name,{ns_memcached,"beer-sample"}}, {mfargs,{ns_memcached,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,86400000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.767-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1396.0>,ns_memcached_sup} started: [{pid,<0.1399.0>}, {name,{terse_bucket_info_uploader,"beer-sample"}}, {mfargs, {terse_bucket_info_uploader,start_link, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.767-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1396.0>}, {name,{ns_memcached_sup,"beer-sample"}}, {mfargs,{ns_memcached_sup,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:18.775-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1401.0>}, {name,{dcp_sup,"beer-sample"}}, {mfargs,{dcp_sup,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:info,2017-10-01T10:14:18.780-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:ensure_bucket:1205]Created bucket "beer-sample" with config string "ht_locks=47;max_size=104857600;dbname=/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/beer-sample;backend=couchdb;couch_bucket=beer-sample;max_vbuckets=1024;alog_path=/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/beer-sample/access.log;data_traffic_enabled=false;max_num_workers=3;uuid=3a3b0a10eb5856c673f6293be848aab5;conflict_resolution_type=seqno;bucket_type=persistent;item_eviction_policy=value_only;failpartialwarmup=false;" [ns_server:info,2017-10-01T10:14:18.780-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:handle_cast:636]Main ns_memcached connection established: {ok,#Port<0.13783>} [error_logger:info,2017-10-01T10:14:18.784-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1406.0>}, {name,{dcp_replication_manager,"beer-sample"}}, {mfargs, {dcp_replication_manager,start_link, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.791-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1407.0>}, {name,{replication_manager,"beer-sample"}}, {mfargs, {replication_manager,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:14:18.798-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1408.0>}, {name,{dcp_notifier,"beer-sample"}}, {mfargs,{dcp_notifier,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.801-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'janitor_agent_sup-beer-sample'} started: [{pid,<0.1410.0>}, {name,rebalance_subprocesses_registry}, {mfargs, {ns_process_registry,start_link, ['rebalance_subprocesses_registry-beer-sample', [{terminate_command,kill}]]}}, {restart_type,permanent}, {shutdown,86400000}, {child_type,worker}] [ns_server:info,2017-10-01T10:14:18.802-07:00,n_0@127.0.0.1:janitor_agent-beer-sample<0.1411.0>:janitor_agent:read_flush_counter:926]Loading flushseq failed: {error,enoent}. Assuming it's equal to global config. [ns_server:info,2017-10-01T10:14:18.802-07:00,n_0@127.0.0.1:janitor_agent-beer-sample<0.1411.0>:janitor_agent:read_flush_counter_from_config:933]Initialized flushseq 0 from bucket config [error_logger:info,2017-10-01T10:14:18.803-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'janitor_agent_sup-beer-sample'} started: [{pid,<0.1411.0>}, {name,janitor_agent}, {mfargs,{janitor_agent,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.803-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1409.0>}, {name,{janitor_agent_sup,"beer-sample"}}, {mfargs,{janitor_agent_sup,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,10000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.810-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1412.0>}, {name,{stats_collector,"beer-sample"}}, {mfargs,{stats_collector,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.810-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1415.0>}, {name,{stats_archiver,"beer-sample"}}, {mfargs,{stats_archiver,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.811-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1417.0>}, {name,{stats_reader,"beer-sample"}}, {mfargs,{stats_reader,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2017-10-01T10:14:18.814-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1418.0>}, {name,{goxdcr_stats_collector,"beer-sample"}}, {mfargs, {goxdcr_stats_collector,start_link, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.815-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1420.0>}, {name,{goxdcr_stats_archiver,"beer-sample"}}, {mfargs, {stats_archiver,start_link,["@xdcr-beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.816-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1422.0>}, {name,{goxdcr_stats_reader,"beer-sample"}}, {mfargs, {stats_reader,start_link,["@xdcr-beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.816-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'single_bucket_kv_sup-beer-sample'} started: [{pid,<0.1423.0>}, {name,{failover_safeness_level,"beer-sample"}}, {mfargs, {failover_safeness_level,start_link, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:18.816-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_sup} started: [{pid,<0.1383.0>}, {name,{single_bucket_kv_sup,"beer-sample"}}, {mfargs, {single_bucket_kv_sup,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:18.965-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "beer-sample" stats: {error,no_samples} [ns_server:debug,2017-10-01T10:14:18.968-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@goxdcr-cbauth",admin} [ns_server:debug,2017-10-01T10:14:18.969-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@",admin} [ns_server:warn,2017-10-01T10:14:19.165-07:00,n_0@127.0.0.1:kv_monitor<0.1100.0>:kv_monitor:get_buckets:180]The following buckets are not ready: ["beer-sample"] [ns_server:info,2017-10-01T10:14:19.504-07:00,n_0@127.0.0.1:<0.1370.0>:goport:handle_port_os_exit:458]Port exited with status 0 [ns_server:debug,2017-10-01T10:14:19.505-07:00,n_0@127.0.0.1:<0.1370.0>:goport:handle_port_erlang_exit:474]Port terminated [ns_server:debug,2017-10-01T10:14:19.506-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.782.0> [ns_server:debug,2017-10-01T10:14:19.506-07:00,n_0@127.0.0.1:memcached_passwords<0.782.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{{doc,{auth,'_'},'_',false,'_'}, [], ['$_']}], 100} 
[ns_server:debug,2017-10-01T10:14:19.506-07:00,n_0@127.0.0.1:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.782.0> [ns_server:debug,2017-10-01T10:14:19.506-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of isasl requested [user:info,2017-10-01T10:14:19.507-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:handle_cast:665]Bucket "beer-sample" loaded on node 'n_0@127.0.0.1' in 0 seconds. [ns_server:debug,2017-10-01T10:14:19.508-07:00,n_0@127.0.0.1:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [isasl] succeeded [ns_server:debug,2017-10-01T10:14:19.700-07:00,n_0@127.0.0.1:janitor_agent-beer-sample<0.1411.0>:dcp_sup:nuke:91]Nuking DCP replicators for bucket "beer-sample": [] [ns_server:debug,2017-10-01T10:14:19.708-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "beer-sample" stats: {error,no_samples} [ns_server:info,2017-10-01T10:14:19.709-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:cleanup_with_membase_bucket_check_map:74]janitor decided to generate initial vbucket map [ns_server:debug,2017-10-01T10:14:19.725-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:mb_map:generate_map_old:378]Natural map score: {1024,0} [ns_server:debug,2017-10-01T10:14:19.734-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:mb_map:generate_map_old:385]Rnd maps scores: {1024,0}, {1024,0} [ns_server:debug,2017-10-01T10:14:19.734-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:mb_map:generate_map_old:392]Considering 1 maps: [{1024,0}] [ns_server:debug,2017-10-01T10:14:19.734-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:mb_map:generate_map_old:397]Best map score: {1024,0} (true,true,true) [ns_server:debug,2017-10-01T10:14:19.735-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([vbucket_map_history, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:19.735-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{17,63674097259}}]}] [ns_server:debug,2017-10-01T10:14:19.738-07:00,n_0@127.0.0.1:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
[ns_server:debug,2017-10-01T10:14:19.738-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: vbucket_map_history -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097259}}]}, {[['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1'|...], [...]|...], [{replication_topology,star},{tags,undefined},{max_slaves,10}]}] [ns_server:debug,2017-10-01T10:14:19.739-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{18,63674097259}}]}] [ns_server:info,2017-10-01T10:14:19.740-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 0 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.740-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:debug,2017-10-01T10:14:19.740-07:00,n_0@127.0.0.1:capi_doc_replicator-beer-sample<0.1393.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:info,2017-10-01T10:14:19.740-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 2 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.741-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 3 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:debug,2017-10-01T10:14:19.741-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{4,63674097259}}], {configs,[{"beer-sample", [{map,[{0,[],['n_0@127.0.0.1',undefined]}, {1,[],['n_0@127.0.0.1',undefined]}, {2,[],['n_0@127.0.0.1',undefined]}, {3,[],['n_0@127.0.0.1',undefined]}, {4,[],['n_0@127.0.0.1',undefined]}, {5,[],['n_0@127.0.0.1',undefined]}, {6,[],['n_0@127.0.0.1',undefined]}, {7,[],['n_0@127.0.0.1',undefined]}, {8,[],['n_0@127.0.0.1',undefined]}, {9,[],['n_0@127.0.0.1',undefined]}, {10,[],['n_0@127.0.0.1',undefined]}, {11,[],['n_0@127.0.0.1',undefined]}, {12,[],['n_0@127.0.0.1',undefined]}, {13,[],['n_0@127.0.0.1',undefined]}, {14,[],['n_0@127.0.0.1',undefined]}, {15,[],['n_0@127.0.0.1',undefined]}, {16,[],['n_0@127.0.0.1',undefined]}, {17,[],['n_0@127.0.0.1',undefined]}, {18,[],['n_0@127.0.0.1',undefined]}, {19,[],['n_0@127.0.0.1',undefined]}, {20,[],['n_0@127.0.0.1',undefined]}, {21,[],['n_0@127.0.0.1',undefined]}, {22,[],['n_0@127.0.0.1',undefined]}, {23,[],['n_0@127.0.0.1',undefined]}, {24,[],['n_0@127.0.0.1',undefined]}, {25,[],['n_0@127.0.0.1',undefined]}, {26,[],['n_0@127.0.0.1',undefined]}, {27,[],['n_0@127.0.0.1',undefined]}, {28,[],['n_0@127.0.0.1',undefined]}, {29,[],['n_0@127.0.0.1',undefined]}, {30,[],['n_0@127.0.0.1',undefined]}, {31,[],['n_0@127.0.0.1',undefined]}, {32,[],['n_0@127.0.0.1',undefined]}, {33,[],['n_0@127.0.0.1',undefined]}, {34,[],['n_0@127.0.0.1',undefined]}, {35,[],['n_0@127.0.0.1',undefined]}, {36,[],['n_0@127.0.0.1',undefined]}, {37,[],['n_0@127.0.0.1',undefined]}, {38,[],['n_0@127.0.0.1',undefined]}, {39,[],['n_0@127.0.0.1',undefined]}, {40,[],['n_0@127.0.0.1',undefined]}, {41,[],['n_0@127.0.0.1',undefined]}, {42,[],['n_0@127.0.0.1',undefined]}, {43,[],['n_0@127.0.0.1',undefined]}, {44,[],['n_0@127.0.0.1',undefined]}, {45,[],['n_0@127.0.0.1',undefined]}, {46,[],['n_0@127.0.0.1',undefined]}, {47,[],['n_0@127.0.0.1',undefined]}, {48,[],['n_0@127.0.0.1',undefined]}, {49,[],['n_0@127.0.0.1',undefined]}, {50,[],['n_0@127.0.0.1',undefined]}, {51,[],['n_0@127.0.0.1',undefined]}, {52,[],['n_0@127.0.0.1',undefined]}, {53,[],['n_0@127.0.0.1',undefined]}, {54,[],['n_0@127.0.0.1',undefined]}, {55,[],['n_0@127.0.0.1',undefined]}, {56,[],['n_0@127.0.0.1',undefined]}, {57,[],['n_0@127.0.0.1',undefined]}, {58,[],['n_0@127.0.0.1',undefined]}, {59,[],['n_0@127.0.0.1',undefined]}, {60,[],['n_0@127.0.0.1',undefined]}, {61,[],['n_0@127.0.0.1',undefined]}, {62,[],['n_0@127.0.0.1',undefined]}, {63,[],['n_0@127.0.0.1',undefined]}, {64,[],['n_0@127.0.0.1',undefined]}, {65,[],['n_0@127.0.0.1',undefined]}, {66,[],['n_0@127.0.0.1',undefined]}, {67,[],['n_0@127.0.0.1',undefined]}, {68,[],['n_0@127.0.0.1',undefined]}, {69,[],['n_0@127.0.0.1',undefined]}, 
{70,[],['n_0@127.0.0.1',undefined]}, {71,[],['n_0@127.0.0.1',undefined]}, {72,[],['n_0@127.0.0.1',undefined]}, {73,[],['n_0@127.0.0.1',undefined]}, {74,[],['n_0@127.0.0.1',undefined]}, {75,[],['n_0@127.0.0.1',undefined]}, {76,[],['n_0@127.0.0.1',undefined]}, {77,[],['n_0@127.0.0.1',undefined]}, {78,[],['n_0@127.0.0.1',undefined]}, {79,[],['n_0@127.0.0.1',undefined]}, {80,[],['n_0@127.0.0.1',undefined]}, {81,[],['n_0@127.0.0.1',undefined]}, {82,[],['n_0@127.0.0.1',undefined]}, {83,[],['n_0@127.0.0.1',undefined]}, {84,[],['n_0@127.0.0.1'|...]}, {85,[],[...]}, {86,[],...}, {87,...}, {...}|...]}, {fastForwardMap,[]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@127.0.0.1']}, {sasl_password,"*****"}]}]}] [ns_server:debug,2017-10-01T10:14:19.742-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{19,63674097259}}]}] [ns_server:debug,2017-10-01T10:14:19.742-07:00,n_0@127.0.0.1:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{5,63674097259}}], {configs,[{"beer-sample", [{map,[]}, {fastForwardMap,[]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@127.0.0.1']}, {sasl_password,"*****"}, {map_opts_hash,133465355}]}]}] [ns_server:info,2017-10-01T10:14:19.741-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 4 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.742-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 5 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 6 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 7 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 8 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 9 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 10 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 11 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 12 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 13 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.743-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 14 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 15 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 16 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 17 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 18 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 19 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 20 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.744-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 21 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.745-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 22 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.745-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 23 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.745-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 24 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.745-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 25 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 26 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 27 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 28 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 29 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 30 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 31 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 32 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.746-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 33 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 34 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 35 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 36 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 37 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 38 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 39 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 40 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 41 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 42 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 43 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 44 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 45 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 46 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 47 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.747-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 48 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 49 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 50 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 51 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 52 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 53 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.748-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 54 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 55 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 56 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 57 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 58 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 59 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 60 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 61 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 62 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 63 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 64 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 65 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.749-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 66 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 67 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 68 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 69 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 70 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 71 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 72 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 73 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 74 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 75 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 76 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.750-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 77 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.751-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 78 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.751-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 79 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.751-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 80 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.751-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 81 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.751-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 82 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 83 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 84 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 85 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 86 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 87 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 88 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 89 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 90 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 91 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 92 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 93 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.752-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 94 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 95 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 96 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 97 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 98 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 99 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 100 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 101 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 102 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 103 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 104 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 105 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 106 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 107 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 108 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.753-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 109 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 110 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 111 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 112 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 113 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 114 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 115 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 116 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 117 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 118 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 119 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 120 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 121 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 122 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 123 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.754-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 124 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 125 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 126 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 127 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 128 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 129 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 130 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 131 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 132 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 133 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 134 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.755-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 135 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 136 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 137 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 138 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 139 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 140 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 141 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 142 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 143 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.756-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 144 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 145 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 146 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 147 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 148 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 149 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 150 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 151 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 152 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 153 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 154 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.757-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 155 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 156 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 157 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 158 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 159 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 160 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 161 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 162 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 163 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 164 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 165 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 166 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 167 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 168 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 169 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 170 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.758-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 171 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 172 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 173 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 174 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 175 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 176 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 177 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 178 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 179 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 180 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 181 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 182 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 183 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 184 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 185 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.759-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 186 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 187 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 188 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 189 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 190 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 191 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 192 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 193 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 194 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 195 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 196 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 197 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 198 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 199 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.760-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 200 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 201 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 202 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 203 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 204 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 205 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 206 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 207 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 208 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 209 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 210 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 211 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 212 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.761-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 213 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 214 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 215 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 216 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 217 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 218 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 219 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 220 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 221 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 222 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 223 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 224 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 225 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.762-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 226 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 227 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 228 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 229 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 230 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 231 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 232 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 233 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 234 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 235 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 236 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 237 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 238 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.763-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 239 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 240 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 241 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 242 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 243 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 244 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 245 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 246 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 247 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 248 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 249 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 250 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 251 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.764-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 252 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 253 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 254 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 255 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 256 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 257 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 258 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 259 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 260 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.765-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 261 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 262 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 263 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 264 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 265 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 266 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 267 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 268 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 269 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 270 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 271 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 272 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 273 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 274 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.766-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 275 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 276 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 277 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 278 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 279 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 280 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 281 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 282 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 283 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 284 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 285 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 286 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.767-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 287 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 288 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 289 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 290 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 291 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 292 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 293 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 294 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 295 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 296 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 297 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 298 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 299 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.768-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 300 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 301 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 302 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 303 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 304 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 305 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 306 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 307 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 308 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 309 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 310 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 311 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 312 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 313 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.769-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 314 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 315 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 316 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 317 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 318 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 319 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 320 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 321 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 322 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 323 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 324 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 325 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 326 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.770-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 327 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 328 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 329 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 330 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 331 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 332 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 333 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 334 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 335 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 336 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 337 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 338 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 339 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.771-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 340 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 341 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 342 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 343 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 344 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 345 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 346 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 347 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 348 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 349 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 350 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 351 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 352 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.772-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 353 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 354 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 355 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 356 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 357 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 358 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 359 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 360 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 361 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 362 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 363 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 364 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 365 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 366 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.773-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 367 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 368 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 369 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 370 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 371 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 372 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 373 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 374 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 375 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 376 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 377 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 378 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 379 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.774-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 380 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 381 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 382 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 383 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 384 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 385 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 386 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 387 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 388 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 389 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 390 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 391 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 392 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 393 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.775-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 394 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 395 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 396 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 397 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 398 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 399 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 400 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 401 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 402 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 403 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 404 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 405 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 406 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.776-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 407 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 408 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 409 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 410 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 411 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 412 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 413 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 414 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 415 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 416 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 417 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 418 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 419 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 420 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.777-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 421 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 422 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 423 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 424 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 425 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 426 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 427 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 428 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 429 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 430 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 431 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 432 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 433 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 434 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.778-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 435 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 436 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 437 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 438 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 439 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 440 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 441 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 442 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 443 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 444 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 445 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 446 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 447 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.779-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 448 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 449 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 450 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 451 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 452 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 453 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 454 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 455 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 456 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 457 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 458 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 459 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 460 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 461 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.780-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 462 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 463 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 464 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 465 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 466 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 467 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 468 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 469 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 470 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 471 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 472 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 473 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 474 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 475 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.781-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 476 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 477 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 478 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 479 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 480 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 481 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 482 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 483 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 484 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 485 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 486 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 487 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 488 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.782-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 489 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 490 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 491 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 492 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 493 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 494 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 495 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 496 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 497 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 498 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 499 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 500 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 501 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.783-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 502 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 503 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 504 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 505 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 506 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 507 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 508 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 509 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 510 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 511 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 512 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 513 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 514 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.784-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 515 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 516 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 517 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 518 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 519 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 520 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 521 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 522 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 523 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 524 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 525 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 526 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 527 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.785-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 528 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 529 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 530 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 531 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 532 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 533 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 534 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 535 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 536 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 537 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 538 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 539 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.786-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 540 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 541 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 542 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 543 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 544 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 545 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 546 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 547 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 548 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 549 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 550 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 551 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 552 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.787-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 553 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 554 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 555 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 556 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 557 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 558 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 559 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 560 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 561 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 562 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 563 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 564 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 565 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 566 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.788-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 567 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 568 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 569 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 570 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 571 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 572 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 573 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 574 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 575 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 576 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 577 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 578 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.789-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 579 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 580 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 581 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 582 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 583 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 584 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 585 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 586 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 587 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 588 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 589 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 590 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 591 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.790-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 592 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 593 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 594 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 595 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 596 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 597 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 598 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 599 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 600 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 601 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 602 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 603 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 604 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 605 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.791-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 606 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 607 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 608 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 609 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 610 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 611 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 612 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 613 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 614 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 615 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 616 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 617 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 618 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 619 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.792-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 620 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 621 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 622 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 623 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 624 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 625 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 626 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 627 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 628 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 629 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 630 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 631 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 632 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 633 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.793-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 634 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 635 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 636 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 637 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 638 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 639 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 640 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 641 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 642 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 643 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 644 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 645 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 646 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 647 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 648 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.794-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 649 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 650 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 651 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 652 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 653 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 654 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 655 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 656 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 657 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 658 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 659 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 660 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 661 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 662 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 663 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.795-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 664 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 665 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 666 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 667 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 668 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 669 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 670 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 671 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 672 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 673 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 674 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 675 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 676 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 677 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 678 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.796-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 679 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 680 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 681 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 682 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 683 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 684 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 685 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 686 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 687 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 688 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 689 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 690 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.797-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 691 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 692 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 693 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 694 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 695 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 696 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 697 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 698 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 699 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 700 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 701 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 702 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 703 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 704 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 705 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.798-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 706 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 707 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 708 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 709 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 710 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 711 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 712 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 713 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 714 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 715 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 716 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 717 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 718 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 719 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 720 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.799-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 721 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 722 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 723 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 724 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 725 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 726 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 727 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 728 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 729 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 730 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 731 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 732 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 733 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 734 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.800-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 735 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 736 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 737 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 738 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 739 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 740 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 741 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 742 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 743 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 744 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 745 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 746 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 747 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 748 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 749 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.801-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 750 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 751 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 752 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 753 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 754 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 755 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 756 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 757 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 758 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 759 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 760 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 761 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 762 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 763 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.802-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 764 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 765 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 766 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 767 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 768 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 769 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 770 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 771 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 772 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 773 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 774 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 775 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 776 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.803-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 777 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 778 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 779 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 780 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 781 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 782 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 783 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 784 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 785 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 786 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 787 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 788 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 789 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.804-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 790 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 791 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 792 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 793 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 794 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 795 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 796 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 797 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 798 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 799 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 800 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 801 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 802 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 803 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.805-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 804 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 805 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 806 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 807 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 808 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 809 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 810 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 811 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 812 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 813 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 814 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 815 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 816 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 817 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.806-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 818 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 819 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 820 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 821 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 822 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 823 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 824 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 825 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 826 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 827 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 828 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 829 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 830 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 831 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 832 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.807-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 833 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 834 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 835 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 836 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 837 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 838 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 839 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 840 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 841 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 842 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 843 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 844 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 845 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.808-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 846 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 847 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 848 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 849 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 850 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 851 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 852 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 853 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 854 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 855 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 856 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 857 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 858 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.809-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 859 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 860 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 861 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 862 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 863 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 864 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 865 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 866 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 867 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 868 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 869 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 870 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 871 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.810-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 872 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 873 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 874 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 875 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 876 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 877 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 878 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 879 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 880 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 881 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 882 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 883 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 884 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 885 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.811-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 886 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 887 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 888 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 889 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 890 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 891 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 892 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 893 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 894 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 895 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 896 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 897 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 898 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.812-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 899 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 900 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 901 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 902 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 903 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 904 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 905 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 906 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 907 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 908 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 909 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 910 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 911 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.813-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 912 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 913 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 914 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 915 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 916 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 917 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 918 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 919 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 920 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 921 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 922 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 923 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 924 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.814-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 925 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 926 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 927 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 928 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 929 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 930 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 931 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 932 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 933 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 934 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 935 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 936 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 937 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.815-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 938 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 939 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 940 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 941 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 942 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 943 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 944 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 945 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 946 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 947 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 948 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 949 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.816-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 950 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 951 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 952 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 953 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 954 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 955 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 956 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 957 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 958 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 959 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 960 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 961 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 962 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 963 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 964 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.817-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 965 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 966 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 967 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 968 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 969 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 970 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 971 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 972 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 973 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 974 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 975 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 976 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.818-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 977 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 978 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 979 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 980 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 981 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 982 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 983 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 984 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 985 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 986 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 987 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 988 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.819-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 989 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 990 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 991 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 992 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 993 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 994 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 995 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 996 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 997 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 998 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 999 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1000 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1001 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.820-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1002 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1003 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1004 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1005 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1006 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1007 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1008 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1009 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1010 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1011 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1012 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1013 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1014 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1015 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.821-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1016 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1017 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1018 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. 
[ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1019 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1020 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1021 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1022 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.822-07:00,n_0@127.0.0.1:cleanup_process<0.1361.0>:ns_janitor:do_sanify_chain:387]Setting vbucket 1023 in "beer-sample" on 'n_0@127.0.0.1' from missing to active. [ns_server:info,2017-10-01T10:14:19.834-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1023 state to active [ns_server:info,2017-10-01T10:14:19.834-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1022 state to active [ns_server:info,2017-10-01T10:14:19.835-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1021 state to active [ns_server:info,2017-10-01T10:14:19.835-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1020 state to active [ns_server:info,2017-10-01T10:14:19.836-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1019 state to active [ns_server:info,2017-10-01T10:14:19.841-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1018 state to active [ns_server:info,2017-10-01T10:14:19.841-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1017 state to active [ns_server:info,2017-10-01T10:14:19.841-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1016 state to active [ns_server:info,2017-10-01T10:14:19.842-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1015 state to active [ns_server:info,2017-10-01T10:14:19.843-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1014 state to active [ns_server:info,2017-10-01T10:14:19.843-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1013 state to active [ns_server:info,2017-10-01T10:14:19.844-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1012 state to active [ns_server:info,2017-10-01T10:14:19.845-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1011 state to active [ns_server:info,2017-10-01T10:14:19.845-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 1010 state to active [ns_server:info,2017-10-01T10:14:19.846-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1009 state to active [ns_server:info,2017-10-01T10:14:19.846-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1008 state to active [ns_server:info,2017-10-01T10:14:19.846-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1007 state to active [ns_server:info,2017-10-01T10:14:19.847-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1006 state to active 
[ns_server:info,2017-10-01T10:14:19.848-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1005 state to active [ns_server:info,2017-10-01T10:14:19.849-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1004 state to active [ns_server:info,2017-10-01T10:14:19.850-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1003 state to active [ns_server:info,2017-10-01T10:14:19.850-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1002 state to active [ns_server:info,2017-10-01T10:14:19.851-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1001 state to active [ns_server:info,2017-10-01T10:14:19.851-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1000 state to active [ns_server:info,2017-10-01T10:14:19.852-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 999 state to active [ns_server:info,2017-10-01T10:14:19.852-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 998 state to active [ns_server:info,2017-10-01T10:14:19.853-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 997 state to active [ns_server:info,2017-10-01T10:14:19.854-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 996 state to active [ns_server:info,2017-10-01T10:14:19.854-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 995 state to active [ns_server:info,2017-10-01T10:14:19.855-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 994 state to active [ns_server:info,2017-10-01T10:14:19.855-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 993 state to active [ns_server:info,2017-10-01T10:14:19.856-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 992 state to active [ns_server:info,2017-10-01T10:14:19.856-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 991 state to active [ns_server:info,2017-10-01T10:14:19.857-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 990 state to active [ns_server:info,2017-10-01T10:14:19.857-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 989 state to active [ns_server:info,2017-10-01T10:14:19.858-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 988 state to active [ns_server:info,2017-10-01T10:14:19.859-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 987 state to active [ns_server:info,2017-10-01T10:14:19.860-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 986 state to active [ns_server:info,2017-10-01T10:14:19.860-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 985 state to active [ns_server:info,2017-10-01T10:14:19.861-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 984 state to active [ns_server:info,2017-10-01T10:14:19.861-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 983 state to active [ns_server:info,2017-10-01T10:14:19.862-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 982 state to active [ns_server:info,2017-10-01T10:14:19.862-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 981 state to active 
[ns_server:info,2017-10-01T10:14:19.863-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 980 state to active [ns_server:info,2017-10-01T10:14:19.864-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 979 state to active [ns_server:info,2017-10-01T10:14:19.865-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 978 state to active [ns_server:info,2017-10-01T10:14:19.866-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 977 state to active [ns_server:info,2017-10-01T10:14:19.866-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 976 state to active [ns_server:info,2017-10-01T10:14:19.867-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 975 state to active [ns_server:info,2017-10-01T10:14:19.867-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 974 state to active [ns_server:info,2017-10-01T10:14:19.868-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 973 state to active [ns_server:info,2017-10-01T10:14:19.869-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 972 state to active [ns_server:info,2017-10-01T10:14:19.870-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 971 state to active [ns_server:info,2017-10-01T10:14:19.870-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 970 state to active [ns_server:info,2017-10-01T10:14:19.871-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 969 state to active [ns_server:info,2017-10-01T10:14:19.871-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 968 state to active [ns_server:info,2017-10-01T10:14:19.872-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 967 state to active [ns_server:info,2017-10-01T10:14:19.872-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 966 state to active [ns_server:info,2017-10-01T10:14:19.874-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 965 state to active [ns_server:info,2017-10-01T10:14:19.875-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 964 state to active [ns_server:info,2017-10-01T10:14:19.876-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 963 state to active [ns_server:info,2017-10-01T10:14:19.877-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 962 state to active [ns_server:info,2017-10-01T10:14:19.877-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 961 state to active [ns_server:info,2017-10-01T10:14:19.878-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 960 state to active [ns_server:info,2017-10-01T10:14:19.878-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 959 state to active [ns_server:info,2017-10-01T10:14:19.879-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 958 state to active [ns_server:info,2017-10-01T10:14:19.880-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 957 state to active [ns_server:info,2017-10-01T10:14:19.880-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 956 state to active 
[ns_server:info,2017-10-01T10:14:19.880-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 955 state to active [ns_server:info,2017-10-01T10:14:19.881-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 954 state to active [ns_server:info,2017-10-01T10:14:19.882-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 953 state to active [ns_server:info,2017-10-01T10:14:19.882-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 952 state to active [ns_server:info,2017-10-01T10:14:19.883-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 951 state to active [ns_server:info,2017-10-01T10:14:19.884-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 950 state to active [ns_server:info,2017-10-01T10:14:19.884-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 949 state to active [ns_server:info,2017-10-01T10:14:19.885-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 948 state to active [ns_server:info,2017-10-01T10:14:19.886-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 947 state to active [ns_server:info,2017-10-01T10:14:19.886-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 946 state to active [ns_server:info,2017-10-01T10:14:19.887-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 945 state to active [ns_server:info,2017-10-01T10:14:19.888-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 944 state to active [ns_server:info,2017-10-01T10:14:19.889-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 943 state to active [ns_server:info,2017-10-01T10:14:19.889-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 942 state to active [ns_server:info,2017-10-01T10:14:19.890-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 941 state to active [ns_server:info,2017-10-01T10:14:19.891-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 940 state to active [ns_server:info,2017-10-01T10:14:19.891-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 939 state to active [ns_server:info,2017-10-01T10:14:19.892-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 938 state to active [ns_server:info,2017-10-01T10:14:19.892-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 937 state to active [ns_server:info,2017-10-01T10:14:19.893-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 936 state to active [ns_server:info,2017-10-01T10:14:19.893-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 935 state to active [ns_server:info,2017-10-01T10:14:19.894-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 934 state to active [ns_server:info,2017-10-01T10:14:19.894-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 933 state to active [ns_server:info,2017-10-01T10:14:19.895-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 932 state to active [ns_server:info,2017-10-01T10:14:19.896-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 931 state to active 
[ns_server:info,2017-10-01T10:14:19.896-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 930 state to active [ns_server:info,2017-10-01T10:14:19.897-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 929 state to active [ns_server:info,2017-10-01T10:14:19.898-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 928 state to active [ns_server:info,2017-10-01T10:14:19.898-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 927 state to active [ns_server:info,2017-10-01T10:14:19.899-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 926 state to active [ns_server:info,2017-10-01T10:14:19.899-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 925 state to active [ns_server:info,2017-10-01T10:14:19.900-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 924 state to active [ns_server:info,2017-10-01T10:14:19.901-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 923 state to active [ns_server:info,2017-10-01T10:14:19.901-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 922 state to active [ns_server:info,2017-10-01T10:14:19.902-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 921 state to active [ns_server:info,2017-10-01T10:14:19.903-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 920 state to active [ns_server:info,2017-10-01T10:14:19.903-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 919 state to active [ns_server:info,2017-10-01T10:14:19.904-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 918 state to active [ns_server:info,2017-10-01T10:14:19.904-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 917 state to active [ns_server:info,2017-10-01T10:14:19.905-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 916 state to active [ns_server:info,2017-10-01T10:14:19.906-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 915 state to active [ns_server:info,2017-10-01T10:14:19.906-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 914 state to active [ns_server:info,2017-10-01T10:14:19.907-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 913 state to active [ns_server:info,2017-10-01T10:14:19.907-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 912 state to active [ns_server:info,2017-10-01T10:14:19.908-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 911 state to active [ns_server:info,2017-10-01T10:14:19.909-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 910 state to active [ns_server:info,2017-10-01T10:14:19.910-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 909 state to active [ns_server:info,2017-10-01T10:14:19.910-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 908 state to active [ns_server:info,2017-10-01T10:14:19.912-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 907 state to active [ns_server:info,2017-10-01T10:14:19.912-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 906 state to active 
[ns_server:info,2017-10-01T10:14:19.913-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 905 state to active [ns_server:info,2017-10-01T10:14:19.913-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 904 state to active [ns_server:info,2017-10-01T10:14:19.914-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 903 state to active [ns_server:info,2017-10-01T10:14:19.914-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 902 state to active [ns_server:info,2017-10-01T10:14:19.915-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 901 state to active [ns_server:info,2017-10-01T10:14:19.916-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 900 state to active [ns_server:info,2017-10-01T10:14:19.917-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 899 state to active [ns_server:info,2017-10-01T10:14:19.917-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 898 state to active [ns_server:info,2017-10-01T10:14:19.918-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 897 state to active [ns_server:info,2017-10-01T10:14:19.918-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 896 state to active [ns_server:info,2017-10-01T10:14:19.919-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 895 state to active [ns_server:info,2017-10-01T10:14:19.919-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 894 state to active [ns_server:info,2017-10-01T10:14:19.920-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 893 state to active [ns_server:info,2017-10-01T10:14:19.922-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 892 state to active [ns_server:info,2017-10-01T10:14:19.922-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 891 state to active [ns_server:info,2017-10-01T10:14:19.923-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 890 state to active [ns_server:info,2017-10-01T10:14:19.924-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 889 state to active [ns_server:info,2017-10-01T10:14:19.924-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 888 state to active [ns_server:info,2017-10-01T10:14:19.925-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 887 state to active [ns_server:info,2017-10-01T10:14:19.926-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 886 state to active [ns_server:info,2017-10-01T10:14:19.927-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 885 state to active [ns_server:info,2017-10-01T10:14:19.927-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 884 state to active [ns_server:info,2017-10-01T10:14:19.927-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 883 state to active [ns_server:info,2017-10-01T10:14:19.928-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 882 state to active [ns_server:info,2017-10-01T10:14:19.929-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 881 state to active 
[ns_server:info,2017-10-01T10:14:19.930-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 880 state to active [ns_server:info,2017-10-01T10:14:19.930-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 879 state to active [ns_server:info,2017-10-01T10:14:19.931-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 878 state to active [ns_server:info,2017-10-01T10:14:19.931-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 877 state to active [ns_server:info,2017-10-01T10:14:19.937-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 876 state to active [ns_server:info,2017-10-01T10:14:19.937-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 875 state to active [ns_server:info,2017-10-01T10:14:19.938-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 874 state to active [ns_server:info,2017-10-01T10:14:19.939-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 873 state to active [ns_server:info,2017-10-01T10:14:19.939-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 872 state to active [ns_server:info,2017-10-01T10:14:19.940-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 871 state to active [ns_server:info,2017-10-01T10:14:19.941-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 870 state to active [ns_server:info,2017-10-01T10:14:19.941-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 869 state to active [ns_server:info,2017-10-01T10:14:19.942-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 868 state to active [ns_server:debug,2017-10-01T10:14:19.943-07:00,n_0@127.0.0.1:ns_heart_slow_status_updater<0.828.0>:ns_heart:grab_latest_stats:260]Ignoring failure to grab "beer-sample" stats: {error,no_samples} [ns_server:info,2017-10-01T10:14:19.944-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 867 state to active [ns_server:info,2017-10-01T10:14:19.945-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 866 state to active [ns_server:info,2017-10-01T10:14:19.946-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 865 state to active [ns_server:info,2017-10-01T10:14:19.947-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 864 state to active [ns_server:info,2017-10-01T10:14:19.948-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 863 state to active [ns_server:info,2017-10-01T10:14:19.949-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 862 state to active [ns_server:info,2017-10-01T10:14:19.949-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 861 state to active [ns_server:info,2017-10-01T10:14:19.950-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 860 state to active [ns_server:info,2017-10-01T10:14:19.950-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 859 state to active [ns_server:info,2017-10-01T10:14:19.950-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 858 state to active [ns_server:info,2017-10-01T10:14:19.951-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 857 state to active 
[ns_server:info,2017-10-01T10:14:19.951-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 856 state to active [ns_server:info,2017-10-01T10:14:19.952-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 855 state to active [ns_server:info,2017-10-01T10:14:19.952-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 854 state to active [ns_server:info,2017-10-01T10:14:19.952-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 853 state to active [ns_server:info,2017-10-01T10:14:19.953-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 852 state to active [ns_server:info,2017-10-01T10:14:19.954-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 851 state to active [ns_server:info,2017-10-01T10:14:19.954-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 850 state to active [ns_server:info,2017-10-01T10:14:19.955-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 849 state to active [ns_server:info,2017-10-01T10:14:19.955-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 848 state to active [ns_server:info,2017-10-01T10:14:19.956-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 847 state to active [ns_server:info,2017-10-01T10:14:19.956-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 846 state to active [ns_server:info,2017-10-01T10:14:19.957-07:00,n_0@127.0.0.1:<0.1405.0>:ns_memcached:do_handle_call:553]Changed vbucket 845 state to active [ns_server:info,2017-10-01T10:14:19.957-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 844 state to active [ns_server:info,2017-10-01T10:14:19.958-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 843 state to active [ns_server:info,2017-10-01T10:14:19.959-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 842 state to active [ns_server:info,2017-10-01T10:14:19.959-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 841 state to active [ns_server:info,2017-10-01T10:14:19.961-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 840 state to active [ns_server:info,2017-10-01T10:14:19.962-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 839 state to active [ns_server:info,2017-10-01T10:14:19.963-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 838 state to active [ns_server:info,2017-10-01T10:14:19.964-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 837 state to active [ns_server:info,2017-10-01T10:14:19.965-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 836 state to active [ns_server:info,2017-10-01T10:14:19.966-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 835 state to active [ns_server:info,2017-10-01T10:14:19.968-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 834 state to active [ns_server:info,2017-10-01T10:14:19.970-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 833 state to active [ns_server:info,2017-10-01T10:14:19.973-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 832 state to active 
[ns_server:info,2017-10-01T10:14:19.973-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 831 state to active [ns_server:info,2017-10-01T10:14:19.974-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 830 state to active [ns_server:info,2017-10-01T10:14:19.981-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 829 state to active [ns_server:info,2017-10-01T10:14:19.981-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 828 state to active [ns_server:info,2017-10-01T10:14:19.981-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 827 state to active [ns_server:info,2017-10-01T10:14:19.982-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 826 state to active [ns_server:info,2017-10-01T10:14:19.983-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 825 state to active [ns_server:info,2017-10-01T10:14:19.984-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 824 state to active [ns_server:info,2017-10-01T10:14:19.984-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 823 state to active [ns_server:info,2017-10-01T10:14:19.985-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 822 state to active [ns_server:info,2017-10-01T10:14:19.987-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 821 state to active [ns_server:info,2017-10-01T10:14:19.988-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 820 state to active [ns_server:info,2017-10-01T10:14:19.988-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 819 state to active [ns_server:info,2017-10-01T10:14:19.989-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 818 state to active [ns_server:info,2017-10-01T10:14:19.991-07:00,n_0@127.0.0.1:<0.975.0>:ns_orchestrator:handle_info:484]Skipping janitor in state janitor_running [ns_server:info,2017-10-01T10:14:19.992-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 817 state to active [ns_server:info,2017-10-01T10:14:19.994-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 816 state to active [ns_server:info,2017-10-01T10:14:19.995-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 815 state to active [ns_server:info,2017-10-01T10:14:19.996-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 814 state to active [ns_server:info,2017-10-01T10:14:19.998-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 813 state to active [ns_server:info,2017-10-01T10:14:19.998-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 812 state to active [ns_server:info,2017-10-01T10:14:20.000-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 811 state to active [ns_server:info,2017-10-01T10:14:20.001-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 810 state to active [ns_server:info,2017-10-01T10:14:20.001-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 809 state to active [ns_server:info,2017-10-01T10:14:20.001-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 808 state to active 
[ns_server:info,2017-10-01T10:14:20.002-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 807 state to active [ns_server:info,2017-10-01T10:14:20.003-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 806 state to active [ns_server:info,2017-10-01T10:14:20.003-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 805 state to active [ns_server:info,2017-10-01T10:14:20.003-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 804 state to active [ns_server:info,2017-10-01T10:14:20.004-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 803 state to active [ns_server:info,2017-10-01T10:14:20.005-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 802 state to active [ns_server:info,2017-10-01T10:14:20.005-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 801 state to active [ns_server:info,2017-10-01T10:14:20.005-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 800 state to active [ns_server:info,2017-10-01T10:14:20.006-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 799 state to active [ns_server:info,2017-10-01T10:14:20.006-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 798 state to active [ns_server:info,2017-10-01T10:14:20.006-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 797 state to active [ns_server:info,2017-10-01T10:14:20.007-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 796 state to active [ns_server:info,2017-10-01T10:14:20.007-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 795 state to active [ns_server:info,2017-10-01T10:14:20.007-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 794 state to active [ns_server:info,2017-10-01T10:14:20.008-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 793 state to active [ns_server:info,2017-10-01T10:14:20.008-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 792 state to active [ns_server:info,2017-10-01T10:14:20.009-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 791 state to active [ns_server:info,2017-10-01T10:14:20.013-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 790 state to active [ns_server:info,2017-10-01T10:14:20.014-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 789 state to active [ns_server:info,2017-10-01T10:14:20.014-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 788 state to active [ns_server:info,2017-10-01T10:14:20.015-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 787 state to active [ns_server:info,2017-10-01T10:14:20.016-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 786 state to active [ns_server:info,2017-10-01T10:14:20.016-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 785 state to active [ns_server:info,2017-10-01T10:14:20.018-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 784 state to active [ns_server:info,2017-10-01T10:14:20.018-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 783 state to active 
[ns_server:info,2017-10-01T10:14:20.019-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 782 state to active [ns_server:info,2017-10-01T10:14:20.019-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 781 state to active [ns_server:info,2017-10-01T10:14:20.020-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 780 state to active [ns_server:info,2017-10-01T10:14:20.020-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 779 state to active [ns_server:info,2017-10-01T10:14:20.021-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 778 state to active [ns_server:info,2017-10-01T10:14:20.022-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 777 state to active [ns_server:info,2017-10-01T10:14:20.022-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 776 state to active [ns_server:info,2017-10-01T10:14:20.023-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 775 state to active [ns_server:info,2017-10-01T10:14:20.023-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 774 state to active [ns_server:info,2017-10-01T10:14:20.025-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 773 state to active [ns_server:info,2017-10-01T10:14:20.025-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 772 state to active [ns_server:info,2017-10-01T10:14:20.025-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 771 state to active [ns_server:info,2017-10-01T10:14:20.026-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 770 state to active [ns_server:info,2017-10-01T10:14:20.027-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 769 state to active [ns_server:info,2017-10-01T10:14:20.027-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 768 state to active [ns_server:info,2017-10-01T10:14:20.028-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 767 state to active [ns_server:info,2017-10-01T10:14:20.028-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 766 state to active [ns_server:info,2017-10-01T10:14:20.029-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 765 state to active [ns_server:info,2017-10-01T10:14:20.029-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 764 state to active [ns_server:info,2017-10-01T10:14:20.030-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 763 state to active [ns_server:info,2017-10-01T10:14:20.030-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 762 state to active [ns_server:info,2017-10-01T10:14:20.037-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 761 state to active [ns_server:info,2017-10-01T10:14:20.037-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 760 state to active [ns_server:info,2017-10-01T10:14:20.037-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 759 state to active [ns_server:info,2017-10-01T10:14:20.038-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 758 state to active 
[ns_server:info,2017-10-01T10:14:20.038-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 757 state to active [ns_server:info,2017-10-01T10:14:20.039-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 756 state to active [ns_server:info,2017-10-01T10:14:20.039-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 755 state to active [ns_server:info,2017-10-01T10:14:20.039-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 754 state to active [ns_server:info,2017-10-01T10:14:20.040-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 753 state to active [ns_server:info,2017-10-01T10:14:20.040-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 752 state to active [ns_server:info,2017-10-01T10:14:20.040-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 751 state to active [ns_server:info,2017-10-01T10:14:20.041-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 750 state to active [ns_server:info,2017-10-01T10:14:20.041-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 749 state to active [ns_server:info,2017-10-01T10:14:20.041-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 748 state to active [ns_server:info,2017-10-01T10:14:20.041-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 747 state to active [ns_server:info,2017-10-01T10:14:20.042-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 746 state to active [ns_server:info,2017-10-01T10:14:20.042-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 745 state to active [ns_server:info,2017-10-01T10:14:20.042-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 744 state to active [ns_server:info,2017-10-01T10:14:20.043-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 743 state to active [ns_server:info,2017-10-01T10:14:20.043-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 742 state to active [ns_server:info,2017-10-01T10:14:20.044-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 741 state to active [ns_server:info,2017-10-01T10:14:20.044-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 740 state to active [ns_server:info,2017-10-01T10:14:20.044-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 739 state to active [ns_server:info,2017-10-01T10:14:20.044-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 738 state to active [ns_server:info,2017-10-01T10:14:20.045-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 737 state to active [ns_server:info,2017-10-01T10:14:20.045-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 736 state to active [ns_server:info,2017-10-01T10:14:20.045-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 735 state to active [ns_server:info,2017-10-01T10:14:20.046-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 734 state to active [ns_server:info,2017-10-01T10:14:20.046-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 733 state to active 
[ns_server:info,2017-10-01T10:14:20.047-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 732 state to active [ns_server:info,2017-10-01T10:14:20.047-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 731 state to active [ns_server:info,2017-10-01T10:14:20.049-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 730 state to active [ns_server:info,2017-10-01T10:14:20.050-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 729 state to active [ns_server:info,2017-10-01T10:14:20.051-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 728 state to active [ns_server:info,2017-10-01T10:14:20.052-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 727 state to active [ns_server:info,2017-10-01T10:14:20.053-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 726 state to active [ns_server:info,2017-10-01T10:14:20.053-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 725 state to active [ns_server:info,2017-10-01T10:14:20.053-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 724 state to active [ns_server:info,2017-10-01T10:14:20.054-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 723 state to active [ns_server:info,2017-10-01T10:14:20.055-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 722 state to active [ns_server:info,2017-10-01T10:14:20.055-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 721 state to active [ns_server:info,2017-10-01T10:14:20.055-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 720 state to active [ns_server:info,2017-10-01T10:14:20.059-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 719 state to active [ns_server:info,2017-10-01T10:14:20.059-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 718 state to active [ns_server:info,2017-10-01T10:14:20.060-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 717 state to active [ns_server:info,2017-10-01T10:14:20.060-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 716 state to active [ns_server:info,2017-10-01T10:14:20.060-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 715 state to active [ns_server:info,2017-10-01T10:14:20.061-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 714 state to active [ns_server:info,2017-10-01T10:14:20.061-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 713 state to active [ns_server:info,2017-10-01T10:14:20.062-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 712 state to active [ns_server:info,2017-10-01T10:14:20.062-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 711 state to active [ns_server:info,2017-10-01T10:14:20.062-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 710 state to active [ns_server:info,2017-10-01T10:14:20.063-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 709 state to active [ns_server:info,2017-10-01T10:14:20.064-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 708 state to active 
[ns_server:info,2017-10-01T10:14:20.065-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 707 state to active [ns_server:info,2017-10-01T10:14:20.065-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 706 state to active [ns_server:info,2017-10-01T10:14:20.066-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 705 state to active [ns_server:info,2017-10-01T10:14:20.067-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 704 state to active [ns_server:info,2017-10-01T10:14:20.067-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 703 state to active [ns_server:info,2017-10-01T10:14:20.068-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 702 state to active [ns_server:info,2017-10-01T10:14:20.070-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 701 state to active [ns_server:info,2017-10-01T10:14:20.070-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 700 state to active [ns_server:info,2017-10-01T10:14:20.071-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 699 state to active [ns_server:info,2017-10-01T10:14:20.071-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 698 state to active [ns_server:info,2017-10-01T10:14:20.072-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 697 state to active [ns_server:info,2017-10-01T10:14:20.073-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 696 state to active [ns_server:info,2017-10-01T10:14:20.073-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 695 state to active [ns_server:info,2017-10-01T10:14:20.074-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 694 state to active [ns_server:info,2017-10-01T10:14:20.074-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 693 state to active [ns_server:info,2017-10-01T10:14:20.075-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 692 state to active [ns_server:info,2017-10-01T10:14:20.076-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 691 state to active [ns_server:info,2017-10-01T10:14:20.078-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 690 state to active [ns_server:info,2017-10-01T10:14:20.078-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 689 state to active [ns_server:info,2017-10-01T10:14:20.079-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 688 state to active [ns_server:info,2017-10-01T10:14:20.079-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 687 state to active [ns_server:info,2017-10-01T10:14:20.080-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 686 state to active [ns_server:info,2017-10-01T10:14:20.080-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 685 state to active [ns_server:info,2017-10-01T10:14:20.080-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 684 state to active [ns_server:info,2017-10-01T10:14:20.081-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 683 state to active 
[ns_server:info,2017-10-01T10:14:20.081-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 682 state to active [ns_server:info,2017-10-01T10:14:20.082-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 681 state to active [ns_server:info,2017-10-01T10:14:20.083-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 680 state to active [ns_server:info,2017-10-01T10:14:20.083-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 679 state to active [ns_server:info,2017-10-01T10:14:20.084-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 678 state to active [ns_server:info,2017-10-01T10:14:20.085-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 677 state to active [ns_server:info,2017-10-01T10:14:20.085-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 676 state to active [ns_server:info,2017-10-01T10:14:20.085-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 675 state to active [ns_server:info,2017-10-01T10:14:20.086-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 674 state to active [ns_server:info,2017-10-01T10:14:20.086-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 673 state to active [ns_server:info,2017-10-01T10:14:20.087-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 672 state to active [ns_server:info,2017-10-01T10:14:20.088-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 671 state to active [ns_server:info,2017-10-01T10:14:20.090-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 670 state to active [ns_server:info,2017-10-01T10:14:20.090-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 669 state to active [ns_server:info,2017-10-01T10:14:20.091-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 668 state to active [ns_server:info,2017-10-01T10:14:20.091-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 667 state to active [ns_server:info,2017-10-01T10:14:20.092-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 666 state to active [ns_server:info,2017-10-01T10:14:20.092-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 665 state to active [ns_server:info,2017-10-01T10:14:20.093-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 664 state to active [ns_server:info,2017-10-01T10:14:20.094-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 663 state to active [ns_server:info,2017-10-01T10:14:20.094-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 662 state to active [ns_server:info,2017-10-01T10:14:20.095-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 661 state to active [ns_server:info,2017-10-01T10:14:20.095-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 660 state to active [ns_server:info,2017-10-01T10:14:20.096-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 659 state to active [ns_server:info,2017-10-01T10:14:20.104-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 658 state to active 
[ns_server:info,2017-10-01T10:14:20.104-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 657 state to active [ns_server:info,2017-10-01T10:14:20.105-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 656 state to active [ns_server:info,2017-10-01T10:14:20.105-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 655 state to active [ns_server:info,2017-10-01T10:14:20.106-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 654 state to active [ns_server:info,2017-10-01T10:14:20.106-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 653 state to active [ns_server:info,2017-10-01T10:14:20.106-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 652 state to active [ns_server:info,2017-10-01T10:14:20.107-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 651 state to active [ns_server:info,2017-10-01T10:14:20.107-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 650 state to active [ns_server:info,2017-10-01T10:14:20.107-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 649 state to active [ns_server:info,2017-10-01T10:14:20.108-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 648 state to active [ns_server:info,2017-10-01T10:14:20.108-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 647 state to active [ns_server:info,2017-10-01T10:14:20.108-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 646 state to active [ns_server:info,2017-10-01T10:14:20.109-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 645 state to active [ns_server:info,2017-10-01T10:14:20.109-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 644 state to active [ns_server:info,2017-10-01T10:14:20.109-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 643 state to active [ns_server:info,2017-10-01T10:14:20.110-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 642 state to active [ns_server:info,2017-10-01T10:14:20.110-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 641 state to active [ns_server:info,2017-10-01T10:14:20.110-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 640 state to active [ns_server:info,2017-10-01T10:14:20.111-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 639 state to active [ns_server:info,2017-10-01T10:14:20.111-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 638 state to active [ns_server:info,2017-10-01T10:14:20.111-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 637 state to active [ns_server:info,2017-10-01T10:14:20.112-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 636 state to active [ns_server:info,2017-10-01T10:14:20.112-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 635 state to active [ns_server:info,2017-10-01T10:14:20.113-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 634 state to active [ns_server:info,2017-10-01T10:14:20.113-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 633 state to active 
[ns_server:info,2017-10-01T10:14:20.113-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 632 state to active [ns_server:info,2017-10-01T10:14:20.114-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 631 state to active [ns_server:info,2017-10-01T10:14:20.115-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 630 state to active [ns_server:info,2017-10-01T10:14:20.115-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 629 state to active [ns_server:info,2017-10-01T10:14:20.116-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 628 state to active [ns_server:info,2017-10-01T10:14:20.117-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 627 state to active [ns_server:info,2017-10-01T10:14:20.118-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 626 state to active [ns_server:info,2017-10-01T10:14:20.119-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 625 state to active [ns_server:info,2017-10-01T10:14:20.119-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 624 state to active [ns_server:info,2017-10-01T10:14:20.119-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 623 state to active [ns_server:info,2017-10-01T10:14:20.120-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 622 state to active [ns_server:info,2017-10-01T10:14:20.121-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 621 state to active [ns_server:info,2017-10-01T10:14:20.121-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 620 state to active [ns_server:info,2017-10-01T10:14:20.122-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 619 state to active [ns_server:info,2017-10-01T10:14:20.123-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 618 state to active [ns_server:info,2017-10-01T10:14:20.124-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 617 state to active [ns_server:info,2017-10-01T10:14:20.124-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 616 state to active [ns_server:info,2017-10-01T10:14:20.128-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 615 state to active [ns_server:info,2017-10-01T10:14:20.128-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 614 state to active [ns_server:info,2017-10-01T10:14:20.129-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 613 state to active [ns_server:info,2017-10-01T10:14:20.129-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 612 state to active [ns_server:info,2017-10-01T10:14:20.130-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 611 state to active [ns_server:info,2017-10-01T10:14:20.130-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 610 state to active [ns_server:info,2017-10-01T10:14:20.130-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 609 state to active [ns_server:info,2017-10-01T10:14:20.131-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 608 state to active 
[ns_server:info,2017-10-01T10:14:20.131-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 607 state to active [ns_server:info,2017-10-01T10:14:20.131-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 606 state to active [ns_server:info,2017-10-01T10:14:20.131-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 605 state to active [ns_server:info,2017-10-01T10:14:20.132-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 604 state to active [ns_server:info,2017-10-01T10:14:20.132-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 603 state to active [ns_server:info,2017-10-01T10:14:20.133-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 602 state to active [ns_server:info,2017-10-01T10:14:20.133-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 601 state to active [ns_server:info,2017-10-01T10:14:20.133-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 600 state to active [ns_server:info,2017-10-01T10:14:20.134-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 599 state to active [ns_server:info,2017-10-01T10:14:20.135-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 598 state to active [ns_server:info,2017-10-01T10:14:20.135-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 597 state to active [ns_server:info,2017-10-01T10:14:20.136-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 596 state to active [ns_server:info,2017-10-01T10:14:20.136-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 595 state to active [ns_server:info,2017-10-01T10:14:20.137-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 594 state to active [ns_server:info,2017-10-01T10:14:20.138-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 593 state to active [ns_server:info,2017-10-01T10:14:20.138-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 592 state to active [ns_server:info,2017-10-01T10:14:20.138-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 591 state to active [ns_server:info,2017-10-01T10:14:20.138-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 590 state to active [ns_server:info,2017-10-01T10:14:20.139-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 589 state to active [ns_server:info,2017-10-01T10:14:20.140-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 588 state to active [ns_server:info,2017-10-01T10:14:20.140-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 587 state to active [ns_server:info,2017-10-01T10:14:20.141-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 586 state to active [ns_server:info,2017-10-01T10:14:20.141-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 585 state to active [ns_server:info,2017-10-01T10:14:20.141-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 584 state to active [ns_server:info,2017-10-01T10:14:20.144-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 583 state to active 
[ns_server:info,2017-10-01T10:14:20.145-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 582 state to active [ns_server:info,2017-10-01T10:14:20.145-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 581 state to active [ns_server:info,2017-10-01T10:14:20.146-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 580 state to active [ns_server:info,2017-10-01T10:14:20.146-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 579 state to active [ns_server:info,2017-10-01T10:14:20.147-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 578 state to active [ns_server:info,2017-10-01T10:14:20.147-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 577 state to active [ns_server:info,2017-10-01T10:14:20.147-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 576 state to active [ns_server:info,2017-10-01T10:14:20.148-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 575 state to active [ns_server:info,2017-10-01T10:14:20.151-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 574 state to active [ns_server:info,2017-10-01T10:14:20.152-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 573 state to active [ns_server:info,2017-10-01T10:14:20.153-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 572 state to active [ns_server:info,2017-10-01T10:14:20.153-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 571 state to active [ns_server:info,2017-10-01T10:14:20.153-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 570 state to active [ns_server:info,2017-10-01T10:14:20.154-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 569 state to active [ns_server:info,2017-10-01T10:14:20.154-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 568 state to active [ns_server:info,2017-10-01T10:14:20.155-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 567 state to active [ns_server:info,2017-10-01T10:14:20.155-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 566 state to active [ns_server:info,2017-10-01T10:14:20.157-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 565 state to active [ns_server:info,2017-10-01T10:14:20.157-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 564 state to active [ns_server:info,2017-10-01T10:14:20.158-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 563 state to active [ns_server:info,2017-10-01T10:14:20.158-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 562 state to active [ns_server:info,2017-10-01T10:14:20.159-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 561 state to active [ns_server:info,2017-10-01T10:14:20.159-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 560 state to active [ns_server:info,2017-10-01T10:14:20.160-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 559 state to active [ns_server:info,2017-10-01T10:14:20.161-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 558 state to active 
[ns_server:info,2017-10-01T10:14:20.161-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 557 state to active [ns_server:info,2017-10-01T10:14:20.162-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 556 state to active [ns_server:info,2017-10-01T10:14:20.164-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 555 state to active [ns_server:info,2017-10-01T10:14:20.164-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 554 state to active [ns_server:info,2017-10-01T10:14:20.165-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 553 state to active [ns_server:info,2017-10-01T10:14:20.165-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 552 state to active [ns_server:info,2017-10-01T10:14:20.165-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 551 state to active [ns_server:warn,2017-10-01T10:14:20.166-07:00,n_0@127.0.0.1:kv_monitor<0.1100.0>:kv_monitor:get_buckets:180]The following buckets are not ready: ["beer-sample"] [ns_server:info,2017-10-01T10:14:20.166-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 550 state to active [ns_server:info,2017-10-01T10:14:20.167-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 549 state to active [ns_server:info,2017-10-01T10:14:20.168-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 548 state to active [ns_server:info,2017-10-01T10:14:20.168-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 547 state to active [ns_server:info,2017-10-01T10:14:20.168-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 546 state to active [ns_server:info,2017-10-01T10:14:20.169-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 545 state to active [ns_server:info,2017-10-01T10:14:20.169-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 544 state to active [ns_server:info,2017-10-01T10:14:20.170-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 543 state to active [ns_server:info,2017-10-01T10:14:20.170-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 542 state to active [ns_server:info,2017-10-01T10:14:20.170-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 541 state to active [ns_server:info,2017-10-01T10:14:20.171-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 540 state to active [ns_server:info,2017-10-01T10:14:20.171-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 539 state to active [ns_server:info,2017-10-01T10:14:20.171-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 538 state to active [ns_server:info,2017-10-01T10:14:20.172-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 537 state to active [ns_server:info,2017-10-01T10:14:20.172-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 536 state to active [ns_server:info,2017-10-01T10:14:20.173-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 535 state to active [ns_server:info,2017-10-01T10:14:20.173-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 534 state to active 
[ns_server:info,2017-10-01T10:14:20.174-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 533 state to active [ns_server:info,2017-10-01T10:14:20.174-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 532 state to active [ns_server:info,2017-10-01T10:14:20.175-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 531 state to active [ns_server:info,2017-10-01T10:14:20.175-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 530 state to active [ns_server:info,2017-10-01T10:14:20.176-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 529 state to active [ns_server:info,2017-10-01T10:14:20.176-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 528 state to active [ns_server:info,2017-10-01T10:14:20.177-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 527 state to active [ns_server:info,2017-10-01T10:14:20.178-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 526 state to active [ns_server:info,2017-10-01T10:14:20.178-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 525 state to active [ns_server:info,2017-10-01T10:14:20.181-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 524 state to active [ns_server:info,2017-10-01T10:14:20.181-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 523 state to active [ns_server:info,2017-10-01T10:14:20.181-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 522 state to active [ns_server:info,2017-10-01T10:14:20.182-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 521 state to active [ns_server:info,2017-10-01T10:14:20.183-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 520 state to active [ns_server:info,2017-10-01T10:14:20.183-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 519 state to active [ns_server:info,2017-10-01T10:14:20.184-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 518 state to active [ns_server:info,2017-10-01T10:14:20.184-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 517 state to active [ns_server:info,2017-10-01T10:14:20.185-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 516 state to active [ns_server:info,2017-10-01T10:14:20.185-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 515 state to active [ns_server:info,2017-10-01T10:14:20.186-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 514 state to active [ns_server:info,2017-10-01T10:14:20.186-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 513 state to active [ns_server:info,2017-10-01T10:14:20.189-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 512 state to active [ns_server:info,2017-10-01T10:14:20.189-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 511 state to active [ns_server:info,2017-10-01T10:14:20.189-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 510 state to active [ns_server:info,2017-10-01T10:14:20.190-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 509 state to active 
[ns_server:info,2017-10-01T10:14:20.190-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 508 state to active [ns_server:info,2017-10-01T10:14:20.191-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 507 state to active [ns_server:info,2017-10-01T10:14:20.191-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 506 state to active [ns_server:info,2017-10-01T10:14:20.192-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 505 state to active [ns_server:info,2017-10-01T10:14:20.192-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 504 state to active [ns_server:info,2017-10-01T10:14:20.192-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 503 state to active [ns_server:info,2017-10-01T10:14:20.193-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 502 state to active [ns_server:info,2017-10-01T10:14:20.193-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 501 state to active [ns_server:info,2017-10-01T10:14:20.193-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 500 state to active [ns_server:info,2017-10-01T10:14:20.194-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 499 state to active [ns_server:info,2017-10-01T10:14:20.194-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 498 state to active [ns_server:info,2017-10-01T10:14:20.195-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 497 state to active [ns_server:info,2017-10-01T10:14:20.195-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 496 state to active [ns_server:info,2017-10-01T10:14:20.196-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 495 state to active [ns_server:info,2017-10-01T10:14:20.196-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 494 state to active [ns_server:info,2017-10-01T10:14:20.197-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 493 state to active [ns_server:info,2017-10-01T10:14:20.198-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 492 state to active [ns_server:info,2017-10-01T10:14:20.199-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 491 state to active [ns_server:info,2017-10-01T10:14:20.199-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 490 state to active [ns_server:info,2017-10-01T10:14:20.200-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 489 state to active [ns_server:info,2017-10-01T10:14:20.200-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 488 state to active [ns_server:info,2017-10-01T10:14:20.201-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 487 state to active [ns_server:info,2017-10-01T10:14:20.201-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 486 state to active [ns_server:info,2017-10-01T10:14:20.202-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 485 state to active [ns_server:info,2017-10-01T10:14:20.202-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 484 state to active 
[ns_server:info,2017-10-01T10:14:20.202-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 483 state to active [ns_server:info,2017-10-01T10:14:20.203-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 482 state to active [ns_server:info,2017-10-01T10:14:20.204-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 481 state to active [ns_server:info,2017-10-01T10:14:20.204-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 480 state to active [ns_server:info,2017-10-01T10:14:20.205-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 479 state to active [ns_server:info,2017-10-01T10:14:20.205-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 478 state to active [ns_server:info,2017-10-01T10:14:20.205-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 477 state to active [ns_server:info,2017-10-01T10:14:20.206-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 476 state to active [ns_server:info,2017-10-01T10:14:20.206-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 475 state to active [ns_server:info,2017-10-01T10:14:20.207-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 474 state to active [ns_server:info,2017-10-01T10:14:20.207-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 473 state to active [ns_server:info,2017-10-01T10:14:20.207-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 472 state to active [ns_server:info,2017-10-01T10:14:20.208-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 471 state to active [ns_server:info,2017-10-01T10:14:20.208-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 470 state to active [ns_server:info,2017-10-01T10:14:20.209-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 469 state to active [ns_server:info,2017-10-01T10:14:20.210-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 468 state to active [ns_server:info,2017-10-01T10:14:20.210-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 467 state to active [ns_server:info,2017-10-01T10:14:20.211-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 466 state to active [ns_server:info,2017-10-01T10:14:20.211-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 465 state to active [ns_server:info,2017-10-01T10:14:20.211-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 464 state to active [ns_server:info,2017-10-01T10:14:20.212-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 463 state to active [ns_server:info,2017-10-01T10:14:20.212-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 462 state to active [ns_server:info,2017-10-01T10:14:20.213-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 461 state to active [ns_server:info,2017-10-01T10:14:20.214-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 460 state to active [ns_server:info,2017-10-01T10:14:20.215-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 459 state to active 
[ns_server:info,2017-10-01T10:14:20.215-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 458 state to active [ns_server:info,2017-10-01T10:14:20.216-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 457 state to active [ns_server:info,2017-10-01T10:14:20.217-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 456 state to active [ns_server:info,2017-10-01T10:14:20.217-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 455 state to active [ns_server:info,2017-10-01T10:14:20.218-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 454 state to active [ns_server:info,2017-10-01T10:14:20.218-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 453 state to active [ns_server:info,2017-10-01T10:14:20.218-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 452 state to active [ns_server:info,2017-10-01T10:14:20.219-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 451 state to active [ns_server:info,2017-10-01T10:14:20.219-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 450 state to active [ns_server:info,2017-10-01T10:14:20.220-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 449 state to active [ns_server:info,2017-10-01T10:14:20.220-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 448 state to active [ns_server:info,2017-10-01T10:14:20.221-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 447 state to active [ns_server:info,2017-10-01T10:14:20.221-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 446 state to active [ns_server:info,2017-10-01T10:14:20.222-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 445 state to active [ns_server:info,2017-10-01T10:14:20.222-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 444 state to active [ns_server:info,2017-10-01T10:14:20.222-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 443 state to active [ns_server:info,2017-10-01T10:14:20.223-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 442 state to active [ns_server:info,2017-10-01T10:14:20.224-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 441 state to active [ns_server:info,2017-10-01T10:14:20.224-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 440 state to active [ns_server:info,2017-10-01T10:14:20.224-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 439 state to active [ns_server:info,2017-10-01T10:14:20.225-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 438 state to active [ns_server:info,2017-10-01T10:14:20.226-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 437 state to active [ns_server:info,2017-10-01T10:14:20.226-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 436 state to active [ns_server:info,2017-10-01T10:14:20.227-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 435 state to active [ns_server:info,2017-10-01T10:14:20.227-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 434 state to active 
[ns_server:info,2017-10-01T10:14:20.227-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 433 state to active [ns_server:info,2017-10-01T10:14:20.228-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 432 state to active [ns_server:info,2017-10-01T10:14:20.228-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 431 state to active [ns_server:info,2017-10-01T10:14:20.229-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 430 state to active [ns_server:info,2017-10-01T10:14:20.230-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 429 state to active [ns_server:info,2017-10-01T10:14:20.230-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 428 state to active [ns_server:info,2017-10-01T10:14:20.230-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 427 state to active [ns_server:info,2017-10-01T10:14:20.232-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 426 state to active [ns_server:info,2017-10-01T10:14:20.233-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 425 state to active [ns_server:info,2017-10-01T10:14:20.233-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 424 state to active [ns_server:info,2017-10-01T10:14:20.234-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 423 state to active [ns_server:info,2017-10-01T10:14:20.234-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 422 state to active [ns_server:info,2017-10-01T10:14:20.234-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 421 state to active [ns_server:info,2017-10-01T10:14:20.235-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 420 state to active [ns_server:info,2017-10-01T10:14:20.235-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 419 state to active [ns_server:info,2017-10-01T10:14:20.235-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 418 state to active [ns_server:info,2017-10-01T10:14:20.236-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 417 state to active [ns_server:info,2017-10-01T10:14:20.236-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 416 state to active [ns_server:info,2017-10-01T10:14:20.237-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 415 state to active [ns_server:info,2017-10-01T10:14:20.238-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 414 state to active [ns_server:info,2017-10-01T10:14:20.238-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 413 state to active [ns_server:info,2017-10-01T10:14:20.239-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 412 state to active [ns_server:info,2017-10-01T10:14:20.240-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 411 state to active [ns_server:info,2017-10-01T10:14:20.240-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 410 state to active [ns_server:info,2017-10-01T10:14:20.241-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 409 state to active 
[ns_server:info,2017-10-01T10:14:20.241-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 408 state to active [ns_server:info,2017-10-01T10:14:20.242-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 407 state to active [ns_server:info,2017-10-01T10:14:20.242-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 406 state to active [ns_server:info,2017-10-01T10:14:20.243-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 405 state to active [ns_server:info,2017-10-01T10:14:20.244-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 404 state to active [ns_server:info,2017-10-01T10:14:20.244-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 403 state to active [ns_server:info,2017-10-01T10:14:20.245-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 402 state to active [ns_server:info,2017-10-01T10:14:20.246-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 401 state to active [ns_server:info,2017-10-01T10:14:20.246-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 400 state to active [ns_server:info,2017-10-01T10:14:20.247-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 399 state to active [ns_server:info,2017-10-01T10:14:20.247-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 398 state to active [ns_server:info,2017-10-01T10:14:20.248-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 397 state to active [ns_server:info,2017-10-01T10:14:20.248-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 396 state to active [ns_server:info,2017-10-01T10:14:20.249-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 395 state to active [ns_server:info,2017-10-01T10:14:20.250-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 394 state to active [ns_server:info,2017-10-01T10:14:20.251-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 393 state to active [ns_server:info,2017-10-01T10:14:20.252-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 392 state to active [ns_server:info,2017-10-01T10:14:20.257-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 391 state to active [ns_server:info,2017-10-01T10:14:20.257-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 390 state to active [ns_server:info,2017-10-01T10:14:20.260-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 389 state to active [ns_server:info,2017-10-01T10:14:20.260-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 388 state to active [ns_server:info,2017-10-01T10:14:20.261-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 387 state to active [ns_server:info,2017-10-01T10:14:20.262-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 386 state to active [ns_server:info,2017-10-01T10:14:20.262-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 385 state to active [ns_server:info,2017-10-01T10:14:20.264-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 384 state to active 
[ns_server:info,2017-10-01T10:14:20.265-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 383 state to active [ns_server:info,2017-10-01T10:14:20.265-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 382 state to active [ns_server:info,2017-10-01T10:14:20.265-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 381 state to active [ns_server:info,2017-10-01T10:14:20.266-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 380 state to active [ns_server:info,2017-10-01T10:14:20.266-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 379 state to active [ns_server:info,2017-10-01T10:14:20.266-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 378 state to active [ns_server:info,2017-10-01T10:14:20.267-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 377 state to active [ns_server:info,2017-10-01T10:14:20.267-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 376 state to active [ns_server:info,2017-10-01T10:14:20.267-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 375 state to active [ns_server:info,2017-10-01T10:14:20.268-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 374 state to active [ns_server:info,2017-10-01T10:14:20.268-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 373 state to active [ns_server:info,2017-10-01T10:14:20.268-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 372 state to active [ns_server:info,2017-10-01T10:14:20.268-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 371 state to active [ns_server:info,2017-10-01T10:14:20.269-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 370 state to active [ns_server:info,2017-10-01T10:14:20.270-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 369 state to active [ns_server:info,2017-10-01T10:14:20.270-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 368 state to active [ns_server:info,2017-10-01T10:14:20.271-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 367 state to active [ns_server:info,2017-10-01T10:14:20.272-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 366 state to active [ns_server:info,2017-10-01T10:14:20.272-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 365 state to active [ns_server:info,2017-10-01T10:14:20.273-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 364 state to active [ns_server:info,2017-10-01T10:14:20.274-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 363 state to active [ns_server:info,2017-10-01T10:14:20.274-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 362 state to active [ns_server:info,2017-10-01T10:14:20.274-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 361 state to active [ns_server:info,2017-10-01T10:14:20.275-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 360 state to active [ns_server:info,2017-10-01T10:14:20.275-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 359 state to active 
[ns_server:info,2017-10-01T10:14:20.275-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 358 state to active [ns_server:info,2017-10-01T10:14:20.276-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 357 state to active [ns_server:info,2017-10-01T10:14:20.276-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 356 state to active [ns_server:info,2017-10-01T10:14:20.277-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 355 state to active [ns_server:info,2017-10-01T10:14:20.277-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 354 state to active [ns_server:info,2017-10-01T10:14:20.278-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 353 state to active [ns_server:info,2017-10-01T10:14:20.278-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 352 state to active [ns_server:info,2017-10-01T10:14:20.279-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 351 state to active [ns_server:info,2017-10-01T10:14:20.280-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 350 state to active [ns_server:info,2017-10-01T10:14:20.280-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 349 state to active [ns_server:info,2017-10-01T10:14:20.281-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 348 state to active [ns_server:info,2017-10-01T10:14:20.281-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 347 state to active [ns_server:info,2017-10-01T10:14:20.283-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 346 state to active [ns_server:info,2017-10-01T10:14:20.283-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 345 state to active [ns_server:info,2017-10-01T10:14:20.284-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 344 state to active [ns_server:info,2017-10-01T10:14:20.285-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 343 state to active [ns_server:info,2017-10-01T10:14:20.285-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 342 state to active [ns_server:info,2017-10-01T10:14:20.286-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 341 state to active [ns_server:info,2017-10-01T10:14:20.286-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 340 state to active [ns_server:info,2017-10-01T10:14:20.288-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 339 state to active [ns_server:info,2017-10-01T10:14:20.289-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 338 state to active [ns_server:info,2017-10-01T10:14:20.289-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 337 state to active [ns_server:info,2017-10-01T10:14:20.290-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 336 state to active [ns_server:info,2017-10-01T10:14:20.290-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 335 state to active [ns_server:info,2017-10-01T10:14:20.290-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 334 state to active 
[ns_server:info,2017-10-01T10:14:20.291-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 333 state to active [ns_server:info,2017-10-01T10:14:20.291-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 332 state to active [ns_server:info,2017-10-01T10:14:20.291-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 331 state to active [ns_server:info,2017-10-01T10:14:20.292-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 330 state to active [ns_server:info,2017-10-01T10:14:20.292-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 329 state to active [ns_server:info,2017-10-01T10:14:20.292-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 328 state to active [ns_server:info,2017-10-01T10:14:20.293-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 327 state to active [ns_server:info,2017-10-01T10:14:20.293-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 326 state to active [ns_server:info,2017-10-01T10:14:20.293-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 325 state to active [ns_server:info,2017-10-01T10:14:20.294-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 324 state to active [ns_server:info,2017-10-01T10:14:20.294-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 323 state to active [ns_server:info,2017-10-01T10:14:20.295-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 322 state to active [ns_server:info,2017-10-01T10:14:20.295-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 321 state to active [ns_server:info,2017-10-01T10:14:20.295-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 320 state to active [ns_server:info,2017-10-01T10:14:20.296-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 319 state to active [ns_server:info,2017-10-01T10:14:20.296-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 318 state to active [ns_server:info,2017-10-01T10:14:20.296-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 317 state to active [ns_server:info,2017-10-01T10:14:20.297-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 316 state to active [ns_server:info,2017-10-01T10:14:20.298-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 315 state to active [ns_server:info,2017-10-01T10:14:20.299-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 314 state to active [ns_server:info,2017-10-01T10:14:20.299-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 313 state to active [ns_server:info,2017-10-01T10:14:20.300-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 312 state to active [ns_server:info,2017-10-01T10:14:20.300-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 311 state to active [ns_server:info,2017-10-01T10:14:20.301-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 310 state to active [ns_server:info,2017-10-01T10:14:20.301-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 309 state to active 
[ns_server:info,2017-10-01T10:14:20.301-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 308 state to active [ns_server:info,2017-10-01T10:14:20.302-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 307 state to active [ns_server:info,2017-10-01T10:14:20.303-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 306 state to active [ns_server:info,2017-10-01T10:14:20.304-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 305 state to active [ns_server:info,2017-10-01T10:14:20.304-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 304 state to active [ns_server:info,2017-10-01T10:14:20.306-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 303 state to active [ns_server:info,2017-10-01T10:14:20.306-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 302 state to active [ns_server:info,2017-10-01T10:14:20.307-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 301 state to active [ns_server:info,2017-10-01T10:14:20.307-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 300 state to active [ns_server:info,2017-10-01T10:14:20.307-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 299 state to active [ns_server:info,2017-10-01T10:14:20.308-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 298 state to active [ns_server:info,2017-10-01T10:14:20.308-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 297 state to active [ns_server:info,2017-10-01T10:14:20.309-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 296 state to active [ns_server:info,2017-10-01T10:14:20.309-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 295 state to active [ns_server:info,2017-10-01T10:14:20.309-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 294 state to active [ns_server:info,2017-10-01T10:14:20.310-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 293 state to active [ns_server:info,2017-10-01T10:14:20.310-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 292 state to active [ns_server:info,2017-10-01T10:14:20.310-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 291 state to active [ns_server:info,2017-10-01T10:14:20.311-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 290 state to active [ns_server:info,2017-10-01T10:14:20.311-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 289 state to active [ns_server:info,2017-10-01T10:14:20.311-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 288 state to active [ns_server:info,2017-10-01T10:14:20.312-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 287 state to active [ns_server:info,2017-10-01T10:14:20.312-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 286 state to active [ns_server:info,2017-10-01T10:14:20.312-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 285 state to active [ns_server:info,2017-10-01T10:14:20.313-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 284 state to active 
[ns_server:info,2017-10-01T10:14:20.313-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 283 state to active [ns_server:info,2017-10-01T10:14:20.313-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 282 state to active [ns_server:info,2017-10-01T10:14:20.314-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 281 state to active [ns_server:info,2017-10-01T10:14:20.314-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 280 state to active [ns_server:info,2017-10-01T10:14:20.315-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 279 state to active [ns_server:info,2017-10-01T10:14:20.315-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 278 state to active [ns_server:info,2017-10-01T10:14:20.315-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 277 state to active [ns_server:info,2017-10-01T10:14:20.316-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 276 state to active [ns_server:info,2017-10-01T10:14:20.316-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 275 state to active [ns_server:info,2017-10-01T10:14:20.317-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 274 state to active [ns_server:info,2017-10-01T10:14:20.317-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 273 state to active [ns_server:info,2017-10-01T10:14:20.318-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 272 state to active [ns_server:info,2017-10-01T10:14:20.318-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 271 state to active [ns_server:info,2017-10-01T10:14:20.319-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 270 state to active [ns_server:info,2017-10-01T10:14:20.319-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 269 state to active [ns_server:info,2017-10-01T10:14:20.320-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 268 state to active [ns_server:info,2017-10-01T10:14:20.320-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 267 state to active [ns_server:info,2017-10-01T10:14:20.321-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 266 state to active [ns_server:info,2017-10-01T10:14:20.321-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 265 state to active [ns_server:info,2017-10-01T10:14:20.322-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 264 state to active [ns_server:info,2017-10-01T10:14:20.323-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 263 state to active [ns_server:info,2017-10-01T10:14:20.324-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 262 state to active [ns_server:info,2017-10-01T10:14:20.325-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 261 state to active [ns_server:info,2017-10-01T10:14:20.325-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 260 state to active [ns_server:info,2017-10-01T10:14:20.327-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 259 state to active 
[ns_server:info,2017-10-01T10:14:20.327-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 258 state to active [ns_server:info,2017-10-01T10:14:20.327-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 257 state to active [ns_server:info,2017-10-01T10:14:20.328-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 256 state to active [ns_server:info,2017-10-01T10:14:20.328-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 255 state to active [ns_server:info,2017-10-01T10:14:20.328-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 254 state to active [ns_server:info,2017-10-01T10:14:20.329-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 253 state to active [ns_server:info,2017-10-01T10:14:20.329-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 252 state to active [ns_server:info,2017-10-01T10:14:20.329-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 251 state to active [ns_server:info,2017-10-01T10:14:20.329-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 250 state to active [ns_server:info,2017-10-01T10:14:20.330-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 249 state to active [ns_server:info,2017-10-01T10:14:20.330-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 248 state to active [ns_server:info,2017-10-01T10:14:20.331-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 247 state to active [ns_server:info,2017-10-01T10:14:20.331-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 246 state to active [ns_server:info,2017-10-01T10:14:20.331-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 245 state to active [ns_server:info,2017-10-01T10:14:20.332-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 244 state to active [ns_server:info,2017-10-01T10:14:20.332-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 243 state to active [ns_server:info,2017-10-01T10:14:20.333-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 242 state to active [ns_server:info,2017-10-01T10:14:20.337-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 241 state to active [ns_server:info,2017-10-01T10:14:20.341-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 240 state to active [ns_server:info,2017-10-01T10:14:20.341-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 239 state to active [ns_server:info,2017-10-01T10:14:20.341-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 238 state to active [ns_server:info,2017-10-01T10:14:20.342-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 237 state to active [ns_server:info,2017-10-01T10:14:20.343-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 236 state to active [ns_server:info,2017-10-01T10:14:20.343-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 235 state to active [ns_server:info,2017-10-01T10:14:20.345-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 234 state to active 
[ns_server:info,2017-10-01T10:14:20.346-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 233 state to active [ns_server:info,2017-10-01T10:14:20.347-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 232 state to active [ns_server:info,2017-10-01T10:14:20.347-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 231 state to active [ns_server:info,2017-10-01T10:14:20.347-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 230 state to active [ns_server:info,2017-10-01T10:14:20.348-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 229 state to active [ns_server:info,2017-10-01T10:14:20.348-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 228 state to active [ns_server:info,2017-10-01T10:14:20.349-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 227 state to active [ns_server:info,2017-10-01T10:14:20.350-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 226 state to active [ns_server:info,2017-10-01T10:14:20.350-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 225 state to active [ns_server:info,2017-10-01T10:14:20.353-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 224 state to active [ns_server:info,2017-10-01T10:14:20.354-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 223 state to active [ns_server:info,2017-10-01T10:14:20.355-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 222 state to active [ns_server:info,2017-10-01T10:14:20.357-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 221 state to active [ns_server:info,2017-10-01T10:14:20.358-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 220 state to active [ns_server:info,2017-10-01T10:14:20.359-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 219 state to active [ns_server:info,2017-10-01T10:14:20.359-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 218 state to active [ns_server:info,2017-10-01T10:14:20.359-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 217 state to active [ns_server:info,2017-10-01T10:14:20.360-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 216 state to active [ns_server:info,2017-10-01T10:14:20.361-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 215 state to active [ns_server:info,2017-10-01T10:14:20.362-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 214 state to active [ns_server:info,2017-10-01T10:14:20.362-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 213 state to active [ns_server:info,2017-10-01T10:14:20.363-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 212 state to active [ns_server:info,2017-10-01T10:14:20.364-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 211 state to active [ns_server:info,2017-10-01T10:14:20.365-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 210 state to active [ns_server:info,2017-10-01T10:14:20.366-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 209 state to active 
[ns_server:info,2017-10-01T10:14:20.366-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 208 state to active [ns_server:info,2017-10-01T10:14:20.366-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 207 state to active [ns_server:info,2017-10-01T10:14:20.367-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 206 state to active [ns_server:info,2017-10-01T10:14:20.367-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 205 state to active [ns_server:info,2017-10-01T10:14:20.368-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 204 state to active [ns_server:info,2017-10-01T10:14:20.368-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 203 state to active [ns_server:info,2017-10-01T10:14:20.369-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 202 state to active [ns_server:info,2017-10-01T10:14:20.369-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 201 state to active [ns_server:info,2017-10-01T10:14:20.369-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 200 state to active [ns_server:info,2017-10-01T10:14:20.370-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 199 state to active [ns_server:info,2017-10-01T10:14:20.370-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 198 state to active [ns_server:info,2017-10-01T10:14:20.370-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 197 state to active [ns_server:info,2017-10-01T10:14:20.371-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 196 state to active [ns_server:info,2017-10-01T10:14:20.371-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 195 state to active [ns_server:info,2017-10-01T10:14:20.373-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 194 state to active [ns_server:info,2017-10-01T10:14:20.373-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 193 state to active [ns_server:info,2017-10-01T10:14:20.373-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 192 state to active [ns_server:info,2017-10-01T10:14:20.374-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 191 state to active [ns_server:info,2017-10-01T10:14:20.374-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 190 state to active [ns_server:info,2017-10-01T10:14:20.374-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 189 state to active [ns_server:info,2017-10-01T10:14:20.375-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 188 state to active [ns_server:info,2017-10-01T10:14:20.375-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 187 state to active [ns_server:info,2017-10-01T10:14:20.375-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 186 state to active [ns_server:info,2017-10-01T10:14:20.376-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 185 state to active [ns_server:info,2017-10-01T10:14:20.377-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 184 state to active 
[ns_server:info,2017-10-01T10:14:20.377-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 183 state to active [ns_server:info,2017-10-01T10:14:20.377-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 182 state to active [ns_server:info,2017-10-01T10:14:20.377-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 181 state to active [ns_server:info,2017-10-01T10:14:20.378-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 180 state to active [ns_server:info,2017-10-01T10:14:20.378-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 179 state to active [ns_server:info,2017-10-01T10:14:20.378-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 178 state to active [ns_server:info,2017-10-01T10:14:20.379-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 177 state to active [ns_server:info,2017-10-01T10:14:20.381-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 176 state to active [ns_server:info,2017-10-01T10:14:20.385-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 175 state to active [ns_server:info,2017-10-01T10:14:20.386-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 174 state to active [ns_server:info,2017-10-01T10:14:20.387-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 173 state to active [ns_server:info,2017-10-01T10:14:20.388-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 172 state to active [ns_server:info,2017-10-01T10:14:20.388-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 171 state to active [ns_server:info,2017-10-01T10:14:20.389-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 170 state to active [ns_server:info,2017-10-01T10:14:20.390-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 169 state to active [ns_server:info,2017-10-01T10:14:20.390-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 168 state to active [ns_server:info,2017-10-01T10:14:20.390-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 167 state to active [ns_server:info,2017-10-01T10:14:20.390-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 166 state to active [ns_server:info,2017-10-01T10:14:20.391-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 165 state to active [ns_server:info,2017-10-01T10:14:20.391-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 164 state to active [ns_server:info,2017-10-01T10:14:20.392-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 163 state to active [ns_server:info,2017-10-01T10:14:20.392-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 162 state to active [ns_server:info,2017-10-01T10:14:20.392-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 161 state to active [ns_server:info,2017-10-01T10:14:20.393-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 160 state to active [ns_server:info,2017-10-01T10:14:20.393-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 159 state to active 
[ns_server:info,2017-10-01T10:14:20.393-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 158 state to active [ns_server:info,2017-10-01T10:14:20.393-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 157 state to active [ns_server:info,2017-10-01T10:14:20.394-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 156 state to active [ns_server:info,2017-10-01T10:14:20.394-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 155 state to active [ns_server:info,2017-10-01T10:14:20.394-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 154 state to active [ns_server:info,2017-10-01T10:14:20.395-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 153 state to active [ns_server:info,2017-10-01T10:14:20.395-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 152 state to active [ns_server:info,2017-10-01T10:14:20.396-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 151 state to active [ns_server:info,2017-10-01T10:14:20.396-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 150 state to active [ns_server:info,2017-10-01T10:14:20.397-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 149 state to active [ns_server:info,2017-10-01T10:14:20.397-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 148 state to active [ns_server:info,2017-10-01T10:14:20.398-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 147 state to active [ns_server:info,2017-10-01T10:14:20.398-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 146 state to active [ns_server:info,2017-10-01T10:14:20.399-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 145 state to active [ns_server:info,2017-10-01T10:14:20.399-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 144 state to active [ns_server:info,2017-10-01T10:14:20.399-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 143 state to active [ns_server:info,2017-10-01T10:14:20.400-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 142 state to active [ns_server:info,2017-10-01T10:14:20.400-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 141 state to active [ns_server:info,2017-10-01T10:14:20.400-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 140 state to active [ns_server:info,2017-10-01T10:14:20.401-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 139 state to active [ns_server:info,2017-10-01T10:14:20.401-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 138 state to active [ns_server:info,2017-10-01T10:14:20.401-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 137 state to active [ns_server:info,2017-10-01T10:14:20.402-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 136 state to active [ns_server:info,2017-10-01T10:14:20.402-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 135 state to active [ns_server:info,2017-10-01T10:14:20.402-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 134 state to active 
[ns_server:info,2017-10-01T10:14:20.403-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 133 state to active [ns_server:info,2017-10-01T10:14:20.403-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 132 state to active [ns_server:info,2017-10-01T10:14:20.403-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 131 state to active [ns_server:info,2017-10-01T10:14:20.404-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 130 state to active [ns_server:info,2017-10-01T10:14:20.404-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 129 state to active [ns_server:info,2017-10-01T10:14:20.404-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 128 state to active [ns_server:info,2017-10-01T10:14:20.405-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 127 state to active [ns_server:info,2017-10-01T10:14:20.406-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 126 state to active [ns_server:info,2017-10-01T10:14:20.406-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 125 state to active [ns_server:info,2017-10-01T10:14:20.407-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 124 state to active [ns_server:info,2017-10-01T10:14:20.407-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 123 state to active [ns_server:info,2017-10-01T10:14:20.408-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 122 state to active [ns_server:info,2017-10-01T10:14:20.408-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 121 state to active [ns_server:info,2017-10-01T10:14:20.409-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 120 state to active [ns_server:info,2017-10-01T10:14:20.409-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 119 state to active [ns_server:info,2017-10-01T10:14:20.410-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 118 state to active [ns_server:info,2017-10-01T10:14:20.410-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 117 state to active [ns_server:info,2017-10-01T10:14:20.410-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 116 state to active [ns_server:info,2017-10-01T10:14:20.411-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 115 state to active [ns_server:info,2017-10-01T10:14:20.411-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 114 state to active [ns_server:info,2017-10-01T10:14:20.412-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 113 state to active [ns_server:info,2017-10-01T10:14:20.412-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 112 state to active [ns_server:info,2017-10-01T10:14:20.412-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 111 state to active [ns_server:info,2017-10-01T10:14:20.412-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 110 state to active [ns_server:info,2017-10-01T10:14:20.413-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 109 state to active 
[ns_server:info,2017-10-01T10:14:20.413-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 108 state to active [ns_server:info,2017-10-01T10:14:20.414-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 107 state to active [ns_server:info,2017-10-01T10:14:20.414-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 106 state to active [ns_server:info,2017-10-01T10:14:20.414-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 105 state to active [ns_server:info,2017-10-01T10:14:20.417-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 104 state to active [ns_server:info,2017-10-01T10:14:20.417-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 103 state to active [ns_server:info,2017-10-01T10:14:20.417-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 102 state to active [ns_server:info,2017-10-01T10:14:20.423-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 101 state to active [ns_server:info,2017-10-01T10:14:20.424-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 100 state to active [ns_server:info,2017-10-01T10:14:20.425-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 99 state to active [ns_server:info,2017-10-01T10:14:20.426-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 98 state to active [ns_server:info,2017-10-01T10:14:20.426-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 97 state to active [ns_server:info,2017-10-01T10:14:20.427-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 96 state to active [ns_server:info,2017-10-01T10:14:20.427-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 95 state to active [ns_server:info,2017-10-01T10:14:20.428-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 94 state to active [ns_server:info,2017-10-01T10:14:20.428-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 93 state to active [ns_server:info,2017-10-01T10:14:20.428-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 92 state to active [ns_server:info,2017-10-01T10:14:20.429-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 91 state to active [ns_server:info,2017-10-01T10:14:20.429-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 90 state to active [ns_server:info,2017-10-01T10:14:20.430-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 89 state to active [ns_server:info,2017-10-01T10:14:20.430-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 88 state to active [ns_server:info,2017-10-01T10:14:20.430-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 87 state to active [ns_server:info,2017-10-01T10:14:20.431-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 86 state to active [ns_server:info,2017-10-01T10:14:20.431-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 85 state to active [ns_server:info,2017-10-01T10:14:20.431-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 84 state to active 
[ns_server:info,2017-10-01T10:14:20.432-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 83 state to active [ns_server:info,2017-10-01T10:14:20.432-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 82 state to active [ns_server:info,2017-10-01T10:14:20.432-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 81 state to active [ns_server:info,2017-10-01T10:14:20.433-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 80 state to active [ns_server:info,2017-10-01T10:14:20.433-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 79 state to active [ns_server:info,2017-10-01T10:14:20.433-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 78 state to active [ns_server:info,2017-10-01T10:14:20.434-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 77 state to active [ns_server:info,2017-10-01T10:14:20.434-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 76 state to active [ns_server:info,2017-10-01T10:14:20.434-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 75 state to active [ns_server:info,2017-10-01T10:14:20.435-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 74 state to active [ns_server:info,2017-10-01T10:14:20.435-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 73 state to active [ns_server:info,2017-10-01T10:14:20.435-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 72 state to active [ns_server:info,2017-10-01T10:14:20.435-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 71 state to active [ns_server:info,2017-10-01T10:14:20.436-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 70 state to active [ns_server:info,2017-10-01T10:14:20.436-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 69 state to active [ns_server:info,2017-10-01T10:14:20.436-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 68 state to active [ns_server:info,2017-10-01T10:14:20.437-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 67 state to active [ns_server:info,2017-10-01T10:14:20.438-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 66 state to active [ns_server:info,2017-10-01T10:14:20.438-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 65 state to active [ns_server:info,2017-10-01T10:14:20.439-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 64 state to active [ns_server:info,2017-10-01T10:14:20.439-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 63 state to active [ns_server:info,2017-10-01T10:14:20.439-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 62 state to active [ns_server:info,2017-10-01T10:14:20.440-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 61 state to active [ns_server:info,2017-10-01T10:14:20.440-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 60 state to active [ns_server:info,2017-10-01T10:14:20.441-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 59 state to active 
[ns_server:info,2017-10-01T10:14:20.441-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 58 state to active [ns_server:info,2017-10-01T10:14:20.441-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 57 state to active [ns_server:info,2017-10-01T10:14:20.442-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 56 state to active [ns_server:info,2017-10-01T10:14:20.442-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 55 state to active [ns_server:info,2017-10-01T10:14:20.442-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 54 state to active [ns_server:info,2017-10-01T10:14:20.442-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 53 state to active [ns_server:info,2017-10-01T10:14:20.443-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 52 state to active [ns_server:info,2017-10-01T10:14:20.443-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 51 state to active [ns_server:info,2017-10-01T10:14:20.443-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 50 state to active [ns_server:info,2017-10-01T10:14:20.443-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 49 state to active [ns_server:info,2017-10-01T10:14:20.444-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 48 state to active [ns_server:info,2017-10-01T10:14:20.444-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 47 state to active [ns_server:info,2017-10-01T10:14:20.444-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 46 state to active [ns_server:info,2017-10-01T10:14:20.445-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 45 state to active [ns_server:info,2017-10-01T10:14:20.446-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 44 state to active [ns_server:info,2017-10-01T10:14:20.446-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 43 state to active [ns_server:info,2017-10-01T10:14:20.447-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 42 state to active [ns_server:info,2017-10-01T10:14:20.447-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 41 state to active [ns_server:info,2017-10-01T10:14:20.447-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 40 state to active [ns_server:info,2017-10-01T10:14:20.447-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 39 state to active [ns_server:info,2017-10-01T10:14:20.448-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 38 state to active [ns_server:info,2017-10-01T10:14:20.448-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 37 state to active [ns_server:info,2017-10-01T10:14:20.448-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 36 state to active [ns_server:info,2017-10-01T10:14:20.449-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 35 state to active [ns_server:info,2017-10-01T10:14:20.450-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 34 state to active 
[ns_server:info,2017-10-01T10:14:20.450-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 33 state to active [ns_server:info,2017-10-01T10:14:20.451-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 32 state to active [ns_server:info,2017-10-01T10:14:20.452-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 31 state to active [ns_server:info,2017-10-01T10:14:20.453-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 30 state to active [ns_server:info,2017-10-01T10:14:20.453-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 29 state to active [ns_server:info,2017-10-01T10:14:20.454-07:00,n_0@127.0.0.1:<0.1402.0>:ns_memcached:do_handle_call:553]Changed vbucket 28 state to active [ns_server:info,2017-10-01T10:14:20.455-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 27 state to active [ns_server:info,2017-10-01T10:14:20.455-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 26 state to active [ns_server:info,2017-10-01T10:14:20.456-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 25 state to active [ns_server:info,2017-10-01T10:14:20.456-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 24 state to active [ns_server:info,2017-10-01T10:14:20.456-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 23 state to active [ns_server:info,2017-10-01T10:14:20.457-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 22 state to active [ns_server:info,2017-10-01T10:14:20.457-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 21 state to active [ns_server:info,2017-10-01T10:14:20.457-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 20 state to active [ns_server:info,2017-10-01T10:14:20.458-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 19 state to active [ns_server:info,2017-10-01T10:14:20.458-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 18 state to active [ns_server:info,2017-10-01T10:14:20.458-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 17 state to active [ns_server:info,2017-10-01T10:14:20.459-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 16 state to active [ns_server:info,2017-10-01T10:14:20.459-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 15 state to active [ns_server:info,2017-10-01T10:14:20.459-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 14 state to active [ns_server:info,2017-10-01T10:14:20.467-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 13 state to active [ns_server:info,2017-10-01T10:14:20.468-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 12 state to active [ns_server:info,2017-10-01T10:14:20.468-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 11 state to active [ns_server:info,2017-10-01T10:14:20.468-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 10 state to active [ns_server:info,2017-10-01T10:14:20.468-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 9 state to active 
[ns_server:info,2017-10-01T10:14:20.469-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 8 state to active [ns_server:info,2017-10-01T10:14:20.469-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 7 state to active [ns_server:info,2017-10-01T10:14:20.470-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 6 state to active [ns_server:info,2017-10-01T10:14:20.470-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 5 state to active [ns_server:info,2017-10-01T10:14:20.470-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 4 state to active [ns_server:info,2017-10-01T10:14:20.471-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 3 state to active [ns_server:info,2017-10-01T10:14:20.471-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 2 state to active [ns_server:info,2017-10-01T10:14:20.472-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 1 state to active [ns_server:info,2017-10-01T10:14:20.472-07:00,n_0@127.0.0.1:<0.1404.0>:ns_memcached:do_handle_call:553]Changed vbucket 0 state to active [ns_server:info,2017-10-01T10:14:20.474-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:handle_call:293]Enabling traffic to bucket "beer-sample" [ns_server:info,2017-10-01T10:14:20.474-07:00,n_0@127.0.0.1:ns_memcached-beer-sample<0.1397.0>:ns_memcached:handle_call:297]Bucket "beer-sample" marked as warmed in 1 seconds [ns_server:info,2017-10-01T10:14:24.950-07:00,n_0@127.0.0.1:ns_doctor<0.836.0>:ns_doctor:update_status:314]The following buckets became ready on node 'n_0@127.0.0.1': ["beer-sample"] [ns_server:debug,2017-10-01T10:14:25.796-07:00,n_0@127.0.0.1:<0.1728.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"couchbase",admin} [ns_server:debug,2017-10-01T10:14:25.877-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"beer-sample",bucket} [ns_server:debug,2017-10-01T10:14:27.685-07:00,n_0@127.0.0.1:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@cbas",admin} [cluster:debug,2017-10-01T10:14:38.787-07:00,n_0@127.0.0.1:ns_cluster<0.161.0>:ns_cluster:handle_call:174]handling add_node("127.0.0.1", 9001, <<"0">>, ..) [cluster:info,2017-10-01T10:14:38.789-07:00,n_0@127.0.0.1:ns_cluster<0.161.0>:ns_cluster:do_change_address:436]Change of address to "172.17.0.2" is requested. 
[error_logger:info,2017-10-01T10:14:38.789-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,588,nodedown,'n_0@127.0.0.1'}} [error_logger:error,2017-10-01T10:14:38.789-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {<0.1392.0>,docs_kv_sup} Context: child_terminated Reason: noconnection Offender: [{pid,<11720.291.0>}, {name,capi_ddoc_manager_sup}, {mfargs, {capi_ddoc_manager_sup,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:error,2017-10-01T10:14:38.789-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {<0.1392.0>,docs_kv_sup} Context: shutdown_error Reason: noconnection Offender: [{pid,<11720.307.0>}, {name,couch_stats_reader}, {mfargs, {couch_stats_reader,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:error,2017-10-01T10:14:38.789-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {<0.1392.0>,docs_kv_sup} Context: shutdown_error Reason: noconnection Offender: [{pid,<11720.302.0>}, {name,capi_set_view_manager}, {mfargs, {capi_set_view_manager,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:error,2017-10-01T10:14:38.790-07:00,n_0@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {<0.900.0>,xdcr_sup} Context: child_terminated Reason: noconnection Offender: [{pid,<11720.267.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_n_0@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.790-07:00,n_0@127.0.0.1:<0.904.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.903.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:38.790-07:00,n_0@127.0.0.1:<0.781.0>:misc:delaying_crash:1381]Delaying crash exit:{{nodedown,'babysitter_of_n_0@127.0.0.1'}, {gen_server,call, [{ns_crash_log,'babysitter_of_n_0@127.0.0.1'}, consume,infinity]}} by 1000ms Stacktrace: [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {ns_log,crash_consumption_loop,0, [{file,"src/ns_log.erl"},{line,63}]}, {misc,delaying_crash,2,[{file,"src/misc.erl"},{line,1378}]}, {proc_lib,init_p,3,[{file,"proc_lib.erl"},{line,224}]}] [user:warn,2017-10-01T10:14:38.794-07:00,nonode@nohost:ns_node_disco<0.792.0>:ns_node_disco:handle_info:198]Node nonode@nohost saw that node 'n_0@127.0.0.1' went down. 
Details: [{nodedown_reason, net_kernel_terminated}] [ns_server:debug,2017-10-01T10:14:38.794-07:00,nonode@nohost:<0.2178.0>:dist_manager:teardown:271]Got nodedown msg {nodedown,'n_0@127.0.0.1', [{nodedown_reason,net_kernel_terminated}]} after terminating net kernel [ns_server:info,2017-10-01T10:14:38.794-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:do_adjust_address:287]Adjusted IP to "172.17.0.2" [ns_server:info,2017-10-01T10:14:38.794-07:00,nonode@nohost:dist_manager<0.149.0>:dist_manager:bringup:214]Attempting to bring up net_kernel with name 'n_0@172.17.0.2' [error_logger:info,2017-10-01T10:14:38.796-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.2182.0>}, {name,erl_epmd}, {mfargs,{erl_epmd,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.796-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.2183.0>}, {name,auth}, {mfargs,{auth,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [user:info,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:ns_node_disco<0.792.0>:ns_node_disco:handle_info:192]Node 'n_0@172.17.0.2' saw that node 'n_0@172.17.0.2' came up. Tags: [] [ns_server:debug,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:<0.702.0>:doc_replicator:nodeup_monitoring_loop:122]got nodeup event. Considering ddocs replication [ns_server:debug,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [error_logger:info,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',local_nodeup,{node,'n_0@172.17.0.2'}} [ns_server:debug,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:configure_net_kernel:258]Set net_kernel vebosity to 10 -> 0 [error_logger:info,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.2184.0>}, {name,net_kernel}, {mfargs, {net_kernel,start_link, [['n_0@172.17.0.2',longnames]]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [ns_server:info,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:save_node:147]saving node to "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/nodefile" [error_logger:info,2017-10-01T10:14:38.804-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_sup} started: [{pid,<0.2181.0>}, {name,net_sup_dynamic}, {mfargs, {erl_distribution,start_link, [['n_0@172.17.0.2',longnames]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:38.807-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:bringup:228]Attempted to save node name to disk: ok [ns_server:debug,2017-10-01T10:14:38.807-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:wait_for_node:235]Waiting for connection to node 'babysitter_of_n_0@127.0.0.1' to be established 
[error_logger:info,2017-10-01T10:14:38.807-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'babysitter_of_n_0@127.0.0.1'}} [ns_server:debug,2017-10-01T10:14:38.809-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:wait_for_node:247]Observed node 'babysitter_of_n_0@127.0.0.1' to come up [ns_server:info,2017-10-01T10:14:38.809-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:do_adjust_address:291]Re-setting cookie {{sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}, 'n_0@172.17.0.2'} [ns_server:info,2017-10-01T10:14:38.809-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:save_address_config:142]Deleting irrelevant ip file "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ip_start": {error, enoent} [ns_server:info,2017-10-01T10:14:38.809-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:save_address_config:143]saving ip config to "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ip" [ns_server:info,2017-10-01T10:14:38.810-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:do_adjust_address:302]Persisted the address successfully [ns_server:debug,2017-10-01T10:14:38.811-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:wait_for_node:235]Waiting for connection to node 'couchdb_n_0@127.0.0.1' to be established [error_logger:info,2017-10-01T10:14:38.811-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:38.811-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:38.812-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.2190.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:38.812-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:wait_for_node:247]Observed node 'couchdb_n_0@127.0.0.1' to come up [error_logger:info,2017-10-01T10:14:38.812-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'n_0@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:38.816-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.2193.0>,shutdown}} [error_logger:info,2017-10-01T10:14:38.818-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'n_0@172.17.0.2'}} [ns_server:debug,2017-10-01T10:14:38.819-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:complete_rename:338]Renaming node from 'n_0@127.0.0.1' to 'n_0@172.17.0.2'. 
[ns_server:debug,2017-10-01T10:14:38.820-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf buckets -> buckets: [{configs,[{"beer-sample", [{repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@127.0.0.1']}, {sasl_password,"*****"}, {map,[['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], 
['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], 
['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], 
['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], 
              ['n_0@127.0.0.1',undefined],
              ... (remaining entries of the pre-rename map are all identical: ['n_0@127.0.0.1',undefined]) ...]},
        {map_opts_hash,133465355}]}]}] ->
 [{configs,[{"beer-sample",
             [{repl_type,dcp},
              {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>},
              {auth_type,sasl},
              {replica_index,true},
              {ram_quota,104857600},
              {flush_enabled,false},
              {num_threads,3},
              {eviction_policy,value_only},
              {conflict_resolution_type,seqno},
              {storage_mode,couchstore},
              {type,membase},
              {num_vbuckets,1024},
              {num_replicas,1},
              {replication_topology,star},
              {servers,['n_0@172.17.0.2']},
              {sasl_password,"*****"},
              {map,[['n_0@172.17.0.2',undefined],
                    ... (one entry per vbucket, num_vbuckets = 1024, each ['n_0@172.17.0.2',undefined]; the replica slot is undefined because only one server is in the map) ...]},
              {map_opts_hash,133465355}]}]}]
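The rename_node_in_config entries above and below record the same transformation: every occurrence of the old node name 'n_0@127.0.0.1' inside a stored config value (here the bucket config and the vbucket_map_history value) is rewritten to the new name 'n_0@172.17.0.2'. As a rough illustration only, and not the actual ns_server implementation, a recursive rewrite over nested Erlang terms could look like the following sketch (the module and function names are hypothetical):

%% Hypothetical sketch, not ns_server source: rewrite one node atom to
%% another everywhere inside a nested config term such as the
%% vbucket_map_history value dumped in this log.
-module(rename_sketch).
-export([rename_node/3]).

rename_node(Old, New, Old) ->
    New;                                             % the old node atom itself
rename_node(Old, New, Term) when is_list(Term) ->
    [rename_node(Old, New, T) || T <- Term];         % recurse into lists (e.g. the map, a list of [Active,Replica] pairs)
rename_node(Old, New, Term) when is_tuple(Term) ->
    list_to_tuple([rename_node(Old, New, T) || T <- tuple_to_list(Term)]);
rename_node(_Old, _New, Term) ->
    Term.                                            % binaries, numbers and other atoms pass through unchanged

%% Hypothetical usage:
%% rename_sketch:rename_node('n_0@127.0.0.1', 'n_0@172.17.0.2', OldConfigValue).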
[ns_server:debug,2017-10-01T10:14:38.835-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf vbucket_map_history -> vbucket_map_history:
 [{[['n_0@127.0.0.1',undefined],
    ['n_0@127.0.0.1',undefined],
['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined], ['n_0@127.0.0.1',undefined]], [{replication_topology,star},{tags,undefined},{max_slaves,10}]}] -> [{[['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined]], [{replication_topology,star},{tags,undefined},{max_slaves,10}]}] [ns_server:debug,2017-10-01T10:14:38.843-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {service_map,cbas} -> {service_map,cbas}: ['n_0@127.0.0.1'] -> ['n_0@172.17.0.2'] [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',services} -> {node,'n_0@172.17.0.2', services}: [cbas,kv] -> [cbas,kv] [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',stop_xdcr} -> {node, 'n_0@172.17.0.2', stop_xdcr}: '_deleted' -> '_deleted' [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf nodes_wanted -> nodes_wanted: ['n_0@127.0.0.1'] -> ['n_0@172.17.0.2'] [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf server_groups -> server_groups: [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@127.0.0.1']}]] -> [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@172.17.0.2']}]] [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',audit} -> {node,'n_0@172.17.0.2', audit}: [{log_path,"logs/n_0"}] -> [{log_path,"logs/n_0"}] [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',capi_port} -> {node, 
'n_0@172.17.0.2', capi_port}: 9500 -> 9500 [ns_server:debug,2017-10-01T10:14:38.844-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_auth_port} -> {node, 'n_0@172.17.0.2', cbas_auth_port}: 9310 -> 9310 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_cc_client_port} -> {node, 'n_0@172.17.0.2', cbas_cc_client_port}: 9303 -> 9303 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_cc_cluster_port} -> {node, 'n_0@172.17.0.2', cbas_cc_cluster_port}: 9302 -> 9302 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_cc_http_port} -> {node, 'n_0@172.17.0.2', cbas_cc_http_port}: 9301 -> 9301 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_cluster_port} -> {node, 'n_0@172.17.0.2', cbas_cluster_port}: 9305 -> 9305 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_data_port} -> {node, 'n_0@172.17.0.2', cbas_data_port}: 9306 -> 9306 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_debug_port} -> {node, 'n_0@172.17.0.2', cbas_debug_port}: 9309 -> 9309 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_http_port} -> {node, 'n_0@172.17.0.2', cbas_http_port}: 9300 -> 9300 [ns_server:debug,2017-10-01T10:14:38.845-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_hyracks_console_port} -> {node, 'n_0@172.17.0.2', cbas_hyracks_console_port}: 9304 -> 9304 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_messaging_port} -> {node, 'n_0@172.17.0.2', cbas_messaging_port}: 9308 -> 9308 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_result_port} -> {node, 'n_0@172.17.0.2', cbas_result_port}: 9307 -> 9307 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',cbas_ssl_port} -> {node, 'n_0@172.17.0.2', cbas_ssl_port}: 19300 -> 19300 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',compaction_daemon} -> {node, 'n_0@172.17.0.2', compaction_daemon}: [{check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] -> [{check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',config_version} -> {node, 'n_0@172.17.0.2', config_version}: {5,0} 
-> {5,0} [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',fts_http_port} -> {node, 'n_0@172.17.0.2', fts_http_port}: 9200 -> 9200 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',fts_ssl_port} -> {node, 'n_0@172.17.0.2', fts_ssl_port}: 19200 -> 19200 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_admin_port} -> {node, 'n_0@172.17.0.2', indexer_admin_port}: 9100 -> 9100 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_http_port} -> {node, 'n_0@172.17.0.2', indexer_http_port}: 9102 -> 9102 [ns_server:debug,2017-10-01T10:14:38.846-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_https_port} -> {node, 'n_0@172.17.0.2', indexer_https_port}: 19102 -> 19102 [ns_server:debug,2017-10-01T10:14:38.847-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_scan_port} -> {node, 'n_0@172.17.0.2', indexer_scan_port}: 9101 -> 9101 [ns_server:debug,2017-10-01T10:14:38.847-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_stcatchup_port} -> {node, 'n_0@172.17.0.2', indexer_stcatchup_port}: 9104 -> 9104 [ns_server:debug,2017-10-01T10:14:38.847-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_stinit_port} -> {node, 'n_0@172.17.0.2', indexer_stinit_port}: 9103 -> 9103 [ns_server:debug,2017-10-01T10:14:38.847-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',indexer_stmaint_port} -> {node, 'n_0@172.17.0.2', indexer_stmaint_port}: 9105 -> 9105 [ns_server:debug,2017-10-01T10:14:38.848-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',is_enterprise} -> {node, 'n_0@172.17.0.2', is_enterprise}: true -> true [ns_server:debug,2017-10-01T10:14:38.848-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',isasl} -> {node,'n_0@172.17.0.2', isasl}: [{path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}] -> [{path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}] [ns_server:debug,2017-10-01T10:14:38.848-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ldap_enabled} -> {node, 'n_0@172.17.0.2', ldap_enabled}: true -> true [ns_server:debug,2017-10-01T10:14:38.848-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',membership} -> {node, 'n_0@172.17.0.2', membership}: active -> active [ns_server:debug,2017-10-01T10:14:38.848-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',memcached} -> {node, 'n_0@172.17.0.2', memcached}: [{port,12000}, {dedicated_port,11999}, {ssl_port,11996}, 
{admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts", "@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] -> [{port,12000}, {dedicated_port,11999}, {ssl_port,11996}, {admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts", "@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2017-10-01T10:14:38.849-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',memcached_config} -> {node, 'n_0@172.17.0.2', memcached_config}: {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, 
{xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]} -> {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]} [ns_server:debug,2017-10-01T10:14:38.849-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',memcached_defaults} -> {node, 'n_0@172.17.0.2', memcached_defaults}: [{maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}] -> [{maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2017-10-01T10:14:38.849-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',moxi} -> {node,'n_0@172.17.0.2',moxi}: [{port,12001},{verbosity,[]}] -> [{port,12001},{verbosity,[]}] [ns_server:debug,2017-10-01T10:14:38.849-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ns_log} -> {node,'n_0@172.17.0.2', ns_log}: [{filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}] -> [{filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}] [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',port_servers} -> {node, 'n_0@172.17.0.2', port_servers}: [] -> [] [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',projector_port} -> {node, 'n_0@172.17.0.2', projector_port}: 10000 -> 10000 
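The rename_node_in_config entries above walk every node-scoped key of the form {node, Node, Key}, plus node lists such as nodes_wanted, server_groups and the vbucket map, rewriting 'n_0@127.0.0.1' to 'n_0@172.17.0.2'. A minimal Erlang sketch of that kind of transformation over a config proplist is given below; it is an illustration only, not ns_server's actual dist_manager code, and the module and function names (rename_sketch, rename_config/3) are invented for the example.

%% Illustrative sketch only: rewrite node-scoped config entries when a node
%% is renamed. Not the real dist_manager implementation.
-module(rename_sketch).
-export([rename_config/3]).

%% Rename every {node, OldNode, Key} key and substitute OldNode with NewNode
%% anywhere it occurs inside a value (nodes_wanted, server_groups, vbucket
%% maps, service maps, ...).
rename_config(Config, OldNode, NewNode) ->
    [{rename_key(K, OldNode, NewNode), rename_value(V, OldNode, NewNode)}
     || {K, V} <- Config].

rename_key({node, OldNode, Key}, OldNode, NewNode) -> {node, NewNode, Key};
rename_key(Key, _OldNode, _NewNode) -> Key.

%% Walk lists and tuples, replacing the old node atom wherever it appears;
%% strings are lists of integers, so they pass through unchanged.
rename_value(OldNode, OldNode, NewNode) -> NewNode;
rename_value(V, OldNode, NewNode) when is_list(V) ->
    [rename_value(E, OldNode, NewNode) || E <- V];
rename_value(V, OldNode, NewNode) when is_tuple(V) ->
    list_to_tuple(rename_value(tuple_to_list(V), OldNode, NewNode));
rename_value(V, _OldNode, _NewNode) -> V.

For example, rename_config([{nodes_wanted, ['n_0@127.0.0.1']}, {{node, 'n_0@127.0.0.1', rest}, [{port,9000}]}], 'n_0@127.0.0.1', 'n_0@172.17.0.2') produces [{nodes_wanted, ['n_0@172.17.0.2']}, {{node, 'n_0@172.17.0.2', rest}, [{port,9000}]}], which is the shape of the old -> new pairs logged above.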
[ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',query_port} -> {node, 'n_0@172.17.0.2', query_port}: 9499 -> 9499 [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',rest} -> {node,'n_0@172.17.0.2',rest}: [{port,9000},{port_meta,local}] -> [{port,9000},{port_meta,local}] [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ssl_capi_port} -> {node, 'n_0@172.17.0.2', ssl_capi_port}: 19500 -> 19500 [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ssl_proxy_downstream_port} -> {node, 'n_0@172.17.0.2', ssl_proxy_downstream_port}: 11998 -> 11998 [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ssl_proxy_upstream_port} -> {node, 'n_0@172.17.0.2', ssl_proxy_upstream_port}: 11997 -> 11997 [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ssl_query_port} -> {node, 'n_0@172.17.0.2', ssl_query_port}: 19499 -> 19499 [ns_server:debug,2017-10-01T10:14:38.850-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',ssl_rest_port} -> {node, 'n_0@172.17.0.2', ssl_rest_port}: 19000 -> 19000 [ns_server:debug,2017-10-01T10:14:38.851-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',uuid} -> {node,'n_0@172.17.0.2',uuid}: <<"a7cadc9d6a7b1c5e2ac6210075d857d5">> -> <<"a7cadc9d6a7b1c5e2ac6210075d857d5">> [ns_server:debug,2017-10-01T10:14:38.851-07:00,n_0@172.17.0.2:ns_config<0.165.0>:dist_manager:rename_node_in_config:350]renaming node conf {node,'n_0@127.0.0.1',xdcr_rest_port} -> {node, 'n_0@172.17.0.2', xdcr_rest_port}: 13000 -> 13000 [ns_server:debug,2017-10-01T10:14:38.852-07:00,n_0@172.17.0.2:terse_bucket_info_uploader-beer-sample<0.1399.0>:terse_bucket_info_uploader:flush_refresh_msgs:83]Flushed 1 refresh messages [ns_server:debug,2017-10-01T10:14:38.857-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets,nodes_wanted,server_groups, vbucket_map_history, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {service_map,cbas}, {node,'n_0@172.17.0.2',audit}, {node,'n_0@172.17.0.2',capi_port}, {node,'n_0@172.17.0.2',cbas_auth_port}, {node,'n_0@172.17.0.2',cbas_cc_client_port}, {node,'n_0@172.17.0.2',cbas_cc_cluster_port}, {node,'n_0@172.17.0.2',cbas_cc_http_port}, {node,'n_0@172.17.0.2',cbas_cluster_port}, {node,'n_0@172.17.0.2',cbas_data_port}, {node,'n_0@172.17.0.2',cbas_debug_port}, {node,'n_0@172.17.0.2',cbas_http_port}, {node,'n_0@172.17.0.2', cbas_hyracks_console_port}, {node,'n_0@172.17.0.2',cbas_messaging_port}, {node,'n_0@172.17.0.2',cbas_result_port}, {node,'n_0@172.17.0.2',cbas_ssl_port}, {node,'n_0@172.17.0.2',compaction_daemon}, {node,'n_0@172.17.0.2',config_version}, {node,'n_0@172.17.0.2',fts_http_port}, {node,'n_0@172.17.0.2',fts_ssl_port}, {node,'n_0@172.17.0.2',indexer_admin_port}, {node,'n_0@172.17.0.2',indexer_http_port}, 
{node,'n_0@172.17.0.2',indexer_https_port}, {node,'n_0@172.17.0.2',indexer_scan_port}, {node,'n_0@172.17.0.2',indexer_stcatchup_port}, {node,'n_0@172.17.0.2',indexer_stinit_port}, {node,'n_0@172.17.0.2',indexer_stmaint_port}, {node,'n_0@172.17.0.2',is_enterprise}, {node,'n_0@172.17.0.2',isasl}, {node,'n_0@172.17.0.2',ldap_enabled}, {node,'n_0@172.17.0.2',membership}, {node,'n_0@172.17.0.2',memcached}, {node,'n_0@172.17.0.2',memcached_config}, {node,'n_0@172.17.0.2',memcached_defaults}, {node,'n_0@172.17.0.2',moxi}, {node,'n_0@172.17.0.2',ns_log}, {node,'n_0@172.17.0.2',port_servers}, {node,'n_0@172.17.0.2',projector_port}, {node,'n_0@172.17.0.2',query_port}, {node,'n_0@172.17.0.2',rest}, {node,'n_0@172.17.0.2',services}, {node,'n_0@172.17.0.2',ssl_capi_port}, {node,'n_0@172.17.0.2', ssl_proxy_downstream_port}, {node,'n_0@172.17.0.2',ssl_proxy_upstream_port}, {node,'n_0@172.17.0.2',ssl_query_port}, {node,'n_0@172.17.0.2',ssl_rest_port}, {node,'n_0@172.17.0.2',stop_xdcr}, {node,'n_0@172.17.0.2',uuid}, {node,'n_0@172.17.0.2',xdcr_rest_port}]..) [ns_server:debug,2017-10-01T10:14:38.859-07:00,n_0@172.17.0.2:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted [ns_server:info,2017-10-01T10:14:38.861-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:handle_info:453]Got certificate and pkey change [ns_server:debug,2017-10-01T10:14:38.862-07:00,n_0@172.17.0.2:mb_master<0.967.0>:mb_master:update_peers:490]List of peers has changed from ['n_0@127.0.0.1'] to ['n_0@172.17.0.2'] [ns_server:debug,2017-10-01T10:14:38.868-07:00,n_0@172.17.0.2:terse_bucket_info_uploader-beer-sample<0.1399.0>:terse_bucket_info_uploader:flush_refresh_msgs:83]Flushed 5 refresh messages [ns_server:debug,2017-10-01T10:14:38.872-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{20,63674097278}}]}] [ns_server:debug,2017-10-01T10:14:38.872-07:00,n_0@172.17.0.2:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:14:38.886-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',xdcr_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|13000] [ns_server:debug,2017-10-01T10:14:38.887-07:00,n_0@172.17.0.2:<0.2204.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@172.17.0.2'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:14:38.887-07:00,n_0@172.17.0.2:dist_manager<0.149.0>:dist_manager:complete_rename:340]Node 'n_0@127.0.0.1' has been renamed to 'n_0@172.17.0.2'. 
[ns_server:debug,2017-10-01T10:14:38.887-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',uuid} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ssl_rest_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19000] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ssl_query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19499] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|11997] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|11998] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ssl_capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19500] [ns_server:debug,2017-10-01T10:14:38.888-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',rest} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {port,9000}, {port_meta,local}] [ns_server:debug,2017-10-01T10:14:38.889-07:00,n_0@172.17.0.2:<0.2204.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@172.17.0.2'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:info,2017-10-01T10:14:38.889-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:maybe_generate_local_cert:554]Detected existing node certificate that did not match cluster certificate. Will re-generate [cluster:debug,2017-10-01T10:14:38.889-07:00,n_0@172.17.0.2:<0.2176.0>:ns_cluster:maybe_rename:475]Renamed node from 'n_0@127.0.0.1' to 'n_0@172.17.0.2'. [ns_server:debug,2017-10-01T10:14:38.891-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',query_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9499] [ns_server:debug,2017-10-01T10:14:38.891-07:00,n_0@172.17.0.2:ns_node_disco_events<0.791.0>:ns_node_disco_rep_events:handle_event:42]Detected a new nodes (['n_0@172.17.0.2']). Moving config around. 
[ns_server:debug,2017-10-01T10:14:38.891-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',projector_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|10000] [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',port_servers} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}] [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ns_log} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/ns_log"}] [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',moxi} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {port,12001}, {verbosity,[]}] [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',memcached_defaults} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',memcached_config} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]}] 
[ns_server:debug,2017-10-01T10:14:38.892-07:00,n_0@172.17.0.2:capi_ddoc_replication_srv-beer-sample<0.2212.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',memcached} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {port,12000}, {dedicated_port,11999}, {ssl_port,11996}, {admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts","@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.rbac"}, {log_path,"logs/n_0"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}| active] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',ldap_enabled} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|true] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',isasl} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/isasl.pw"}] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',is_enterprise} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|true] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_stmaint_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9105] [ns_server:debug,2017-10-01T10:14:38.893-07:00,n_0@172.17.0.2:wait_link_to_couchdb_node<0.709.0>:ns_server_nodes_sup:wait_link_to_couchdb_node_loop:177]Link to couchdb node was unpaused. 
[ns_server:debug,2017-10-01T10:14:38.894-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_stinit_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9103] [ns_server:debug,2017-10-01T10:14:38.894-07:00,n_0@172.17.0.2:wait_link_to_couchdb_node<0.709.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:133]Waiting for ns_couchdb node to start [ns_server:info,2017-10-01T10:14:38.894-07:00,n_0@172.17.0.2:ns_node_disco_events<0.791.0>:ns_node_disco_log:handle_event:46]ns_node_disco_log: nodes changed: ['n_0@172.17.0.2'] [error_logger:info,2017-10-01T10:14:38.894-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<0.2211.0>}, {name,doc_replicator}, {mfargs, {capi_ddoc_manager,start_replicator, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.895-07:00,n_0@172.17.0.2:<0.833.0>:restartable:loop:71]Restarting child <0.834.0> MFA: {ns_doctor_sup,start_link,[]} Shutdown policy: infinity Caller: {<0.161.0>,#Ref<0.0.0.17882>} [error_logger:info,2017-10-01T10:14:38.895-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<0.2212.0>}, {name,doc_replication_srv}, {mfargs, {doc_replication_srv,start_link,["beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:memcached_config_mgr<0.894.0>:memcached_config_mgr:handle_info:146]Got DOWN with reason: unpaused from memcached port server: <11719.81.0>. 
Shutting down [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:<0.909.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.894.0>} exited with reason {shutdown, {memcached_port_server_down, <11719.81.0>, unpaused}} [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.391.0> [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:capi_ddoc_replication_srv-beer-sample<0.2212.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.391.0> [ns_server:debug,2017-10-01T10:14:38.897-07:00,n_0@172.17.0.2:<0.833.0>:restartable:shutdown_child:120]Successfully terminated process <0.834.0> [ns_server:debug,2017-10-01T10:14:38.898-07:00,n_0@172.17.0.2:memcached_config_mgr<0.2218.0>:memcached_config_mgr:init:45]waiting for completion of initial ns_ports_setup round [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:xdc_rdoc_replication_srv<0.2227.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:<0.900.0>:xdc_rdoc_manager:start_link_remote:45]Starting xdc_rdoc_manager on 'couchdb_n_0@127.0.0.1' with following links: [<0.2226.0>, <0.2227.0>, <0.2215.0>] [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:<0.837.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.836.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:xdc_rep_manager<0.2215.0>:replicated_storage:wait_for_startup:54]Start waiting for startup [ns_server:debug,2017-10-01T10:14:38.899-07:00,n_0@172.17.0.2:<0.833.0>:restartable:start_child:98]Started child process <0.2221.0> MFA: {ns_doctor_sup,start_link,[]} [ns_server:debug,2017-10-01T10:14:38.900-07:00,n_0@172.17.0.2:<0.965.0>:restartable:loop:71]Restarting child <0.967.0> MFA: {mb_master,start_link,[]} Shutdown policy: infinity Caller: {<0.161.0>,#Ref<0.0.0.17954>} [ns_server:info,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:mb_master<0.967.0>:mb_master:terminate:298]Synchronously shutting down child mb_master_sup [ns_server:debug,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:<0.968.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.967.0>} exited with reason shutdown [ns_server:debug,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:<0.965.0>:restartable:shutdown_child:120]Successfully terminated process <0.967.0> [ns_server:debug,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:<0.965.0>:mb_master:check_master_takeover_needed:140]Sending master node question to the following nodes: [] [ns_server:debug,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:<0.965.0>:mb_master:check_master_takeover_needed:142]Got replies: [] [ns_server:debug,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:<0.965.0>:mb_master:check_master_takeover_needed:148]Was unable to discover master, not going to force mastership takeover [user:info,2017-10-01T10:14:38.901-07:00,n_0@172.17.0.2:mb_master<0.2237.0>:mb_master:init:86]I'm the only node, so I'm the master. 
[ns_server:info,2017-10-01T10:14:38.902-07:00,n_0@172.17.0.2:ns_log<0.780.0>:ns_log:handle_cast:188]suppressing duplicate log mb_master:undefined([<<"I'm the only node, so I'm the master.">>]) because it's been seen 1 times in the past 28.912817 secs (last seen 28.912817 secs ago [ns_server:debug,2017-10-01T10:14:38.902-07:00,n_0@172.17.0.2:mb_master_sup<0.2239.0>:misc:start_singleton:855]start_singleton(gen_server, ns_tick, [], []): started as <0.2240.0> on 'n_0@172.17.0.2' [error_logger:info,2017-10-01T10:14:38.895-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.2213.0>}, {name,xdc_stats_holder}, {mfargs, {proc_lib,start_link, [xdcr_sup,link_stats_holder_body,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.902-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.2214.0>}, {name,xdc_replication_sup}, {mfargs,{xdc_replication_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:error,2017-10-01T10:14:38.902-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.894.0>, {error, {shutdown, {memcached_port_server_down, <11719.81.0>,unpaused}}}} [ns_server:debug,2017-10-01T10:14:38.902-07:00,n_0@172.17.0.2:ns_orchestrator_child_sup<0.2242.0>:misc:start_singleton:855]start_singleton(gen_server, auto_reprovision, [], []): started as <0.2244.0> on 'n_0@172.17.0.2' [error_logger:error,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {shutdown,{memcached_port_server_down,<11719.81.0>,unpaused}} Offender: [{pid,<0.894.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'capi_ddoc_manager_sup-beer-sample'} started: [{pid,<11720.390.0>}, {name,capi_ddoc_manager_events}, {mfargs, {capi_ddoc_manager,start_link_event_manager, ["beer-sample"]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,'capi_ddoc_manager_sup-beer-sample'} started: [{pid,<11720.391.0>}, {name,capi_ddoc_manager}, {mfargs, {capi_ddoc_manager,start_link, ["beer-sample",<0.2211.0>,<0.2212.0>]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.389.0>}, {name,capi_ddoc_manager_sup}, {mfargs, {capi_ddoc_manager_sup,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, 
{shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:ns_orchestrator_child_sup<0.2242.0>:misc:start_singleton:855]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.2245.0> on 'n_0@172.17.0.2' [error_logger:info,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.2218.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.903-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.2222.0>}, {name,ns_doctor_events}, {mfargs, {gen_event,start_link,[{local,ns_doctor_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.2215.0>}, {name,xdc_rep_manager}, {mfargs,{xdc_rep_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,30000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.2226.0>}, {name,xdc_rdoc_replicator}, {mfargs,{xdc_rdoc_manager,start_replicator,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<0.2227.0>}, {name,xdc_rdoc_replication_srv}, {mfargs,{doc_replication_srv,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.2223.0>}, {name,ns_doctor}, {mfargs,{ns_doctor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,auto_failover},{pid,<0.977.0>}} [error_logger:info,2017-10-01T10:14:38.905-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,auto_failover, {pid,<0.977.0>}, {'n_0@172.17.0.2',<0.977.0>}}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,ns_orchestrator},{pid,<0.975.0>}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,ns_orchestrator, {pid,<0.975.0>}, {'n_0@172.17.0.2',<0.975.0>}}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: 
{delete_global_name,{item,auto_reprovision},{pid,<0.974.0>}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,auto_reprovision, {pid,<0.974.0>}, {'n_0@172.17.0.2',<0.974.0>}}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,ns_tick},{pid,<0.970.0>}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,ns_tick, {pid,<0.970.0>}, {'n_0@172.17.0.2',<0.970.0>}}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.2240.0>,ns_tick,<0.2240.0>,#Fun} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.2240.0>,#Ref<0.0.0.17991>}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_tick},{pid,<0.2240.0>}} [error_logger:info,2017-10-01T10:14:38.906-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.2240.0>}, {name,ns_tick}, {mfargs,{ns_tick,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.2243.0>}, {name,ns_janitor_server}, {mfargs,{ns_janitor_server,start_link,[]}}, 
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.2244.0>,auto_reprovision,<0.2244.0>,#Fun} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.2244.0>,#Ref<0.0.0.18010>}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.907-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_reprovision},{pid,<0.2244.0>}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.2244.0>}, {name,auto_reprovision}, {mfargs,{auto_reprovision,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.2245.0>,ns_orchestrator,<0.2245.0>,#Fun} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.2245.0>,#Ref<0.0.0.18027>}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} 
[error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_orchestrator},{pid,<0.2245.0>}} [error_logger:info,2017-10-01T10:14:38.908-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [ns_server:debug,2017-10-01T10:14:38.895-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_stcatchup_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9104] [ns_server:debug,2017-10-01T10:14:38.910-07:00,n_0@172.17.0.2:<0.2246.0>:auto_failover:init:150]init auto_failover. [ns_server:debug,2017-10-01T10:14:38.914-07:00,n_0@172.17.0.2:ns_ports_setup<0.880.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,projector,saslauthd_port,goxdcr,xdcr_proxy,cbas] [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_child_sup} started: [{pid,<0.2245.0>}, {name,ns_orchestrator}, {mfargs,{ns_orchestrator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.2242.0>}, {name,ns_orchestrator_child_sup}, {mfargs,{ns_orchestrator_child_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.2246.0>,auto_failover,<0.2246.0>,#Fun} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.2246.0>,#Ref<0.0.0.18053>}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, 
{replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2']}, {replies,[{'n_0@172.17.0.2',true}]}} [error_logger:info,2017-10-01T10:14:38.917-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,auto_failover},{pid,<0.2246.0>}} [error_logger:info,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,[]}} [error_logger:info,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.396.0>}, {name,capi_set_view_manager}, {mfargs, {capi_set_view_manager,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_scan_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9101] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_https_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19102] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9102] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',indexer_admin_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9100] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',fts_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19200] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',fts_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9200] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',config_version} -> 
[{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|{5,0}] [ns_server:debug,2017-10-01T10:14:38.918-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',compaction_daemon} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_ssl_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|19300] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_result_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9307] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_messaging_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9308] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_hyracks_console_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9304] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9300] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_debug_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9309] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_data_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9306] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9305] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_cc_http_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9301] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_cc_cluster_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9302] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_cc_client_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9303] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',cbas_auth_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9310] [ns_server:debug,2017-10-01T10:14:38.919-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config 
change: {node,'n_0@172.17.0.2',capi_port} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}|9500] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',audit} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {log_path,"logs/n_0"}] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: server_groups -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097278}}]}, [{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['n_0@172.17.0.2']}]] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: nodes_wanted -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097278}}]}, 'n_0@172.17.0.2'] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',stop_xdcr} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{3,63674097278}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_0@172.17.0.2',services} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, cbas,kv] [ns_server:debug,2017-10-01T10:14:38.920-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,cbas} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, 'n_0@172.17.0.2'] [ns_server:debug,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.397.0> [ns_server:debug,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:xdc_rdoc_replication_srv<0.2227.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.397.0> [ns_server:debug,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:xdc_rep_manager<0.2215.0>:replicated_storage:wait_for_startup:57]Received replicated storage registration from <11720.397.0> [ns_server:debug,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [error_logger:info,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.900.0>,xdcr_sup} started: [{pid,<11720.397.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_n_0@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:ns_orchestrator_sup<0.2241.0>:misc:start_singleton:855]start_singleton(gen_server, auto_failover, [], []): started as <0.2246.0> on 'n_0@172.17.0.2' [error_logger:info,2017-10-01T10:14:38.922-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.1392.0>,docs_kv_sup} started: [{pid,<11720.401.0>}, {name,couch_stats_reader}, {mfargs, {couch_stats_reader,start_link_remote, ['couchdb_n_0@127.0.0.1',"beer-sample"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.923-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {local,ns_orchestrator_sup} started: [{pid,<0.2246.0>}, {name,auto_failover}, {mfargs,{auto_failover,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.923-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.2241.0>}, {name,ns_orchestrator_sup}, {mfargs,{ns_orchestrator_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2017-10-01T10:14:38.923-07:00,n_0@172.17.0.2:<0.965.0>:restartable:start_child:98]Started child process <0.2237.0> MFA: {mb_master,start_link,[]} [cluster:info,2017-10-01T10:14:38.923-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:do_change_address:442]Renamed node. New name is 'n_0@172.17.0.2'. [ns_server:debug,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:<0.1136.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1055.0>} exited with reason normal [ns_server:debug,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-cbauth<0.1132.0>:json_rpc_connection:handle_info:130]Socket closed [ns_server:debug,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_info:126]Observed json rpc process {"cbas-cbauth",<0.1132.0>} died with reason shutdown [error_logger:error,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.1132.0>,{error,shutdown}} [ns_server:debug,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-service_api<0.1138.0>:json_rpc_connection:handle_info:130]Socket closed [error_logger:error,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.1138.0>,{error,shutdown}} [ns_server:error,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:service_agent-cbas<0.1094.0>:service_agent:handle_info:243]Lost json rpc connection for service cbas, reason shutdown. Terminating. 
[ns_server:error,2017-10-01T10:14:38.926-07:00,n_0@172.17.0.2:service_agent-cbas<0.1094.0>:service_agent:terminate:264]Terminating abnormally [ns_server:debug,2017-10-01T10:14:38.927-07:00,n_0@172.17.0.2:<0.1096.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1094.0>} exited with reason {lost_connection, shutdown} [error_logger:error,2017-10-01T10:14:38.927-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server 'service_agent-cbas' terminating ** Last message in was {'DOWN',#Ref<0.0.0.6395>,process,<0.1138.0>,shutdown} ** When Server state == {state,cbas, {dict,3,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}, {{[],[], [[{node,'n_0@127.0.0.1'}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>]], [],[],[], [[{uuid,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}| 'n_0@172.17.0.2']], [],[],[],[],[],[], [[{node,'n_0@172.17.0.2'}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>]], [],[]}}}, undefined,undefined,undefined,undefined,undefined, undefined,undefined, {<<"NQ==">>,[]}, {<<"NQ==">>, {topology, ['n_0@172.17.0.2'], [<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>], true, [<<"Not enough nodes to achieve awesomeness">>]}}, <0.1281.0>,<0.1282.0>} ** Reason for termination == ** {lost_connection,shutdown} [error_logger:error,2017-10-01T10:14:38.927-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: service_agent:init/1 pid: <0.1094.0> registered_name: 'service_agent-cbas' exception exit: {lost_connection,shutdown} in function gen_server:terminate/6 (gen_server.erl, line 744) ancestors: [service_agent_children_sup,service_agent_sup,ns_server_sup, ns_server_nodes_sup,<0.173.0>,ns_server_cluster_sup, <0.89.0>] messages: [{'EXIT',<0.1281.0>,{lost_connection,shutdown}}, {'EXIT',<0.1282.0>,{lost_connection,shutdown}}] links: [<0.1096.0>,<0.884.0>] dictionary: [] trap_exit: true status: running heap_size: 6772 stack_size: 27 reductions: 6934 neighbours: [error_logger:error,2017-10-01T10:14:38.927-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,service_agent_children_sup} Context: child_terminated Reason: {lost_connection,shutdown} Offender: [{pid,<0.1094.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:38.928-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_children_sup} started: [{pid,<0.2254.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:38.928-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"projector-cbauth",<0.1127.0>} needs_update [ns_server:debug,2017-10-01T10:14:38.930-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: vbucket_map_history -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {[['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2'|...], [...]|...], [{replication_topology,star},{tags,undefined},{max_slaves,10}]}] [ns_server:debug,2017-10-01T10:14:38.933-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.1011.0>} needs_update [ns_server:debug,2017-10-01T10:14:38.932-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{6,63674097278}}], {configs,[{"beer-sample", [{map,[{0, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {1, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {2, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {3, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {4, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {5, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {6, ['n_0@127.0.0.1',undefined], 
['n_0@172.17.0.2',undefined]}, {7, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {8, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {9, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {10, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {11, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {12, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {13, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {14, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {15, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {16, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {17, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {18, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {19, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {20, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {21, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {22, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {23, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {24, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {25, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {26, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {27, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {28, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {29, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {30, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {31, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {32, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {33, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {34, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {35, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {36, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {37, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {38, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {39, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {40, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {41, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {42, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {43, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {44, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {45, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {46, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {47, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {48, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {49, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {50, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {51, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {52, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {53, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {54, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {55, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {56, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {57, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {58, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {59, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {60, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {61, 
['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {62, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {63, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {64, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {65, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {66, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {67, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {68, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {69, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {70, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {71, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {72, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {73, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {74, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {75, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {76, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {77, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {78, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {79, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {80, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {81, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {82, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {83, ['n_0@127.0.0.1',undefined], ['n_0@172.17.0.2',undefined]}, {84,['n_0@127.0.0.1',undefined],['n_0@172.17.0.2'|...]}, {85,['n_0@127.0.0.1'|...],[...]}, {86,[...],...}, {87,...}, {...}|...]}, {fastForwardMap,[]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map_opts_hash,133465355}]}]}] [ns_server:info,2017-10-01T10:14:38.937-07:00,n_0@172.17.0.2:ns_doctor<0.2223.0>:ns_doctor:update_status:314]The following buckets became ready on node 'n_0@172.17.0.2': ["beer-sample"] [cluster:debug,2017-10-01T10:14:38.939-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:do_add_node_with_connectivity:546]Posting node info to engage_cluster on {"127.0.0.1",9001}: {[{<<"requestedTargetNodeHostname">>,<<"127.0.0.1">>}, {<<"requestedServices">>,[cbas]}, {availableStorage, {struct, [{hdd, [{struct, [{path,<<"/">>},{sizeKBytes,53039240},{usagePercent,64}]}, {struct, [{path,<<"/dev">>},{sizeKBytes,4083712},{usagePercent,0}]}, {struct, [{path,<<"/sys/fs/cgroup">>}, {sizeKBytes,4083712}, {usagePercent,0}]}, {struct, [{path,<<"/ssh">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/latestbuilds">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/releases">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/usr/share/zoneinfo/Zulu">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/etc/timezone">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/etc/resolv.conf">>}, {sizeKBytes,816744}, {usagePercent,1}]}, {struct, [{path,<<"/etc/hostname">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/etc/hosts">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/dev/shm">>}, {sizeKBytes,65536}, {usagePercent,0}]}, {struct, [{path,<<"/home/couchbase/jenkins">>}, 
{sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/home/couchbase/reporef">>}, {sizeKBytes,53039240}, {usagePercent,64}]}]}]}}, {storageTotals, {struct, [{ram, {struct, [{total,8363446272}, {quotaTotal,3344957440}, {quotaUsed,104857600}, {used,5844676608}, {usedByData,27738432}, {quotaUsedPerNode,104857600}, {quotaTotalPerNode,3344957440}]}}, {hdd, {struct, [{total,54312181760}, {quotaTotal,54312181760}, {used,34759796326}, {usedByData,21263327}, {free,19552385434}]}}]}}, {storage, {struct, [{ssd,[]}, {hdd, [{struct, [{path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {index_path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {quotaMb,none}, {state,ok}]}]}]}}, {systemStats, {struct, [{cpu_utilization_rate,99.49874686716792}, {swap_total,8849977344}, {swap_used,526950400}, {mem_total,8363446272}, {mem_free,3464790016}]}}, {interestingStats, {struct, [{cmd_get,0.0}, {couch_docs_actual_disk_size,20489601}, {couch_docs_data_size,20474880}, {couch_spatial_data_size,0}, {couch_spatial_disk_size,0}, {couch_views_actual_disk_size,773726}, {couch_views_data_size,769574}, {curr_items,7303}, {curr_items_tot,7303}, {ep_bg_fetched,0.0}, {get_hits,0.0}, {mem_used,27738432}, {ops,0.0}, {vb_active_num_non_resident,0}, {vb_replica_curr_items,0}]}}, {uptime,<<"58">>}, {memoryTotal,8363446272}, {memoryFree,3464790016}, {mcdMemoryReserved,6380}, {mcdMemoryAllocated,6380}, {couchApiBase,<<"http://172.17.0.2:9500/">>}, {couchApiBaseHTTPS,<<"https://172.17.0.2:19500/">>}, {otpCookie,{sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}}, {clusterMembership,<<"active">>}, {recoveryType,none}, {status,<<"healthy">>}, {otpNode,<<"n_0@172.17.0.2">>}, {thisNode,true}, {hostname,<<"172.17.0.2:9000">>}, {clusterCompatibility,327680}, {version,<<"5.0.0-0000-enterprise">>}, {os,<<"x86_64-unknown-linux-gnu">>}, {ports, {struct, [{sslProxy,11998}, {httpsMgmt,19000}, {httpsCAPI,19500}, {proxy,12001}, {direct,12000}]}}, {services,[cbas,kv]}, {cbasMemoryQuota,3190}, {ftsMemoryQuota,319}, {indexMemoryQuota,512}, {memoryQuota,3190}]} [cluster:debug,2017-10-01T10:14:38.986-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:do_add_node_with_connectivity:553]Reply from engage_cluster on {"127.0.0.1",9001}: {ok,{struct,[{<<"availableStorage">>, {struct,[{<<"hdd">>, [{struct,[{<<"path">>,<<"/">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/dev">>}, {<<"sizeKBytes">>,4083712}, {<<"usagePercent">>,0}]}, {struct,[{<<"path">>,<<"/sys/fs/cgroup">>}, {<<"sizeKBytes">>,4083712}, {<<"usagePercent">>,0}]}, {struct,[{<<"path">>,<<"/ssh">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/latestbuilds">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/releases">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/usr/share/zoneinfo/Zulu">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/etc/timezone">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/etc/resolv.conf">>}, {<<"sizeKBytes">>,816744}, {<<"usagePercent">>,1}]}, {struct,[{<<"path">>,<<"/etc/hostname">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/etc/hosts">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/dev/shm">>}, {<<"sizeKBytes">>,65536}, {<<"usagePercent">>,0}]}, 
{struct,[{<<"path">>,<<"/home/couchbase/jenkins">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}, {struct,[{<<"path">>,<<"/home/couchbase/reporef">>}, {<<"sizeKBytes">>,53039240}, {<<"usagePercent">>,64}]}]}]}}, {<<"storageTotals">>, {struct,[{<<"ram">>, {struct,[{<<"total">>,8363446272}, {<<"quotaTotal">>,3344957440}, {<<"quotaUsed">>,0}, {<<"used">>,5858213888}, {<<"usedByData">>,0}, {<<"quotaUsedPerNode">>,0}, {<<"quotaTotalPerNode">>,3344957440}]}}, {<<"hdd">>, {struct,[{<<"total">>,54312181760}, {<<"quotaTotal">>,54312181760}, {<<"used">>,34759796326}, {<<"usedByData">>,0}, {<<"free">>,19552385434}]}}]}}, {<<"storage">>, {struct,[{<<"ssd">>,[]}, {<<"hdd">>, [{struct,[{<<"path">>, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/datadir">>}, {<<"index_path">>, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/datadir">>}, {<<"quotaMb">>,<<"none">>}, {<<"state">>,<<"ok">>}]}]}]}}, {<<"systemStats">>, {struct,[{<<"cpu_utilization_rate">>,0}, {<<"swap_total">>,0}, {<<"swap_used">>,0}, {<<"mem_total">>,0}, {<<"mem_free">>,0}]}}, {<<"interestingStats">>,{struct,[]}}, {<<"uptime">>,<<"56">>}, {<<"memoryTotal">>,0}, {<<"memoryFree">>,0}, {<<"mcdMemoryReserved">>,0}, {<<"mcdMemoryAllocated">>,0}, {<<"couchApiBase">>,<<"http://127.0.0.1:9501/">>}, {<<"couchApiBaseHTTPS">>,<<"https://127.0.0.1:19501/">>}, {<<"otpCookie">>, {sanitized,<<"VmEhWEyIdoMBtb/tFlVJZC9smUvl6hfgeejAmdABm5w=">>}}, {<<"clusterMembership">>,<<"active">>}, {<<"recoveryType">>,<<"none">>}, {<<"status">>,<<"healthy">>}, {<<"otpNode">>,<<"n_1@127.0.0.1">>}, {<<"thisNode">>,true}, {<<"hostname">>,<<"127.0.0.1:9001">>}, {<<"clusterCompatibility">>,327680}, {<<"version">>,<<"5.0.0-0000-enterprise">>}, {<<"os">>,<<"x86_64-unknown-linux-gnu">>}, {<<"ports">>, {struct,[{<<"sslProxy">>,11994}, {<<"httpsMgmt">>,19001}, {<<"httpsCAPI">>,19501}, {<<"proxy">>,12003}, {<<"direct">>,12002}]}}, {<<"services">>,[<<"kv">>]}, {<<"cbasMemoryQuota">>,3190}, {<<"ftsMemoryQuota">>,319}, {<<"indexMemoryQuota">>,512}, {<<"memoryQuota">>,3190}]}} [cluster:debug,2017-10-01T10:14:39.001-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:verify_otp_connectivity:622]port_please("n_1", "127.0.0.1") = 21104 [ns_server:debug,2017-10-01T10:14:39.008-07:00,n_0@172.17.0.2:mb_master<0.2237.0>:mb_master:update_peers:490]List of peers has changed from ['n_0@172.17.0.2'] to ['n_0@172.17.0.2', 'n_1@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:39.009-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([nodes_wanted,server_groups, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {node,'n_1@127.0.0.1',membership}, {node,'n_1@127.0.0.1',services}]..) 
[cluster:info,2017-10-01T10:14:39.010-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:node_add_transaction_finish:788]Started node add transaction by adding node 'n_1@127.0.0.1' to nodes_wanted (group: 0) [ns_server:debug,2017-10-01T10:14:39.010-07:00,n_0@172.17.0.2:ns_config_events<0.163.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted [ns_server:debug,2017-10-01T10:14:39.010-07:00,n_0@172.17.0.2:ns_cookie_manager<0.160.0>:ns_cookie_manager:do_cookie_sync:106]ns_cookie_manager do_cookie_sync [ns_server:debug,2017-10-01T10:14:39.010-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: server_groups -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097279}}]}, [{uuid,<<"0">>}, {name,<<"Group 1">>}, {nodes,['n_0@172.17.0.2','n_1@127.0.0.1']}]] [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',services} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097279}}]},cbas] [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097279}}]}| inactiveAdded] [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: nodes_wanted -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097279}}]}, 'n_0@172.17.0.2','n_1@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:<0.2279.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['n_0@172.17.0.2','n_1@127.0.0.1'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [ns_server:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{21,63674097279}}]}] [error_logger:info,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'n_1@127.0.0.1'}} [cluster:debug,2017-10-01T10:14:39.011-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:do_add_node_engaged_inner:695]Posting the following to complete_join on "127.0.0.1:9001": {struct, [{<<"targetNode">>,'n_1@127.0.0.1'}, {<<"requestedServices">>,[cbas]}, {availableStorage, {struct, [{hdd, [{struct, [{path,<<"/">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/dev">>}, {sizeKBytes,4083712}, {usagePercent,0}]}, {struct, [{path,<<"/sys/fs/cgroup">>}, {sizeKBytes,4083712}, {usagePercent,0}]}, {struct, [{path,<<"/ssh">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/latestbuilds">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/releases">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/usr/share/zoneinfo/Zulu">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, 
[{path,<<"/etc/timezone">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/etc/resolv.conf">>}, {sizeKBytes,816744}, {usagePercent,1}]}, {struct, [{path,<<"/etc/hostname">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/etc/hosts">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/dev/shm">>}, {sizeKBytes,65536}, {usagePercent,0}]}, {struct, [{path,<<"/home/couchbase/jenkins">>}, {sizeKBytes,53039240}, {usagePercent,64}]}, {struct, [{path,<<"/home/couchbase/reporef">>}, {sizeKBytes,53039240}, {usagePercent,64}]}]}]}}, {storageTotals, {struct, [{ram, {struct, [{total,8363446272}, {quotaTotal,3344957440}, {quotaUsed,104857600}, {used,5844676608}, {usedByData,27738432}, {quotaUsedPerNode,104857600}, {quotaTotalPerNode,3344957440}]}}, {hdd, {struct, [{total,54312181760}, {quotaTotal,54312181760}, {used,34759796326}, {usedByData,21263327}, {free,19552385434}]}}]}}, {storage, {struct, [{ssd,[]}, {hdd, [{struct, [{path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {index_path, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir">>}, {quotaMb,none}, {state,ok}]}]}]}}, {systemStats, {struct, [{cpu_utilization_rate,99.49874686716792}, {swap_total,8849977344}, {swap_used,526950400}, {mem_total,8363446272}, {mem_free,3464790016}]}}, {interestingStats, {struct, [{cmd_get,0.0}, {couch_docs_actual_disk_size,20489601}, {couch_docs_data_size,20474880}, {couch_spatial_data_size,0}, {couch_spatial_disk_size,0}, {couch_views_actual_disk_size,773726}, {couch_views_data_size,769574}, {curr_items,7303}, {curr_items_tot,7303}, {ep_bg_fetched,0.0}, {get_hits,0.0}, {mem_used,27738432}, {ops,0.0}, {vb_active_num_non_resident,0}, {vb_replica_curr_items,0}]}}, {uptime,<<"58">>}, {memoryTotal,8363446272}, {memoryFree,3464790016}, {mcdMemoryReserved,6380}, {mcdMemoryAllocated,6380}, {couchApiBase,<<"http://172.17.0.2:9500/">>}, {couchApiBaseHTTPS,<<"https://172.17.0.2:19500/">>}, {otpCookie, {sanitized,<<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>}}, {clusterMembership,<<"active">>}, {recoveryType,none}, {status,<<"healthy">>}, {otpNode,<<"n_0@172.17.0.2">>}, {thisNode,true}, {hostname,<<"172.17.0.2:9000">>}, {clusterCompatibility,327680}, {version,<<"5.0.0-0000-enterprise">>}, {os,<<"x86_64-unknown-linux-gnu">>}, {ports, {struct, [{sslProxy,11998}, {httpsMgmt,19000}, {httpsCAPI,19500}, {proxy,12001}, {direct,12000}]}}, {services,[cbas,kv]}, {cbasMemoryQuota,3190}, {ftsMemoryQuota,319}, {indexMemoryQuota,512}, {memoryQuota,3190}]} [error_logger:info,2017-10-01T10:14:39.029-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'n_1@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:39.030-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.2280.0>,shutdown}} [ns_server:debug,2017-10-01T10:14:39.030-07:00,n_0@172.17.0.2:<0.2279.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['n_0@172.17.0.2'], with cookie: {sanitized, <<"AK3LHGEgBhLAaJA3xmd7xs4yABETttlz+x3i/Oe33uM=">>} [error_logger:info,2017-10-01T10:14:39.030-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'n_1@127.0.0.1'}} 
[error_logger:info,2017-10-01T10:14:39.030-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{disconnect,'n_1@127.0.0.1'}} [user:info,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:ns_node_disco<0.792.0>:ns_node_disco:handle_info:192]Node 'n_0@172.17.0.2' saw that node 'n_1@127.0.0.1' came up. Tags: [] [ns_server:debug,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:<0.2219.0>:doc_replicator:nodeup_monitoring_loop:122]got nodeup event. Considering ddocs replication [ns_server:debug,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:<0.2253.0>:doc_replicator:nodeup_monitoring_loop:122]got nodeup event. Considering ddocs replication [ns_server:debug,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:<0.702.0>:doc_replicator:nodeup_monitoring_loop:122]got nodeup event. Considering ddocs replication [ns_server:debug,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:ns_node_disco_events<0.791.0>:ns_node_disco_rep_events:handle_event:42]Detected a new nodes (['n_1@127.0.0.1']). Moving config around. [ns_server:info,2017-10-01T10:14:39.139-07:00,n_0@172.17.0.2:ns_node_disco_events<0.791.0>:ns_node_disco_log:handle_event:46]ns_node_disco_log: nodes changed: ['n_0@172.17.0.2','n_1@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:39.140-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:39.140-07:00,n_0@172.17.0.2:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.700.0> [ns_server:debug,2017-10-01T10:14:39.140-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{'_',[],['$_']}],500} [ns_server:debug,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.700.0> [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',nodeup,{node,'n_1@127.0.0.1'},{isknown,false}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {sending_nodeup_to_locker,{node,'n_1@127.0.0.1'},{mytag,{1506,878079,141105}}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {casting_init_connect, {node,'n_1@127.0.0.1'}, {initmessage, {init_connect, {5,{1506,878079,141105}}, 'n_0@172.17.0.2', {locker,no_longer_a_pid,[],<0.14.0>}}}, {resolvers,[]}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',init_connect, {vsn,{5,{1506,878079,139216}}}, {node,'n_1@127.0.0.1'}, {initmsg,{locker,no_longer_a_pid,[],<21675.14.0>}}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {init_connect,{histhelocker,<21675.14.0>}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {his_the_locker,<21675.14.0>,{node,'n_1@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:39.141-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {the_locker_nodeup,{node,'n_1@127.0.0.1'},{mytag,{1506,878079,141105}}} 
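The nodeup handling above (ns_node_disco announcing that 'n_1@127.0.0.1' came up, the doc replicators waking on the same event) rides on standard OTP node monitoring; a minimal sketch of that primitive, independent of ns_server, with an illustrative module name:

%% net_kernel:monitor_nodes(true) delivers {nodeup,Node} / {nodedown,Node}
%% messages to the calling process -- the mechanism behind the
%% "saw that node ... came up" and "got nodeup event" entries above.
-module(node_watch_sketch).
-export([start/0]).

start() ->
    spawn(fun() ->
                  ok = net_kernel:monitor_nodes(true),
                  watch_loop()
          end).

watch_loop() ->
    receive
        {nodeup, Node} ->
            io:format("node ~p came up~n", [Node]),
            watch_loop();
        {nodedown, Node} ->
            io:format("node ~p went down~n", [Node]),
            watch_loop()
    end.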
[error_logger:info,2017-10-01T10:14:39.142-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_locker,{multi,[], [{him,'n_1@127.0.0.1',<21675.14.0>,5, {1506,878079,141105}}], [],nonode@nohost,false,false}} [ns_server:debug,2017-10-01T10:14:39.143-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:39.143-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:warn,2017-10-01T10:14:39.153-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:doc_replicator:loop:95]Remote server node {users_storage,'n_1@127.0.0.1'} process down: noproc [ns_server:warn,2017-10-01T10:14:39.153-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:doc_replicator:loop:95]Remote server node {xdc_rdoc_replication_srv,'n_1@127.0.0.1'} process down: noproc [ns_server:info,2017-10-01T10:14:39.337-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:pull_one_node:354]Pulling config from: 'n_1@127.0.0.1' [error_logger:info,2017-10-01T10:14:39.343-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {select_node,<0.14.0>,{us,['n_0@172.17.0.2','n_1@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:39.344-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.14.0>}, {global,[<0.14.0>,<21675.14.0>]}, {nodes,['n_0@172.17.0.2','n_1@127.0.0.1']}, {retries,0}, {times,1}} [ns_server:debug,2017-10-01T10:14:39.344-07:00,n_0@172.17.0.2:terse_bucket_info_uploader-beer-sample<0.1399.0>:terse_bucket_info_uploader:flush_refresh_msgs:83]Flushed 5 refresh messages [error_logger:info,2017-10-01T10:14:39.344-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,[<0.14.0>,<21675.14.0>]},<0.14.0>} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.14.0>}, {global,[<0.14.0>,<21675.14.0>]}, {nodes,['n_0@172.17.0.2','n_1@127.0.0.1']}, {replies,[{'n_0@172.17.0.2',true},{'n_1@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock_true,{global,[<0.14.0>,<21675.14.0>]}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {sending_lock_set,<0.14.0>,{his,<21675.14.0>}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {lock_sync_done,{p,<21675.14.0>,'n_1@127.0.0.1'},{me,<0.14.0>}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',exchange, {node,'n_1@127.0.0.1'}, {namelist,[]}, {resolvers,[{'n_1@127.0.0.1',{1506,878079,141105},<0.2300.0>}]}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',lock_is_set,{node,'n_1@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:39.355-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',exchange, {node,'n_1@127.0.0.1'}, {namelist,[]}, {resolvers,[{'n_1@127.0.0.1',{1506,878079,141105},<0.2300.0>}]}} 
[error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {resolver,{me,<0.2300.0>},{node,'n_1@127.0.0.1'},{namelist,[]}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {exchange_names_finish,{ops,[]},{res,[]}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {exchange_ops,{node,'n_1@127.0.0.1'}, {ops,[]}, {resolved,[]}, {mytag,{1506,878079,141105}}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {'####',resolved,{his_resolved,[]},{node,'n_1@127.0.0.1'}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {do_ops,{ops,[]}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {calling_cancel_locker,{1506,878079,141105}, [{{prot_vsn,'n_1@127.0.0.1'},5}, {{sync_tag_my,'n_1@127.0.0.1'},{1506,878079,141105}}, {'$ancestors',[kernel_sup,<0.10.0>]}, {{sync_tag_his,'n_1@127.0.0.1'},{1506,878079,139216}}, {{lock_id,'n_1@127.0.0.1'}, {global,[<0.14.0>,<21675.14.0>]}}, {'$initial_call',{global,init,1}}]} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {cancel_locker, {node,'n_1@127.0.0.1'}, {tag,{1506,878079,141105}}, {sync_tag_my,{1506,878079,141105}}, {resolvers,[{'n_1@127.0.0.1',{1506,878079,141105},<0.2300.0>}]}} [error_logger:info,2017-10-01T10:14:39.356-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {{resolver,<0.2300.0>}} [error_logger:info,2017-10-01T10:14:39.357-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {{node,'n_1@127.0.0.1'}, reset_node_state, [{{prot_vsn,'n_1@127.0.0.1'},5}, {{sync_tag_my,'n_1@127.0.0.1'},{1506,878079,141105}}, {'$ancestors',[kernel_sup,<0.10.0>]}, {{sync_tag_his,'n_1@127.0.0.1'},{1506,878079,139216}}, {{lock_id,'n_1@127.0.0.1'},{global,[<0.14.0>,<21675.14.0>]}}, {'$initial_call',{global,init,1}}]} [error_logger:info,2017-10-01T10:14:39.357-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {lock_set_loop,{known1,['n_0@172.17.0.2','n_1@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:39.357-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.14.0>}, {global,[<0.14.0>,<21675.14.0>]}, {nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:39.358-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.14.0>},{id,{global,[<0.14.0>,<21675.14.0>]}}} [error_logger:info,2017-10-01T10:14:39.358-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.14.0>}} [error_logger:info,2017-10-01T10:14:39.358-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.14.0>}, {global,[<0.14.0>,<21675.14.0>]}, {nodes,['n_1@127.0.0.1']}} [error_logger:info,2017-10-01T10:14:39.358-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_locker,{multi,[],[],[],nonode@nohost,true,false}} 
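The global_trace set_lock/del_lock entries above are OTP's global module taking and releasing the cluster-wide name-registry lock while the two nodes exchange registered names. The same primitive is available directly; a hedged sketch (module name and lock id are illustrative only):

-module(global_lock_sketch).
-export([with_lock/1]).

%% Take a cluster-wide lock on all connected nodes, run Fun, release the lock.
%% global:trans/2 packages this same pattern.
with_lock(Fun) ->
    LockId = {name_registry_sketch, self()},
    Nodes  = [node() | nodes()],
    true   = global:set_lock(LockId, Nodes, 3),   %% up to 3 retries
    try
        Fun()
    after
        global:del_lock(LockId, Nodes)
    end.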
[error_logger:info,2017-10-01T10:14:39.358-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_locker,{multi,[],[],['n_1@127.0.0.1'],'n_1@127.0.0.1',true,false}} [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a10e75f8f9d93f0dfadacbdcef859eca">>} -> [{'_vclock',[{<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}] [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',audit} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {log_path,"logs/n_1"}] [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',capi_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9501] [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_auth_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9330] [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_cc_client_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9323] [ns_server:debug,2017-10-01T10:14:39.360-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_cc_cluster_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9322] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_cc_http_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9321] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_cluster_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9325] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_data_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9326] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_debug_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9329] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_http_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, 
{<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9320] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_hyracks_console_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9324] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_messaging_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9318] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_result_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9327] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',cbas_ssl_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19301] [ns_server:debug,2017-10-01T10:14:39.361-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',compaction_daemon} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {check_interval,30}, {min_db_file_size,131072}, {min_view_file_size,20971520}] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',config_version} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| {5,0}] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',fts_http_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9201] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',fts_ssl_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19201] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_admin_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9106] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_http_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9108] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_https_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19108] 
[ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_scan_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9107] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_stcatchup_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9110] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_stinit_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9109] [ns_server:debug,2017-10-01T10:14:39.362-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',indexer_stmaint_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9111] [ns_server:debug,2017-10-01T10:14:39.364-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',is_enterprise} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| true] [ns_server:debug,2017-10-01T10:14:39.364-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',isasl} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/isasl.pw"}] [ns_server:debug,2017-10-01T10:14:39.364-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ldap_enabled} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| true] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',memcached} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {port,12002}, {dedicated_port,11995}, {ssl_port,11992}, {admin_user,"@ns_server"}, {other_users,["@cbq-engine","@projector","@goxdcr","@index","@fts","@cbas"]}, {admin_pass,"*****"}, {engines,[{membase,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/config/memcached.json"}, {audit_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/config/audit.json"}, {rbac_file,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/config/memcached.rbac"}, {log_path,"logs/n_1"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] 
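The logger settings just above ({log_cyclesize,10485760}, {log_sleeptime,19}, {log_path,"logs/n_1"}, {log_prefix,"memcached.log"}) are the arguments for the file_logger template that appears in the memcached_config entry below, a format-string/argument pair of the kind io_lib:format/2 expands; for example, evaluated in an erl shell:

%% Expands the "cyclesize=~B;sleeptime=~B;filename=~s/~s" template with the
%% per-node values shown above.
Template = "cyclesize=~B;sleeptime=~B;filename=~s/~s",
Args     = [10485760, 19, "logs/n_1", "memcached.log"],
lists:flatten(io_lib:format(Template, Args)).
%% -> "cyclesize=10485760;sleeptime=19;filename=logs/n_1/memcached.log"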
[ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',memcached_config} -> [{'_vclock', [{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/config/memcached-key.pem">>}, {cert, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {client_cert_auth,{memcached_config_mgr,client_cert_auth,[]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {connection_idle_time,connection_idle_time}, {privilege_debug,privilege_debug}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module, <<"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {admin,{"~s",[admin_user]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {rbac_file,{"~s",[rbac_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}, {xattr_enabled,{memcached_config_mgr,is_enabled,[[5,0]]}}]}] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',memcached_defaults} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {connection_idle_time,0}, {verbosity,0}, {privilege_debug,false}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',moxi} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {port,12003}, {verbosity,[]}] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ns_log} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {filename,"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/ns_log"}] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',port_servers} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}] [ns_server:debug,2017-10-01T10:14:39.365-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',projector_port} -> 
[{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 10001] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',query_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 9498] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',rest} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}, {port,9001}, {port_meta,local}] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ssl_capi_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19501] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 11994] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 11993] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ssl_query_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19498] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',ssl_rest_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 19001] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{2,63674097238}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| '_deleted'] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',uuid} -> [{'_vclock',[{<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{2,63674097279}}]}| <<"a10e75f8f9d93f0dfadacbdcef859eca">>] [ns_server:debug,2017-10-01T10:14:39.366-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',xdcr_rest_port} -> [{'_vclock',[{<<"94f95dc075b3698ba8673c686a0e3a6d">>,{1,63674097226}}, {<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097279}}]}| 13001] [error_logger:error,2017-10-01T10:14:39.795-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: ns_log:-start_link_crash_consumer/0-fun-0-/0 pid: <0.781.0> registered_name: [] exception exit: 
{{nodedown,'babysitter_of_n_0@127.0.0.1'}, {gen_server,call, [{ns_crash_log,'babysitter_of_n_0@127.0.0.1'}, consume,infinity]}} in function gen_server:call/3 (gen_server.erl, line 188) in call from ns_log:crash_consumption_loop/0 (src/ns_log.erl, line 63) in call from misc:delaying_crash/2 (src/misc.erl, line 1378) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.775.0>] dictionary: [] trap_exit: false status: running heap_size: 2586 stack_size: 27 reductions: 3420 neighbours: [error_logger:error,2017-10-01T10:14:39.795-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {{nodedown,'babysitter_of_n_0@127.0.0.1'}, {gen_server,call, [{ns_crash_log,'babysitter_of_n_0@127.0.0.1'}, consume,infinity]}} Offender: [{pid,<0.781.0>}, {name,ns_crash_log_consumer}, {mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:14:39.795-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.2317.0>}, {name,ns_crash_log_consumer}, {mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:14:39.988-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_kv) for the following buckets: [<<"beer-sample">>] [ns_server:debug,2017-10-01T10:14:39.988-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_views) for the following buckets: [<<"beer-sample">>] [ns_server:info,2017-10-01T10:14:39.989-07:00,n_0@172.17.0.2:<0.2344.0>:compaction_new_daemon:spawn_scheduled_kv_compactor:471]Start compaction of vbuckets for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:14:39.992-07:00,n_0@172.17.0.2:<0.2347.0>:compaction_new_daemon:bucket_needs_compaction:972]`beer-sample` data size is 3596512, disk size is 20474880 [ns_server:debug,2017-10-01T10:14:39.992-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. [ns_server:debug,2017-10-01T10:14:39.993-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:info,2017-10-01T10:14:39.996-07:00,n_0@172.17.0.2:<0.2346.0>:compaction_new_daemon:spawn_scheduled_views_compactor:497]Start compaction of indexes for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:14:39.998-07:00,n_0@172.17.0.2:<0.2350.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/main` data_size is 737019, disk_size is 769574 [ns_server:debug,2017-10-01T10:14:39.999-07:00,n_0@172.17.0.2:<0.2351.0>:compaction_new_daemon:view_needs_compaction:1064]`spatial_view/beer-sample/_design/beer/main` data_size is 0, disk_size is 0 [ns_server:debug,2017-10-01T10:14:40.000-07:00,n_0@172.17.0.2:<0.2352.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/replica` data_size is 0, disk_size is 4152 [ns_server:debug,2017-10-01T10:14:40.000-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. [ns_server:debug,2017-10-01T10:14:40.001-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 29s [ns_server:info,2017-10-01T10:14:40.080-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:do_generate_local_cert:545]Saved local cert for node 'n_0@172.17.0.2' [ns_server:info,2017-10-01T10:14:40.088-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:handle_info:456]Wrote new pem file [ns_server:debug,2017-10-01T10:14:40.088-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:handle_info:497]Going to notify following services: [ssl_service,capi_ssl_service,xdcr_proxy, query_svc,memcached,event] [ns_server:info,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.2382.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service query_svc [ns_server:info,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.2385.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service event [ns_server:info,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.2384.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service memcached [ns_server:debug,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:memcached_refresh<0.674.0>:memcached_refresh:handle_cast:55]Refresh of ssl_certs requested [ns_server:debug,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.679.0>:restartable:loop:71]Restarting child <0.680.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} Shutdown policy: 1000 Caller: {<0.2380.0>,#Ref<0.0.0.19062>} [ns_server:debug,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.2381.0>:ns_ports_manager:restart_port_by_name:43]Requesting restart of port xdcr_proxy [ns_server:debug,2017-10-01T10:14:40.089-07:00,n_0@172.17.0.2:<0.679.0>:restartable:shutdown_child:120]Successfully terminated process <0.680.0> [ns_server:debug,2017-10-01T10:14:40.094-07:00,n_0@172.17.0.2:memcached_refresh<0.674.0>:memcached_refresh:handle_info:89]Refresh of [ssl_certs] succeeded [ns_server:info,2017-10-01T10:14:40.096-07:00,n_0@172.17.0.2:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for n1ql [ns_server:info,2017-10-01T10:14:40.096-07:00,n_0@172.17.0.2:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for cbas 
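The compaction entries above pair each bucket's data size with its disk size and weigh them against the {30,undefined} thresholds ({Percent, AbsoluteBytes}). One plausible reading of that check, as a sketch rather than the exact ns_server formula:

-module(frag_sketch).
-export([needs_compaction/3]).

%% Fragmentation as the share of the on-disk file not occupied by live data,
%% compared against the percentage threshold; the 'undefined' absolute-size
%% threshold is ignored in this sketch.
needs_compaction(DataSize, DiskSize, {PercentThreshold, _SizeThreshold})
  when DiskSize > 0 ->
    FragPercent = (DiskSize - DataSize) * 100 / DiskSize,
    FragPercent >= PercentThreshold.

%% needs_compaction(3596512, 20474880, {30, undefined}) -> true  (~82% fragmented)
%% needs_compaction(737019,   769574, {30, undefined})  -> false (~4% fragmented)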
[ns_server:info,2017-10-01T10:14:40.097-07:00,n_0@172.17.0.2:<0.679.0>:menelaus_pluggable_ui:validate_plugin_spec:119]Loaded pluggable UI specification for fts [ns_server:debug,2017-10-01T10:14:40.100-07:00,n_0@172.17.0.2:<0.679.0>:restartable:start_child:98]Started child process <0.2386.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} [ns_server:info,2017-10-01T10:14:40.100-07:00,n_0@172.17.0.2:<0.2380.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service ssl_service [ns_server:info,2017-10-01T10:14:40.110-07:00,n_0@172.17.0.2:<0.2383.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service capi_ssl_service [ns_server:debug,2017-10-01T10:14:40.309-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:40.309-07:00,n_0@172.17.0.2:users_storage<0.701.0>:replicated_dets:handle_call:251]Suspended by process <0.700.0> [ns_server:debug,2017-10-01T10:14:40.309-07:00,n_0@172.17.0.2:users_replicator<0.700.0>:replicated_dets:select_from_dets_locked:298]Starting select with {users_storage,[{'_',[],['$_']}],500} [ns_server:debug,2017-10-01T10:14:40.310-07:00,n_0@172.17.0.2:users_storage<0.701.0>:replicated_dets:handle_call:258]Released by process <0.700.0> [ns_server:debug,2017-10-01T10:14:40.779-07:00,n_0@172.17.0.2:ns_ports_setup<0.880.0>:ns_ports_setup:children_loop_continue:111]Remote monitor <11719.74.0> was unpaused after node name change. Restart loop. [ns_server:debug,2017-10-01T10:14:40.782-07:00,n_0@172.17.0.2:ns_ports_setup<0.880.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,projector,saslauthd_port,goxdcr,xdcr_proxy,cbas] [ns_server:debug,2017-10-01T10:14:40.791-07:00,n_0@172.17.0.2:ns_ports_setup<0.880.0>:ns_ports_setup:set_children:78]Monitor ns_child_ports_sup <11719.74.0> [ns_server:info,2017-10-01T10:14:40.791-07:00,n_0@172.17.0.2:<0.2381.0>:ns_ssl_services_setup:notify_service:585]Successfully notified service xdcr_proxy [ns_server:debug,2017-10-01T10:14:40.791-07:00,n_0@172.17.0.2:memcached_config_mgr<0.2218.0>:memcached_config_mgr:init:47]ns_ports_setup seems to be ready [ns_server:info,2017-10-01T10:14:40.791-07:00,n_0@172.17.0.2:ns_ssl_services_setup<0.677.0>:ns_ssl_services_setup:handle_info:513]Succesfully notified services [event,memcached,query_svc,xdcr_proxy, capi_ssl_service,ssl_service] [ns_server:debug,2017-10-01T10:14:40.792-07:00,n_0@172.17.0.2:memcached_config_mgr<0.2218.0>:memcached_config_mgr:find_port_pid_loop:122]Found memcached port <11719.81.0> [ns_server:debug,2017-10-01T10:14:40.793-07:00,n_0@172.17.0.2:memcached_config_mgr<0.2218.0>:memcached_config_mgr:do_read_current_memcached_config:254]Got enoent while trying to read active memcached config from /home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/config/memcached.json.prev [ns_server:debug,2017-10-01T10:14:40.793-07:00,n_0@172.17.0.2:memcached_config_mgr<0.2218.0>:memcached_config_mgr:init:84]found memcached port to be already active [stats:error,2017-10-01T10:14:40.905-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, 
"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:41.713-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-cbauth<0.2445.0>:json_rpc_connection:init:74]Observed revrpc connection: label "cbas-cbauth", handling process <0.2445.0> [ns_server:debug,2017-10-01T10:14:41.713-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"cbas-cbauth",<0.2445.0>} started [stats:error,2017-10-01T10:14:41.905-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [error_logger:info,2017-10-01T10:14:42.631-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= alarm_handler: {set,{system_memory_high_watermark,[]}} [stats:error,2017-10-01T10:14:42.903-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, 
handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [stats:error,2017-10-01T10:14:43.913-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [stats:error,2017-10-01T10:14:44.910-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:45.678-07:00,n_0@172.17.0.2:xdcr_doc_replicator<0.2226.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [cluster:debug,2017-10-01T10:14:45.709-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:do_add_node_engaged_inner:704]Reply from complete_join on "127.0.0.1:9001": {ok,[]} [cluster:debug,2017-10-01T10:14:45.709-07:00,n_0@172.17.0.2:ns_cluster<0.161.0>:ns_cluster:handle_call:176]add_node("127.0.0.1", 9001, <<"0">>, ..) 
-> {ok,'n_1@127.0.0.1'} [ns_server:debug,2017-10-01T10:14:45.709-07:00,n_0@172.17.0.2:ns_audit<0.893.0>:ns_audit:handle_call:104]Audit add_node: [{user,<<"couchbase">>}, {services,[cbas]}, {port,9001}, {hostname,<<"127.0.0.1">>}, {groupUUID,<<"0">>}, {node,'n_1@127.0.0.1'}, {real_userid,{[{source,ns_server},{user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34188}]}}, {timestamp,<<"2017-10-01T10:14:45.709-07:00">>}] [rebalance:info,2017-10-01T10:14:45.881-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:drop_old_2i_indexes:1423]Going to drop possible old 2i indexes on nodes ['n_1@127.0.0.1'] [user:info,2017-10-01T10:14:45.881-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:idle:672]Starting rebalance, KeepNodes = ['n_0@172.17.0.2','n_1@127.0.0.1'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{22,63674097285}}]}] [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: counters -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}, {rebalance_start,1}] [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{23,63674097285}}]}] [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: graceful_failover_pid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}| <0.2612.0>] [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalancer_pid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}| undefined] [ns_server:debug,2017-10-01T10:14:45.882-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalance_status_uuid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}| <<"05c626874526906429dd592bd082543b">>] [ns_server:debug,2017-10-01T10:14:45.883-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalance_status -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}| running] [rebalance:info,2017-10-01T10:14:45.885-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:drop_old_2i_indexes:1429]Going to keep possible 2i indexes on nodes [] [rebalance:debug,2017-10-01T10:14:45.885-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:drop_old_2i_indexes:1445]Cleanup succeeded: [{'n_1@127.0.0.1',ok}] [ns_server:debug,2017-10-01T10:14:45.890-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{24,63674097285}}]}] [ns_server:debug,2017-10-01T10:14:45.891-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {node,'n_1@127.0.0.1',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097285}}]}| active] [ns_server:debug,2017-10-01T10:14:45.891-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: 
{node,'n_0@172.17.0.2',membership} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}| active] [ns_server:debug,2017-10-01T10:14:45.883-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([counters,graceful_failover_pid, rebalance_status,rebalance_status_uuid, rebalancer_pid, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [stats:error,2017-10-01T10:14:45.911-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:45.917-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {node,'n_0@172.17.0.2',membership}, {node,'n_1@127.0.0.1',membership}]..) 
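The repeated index_stats_collector reports above are all the same failure: cbas_rest:send/3 gets {error, econnrefused} back from lhttpc_client, which usually means nothing is listening yet on the port it polls for cbas stats, and the collector retries roughly once a second. A minimal sketch, assuming Python 3 and a saved copy of this log (the file name ns_server.debug.log is only a placeholder), for listing the timestamps of those failed polls:

    import re

    # Every ns_server log entry begins with a prefix of the form
    # [component:level,timestamp,node:process:module:function:line], as seen above.
    PREFIX_RE = re.compile(r"\[(\w+):(\w+),([0-9T:.+-]+),")

    def econnrefused_timestamps(path="ns_server.debug.log"):
        with open(path, errors="replace") as f:
            text = f.read()
        marks = list(PREFIX_RE.finditer(text))
        hits = []
        for i, mark in enumerate(marks):
            # An entry runs from one prefix to the next prefix (or end of file).
            end = marks[i + 1].start() if i + 1 < len(marks) else len(text)
            entry = text[mark.start():end]
            if "Exception in stats collector" in entry and "econnrefused" in entry:
                hits.append(mark.group(3))   # group 3 is the entry's timestamp
        return hits

    if __name__ == "__main__":
        for ts in econnrefused_timestamps():
            print(ts)

Each printed timestamp is one failed poll, which makes it easy to see how long the stats endpoint stayed unreachable.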
[ns_server:debug,2017-10-01T10:14:45.918-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:handle_call:115]Got full synchronization request from 'n_0@172.17.0.2' [ns_server:debug,2017-10-01T10:14:45.918-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:handle_call:121]Fully synchronized config in 16 us [ns_server:debug,2017-10-01T10:14:45.918-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:45.918-07:00,n_0@172.17.0.2:ns_audit<0.893.0>:ns_audit:handle_call:104]Audit rebalance_initiated: [{delta_recovery_buckets,all}, {ejected_nodes,[]}, {known_nodes,['n_0@172.17.0.2','n_1@127.0.0.1']}, {real_userid, {[{source,ns_server},{user,<<"couchbase">>}]}}, {remote,{[{ip,<<"127.0.0.1">>},{port,34197}]}}, {timestamp,<<"2017-10-01T10:14:45.918-07:00">>}] [rebalance:debug,2017-10-01T10:14:45.949-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:rebalance_kv:677]BucketConfigs = [{"beer-sample", [{repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map,[['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined]]}, {map_opts_hash,133465355}]}] [error_logger:info,2017-10-01T10:14:45.988-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {register_name,<0.2665.0>,ns_rebalance_observer,<0.2665.0>, #Fun} [error_logger:info,2017-10-01T10:14:45.988-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {loop_the_registrar,<0.15.0>,#Fun, {<0.2665.0>,#Ref<0.0.0.20584>}} [ns_server:debug,2017-10-01T10:14:45.991-07:00,n_0@172.17.0.2:compiled_roles_cache<0.703.0>:menelaus_roles:build_compiled_roles:556]Compile roles for user {"@cbas-cbauth",admin} [ns_server:debug,2017-10-01T10:14:45.993-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{25,63674097285}}]}] [ns_server:debug,2017-10-01T10:14:45.994-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/topology">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097285}}]}| <<"{\"nodes\":[{\"nodeId\":\"a7cadc9d6a7b1c5e2ac6210075d857d5\",\"priority\":0,\"opaque\":{\"cc-http-port\":\"9301\",\"host\":\"172.17.0.2\",\"master-node\":\"true\",\"num-iodevices\":\"1\",\"starting-partition-id\":\"0\"}}],\"id\":\"2ab8edf54b8304853305273dfd12a160\"}">>] [ns_server:debug,2017-10-01T10:14:45.994-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/cbas/topology">>}]..) 
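The value written to {metakv,<<"/cbas/topology">>} a little above is a small JSON document describing the cbas (analytics) topology: a single node entry whose nodeId is the same UUID that tags n_0's config changes, plus its host, cc-http-port and master-node flag. A minimal sketch in Python 3, with the logged value copied in verbatim (only the Erlang quote escaping removed), of unpacking it:

    import json

    # The /cbas/topology value from the config change logged above.
    TOPOLOGY = (
        '{"nodes":[{"nodeId":"a7cadc9d6a7b1c5e2ac6210075d857d5","priority":0,'
        '"opaque":{"cc-http-port":"9301","host":"172.17.0.2","master-node":"true",'
        '"num-iodevices":"1","starting-partition-id":"0"}}],'
        '"id":"2ab8edf54b8304853305273dfd12a160"}'
    )

    topology = json.loads(TOPOLOGY)
    print("topology id:", topology["id"])
    for node in topology["nodes"]:
        opaque = node["opaque"]
        role = "master" if opaque["master-node"] == "true" else "worker"
        print(node["nodeId"], opaque["host"],
              "cc-http-port", opaque["cc-http-port"], role)

At this point the topology still lists only that one node, even though 'n_1@127.0.0.1' has already been added with the cbas service; the rebalance that follows it has only just started.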
[error_logger:info,2017-10-01T10:14:45.997-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_1@127.0.0.1']}, {replies,[{'n_1@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:45.997-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_set_lock,{global,<0.15.0>},<0.15.0>} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {set_lock,{me,<0.15.0>}, {global,<0.15.0>}, {nodes,['n_0@172.17.0.2','n_1@127.0.0.1']}, {replies,[{'n_0@172.17.0.2',true},{'n_1@127.0.0.1',true}]}} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {ins_name,insert,{name,ns_rebalance_observer},{pid,<0.2665.0>}} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_0@172.17.0.2']}} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {handle_del_lock,{pid,<0.15.0>},{id,{global,<0.15.0>}}} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {remove_lock_1,{id,global},{pid,<0.15.0>}} [error_logger:info,2017-10-01T10:14:45.998-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {del_lock,{me,<0.15.0>},{global,<0.15.0>},{nodes,['n_1@127.0.0.1']}} [user:info,2017-10-01T10:14:45.999-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:rebalance_bucket:708]Started rebalancing bucket beer-sample [ns_server:debug,2017-10-01T10:14:45.999-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-service_api<0.2681.0>:json_rpc_connection:init:74]Observed revrpc connection: label "cbas-service_api", handling process <0.2681.0> [ns_server:debug,2017-10-01T10:14:46.000-07:00,n_0@172.17.0.2:service_agent-cbas<0.2254.0>:service_agent:do_handle_connection:328]Observed new json rpc connection for cbas: <0.2681.0> [ns_server:debug,2017-10-01T10:14:46.000-07:00,n_0@172.17.0.2:<0.2257.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {json_rpc_events,<0.2255.0>} exited with reason normal [ns_server:debug,2017-10-01T10:14:46.005-07:00,n_0@172.17.0.2:<0.2685.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1918.0>} exited with reason normal [rebalance:info,2017-10-01T10:14:45.999-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:rebalance_bucket:709]Rebalancing bucket "beer-sample" with config [{repl_type,dcp}, {uuid, <<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map, [['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined]]}, {map_opts_hash,133465355}] [rebalance:info,2017-10-01T10:14:46.009-07:00,n_0@172.17.0.2:<0.2687.0>:ns_rebalancer:rebalance_membase_bucket:734]Waiting for bucket "beer-sample" to be ready on ['n_0@172.17.0.2'] [rebalance:info,2017-10-01T10:14:46.012-07:00,n_0@172.17.0.2:<0.2687.0>:ns_rebalancer:rebalance_membase_bucket:738]Bucket is ready on all nodes [ns_server:debug,2017-10-01T10:14:46.012-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{26,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.013-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/config/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097286}}]}| <<"{\"address\":\"172.17.0.2\",\"analyticsCcHttpPort\":\"9301\",\"analyticsHttpListenPort\":\"9300\",\"authPort\":\"9310\",\"clusterAddress\":\"172.17.0.2\",\"defaultDir\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/@analytics\",\"initialRun\":false,\"iodevices\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_0/datadir/@analytics/iodevice\",\"logDir\":\"/"...>>] [ns_server:debug,2017-10-01T10:14:46.013-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv, 
<<"/cbas/config/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:46.062-07:00,n_0@172.17.0.2:<0.2612.0>:mb_map:generate_map_old:378]Natural map score: {0,0} [ns_server:debug,2017-10-01T10:14:46.077-07:00,n_0@172.17.0.2:<0.2612.0>:mb_map:generate_map_old:385]Rnd maps scores: {0,0}, {0,0} [ns_server:debug,2017-10-01T10:14:46.078-07:00,n_0@172.17.0.2:<0.2612.0>:mb_map:generate_map_old:392]Considering 1 maps: [{0,0}] [ns_server:debug,2017-10-01T10:14:46.078-07:00,n_0@172.17.0.2:<0.2612.0>:mb_map:generate_map_old:397]Best map score: {0,0} (true,true,true) [rebalance:debug,2017-10-01T10:14:46.079-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:do_rebalance_membase_bucket:790]Target map options: [{replication_topology,star}, {tags,undefined}, {max_slaves,10}] (hash: 133465355) [ns_server:debug,2017-10-01T10:14:46.079-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([vbucket_map_history, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:46.079-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{27,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.080-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: vbucket_map_history -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097278}}]}, {[['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2'|...], [...]|...], [{replication_topology,star},{tags,undefined},{max_slaves,10}]}] [rebalance:info,2017-10-01T10:14:46.079-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:run_mover:795]Target map (distance: {0,0}): [['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], 
['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined], ['n_0@172.17.0.2',undefined]] 
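
Note on the target map dumped above: it is a list with one entry per vbucket (the bucket config later in this log shows num_vbuckets 1024), and each entry is a replication chain of the form [ActiveNode, ReplicaNode]. Because only 'n_0@172.17.0.2' serves "beer-sample" and num_replicas is 1, every chain is ['n_0@172.17.0.2',undefined], i.e. the replica slot is unfilled. The Erlang below is a minimal illustrative sketch only (vbmap_sketch and its functions are made-up names, not ns_server or mb_map code) showing how such a single-node map can be built and summarized.

%% Illustrative sketch -- NOT part of ns_server.
-module(vbmap_sketch).
-export([single_node_map/2, summarize/1]).

%% One [Active, Replica] chain per vbucket; the replica slot stays
%% 'undefined' while the cluster has no second data node.
single_node_map(Node, NumVBuckets) ->
    [[Node, undefined] || _ <- lists:seq(1, NumVBuckets)].

%% Returns [{Chain, Count}] over the distinct chains in the map.
summarize(Map) ->
    [{Chain, length([C || C <- Map, C =:= Chain])}
     || Chain <- lists:usort(Map)].

For example, vbmap_sketch:summarize(vbmap_sketch:single_node_map('n_0@172.17.0.2', 1024)) evaluates to [{['n_0@172.17.0.2',undefined],1024}], which is exactly the shape of the map printed above.
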
[ns_server:debug,2017-10-01T10:14:46.087-07:00,n_0@172.17.0.2:<0.2665.0>:ns_rebalance_observer:initiate_bucket_rebalance:198]Initial estimates: [] [ns_server:debug,2017-10-01T10:14:46.087-07:00,n_0@172.17.0.2:<0.2665.0>:ns_rebalance_observer:initiate_bucket_rebalance:230]Moves: [] [ns_server:debug,2017-10-01T10:14:46.087-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:46.088-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{28,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.094-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:46.095-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{7,63674097286}}], {configs,[{"beer-sample", [{map,[]}, {fastForwardMap,[{0,[],['n_0@172.17.0.2',undefined]}, {1,[],['n_0@172.17.0.2',undefined]}, {2,[],['n_0@172.17.0.2',undefined]}, {3,[],['n_0@172.17.0.2',undefined]}, {4,[],['n_0@172.17.0.2',undefined]}, {5,[],['n_0@172.17.0.2',undefined]}, {6,[],['n_0@172.17.0.2',undefined]}, {7,[],['n_0@172.17.0.2',undefined]}, {8,[],['n_0@172.17.0.2',undefined]}, {9,[],['n_0@172.17.0.2',undefined]}, {10,[],['n_0@172.17.0.2',undefined]}, {11,[],['n_0@172.17.0.2',undefined]}, {12,[],['n_0@172.17.0.2',undefined]}, {13,[],['n_0@172.17.0.2',undefined]}, {14,[],['n_0@172.17.0.2',undefined]}, {15,[],['n_0@172.17.0.2',undefined]}, {16,[],['n_0@172.17.0.2',undefined]}, {17,[],['n_0@172.17.0.2',undefined]}, {18,[],['n_0@172.17.0.2',undefined]}, {19,[],['n_0@172.17.0.2',undefined]}, {20,[],['n_0@172.17.0.2',undefined]}, {21,[],['n_0@172.17.0.2',undefined]}, {22,[],['n_0@172.17.0.2',undefined]}, {23,[],['n_0@172.17.0.2',undefined]}, {24,[],['n_0@172.17.0.2',undefined]}, {25,[],['n_0@172.17.0.2',undefined]}, {26,[],['n_0@172.17.0.2',undefined]}, {27,[],['n_0@172.17.0.2',undefined]}, {28,[],['n_0@172.17.0.2',undefined]}, {29,[],['n_0@172.17.0.2',undefined]}, {30,[],['n_0@172.17.0.2',undefined]}, {31,[],['n_0@172.17.0.2',undefined]}, {32,[],['n_0@172.17.0.2',undefined]}, {33,[],['n_0@172.17.0.2',undefined]}, {34,[],['n_0@172.17.0.2',undefined]}, {35,[],['n_0@172.17.0.2',undefined]}, {36,[],['n_0@172.17.0.2',undefined]}, {37,[],['n_0@172.17.0.2',undefined]}, {38,[],['n_0@172.17.0.2',undefined]}, {39,[],['n_0@172.17.0.2',undefined]}, {40,[],['n_0@172.17.0.2',undefined]}, {41,[],['n_0@172.17.0.2',undefined]}, {42,[],['n_0@172.17.0.2',undefined]}, {43,[],['n_0@172.17.0.2',undefined]}, {44,[],['n_0@172.17.0.2',undefined]}, {45,[],['n_0@172.17.0.2',undefined]}, {46,[],['n_0@172.17.0.2',undefined]}, {47,[],['n_0@172.17.0.2',undefined]}, {48,[],['n_0@172.17.0.2',undefined]}, {49,[],['n_0@172.17.0.2',undefined]}, {50,[],['n_0@172.17.0.2',undefined]}, {51,[],['n_0@172.17.0.2',undefined]}, {52,[],['n_0@172.17.0.2',undefined]}, {53,[],['n_0@172.17.0.2',undefined]}, {54,[],['n_0@172.17.0.2',undefined]}, {55,[],['n_0@172.17.0.2',undefined]}, {56,[],['n_0@172.17.0.2',undefined]}, {57,[],['n_0@172.17.0.2',undefined]}, {58,[],['n_0@172.17.0.2',undefined]}, {59,[],['n_0@172.17.0.2',undefined]}, {60,[],['n_0@172.17.0.2',undefined]}, {61,[],['n_0@172.17.0.2',undefined]}, 
{62,[],['n_0@172.17.0.2',undefined]}, {63,[],['n_0@172.17.0.2',undefined]}, {64,[],['n_0@172.17.0.2',undefined]}, {65,[],['n_0@172.17.0.2',undefined]}, {66,[],['n_0@172.17.0.2',undefined]}, {67,[],['n_0@172.17.0.2',undefined]}, {68,[],['n_0@172.17.0.2',undefined]}, {69,[],['n_0@172.17.0.2',undefined]}, {70,[],['n_0@172.17.0.2',undefined]}, {71,[],['n_0@172.17.0.2',undefined]}, {72,[],['n_0@172.17.0.2',undefined]}, {73,[],['n_0@172.17.0.2',undefined]}, {74,[],['n_0@172.17.0.2',undefined]}, {75,[],['n_0@172.17.0.2',undefined]}, {76,[],['n_0@172.17.0.2',undefined]}, {77,[],['n_0@172.17.0.2',undefined]}, {78,[],['n_0@172.17.0.2',undefined]}, {79,[],['n_0@172.17.0.2',undefined]}, {80,[],['n_0@172.17.0.2',undefined]}, {81,[],['n_0@172.17.0.2',undefined]}, {82,[],['n_0@172.17.0.2',undefined]}, {83,[],['n_0@172.17.0.2'|...]}, {84,[],[...]}, {85,[],...}, {86,...}, {...}|...]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map_opts_hash,133465355}]}]}] [user:info,2017-10-01T10:14:46.131-07:00,n_0@172.17.0.2:<0.2753.0>:ns_vbucket_mover:init:116]Bucket "beer-sample" rebalance appears to be swap rebalance [ns_server:debug,2017-10-01T10:14:46.136-07:00,n_0@172.17.0.2:<0.2753.0>:ns_vbucket_mover:init:149]The following count of vbuckets do not need to be moved at all: 1024 [ns_server:debug,2017-10-01T10:14:46.136-07:00,n_0@172.17.0.2:<0.2753.0>:ns_vbucket_mover:init:149]The following moves are planned: [] [ns_server:debug,2017-10-01T10:14:46.136-07:00,n_0@172.17.0.2:<0.2753.0>:ns_vbucket_mover:spawn_workers:322]Got actions: [] [ns_server:debug,2017-10-01T10:14:46.136-07:00,n_0@172.17.0.2:<0.2753.0>:ns_vbucket_mover:terminate:210]running terminate/2 of ns_vbucket_mover. Reason: normal [ns_server:debug,2017-10-01T10:14:46.137-07:00,n_0@172.17.0.2:<0.2755.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.2753.0>} exited with reason normal [ns_server:info,2017-10-01T10:14:46.137-07:00,n_0@172.17.0.2:janitor_agent-beer-sample<0.1411.0>:janitor_agent:handle_info:805]Rebalancer <0.2753.0> died with reason normal. Undoing temporary vbucket states caused by rebalance [ns_server:debug,2017-10-01T10:14:46.138-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{29,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.138-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:14:46.139-07:00,n_0@172.17.0.2:janitor_agent-beer-sample<0.1411.0>:janitor_agent:set_rebalance_mref:854]Killing apply_vbucket_states_worker: <0.2757.0> [ns_server:debug,2017-10-01T10:14:46.143-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([buckets, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
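
Note on the mover output above: ns_vbucket_mover reports a swap rebalance, "The following count of vbuckets do not need to be moved at all: 1024" and "The following moves are planned: []", because the current map and the freshly generated target map are identical. The sketch below is a hypothetical helper (planned_moves/2 is not the actual ns_vbucket_mover/ns_rebalancer logic) that captures the diff idea under the assumption that both maps have one chain per vbucket and equal length: a move is emitted only where the chains differ, so identical maps yield [].

%% Illustrative sketch -- NOT ns_server code. Both maps are assumed to be
%% equal-length lists of [Active, Replica] chains indexed by vbucket id.
-module(rebalance_sketch).
-export([planned_moves/2]).

planned_moves(CurrentMap, TargetMap) ->
    Indexed = lists:zip3(lists:seq(0, length(CurrentMap) - 1),
                         CurrentMap, TargetMap),
    %% Keep only vbuckets whose chain actually changes.
    [{VBucket, OldChain, NewChain}
     || {VBucket, OldChain, NewChain} <- Indexed, OldChain =/= NewChain].

Applied to two copies of the 1024-entry map above, planned_moves/2 returns [], matching the "Got actions: []" line and the mover terminating immediately with reason normal.
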
[ns_server:debug,2017-10-01T10:14:46.145-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:run_verify_replication:903]Spawned verify_replication worker: <0.2768.0> [ns_server:debug,2017-10-01T10:14:46.152-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:46.153-07:00,n_0@172.17.0.2:capi_doc_replicator-beer-sample<0.2211.0>:doc_replicator:loop:58]doing replicate_newnodes_docs [ns_server:debug,2017-10-01T10:14:46.153-07:00,n_0@172.17.0.2:<0.2679.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {master_activity_events,<0.2665.0>} exited with reason shutdown [error_logger:info,2017-10-01T10:14:46.154-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{item,ns_rebalance_observer},{pid,<0.2665.0>}} [rebalance:info,2017-10-01T10:14:46.155-07:00,n_0@172.17.0.2:<0.2612.0>:ns_rebalancer:update_service_map:503]Updating service map for cbas: ['n_0@172.17.0.2','n_1@127.0.0.1'] [error_logger:info,2017-10-01T10:14:46.155-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]global_trace: {delete_global_name,{name,ns_rebalance_observer, {pid,<0.2665.0>}, {'n_0@172.17.0.2',<0.2665.0>}}} [ns_server:debug,2017-10-01T10:14:46.155-07:00,n_0@172.17.0.2:service_rebalancer-cbas<0.2780.0>:service_agent:wait_for_agents:77]Waiting for the service agents for service cbas to come up on nodes: ['n_0@172.17.0.2','n_1@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:46.157-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{8,63674097286}}], {configs,[{"beer-sample", [{map,[]}, {fastForwardMap,[{0,['n_0@172.17.0.2',undefined],[]}, {1,['n_0@172.17.0.2',undefined],[]}, {2,['n_0@172.17.0.2',undefined],[]}, {3,['n_0@172.17.0.2',undefined],[]}, {4,['n_0@172.17.0.2',undefined],[]}, {5,['n_0@172.17.0.2',undefined],[]}, {6,['n_0@172.17.0.2',undefined],[]}, {7,['n_0@172.17.0.2',undefined],[]}, {8,['n_0@172.17.0.2',undefined],[]}, {9,['n_0@172.17.0.2',undefined],[]}, {10,['n_0@172.17.0.2',undefined],[]}, {11,['n_0@172.17.0.2',undefined],[]}, {12,['n_0@172.17.0.2',undefined],[]}, {13,['n_0@172.17.0.2',undefined],[]}, {14,['n_0@172.17.0.2',undefined],[]}, {15,['n_0@172.17.0.2',undefined],[]}, {16,['n_0@172.17.0.2',undefined],[]}, {17,['n_0@172.17.0.2',undefined],[]}, {18,['n_0@172.17.0.2',undefined],[]}, {19,['n_0@172.17.0.2',undefined],[]}, {20,['n_0@172.17.0.2',undefined],[]}, {21,['n_0@172.17.0.2',undefined],[]}, {22,['n_0@172.17.0.2',undefined],[]}, {23,['n_0@172.17.0.2',undefined],[]}, {24,['n_0@172.17.0.2',undefined],[]}, {25,['n_0@172.17.0.2',undefined],[]}, {26,['n_0@172.17.0.2',undefined],[]}, {27,['n_0@172.17.0.2',undefined],[]}, {28,['n_0@172.17.0.2',undefined],[]}, {29,['n_0@172.17.0.2',undefined],[]}, {30,['n_0@172.17.0.2',undefined],[]}, {31,['n_0@172.17.0.2',undefined],[]}, {32,['n_0@172.17.0.2',undefined],[]}, {33,['n_0@172.17.0.2',undefined],[]}, {34,['n_0@172.17.0.2',undefined],[]}, {35,['n_0@172.17.0.2',undefined],[]}, {36,['n_0@172.17.0.2',undefined],[]}, {37,['n_0@172.17.0.2',undefined],[]}, {38,['n_0@172.17.0.2',undefined],[]}, {39,['n_0@172.17.0.2',undefined],[]}, {40,['n_0@172.17.0.2',undefined],[]}, {41,['n_0@172.17.0.2',undefined],[]}, {42,['n_0@172.17.0.2',undefined],[]}, {43,['n_0@172.17.0.2',undefined],[]}, {44,['n_0@172.17.0.2',undefined],[]}, {45,['n_0@172.17.0.2',undefined],[]}, {46,['n_0@172.17.0.2',undefined],[]}, 
{47,['n_0@172.17.0.2',undefined],[]}, {48,['n_0@172.17.0.2',undefined],[]}, {49,['n_0@172.17.0.2',undefined],[]}, {50,['n_0@172.17.0.2',undefined],[]}, {51,['n_0@172.17.0.2',undefined],[]}, {52,['n_0@172.17.0.2',undefined],[]}, {53,['n_0@172.17.0.2',undefined],[]}, {54,['n_0@172.17.0.2',undefined],[]}, {55,['n_0@172.17.0.2',undefined],[]}, {56,['n_0@172.17.0.2',undefined],[]}, {57,['n_0@172.17.0.2',undefined],[]}, {58,['n_0@172.17.0.2',undefined],[]}, {59,['n_0@172.17.0.2',undefined],[]}, {60,['n_0@172.17.0.2',undefined],[]}, {61,['n_0@172.17.0.2',undefined],[]}, {62,['n_0@172.17.0.2',undefined],[]}, {63,['n_0@172.17.0.2',undefined],[]}, {64,['n_0@172.17.0.2',undefined],[]}, {65,['n_0@172.17.0.2',undefined],[]}, {66,['n_0@172.17.0.2',undefined],[]}, {67,['n_0@172.17.0.2',undefined],[]}, {68,['n_0@172.17.0.2',undefined],[]}, {69,['n_0@172.17.0.2',undefined],[]}, {70,['n_0@172.17.0.2',undefined],[]}, {71,['n_0@172.17.0.2',undefined],[]}, {72,['n_0@172.17.0.2',undefined],[]}, {73,['n_0@172.17.0.2',undefined],[]}, {74,['n_0@172.17.0.2',undefined],[]}, {75,['n_0@172.17.0.2',undefined],[]}, {76,['n_0@172.17.0.2',undefined],[]}, {77,['n_0@172.17.0.2',undefined],[]}, {78,['n_0@172.17.0.2',undefined],[]}, {79,['n_0@172.17.0.2',undefined],[]}, {80,['n_0@172.17.0.2',undefined],[]}, {81,['n_0@172.17.0.2',undefined],[]}, {82,['n_0@172.17.0.2',undefined],[]}, {83,['n_0@172.17.0.2',undefined],[]}, {84,['n_0@172.17.0.2'|...],[]}, {85,[...],...}, {86,...}, {...}|...]}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map_opts_hash,133465355}]}]}] [ns_server:debug,2017-10-01T10:14:46.159-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {service_map,cbas}]..) 
[ns_server:debug,2017-10-01T10:14:46.159-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{30,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.160-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: buckets -> [[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{9,63674097286}}], {configs,[{"beer-sample", [{map,[]}, {fastForwardMap,[]}, {deltaRecoveryMap,undefined}, {repl_type,dcp}, {uuid,<<"3a3b0a10eb5856c673f6293be848aab5">>}, {auth_type,sasl}, {replica_index,true}, {ram_quota,104857600}, {flush_enabled,false}, {num_threads,3}, {eviction_policy,value_only}, {conflict_resolution_type,seqno}, {storage_mode,couchstore}, {type,membase}, {num_vbuckets,1024}, {num_replicas,1}, {replication_topology,star}, {servers,['n_0@172.17.0.2']}, {sasl_password,"*****"}, {map_opts_hash,133465355}]}]}] [ns_server:debug,2017-10-01T10:14:46.160-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{31,63674097286}}]}] [ns_server:debug,2017-10-01T10:14:46.160-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {service_map,cbas} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{3,63674097286}}]}, 'n_0@172.17.0.2','n_1@127.0.0.1'] [ns_server:debug,2017-10-01T10:14:46.167-07:00,n_0@172.17.0.2:service_rebalancer-cbas<0.2780.0>:service_agent:wait_for_agents_loop:95]All service agents are ready for cbas [ns_server:debug,2017-10-01T10:14:46.170-07:00,n_0@172.17.0.2:service_rebalancer-cbas-worker<0.2806.0>:service_rebalancer:rebalance:98]Rebalancing service cbas. KeepNodes: ['n_0@172.17.0.2','n_1@127.0.0.1'] EjectNodes: [] DeltaNodes: [] [error_logger:info,2017-10-01T10:14:46.812-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.2318.0>,shutdown}} [user:debug,2017-10-01T10:14:46.812-07:00,n_0@172.17.0.2:<0.2317.0>:ns_log:crash_consumption_loop:70]Service 'cbas' exited with status 0. Restarting. Messages: 239:stderr:2017-10-01T10:14:40.206-07:00 INFO CBAS.work.WorkQueue [Worker:ClusterController] Executing: JobCleanup: JobId@JID:2 Status@FAILURE Exceptions@[org.apache.hyracks.api.exceptions.HyracksDataException: java.lang.InterruptedException] , 128:stderr:2017-10-01T10:14:40.206-07:00 INFO CBAS.work.JobCleanupWork [Worker:ClusterController] Cleanup for JobRun with id: JID:2 , 248:stderr:2017-10-01T10:14:40.208-07:00 INFO CBAS.work.WorkQueue [Worker:ClusterController] Executing: JobCleanup: JobId@JID:2 Status@FAILURE Exceptions@[org.apache.hyracks.api.exceptions.HyracksDataException: HYR0003: java.lang.InterruptedException] , 128:stderr:2017-10-01T10:14:40.208-07:00 INFO CBAS.work.JobCleanupWork [Worker:ClusterController] Cleanup for JobRun with id: JID:2 , 142:stderr:2017-10-01T10:14:40.208-07:00 WARN CBAS.job.JobManager [Worker:ClusterController] Ignoring duplicate cleanup for JobRun with id: JID:2 , [goport(/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/install/bin/cbas)] 2017/10/01 10:14:40 Timeout while flushing stderr [user:debug,2017-10-01T10:14:46.815-07:00,n_0@172.17.0.2:<0.2317.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. 
Messages: [stats:error,2017-10-01T10:14:46.905-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:debug,2017-10-01T10:14:47.276-07:00,n_0@172.17.0.2:service_rebalancer-cbas-worker<0.2806.0>:service_rebalancer:rebalance:102]Got node infos: [{'n_1@127.0.0.1',[{node_id,<<"a10e75f8f9d93f0dfadacbdcef859eca">>}, {priority,0}, {opaque,{[{<<"cc-http-port">>,<<"9321">>}, {<<"host">>,<<"127.0.0.1">>}, {<<"num-iodevices">>,<<"1">>}]}}]}, {'n_0@172.17.0.2',[{node_id,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {priority,0}, {opaque,{[{<<"cc-http-port">>,<<"9301">>}, {<<"host">>,<<"172.17.0.2">>}, {<<"master-node">>,<<"true">>}, {<<"num-iodevices">>,<<"1">>}, {<<"starting-partition-id">>,<<"0">>}]}}]}] [ns_server:debug,2017-10-01T10:14:47.276-07:00,n_0@172.17.0.2:service_rebalancer-cbas-worker<0.2806.0>:service_rebalancer:rebalance:105]Rebalance id is <<"a1b05493cb1a6d530eb30f1411715d17">> [ns_server:debug,2017-10-01T10:14:47.282-07:00,n_0@172.17.0.2:service_rebalancer-cbas-worker<0.2806.0>:service_rebalancer:rebalance:114]Using node 'n_0@172.17.0.2' as a leader [ns_server:debug,2017-10-01T10:14:47.287-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{32,63674097287}}]}] [ns_server:debug,2017-10-01T10:14:47.287-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097287}}]}| <<"{\"Command\":1,\"Extra\":\"a1b05493cb1a6d530eb30f1411715d17\"}">>] [ns_server:debug,2017-10-01T10:14:47.287-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv, <<"/cbas/node/a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) 
[ns_server:debug,2017-10-01T10:14:47.290-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{33,63674097287}}]}] [ns_server:debug,2017-10-01T10:14:47.290-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/nextPartitionId">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097287}}]}| <<"2">>] [ns_server:debug,2017-10-01T10:14:47.291-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/cbas/nextPartitionId">>}]..) [ns_server:debug,2017-10-01T10:14:47.292-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{34,63674097287}}]}] [ns_server:debug,2017-10-01T10:14:47.292-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/node/a10e75f8f9d93f0dfadacbdcef859eca">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097287}}]}| <<"{\"Command\":1,\"Extra\":\"a1b05493cb1a6d530eb30f1411715d17\"}">>] [ns_server:debug,2017-10-01T10:14:47.292-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv, <<"/cbas/node/a10e75f8f9d93f0dfadacbdcef859eca">>}]..) [ns_server:debug,2017-10-01T10:14:47.293-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{35,63674097287}}]}] [ns_server:debug,2017-10-01T10:14:47.294-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([{local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {metakv,<<"/cbas/topology">>}]..) 
[ns_server:debug,2017-10-01T10:14:47.294-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/topology">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{3,63674097287}}]}| <<"{\"nodes\":[{\"nodeId\":\"a7cadc9d6a7b1c5e2ac6210075d857d5\",\"priority\":0,\"opaque\":{\"cc-http-port\":\"9301\",\"host\":\"172.17.0.2\",\"master-node\":\"true\",\"num-iodevices\":\"1\",\"starting-partition-id\":\"0\"}},{\"nodeId\":\"a10e75f8f9d93f0dfadacbdcef859eca\",\"priority\":0,\"opaque\":{\"cc-http-port\":\"9321\",\"host\":\"127.0.0.1\",\"num-iodevices\":\"1\",\"starting-partition-id\":\"1\"}}],\"id\":\"a1b05493cb1a6d530eb30f1411715d1"...>>] [ns_server:debug,2017-10-01T10:14:47.334-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a10e75f8f9d93f0dfadacbdcef859eca">>} -> [{'_vclock',[{<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{2,63674097287}}]}] [ns_server:debug,2017-10-01T10:14:47.334-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {metakv,<<"/cbas/config/node/a10e75f8f9d93f0dfadacbdcef859eca">>} -> [{'_vclock',[{<<"a10e75f8f9d93f0dfadacbdcef859eca">>,{1,63674097287}}]}| <<"{\"address\":\"127.0.0.1\",\"analyticsCcHttpPort\":\"9301\",\"analyticsHttpListenPort\":\"9320\",\"authPort\":\"9330\",\"clusterAddress\":\"172.17.0.2\",\"defaultDir\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/datadir/@analytics\",\"initialRun\":false,\"iodevices\":\"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/ns_server/data/n_1/datadir/@analytics/iodevice\",\"logDir\":\"/h"...>>] [stats:error,2017-10-01T10:14:47.906-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:info,2017-10-01T10:14:48.904-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [stats:error,2017-10-01T10:14:48.905-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, 
"/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]} [ns_server:info,2017-10-01T10:14:53.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:14:58.905-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:03.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:08.905-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:debug,2017-10-01T10:15:09.002-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_views) for the following buckets: [<<"beer-sample">>] [ns_server:info,2017-10-01T10:15:09.006-07:00,n_0@172.17.0.2:<0.3643.0>:compaction_new_daemon:spawn_scheduled_views_compactor:497]Start compaction of indexes for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:15:09.007-07:00,n_0@172.17.0.2:<0.3646.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/main` data_size is 737019, disk_size is 769574 [ns_server:debug,2017-10-01T10:15:09.007-07:00,n_0@172.17.0.2:<0.3647.0>:compaction_new_daemon:view_needs_compaction:1064]`spatial_view/beer-sample/_design/beer/main` data_size is 0, disk_size is 0 [ns_server:debug,2017-10-01T10:15:09.007-07:00,n_0@172.17.0.2:<0.3648.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/replica` data_size is 0, disk_size is 4152 [ns_server:debug,2017-10-01T10:15:09.008-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. [ns_server:debug,2017-10-01T10:15:09.008-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [error_logger:info,2017-10-01T10:15:09.965-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.3700.0>}, {name,disk_log_sup}, {mfargs,{disk_log_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}] [error_logger:info,2017-10-01T10:15:09.966-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.3701.0>}, {name,disk_log_server}, {mfargs,{disk_log_server,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:15:09.994-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_kv) for the following buckets: [<<"beer-sample">>] [ns_server:info,2017-10-01T10:15:09.994-07:00,n_0@172.17.0.2:<0.3704.0>:compaction_new_daemon:spawn_scheduled_kv_compactor:471]Start compaction of vbuckets for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:15:09.995-07:00,n_0@172.17.0.2:<0.3706.0>:compaction_new_daemon:bucket_needs_compaction:972]`beer-sample` data size is 3596512, disk size is 20474880 [ns_server:debug,2017-10-01T10:15:09.996-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. [ns_server:debug,2017-10-01T10:15:09.996-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:info,2017-10-01T10:15:13.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:18.904-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:23.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:24.105-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:pull_one_node:354]Pulling config from: 'n_1@127.0.0.1' [ns_server:info,2017-10-01T10:15:28.906-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:33.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:info,2017-10-01T10:15:38.905-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:debug,2017-10-01T10:15:39.009-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_views) for the following buckets: [<<"beer-sample">>] [ns_server:info,2017-10-01T10:15:39.013-07:00,n_0@172.17.0.2:<0.4698.0>:compaction_new_daemon:spawn_scheduled_views_compactor:497]Start compaction of indexes for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:15:39.014-07:00,n_0@172.17.0.2:<0.4701.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/main` data_size is 737019, disk_size is 769574 [ns_server:debug,2017-10-01T10:15:39.014-07:00,n_0@172.17.0.2:<0.4702.0>:compaction_new_daemon:view_needs_compaction:1064]`spatial_view/beer-sample/_design/beer/main` data_size is 0, disk_size is 0 [ns_server:debug,2017-10-01T10:15:39.014-07:00,n_0@172.17.0.2:<0.4703.0>:compaction_new_daemon:view_needs_compaction:1064]`mapreduce_view/beer-sample/_design/beer/replica` data_size is 0, disk_size is 4152 [ns_server:debug,2017-10-01T10:15:39.015-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. [ns_server:debug,2017-10-01T10:15:39.015-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2017-10-01T10:15:39.997-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_scheduler_message:1312]Starting compaction (compact_kv) for the following buckets: [<<"beer-sample">>] [ns_server:info,2017-10-01T10:15:39.997-07:00,n_0@172.17.0.2:<0.4754.0>:compaction_new_daemon:spawn_scheduled_kv_compactor:471]Start compaction of vbuckets for bucket beer-sample with config: [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2017-10-01T10:15:39.999-07:00,n_0@172.17.0.2:<0.4756.0>:compaction_new_daemon:bucket_needs_compaction:972]`beer-sample` data size is 3596512, disk size is 20474880 [ns_server:debug,2017-10-01T10:15:39.999-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_new_daemon:process_compactors_exit:1353]Finished compaction iteration. 
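Both compaction passes above check beer-sample against a database_fragmentation_threshold and view_fragmentation_threshold of {30,undefined}, i.e. 30 percent fragmentation and no absolute-size trigger. Taking fragmentation as (disk_size - data_size) / disk_size (a common definition for append-only couchstore files; the log itself does not state the formula), the reported sizes work out as in the sketch below. The view index is well under the threshold, which matches it being left alone; the KV files come out far above it, so the absence of a vbucket compaction suggests an additional guard (for example a minimum file size), which this log does not show and is only an assumption here.

    %% Erlang shell sketch of the arithmetic (assumed formula, see above).
    Frag = fun(DataSize, DiskSize) -> (DiskSize - DataSize) * 100 / DiskSize end.
    Frag(737019, 769574).    %% view index: ~4.2%, below the 30% threshold
    Frag(3596512, 20474880). %% beer-sample KV files: ~82.4%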
[ns_server:debug,2017-10-01T10:15:39.999-07:00,n_0@172.17.0.2:compaction_new_daemon<0.959.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:info,2017-10-01T10:15:43.903-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:handle_info:484]Skipping janitor in state rebalancing [ns_server:error,2017-10-01T10:15:47.283-07:00,n_0@172.17.0.2:service_agent-cbas<0.2254.0>:service_agent:handle_info:235]Linked process <0.2805.0> died with reason {timeout, {gen_server,call, [<0.2681.0>, {call, "ServiceAPI.StartTopologyChange", #Fun}, 60000]}}. Terminating [error_logger:error,2017-10-01T10:15:47.283-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server <0.2805.0> terminating ** Last message in was {'$gen_cast',#Fun} ** When Server state == [] ** Reason for termination == ** {timeout,{gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}} [ns_server:error,2017-10-01T10:15:47.283-07:00,n_0@172.17.0.2:service_agent-cbas<0.2254.0>:service_agent:terminate:264]Terminating abnormally [ns_server:error,2017-10-01T10:15:47.283-07:00,n_0@172.17.0.2:service_agent-cbas<0.2254.0>:service_agent:terminate:269]Terminating json rpc connection for cbas: <0.2681.0> [error_logger:error,2017-10-01T10:15:47.283-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: work_queue:init/1 pid: <0.2805.0> registered_name: [] exception exit: {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}} in function gen_server:terminate/6 (gen_server.erl, line 744) ancestors: ['service_agent-cbas',service_agent_children_sup, service_agent_sup,ns_server_sup,ns_server_nodes_sup, <0.173.0>,ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.2254.0>] dictionary: [{connection,<0.2681.0>}] trap_exit: false status: running heap_size: 610 stack_size: 27 reductions: 372 neighbours: [ns_server:debug,2017-10-01T10:15:47.284-07:00,n_0@172.17.0.2:<0.2256.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.2254.0>} exited with reason {linked_process_died, <0.2805.0>, {timeout, {gen_server, call, [<0.2681.0>, {call, "ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} [ns_server:error,2017-10-01T10:15:47.284-07:00,n_0@172.17.0.2:service_rebalancer-cbas<0.2780.0>:service_rebalancer:run_rebalance:80]Agent terminated during the rebalance: {'DOWN',#Ref<0.0.0.21277>,process, <0.2254.0>, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call, "ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}} [error_logger:error,2017-10-01T10:15:47.284-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server 'service_agent-cbas' terminating ** Last message in was {'EXIT',<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} ** When Server state == {state,cbas, {dict,4,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}, {{[[{uuid,<<"a10e75f8f9d93f0dfadacbdcef859eca">>}| 'n_1@127.0.0.1']], [], [[{node,'n_1@127.0.0.1'}| <<"a10e75f8f9d93f0dfadacbdcef859eca">>]], [],[],[], [[{uuid,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}| 'n_0@172.17.0.2']], [],[],[],[],[],[], [[{node,'n_0@172.17.0.2'}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>]], [],[]}}}, <0.2681.0>,#Ref<0.0.0.20699>,<0.2780.0>, #Ref<0.0.0.21295>,<0.2805.0>, 
{[{<0.2806.0>,#Ref<0.0.0.21579>}],[]}, undefined, {<<"NA==">>, [[{<<"rev">>,<<"NA==">>}, {<<"id">>, <<"prepare/a1b05493cb1a6d530eb30f1411715d17">>}, {<<"type">>,<<"task-prepared">>}, {<<"status">>,<<"task-running">>}, {<<"isCancelable">>,true}, {<<"progress">>,0}, {<<"extra">>, {[{<<"rebalanceId">>, <<"a1b05493cb1a6d530eb30f1411715d17">>}]}}]]}, {<<"NA==">>, {topology, ['n_0@172.17.0.2','n_1@127.0.0.1'], [<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>, <<"a10e75f8f9d93f0dfadacbdcef859eca">>], true,[]}}, <0.2683.0>,<0.2684.0>} ** Reason for termination == ** {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} [ns_server:debug,2017-10-01T10:15:47.284-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"projector-cbauth",<0.1127.0>} needs_update [ns_server:debug,2017-10-01T10:15:47.285-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.1011.0>} needs_update [ns_server:error,2017-10-01T10:15:47.285-07:00,n_0@172.17.0.2:service_agent-cbas<0.5009.0>:service_agent:handle_call:186]Got rebalance-only call {if_rebalance,<0.2780.0>,unset_rebalancer} that doesn't match rebalancer pid undefined [error_logger:error,2017-10-01T10:15:47.285-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: service_agent:init/1 pid: <0.2254.0> registered_name: 'service_agent-cbas' exception exit: {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} in function gen_server:terminate/6 (gen_server.erl, line 744) ancestors: [service_agent_children_sup,service_agent_sup,ns_server_sup, ns_server_nodes_sup,<0.173.0>,ns_server_cluster_sup, <0.89.0>] messages: [{'EXIT',<0.2683.0>, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}}, {'EXIT',<0.2684.0>, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}}] links: [<0.2256.0>,<0.884.0>] dictionary: [] trap_exit: true status: running heap_size: 6772 stack_size: 27 reductions: 7511 neighbours: [error_logger:error,2017-10-01T10:15:47.286-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,json_rpc_connection_sup} Context: child_terminated Reason: {service_agent_died, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}} Offender: [{pid,<0.2681.0>}, {name,json_rpc_connection}, {mfargs,{json_rpc_connection,start_link,undefined}}, {restart_type,temporary}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:error,2017-10-01T10:15:47.286-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,service_agent_children_sup} Context: child_terminated Reason: {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} Offender: [{pid,<0.2254.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
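The service_agent_children_sup report above spells out the {service_agent,cbas} child spec fields (permanent restart, 1000 ms shutdown, worker) just before the PROGRESS REPORT that follows shows the replacement child <0.5009.0> being started. For reference, the same Offender fields written as a classic six-element OTP child spec tuple (illustrative only, not ns_server source; the Modules element is not shown in the report and is assumed):

    %% {Id, StartFunc, Restart, Shutdown, Type, Modules}
    {{service_agent, cbas},
     {service_agent, start_link, [cbas]},
     permanent,          %% restart_type from the report
     1000,               %% shutdown, in milliseconds
     worker,             %% child_type
     [service_agent]}.   %% assumed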
[error_logger:info,2017-10-01T10:15:47.286-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_children_sup} started: [{pid,<0.5009.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:error,2017-10-01T10:15:47.288-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: service_rebalancer:-run_rebalance/7-fun-1-/0 pid: <0.2806.0> registered_name: 'service_rebalancer-cbas-worker' exception exit: {{linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}, {gen_server,call, [{'service_agent-cbas','n_0@172.17.0.2'}, {if_rebalance,<0.2780.0>, {start_rebalance, <<"a1b05493cb1a6d530eb30f1411715d17">>,rebalance, [{[{node_id,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {priority,0}, {opaque, {[{<<"cc-http-port">>,<<"9301">>}, {<<"host">>,<<"172.17.0.2">>}, {<<"master-node">>,<<"true">>}, {<<"num-iodevices">>,<<"1">>}, {<<"starting-partition-id">>,<<"0">>}]}}], full}, {[{node_id,<<"a10e75f8f9d93f0dfadacbdcef859eca">>}, {priority,0}, {opaque, {[{<<"cc-http-port">>,<<"9321">>}, {<<"host">>,<<"127.0.0.1">>}, {<<"num-iodevices">>,<<"1">>}]}}], full}], [],<0.2806.0>}}, 90000]}} in function gen_server:call/3 (gen_server.erl, line 188) in call from service_rebalancer:rebalance/8 (src/service_rebalancer.erl, line 116) ancestors: ['service_rebalancer-cbas',<0.2612.0>,<0.2245.0>, ns_orchestrator_child_sup,ns_orchestrator_sup,mb_master_sup, mb_master,<0.965.0>,ns_server_sup,ns_server_nodes_sup, <0.173.0>,ns_server_cluster_sup,<0.89.0>] messages: [] links: [<0.2780.0>] dictionary: [] trap_exit: false status: running heap_size: 2586 stack_size: 27 reductions: 6252 neighbours: [ns_server:debug,2017-10-01T10:15:47.289-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"cbas-cbauth",<0.2445.0>} needs_update [ns_server:error,2017-10-01T10:15:47.291-07:00,n_0@172.17.0.2:service_rebalancer-cbas<0.2780.0>:service_agent:process_bad_results:815]Service call unset_rebalancer (service cbas) failed on some nodes: [{'n_0@172.17.0.2',nack}] [ns_server:warn,2017-10-01T10:15:47.291-07:00,n_0@172.17.0.2:service_rebalancer-cbas<0.2780.0>:service_rebalancer:run_rebalance:89]Failed to unset rebalancer on some nodes: {error,{bad_nodes,cbas,unset_rebalancer,[{'n_0@172.17.0.2',nack}]}} [user:error,2017-10-01T10:15:47.292-07:00,n_0@172.17.0.2:<0.2245.0>:ns_orchestrator:do_log_rebalance_completion:1250]Rebalance exited with reason {service_rebalance_failed,cbas, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}} [error_logger:error,2017-10-01T10:15:47.292-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: service_rebalancer:-spawn_monitor/6-fun-0-/0 pid: <0.2780.0> registered_name: 'service_rebalancer-cbas' exception exit: {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}} in function service_rebalancer:run_rebalance/7 (src/service_rebalancer.erl, line 92) ancestors: [<0.2612.0>,<0.2245.0>,ns_orchestrator_child_sup, 
ns_orchestrator_sup,mb_master_sup,mb_master,<0.965.0>, ns_server_sup,ns_server_nodes_sup,<0.173.0>, ns_server_cluster_sup,<0.89.0>] messages: [{'EXIT',<0.2806.0>, {{linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}, {gen_server,call, [{'service_agent-cbas','n_0@172.17.0.2'}, {if_rebalance,<0.2780.0>, {start_rebalance, <<"a1b05493cb1a6d530eb30f1411715d17">>,rebalance, [{[{node_id,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}, {priority,0}, {opaque, {[{<<"cc-http-port">>,<<"9301">>}, {<<"host">>,<<"172.17.0.2">>}, {<<"master-node">>,<<"true">>}, {<<"num-iodevices">>,<<"1">>}, {<<"starting-partition-id">>,<<"0">>}]}}], full}, {[{node_id,<<"a10e75f8f9d93f0dfadacbdcef859eca">>}, {priority,0}, {opaque, {[{<<"cc-http-port">>,<<"9321">>}, {<<"host">>,<<"127.0.0.1">>}, {<<"num-iodevices">>,<<"1">>}]}}], full}], [],<0.2806.0>}}, 90000]}}}] links: [] dictionary: [] trap_exit: true status: running heap_size: 6772 stack_size: 27 reductions: 6796 neighbours: [ns_server:debug,2017-10-01T10:15:47.293-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{36,63674097347}}]}] [ns_server:debug,2017-10-01T10:15:47.293-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: counters -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097347}}]}, {rebalance_fail,1}, {rebalance_start,1}] [ns_server:debug,2017-10-01T10:15:47.293-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([counters, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:15:47.294-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: {local_changes_count,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>} -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{37,63674097347}}]}] [error_logger:error,2017-10-01T10:15:47.294-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: erlang:apply/2 pid: <0.2612.0> registered_name: [] exception exit: {service_rebalance_failed,cbas, {linked_process_died,<0.2805.0>, {timeout, {gen_server,call, [<0.2681.0>, {call,"ServiceAPI.StartTopologyChange", #Fun}, 60000]}}}} in function ns_rebalancer:rebalance_topology_aware_service/4 (src/ns_rebalancer.erl, line 558) in call from ns_rebalancer:'-rebalance_topology_aware_services/4-fun-0-'/4 (src/ns_rebalancer.erl, line 533) in call from lists:filtermap/2 (lists.erl, line 1302) in call from ns_rebalancer:rebalance_services/2 (src/ns_rebalancer.erl, line 467) in call from ns_rebalancer:rebalance/6 (src/ns_rebalancer.erl, line 649) ancestors: [<0.2245.0>,ns_orchestrator_child_sup,ns_orchestrator_sup, mb_master_sup,mb_master,<0.965.0>,ns_server_sup, ns_server_nodes_sup,<0.173.0>,ns_server_cluster_sup, <0.89.0>] messages: [] links: [<0.2245.0>] dictionary: [{random_seed,{8236,26623,17360}}] trap_exit: false status: running heap_size: 75113 stack_size: 27 reductions: 763127 neighbours: [ns_server:debug,2017-10-01T10:15:47.294-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: graceful_failover_pid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097347}}]}| undefined] 
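The crash reports above are one failure propagating outward: the gen_server:call made from worker <0.2805.0> (spawned by service_agent-cbas) to the json-rpc connection <0.2681.0> for "ServiceAPI.StartTopologyChange" gets no reply within its 60000 ms timeout, the caller exits with {timeout,{gen_server,call,...}}, the linked service_agent-cbas and service_rebalancer-cbas processes die with linked_process_died, and the orchestrator finally records the rebalance as service_rebalance_failed for cbas. A minimal, self-contained Erlang sketch (hypothetical module, not ns_server code) of how gen_server:call/3 produces exactly that exit shape when the server never replies:

    %% call_timeout_sketch.erl
    -module(call_timeout_sketch).
    -behaviour(gen_server).
    -export([demo/0]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2,
             terminate/2, code_change/3]).

    init([]) -> {ok, []}.

    %% Never replies, standing in for a service API call that hangs.
    handle_call({call, "ServiceAPI.StartTopologyChange"}, _From, State) ->
        timer:sleep(infinity),
        {reply, ok, State}.

    handle_cast(_Msg, State) -> {noreply, State}.
    handle_info(_Info, State) -> {noreply, State}.
    terminate(_Reason, _State) -> ok.
    code_change(_OldVsn, State, _Extra) -> {ok, State}.

    demo() ->
        {ok, Pid} = gen_server:start_link(?MODULE, [], []),
        %% Exits the caller with {timeout, {gen_server, call, [Pid, Request, 100]}},
        %% the same shape seen above (there with a 60000 ms timeout).
        catch gen_server:call(Pid, {call, "ServiceAPI.StartTopologyChange"}, 100).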
[ns_server:debug,2017-10-01T10:15:47.294-07:00,n_0@172.17.0.2:ns_config_rep<0.797.0>:ns_config_rep:do_push_keys:323]Replicating some config keys ([graceful_failover_pid,rebalance_status, rebalance_status_uuid,rebalancer_pid, {local_changes_count, <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}]..) [ns_server:debug,2017-10-01T10:15:47.295-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalancer_pid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{1,63674097285}}]}| undefined] [ns_server:info,2017-10-01T10:15:47.294-07:00,n_0@172.17.0.2:<0.5031.0>:diag_handler:log_all_dcp_stats:194]logging dcp stats [ns_server:debug,2017-10-01T10:15:47.295-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalance_status_uuid -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097347}}]}| <<"5082c31041173b0bab097f3e9e8aa4bb">>] [ns_server:debug,2017-10-01T10:15:47.295-07:00,n_0@172.17.0.2:ns_config_log<0.169.0>:ns_config_log:log_common:145]config change: rebalance_status -> [{'_vclock',[{<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>,{2,63674097347}}]}| {none, <<"Rebalance failed. See logs for detailed reason. You can try again.">>}] [ns_server:info,2017-10-01T10:15:47.351-07:00,n_0@172.17.0.2:<0.5031.0>:diag_handler:log_all_dcp_stats:198]end of logging dcp stats [ns_server:debug,2017-10-01T10:15:48.287-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-service_api<0.5072.0>:json_rpc_connection:init:74]Observed revrpc connection: label "cbas-service_api", handling process <0.5072.0> [ns_server:debug,2017-10-01T10:15:48.287-07:00,n_0@172.17.0.2:service_agent-cbas<0.5009.0>:service_agent:do_handle_connection:328]Observed new json rpc connection for cbas: <0.5072.0> [ns_server:debug,2017-10-01T10:15:48.287-07:00,n_0@172.17.0.2:<0.5012.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {json_rpc_events,<0.5010.0>} exited with reason normal [ns_server:debug,2017-10-01T10:15:48.290-07:00,n_0@172.17.0.2:service_agent-cbas<0.5009.0>:service_agent:cleanup_service:506]Cleaning up stale tasks: [[{<<"rev">>,<<"NA==">>}, {<<"id">>,<<"prepare/a1b05493cb1a6d530eb30f1411715d17">>}, {<<"type">>,<<"task-prepared">>}, {<<"status">>,<<"task-running">>}, {<<"isCancelable">>,true}, {<<"progress">>,0}, {<<"extra">>, {[{<<"rebalanceId">>,<<"a1b05493cb1a6d530eb30f1411715d17">>}]}}]] [ns_server:debug,2017-10-01T10:15:48.326-07:00,n_0@172.17.0.2:<0.2677.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1916.0>} exited with reason normal [ns_server:debug,2017-10-01T10:15:48.326-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-service_api<0.5072.0>:json_rpc_connection:handle_info:130]Socket closed [ns_server:error,2017-10-01T10:15:48.326-07:00,n_0@172.17.0.2:service_agent-cbas<0.5009.0>:service_agent:handle_info:243]Lost json rpc connection for service cbas, reason shutdown. Terminating. 
[ns_server:error,2017-10-01T10:15:48.326-07:00,n_0@172.17.0.2:service_agent-cbas<0.5009.0>:service_agent:terminate:264]Terminating abnormally [ns_server:debug,2017-10-01T10:15:48.327-07:00,n_0@172.17.0.2:<0.5011.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.5009.0>} exited with reason {lost_connection, shutdown} [error_logger:error,2017-10-01T10:15:48.327-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.5072.0>,{error,shutdown}} [error_logger:error,2017-10-01T10:15:48.327-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]** Generic server 'service_agent-cbas' terminating ** Last message in was {'DOWN',#Ref<0.0.0.33336>,process,<0.5072.0>,shutdown} ** When Server state == {state,cbas, {dict,4,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}, {{[[{uuid,<<"a10e75f8f9d93f0dfadacbdcef859eca">>}| 'n_1@127.0.0.1']], [], [[{node,'n_1@127.0.0.1'}| <<"a10e75f8f9d93f0dfadacbdcef859eca">>]], [],[],[], [[{uuid,<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>}| 'n_0@172.17.0.2']], [],[],[],[],[],[], [[{node,'n_0@172.17.0.2'}| <<"a7cadc9d6a7b1c5e2ac6210075d857d5">>]], [],[]}}}, undefined,undefined,undefined,undefined,undefined, undefined,undefined, {<<"NQ==">>,[]}, {<<"NQ==">>, {topology, ['n_0@172.17.0.2','n_1@127.0.0.1'], [<<"a7cadc9d6a7b1c5e2ac6210075d857d5">>, <<"a10e75f8f9d93f0dfadacbdcef859eca">>], true,[]}}, <0.5076.0>,<0.5077.0>} ** Reason for termination == ** {lost_connection,shutdown} [error_logger:error,2017-10-01T10:15:48.328-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: service_agent:init/1 pid: <0.5009.0> registered_name: 'service_agent-cbas' exception exit: {lost_connection,shutdown} in function gen_server:terminate/6 (gen_server.erl, line 744) ancestors: [service_agent_children_sup,service_agent_sup,ns_server_sup, ns_server_nodes_sup,<0.173.0>,ns_server_cluster_sup, <0.89.0>] messages: [{'EXIT',<0.5076.0>, {shutdown, {gen_server,call, [<0.5072.0>, {call,"ServiceAPI.GetTaskList", #Fun}, 60000]}}}, {'EXIT',<0.5077.0>,{lost_connection,shutdown}}] links: [<0.5011.0>,<0.884.0>] dictionary: [] trap_exit: true status: running heap_size: 6772 stack_size: 27 reductions: 9356 neighbours: [error_logger:error,2017-10-01T10:15:48.328-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,service_agent_children_sup} Context: child_terminated Reason: {lost_connection,shutdown} Offender: [{pid,<0.5009.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2017-10-01T10:15:48.328-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,service_agent_children_sup} started: [{pid,<0.5078.0>}, {name,{service_agent,cbas}}, {mfargs,{service_agent,start_link,[cbas]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2017-10-01T10:15:48.328-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"projector-cbauth",<0.1127.0>} needs_update 
[ns_server:debug,2017-10-01T10:15:48.329-07:00,n_0@172.17.0.2:json_rpc_connection-cbas-cbauth<0.2445.0>:json_rpc_connection:handle_info:130]Socket closed [error_logger:error,2017-10-01T10:15:48.329-07:00,n_0@172.17.0.2:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.2445.0>,{error,shutdown}} [ns_server:debug,2017-10-01T10:15:48.332-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"goxdcr-cbauth",<0.1011.0>} needs_update [ns_server:debug,2017-10-01T10:15:48.334-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_cast:95]Observed json rpc process {"cbas-cbauth",<0.2445.0>} needs_update [ns_server:debug,2017-10-01T10:15:48.334-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:notify_cbauth:180]Process {"cbas-cbauth",<0.2445.0>} is already dead [ns_server:debug,2017-10-01T10:15:48.334-07:00,n_0@172.17.0.2:menelaus_cbauth<0.874.0>:menelaus_cbauth:handle_info:126]Observed json rpc process {"cbas-cbauth",<0.2445.0>} died with reason shutdown [stats:error,2017-10-01T10:15:48.905-07:00,n_0@172.17.0.2:index_stats_collector-cbas<0.1087.0>:base_stats_collector:handle_info:109](Collector: index_stats_collector) Exception in stats collector: {error, {badmatch, {error, {econnrefused, [{lhttpc_client, send_request, 1, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 220}]}, {lhttpc_client, execute, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 169}]}, {lhttpc_client, request, 9, [{file, "/home/couchbase/jenkins/workspace/cbas-cbcluster-test2/couchdb/src/lhttpc/lhttpc_client.erl"}, {line, 92}]}]}}}, [{cbas_rest, send,3, [{file, "src/cbas_rest.erl"}, {line, 61}]}, {cbas_rest, do_get_stats, 0, [{file, "src/cbas_rest.erl"}, {line, 41}]}, {index_stats_collector, do_grab_stats, 1, [{file, "src/index_stats_collector.erl"}, {line, 163}]}, {base_stats_collector, handle_info, 2, [{file, "src/base_stats_collector.erl"}, {line, 89}]}, {gen_server, handle_msg, 5, [{file, "gen_server.erl"}, {line, 604}]}, {proc_lib, init_p_do_apply, 3, [{file, "proc_lib.erl"}, {line, 239}]}]}
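Every index_stats_collector exception in this section has the same shape: cbas_rest:send/3 receives {error,{econnrefused,...}} from lhttpc while the analytics service is down or restarting, a pattern match inside send/3 (presumably on a successful result) fails with badmatch, and the collector's do_grab_stats crashes. A small hedged sketch (hypothetical helper, not cbas_rest or index_stats_collector source) of tolerating a refused connection instead of failing the match, runnable in the Erlang shell:

    %% Fetch is any fun() returning {ok, Stats} | {error, Reason}, for example
    %% an HTTP GET against the cbas stats endpoint.
    GrabStats = fun(Fetch) ->
        case Fetch() of
            {ok, Stats} -> Stats;
            {error, {econnrefused, _Stack}} -> [];   %% endpoint not listening yet
            {error, Other} ->
                error_logger:warning_msg("stats fetch failed: ~p~n", [Other]),
                []
        end
    end.
    %% GrabStats(fun() -> {error, {econnrefused, []}} end) returns [].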