[ns_server:info,2016-05-11T16:39:32.494-07:00,nonode@nohost:<0.87.0>:ns_server:init_logging:151]Started & configured logging
[ns_server:info,2016-05-11T16:39:32.506-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]Static config terms: [{error_logger_mf_dir,"/opt/couchbase/var/lib/couchbase/logs"}, {path_config_bindir,"/opt/couchbase/bin"}, {path_config_etcdir,"/opt/couchbase/etc/couchbase"}, {path_config_libdir,"/opt/couchbase/lib"}, {path_config_datadir,"/opt/couchbase/var/lib/couchbase"}, {path_config_tmpdir,"/opt/couchbase/var/lib/couchbase/tmp"}, {path_config_secdir,"/opt/couchbase/etc/security"}, {nodefile,"/opt/couchbase/var/lib/couchbase/couchbase-server.node"}, {loglevel_default,debug}, {loglevel_couchdb,info}, {loglevel_ns_server,debug}, {loglevel_error_logger,debug}, {loglevel_user,debug}, {loglevel_menelaus,debug}, {loglevel_ns_doctor,debug}, {loglevel_stats,debug}, {loglevel_rebalance,debug}, {loglevel_cluster,debug}, {loglevel_views,debug}, {loglevel_mapreduce_errors,debug}, {loglevel_xdcr,debug}, {loglevel_xdcr_trace,error}, {loglevel_access,info}, {disk_sink_opts, [{rotation, [{compress,true}, {size,41943040}, {num_files,10}, {buffer_size_max,52428800}]}]}, {disk_sink_opts_xdcr_trace, [{rotation,[{compress,false},{size,83886080},{num_files,5}]}]}, {net_kernel_verbosity,10}]
[ns_server:warn,2016-05-11T16:39:32.506-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter error_logger_mf_dir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.506-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_bindir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.506-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_etcdir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_libdir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_datadir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_tmpdir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_secdir, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter nodefile, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_default, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_couchdb, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_server, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.507-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_error_logger, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_user, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_menelaus, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_doctor, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_stats, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_rebalance, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_cluster, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_views, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_mapreduce_errors, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr_trace, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.508-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_access, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.509-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.509-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts_xdcr_trace, which is given from command line
[ns_server:warn,2016-05-11T16:39:32.509-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter net_kernel_verbosity, which is given from command line
[error_logger:info,2016-05-11T16:39:32.517-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup}
started: [{pid,<0.127.0>}, {name,local_tasks}, {mfargs,{local_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.526-07:00,nonode@nohost:ns_server_cluster_sup<0.126.0>:log_os_info:start_link:25]OS type: {unix,linux} Version: {2,6,32} Runtime info: [{otp_release,"R16B03-1"}, {erl_version,"5.10.4.0.0.1"}, {erl_version_long, "Erlang R16B03-1 (erts-5.10.4.0.0.1) [source-62b74b5] [64-bit] [smp:4:4] [async-threads:16] [kernel-poll:true]\n"}, {system_arch_raw,"x86_64-unknown-linux-gnu"}, {system_arch,"x86_64-unknown-linux-gnu"}, {localtime,{{2016,5,11},{16,39,32}}}, {memory, [{total,25063288}, {processes,8533536}, {processes_used,8532448}, {system,16529752}, {atom,331249}, {atom_used,307557}, {binary,58272}, {code,7588538}, {ets,2240552}]}, {loaded, [ns_info,log_os_info,local_tasks,restartable, ns_server_cluster_sup,calendar,ale_default_formatter, 'ale_logger-metakv','ale_logger-rebalance', 'ale_logger-xdcr_trace','ale_logger-menelaus', 'ale_logger-stats','ale_logger-access', 'ale_logger-ns_server','ale_logger-user', 'ale_logger-ns_doctor','ale_logger-cluster', 'ale_logger-xdcr',otp_internal,ns_log_sink,ale_disk_sink, misc,io_lib_fread,couch_util,ns_server,filelib,cpu_sup, memsup,disksup,os_mon,io,release_handler,overload, alarm_handler,sasl,timer,tftp_sup,httpd_sup, httpc_handler_sup,httpc_cookie,inets_trace,httpc_manager, httpc,httpc_profile_sup,httpc_sup,ftp_sup,inets_sup, inets_app,ssl,lhttpc_manager,lhttpc_sup,lhttpc, tls_connection_sup,ssl_session_cache,ssl_pkix_db, ssl_manager,ssl_sup,ssl_app,crypto_server,crypto_sup, crypto_app,ale_error_logger_handler, 'ale_logger-ale_logger','ale_logger-error_logger', beam_opcodes,beam_dict,beam_asm,beam_validator,beam_z, beam_flatten,beam_trim,beam_receive,beam_bsm,beam_peep, beam_dead,beam_split,beam_type,beam_bool,beam_except, beam_clean,beam_utils,beam_block,beam_jump,beam_a, v3_codegen,v3_life,v3_kernel,sys_core_dsetel,erl_bifs, 
sys_core_fold,cerl_trees,sys_core_inline,core_lib,cerl, v3_core,erl_bits,erl_expand_records,sys_pre_expand,sofs, erl_internal,sets,ordsets,erl_lint,compile, dynamic_compile,ale_utils,io_lib_pretty,io_lib_format, io_lib,ale_codegen,dict,ale,ale_dynamic_sup,ale_sup, ale_app,epp,ns_bootstrap,child_erlang,file_io_server, orddict,erl_eval,file,c,kernel_config,user_sup, supervisor_bridge,standard_error,code_server,unicode, hipe_unified_loader,gb_sets,ets,binary,code,file_server, net_kernel,global_group,erl_distribution,filename,os, inet_parse,inet,inet_udp,inet_config,inet_db,global, gb_trees,rpc,supervisor,kernel,application_master,sys, application,gen_server,erl_parse,proplists,erl_scan,lists, application_controller,proc_lib,gen,gen_event, error_logger,heart,error_handler,erts_internal,erlang, erl_prim_loader,prim_zip,zlib,prim_file,prim_inet, prim_eval,init,otp_ring0]}, {applications, [{lhttpc,"Lightweight HTTP Client","1.3.0"}, {os_mon,"CPO CXC 138 46","2.2.14"}, {public_key,"Public key infrastructure","0.21"}, {asn1,"The Erlang ASN1 compiler version 2.0.4","2.0.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}, {ale,"Another Logger for Erlang","4.1.1-5914-enterprise"}, {inets,"INETS CXC 138 49","5.9.8"}, {ns_server,"Couchbase server","4.1.1-5914-enterprise"}, {crypto,"CRYPTO version 2","3.2"}, {ssl,"Erlang/OTP SSL application","5.3.3"}, {sasl,"SASL CXC 138 11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}]}, {pre_loaded, [erts_internal,erlang,erl_prim_loader,prim_zip,zlib, prim_file,prim_inet,prim_eval,init,otp_ring0]}, {process_count,94}, {node,nonode@nohost}, {nodes,[]}, {registered, [local_tasks,inets_sup,code_server,ale_stats_events, lhttpc_sup,ale,application_controller,standard_error_sup, lhttpc_manager,release_handler,ale_sup,kernel_safe_sup, httpd_sup,standard_error,overload,error_logger, ale_dynamic_sup,alarm_handler,timer_server,'sink-ns_log', sasl_safe_sup,crypto_server,'sink-disk_metakv',crypto_sup, init,'sink-disk_access_int',inet_db,os_mon_sup,tftp_sup, 
rex,'sink-disk_access',kernel_sup,cpu_sup, 'sink-xdcr_trace',global_name_server,tls_connection_sup, memsup,'sink-disk_reports',ssl_sup,disksup,file_server_2, 'sink-disk_stats',httpc_sup,global_group, 'sink-disk_xdcr_errors',ssl_manager,'sink-disk_xdcr', httpc_profile_sup,httpc_manager,'sink-disk_debug', httpc_handler_sup,'sink-disk_error',ns_server_cluster_sup, ftp_sup,sasl_sup,'sink-disk_default',erl_prim_loader]}, {cookie,nocookie}, {wordsize,8}, {wall_clock,2}] [ns_server:info,2016-05-11T16:39:32.536-07:00,nonode@nohost:ns_server_cluster_sup<0.126.0>:log_os_info:start_link:27]Manifest: ["","", " ", " ", " ", " "," "," ", " ", " ", " ", " "," ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", ""] [error_logger:info,2016-05-11T16:39:32.539-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.128.0>}, {name,timeout_diag_logger}, {mfargs,{timeout_diag_logger,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.542-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/opt/couchbase/var/lib/couchbase/ip_start" [ns_server:info,2016-05-11T16:39:32.542-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/opt/couchbase/var/lib/couchbase/ip" [ns_server:info,2016-05-11T16:39:32.543-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:init:163]ip config not found. 
Looks like we're brand new node [error_logger:info,2016-05-11T16:39:32.548-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,inet_gethost_native_sup} started: [{pid,<0.131.0>},{mfa,{inet_gethost_native,init,[[]]}}] [error_logger:info,2016-05-11T16:39:32.548-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.130.0>}, {name,inet_gethost_native_sup}, {mfargs,{inet_gethost_native,start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.611-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:bringup:214]Attempting to bring up net_kernel with name 'ns_1@127.0.0.1' [error_logger:info,2016-05-11T16:39:32.626-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.133.0>}, {name,erl_epmd}, {mfargs,{erl_epmd,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.626-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.134.0>}, {name,auth}, {mfargs,{auth,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.628-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.135.0>}, {name,net_kernel}, {mfargs, {net_kernel,start_link, [['ns_1@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:32.628-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_sup} started: [{pid,<0.132.0>}, {name,net_sup_dynamic}, {mfargs, {erl_distribution,start_link, [['ns_1@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:39:32.629-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:configure_net_kernel:255]Set net_kernel vebosity to 10 -> 0 [ns_server:info,2016-05-11T16:39:32.632-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:save_node:147]saving node to "/opt/couchbase/var/lib/couchbase/couchbase-server.node" [ns_server:debug,2016-05-11T16:39:32.636-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:bringup:228]Attempted to save node name to disk: ok [ns_server:debug,2016-05-11T16:39:32.636-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:wait_for_node:235]Waiting for connection to node 'babysitter_of_ns_1@127.0.0.1' to be established [error_logger:info,2016-05-11T16:39:32.636-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'babysitter_of_ns_1@127.0.0.1'}} [ns_server:debug,2016-05-11T16:39:32.653-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:wait_for_node:244]Observed node 'babysitter_of_ns_1@127.0.0.1' to come up [error_logger:info,2016-05-11T16:39:32.659-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.129.0>}, {name,dist_manager}, {mfargs,{dist_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:32.661-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.140.0>}, {name,ns_cookie_manager}, {mfargs,{ns_cookie_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.663-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.141.0>}, {name,ns_cluster}, {mfargs,{ns_cluster,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.665-07:00,ns_1@127.0.0.1:ns_config_sup<0.142.0>:ns_config_sup:init:32]loading static ns_config from "/opt/couchbase/etc/couchbase/config" [error_logger:info,2016-05-11T16:39:32.665-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.143.0>}, {name,ns_config_events}, {mfargs, {gen_event,start_link,[{local,ns_config_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.665-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.144.0>}, {name,ns_config_events_local}, {mfargs, {gen_event,start_link, [{local,ns_config_events_local}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.719-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1019]Loading static config from "/opt/couchbase/etc/couchbase/config" 
[ns_server:info,2016-05-11T16:39:32.721-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1033]Loading dynamic config from "/opt/couchbase/var/lib/couchbase/config/config.dat" [ns_server:info,2016-05-11T16:39:32.721-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1038]No dynamic config file found. Assuming we're brand new node [ns_server:debug,2016-05-11T16:39:32.723-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1041]Here's full dynamic config we loaded: [[]] [ns_server:info,2016-05-11T16:39:32.726-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1075]Here's full dynamic config we loaded + static & default config: [{drop_request_memory_threshold_mib,undefined}, {{request_limit,capi},undefined}, {{request_limit,rest},undefined}, {auto_failover_cfg,[{enabled,false},{timeout,120},{max_nodes,1},{count,0}]}, {replication,[{enabled,true}]}, {alert_limits,[{max_overhead_perc,50},{max_disk_used,90}]}, {email_alerts, [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server, [{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]}, {alerts, [auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events]}]}, {{node,'ns_1@127.0.0.1',ns_log}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]}, {{node,'ns_1@127.0.0.1',port_servers}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {{node,'ns_1@127.0.0.1',moxi}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11211}, {verbosity,[]}]}, {buckets,[{configs,[]}]}, {memory_quota,2328}, {{node,'ns_1@127.0.0.1',memcached_config}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {[{interfaces, 
{memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>}, {cert, <<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {engine, {[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>}, {config, {"admin=~s;default_bucket_name=default;auto_create=false", [admin_user]}}]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}]}]}, {{node,'ns_1@127.0.0.1',memcached}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11210}, {dedicated_port,11209}, {ssl_port,11207}, {admin_user,"_admin"}, {admin_pass,"*****"}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {engines, [{membase, [{engine,"/opt/couchbase/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached, [{engine,"/opt/couchbase/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/opt/couchbase/var/lib/couchbase/config/memcached.json"}, {audit_file,"/opt/couchbase/var/lib/couchbase/config/audit.json"}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}]}, 
{{node,'ns_1@127.0.0.1',memcached_defaults}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {verbosity,0}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/opt/couchbase/var/lib/couchbase/crash"}, {dedupe_nmvb_maps,false}]}, {memcached,[]}, {{node,'ns_1@127.0.0.1',audit}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {audit, [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}]}, {{node,'ns_1@127.0.0.1',isasl}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]}, {remote_clusters,[]}, {rest_creds,[{creds,[]}]}, {{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11215]}, {{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11214]}, {{node,'ns_1@127.0.0.1',indexer_stmaint_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9105]}, {{node,'ns_1@127.0.0.1',indexer_stcatchup_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9104]}, {{node,'ns_1@127.0.0.1',indexer_stinit_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9103]}, {{node,'ns_1@127.0.0.1',indexer_http_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9102]}, {{node,'ns_1@127.0.0.1',indexer_scan_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9101]}, {{node,'ns_1@127.0.0.1',indexer_admin_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9100]}, {{node,'ns_1@127.0.0.1',xdcr_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9998]}, 
{{node,'ns_1@127.0.0.1',projector_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9999]}, {{node,'ns_1@127.0.0.1',ssl_query_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18093]}, {{node,'ns_1@127.0.0.1',query_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8093]}, {{node,'ns_1@127.0.0.1',ssl_capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18092]}, {{node,'ns_1@127.0.0.1',capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8092]}, {{node,'ns_1@127.0.0.1',ssl_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18091]}, {{node,'ns_1@127.0.0.1',rest}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,8091}, {port_meta,global}]}, {{couchdb,max_parallel_replica_indexers},2}, {{couchdb,max_parallel_indexers},4}, {rest,[{port,8091}]}, {{node,'ns_1@127.0.0.1',membership}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| active]}, {nodes_wanted,['ns_1@127.0.0.1']}, {{node,'ns_1@127.0.0.1',compaction_daemon}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {check_interval,30}, {min_file_size,131072}]}, {set_view_update_daemon, [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}]}, {autocompaction, [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}]}, {max_bucket_count,10}, {index_aware_rebalance_disabled,false}, {{node,'ns_1@127.0.0.1',ldap_enabled}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, {{node,'ns_1@127.0.0.1',is_enterprise}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, {{node,'ns_1@127.0.0.1',config_version}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {4,1,1}]}, {{node,'ns_1@127.0.0.1',uuid}, 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| <<"0d9696803a535febe829002b30cd0eb5">>]}] [error_logger:info,2016-05-11T16:39:32.729-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.145.0>}, {name,ns_config}, {mfargs, {ns_config,start_link, ["/opt/couchbase/etc/couchbase/config", ns_config_default]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.731-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.148.0>}, {name,ns_config_remote}, {mfargs, {ns_config_replica,start_link, [{local,ns_config_remote}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.734-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.149.0>}, {name,ns_config_log}, {mfargs,{ns_config_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.734-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.142.0>}, {name,ns_config_sup}, {mfargs,{ns_config_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:32.736-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.151.0>}, 
{name,vbucket_filter_changes_registry}, {mfargs, {ns_process_registry,start_link, [vbucket_filter_changes_registry, [{terminate_command,shutdown}]]}}, {restart_type,permanent}, {shutdown,100}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.748-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.154.0>}, {name,remote_monitors}, {mfargs,{remote_monitors,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:32.750-07:00,ns_1@127.0.0.1:menelaus_barrier<0.155.0>:one_shot_barrier:barrier_body:58]Barrier menelaus_barrier has started [error_logger:info,2016-05-11T16:39:32.750-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.155.0>}, {name,menelaus_barrier}, {mfargs,{menelaus_sup,barrier_start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:32.750-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.156.0>}, {name,rest_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,rest_lhttpc_pool}, {connection_timeout,120000}, {pool_size,20}]]}}, {restart_type,{permanent,1}}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:39:32.774-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:init:334]Used ssl options: [{keyfile,"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem"}, {certfile,"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem"}, {versions,[tlsv1,'tlsv1.1','tlsv1.2']}, {cacertfile,undefined}, 
{dh,<<48,130,1,8,2,130,1,1,0,152,202,99,248,92,201,35,238,246,5,77,93,120,10, 118,129,36,52,111,193,167,220,49,229,106,105,152,133,121,157,73,158, 232,153,197,197,21,171,140,30,207,52,165,45,8,221,162,21,199,183,66, 211,247,51,224,102,214,190,130,96,253,218,193,35,43,139,145,89,200,250, 145,92,50,80,134,135,188,205,254,148,122,136,237,220,186,147,187,104, 159,36,147,217,117,74,35,163,145,249,175,242,18,221,124,54,140,16,246, 169,84,252,45,47,99,136,30,60,189,203,61,86,225,117,255,4,91,46,110, 167,173,106,51,65,10,248,94,225,223,73,40,232,140,26,11,67,170,118,190, 67,31,127,233,39,68,88,132,171,224,62,187,207,160,189,209,101,74,8,205, 174,146,173,80,105,144,246,25,153,86,36,24,178,163,64,202,221,95,184, 110,244,32,226,217,34,55,188,230,55,16,216,247,173,246,139,76,187,66, 211,159,17,46,20,18,48,80,27,250,96,189,29,214,234,241,34,69,254,147, 103,220,133,40,164,84,8,44,241,61,164,151,9,135,41,60,75,4,202,133,173, 72,6,69,167,89,112,174,40,229,171,2,1,2>>}, {ciphers,[{dhe_rsa,aes_256_cbc,sha256}, {dhe_dss,aes_256_cbc,sha256}, {rsa,aes_256_cbc,sha256}, {dhe_rsa,aes_128_cbc,sha256}, {dhe_dss,aes_128_cbc,sha256}, {rsa,aes_128_cbc,sha256}, {dhe_rsa,aes_256_cbc,sha}, {dhe_dss,aes_256_cbc,sha}, {rsa,aes_256_cbc,sha}, {dhe_rsa,'3des_ede_cbc',sha}, {dhe_dss,'3des_ede_cbc',sha}, {rsa,'3des_ede_cbc',sha}, {dhe_rsa,aes_128_cbc,sha}, {dhe_dss,aes_128_cbc,sha}, {rsa,aes_128_cbc,sha}]}] [ns_server:debug,2016-05-11T16:39:34.806-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_server_cert:generate_cert_and_pkey:66]Generated certificate and private key in 2030344 us [ns_server:debug,2016-05-11T16:39:34.807-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: cert_and_pkey -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}| {<<"-----BEGIN 
CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIFE2n0hhgvIcwCwYJKoZIhvcNAQELMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgM2M3NDBmY2EwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIDNjNzQwZmNh\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0Cv7vNECWUBN/JieYqSf\n+O0Dymyr49xvXzfdqH89k/RdSS8zFrw6CFlR23s494dEGWyHIFGsp8go2qKZh83T\noFl5B3ef3HnuJrnefGmbA+elwNB/lcU"...>>, <<"*****">>}] [ns_server:debug,2016-05-11T16:39:34.808-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}] [error_logger:info,2016-05-11T16:39:34.813-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.158.0>}, {name,ns_ssl_services_setup}, {mfargs,{ns_ssl_services_setup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:34.861-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.163.0>}, {name,ns_rest_ssl_service}, {mfargs, {restartable,start_link, [{ns_ssl_services_setup, start_link_rest_service,[]}, 1000]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:34.861-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.157.0>}, {name,ns_ssl_services_sup}, {mfargs,{ns_ssl_services_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] 
[ns_server:debug,2016-05-11T16:39:34.874-07:00,ns_1@127.0.0.1:wait_link_to_couchdb_node<0.182.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:126]Waiting for ns_couchdb node to start
[error_logger:info,2016-05-11T16:39:34.874-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_nodes_sup}
started: [{pid,<0.181.0>}, {name,start_couchdb_node}, {mfargs,{ns_server_nodes_sup,start_couchdb_node,[]}}, {restart_type,{permanent,5}}, {shutdown,86400000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:34.874-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}}
[ns_server:debug,2016-05-11T16:39:34.875-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2016-05-11T16:39:34.875-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{'EXIT',<0.185.0>,shutdown}}
[error_logger:info,2016-05-11T16:39:34.875-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}}
[error_logger:info,2016-05-11T16:39:35.076-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}}
[ns_server:debug,2016-05-11T16:39:35.077-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2016-05-11T16:39:35.077-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{'EXIT',<0.188.0>,shutdown}}
[error_logger:info,2016-05-11T16:39:35.077-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}}
[error_logger:info,2016-05-11T16:39:35.278-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}}
[error_logger:info,2016-05-11T16:39:35.279-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{'EXIT',<0.191.0>,shutdown}}
[ns_server:debug,2016-05-11T16:39:35.279-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown}
[error_logger:info,2016-05-11T16:39:35.279-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}}
[error_logger:info,2016-05-11T16:39:35.480-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================INFO REPORT=========================
{net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}}
[ns_server:debug,2016-05-11T16:39:35.568-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:35.772-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:35.973-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:36.175-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:36.377-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:36.579-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:36.781-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[ns_server:debug,2016-05-11T16:39:36.983-07:00,ns_1@127.0.0.1:<0.183.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false
[error_logger:info,2016-05-11T16:39:37.423-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.204.0>}, {name,timer2_server}, {mfargs,{timer2,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2016-05-11T16:39:37.623-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.181.0>:ns_port_server:log:210]ns_couchdb<0.181.0>: Apache CouchDB (LogLevel=info) is starting.
[error_logger:info,2016-05-11T16:39:37.770-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_nodes_sup}
started: [{pid,<0.182.0>}, {name,wait_for_couchdb_node}, {mfargs,{erlang,apply,[#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:37.777-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:ns_storage_conf:setup_db_and_ix_paths:53]Initialize db_and_ix_paths variable with [{db_path,"/opt/couchbase/var/lib/couchbase/data"}, {index_path,"/opt/couchbase/var/lib/couchbase/data"}]
[error_logger:info,2016-05-11T16:39:37.783-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.207.0>}, {name,ns_disksup}, {mfargs,{ns_disksup,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:37.784-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.208.0>}, {name,diag_handler_worker}, {mfargs,{work_queue,start_link,[diag_handler_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:info,2016-05-11T16:39:37.788-07:00,ns_1@127.0.0.1:ns_server_sup<0.206.0>:dir_size:start_link:39]Starting quick version of dir_size with program name: godu
[error_logger:info,2016-05-11T16:39:37.788-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.209.0>}, {name,dir_size}, {mfargs,{dir_size,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:37.799-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.210.0>}, {name,request_throttler}, {mfargs,{request_throttler,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:warn,2016-05-11T16:39:37.803-07:00,ns_1@127.0.0.1:ns_log<0.211.0>:ns_log:read_logs:128]Couldn't load logs from "/opt/couchbase/var/lib/couchbase/ns_log" (perhaps it's first startup): {error,enoent}
[error_logger:info,2016-05-11T16:39:37.804-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.211.0>}, {name,ns_log}, {mfargs,{ns_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:37.804-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.212.0>}, {name,ns_crash_log_consumer}, {mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:37.809-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.213.0>:ns_config_isasl_sync:init:63]isasl_sync init: ["/opt/couchbase/var/lib/couchbase/isasl.pw","_admin","2bb824636f76a257101e37d538281ca2"]
[ns_server:debug,2016-05-11T16:39:37.809-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.213.0>:ns_config_isasl_sync:init:71]isasl_sync init buckets: []
[ns_server:debug,2016-05-11T16:39:37.809-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.213.0>:ns_config_isasl_sync:writeSASLConf:143]Writing isasl passwd file: "/opt/couchbase/var/lib/couchbase/isasl.pw"
[ns_server:warn,2016-05-11T16:39:37.827-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.213.0>:ns_memcached:connect:1290]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[ns_server:info,2016-05-11T16:39:37.871-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.181.0>:ns_port_server:log:210]ns_couchdb<0.181.0>: Apache CouchDB has started. Time to relax.
ns_couchdb<0.181.0>: 27549: Booted. Waiting for shutdown request
ns_couchdb<0.181.0>: working as port
[error_logger:info,2016-05-11T16:39:38.828-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.213.0>}, {name,ns_config_isasl_sync}, {mfargs,{ns_config_isasl_sync,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:38.828-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.216.0>}, {name,ns_log_events}, {mfargs,{gen_event,start_link,[{local,ns_log_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:38.831-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.218.0>}, {name,ns_node_disco_events}, {mfargs,{gen_event,start_link,[{local,ns_node_disco_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:38.831-07:00,ns_1@127.0.0.1:ns_node_disco<0.219.0>:ns_node_disco:init:138]Initting ns_node_disco with []
[ns_server:debug,2016-05-11T16:39:38.831-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[user:info,2016-05-11T16:39:38.831-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_init:86]Initial otp cookie generated: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.832-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229178}}]}]
[ns_server:debug,2016-05-11T16:39:38.832-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2016-05-11T16:39:38.832-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: otp -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, {cookie,vuzfvvczcpnjsgwq}]
[ns_server:debug,2016-05-11T16:39:38.835-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[ns_server:debug,2016-05-11T16:39:38.835-07:00,ns_1@127.0.0.1:<0.220.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.837-07:00,ns_1@127.0.0.1:<0.220.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[error_logger:info,2016-05-11T16:39:38.837-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.219.0>}, {name,ns_node_disco}, {mfargs,{ns_node_disco,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:38.839-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.223.0>}, {name,ns_node_disco_log}, {mfargs,{ns_node_disco_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:39:38.841-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.224.0>}, {name,ns_node_disco_conf_events}, {mfargs,{ns_node_disco_conf_events,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:38.844-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:init:68]init pulling
[error_logger:info,2016-05-11T16:39:38.844-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.225.0>}, {name,ns_config_rep_merger}, {mfargs,{ns_config_rep,start_link_merger,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:38.844-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:init:70]init pushing
[ns_server:debug,2016-05-11T16:39:38.845-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:init:74]init reannouncing
[ns_server:debug,2016-05-11T16:39:38.845-07:00,ns_1@127.0.0.1:ns_config_events<0.143.0>:ns_node_disco_conf_events:handle_event:50]ns_node_disco_conf_events config on otp
[ns_server:debug,2016-05-11T16:39:38.845-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[ns_server:debug,2016-05-11T16:39:38.846-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2016-05-11T16:39:38.846-07:00,ns_1@127.0.0.1:ns_config_events<0.143.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted
[ns_server:debug,2016-05-11T16:39:38.847-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: otp -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, {cookie,vuzfvvczcpnjsgwq}]
[ns_server:debug,2016-05-11T16:39:38.847-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: cert_and_pkey -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIFE2n0hhgvIcwCwYJKoZIhvcNAQELMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgM2M3NDBmY2EwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIDNjNzQwZmNh\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0Cv7vNECWUBN/JieYqSf\n+O0Dymyr49xvXzfdqH89k/RdSS8zFrw6CFlR23s494dEGWyHIFGsp8go2qKZh83T\noFl5B3ef3HnuJrnefGmbA+elwNB/lcU"...>>, <<"*****">>}]
[ns_server:debug,2016-05-11T16:39:38.847-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: alert_limits -> [{max_overhead_perc,50},{max_disk_used,90}]
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to
"/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server"
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:<0.230.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: audit -> [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}]
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:<0.230.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.848-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: auto_failover_cfg -> [{enabled,false},{timeout,120},{max_nodes,1},{count,0}]
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: autocompaction -> [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}]
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: buckets -> [[],{configs,[]}]
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: drop_request_memory_threshold_mib -> undefined
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: email_alerts -> [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server,[{user,[]}, {pass,"*****"}, {host,"localhost"}, {port,25}, {encrypt,false}]}, {alerts,[auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small,
auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events]}]
[error_logger:info,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_node_disco_sup}
started: [{pid,<0.226.0>}, {name,ns_config_rep}, {mfargs,{ns_config_rep,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: index_aware_rebalance_disabled -> false
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok
[error_logger:info,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.217.0>}, {name,ns_node_disco_sup}, {mfargs,{ns_node_disco_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: max_bucket_count -> 10
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: memcached -> []
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:<0.231.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: memory_quota -> 2328
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: nodes_wanted -> ['ns_1@127.0.0.1']
[ns_server:debug,2016-05-11T16:39:38.849-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([alert_limits,audit,auto_failover_cfg, autocompaction,buckets,cert_and_pkey, drop_request_memory_threshold_mib,email_alerts, index_aware_rebalance_disabled, max_bucket_count,memcached,memory_quota, nodes_wanted,otp,remote_clusters,replication, rest,rest_creds,set_view_update_daemon, {couchdb,max_parallel_indexers}, {couchdb,max_parallel_replica_indexers}, {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>}, {request_limit,capi}, {request_limit,rest}, {node,'ns_1@127.0.0.1',audit}, {node,'ns_1@127.0.0.1',capi_port}, {node,'ns_1@127.0.0.1',compaction_daemon}, {node,'ns_1@127.0.0.1',config_version}, {node,'ns_1@127.0.0.1',indexer_admin_port}, {node,'ns_1@127.0.0.1',indexer_http_port}, {node,'ns_1@127.0.0.1',indexer_scan_port}, {node,'ns_1@127.0.0.1',indexer_stcatchup_port}, {node,'ns_1@127.0.0.1',indexer_stinit_port}, {node,'ns_1@127.0.0.1',indexer_stmaint_port}, {node,'ns_1@127.0.0.1',is_enterprise}, {node,'ns_1@127.0.0.1',isasl}, {node,'ns_1@127.0.0.1',ldap_enabled}, {node,'ns_1@127.0.0.1',membership}, {node,'ns_1@127.0.0.1',memcached}, {node,'ns_1@127.0.0.1',memcached_config}, {node,'ns_1@127.0.0.1',memcached_defaults}, {node,'ns_1@127.0.0.1',moxi}, {node,'ns_1@127.0.0.1',ns_log}, {node,'ns_1@127.0.0.1',port_servers}, {node,'ns_1@127.0.0.1',projector_port}, {node,'ns_1@127.0.0.1',query_port}, {node,'ns_1@127.0.0.1',rest}, {node,'ns_1@127.0.0.1',ssl_capi_port}, {node,'ns_1@127.0.0.1',ssl_proxy_downstream_port}, {node,'ns_1@127.0.0.1',ssl_proxy_upstream_port}, {node,'ns_1@127.0.0.1',ssl_query_port}, {node,'ns_1@127.0.0.1',ssl_rest_port}, {node,'ns_1@127.0.0.1',uuid}, {node,'ns_1@127.0.0.1',xdcr_rest_port}]..)
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: remote_clusters -> []
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:<0.231.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: replication -> [{enabled,true}]
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest -> [{port,8091}]
[ns_server:debug,2016-05-11T16:39:38.850-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest_creds -> [{creds,[]}]
[ns_server:debug,2016-05-11T16:39:38.851-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: set_view_update_daemon -> [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}]
[ns_server:debug,2016-05-11T16:39:38.851-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {couchdb,max_parallel_indexers} -> 4
[ns_server:debug,2016-05-11T16:39:38.851-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {couchdb,max_parallel_replica_indexers} -> 2
[ns_server:debug,2016-05-11T16:39:38.851-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {request_limit,capi} -> undefined
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {request_limit,rest} -> undefined
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',audit} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',capi_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|8092]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',compaction_daemon} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {check_interval,30}, {min_file_size,131072}]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',config_version} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {4,1,1}]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_admin_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9100]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_http_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9102]
[ns_server:debug,2016-05-11T16:39:38.852-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_scan_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9101]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stcatchup_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9104]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stinit_port} ->
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9103]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stmaint_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9105]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',is_enterprise} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|true]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',isasl} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ldap_enabled} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|true]
[error_logger:info,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.234.0>}, {name,vbucket_map_mirror}, {mfargs,{vbucket_map_mirror,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:39:38.853-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',membership} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| active]
[ns_server:debug,2016-05-11T16:39:38.854-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]},
{port,11210}, {dedicated_port,11209}, {ssl_port,11207}, {admin_user,"_admin"}, {admin_pass,"*****"}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {engines,[{membase,[{engine,"/opt/couchbase/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/opt/couchbase/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/opt/couchbase/var/lib/couchbase/config/memcached.json"}, {audit_file,"/opt/couchbase/var/lib/couchbase/config/audit.json"}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2016-05-11T16:39:38.854-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached_config} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>}, {cert, <<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {engine, {[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>}, {config, {"admin=~s;default_bucket_name=default;auto_create=false", 
[admin_user]}}]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}]}]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached_defaults} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {verbosity,0}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/opt/couchbase/var/lib/couchbase/crash"}, {dedupe_nmvb_maps,false}]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',moxi} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11211}, {verbosity,[]}]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ns_log} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',port_servers} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',projector_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9999]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',query_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|8093]
[ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change:
{node,'ns_1@127.0.0.1',rest} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,8091}, {port_meta,global}] [ns_server:debug,2016-05-11T16:39:38.855-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_capi_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18092] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|11214] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|11215] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_query_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18093] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_rest_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18091] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',uuid} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| <<"0d9696803a535febe829002b30cd0eb5">>] [ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',xdcr_rest_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9998] 
[ns_server:debug,2016-05-11T16:39:38.856-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229178}}]}] [error_logger:info,2016-05-11T16:39:38.857-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.236.0>}, {name,bucket_info_cache}, {mfargs,{bucket_info_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.857-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.239.0>}, {name,ns_tick_event}, {mfargs,{gen_event,start_link,[{local,ns_tick_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.857-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.240.0>}, {name,buckets_events}, {mfargs, {gen_event,start_link,[{local,buckets_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:38.861-07:00,ns_1@127.0.0.1:ns_log_events<0.216.0>:ns_mail_log:init:44]ns_mail_log started up [error_logger:info,2016-05-11T16:39:38.861-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_mail_sup} started: [{pid,<0.242.0>}, {name,ns_mail_log}, {mfargs,{ns_mail_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:38.861-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.241.0>}, {name,ns_mail_sup}, {mfargs,{ns_mail_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:38.861-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.243.0>}, {name,ns_stats_event}, {mfargs, {gen_event,start_link,[{local,ns_stats_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.865-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.244.0>}, {name,samples_loader_tasks}, {mfargs,{samples_loader_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.871-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.246.0>}, {name,ns_heart}, {mfargs,{ns_heart,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.871-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.249.0>}, {name,ns_heart_slow_updater}, {mfargs,{ns_heart,start_link_slow_updater,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:38.871-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.245.0>}, {name,ns_heart_sup}, {mfargs,{ns_heart_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:39:38.873-07:00,ns_1@127.0.0.1:ns_heart<0.246.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,186}]}, {ns_heart,handle_info,2, [{file,"src/ns_heart.erl"},{line,118}]}]}} [ns_server:debug,2016-05-11T16:39:38.874-07:00,ns_1@127.0.0.1:ns_heart<0.246.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,186}]}]}} 
[error_logger:info,2016-05-11T16:39:38.875-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.252.0>}, {name,ns_doctor_events}, {mfargs, {gen_event,start_link,[{local,ns_doctor_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.885-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.253.0>}, {name,ns_doctor}, {mfargs,{ns_doctor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.885-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.250.0>}, {name,ns_doctor_sup}, {mfargs, {restartable,start_link, [{ns_doctor_sup,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:38.896-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.256.0>}, {name,remote_clusters_info}, {mfargs,{remote_clusters_info,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.896-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.257.0>}, {name,master_activity_events}, {mfargs, {gen_event,start_link, [{local,master_activity_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] 
[ns_server:debug,2016-05-11T16:39:38.901-07:00,ns_1@127.0.0.1:<0.258.0>:mb_master:check_master_takeover_needed:141]Sending master node question to the following nodes: [] [ns_server:debug,2016-05-11T16:39:38.901-07:00,ns_1@127.0.0.1:<0.258.0>:mb_master:check_master_takeover_needed:143]Got replies: [] [ns_server:debug,2016-05-11T16:39:38.901-07:00,ns_1@127.0.0.1:<0.258.0>:mb_master:check_master_takeover_needed:149]Was unable to discover master, not going to force mastership takeover [user:info,2016-05-11T16:39:38.907-07:00,ns_1@127.0.0.1:mb_master<0.261.0>:mb_master:init:86]I'm the only node, so I'm the master. [ns_server:debug,2016-05-11T16:39:38.909-07:00,ns_1@127.0.0.1:ns_heart<0.246.0>:ns_heart:grab_local_xdcr_replications:458]Ignoring exception getting xdcr replication infos {exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}}, [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {xdc_replication_sup,all_local_replication_infos,0, [{file,"src/xdc_replication_sup.erl"},{line,58}]}, {ns_heart,grab_local_xdcr_replications,0, [{file,"src/ns_heart.erl"},{line,437}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,317}]}, {ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,186}]}, {ns_heart,handle_info,2,[{file,"src/ns_heart.erl"},{line,118}]}, {gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,604}]}]} [ns_server:debug,2016-05-11T16:39:38.913-07:00,ns_1@127.0.0.1:ns_heart<0.246.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg [ns_server:debug,2016-05-11T16:39:38.937-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.249.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, 
[{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,243}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]}} [ns_server:debug,2016-05-11T16:39:38.938-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.249.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,243}]}]}} [ns_server:debug,2016-05-11T16:39:38.939-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.249.0>:ns_heart:grab_local_xdcr_replications:458]Ignoring exception getting xdcr replication infos {exit,{noproc,{gen_server,call,[xdc_replication_sup,which_children,infinity]}}, [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {xdc_replication_sup,all_local_replication_infos,0, [{file,"src/xdc_replication_sup.erl"},{line,58}]}, {ns_heart,grab_local_xdcr_replications,0, [{file,"src/ns_heart.erl"},{line,437}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,317}]}, 
{ns_heart,current_status_slow,1,[{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,slow_updater_loop,0,[{file,"src/ns_heart.erl"},{line,243}]}, {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]} [ns_server:debug,2016-05-11T16:39:38.939-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.249.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg [ns_server:debug,2016-05-11T16:39:38.948-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:do_upgrade_config:707]Upgrading config by changes: [{set,cluster_compat_version,[2,0]}] [ns_server:info,2016-05-11T16:39:38.949-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_online_config_upgrader:upgrade_config_from_2_0_to_2_5:54]Performing online config upgrade to 2.5 version [ns_server:debug,2016-05-11T16:39:38.949-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:do_upgrade_config:707]Upgrading config by changes: [{set,cluster_compat_version,[2,5]}, {set,server_groups, [[{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]}] [ns_server:info,2016-05-11T16:39:38.949-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_online_config_upgrader:upgrade_config_from_2_5_to_3_0:58]Performing online config upgrade to 3.0 version [ns_server:debug,2016-05-11T16:39:38.949-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:do_upgrade_config:707]Upgrading config by changes: [{set,cluster_compat_version,[3,0]}, {set,rest_creds,null}, {set,read_only_user_creds,null}] [ns_server:info,2016-05-11T16:39:38.949-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_online_config_upgrader:upgrade_config_from_3_0_to_4_0:63]Performing online config upgrade to 4.0 version [ns_server:debug,2016-05-11T16:39:38.956-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:do_upgrade_config:707]Upgrading config by changes: [{set,cluster_compat_version,[4,0]}, {delete,goxdcr_upgrade}, {set,{node,'ns_1@127.0.0.1',stop_xdcr},true}, 
{set,{metakv,<<"/indexing/settings/config">>}, <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.log_level\":\"info\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>}] [ns_server:info,2016-05-11T16:39:38.956-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_online_config_upgrader:upgrade_config_from_4_0_to_4_1:68]Performing online config upgrade to 4.1 version [ns_server:debug,2016-05-11T16:39:38.956-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:do_upgrade_config:707]Upgrading config by changes: [{set,cluster_compat_version,[4,1]}, {set,{service_map,n1ql},[]}, {set,{service_map,index},[]}] [ns_server:info,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:391]Got certificate and pkey change [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {service_map,index} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}] [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {service_map,n1ql} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}] [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {metakv,<<"/indexing/settings/config">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| 
<<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.log_level\":\"info\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>] [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}|true] [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: goxdcr_upgrade -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| '_deleted'] [ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: read_only_user_creds -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}|null] [ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: server_groups -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, [{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]] [ns_server:debug,2016-05-11T16:39:38.957-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([cluster_compat_version,goxdcr_upgrade, read_only_user_creds,rest_creds,server_groups, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}, {metakv,<<"/indexing/settings/config">>}, {service_map,index}, {service_map,n1ql}, {node,'ns_1@127.0.0.1',stop_xdcr}]..) 
[ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: cluster_compat_version -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{5,63630229178}}]},4,1] [ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest_creds -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}|null] [ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{3,63630229178}}]}] [ns_server:debug,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:handle_call:115]Got full synchronization request from 'ns_1@127.0.0.1' [ns_server:info,2016-05-11T16:39:38.958-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:maybe_generate_local_cert:474]Failed to read node certificate. Perhaps it wasn't created yet. 
Error: {error, {badmatch, {error, enoent}}} [ns_server:debug,2016-05-11T16:39:38.959-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:handle_call:121]Fully synchronized config in 27 us [user:warn,2016-05-11T16:39:38.959-07:00,ns_1@127.0.0.1:<0.265.0>:ns_orchestrator:consider_switching_compat_mode:1141]Changed cluster compat mode from undefined to [4,1] [ns_server:debug,2016-05-11T16:39:38.959-07:00,ns_1@127.0.0.1:mb_master_sup<0.264.0>:misc:start_singleton:1035]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.265.0> on 'ns_1@127.0.0.1' [error_logger:info,2016-05-11T16:39:38.959-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.265.0>}, {name,ns_orchestrator}, {mfargs,{ns_orchestrator,start_link,[]}}, {restart_type,permanent}, {shutdown,20}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:38.962-07:00,ns_1@127.0.0.1:mb_master_sup<0.264.0>:misc:start_singleton:1035]start_singleton(gen_server, ns_tick, [], []): started as <0.280.0> on 'ns_1@127.0.0.1' [error_logger:info,2016-05-11T16:39:38.962-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.280.0>}, {name,ns_tick}, {mfargs,{ns_tick,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:38.966-07:00,ns_1@127.0.0.1:<0.281.0>:auto_failover:init:147]init auto_failover. 
[ns_server:debug,2016-05-11T16:39:38.966-07:00,ns_1@127.0.0.1:mb_master_sup<0.264.0>:misc:start_singleton:1035]start_singleton(gen_server, auto_failover, [], []): started as <0.281.0> on 'ns_1@127.0.0.1' [error_logger:info,2016-05-11T16:39:38.967-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,mb_master_sup} started: [{pid,<0.281.0>}, {name,auto_failover}, {mfargs,{auto_failover,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.967-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.258.0>}, {name,mb_master}, {mfargs, {restartable,start_link, [{mb_master,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:38.967-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.282.0>}, {name,master_activity_events_ingress}, {mfargs, {gen_event,start_link, [{local,master_activity_events_ingress}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.968-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.283.0>}, {name,master_activity_events_timestamper}, {mfargs, {master_activity_events,start_link_timestamper,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.969-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.284.0>}, {name,master_activity_events_pids_watcher}, {mfargs, {master_activity_events_pids_watcher,start_link, []}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.996-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.285.0>}, {name,master_activity_events_keeper}, {mfargs,{master_activity_events_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.998-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.287.0>}, {name,xdcr_ckpt_store}, {mfargs,{simple_store,start_link,[xdcr_ckpt_data]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.999-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.288.0>}, {name,metakv_worker}, {mfargs,{work_queue,start_link,[metakv_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:38.999-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.289.0>}, {name,index_events}, {mfargs,{gen_event,start_link,[{local,index_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:38.999-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.290.0>}, {name,index_settings_manager}, {mfargs,{index_settings_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.002-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.293.0>}, {name,menelaus_ui_auth}, {mfargs,{menelaus_ui_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.005-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.294.0>}, {name,menelaus_web_cache}, {mfargs,{menelaus_web_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.008-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.295.0>}, {name,menelaus_stats_gatherer}, {mfargs,{menelaus_stats_gatherer,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.008-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.296.0>}, {name,json_rpc_events}, {mfargs, {gen_event,start_link,[{local,json_rpc_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:39.010-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.297.0>}, {name,menelaus_web}, {mfargs,{menelaus_web,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.012-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.314.0>}, {name,menelaus_event}, {mfargs,{menelaus_event,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.017-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.315.0>}, {name,hot_keys_keeper}, {mfargs,{hot_keys_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.021-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.316.0>}, {name,menelaus_web_alerts_srv}, {mfargs,{menelaus_web_alerts_srv,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.032-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,menelaus_sup} started: [{pid,<0.317.0>}, {name,menelaus_cbauth}, {mfargs,{menelaus_cbauth,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[user:info,2016-05-11T16:39:39.033-07:00,ns_1@127.0.0.1:ns_server_sup<0.206.0>:menelaus_sup:start_link:46]Couchbase Server has started on web port 8091 on node 'ns_1@127.0.0.1'. Version: "4.1.1-5914-enterprise". [error_logger:info,2016-05-11T16:39:39.033-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.292.0>}, {name,menelaus}, {mfargs,{menelaus_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.033-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.321.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.044-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.323.0>}, {name,ns_memcached_sockets_pool}, {mfargs,{ns_memcached_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.045-07:00,ns_1@127.0.0.1:ns_audit_cfg<0.324.0>:ns_audit_cfg:write_audit_json:158]Writing new content to "/opt/couchbase/var/lib/couchbase/config/audit.json" : [{auditd_enabled, false}, {disabled, []}, {log_path, "/opt/couchbase/var/lib/couchbase/logs"}, {rotate_interval, 86400}, {rotate_size, 20971520}, {sync, []}, {version, 1}, {descriptors_path, "/opt/couchbase/etc/security"}] [ns_server:debug,2016-05-11T16:39:39.047-07:00,ns_1@127.0.0.1:ns_audit_cfg<0.324.0>:ns_audit_cfg:handle_info:107]Instruct memcached to reload audit config 
[error_logger:info,2016-05-11T16:39:39.047-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.324.0>}, {name,ns_audit_cfg}, {mfargs,{ns_audit_cfg,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2016-05-11T16:39:39.048-07:00,ns_1@127.0.0.1:<0.326.0>:ns_memcached:connect:1290]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:debug,2016-05-11T16:39:39.050-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,xdcr_proxy] [ns_server:debug,2016-05-11T16:39:39.061-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_setup:set_children:72]Monitor ns_child_ports_sup <11470.68.0> [ns_server:debug,2016-05-11T16:39:39.063-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:init:44]waiting for completion of initial ns_ports_setup round [error_logger:info,2016-05-11T16:39:39.063-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.329.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.063-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:init:46]ns_ports_setup seems to be ready [ns_server:info,2016-05-11T16:39:39.066-07:00,ns_1@127.0.0.1:<0.330.0>:ns_memcached_log_rotator:init:28]Starting log rotator on "/opt/couchbase/var/lib/couchbase/logs"/"memcached.log"* with an initial period of 39003ms [error_logger:info,2016-05-11T16:39:39.067-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.330.0>}, {name,ns_memcached_log_rotator}, {mfargs,{ns_memcached_log_rotator,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.069-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:find_port_pid_loop:119]Found memcached port <11470.75.0> [error_logger:info,2016-05-11T16:39:39.071-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.333.0>}, {name,memcached_clients_pool}, {mfargs,{memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.072-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:init:77]wrote memcached config to /opt/couchbase/var/lib/couchbase/config/memcached.json. 
Will activate memcached port server [error_logger:info,2016-05-11T16:39:39.074-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.334.0>}, {name,proxied_memcached_clients_pool}, {mfargs,{proxied_memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.074-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.335.0>}, {name,xdc_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,xdc_lhttpc_pool}, {connection_timeout,120000}, {pool_size,200}]]}}, {restart_type,{permanent,1}}, {shutdown,10000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.076-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.336.0>}, {name,ns_null_connection_pool}, {mfargs, {ns_null_connection_pool,start_link, [ns_null_connection_pool]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.079-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:init:80]activated memcached port server [error_logger:info,2016-05-11T16:39:39.082-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<0.338.0>}, {name,xdc_stats_holder}, {mfargs, {proc_lib,start_link, [xdcr_sup,link_stats_holder_body,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.082-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<0.339.0>}, {name,xdc_replication_sup}, {mfargs,{xdc_replication_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.089-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<0.340.0>}, {name,xdc_rep_manager}, {mfargs,{xdc_rep_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,30000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.089-07:00,ns_1@127.0.0.1:xdc_rep_manager<0.340.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager [ns_server:debug,2016-05-11T16:39:39.101-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.342.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager [ns_server:debug,2016-05-11T16:39:39.101-07:00,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.343.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager [error_logger:info,2016-05-11T16:39:39.102-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<0.342.0>}, {name,xdc_rdoc_replicator}, {mfargs,{doc_replicator,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.102-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<0.343.0>}, {name,xdc_rdoc_replication_srv}, {mfargs,{doc_replication_srv,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2016-05-11T16:39:39.105-07:00,ns_1@127.0.0.1:<0.337.0>:xdc_rdoc_manager:start_link_remote:42]Starting xdc_rdoc_manager on 'couchdb_ns_1@127.0.0.1' with following links: [<0.342.0>, <0.343.0>, <0.340.0>] [ns_server:debug,2016-05-11T16:39:39.114-07:00,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.343.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.247.0> [error_logger:info,2016-05-11T16:39:39.115-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {<0.337.0>,xdcr_sup} started: [{pid,<11471.247.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_ns_1@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.115-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.342.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.247.0> [ns_server:debug,2016-05-11T16:39:39.115-07:00,ns_1@127.0.0.1:xdc_rep_manager<0.340.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.247.0> [error_logger:info,2016-05-11T16:39:39.116-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.337.0>}, {name,xdcr_sup}, {mfargs,{xdcr_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:39:39.116-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([{local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}, {node,'ns_1@127.0.0.1',stop_xdcr}]..) 
[ns_server:debug,2016-05-11T16:39:39.116-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{4,63630229179}}]}] [ns_server:debug,2016-05-11T16:39:39.116-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229179}}]}| '_deleted'] [error_logger:info,2016-05-11T16:39:39.118-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.348.0>}, {name,xdcr_dcp_sockets_pool}, {mfargs,{xdcr_dcp_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.124-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.350.0>}, {name,ns_bucket_worker}, {mfargs,{work_queue,start_link,[ns_bucket_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_sup} started: [{pid,<0.352.0>}, {name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.131-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.342.0>:doc_replicator:loop:64]doing replicate_newnodes_docs [error_logger:info,2016-05-11T16:39:39.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {local,ns_bucket_worker_sup} started: [{pid,<0.351.0>}, {name,ns_bucket_sup}, {mfargs,{ns_bucket_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.349.0>}, {name,ns_bucket_worker_sup}, {mfargs,{ns_bucket_worker_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.138-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.353.0>}, {name,system_stats_collector}, {mfargs,{system_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.140-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.357.0>}, {name,{stats_archiver,"@system"}}, {mfargs,{stats_archiver,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.150-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.359.0>}, {name,{stats_reader,"@system"}}, {mfargs,{stats_reader,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.150-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] 
=========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.360.0>}, {name,{stats_archiver,"@system-processes"}}, {mfargs, {stats_archiver,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.151-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.362.0>}, {name,{stats_reader,"@system-processes"}}, {mfargs, {stats_reader,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.158-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.363.0>}, {name,{stats_archiver,"@query"}}, {mfargs,{stats_archiver,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.158-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.365.0>}, {name,{stats_reader,"@query"}}, {mfargs,{stats_reader,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.161-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.366.0>}, {name,query_stats_collector}, {mfargs,{query_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:39:39.162-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.368.0>}, {name,{stats_archiver,"@global"}}, {mfargs,{stats_archiver,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.162-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.370.0>}, {name,{stats_reader,"@global"}}, {mfargs,{stats_reader,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.165-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.371.0>}, {name,global_stats_collector}, {mfargs,{global_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.167-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.373.0>}, {name,goxdcr_status_keeper}, {mfargs,{goxdcr_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.173-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.375.0>}, {name,index_stats_children_sup}, {mfargs, {supervisor,start_link, [{local,index_stats_children_sup}, index_stats_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, 
{child_type,supervisor}] [ns_server:debug,2016-05-11T16:39:39.174-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,goxdcr,xdcr_proxy] [error_logger:info,2016-05-11T16:39:39.179-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.377.0>}, {name,index_status_keeper_worker}, {mfargs, {work_queue,start_link, [index_status_keeper_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.208-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_status_keeper_sup} started: [{pid,<0.378.0>}, {name,index_status_keeper}, {mfargs,{index_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.208-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.376.0>}, {name,index_status_keeper_sup}, {mfargs,{index_status_keeper_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.209-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,index_stats_sup} started: [{pid,<0.381.0>}, {name,index_stats_worker}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.209-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS 
REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.374.0>}, {name,index_stats_sup}, {mfargs,{index_stats_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.223-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.385.0>}, {name,compaction_daemon}, {mfargs,{compaction_daemon,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.253-07:00,ns_1@127.0.0.1:<0.388.0>:new_concurrency_throttle:init:113]init concurrent throttle process, pid: <0.388.0>, type: kv_throttle# of available token: 1 [ns_server:debug,2016-05-11T16:39:39.254-07:00,ns_1@127.0.0.1:goxdcr_status_keeper<0.373.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2016-05-11T16:39:39.254-07:00,ns_1@127.0.0.1:goxdcr_status_keeper<0.373.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list. [ns_server:debug,2016-05-11T16:39:39.262-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T16:39:39.262-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:39:39.262-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T16:39:39.262-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [error_logger:info,2016-05-11T16:39:39.262-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.386.0>}, {name,compaction_new_daemon}, {mfargs,{compaction_new_daemon,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,86400000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:39:39.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_master. Rescheduling compaction. [ns_server:debug,2016-05-11T16:39:39.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_master too soon. Next run will be in 3600s [error_logger:info,2016-05-11T16:39:39.282-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,cluster_logs_sup} started: [{pid,<0.391.0>}, {name,ets_holder}, {mfargs, {cluster_logs_collection_task, start_link_ets_holder,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:39:39.282-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.390.0>}, {name,cluster_logs_sup}, {mfargs,{cluster_logs_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.287-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.392.0>}, {name,remote_api}, {mfargs,{remote_api,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2016-05-11T16:39:39.287-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:one_shot_barrier:notify:27]Notifying on barrier menelaus_barrier [error_logger:info,2016-05-11T16:39:39.287-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.206.0>}, {name,ns_server_sup}, {mfargs,{ns_server_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:39:39.287-07:00,ns_1@127.0.0.1:menelaus_barrier<0.155.0>:one_shot_barrier:barrier_body:62]Barrier menelaus_barrier got notification from <0.153.0> [ns_server:debug,2016-05-11T16:39:39.287-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:one_shot_barrier:notify:32]Successfuly notified on barrier menelaus_barrier [error_logger:info,2016-05-11T16:39:39.288-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.152.0>}, {name,ns_server_nodes_sup}, {mfargs, {restartable,start_link, [{ns_server_nodes_sup,start_link,[]}, infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:39:39.288-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= application: ns_server started_at: 'ns_1@127.0.0.1' [ns_server:debug,2016-05-11T16:39:39.288-07:00,ns_1@127.0.0.1:<0.2.0>:child_erlang:child_loop:115]27510: Entered child_loop [ns_server:debug,2016-05-11T16:39:39.340-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.394.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:39:39.341-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.317.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process 
{'saslauthd-saslauthd-port',<0.394.0>} started [ns_server:debug,2016-05-11T16:39:39.341-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.394.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@saslauthd-saslauthd-port">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL, <<"http://127.0.0.1:8091/_cbauth">>}]}]}]} [ns_server:debug,2016-05-11T16:39:39.342-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.394.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0}, {<<"result">>,null}, {<<"error">>, <<"rpc: can't find service AuthCacheSvc.UpdateDB">>}] [ns_server:debug,2016-05-11T16:39:39.381-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:39:39.381-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.317.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process {'goxdcr-cbauth',<0.397.0>} started [ns_server:debug,2016-05-11T16:39:39.382-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@goxdcr-cbauth">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL, <<"http://127.0.0.1:8091/_cbauth">>}]}]}]} [ns_server:debug,2016-05-11T16:39:39.385-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0},{<<"result">>,true},{<<"error">>,null}] 
[ns_server:info,2016-05-11T16:39:40.752-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:do_generate_local_cert:462]Saved local cert for node 'ns_1@127.0.0.1' [ns_server:info,2016-05-11T16:39:40.763-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:394]Wrote new pem file [ns_server:debug,2016-05-11T16:39:40.763-07:00,ns_1@127.0.0.1:<0.163.0>:restartable:loop:71]Restarting child <0.164.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} Shutdown policy: 1000 Caller: {<0.408.0>,#Ref<0.0.0.1491>} [ns_server:debug,2016-05-11T16:39:40.763-07:00,ns_1@127.0.0.1:<0.410.0>:ns_ports_manager:restart_port_by_name:43]Requesting restart of port xdcr_proxy [user:debug,2016-05-11T16:39:40.956-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. Messages: 27590: Booted. Waiting for shutdown request 27590: got shutdown request. Exiting [ns_server:info,2016-05-11T16:39:40.957-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:430]Succesfully notified services [memcached,query_svc,xdcr_proxy, capi_ssl_service,ssl_service] [ns_server:debug,2016-05-11T16:39:45.343-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([memory_quota, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}, {metakv,<<"/indexing/settings/config">>}]..) 
[ns_server:debug,2016-05-11T16:39:45.343-07:00,ns_1@127.0.0.1:<0.464.0>:ns_audit:put:224]Audit cluster_settings: [{cluster_name,<<>>}, {index_memory_quota,256}, {memory_quota,3103}, {real_userid,{[{source,<<"ns_server">>}, {user,<<"Administrator">>}]}}, {remote,{[{ip,<<"10.17.2.126">>},{port,60204}]}}, {timestamp,<<"2016-05-11T16:39:45.343-07:00">>}] [ns_server:debug,2016-05-11T16:39:45.344-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {metakv,<<"/indexing/settings/config">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>] [ns_server:debug,2016-05-11T16:39:45.344-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: memory_quota -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]}|3103] [ns_server:debug,2016-05-11T16:39:45.345-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{5,63630229185}}]}] [ns_server:debug,2016-05-11T16:39:45.395-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{6,63630229185}}]}] [ns_server:debug,2016-05-11T16:39:45.395-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',services} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]},kv] 
[ns_server:debug,2016-05-11T16:39:45.396-07:00,ns_1@127.0.0.1:<0.471.0>:ns_audit:put:224]Audit setup_node_services: [{services,[kv]}, {node,'ns_1@127.0.0.1'}, {real_userid, {[{source,<<"ns_server">>}, {user,<<"Administrator">>}]}}, {remote,{[{ip,<<"10.17.2.126">>},{port,60205}]}}, {timestamp,<<"2016-05-11T16:39:45.395-07:00">>}] [ns_server:debug,2016-05-11T16:39:45.396-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([{local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}, {node,'ns_1@127.0.0.1',services}]..) [ns_server:debug,2016-05-11T16:39:45.398-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,saslauthd_port,goxdcr,xdcr_proxy] [user:debug,2016-05-11T16:39:45.408-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. Messages: 27640: Booted. Waiting for shutdown request 27640: got shutdown request. Exiting [ns_server:debug,2016-05-11T16:39:45.449-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([rest, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}]..) 
[ns_server:debug,2016-05-11T16:39:45.449-07:00,ns_1@127.0.0.1:<0.479.0>:ns_audit:put:224]Audit password_change: [{userid,<<"Administrator">>}, {role,admin}, {real_userid,{[{source,<<"ns_server">>}, {user,<<"Administrator">>}]}}, {remote,{[{ip,<<"10.17.2.126">>},{port,60206}]}}, {timestamp,<<"2016-05-11T16:39:45.449-07:00">>}] [ns_server:debug,2016-05-11T16:39:45.449-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{7,63630229185}}]}] [ns_server:debug,2016-05-11T16:39:45.450-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([rest_creds, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}]..) [ns_server:debug,2016-05-11T16:39:45.450-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,1}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@goxdcr-cbauth">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL,<<"http://127.0.0.1:8091/_cbauth">>}, {admin, {[{user,<<"Administrator">>}, {salt,<<"6MjdqNKaQBvU0VSCXiFXdQ==">>}, {mac, <<"7UfQhOmYXvBGevp8sxirMxrkozw=">>}]}}]}]}]} [ns_server:debug,2016-05-11T16:39:45.451-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest -> [{port,8091}] [ns_server:debug,2016-05-11T16:39:45.451-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{8,63630229185}}]}] [ns_server:debug,2016-05-11T16:39:45.451-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: 
rest_creds -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| {"Administrator",{password,"*****"}}] [ns_server:debug,2016-05-11T16:39:45.451-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,1},{<<"result">>,true},{<<"error">>,null}] [ns_server:debug,2016-05-11T16:39:45.468-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,moxi,projector,saslauthd_port,goxdcr,xdcr_proxy] [ns_server:debug,2016-05-11T16:39:45.670-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.491.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:39:45.671-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.317.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process {'projector-cbauth',<0.491.0>} started [ns_server:debug,2016-05-11T16:39:45.671-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.491.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@projector-cbauth">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL,<<"http://127.0.0.1:8091/_cbauth">>}, {admin, {[{user,<<"Administrator">>}, {salt,<<"6MjdqNKaQBvU0VSCXiFXdQ==">>}, {mac, <<"7UfQhOmYXvBGevp8sxirMxrkozw=">>}]}}]}]}]} [ns_server:debug,2016-05-11T16:39:45.675-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.491.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0},{<<"result">>,true},{<<"error">>,null}] [ns_server:debug,2016-05-11T16:40:09.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T16:40:09.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:40:09.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:40:09.263-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[error_logger:info,2016-05-11T16:40:38.916-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.770.0>}, {name,disk_log_sup}, {mfargs,{disk_log_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:40:38.916-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.771.0>}, {name,disk_log_server}, {mfargs,{disk_log_server,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:40:39.264-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:40:39.264-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:40:39.264-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:40:39.264-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:41:09.265-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:41:09.265-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:41:09.265-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:41:09.265-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:41:39.266-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:41:39.266-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:41:39.266-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:41:39.266-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:41:43.335-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{9,63630229303}}]}]
[ns_server:debug,2016-05-11T16:41:43.335-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: ssl_minimum_protocol -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229303}}]}| 'tlsv1.1']
[ns_server:debug,2016-05-11T16:41:43.335-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([ssl_minimum_protocol, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}]..)
[ns_server:debug,2016-05-11T16:41:43.337-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:409]Notify services [ssl_service,capi_ssl_service] about ssl_minimum_protocol change
[ns_server:debug,2016-05-11T16:41:43.337-07:00,ns_1@127.0.0.1:<0.163.0>:restartable:loop:71]Restarting child <0.414.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} Shutdown policy: 1000 Caller: {<0.1109.0>,#Ref<0.0.0.7567>}
[ns_server:debug,2016-05-11T16:41:43.340-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:apply_changed_memcached_config:158]New memcached config is hot-reloadable.
[ns_server:info,2016-05-11T16:41:43.340-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:430]Succesfully notified services [ssl_service,capi_ssl_service]
[ns_server:debug,2016-05-11T16:41:43.341-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:do_read_current_memcached_config:251]Got enoent while trying to read active memcached config from /opt/couchbase/var/lib/couchbase/config/memcached.json.prev
[user:info,2016-05-11T16:41:43.348-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:hot_reload_config:218]Hot-reloaded memcached.json for config change of the following keys: [<<"ssl_minimum_protocol">>]
[ns_server:debug,2016-05-11T16:41:43.387-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,moxi,projector,saslauthd_port,goxdcr,xdcr_proxy]
[user:debug,2016-05-11T16:41:43.392-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'moxi' exited with status 0. Restarting. Messages: 2016-05-11 16:39:45: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.327) env: MOXI_SASL_PLAIN_USR (5) 2016-05-11 16:39:45: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.336) env: MOXI_SASL_PLAIN_PWD (32) EOL on stdin. Exiting
[user:debug,2016-05-11T16:41:43.400-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. Messages: 27663: Booted. Waiting for shutdown request 27663: got shutdown request. Exiting
[ns_server:debug,2016-05-11T16:42:09.267-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:42:09.267-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:42:09.267-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:42:09.267-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:42:35.773-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{10,63630229355}}]}]
[ns_server:debug,2016-05-11T16:42:35.774-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: ssl_minimum_protocol -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229355}}]}| 'tlsv1.2']
[ns_server:debug,2016-05-11T16:42:35.774-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:409]Notify services [ssl_service,capi_ssl_service] about ssl_minimum_protocol change
[ns_server:debug,2016-05-11T16:42:35.774-07:00,ns_1@127.0.0.1:ns_config_rep<0.226.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([ssl_minimum_protocol, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}]..)
[ns_server:debug,2016-05-11T16:42:35.774-07:00,ns_1@127.0.0.1:<0.163.0>:restartable:loop:71]Restarting child <0.1111.0> MFA: {ns_ssl_services_setup,start_link_rest_service,[]} Shutdown policy: 1000 Caller: {<0.1408.0>,#Ref<0.0.0.10132>}
[ns_server:debug,2016-05-11T16:42:35.776-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:apply_changed_memcached_config:158]New memcached config is hot-reloadable.
[ns_server:debug,2016-05-11T16:42:35.777-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:do_read_current_memcached_config:251]Got enoent while trying to read active memcached config from /opt/couchbase/var/lib/couchbase/config/memcached.json.prev
[ns_server:info,2016-05-11T16:42:35.778-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:handle_info:430]Succesfully notified services [ssl_service,capi_ssl_service]
[user:info,2016-05-11T16:42:35.785-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:hot_reload_config:218]Hot-reloaded memcached.json for config change of the following keys: [<<"ssl_minimum_protocol">>]
[ns_server:info,2016-05-11T16:42:35.785-07:00,ns_1@127.0.0.1:ns_log<0.211.0>:ns_log:handle_cast:188]suppressing duplicate log memcached_config_mgr:undefined([<<"Hot-reloaded memcached.json for config change of the following keys: [<<\"ssl_minimum_protocol\">>]">>]) because it's been seen 1 times in the past 52.437436 secs (last seen 52.437436 secs ago
[ns_server:debug,2016-05-11T16:42:35.825-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,moxi,projector,saslauthd_port,goxdcr,xdcr_proxy]
[user:debug,2016-05-11T16:42:35.835-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. Messages: 28726: Booted. Waiting for shutdown request 28726: got shutdown request. Exiting
[user:debug,2016-05-11T16:42:35.839-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'moxi' exited with status 0. Restarting. Messages: 2016-05-11 16:41:43: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.327) env: MOXI_SASL_PLAIN_USR (5) 2016-05-11 16:41:43: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.336) env: MOXI_SASL_PLAIN_PWD (32) EOL on stdin. Exiting
[ns_server:debug,2016-05-11T16:42:39.268-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:42:39.268-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:42:39.268-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:42:39.268-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:43:09.269-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:43:09.269-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:43:09.269-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:43:09.269-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.386.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[user:debug,2016-05-11T16:43:24.827-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'xdcr_proxy' exited with status 0. Restarting. Messages: 29635: Booted. Waiting for shutdown request 29635: got shutdown request. Exiting
[user:debug,2016-05-11T16:43:24.829-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'moxi' exited with status 0. Restarting. Messages: 2016-05-11 16:42:35: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.327) env: MOXI_SASL_PLAIN_USR (5) 2016-05-11 16:42:35: (/home/couchbase/jenkins/workspace/sherlock-unix/moxi/src/cproxy_config.c.336) env: MOXI_SASL_PLAIN_PWD (32) EOL on stdin. Exiting
[user:debug,2016-05-11T16:43:24.831-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'projector' exited with status 0. Restarting. Messages: 2016-05-11T16:39:45.686-07:00 [Info] PROJ[:9999] settings indexer.settings.inmemory_snapshot.interval will updated to `200` 2016-05-11T16:39:45.686-07:00 [Info] PROJ[:9999] settings indexer.settings.maxVbQueueLength will updated to `0` 2016-05-11T16:39:45.686-07:00 [Info] PROJ[:9999] settings indexer.settings.wal_size will updated to `4096` 2016-05-11T16:39:45.686-07:00 [Info] amd64 linux; cpus: 4; GOMAXPROCS: 1; version: go1.4.2 [goport] 2016/05/11 16:43:24 got new line on a stdin; terminating.
[ns_server:debug,2016-05-11T16:43:24.832-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.491.0>:json_rpc_connection:handle_info:146]Socket closed
[ns_server:debug,2016-05-11T16:43:24.832-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.317.0>:menelaus_cbauth:handle_info:105]Observed json rpc process {'projector-cbauth',<0.491.0>} died with reason shutdown
[ns_server:debug,2016-05-11T16:43:24.833-07:00,ns_1@127.0.0.1:<0.495.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.304.0>} exited with reason normal
[user:debug,2016-05-11T16:43:24.833-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'goxdcr' exited with status 0. Restarting. Messages: ReplicationManager 2016-05-11T16:39:39.413-07:00 [INFO] pollEOF: About to start stdin polling HttpServer 2016-05-11T16:39:39.414-07:00 [INFO] [xdcr:127.0.0.1:9998] new http server xdcr 127.0.0.1:9998 / AdminPort 2016-05-11T16:39:39.414-07:00 [INFO] http server started 127.0.0.1:9998 ! HttpServer 2016-05-11T16:39:39.414-07:00 [INFO] [xdcr:127.0.0.1:9998] starting ... [goport] 2016/05/11 16:43:24 got new line on a stdin; terminating.
[ns_server:debug,2016-05-11T16:43:24.834-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.397.0>:json_rpc_connection:handle_info:146]Socket closed
[ns_server:debug,2016-05-11T16:43:24.834-07:00,ns_1@127.0.0.1:<0.400.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.301.0>} exited with reason normal
[ns_server:debug,2016-05-11T16:43:24.834-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.317.0>:menelaus_cbauth:handle_info:105]Observed json rpc process {'goxdcr-cbauth',<0.397.0>} died with reason shutdown
[ns_server:debug,2016-05-11T16:43:24.834-07:00,ns_1@127.0.0.1:<0.403.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.302.0>} exited with reason normal
[ns_server:debug,2016-05-11T16:43:24.834-07:00,ns_1@127.0.0.1:<0.404.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.303.0>} exited with reason normal
[ns_server:debug,2016-05-11T16:43:24.835-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.394.0>:json_rpc_connection:handle_info:146]Socket closed
[user:debug,2016-05-11T16:43:24.836-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'saslauthd_port' exited with status 0. Restarting. Messages: 2016/05/11 16:43:24 Got EOL. Exiting
[stats:error,2016-05-11T16:43:25.448-07:00,ns_1@127.0.0.1:<0.371.0>:base_stats_collector:handle_info:109](Collector: global_stats_collector) Exception in stats collector: {throw, {error, closed}, [{mc_binary, recv_with_data, 4, [{file, "src/mc_binary.erl"}, {line, 49}]}, {mc_binary, quick_stats_recv, 3, [{file, "src/mc_binary.erl"}, {line, 56}]}, {mc_binary, quick_stats_loop, 5, [{file, "src/mc_binary.erl"}, {line, 156}]}, {mc_binary, quick_stats, 5, [{file, "src/mc_binary.erl"}, {line, 141}]}, {ns_memcached_sockets_pool, '-executing_on_socket/1-fun-0-', 1, [{file, "src/ns_memcached_sockets_pool.erl"}, {line, 61}]}, {misc, '-executing_on_new_process/1-fun-0-', 3, [{file, "src/misc.erl"}, {line, 1496}]}]}
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.331.0>:remote_monitors:monitor_loop:129]Monitored remote process <11470.75.0> went down with: shutdown
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.328.0>:remote_monitors:monitor_loop:129]Monitored remote process <11470.68.0> went down with: shutdown
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.329.0>:memcached_config_mgr:handle_info:143]Got DOWN with reason: shutdown from memcached port server: <11470.75.0>. Shutting down
[user:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.212.0>:ns_log:crash_consumption_loop:70]Service 'memcached' exited with status 0. Restarting. Messages: 2016-05-11T16:43:00.444503-07:00 WARNING 41: ERROR: SSL_accept() returned -1 with error 1 error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol 2016-05-11T16:43:00.444575-07:00 WARNING 41 Closing connection [127.0.0.1:51469 - 127.0.0.1:11207] due to read error: Connection reset by peer 2016-05-11T16:43:00.829936-07:00 WARNING 41 Closing connection [::1:53893 - ::1:11207] due to read error: Connection reset by peer EOL on stdin. Initiating shutdown
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.2.0>:child_erlang:child_loop:119]27510: Got EOL
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:ns_ports_setup<0.321.0>:ns_ports_setup:children_loop_continue:108]ns_child_ports_sup <11470.68.0> died on babysitter node with shutdown. Restart.
[ns_server:info,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.2.0>:ns_bootstrap:stop:42]Initiated server shutdown
[ns_server:debug,2016-05-11T16:43:25.449-07:00,ns_1@127.0.0.1:<0.322.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.321.0>} exited with reason {child_ports_sup_died, <11470.68.0>, shutdown}
[ns_server:debug,2016-05-11T16:43:25.450-07:00,ns_1@127.0.0.1:<0.332.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.329.0>} exited with reason {shutdown, {memcached_port_server_down, <11470.75.0>, shutdown}}
[ns_server:debug,2016-05-11T16:43:25.450-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.1690.0>:memcached_config_mgr:init:44]waiting for completion of initial ns_ports_setup round
[ns_server:debug,2016-05-11T16:43:25.451-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.1693.0>:memcached_config_mgr:init:44]waiting for completion of initial ns_ports_setup round
[ns_server:debug,2016-05-11T16:43:25.454-07:00,ns_1@127.0.0.1:<0.387.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.386.0>} exited with reason shutdown
[ns_server:debug,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:<0.382.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.381.0>} exited with reason shutdown
[ns_server:debug,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:<0.380.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.378.0>} exited with reason shutdown
[ns_server:debug,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:<0.379.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.378.0>} exited with reason shutdown
[error_logger:error,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: ns_ports_setup:setup_body_tramp/0 pid: <0.321.0> registered_name: ns_ports_setup exception error: {child_ports_sup_died,<11470.68.0>,shutdown} in function ns_ports_setup:children_loop_continue/3 (src/ns_ports_setup.erl, line 109) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.152.0>, ns_server_cluster_sup,<0.87.0>] messages: [] links: [<0.206.0>,<0.322.0>] dictionary: [{'ns_ports_setup-projector-available', "/opt/couchbase/bin/projector"}, {'ns_ports_setup-saslauthd-port-available', "/opt/couchbase/bin/saslauthd-port"}, {'ns_ports_setup-goxdcr-available', "/opt/couchbase/bin/goxdcr"}] trap_exit: false status: running heap_size: 4185 stack_size: 27 reductions: 70027 neighbours:
[error_logger:info,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Initiated server shutdown
[error_logger:error,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.329.0>, {error, {shutdown, {memcached_port_server_down, <11470.75.0>,shutdown}}}}
[ns_server:debug,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:<0.372.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.371.0>} exited with reason shutdown
[error_logger:error,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {shutdown,{memcached_port_server_down,<11470.75.0>,shutdown}} Offender: [{pid,<0.329.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:25.455-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.1690.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[error_logger:error,2016-05-11T16:43:25.456-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {child_ports_sup_died,<11470.68.0>,shutdown} Offender: [{pid,<0.321.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:25.456-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.1691.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:error,2016-05-11T16:43:25.456-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]Supervisor received unexpected message: {ack,<0.1690.0>, {error, {noproc, {gen_server,call, [ns_ports_setup,sync,infinity]}}}}
[error_logger:error,2016-05-11T16:43:25.457-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: memcached_config_mgr:init/1 pid: <0.1690.0> registered_name: [] exception exit: {noproc,{gen_server,call,[ns_ports_setup,sync,infinity]}} in function gen_server:init_it/6 (gen_server.erl, line 328) ancestors: [ns_server_sup,ns_server_nodes_sup,<0.152.0>, ns_server_cluster_sup,<0.87.0>] messages: [] links: [<0.206.0>] dictionary: [] trap_exit: false status: running heap_size: 610 stack_size: 27 reductions: 994 neighbours:
[ns_server:debug,2016-05-11T16:43:25.457-07:00,ns_1@127.0.0.1:ns_ports_setup<0.1691.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,moxi,projector,saslauthd_port,goxdcr,xdcr_proxy]
[error_logger:error,2016-05-11T16:43:25.457-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_server_sup} Context: child_terminated Reason: {noproc,{gen_server,call,[ns_ports_setup,sync,infinity]}} Offender: [{pid,<0.1690.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:25.457-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.1693.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:25.458-07:00,ns_1@127.0.0.1:ns_ports_setup<0.1691.0>:misc:delaying_crash:1625]Delaying crash exit:{noproc, {gen_server,call, [{ns_ports_manager,'babysitter_of_ns_1@127.0.0.1'}, {set_dynamic_children, [{memcached,"/opt/couchbase/bin/memcached", ["-C", "/opt/couchbase/var/lib/couchbase/config/memcached.json"], [{env, [{"EVENT_NOSELECT","1"}, {"MEMCACHED_TOP_KEYS","5"}, {"ISASL_PWFILE", "/opt/couchbase/var/lib/couchbase/isasl.pw"}]}, use_stdio,stderr_to_stdout,exit_status, port_server_dont_start,stream]}, {moxi,"/opt/couchbase/bin/moxi", ["-Z", "port_listen=11211,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200", "-z", "url=http://127.0.0.1:8091/pools/default/saslBucketsStreaming", "-p","0","-Y","y","-O","stderr",[]], [{env, [{"EVENT_NOSELECT","1"}, {"MOXI_SASL_PLAIN_USR","@moxi"}, {"MOXI_SASL_PLAIN_PWD", "2bb824636f76a257101e37d538281ca2"}, {"http_proxy",[]}]}, use_stdio,exit_status,stderr_to_stdout,stream]}, {projector,"/opt/couchbase/bin/goport",[], [use_stdio,exit_status,stderr_to_stdout,stream, {log,"projector.log"}, {env, [{"GOPORT_ARGS", "[\"/opt/couchbase/bin/projector\",\"-kvaddrs=127.0.0.1:11210\",\"-adminport=:9999\",\"-diagDir=/opt/couchbase/var/lib/couchbase/crash\",\"127.0.0.1:8091\"]"}, {"GOTRACEBACK",[]}, {"CBAUTH_REVRPC_URL", "http://%40:2bb824636f76a257101e37d538281ca2@127.0.0.1:8091/projector"}]}]}, {saslauthd_port,"/opt/couchbase/bin/saslauthd-port", [], [use_stdio,exit_status,stderr_to_stdout,stream, {env, [{"GOTRACEBACK",[]}, {"CBAUTH_REVRPC_URL", "http://%40:2bb824636f76a257101e37d538281ca2@127.0.0.1:8091/saslauthd"}]}]}, {goxdcr,"/opt/couchbase/bin/goport",[], [use_stdio,exit_status,stderr_to_stdout,stream, {log,"goxdcr.log"}, {env, [{"GOPORT_ARGS", "[\"/opt/couchbase/bin/goxdcr\",\"-localProxyPort=11215\",\"-sourceKVAdminPort=8091\",\"-xdcrRestPort=9998\",\"-isEnterprise=true\"]"}, {"GOTRACEBACK",[]}, {"CBAUTH_REVRPC_URL", "http://%40:2bb824636f76a257101e37d538281ca2@127.0.0.1:8091/goxdcr"}]}]}, {xdcr_proxy,"/opt/couchbase/lib/erlang/bin/erl", ["-pa", "/opt/couchbase/lib/erlang/lib/appmon-2.1.14.2/ebin", "/opt/couchbase/lib/erlang/lib/asn1-2.0.4/ebin", "/opt/couchbase/lib/erlang/lib/common_test-1.7.4/ebin", "/opt/couchbase/lib/erlang/lib/compiler-4.9.4/ebin", "/opt/couchbase/lib/erlang/lib/cosEvent-2.1.14/ebin", "/opt/couchbase/lib/erlang/lib/cosEventDomain-1.1.13/ebin", "/opt/couchbase/lib/erlang/lib/cosFileTransfer-1.1.15/ebin", "/opt/couchbase/lib/erlang/lib/cosNotification-1.1.20/ebin", "/opt/couchbase/lib/erlang/lib/cosProperty-1.1.16/ebin", "/opt/couchbase/lib/erlang/lib/cosTime-1.1.13/ebin", "/opt/couchbase/lib/erlang/lib/cosTransactions-1.2.13/ebin", "/opt/couchbase/lib/erlang/lib/crypto-3.2/ebin", "/opt/couchbase/lib/erlang/lib/dialyzer-2.6.1/ebin", "/opt/couchbase/lib/erlang/lib/diameter-1.5/ebin", "/opt/couchbase/lib/erlang/lib/edoc-0.7.12.1/ebin", "/opt/couchbase/lib/erlang/lib/eldap-1.0.2/ebin", "/opt/couchbase/lib/erlang/lib/erl_docgen-0.3.4.1/ebin", "/opt/couchbase/lib/erlang/lib/erl_interface-3.7.15", "/opt/couchbase/lib/erlang/lib/erts-5.10.4.0.0.1/ebin", "/opt/couchbase/lib/erlang/lib/et-1.4.4.5/ebin", "/opt/couchbase/lib/erlang/lib/eunit-2.2.6/ebin", "/opt/couchbase/lib/erlang/lib/gs-1.5.15.2/ebin", "/opt/couchbase/lib/erlang/lib/hipe-3.10.2.2/ebin", "/opt/couchbase/lib/erlang/lib/ic-4.3.4/ebin", "/opt/couchbase/lib/erlang/lib/inets-5.9.8/ebin", "/opt/couchbase/lib/erlang/lib/mnesia-4.11/ebin", "/opt/couchbase/lib/erlang/lib/orber-3.6.26.1/ebin", "/opt/couchbase/lib/erlang/lib/os_mon-2.2.14/ebin", "/opt/couchbase/lib/erlang/lib/otp_mibs-1.0.8/ebin", "/opt/couchbase/lib/erlang/lib/parsetools-2.0.10/ebin", "/opt/couchbase/lib/erlang/lib/percept-0.8.8.2/ebin", "/opt/couchbase/lib/erlang/lib/pman-2.7.1.4/ebin", "/opt/couchbase/lib/erlang/lib/public_key-0.21/ebin", "/opt/couchbase/lib/erlang/lib/reltool-0.6.4.1/ebin", "/opt/couchbase/lib/erlang/lib/runtime_tools-1.8.13/ebin", "/opt/couchbase/lib/erlang/lib/sasl-2.3.4/ebin", "/opt/couchbase/lib/erlang/lib/snmp-4.25/ebin", "/opt/couchbase/lib/erlang/lib/ssh-3.0/ebin", "/opt/couchbase/lib/erlang/lib/ssl-5.3.3/ebin", "/opt/couchbase/lib/erlang/lib/syntax_tools-1.6.13/ebin", "/opt/couchbase/lib/erlang/lib/test_server-3.6.4/ebin", "/opt/couchbase/lib/erlang/lib/toolbar-1.4.2.3/ebin", "/opt/couchbase/lib/erlang/lib/tools-2.6.13/ebin", "/opt/couchbase/lib/erlang/lib/tv-2.1.4.10/ebin", "/opt/couchbase/lib/erlang/lib/typer-0.9.5/ebin", "/opt/couchbase/lib/erlang/lib/webtool-0.8.9.2/ebin", "/opt/couchbase/lib/erlang/lib/xmerl-1.3.6/ebin", "/opt/couchbase/lib/couchdb/plugins/gc-couchbase-1.0.0/ebin", "/opt/couchbase/lib/couchdb/plugins/vtree-0.1.0/ebin", "/opt/couchbase/lib/couchdb/plugins/wkb-1.2.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/couch-1.2.0a-961ad59-git/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/couch_dcp-1.0.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/couch_index_merger-1.0.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/couch_set_view-1.0.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/couch_view_parser-1.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/ejson-0.1.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/erlang-oauth/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/etap/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/lhttpc-1.3/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/mapreduce-1.0/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/mochiweb-1.4.1/ebin", "/opt/couchbase/lib/couchdb/erlang/lib/snappy-1.0.4/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/ale/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/gen_smtp/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/mlockall/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/ns_babysitter/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/ns_couchdb/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/ns_server/ebin", "/opt/couchbase/lib/ns_server/erlang/lib/ns_ssl_proxy/ebin", "/opt/couchbase/lib/erlang/lib/stdlib-1.19.4/ebin", "/opt/couchbase/lib/erlang/lib/kernel-2.16.4/ebin", ".","-smp","enable","+P","327680","+K","true", "-kernel","error_logger","false","-sasl", "sasl_error_logger","false","-nouser","-run", "child_erlang","child_start","ns_ssl_proxy"], [use_stdio, {env, [{"NS_SSL_PROXY_ENV_ARGS", "[{upstream_port,11215},\n {downstream_port,11214},\n {local_memcached_port,11210},\n {cert_file,\"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem\"},\n {private_key_file,\"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem\"},\n {cacert_file,\"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem-ca\"},\n {ssl_minimum_protocol,'tlsv1.2'},\n {loglevel_cluster,debug},\n {path_config_libdir,\"/opt/couchbase/lib\"},\n {path_config_tmpdir,\"/opt/couchbase/var/lib/couchbase/tmp\"},\n {loglevel_ns_server,debug},\n {loglevel_access,info},\n {loglevel_error_logger,debug},\n {loglevel_xdcr,debug},\n {path_config_secdir,\"/opt/couchbase/etc/security\"},\n {loglevel_mapreduce_errors,debug},\n {loglevel_ns_doctor,debug},\n {loglevel_default,debug},\n {loglevel_stats,debug},\n {loglevel_user,debug},\n {error_logger_mf_dir,\"/opt/couchbase/var/lib/couchbase/logs\"},\n {path_config_etcdir,\"/opt/couchbase/etc/couchbase\"},\n {path_config_bindir,\"/opt/couchbase/bin\"},\n {net_kernel_verbosity,10},\n {loglevel_rebalance,debug},\n {loglevel_menelaus,debug},\n {disk_sink_opts,[{rotation,[{compress,true},\n {size,41943040},\n {num_files,10},\n {buffer_size_max,52428800}]}]},\n {loglevel_views,debug},\n {path_config_datadir,\"/opt/couchbase/var/lib/couchbase\"},\n {loglevel_xdcr_trace,error},\n {loglevel_couchdb,info}]"}, {"ERL_CRASH_DUMP", "erl_crash.dump.1463009967.27439.xdcr_proxy"}]}]}]}, infinity]}} by 1000ms Stacktrace: [{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]}, {ns_ports_setup,set_children,2, [{file,"src/ns_ports_setup.erl"},{line,68}]}, {ns_ports_setup,set_children_and_loop,3, [{file,"src/ns_ports_setup.erl"},{line,84}]}, {misc,delaying_crash,2,[{file,"src/misc.erl"},{line,1622}]}, {proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]
[ns_server:debug,2016-05-11T16:43:25.474-07:00,ns_1@127.0.0.1:<0.369.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.368.0>} exited with reason shutdown
[ns_server:debug,2016-05-11T16:43:25.474-07:00,ns_1@127.0.0.1:<0.367.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.366.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.486-07:00,ns_1@127.0.0.1:<0.364.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.363.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.573-07:00,ns_1@127.0.0.1:<0.361.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.360.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.589-07:00,ns_1@127.0.0.1:<0.358.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_stats_event,<0.357.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.590-07:00,ns_1@127.0.0.1:<0.356.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_tick_event,<0.353.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.590-07:00,ns_1@127.0.0.1:<0.355.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ale_stats_events,<0.353.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.590-07:00,ns_1@127.0.0.1:<0.352.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.351.0>} exited with reason shutdown [error_logger:error,2016-05-11T16:43:25.590-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================SUPERVISOR REPORT========================= Supervisor: {local,ns_bucket_sup} Context: shutdown_error Reason: normal Offender: [{pid,<0.352.0>}, {name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.341.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.340.0>} exited with reason shutdown 
[ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.325.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.324.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.1692.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.1691.0>} exited with reason killed [ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.318.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {json_rpc_events,<0.317.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.320.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.317.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.591-07:00,ns_1@127.0.0.1:<0.319.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_node_disco_events,<0.317.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.592-07:00,ns_1@127.0.0.1:<0.291.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.290.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.592-07:00,ns_1@127.0.0.1:<0.286.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {master_activity_events,<0.285.0>} exited with reason killed [ns_server:info,2016-05-11T16:43:25.592-07:00,ns_1@127.0.0.1:mb_master<0.261.0>:mb_master:terminate:299]Synchronously shutting down child mb_master_sup [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.262.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.261.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.254.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.253.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.247.0>:ns_pubsub:do_subscribe_link:145]Parent 
process of subscription {buckets_events,<0.246.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.238.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.236.0>} exited with reason killed [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.227.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events_local,<0.226.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.235.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.234.0>} exited with reason killed [ns_server:debug,2016-05-11T16:43:25.593-07:00,ns_1@127.0.0.1:<0.214.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.213.0>} exited with reason shutdown [error_logger:error,2016-05-11T16:43:25.594-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================CRASH REPORT========================= crasher: initial call: gen_event:init_it/6 pid: <0.237.0> registered_name: bucket_info_cache_invalidations exception exit: killed in function gen_event:terminate_server/4 (gen_event.erl, line 320) ancestors: [bucket_info_cache,ns_server_sup,ns_server_nodes_sup, <0.152.0>,ns_server_cluster_sup,<0.87.0>] messages: [] links: [] dictionary: [] trap_exit: true status: running heap_size: 376 stack_size: 27 reductions: 155 neighbours: [ns_server:debug,2016-05-11T16:43:25.596-07:00,ns_1@127.0.0.1:<0.205.0>:remote_monitors:handle_down:158]Caller of remote monitor <0.182.0> died with shutdown. 
Exiting [ns_server:debug,2016-05-11T16:43:25.596-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.181.0>:ns_port_server:terminate:182]Sending shutdown to port ns_couchdb [error_logger:info,2016-05-11T16:43:25.606-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.194.0>,connection_closed}} [ns_server:debug,2016-05-11T16:43:25.608-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.181.0>:ns_port_server:terminate:185]ns_couchdb has exited [ns_server:info,2016-05-11T16:43:25.608-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.181.0>:ns_port_server:log:210]ns_couchdb<0.181.0>: 27549: got shutdown request. Exiting ns_couchdb<0.181.0>: [os_mon] memory supervisor port (memsup): Erlang has closed ns_couchdb<0.181.0>: [os_mon] cpu supervisor port (cpu_sup): Erlang has closed [ns_server:debug,2016-05-11T16:43:25.608-07:00,ns_1@127.0.0.1:<0.159.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.158.0>} exited with reason shutdown [ns_server:debug,2016-05-11T16:43:25.609-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:wait_saver:833]Done waiting for saver. 
[ns_server:debug,2016-05-11T16:43:25.608-07:00,ns_1@127.0.0.1:<0.150.0>:ns_pubsub:do_subscribe_link:145]Parent process of subscription {ns_config_events,<0.149.0>} exited with reason shutdown [ns_server:info,2016-05-11T16:43:25.610-07:00,ns_1@127.0.0.1:<0.2.0>:ns_bootstrap:stop:46]Successfully stopped ns_server [error_logger:info,2016-05-11T16:43:25.610-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= application: ns_server exited: stopped type: permanent [ns_server:info,2016-05-11T16:43:31.684-07:00,nonode@nohost:<0.87.0>:ns_server:init_logging:151]Started & configured logging [ns_server:info,2016-05-11T16:43:31.694-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]Static config terms: [{error_logger_mf_dir,"/opt/couchbase/var/lib/couchbase/logs"}, {path_config_bindir,"/opt/couchbase/bin"}, {path_config_etcdir,"/opt/couchbase/etc/couchbase"}, {path_config_libdir,"/opt/couchbase/lib"}, {path_config_datadir,"/opt/couchbase/var/lib/couchbase"}, {path_config_tmpdir,"/opt/couchbase/var/lib/couchbase/tmp"}, {path_config_secdir,"/opt/couchbase/etc/security"}, {nodefile,"/opt/couchbase/var/lib/couchbase/couchbase-server.node"}, {loglevel_default,debug}, {loglevel_couchdb,info}, {loglevel_ns_server,debug}, {loglevel_error_logger,debug}, {loglevel_user,debug}, {loglevel_menelaus,debug}, {loglevel_ns_doctor,debug}, {loglevel_stats,debug}, {loglevel_rebalance,debug}, {loglevel_cluster,debug}, {loglevel_views,debug}, {loglevel_mapreduce_errors,debug}, {loglevel_xdcr,debug}, {loglevel_xdcr_trace,error}, {loglevel_access,info}, {disk_sink_opts, [{rotation, [{compress,true}, {size,41943040}, {num_files,10}, {buffer_size_max,52428800}]}]}, {disk_sink_opts_xdcr_trace, [{rotation,[{compress,false},{size,83886080},{num_files,5}]}]}, {net_kernel_verbosity,10}] [ns_server:warn,2016-05-11T16:43:31.694-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter 
error_logger_mf_dir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.694-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_bindir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.694-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_etcdir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_libdir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_datadir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_tmpdir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter path_config_secdir, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter nodefile, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_default, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_couchdb, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_server, which is given from command line [ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_error_logger, which is given from command line 
[ns_server:warn,2016-05-11T16:43:31.695-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_user, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_menelaus, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_ns_doctor, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_stats, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_rebalance, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_cluster, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_views, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_mapreduce_errors, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_xdcr_trace, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter loglevel_access, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter 
disk_sink_opts, which is given from command line [ns_server:warn,2016-05-11T16:43:31.696-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter disk_sink_opts_xdcr_trace, which is given from command line [ns_server:warn,2016-05-11T16:43:31.697-07:00,nonode@nohost:<0.87.0>:ns_server:log_pending:32]not overriding parameter net_kernel_verbosity, which is given from command line [error_logger:info,2016-05-11T16:43:31.701-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.127.0>}, {name,local_tasks}, {mfargs,{local_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:31.708-07:00,nonode@nohost:ns_server_cluster_sup<0.126.0>:log_os_info:start_link:25]OS type: {unix,linux} Version: {2,6,32} Runtime info: [{otp_release,"R16B03-1"}, {erl_version,"5.10.4.0.0.1"}, {erl_version_long, "Erlang R16B03-1 (erts-5.10.4.0.0.1) [source-62b74b5] [64-bit] [smp:4:4] [async-threads:16] [kernel-poll:true]\n"}, {system_arch_raw,"x86_64-unknown-linux-gnu"}, {system_arch,"x86_64-unknown-linux-gnu"}, {localtime,{{2016,5,11},{16,43,31}}}, {memory, [{total,25526480}, {processes,8979840}, {processes_used,8979008}, {system,16546640}, {atom,331249}, {atom_used,307557}, {binary,75024}, {code,7588538}, {ets,2240552}]}, {loaded, [ns_info,log_os_info,local_tasks,restartable, ns_server_cluster_sup,calendar,ale_default_formatter, 'ale_logger-metakv','ale_logger-rebalance', 'ale_logger-xdcr_trace','ale_logger-menelaus', 'ale_logger-stats','ale_logger-access', 'ale_logger-ns_server','ale_logger-user', 'ale_logger-ns_doctor','ale_logger-cluster', 'ale_logger-xdcr',otp_internal,ns_log_sink,io_lib_fread, ale_disk_sink,misc,couch_util,ns_server,filelib,cpu_sup, memsup,disksup,os_mon,io,release_handler,overload, alarm_handler,sasl,timer,tftp_sup,httpd_sup, 
httpc_handler_sup,httpc_cookie,inets_trace,httpc_manager, httpc,httpc_profile_sup,httpc_sup,ftp_sup,inets_sup, inets_app,ssl,lhttpc_manager,lhttpc_sup,lhttpc, tls_connection_sup,ssl_session_cache,ssl_pkix_db, ssl_manager,ssl_sup,ssl_app,crypto_server,crypto_sup, crypto_app,ale_error_logger_handler, 'ale_logger-ale_logger','ale_logger-error_logger', beam_opcodes,beam_dict,beam_asm,beam_validator,beam_z, beam_flatten,beam_trim,beam_receive,beam_bsm,beam_peep, beam_dead,beam_split,beam_type,beam_bool,beam_except, beam_clean,beam_utils,beam_block,beam_jump,beam_a, v3_codegen,v3_life,v3_kernel,sys_core_dsetel,erl_bifs, sys_core_fold,cerl_trees,sys_core_inline,core_lib,cerl, v3_core,erl_bits,erl_expand_records,sys_pre_expand,sofs, erl_internal,sets,ordsets,erl_lint,compile, dynamic_compile,ale_utils,io_lib_pretty,io_lib_format, io_lib,ale_codegen,dict,ale,ale_dynamic_sup,ale_sup, ale_app,epp,ns_bootstrap,child_erlang,file_io_server, orddict,erl_eval,file,c,kernel_config,user_sup, supervisor_bridge,standard_error,code_server,unicode, hipe_unified_loader,gb_sets,ets,binary,code,file_server, net_kernel,global_group,erl_distribution,filename,os, inet_parse,inet,inet_udp,inet_config,inet_db,global, gb_trees,rpc,supervisor,kernel,application_master,sys, application,gen_server,erl_parse,proplists,erl_scan,lists, application_controller,proc_lib,gen,gen_event, error_logger,heart,error_handler,erts_internal,erlang, erl_prim_loader,prim_zip,zlib,prim_file,prim_inet, prim_eval,init,otp_ring0]}, {applications, [{lhttpc,"Lightweight HTTP Client","1.3.0"}, {os_mon,"CPO CXC 138 46","2.2.14"}, {public_key,"Public key infrastructure","0.21"}, {asn1,"The Erlang ASN1 compiler version 2.0.4","2.0.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}, {ale,"Another Logger for Erlang","4.1.1-5914-enterprise"}, {inets,"INETS CXC 138 49","5.9.8"}, {ns_server,"Couchbase server","4.1.1-5914-enterprise"}, {crypto,"CRYPTO version 2","3.2"}, {ssl,"Erlang/OTP SSL application","5.3.3"}, {sasl,"SASL CXC 138 
11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}]}, {pre_loaded, [erts_internal,erlang,erl_prim_loader,prim_zip,zlib, prim_file,prim_inet,prim_eval,init,otp_ring0]}, {process_count,94}, {node,nonode@nohost}, {nodes,[]}, {registered, [local_tasks,inets_sup,code_server,ale_stats_events, lhttpc_sup,ale,application_controller,standard_error_sup, lhttpc_manager,release_handler,ale_sup,kernel_safe_sup, httpd_sup,standard_error,overload,error_logger, ale_dynamic_sup,alarm_handler,timer_server,'sink-ns_log', sasl_safe_sup,'sink-disk_default',crypto_server, 'sink-disk_metakv',crypto_sup,init,'sink-disk_access_int', inet_db,os_mon_sup,tftp_sup,rex,'sink-disk_access', kernel_sup,cpu_sup,'sink-xdcr_trace',global_name_server, tls_connection_sup,memsup,'sink-disk_reports',ssl_sup, disksup,file_server_2,'sink-disk_stats',httpc_sup, global_group,'sink-disk_xdcr_errors',ssl_manager, 'sink-disk_xdcr',httpc_profile_sup,httpc_manager, 'sink-disk_debug',httpc_handler_sup,'sink-disk_error', ns_server_cluster_sup,ftp_sup,sasl_sup,erl_prim_loader]}, {cookie,nocookie}, {wordsize,8}, {wall_clock,2}] [ns_server:info,2016-05-11T16:43:31.718-07:00,nonode@nohost:ns_server_cluster_sup<0.126.0>:log_os_info:start_link:27]Manifest: ["","", " ", " ", " ", " "," "," ", " ", " ", " ", " "," ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", ""] [error_logger:info,2016-05-11T16:43:31.722-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.128.0>}, {name,timeout_diag_logger}, {mfargs,{timeout_diag_logger,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:info,2016-05-11T16:43:31.726-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/opt/couchbase/var/lib/couchbase/ip_start" [ns_server:info,2016-05-11T16:43:31.726-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:read_address_config_from_path:86]Reading ip config from "/opt/couchbase/var/lib/couchbase/ip" [ns_server:info,2016-05-11T16:43:31.726-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:init:163]ip config not found. Looks like we're brand new node [error_logger:info,2016-05-11T16:43:31.732-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,inet_gethost_native_sup} started: [{pid,<0.131.0>},{mfa,{inet_gethost_native,init,[[]]}}] [error_logger:info,2016-05-11T16:43:31.732-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.130.0>}, {name,inet_gethost_native_sup}, {mfargs,{inet_gethost_native,start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:31.795-07:00,nonode@nohost:dist_manager<0.129.0>:dist_manager:bringup:214]Attempting to bring up net_kernel with name 'ns_1@127.0.0.1' [error_logger:info,2016-05-11T16:43:31.812-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.133.0>}, {name,erl_epmd}, {mfargs,{erl_epmd,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.812-07:00,nonode@nohost:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.134.0>}, 
{name,auth}, {mfargs,{auth,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:31.813-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:configure_net_kernel:255]Set net_kernel vebosity to 10 -> 0 [error_logger:info,2016-05-11T16:43:31.813-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,net_sup} started: [{pid,<0.135.0>}, {name,net_kernel}, {mfargs, {net_kernel,start_link, [['ns_1@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.814-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_sup} started: [{pid,<0.132.0>}, {name,net_sup_dynamic}, {mfargs, {erl_distribution,start_link, [['ns_1@127.0.0.1',longnames]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}] [ns_server:info,2016-05-11T16:43:31.816-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:save_node:147]saving node to "/opt/couchbase/var/lib/couchbase/couchbase-server.node" [ns_server:debug,2016-05-11T16:43:31.818-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:bringup:228]Attempted to save node name to disk: ok [ns_server:debug,2016-05-11T16:43:31.818-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:wait_for_node:235]Waiting for connection to node 'babysitter_of_ns_1@127.0.0.1' to be established [error_logger:info,2016-05-11T16:43:31.818-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'babysitter_of_ns_1@127.0.0.1'}} [ns_server:debug,2016-05-11T16:43:31.829-07:00,ns_1@127.0.0.1:dist_manager<0.129.0>:dist_manager:wait_for_node:244]Observed node 'babysitter_of_ns_1@127.0.0.1' to 
come up [error_logger:info,2016-05-11T16:43:31.836-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.129.0>}, {name,dist_manager}, {mfargs,{dist_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.838-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.140.0>}, {name,ns_cookie_manager}, {mfargs,{ns_cookie_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.838-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.141.0>}, {name,ns_cluster}, {mfargs,{ns_cluster,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:31.842-07:00,ns_1@127.0.0.1:ns_config_sup<0.142.0>:ns_config_sup:init:32]loading static ns_config from "/opt/couchbase/etc/couchbase/config" [error_logger:info,2016-05-11T16:43:31.842-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.143.0>}, {name,ns_config_events}, {mfargs, {gen_event,start_link,[{local,ns_config_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.842-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.144.0>}, 
{name,ns_config_events_local}, {mfargs, {gen_event,start_link, [{local,ns_config_events_local}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:31.903-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1019]Loading static config from "/opt/couchbase/etc/couchbase/config" [ns_server:info,2016-05-11T16:43:31.905-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1033]Loading dynamic config from "/opt/couchbase/var/lib/couchbase/config/config.dat" [ns_server:debug,2016-05-11T16:43:31.914-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1041]Here's full dynamic config we loaded: [[{ssl_minimum_protocol, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229355}}]}| 'tlsv1.2']}, {rest_creds, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| {"Administrator",{password,"*****"}}]}, {rest,[{port,8091}]}, {{node,'ns_1@127.0.0.1',services}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]}, kv]}, {{metakv,<<"/indexing/settings/config">>}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>]}, {memory_quota, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]}| 3103]}, {{node,'ns_1@127.0.0.1',stop_xdcr}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229179}}]}| '_deleted']}, {{service_map,index}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}]}, {{service_map,n1ql}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}]}, {goxdcr_upgrade, 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| '_deleted']}, {read_only_user_creds, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| null]}, {server_groups, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, [{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]}, {cluster_compat_version, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{5,63630229178}}]}, 4,1]}, {otp, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, {cookie,vuzfvvczcpnjsgwq}]}, {cert_and_pkey, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIFE2n0hhgvIcwCwYJKoZIhvcNAQELMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgM2M3NDBmY2EwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIDNjNzQwZmNh\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0Cv7vNECWUBN/JieYqSf\n+O0Dymyr49xvXzfdqH89k/RdSS8zFrw6CFlR23s494dEGWyHIFGsp8go2qKZh83T\noFl5B3ef3HnuJrnefGmbA+elwNB/lcU9echX8hj7MjYyORQGDjBMgBHFBc5Xzgbh\na+qcdC5H30hfwaLkN9UegUted6uiKmRvFZDkPbVUIFpcZ8Ut2OUhX6+ytMgCY9gb\nvOmQ+ZfdAklDC0UTFaXBgAnU+74sDrJ91OV6Gy33IKycApxZIZdO7vc4x+d56EJK\nvBGr0pV4LAxOJhJHpu/yXry8zLcWFUfxNcU0u7kyCNxzE5ErqSn+OroJhmdfDYkL\n7wIDAQABozgwNjAOBgNVHQ8BAf8EBAMCAKQwEwYDVR0lBAwwCgYIKwYBBQUHAwEw\nDwYDVR0TAQH/BAUwAwEB/zALBgkqhkiG9w0BAQsDggEBAKyH8frLdiivm9B50Ock\nfH/dgo3FoEUbZWWcgbitlpODuJO1lH1yKdIJZdypbYx+S9hcfTcYVb/qJp5Y0mk8\nFMJNtBMYmUY0TttqCEHCqjIgZ3H1H0m8Ir0sv6lJ9FGr4uyPMc57Gcxu8/IUFHah\nzxVRjxcV463lRwkbZyUdsAd5U7WKb8+mEWdCRLHBEAfYD2KK/v1xh6SsADmj8Q+S\noGoIMC3dfoh5fFzhjuzsXIt1X0mKfORvxvhs7J5uTt9UGZL5C3Kc+QKkFkLgrpK5\neTc0San4wzlRGiEOgCc2vxxEInUAcduMDxNbs/GWFyKTMEXDLjBPF/X/2aMnEzlG\nt7Q=\n-----END CERTIFICATE-----\n">>, <<"*****">>}]}, {alert_limits,[{max_overhead_perc,50},{max_disk_used,90}]}, {audit, [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}, 
{log_path,"/opt/couchbase/var/lib/couchbase/logs"}]}, {auto_failover_cfg,[{enabled,false},{timeout,120},{max_nodes,1},{count,0}]}, {autocompaction, [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}]}, {buckets,[{configs,[]}]}, {drop_request_memory_threshold_mib,undefined}, {email_alerts, [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server, [{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]}, {alerts, [auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events]}]}, {index_aware_rebalance_disabled,false}, {max_bucket_count,10}, {memcached,[]}, {nodes_wanted,['ns_1@127.0.0.1']}, {remote_clusters,[]}, {replication,[{enabled,true}]}, {set_view_update_daemon, [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}]}, {{couchdb,max_parallel_indexers},4}, {{couchdb,max_parallel_replica_indexers},2}, {{request_limit,capi},undefined}, {{request_limit,rest},undefined}, {{node,'ns_1@127.0.0.1',audit}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {{node,'ns_1@127.0.0.1',capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8092]}, {{node,'ns_1@127.0.0.1',compaction_daemon}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {check_interval,30}, {min_file_size,131072}]}, {{node,'ns_1@127.0.0.1',config_version}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {4,1,1}]}, {{node,'ns_1@127.0.0.1',indexer_admin_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9100]}, {{node,'ns_1@127.0.0.1',indexer_http_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9102]}, {{node,'ns_1@127.0.0.1',indexer_scan_port}, 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9101]}, {{node,'ns_1@127.0.0.1',indexer_stcatchup_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9104]}, {{node,'ns_1@127.0.0.1',indexer_stinit_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9103]}, {{node,'ns_1@127.0.0.1',indexer_stmaint_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9105]}, {{node,'ns_1@127.0.0.1',is_enterprise}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, {{node,'ns_1@127.0.0.1',isasl}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]}, {{node,'ns_1@127.0.0.1',ldap_enabled}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, {{node,'ns_1@127.0.0.1',membership}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| active]}, {{node,'ns_1@127.0.0.1',memcached}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11210}, {dedicated_port,11209}, {ssl_port,11207}, {admin_user,"_admin"}, {admin_pass,"*****"}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {engines, [{membase, [{engine,"/opt/couchbase/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached, [{engine,"/opt/couchbase/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/opt/couchbase/var/lib/couchbase/config/memcached.json"}, {audit_file,"/opt/couchbase/var/lib/couchbase/config/audit.json"}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}]}, {{node,'ns_1@127.0.0.1',memcached_config}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, 
[{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>}, {cert, <<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {engine, {[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>}, {config, {"admin=~s;default_bucket_name=default;auto_create=false", [admin_user]}}]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}]}]}, {{node,'ns_1@127.0.0.1',memcached_defaults}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {verbosity,0}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/opt/couchbase/var/lib/couchbase/crash"}, {dedupe_nmvb_maps,false}]}, {{node,'ns_1@127.0.0.1',moxi}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11211}, {verbosity,[]}]}, {{node,'ns_1@127.0.0.1',ns_log}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]}, {{node,'ns_1@127.0.0.1',port_servers}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {{node,'ns_1@127.0.0.1',projector_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9999]}, {{node,'ns_1@127.0.0.1',query_port}, 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8093]}, {{node,'ns_1@127.0.0.1',rest}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,8091}, {port_meta,global}]}, {{node,'ns_1@127.0.0.1',ssl_capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18092]}, {{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11214]}, {{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11215]}, {{node,'ns_1@127.0.0.1',ssl_query_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18093]}, {{node,'ns_1@127.0.0.1',ssl_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18091]}, {{node,'ns_1@127.0.0.1',uuid}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| <<"0d9696803a535febe829002b30cd0eb5">>]}, {{node,'ns_1@127.0.0.1',xdcr_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9998]}, {{local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>}, [{'_vclock', [{<<"0d9696803a535febe829002b30cd0eb5">>,{10,63630229355}}]}]}]] [ns_server:info,2016-05-11T16:43:31.920-07:00,ns_1@127.0.0.1:ns_config<0.145.0>:ns_config:load_config:1075]Here's full dynamic config we loaded + static & default config: [{{local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{10,63630229355}}]}]}, {{node,'ns_1@127.0.0.1',xdcr_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9998]}, {{node,'ns_1@127.0.0.1',uuid}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| <<"0d9696803a535febe829002b30cd0eb5">>]}, {{node,'ns_1@127.0.0.1',ssl_rest_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18091]}, 
{{node,'ns_1@127.0.0.1',ssl_query_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18093]}, {{node,'ns_1@127.0.0.1',ssl_proxy_upstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11215]}, {{node,'ns_1@127.0.0.1',ssl_proxy_downstream_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 11214]}, {{node,'ns_1@127.0.0.1',ssl_capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 18092]}, {{node,'ns_1@127.0.0.1',rest}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,8091}, {port_meta,global}]}, {{node,'ns_1@127.0.0.1',query_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8093]}, {{node,'ns_1@127.0.0.1',projector_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9999]}, {{node,'ns_1@127.0.0.1',port_servers}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {{node,'ns_1@127.0.0.1',ns_log}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {filename,"/opt/couchbase/var/lib/couchbase/ns_log"}]}, {{node,'ns_1@127.0.0.1',moxi}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11211}, {verbosity,[]}]}, {{node,'ns_1@127.0.0.1',memcached_defaults}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {verbosity,0}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/opt/couchbase/var/lib/couchbase/crash"}, {dedupe_nmvb_maps,false}]}, {{node,'ns_1@127.0.0.1',memcached_config}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, [{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, 
{[{key, <<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>}, {cert, <<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {engine, {[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>}, {config, {"admin=~s;default_bucket_name=default;auto_create=false", [admin_user]}}]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}]}]}, {{node,'ns_1@127.0.0.1',memcached}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11210}, {dedicated_port,11209}, {ssl_port,11207}, {admin_user,"_admin"}, {admin_pass,"*****"}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {engines, [{membase, [{engine,"/opt/couchbase/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached, [{engine,"/opt/couchbase/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/opt/couchbase/var/lib/couchbase/config/memcached.json"}, {audit_file,"/opt/couchbase/var/lib/couchbase/config/audit.json"}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}]}, {{node,'ns_1@127.0.0.1',membership}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| active]}, {{node,'ns_1@127.0.0.1',ldap_enabled}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, 
{{node,'ns_1@127.0.0.1',isasl}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}]}, {{node,'ns_1@127.0.0.1',is_enterprise}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| true]}, {{node,'ns_1@127.0.0.1',indexer_stmaint_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9105]}, {{node,'ns_1@127.0.0.1',indexer_stinit_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9103]}, {{node,'ns_1@127.0.0.1',indexer_stcatchup_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9104]}, {{node,'ns_1@127.0.0.1',indexer_scan_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9101]}, {{node,'ns_1@127.0.0.1',indexer_http_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9102]}, {{node,'ns_1@127.0.0.1',indexer_admin_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 9100]}, {{node,'ns_1@127.0.0.1',config_version}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {4,1,1}]}, {{node,'ns_1@127.0.0.1',compaction_daemon}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {check_interval,30}, {min_file_size,131072}]}, {{node,'ns_1@127.0.0.1',capi_port}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| 8092]}, {{node,'ns_1@127.0.0.1',audit}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}]}, {{request_limit,rest},undefined}, {{request_limit,capi},undefined}, {{couchdb,max_parallel_replica_indexers},2}, {{couchdb,max_parallel_indexers},4}, {set_view_update_daemon, [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}]}, {replication,[{enabled,true}]}, {remote_clusters,[]}, {nodes_wanted,['ns_1@127.0.0.1']}, {memcached,[]}, {max_bucket_count,10}, {index_aware_rebalance_disabled,false}, 
{email_alerts, [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server, [{user,[]},{pass,"*****"},{host,"localhost"},{port,25},{encrypt,false}]}, {alerts, [auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events]}]}, {drop_request_memory_threshold_mib,undefined}, {buckets,[{configs,[]}]}, {autocompaction, [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}]}, {auto_failover_cfg,[{enabled,false},{timeout,120},{max_nodes,1},{count,0}]}, {audit, [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}]}, {alert_limits,[{max_overhead_perc,50},{max_disk_used,90}]}, {cert_and_pkey, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}| {<<"-----BEGIN 
CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIFE2n0hhgvIcwCwYJKoZIhvcNAQELMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgM2M3NDBmY2EwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIDNjNzQwZmNh\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0Cv7vNECWUBN/JieYqSf\n+O0Dymyr49xvXzfdqH89k/RdSS8zFrw6CFlR23s494dEGWyHIFGsp8go2qKZh83T\noFl5B3ef3HnuJrnefGmbA+elwNB/lcU9echX8hj7MjYyORQGDjBMgBHFBc5Xzgbh\na+qcdC5H30hfwaLkN9UegUted6uiKmRvFZDkPbVUIFpcZ8Ut2OUhX6+ytMgCY9gb\nvOmQ+ZfdAklDC0UTFaXBgAnU+74sDrJ91OV6Gy33IKycApxZIZdO7vc4x+d56EJK\nvBGr0pV4LAxOJhJHpu/yXry8zLcWFUfxNcU0u7kyCNxzE5ErqSn+OroJhmdfDYkL\n7wIDAQABozgwNjAOBgNVHQ8BAf8EBAMCAKQwEwYDVR0lBAwwCgYIKwYBBQUHAwEw\nDwYDVR0TAQH/BAUwAwEB/zALBgkqhkiG9w0BAQsDggEBAKyH8frLdiivm9B50Ock\nfH/dgo3FoEUbZWWcgbitlpODuJO1lH1yKdIJZdypbYx+S9hcfTcYVb/qJp5Y0mk8\nFMJNtBMYmUY0TttqCEHCqjIgZ3H1H0m8Ir0sv6lJ9FGr4uyPMc57Gcxu8/IUFHah\nzxVRjxcV463lRwkbZyUdsAd5U7WKb8+mEWdCRLHBEAfYD2KK/v1xh6SsADmj8Q+S\noGoIMC3dfoh5fFzhjuzsXIt1X0mKfORvxvhs7J5uTt9UGZL5C3Kc+QKkFkLgrpK5\neTc0San4wzlRGiEOgCc2vxxEInUAcduMDxNbs/GWFyKTMEXDLjBPF/X/2aMnEzlG\nt7Q=\n-----END CERTIFICATE-----\n">>, <<"*****">>}]}, {otp, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, {cookie,vuzfvvczcpnjsgwq}]}, {cluster_compat_version, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{5,63630229178}}]}, 4,1]}, {server_groups, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, [{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]]}, {read_only_user_creds, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| null]}, {goxdcr_upgrade, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| '_deleted']}, {{service_map,n1ql}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}]}, {{service_map,index}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}]}, {{node,'ns_1@127.0.0.1',stop_xdcr}, 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229179}}]}| '_deleted']}, {memory_quota, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]}| 3103]}, {{metakv,<<"/indexing/settings/config">>}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>]}, {{node,'ns_1@127.0.0.1',services}, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]},kv]}, {rest,[{port,8091}]}, {rest_creds, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| {"Administrator",{password,"*****"}}]}, {ssl_minimum_protocol, [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229355}}]}| 'tlsv1.2']}] [error_logger:info,2016-05-11T16:43:31.924-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.145.0>}, {name,ns_config}, {mfargs, {ns_config,start_link, ["/opt/couchbase/etc/couchbase/config", ns_config_default]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.926-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.148.0>}, {name,ns_config_remote}, {mfargs, {ns_config_replica,start_link, [{local,ns_config_remote}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:43:31.929-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_config_sup} started: [{pid,<0.149.0>}, {name,ns_config_log}, {mfargs,{ns_config_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.929-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.142.0>}, {name,ns_config_sup}, {mfargs,{ns_config_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:31.933-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.151.0>}, {name,vbucket_filter_changes_registry}, {mfargs, {ns_process_registry,start_link, [vbucket_filter_changes_registry, [{terminate_command,shutdown}]]}}, {restart_type,permanent}, {shutdown,100}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.953-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.154.0>}, {name,remote_monitors}, {mfargs,{remote_monitors,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:31.955-07:00,ns_1@127.0.0.1:menelaus_barrier<0.155.0>:one_shot_barrier:barrier_body:58]Barrier menelaus_barrier has started [error_logger:info,2016-05-11T16:43:31.956-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: 
{local,ns_server_nodes_sup} started: [{pid,<0.155.0>}, {name,menelaus_barrier}, {mfargs,{menelaus_sup,barrier_start_link,[]}}, {restart_type,temporary}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:31.956-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.156.0>}, {name,rest_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,rest_lhttpc_pool}, {connection_timeout,120000}, {pool_size,20}]]}}, {restart_type,{permanent,1}}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:31.988-07:00,ns_1@127.0.0.1:ns_ssl_services_setup<0.158.0>:ns_ssl_services_setup:init:334]Used ssl options: [{keyfile,"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem"}, {certfile,"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem"}, {versions,['tlsv1.2']}, {cacertfile,"/opt/couchbase/var/lib/couchbase/config/ssl-cert-key.pem-ca"}, {dh,<<48,130,1,8,2,130,1,1,0,152,202,99,248,92,201,35,238,246,5,77,93,120,10, 118,129,36,52,111,193,167,220,49,229,106,105,152,133,121,157,73,158, 232,153,197,197,21,171,140,30,207,52,165,45,8,221,162,21,199,183,66, 211,247,51,224,102,214,190,130,96,253,218,193,35,43,139,145,89,200,250, 145,92,50,80,134,135,188,205,254,148,122,136,237,220,186,147,187,104, 159,36,147,217,117,74,35,163,145,249,175,242,18,221,124,54,140,16,246, 169,84,252,45,47,99,136,30,60,189,203,61,86,225,117,255,4,91,46,110, 167,173,106,51,65,10,248,94,225,223,73,40,232,140,26,11,67,170,118,190, 67,31,127,233,39,68,88,132,171,224,62,187,207,160,189,209,101,74,8,205, 174,146,173,80,105,144,246,25,153,86,36,24,178,163,64,202,221,95,184, 110,244,32,226,217,34,55,188,230,55,16,216,247,173,246,139,76,187,66, 211,159,17,46,20,18,48,80,27,250,96,189,29,214,234,241,34,69,254,147, 103,220,133,40,164,84,8,44,241,61,164,151,9,135,41,60,75,4,202,133,173, 
72,6,69,167,89,112,174,40,229,171,2,1,2>>}, {ciphers,[{dhe_rsa,aes_256_cbc,sha256}, {dhe_dss,aes_256_cbc,sha256}, {rsa,aes_256_cbc,sha256}, {dhe_rsa,aes_128_cbc,sha256}, {dhe_dss,aes_128_cbc,sha256}, {rsa,aes_128_cbc,sha256}, {dhe_rsa,aes_256_cbc,sha}, {dhe_dss,aes_256_cbc,sha}, {rsa,aes_256_cbc,sha}, {dhe_rsa,'3des_ede_cbc',sha}, {dhe_dss,'3des_ede_cbc',sha}, {rsa,'3des_ede_cbc',sha}, {dhe_rsa,aes_128_cbc,sha}, {dhe_dss,aes_128_cbc,sha}, {rsa,aes_128_cbc,sha}]}] [error_logger:info,2016-05-11T16:43:32.002-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.158.0>}, {name,ns_ssl_services_setup}, {mfargs,{ns_ssl_services_setup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:32.059-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_ssl_services_sup} started: [{pid,<0.160.0>}, {name,ns_rest_ssl_service}, {mfargs, {restartable,start_link, [{ns_ssl_services_setup, start_link_rest_service,[]}, 1000]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:32.059-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.157.0>}, {name,ns_ssl_services_sup}, {mfargs,{ns_ssl_services_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:43:32.083-07:00,ns_1@127.0.0.1:wait_link_to_couchdb_node<0.179.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:126]Waiting for ns_couchdb node to start 
[error_logger:info,2016-05-11T16:43:32.083-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.178.0>}, {name,start_couchdb_node}, {mfargs,{ns_server_nodes_sup,start_couchdb_node,[]}}, {restart_type,{permanent,5}}, {shutdown,86400000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:32.083-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}} [ns_server:debug,2016-05-11T16:43:32.084-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2016-05-11T16:43:32.084-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.182.0>,shutdown}} [error_logger:info,2016-05-11T16:43:32.085-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}} [error_logger:info,2016-05-11T16:43:32.286-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}} [error_logger:info,2016-05-11T16:43:32.287-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.185.0>,shutdown}} [ns_server:debug,2016-05-11T16:43:32.287-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown} 
[error_logger:info,2016-05-11T16:43:32.287-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}} [error_logger:info,2016-05-11T16:43:32.488-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}} [ns_server:debug,2016-05-11T16:43:32.489-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: {badrpc,nodedown} [error_logger:info,2016-05-11T16:43:32.489-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{'EXIT',<0.188.0>,shutdown}} [error_logger:info,2016-05-11T16:43:32.489-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{net_kernel,875,nodedown,'couchdb_ns_1@127.0.0.1'}} [error_logger:info,2016-05-11T16:43:32.690-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================INFO REPORT========================= {net_kernel,{connect,normal,'couchdb_ns_1@127.0.0.1'}} [ns_server:debug,2016-05-11T16:43:32.729-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:32.933-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:33.134-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:33.335-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false 
[ns_server:debug,2016-05-11T16:43:33.537-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:33.739-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:33.941-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [ns_server:debug,2016-05-11T16:43:34.143-07:00,ns_1@127.0.0.1:<0.180.0>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:140]ns_couchdb is not ready: false [error_logger:info,2016-05-11T16:43:34.639-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.201.0>}, {name,timer2_server}, {mfargs,{timer2,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:34.840-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.178.0>:ns_port_server:log:210]ns_couchdb<0.178.0>: Apache CouchDB (LogLevel=info) is starting. 
[error_logger:info,2016-05-11T16:43:35.024-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.179.0>}, {name,wait_for_couchdb_node}, {mfargs, {erlang,apply, [#Fun,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:35.032-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:ns_storage_conf:setup_db_and_ix_paths:53]Initialize db_and_ix_paths variable with [{db_path, "/opt/couchbase/var/lib/couchbase/data"}, {index_path, "/opt/couchbase/var/lib/couchbase/data"}] [error_logger:info,2016-05-11T16:43:35.039-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.204.0>}, {name,ns_disksup}, {mfargs,{ns_disksup,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:35.041-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.205.0>}, {name,diag_handler_worker}, {mfargs,{work_queue,start_link,[diag_handler_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:35.052-07:00,ns_1@127.0.0.1:ns_server_sup<0.203.0>:dir_size:start_link:39]Starting quick version of dir_size with program name: godu [error_logger:info,2016-05-11T16:43:35.053-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.206.0>}, {name,dir_size}, {mfargs,{dir_size,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:43:35.055-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.207.0>}, {name,request_throttler}, {mfargs,{request_throttler,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:35.066-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.208.0>}, {name,ns_log}, {mfargs,{ns_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:35.066-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.209.0>}, {name,ns_crash_log_consumer}, {mfargs,{ns_log,start_link_crash_consumer,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:35.070-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.210.0>:ns_config_isasl_sync:init:63]isasl_sync init: ["/opt/couchbase/var/lib/couchbase/isasl.pw","_admin", "2bb824636f76a257101e37d538281ca2"] [ns_server:debug,2016-05-11T16:43:35.071-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.210.0>:ns_config_isasl_sync:init:71]isasl_sync init buckets: [] [ns_server:debug,2016-05-11T16:43:35.072-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.210.0>:ns_config_isasl_sync:writeSASLConf:143]Writing isasl passwd file: "/opt/couchbase/var/lib/couchbase/isasl.pw" [ns_server:info,2016-05-11T16:43:35.085-07:00,ns_1@127.0.0.1:ns_couchdb_port<0.178.0>:ns_port_server:log:210]ns_couchdb<0.178.0>: Apache CouchDB has started. Time to relax. ns_couchdb<0.178.0>: 31622: Booted. 
Waiting for shutdown request ns_couchdb<0.178.0>: working as port [ns_server:warn,2016-05-11T16:43:35.092-07:00,ns_1@127.0.0.1:ns_config_isasl_sync<0.210.0>:ns_memcached:connect:1290]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [error_logger:info,2016-05-11T16:43:36.093-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.210.0>}, {name,ns_config_isasl_sync}, {mfargs,{ns_config_isasl_sync,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.093-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.213.0>}, {name,ns_log_events}, {mfargs,{gen_event,start_link,[{local,ns_log_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.095-07:00,ns_1@127.0.0.1:ns_node_disco<0.216.0>:ns_node_disco:init:138]Initting ns_node_disco with [] [error_logger:info,2016-05-11T16:43:36.095-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.215.0>}, {name,ns_node_disco_events}, {mfargs, {gen_event,start_link, [{local,ns_node_disco_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.095-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync [user:info,2016-05-11T16:43:36.095-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:130]Node 'ns_1@127.0.0.1' synchronized otp cookie vuzfvvczcpnjsgwq from cluster 
[ns_server:debug,2016-05-11T16:43:36.096-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server" [ns_server:debug,2016-05-11T16:43:36.101-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok [ns_server:debug,2016-05-11T16:43:36.101-07:00,ns_1@127.0.0.1:<0.217.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [ns_server:debug,2016-05-11T16:43:36.103-07:00,ns_1@127.0.0.1:<0.217.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [error_logger:info,2016-05-11T16:43:36.103-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.216.0>}, {name,ns_node_disco}, {mfargs,{ns_node_disco,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.105-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.219.0>}, {name,ns_node_disco_log}, {mfargs,{ns_node_disco_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.107-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.220.0>}, {name,ns_node_disco_conf_events}, {mfargs,{ns_node_disco_conf_events,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, 
{child_type,worker}] [error_logger:info,2016-05-11T16:43:36.112-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.221.0>}, {name,ns_config_rep_merger}, {mfargs,{ns_config_rep,start_link_merger,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.113-07:00,ns_1@127.0.0.1:ns_config_rep<0.222.0>:ns_config_rep:init:68]init pulling [ns_server:debug,2016-05-11T16:43:36.113-07:00,ns_1@127.0.0.1:ns_config_rep<0.222.0>:ns_config_rep:init:70]init pushing [ns_server:debug,2016-05-11T16:43:36.115-07:00,ns_1@127.0.0.1:ns_config_rep<0.222.0>:ns_config_rep:init:74]init reannouncing [ns_server:debug,2016-05-11T16:43:36.116-07:00,ns_1@127.0.0.1:ns_config_events<0.143.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted [ns_server:debug,2016-05-11T16:43:36.116-07:00,ns_1@127.0.0.1:ns_config_events<0.143.0>:ns_node_disco_conf_events:handle_event:50]ns_node_disco_conf_events config on otp [ns_server:debug,2016-05-11T16:43:36.117-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync [ns_server:debug,2016-05-11T16:43:36.117-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server" [ns_server:debug,2016-05-11T16:43:36.117-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: alert_limits -> [{max_overhead_perc,50},{max_disk_used,90}] [ns_server:debug,2016-05-11T16:43:36.117-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: audit -> [{auditd_enabled,false}, {rotate_interval,86400}, {rotate_size,20971520}, {disabled,[]}, {sync,[]}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}] 
[ns_server:debug,2016-05-11T16:43:36.117-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: auto_failover_cfg -> [{enabled,false},{timeout,120},{max_nodes,1},{count,0}] [ns_server:debug,2016-05-11T16:43:36.118-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: autocompaction -> [{database_fragmentation_threshold,{30,undefined}}, {view_fragmentation_threshold,{30,undefined}}] [ns_server:debug,2016-05-11T16:43:36.118-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: buckets -> [[],{configs,[]}] [ns_server:debug,2016-05-11T16:43:36.118-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: cert_and_pkey -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229174}}]}| {<<"-----BEGIN CERTIFICATE-----\nMIIC/jCCAeigAwIBAgIIFE2n0hhgvIcwCwYJKoZIhvcNAQELMCQxIjAgBgNVBAMT\nGUNvdWNoYmFzZSBTZXJ2ZXIgM2M3NDBmY2EwHhcNMTMwMTAxMDAwMDAwWhcNNDkx\nMjMxMjM1OTU5WjAkMSIwIAYDVQQDExlDb3VjaGJhc2UgU2VydmVyIDNjNzQwZmNh\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0Cv7vNECWUBN/JieYqSf\n+O0Dymyr49xvXzfdqH89k/RdSS8zFrw6CFlR23s494dEGWyHIFGsp8go2qKZh83T\noFl5B3ef3HnuJrnefGmbA+elwNB/lcU"...>>, <<"*****">>}] [ns_server:debug,2016-05-11T16:43:36.118-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: cluster_compat_version -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{5,63630229178}}]},4,1] [ns_server:debug,2016-05-11T16:43:36.118-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: drop_request_memory_threshold_mib -> undefined [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: email_alerts -> [{recipients,["root@localhost"]}, {sender,"couchbase@localhost"}, {enabled,false}, {email_server,[{user,[]}, {pass,"*****"}, {host,"localhost"}, {port,25}, {encrypt,false}]}, 
{alerts,[auto_failover_node,auto_failover_maximum_reached, auto_failover_other_nodes_down,auto_failover_cluster_too_small, auto_failover_disabled,ip,disk,overhead,ep_oom_errors, ep_item_commit_failed,audit_dropped_events]}] [error_logger:info,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_node_disco_sup} started: [{pid,<0.222.0>}, {name,ns_config_rep}, {mfargs,{ns_config_rep,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: goxdcr_upgrade -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}| '_deleted'] [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: index_aware_rebalance_disabled -> false [error_logger:info,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.214.0>}, {name,ns_node_disco_sup}, {mfargs,{ns_node_disco_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_sync:110]ns_cookie_manager do_cookie_sync [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: max_bucket_count -> 10 
[ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:147]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server" [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:<0.226.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: memcached -> [] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:<0.226.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: memory_quota -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]}|3103] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: nodes_wanted -> ['ns_1@127.0.0.1'] [ns_server:debug,2016-05-11T16:43:36.119-07:00,ns_1@127.0.0.1:ns_config_rep<0.222.0>:ns_config_rep:do_push_keys:321]Replicating some config keys ([alert_limits,audit,auto_failover_cfg, autocompaction,buckets,cert_and_pkey, cluster_compat_version, drop_request_memory_threshold_mib,email_alerts, goxdcr_upgrade,index_aware_rebalance_disabled, max_bucket_count,memcached,memory_quota, nodes_wanted,otp,read_only_user_creds, remote_clusters,replication,rest,rest_creds, server_groups,set_view_update_daemon, ssl_minimum_protocol, {couchdb,max_parallel_indexers}, {couchdb,max_parallel_replica_indexers}, {local_changes_count, <<"0d9696803a535febe829002b30cd0eb5">>}, {metakv,<<"/indexing/settings/config">>}, {request_limit,capi}, {request_limit,rest}, {service_map,index}, {service_map,n1ql}, {node,'ns_1@127.0.0.1',audit}, 
{node,'ns_1@127.0.0.1',capi_port}, {node,'ns_1@127.0.0.1',compaction_daemon}, {node,'ns_1@127.0.0.1',config_version}, {node,'ns_1@127.0.0.1',indexer_admin_port}, {node,'ns_1@127.0.0.1',indexer_http_port}, {node,'ns_1@127.0.0.1',indexer_scan_port}, {node,'ns_1@127.0.0.1',indexer_stcatchup_port}, {node,'ns_1@127.0.0.1',indexer_stinit_port}, {node,'ns_1@127.0.0.1',indexer_stmaint_port}, {node,'ns_1@127.0.0.1',is_enterprise}, {node,'ns_1@127.0.0.1',isasl}, {node,'ns_1@127.0.0.1',ldap_enabled}, {node,'ns_1@127.0.0.1',membership}, {node,'ns_1@127.0.0.1',memcached}, {node,'ns_1@127.0.0.1',memcached_config}, {node,'ns_1@127.0.0.1',memcached_defaults}, {node,'ns_1@127.0.0.1',moxi}, {node,'ns_1@127.0.0.1',ns_log}, {node,'ns_1@127.0.0.1',port_servers}, {node,'ns_1@127.0.0.1',projector_port}, {node,'ns_1@127.0.0.1',query_port}, {node,'ns_1@127.0.0.1',rest}, {node,'ns_1@127.0.0.1',services}, {node,'ns_1@127.0.0.1',ssl_capi_port}, {node,'ns_1@127.0.0.1', ssl_proxy_downstream_port}, {node,'ns_1@127.0.0.1',ssl_proxy_upstream_port}, {node,'ns_1@127.0.0.1',ssl_query_port}, {node,'ns_1@127.0.0.1',ssl_rest_port}, {node,'ns_1@127.0.0.1',stop_xdcr}, {node,'ns_1@127.0.0.1',uuid}, {node,'ns_1@127.0.0.1',xdcr_rest_port}]..) 
[ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: otp -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, {cookie,vuzfvvczcpnjsgwq}] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: read_only_user_creds -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}|null] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: remote_clusters -> [] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: replication -> [{enabled,true}] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest -> [{port,8091}] [ns_server:debug,2016-05-11T16:43:36.120-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: rest_creds -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| {"Administrator",{password,"*****"}}] [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: server_groups -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}, [{uuid,<<"0">>},{name,<<"Group 1">>},{nodes,['ns_1@127.0.0.1']}]] [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: set_view_update_daemon -> [{update_interval,5000}, {update_min_changes,5000}, {replica_update_min_changes,5000}] [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: ssl_minimum_protocol -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229355}}]}| 'tlsv1.2'] 
[ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {couchdb,max_parallel_indexers} -> 4 [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {couchdb,max_parallel_replica_indexers} -> 2 [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_cookie_manager<0.140.0>:ns_cookie_manager:do_cookie_save:149]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie-ns-server": ok [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {local_changes_count,<<"0d9696803a535febe829002b30cd0eb5">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{10,63630229355}}]}] [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:<0.227.0>:ns_node_disco:do_nodes_wanted_updated_fun:224]ns_node_disco: nodes_wanted updated: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {metakv,<<"/indexing/settings/config">>} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229185}}]}| <<"{\"indexer.settings.compaction.interval\":\"00:00,00:00\",\"indexer.settings.persisted_snapshot.interval\":5000,\"indexer.settings.log_level\":\"info\",\"indexer.settings.compaction.min_frag\":30,\"indexer.settings.inmemory_snapshot.interval\":200,\"indexer.settings.max_cpu_percent\":400,\"indexer.settings.recovery.max_rollbacks\":5,\"indexer.settings.memory_quota\":268435456}">>] [ns_server:debug,2016-05-11T16:43:36.121-07:00,ns_1@127.0.0.1:<0.227.0>:ns_node_disco:do_nodes_wanted_updated_fun:230]ns_node_disco: nodes_wanted pong: ['ns_1@127.0.0.1'], with cookie: vuzfvvczcpnjsgwq [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config 
change: {request_limit,capi} -> undefined [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {request_limit,rest} -> undefined [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {service_map,index} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}] [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {service_map,n1ql} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229178}}]}] [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',audit} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}] [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',capi_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|8092] [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',compaction_daemon} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {check_interval,30}, {min_file_size,131072}] [ns_server:debug,2016-05-11T16:43:36.122-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',config_version} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {4,1,1}] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_admin_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9100] 
[ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_http_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9102] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_scan_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9101] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stcatchup_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9104] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stinit_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9103] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',indexer_stmaint_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9105] [ns_server:debug,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',is_enterprise} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|true] [ns_server:debug,2016-05-11T16:43:36.124-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',isasl} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {path,"/opt/couchbase/var/lib/couchbase/isasl.pw"}] [error_logger:info,2016-05-11T16:43:36.123-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= 
supervisor: {local,ns_server_sup} started: [{pid,<0.230.0>}, {name,vbucket_map_mirror}, {mfargs,{vbucket_map_mirror,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.124-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ldap_enabled} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|true] [ns_server:debug,2016-05-11T16:43:36.124-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',membership} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| active] [ns_server:debug,2016-05-11T16:43:36.124-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11210}, {dedicated_port,11209}, {ssl_port,11207}, {admin_user,"_admin"}, {admin_pass,"*****"}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {engines,[{membase,[{engine,"/opt/couchbase/lib/memcached/ep.so"}, {static_config_string,"failpartialwarmup=false"}]}, {memcached,[{engine,"/opt/couchbase/lib/memcached/default_engine.so"}, {static_config_string,"vb0=true"}]}]}, {config_path,"/opt/couchbase/var/lib/couchbase/config/memcached.json"}, {audit_file,"/opt/couchbase/var/lib/couchbase/config/audit.json"}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"}, {log_prefix,"memcached.log"}, {log_generations,20}, {log_cyclesize,10485760}, {log_sleeptime,19}, {log_rotation_period,39003}] [ns_server:debug,2016-05-11T16:43:36.125-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached_config} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| {[{interfaces, {memcached_config_mgr,omit_missing_mcd_ports, 
[{[{host,<<"*">>},{port,port},{maxconn,maxconn}]}, {[{host,<<"*">>}, {port,dedicated_port}, {maxconn,dedicated_port_maxconn}]}, {[{host,<<"*">>}, {port,ssl_port}, {maxconn,maxconn}, {ssl, {[{key, <<"/opt/couchbase/var/lib/couchbase/config/memcached-key.pem">>}, {cert, <<"/opt/couchbase/var/lib/couchbase/config/memcached-cert.pem">>}]}}]}]}}, {ssl_cipher_list,{"~s",[ssl_cipher_list]}}, {ssl_minimum_protocol,{memcached_config_mgr,ssl_minimum_protocol,[]}}, {breakpad, {[{enabled,breakpad_enabled}, {minidump_dir,{memcached_config_mgr,get_minidump_dir,[]}}]}}, {extensions, [{[{module,<<"/opt/couchbase/lib/memcached/stdin_term_handler.so">>}, {config,<<>>}]}, {[{module,<<"/opt/couchbase/lib/memcached/file_logger.so">>}, {config, {"cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}}]}]}, {engine, {[{module,<<"/opt/couchbase/lib/memcached/bucket_engine.so">>}, {config, {"admin=~s;default_bucket_name=default;auto_create=false", [admin_user]}}]}}, {verbosity,verbosity}, {audit_file,{"~s",[audit_file]}}, {dedupe_nmvb_maps,dedupe_nmvb_maps}]}] [ns_server:debug,2016-05-11T16:43:36.125-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',memcached_defaults} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {maxconn,30000}, {dedicated_port_maxconn,5000}, {ssl_cipher_list,"HIGH"}, {verbosity,0}, {breakpad_enabled,true}, {breakpad_minidump_dir_path,"/opt/couchbase/var/lib/couchbase/crash"}, {dedupe_nmvb_maps,false}] [ns_server:debug,2016-05-11T16:43:36.125-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',moxi} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,11211}, {verbosity,[]}] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ns_log} -> 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {filename,"/opt/couchbase/var/lib/couchbase/ns_log"}] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',port_servers} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',projector_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9999] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',query_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|8093] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',rest} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}, {port,8091}, {port_meta,global}] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',services} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229185}}]},kv] [ns_server:debug,2016-05-11T16:43:36.126-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_capi_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18092] [ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_proxy_downstream_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|11214] 
[ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_proxy_upstream_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|11215] [ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_query_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18093] [ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',ssl_rest_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|18091] [error_logger:info,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.232.0>}, {name,bucket_info_cache}, {mfargs,{bucket_info_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',stop_xdcr} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{2,63630229179}}]}| '_deleted'] [error_logger:info,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.235.0>}, {name,ns_tick_event}, {mfargs,{gen_event,start_link,[{local,ns_tick_event}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.127-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',uuid} -> 
[{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}| <<"0d9696803a535febe829002b30cd0eb5">>] [ns_server:debug,2016-05-11T16:43:36.128-07:00,ns_1@127.0.0.1:ns_config_log<0.149.0>:ns_config_log:log_common:138]config change: {node,'ns_1@127.0.0.1',xdcr_rest_port} -> [{'_vclock',[{<<"0d9696803a535febe829002b30cd0eb5">>,{1,63630229172}}]}|9998] [error_logger:info,2016-05-11T16:43:36.128-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.236.0>}, {name,buckets_events}, {mfargs, {gen_event,start_link,[{local,buckets_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_mail_sup} started: [{pid,<0.238.0>}, {name,ns_mail_log}, {mfargs,{ns_mail_log,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.131-07:00,ns_1@127.0.0.1:ns_log_events<0.213.0>:ns_mail_log:init:44]ns_mail_log started up [error_logger:info,2016-05-11T16:43:36.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.237.0>}, {name,ns_mail_sup}, {mfargs,{ns_mail_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:36.131-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.239.0>}, {name,ns_stats_event}, {mfargs, {gen_event,start_link,[{local,ns_stats_event}]}}, {restart_type,permanent}, 
{shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.133-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.240.0>}, {name,samples_loader_tasks}, {mfargs,{samples_loader_tasks,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.138-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.242.0>}, {name,ns_heart}, {mfargs,{ns_heart,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.138-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_heart_sup} started: [{pid,<0.244.0>}, {name,ns_heart_slow_updater}, {mfargs,{ns_heart,start_link_slow_updater,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.139-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.241.0>}, {name,ns_heart_sup}, {mfargs,{ns_heart_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:36.140-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.248.0>}, {name,ns_doctor_events}, {mfargs, {gen_event,start_link,[{local,ns_doctor_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[ns_server:debug,2016-05-11T16:43:36.142-07:00,ns_1@127.0.0.1:ns_heart<0.242.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,186}]}, {ns_heart,handle_info,2, [{file,"src/ns_heart.erl"},{line,118}]}]}} [ns_server:debug,2016-05-11T16:43:36.142-07:00,ns_1@127.0.0.1:ns_heart<0.242.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,update_current_status,1, [{file,"src/ns_heart.erl"},{line,186}]}]}} [error_logger:info,2016-05-11T16:43:36.148-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_doctor_sup} started: [{pid,<0.249.0>}, {name,ns_doctor}, {mfargs,{ns_doctor,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] 
[error_logger:info,2016-05-11T16:43:36.148-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.245.0>}, {name,ns_doctor_sup}, {mfargs, {restartable,start_link, [{ns_doctor_sup,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:36.187-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.254.0>}, {name,disk_log_sup}, {mfargs,{disk_log_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:36.187-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,kernel_safe_sup} started: [{pid,<0.255.0>}, {name,disk_log_server}, {mfargs,{disk_log_server,start_link,[]}}, {restart_type,permanent}, {shutdown,2000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.197-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.252.0>}, {name,remote_clusters_info}, {mfargs,{remote_clusters_info,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.198-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.258.0>}, {name,master_activity_events}, {mfargs, {gen_event,start_link, [{local,master_activity_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}] 
[ns_server:debug,2016-05-11T16:43:36.210-07:00,ns_1@127.0.0.1:<0.259.0>:mb_master:check_master_takeover_needed:141]Sending master node question to the following nodes: []
[ns_server:debug,2016-05-11T16:43:36.211-07:00,ns_1@127.0.0.1:<0.259.0>:mb_master:check_master_takeover_needed:143]Got replies: []
[ns_server:debug,2016-05-11T16:43:36.211-07:00,ns_1@127.0.0.1:<0.259.0>:mb_master:check_master_takeover_needed:149]Was unable to discover master, not going to force mastership takeover
[user:info,2016-05-11T16:43:36.216-07:00,ns_1@127.0.0.1:mb_master<0.261.0>:mb_master:init:86]I'm the only node, so I'm the master.
[ns_server:debug,2016-05-11T16:43:36.246-07:00,ns_1@127.0.0.1:mb_master_sup<0.263.0>:misc:start_singleton:1035]start_singleton(gen_fsm, ns_orchestrator, [], []): started as <0.265.0> on 'ns_1@127.0.0.1'
[error_logger:info,2016-05-11T16:43:36.246-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.265.0>}, {name,ns_orchestrator}, {mfargs,{ns_orchestrator,start_link,[]}}, {restart_type,permanent}, {shutdown,20}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.253-07:00,ns_1@127.0.0.1:mb_master_sup<0.263.0>:misc:start_singleton:1035]start_singleton(gen_server, ns_tick, [], []): started as <0.266.0> on 'ns_1@127.0.0.1'
[error_logger:info,2016-05-11T16:43:36.253-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.266.0>}, {name,ns_tick}, {mfargs,{ns_tick,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.256-07:00,ns_1@127.0.0.1:ns_heart<0.242.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list.
[ns_server:debug,2016-05-11T16:43:36.259-07:00,ns_1@127.0.0.1:<0.267.0>:auto_failover:init:147]init auto_failover.
[ns_server:debug,2016-05-11T16:43:36.260-07:00,ns_1@127.0.0.1:mb_master_sup<0.263.0>:misc:start_singleton:1035]start_singleton(gen_server, auto_failover, [], []): started as <0.267.0> on 'ns_1@127.0.0.1'
[error_logger:info,2016-05-11T16:43:36.260-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,mb_master_sup}
started: [{pid,<0.267.0>}, {name,auto_failover}, {mfargs,{auto_failover,start_link,[]}}, {restart_type,permanent}, {shutdown,10}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.260-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.259.0>}, {name,mb_master}, {mfargs, {restartable,start_link, [{mb_master,start_link,[]},infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.260-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.268.0>}, {name,master_activity_events_ingress}, {mfargs, {gen_event,start_link, [{local,master_activity_events_ingress}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.261-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.269.0>}, {name,master_activity_events_timestamper}, {mfargs, {master_activity_events,start_link_timestamper,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.262-07:00,ns_1@127.0.0.1:ns_heart<0.242.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg
[error_logger:info,2016-05-11T16:43:36.266-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.271.0>}, {name,master_activity_events_pids_watcher}, {mfargs, {master_activity_events_pids_watcher,start_link, []}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.269-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.244.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,243}]}, {proc_lib,init_p_do_apply,3, [{file,"proc_lib.erl"},{line,239}]}]}}
[ns_server:debug,2016-05-11T16:43:36.269-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.244.0>:ns_heart:grab_latest_stats:259]Ignoring failure to grab "@system-processes" stats: {'EXIT',{badarg,[{ets,last,['stats_archiver-@system-processes-minute'],[]}, {stats_archiver,latest_sample,2, [{file,"src/stats_archiver.erl"},{line,116}]}, {ns_heart,grab_latest_stats,1, [{file,"src/ns_heart.erl"},{line,255}]}, {ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]},
{ns_heart,'-current_status_slow_inner/0-lc$^0/1-0-',1, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow_inner,0, [{file,"src/ns_heart.erl"},{line,276}]}, {ns_heart,current_status_slow,1, [{file,"src/ns_heart.erl"},{line,249}]}, {ns_heart,slow_updater_loop,0, [{file,"src/ns_heart.erl"},{line,243}]}]}}
[ns_server:debug,2016-05-11T16:43:36.272-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.244.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list.
[ns_server:debug,2016-05-11T16:43:36.272-07:00,ns_1@127.0.0.1:ns_heart_slow_status_updater<0.244.0>:cluster_logs_collection_task:maybe_build_cluster_logs_task:43]Ignoring exception trying to read cluster_logs_collection_task_status table: error:badarg
[error_logger:info,2016-05-11T16:43:36.295-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.276.0>}, {name,master_activity_events_keeper}, {mfargs,{master_activity_events_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.298-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.280.0>}, {name,xdcr_ckpt_store}, {mfargs,{simple_store,start_link,[xdcr_ckpt_data]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.298-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.281.0>}, {name,metakv_worker}, {mfargs,{work_queue,start_link,[metakv_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.298-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.282.0>}, {name,index_events}, {mfargs,{gen_event,start_link,[{local,index_events}]}}, {restart_type,permanent}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.304-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.283.0>}, {name,index_settings_manager}, {mfargs,{index_settings_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.306-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.287.0>}, {name,menelaus_ui_auth}, {mfargs,{menelaus_ui_auth,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.308-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.288.0>}, {name,menelaus_web_cache}, {mfargs,{menelaus_web_cache,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.310-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.289.0>}, {name,menelaus_stats_gatherer}, {mfargs,{menelaus_stats_gatherer,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.311-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.290.0>}, {name,json_rpc_events}, {mfargs, {gen_event,start_link,[{local,json_rpc_events}]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.311-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.291.0>}, {name,menelaus_web}, {mfargs,{menelaus_web,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.313-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.308.0>}, {name,menelaus_event}, {mfargs,{menelaus_event,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.317-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.309.0>}, {name,hot_keys_keeper}, {mfargs,{hot_keys_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.321-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.310.0>}, {name,menelaus_web_alerts_srv}, {mfargs,{menelaus_web_alerts_srv,start_link,[]}}, {restart_type,permanent}, {shutdown,5000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.326-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.311.0>}, {name,menelaus_cbauth}, {mfargs,{menelaus_cbauth,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[user:info,2016-05-11T16:43:36.326-07:00,ns_1@127.0.0.1:ns_server_sup<0.203.0>:menelaus_sup:start_link:46]Couchbase Server has started on web port 8091 on node 'ns_1@127.0.0.1'. Version: "4.1.1-5914-enterprise".
[error_logger:info,2016-05-11T16:43:36.327-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.286.0>}, {name,menelaus}, {mfargs,{menelaus_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.327-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.315.0>}, {name,ns_ports_setup}, {mfargs,{ns_ports_setup,start,[]}}, {restart_type,{permanent,4}}, {shutdown,brutal_kill}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.334-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.317.0>}, {name,ns_memcached_sockets_pool}, {mfargs,{ns_memcached_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.334-07:00,ns_1@127.0.0.1:ns_audit_cfg<0.318.0>:ns_audit_cfg:write_audit_json:158]Writing new content to "/opt/couchbase/var/lib/couchbase/config/audit.json" : [{auditd_enabled, false}, {disabled, []}, {log_path,
"/opt/couchbase/var/lib/couchbase/logs"}, {rotate_interval, 86400}, {rotate_size, 20971520}, {sync, []}, {version, 1}, {descriptors_path, "/opt/couchbase/etc/security"}] [ns_server:debug,2016-05-11T16:43:36.338-07:00,ns_1@127.0.0.1:ns_audit_cfg<0.318.0>:ns_audit_cfg:handle_info:107]Instruct memcached to reload audit config [error_logger:info,2016-05-11T16:43:36.339-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.318.0>}, {name,ns_audit_cfg}, {mfargs,{ns_audit_cfg,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:warn,2016-05-11T16:43:36.339-07:00,ns_1@127.0.0.1:<0.320.0>:ns_memcached:connect:1290]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying. [ns_server:debug,2016-05-11T16:43:36.342-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.322.0>:memcached_config_mgr:init:44]waiting for completion of initial ns_ports_setup round [error_logger:info,2016-05-11T16:43:36.342-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.322.0>}, {name,memcached_config_mgr}, {mfargs,{memcached_config_mgr,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,1000}, {child_type,worker}] [ns_server:info,2016-05-11T16:43:36.345-07:00,ns_1@127.0.0.1:<0.323.0>:ns_memcached_log_rotator:init:28]Starting log rotator on "/opt/couchbase/var/lib/couchbase/logs"/"memcached.log"* with an initial period of 39003ms [error_logger:info,2016-05-11T16:43:36.345-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.323.0>}, {name,ns_memcached_log_rotator}, {mfargs,{ns_memcached_log_rotator,start_link,[]}}, 
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.347-07:00,ns_1@127.0.0.1:ns_ports_setup<0.315.0>:ns_ports_manager:set_dynamic_children:54]Setting children [memcached,moxi,projector,saslauthd_port,goxdcr,xdcr_proxy]
[error_logger:info,2016-05-11T16:43:36.349-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.324.0>}, {name,memcached_clients_pool}, {mfargs,{memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.353-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.325.0>}, {name,proxied_memcached_clients_pool}, {mfargs,{proxied_memcached_clients_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.353-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.326.0>}, {name,xdc_lhttpc_pool}, {mfargs, {lhttpc_manager,start_link, [[{name,xdc_lhttpc_pool}, {connection_timeout,120000}, {pool_size,200}]]}}, {restart_type,{permanent,1}}, {shutdown,10000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.355-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.327.0>}, {name,ns_null_connection_pool}, {mfargs, {ns_null_connection_pool,start_link, [ns_null_connection_pool]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.362-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<0.329.0>}, {name,xdc_stats_holder}, {mfargs, {proc_lib,start_link, [xdcr_sup,link_stats_holder_body,[]]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.366-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<0.330.0>}, {name,xdc_replication_sup}, {mfargs,{xdc_replication_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.369-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<0.331.0>}, {name,xdc_rep_manager}, {mfargs,{xdc_rep_manager,start_link,[]}}, {restart_type,permanent}, {shutdown,30000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.370-07:00,ns_1@127.0.0.1:xdc_rep_manager<0.331.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager
[ns_server:debug,2016-05-11T16:43:36.374-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.333.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager
[ns_server:debug,2016-05-11T16:43:36.374-07:00,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.334.0>:ns_couchdb_api:wait_for_doc_manager:284]Start waiting for doc manager
[error_logger:info,2016-05-11T16:43:36.374-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<0.333.0>}, {name,xdc_rdoc_replicator}, {mfargs,{doc_replicator,start_link_xdcr,[]}},
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.375-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<0.334.0>}, {name,xdc_rdoc_replication_srv}, {mfargs,{doc_replication_srv,start_link_xdcr,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.377-07:00,ns_1@127.0.0.1:<0.328.0>:xdc_rdoc_manager:start_link_remote:42]Starting xdc_rdoc_manager on 'couchdb_ns_1@127.0.0.1' with following links: [<0.333.0>, <0.334.0>, <0.331.0>]
[ns_server:debug,2016-05-11T16:43:36.381-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.333.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.252.0>
[ns_server:debug,2016-05-11T16:43:36.381-07:00,ns_1@127.0.0.1:xdc_rdoc_replication_srv<0.334.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.252.0>
[ns_server:debug,2016-05-11T16:43:36.381-07:00,ns_1@127.0.0.1:xdc_rep_manager<0.331.0>:ns_couchdb_api:wait_for_doc_manager:287]Received doc manager registration from <11471.252.0>
[error_logger:info,2016-05-11T16:43:36.382-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {<0.328.0>,xdcr_sup}
started: [{pid,<11471.252.0>}, {name,xdc_rdoc_manager}, {mfargs, {xdc_rdoc_manager,start_link_remote, ['couchdb_ns_1@127.0.0.1']}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.382-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.328.0>}, {name,xdcr_sup}, {mfargs,{xdcr_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity},
{child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.390-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.336.0>}, {name,xdcr_dcp_sockets_pool}, {mfargs,{xdcr_dcp_sockets_pool,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.395-07:00,ns_1@127.0.0.1:xdcr_doc_replicator<0.333.0>:doc_replicator:loop:64]doing replicate_newnodes_docs
[error_logger:info,2016-05-11T16:43:36.395-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.338.0>}, {name,ns_bucket_worker}, {mfargs,{work_queue,start_link,[ns_bucket_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.403-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_sup}
started: [{pid,<0.340.0>}, {name,buckets_observing_subscription}, {mfargs,{ns_bucket_sup,subscribe_on_config_events,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.404-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_bucket_worker_sup}
started: [{pid,<0.339.0>}, {name,ns_bucket_sup}, {mfargs,{ns_bucket_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.404-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.337.0>},
{name,ns_bucket_worker_sup}, {mfargs,{ns_bucket_worker_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.407-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.341.0>}, {name,system_stats_collector}, {mfargs,{system_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.408-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.345.0>}, {name,{stats_archiver,"@system"}}, {mfargs,{stats_archiver,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.415-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.347.0>}, {name,{stats_reader,"@system"}}, {mfargs,{stats_reader,start_link,["@system"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.416-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.348.0>}, {name,{stats_archiver,"@system-processes"}}, {mfargs, {stats_archiver,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.416-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.350.0>},
{name,{stats_reader,"@system-processes"}}, {mfargs, {stats_reader,start_link,["@system-processes"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.417-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.351.0>}, {name,{stats_archiver,"@query"}}, {mfargs,{stats_archiver,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.418-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.353.0>}, {name,{stats_reader,"@query"}}, {mfargs,{stats_reader,start_link,["@query"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.422-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.354.0>}, {name,query_stats_collector}, {mfargs,{query_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.432-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.356.0>}, {name,{stats_archiver,"@global"}}, {mfargs,{stats_archiver,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.432-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.358.0>},
{name,{stats_reader,"@global"}}, {mfargs,{stats_reader,start_link,["@global"]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.437-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.359.0>}, {name,global_stats_collector}, {mfargs,{global_stats_collector,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.441-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.361.0>}, {name,goxdcr_status_keeper}, {mfargs,{goxdcr_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.442-07:00,ns_1@127.0.0.1:goxdcr_status_keeper<0.361.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list.
[ns_server:debug,2016-05-11T16:43:36.442-07:00,ns_1@127.0.0.1:goxdcr_status_keeper<0.361.0>:goxdcr_rest:get_from_goxdcr:154]Goxdcr is temporary not available. Return empty list.
[error_logger:info,2016-05-11T16:43:36.448-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,index_stats_sup}
started: [{pid,<0.366.0>}, {name,index_stats_children_sup}, {mfargs, {supervisor,start_link, [{local,index_stats_children_sup}, index_stats_sup,child]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.463-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,index_status_keeper_sup}
started: [{pid,<0.368.0>}, {name,index_status_keeper_worker}, {mfargs, {work_queue,start_link, [index_status_keeper_worker]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.469-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,index_status_keeper_sup}
started: [{pid,<0.369.0>}, {name,index_status_keeper}, {mfargs,{index_status_keeper,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.470-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,index_stats_sup}
started: [{pid,<0.367.0>}, {name,index_status_keeper_sup}, {mfargs,{index_status_keeper_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.470-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,index_stats_sup}
started: [{pid,<0.372.0>}, {name,index_stats_worker}, {mfargs, {erlang,apply, [#Fun,[]]}},
{restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[error_logger:info,2016-05-11T16:43:36.471-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.365.0>}, {name,index_stats_sup}, {mfargs,{index_stats_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}]
[error_logger:info,2016-05-11T16:43:36.489-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_server_sup}
started: [{pid,<0.374.0>}, {name,compaction_daemon}, {mfargs,{compaction_daemon,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}]
[ns_server:debug,2016-05-11T16:43:36.502-07:00,ns_1@127.0.0.1:<0.377.0>:new_concurrency_throttle:init:113]init concurrent throttle process, pid: <0.377.0>, type: kv_throttle# of available token: 1
[ns_server:debug,2016-05-11T16:43:36.512-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[error_logger:info,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.375.0>}, {name,compaction_new_daemon}, {mfargs,{compaction_new_daemon,start_link,[]}}, {restart_type,{permanent,4}}, {shutdown,86400000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_master. Rescheduling compaction. [ns_server:debug,2016-05-11T16:43:36.513-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_master too soon. Next run will be in 3600s [error_logger:info,2016-05-11T16:43:36.515-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,cluster_logs_sup} started: [{pid,<0.379.0>}, {name,ets_holder}, {mfargs, {cluster_logs_collection_task, start_link_ets_holder,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [error_logger:info,2016-05-11T16:43:36.515-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.378.0>}, {name,cluster_logs_sup}, {mfargs,{cluster_logs_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:one_shot_barrier:notify:27]Notifying on barrier menelaus_barrier 
[ns_server:debug,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:menelaus_barrier<0.155.0>:one_shot_barrier:barrier_body:62]Barrier menelaus_barrier got notification from <0.153.0> [error_logger:info,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_sup} started: [{pid,<0.380.0>}, {name,remote_api}, {mfargs,{remote_api,start_link,[]}}, {restart_type,permanent}, {shutdown,1000}, {child_type,worker}] [ns_server:debug,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:ns_server_nodes_sup<0.153.0>:one_shot_barrier:notify:32]Successfuly notified on barrier menelaus_barrier [error_logger:info,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_nodes_sup} started: [{pid,<0.203.0>}, {name,ns_server_sup}, {mfargs,{ns_server_sup,start_link,[]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [error_logger:info,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= supervisor: {local,ns_server_cluster_sup} started: [{pid,<0.152.0>}, {name,ns_server_nodes_sup}, {mfargs, {restartable,start_link, [{ns_server_nodes_sup,start_link,[]}, infinity]}}, {restart_type,permanent}, {shutdown,infinity}, {child_type,supervisor}] [ns_server:debug,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:<0.2.0>:child_erlang:child_loop:115]31507: Entered child_loop [error_logger:info,2016-05-11T16:43:36.517-07:00,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203] =========================PROGRESS REPORT========================= application: ns_server started_at: 'ns_1@127.0.0.1' 
[ns_server:debug,2016-05-11T16:43:36.558-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.383.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:43:36.558-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.311.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process {'saslauthd-saslauthd-port',<0.383.0>} started [ns_server:debug,2016-05-11T16:43:36.558-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.383.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@saslauthd-saslauthd-port">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL,<<"http://127.0.0.1:8091/_cbauth">>}, {admin, {[{user,<<"Administrator">>}, {salt,<<"6MjdqNKaQBvU0VSCXiFXdQ==">>}, {mac, <<"7UfQhOmYXvBGevp8sxirMxrkozw=">>}]}}]}]}]} [ns_server:debug,2016-05-11T16:43:36.559-07:00,ns_1@127.0.0.1:json_rpc_connection-saslauthd-saslauthd-port<0.383.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0}, {<<"result">>,null}, {<<"error">>, <<"rpc: can't find service AuthCacheSvc.UpdateDB">>}] [ns_server:debug,2016-05-11T16:43:36.563-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.386.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:43:36.564-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.311.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process {'projector-cbauth',<0.386.0>} started [ns_server:debug,2016-05-11T16:43:36.564-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.386.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@projector-cbauth">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, 
[8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL,<<"http://127.0.0.1:8091/_cbauth">>}, {admin, {[{user,<<"Administrator">>}, {salt,<<"6MjdqNKaQBvU0VSCXiFXdQ==">>}, {mac, <<"7UfQhOmYXvBGevp8sxirMxrkozw=">>}]}}]}]}]} [ns_server:debug,2016-05-11T16:43:36.566-07:00,ns_1@127.0.0.1:json_rpc_connection-projector-cbauth<0.386.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0},{<<"result">>,true},{<<"error">>,null}] [ns_server:debug,2016-05-11T16:43:36.605-07:00,ns_1@127.0.0.1:ns_ports_setup<0.315.0>:ns_ports_setup:set_children:72]Monitor ns_child_ports_sup <11470.68.0> [ns_server:debug,2016-05-11T16:43:36.605-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.322.0>:memcached_config_mgr:init:46]ns_ports_setup seems to be ready [ns_server:debug,2016-05-11T16:43:36.609-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.322.0>:memcached_config_mgr:find_port_pid_loop:119]Found memcached port <11470.75.0> [ns_server:debug,2016-05-11T16:43:36.618-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.322.0>:memcached_config_mgr:init:77]wrote memcached config to /opt/couchbase/var/lib/couchbase/config/memcached.json. 
Will activate memcached port server [ns_server:debug,2016-05-11T16:43:36.618-07:00,ns_1@127.0.0.1:memcached_config_mgr<0.322.0>:memcached_config_mgr:init:80]activated memcached port server [ns_server:debug,2016-05-11T16:43:36.684-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.394.0>:json_rpc_connection:init:85]connected [ns_server:debug,2016-05-11T16:43:36.684-07:00,ns_1@127.0.0.1:menelaus_cbauth<0.311.0>:menelaus_cbauth:handle_cast:77]Observed json rpc process {'goxdcr-cbauth',<0.394.0>} started [ns_server:debug,2016-05-11T16:43:36.684-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.394.0>:json_rpc_connection:handle_call:175]sending jsonrpc call:{[{jsonrpc,<<"2.0">>}, {id,0}, {method,<<"AuthCacheSvc.UpdateDB">>}, {params, [{[{specialUser,<<"@goxdcr-cbauth">>}, {nodes, [{[{host,<<"127.0.0.1">>}, {user,<<"_admin">>}, {password,"*****"}, {ports, [8091,18091,18092,8092,11207,9999,11210, 11211]}, {local,true}]}]}, {buckets,[]}, {tokenCheckURL,<<"http://127.0.0.1:8091/_cbauth">>}, {admin, {[{user,<<"Administrator">>}, {salt,<<"6MjdqNKaQBvU0VSCXiFXdQ==">>}, {mac, <<"7UfQhOmYXvBGevp8sxirMxrkozw=">>}]}}]}]}]} [ns_server:debug,2016-05-11T16:43:36.686-07:00,ns_1@127.0.0.1:json_rpc_connection-goxdcr-cbauth<0.394.0>:json_rpc_connection:handle_info:111]got response: [{<<"id">>,0},{<<"result">>,true},{<<"error">>,null}] [ns_server:debug,2016-05-11T16:44:06.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T16:44:06.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:44:06.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
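The saslauthd, projector, and goxdcr exchanges above are plain JSON-RPC 2.0 calls: ns_server sends AuthCacheSvc.UpdateDB and matches the reply to the call by id, and a missing service surfaces in-band in the error field (as in the saslauthd reply) rather than as a transport failure. A minimal sketch of building such a request and classifying a reply follows; the Erlang terms in the log render JSON objects as proplists, so the usual JSON-RPC object form is assumed on the wire, and only a subset of the logged params is modeled:

```python
import json

def make_update_db_call(call_id, special_user, token_check_url):
    """Build a JSON-RPC 2.0 request shaped like the AuthCacheSvc.UpdateDB
    calls in the log. The node list, admin hash, etc. are omitted here
    for brevity; their exact schema is not assumed."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "AuthCacheSvc.UpdateDB",
        "params": [{
            "specialUser": special_user,
            "buckets": [],
            "tokenCheckURL": token_check_url,
        }],
    })

def classify_response(raw):
    """Return (call_id, ok, error). JSON-RPC reports application errors
    in-band: result is null and error carries the message, as with the
    saslauthd "can't find service" reply."""
    resp = json.loads(raw)
    err = resp.get("error")
    return resp.get("id"), err is None, err
```

In the log, the same UpdateDB call fails against saslauthd (which does not export AuthCacheSvc) but succeeds against projector and goxdcr, which is exactly the in-band distinction classify_response draws.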
[ns_server:debug,2016-05-11T16:44:06.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:44:36.515-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T16:44:36.515-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:44:36.515-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T16:44:36.515-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:45:06.516-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T16:45:06.516-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T16:45:06.516-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T16:45:06.516-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s
[... identical compact_kv/compact_views scheduler entries ("No buckets to compact for compact_kv/compact_views. Rescheduling compaction." followed by "Finished compaction ... too soon. Next run will be in 30s") repeat every 30 seconds from 2016-05-11T16:45:36 through 2016-05-11T17:01:06 ...]
Next run will be in 30s [ns_server:debug,2016-05-11T17:01:06.550-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:01:06.551-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:01:36.551-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:01:36.551-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:01:36.551-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:01:36.551-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:02:06.552-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:02:06.552-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:02:06.552-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:02:06.552-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:02:36.553-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:02:36.553-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:02:36.554-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:02:36.555-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:03:06.555-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:03:06.555-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:03:06.557-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:03:06.557-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:03:36.556-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:03:36.556-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:03:36.558-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:03:36.558-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:04:06.557-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:04:06.557-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:04:06.559-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:04:06.559-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:04:36.558-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:04:36.558-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:04:36.559-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:04:36.560-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:05:06.560-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:05:06.560-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:05:06.561-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:05:06.562-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:05:36.562-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:05:36.562-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:05:36.562-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:05:36.562-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:06:06.563-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:06:06.563-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:06:06.564-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:06:06.565-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:06:36.565-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:06:36.565-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:06:36.565-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:06:36.565-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:07:06.566-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:07:06.566-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:07:06.566-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:07:06.566-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:07:36.567-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:07:36.567-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:07:36.567-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:07:36.567-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:08:06.568-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:08:06.568-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:08:06.568-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:08:06.568-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:08:36.569-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:08:36.569-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:08:36.569-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:08:36.569-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:09:06.570-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:09:06.570-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:09:06.571-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:09:06.572-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:09:36.572-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:09:36.572-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:09:36.572-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:09:36.572-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:10:06.573-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:10:06.573-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:10:06.573-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:10:06.573-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:10:36.574-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:10:36.574-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:10:36.574-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:10:36.574-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:11:06.575-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:11:06.575-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:11:06.575-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:11:06.575-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:11:36.576-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:11:36.576-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:11:36.576-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:11:36.576-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:12:06.577-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:12:06.577-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:12:06.578-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:12:06.579-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:12:36.579-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:12:36.579-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:12:36.579-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:12:36.579-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:13:06.580-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:13:06.580-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:13:06.580-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:13:06.580-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:13:36.581-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:13:36.581-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:13:36.581-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:13:36.581-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:14:06.582-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:14:06.582-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:14:06.582-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:14:06.582-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:14:36.583-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:14:36.583-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:14:36.583-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:14:36.583-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:15:06.584-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:15:06.584-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:15:06.584-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:15:06.584-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:15:36.585-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:15:36.585-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:15:36.585-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:15:36.585-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:16:06.586-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:16:06.586-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:16:06.586-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:16:06.586-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:16:36.587-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:16:36.587-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:16:36.587-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:16:36.587-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:17:06.588-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:17:06.588-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:17:06.588-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:17:06.588-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:17:36.589-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:17:36.589-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:17:36.589-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:17:36.589-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:18:06.590-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:18:06.590-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:18:06.590-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:18:06.590-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:18:36.591-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:18:36.591-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:18:36.591-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:18:36.591-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:19:06.592-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:19:06.592-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:19:06.592-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:19:06.592-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:19:36.593-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:19:36.593-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:19:36.593-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:19:36.593-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:20:06.594-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:20:06.594-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:20:06.594-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:20:06.594-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:20:36.595-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:20:36.595-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:20:36.595-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:20:36.595-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:21:06.596-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:21:06.596-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:21:06.596-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:21:06.596-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:21:36.597-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:21:36.597-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:21:36.597-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:21:36.597-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:22:06.598-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:22:06.598-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:22:06.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:22:06.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:22:36.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:22:36.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:22:36.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:22:36.599-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:23:06.600-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:23:06.600-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:23:06.600-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:23:06.600-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:23:36.601-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:23:36.601-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:23:36.601-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:23:36.601-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:24:06.602-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:24:06.602-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:24:06.602-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:24:06.602-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:24:36.603-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:24:36.603-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:24:36.603-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:24:36.603-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:25:06.604-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:25:06.604-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:25:06.604-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:25:06.604-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:25:36.605-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:25:36.605-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:25:36.605-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:25:36.605-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:26:06.606-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:26:06.606-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:26:06.606-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:26:06.606-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:26:36.607-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:26:36.607-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:26:36.607-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:26:36.607-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:27:06.608-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:27:06.608-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:27:06.609-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:27:06.610-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:27:36.610-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:27:36.610-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:27:36.610-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:27:36.610-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:28:06.611-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:28:06.611-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:28:06.611-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:28:06.611-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:28:36.612-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:28:36.612-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:28:36.612-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:28:36.612-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:29:06.613-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:29:06.613-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:29:06.613-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:29:06.613-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:29:36.614-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:29:36.614-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:29:36.614-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:29:36.614-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:30:06.615-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:30:06.615-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:30:06.615-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:30:06.615-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:30:36.616-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:30:36.616-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:30:36.616-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:30:36.616-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:31:06.617-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:31:06.617-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:31:06.617-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:31:06.617-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:31:36.618-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:31:36.618-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:31:36.618-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:31:36.618-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:32:06.619-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:32:06.619-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:32:06.619-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:32:06.619-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:32:36.620-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:32:36.620-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:32:36.620-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:32:36.620-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:33:06.621-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:33:06.621-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:33:06.621-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:33:06.621-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:33:36.622-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:33:36.622-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:33:36.623-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:33:36.624-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:34:06.624-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:34:06.624-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:34:06.626-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:34:06.626-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:34:36.625-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:34:36.625-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:34:36.627-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:34:36.627-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:35:06.626-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:35:06.626-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:35:06.628-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:35:06.628-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:35:36.627-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:35:36.627-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:35:36.629-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:35:36.629-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:36:06.628-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:36:06.628-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:36:06.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:36:06.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:36:36.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:36:36.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:36:36.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:36:36.630-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:37:06.631-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:37:06.631-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T17:37:06.632-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:37:06.632-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:37:36.632-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:37:36.632-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:37:36.634-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T17:37:36.634-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:38:06.633-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T17:38:06.633-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T17:38:06.635-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T17:38:06.635-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:38:36.634-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:38:36.634-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:38:36.636-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:38:36.636-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:39:06.635-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:39:06.635-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:39:06.637-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:39:06.637-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:39:36.636-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:39:36.636-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:39:36.638-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:39:36.638-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:40:06.637-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:40:06.637-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:40:06.639-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:40:06.639-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:40:36.638-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:40:36.638-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:40:36.640-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:40:36.640-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:41:06.639-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:41:06.639-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:41:06.641-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:41:06.641-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:41:36.640-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:41:36.640-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:41:36.642-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:41:36.642-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:42:06.641-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:42:06.641-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:42:06.643-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:42:06.643-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:42:36.642-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:42:36.642-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:42:36.644-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:42:36.644-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:43:06.643-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:43:06.643-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:43:06.645-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:43:06.645-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:43:36.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_master. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:43:36.514-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_master too soon. Next run will be in 3600s
[ns_server:debug,2016-05-11T17:43:36.644-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:43:36.644-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:43:36.646-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:43:36.646-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:44:06.645-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:44:06.645-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:44:06.647-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:44:06.647-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:44:36.646-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:44:36.646-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:44:36.648-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:44:36.648-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:45:06.647-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:45:06.647-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:45:06.649-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:45:06.649-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:45:36.648-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:45:36.648-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:45:36.650-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:45:36.650-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:46:06.649-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:46:06.649-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:46:06.651-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:46:06.651-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:46:36.650-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:46:36.650-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:46:36.652-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:46:36.652-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:47:06.651-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:47:06.651-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:47:06.653-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:47:06.653-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:47:36.652-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:47:36.652-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:47:36.654-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:47:36.654-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:48:06.653-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:48:06.653-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:48:06.654-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:48:06.654-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:48:36.655-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:48:36.655-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:48:36.655-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:48:36.655-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:49:06.656-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:49:06.656-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:49:06.656-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:49:06.656-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:49:36.657-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:49:36.657-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:49:36.657-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:49:36.657-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:50:06.658-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:50:06.658-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:50:06.658-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:50:06.658-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:50:36.659-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:50:36.659-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:50:36.659-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:50:36.659-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:51:06.660-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:51:06.660-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:51:06.661-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:51:06.662-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:51:36.662-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:51:36.662-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:51:36.662-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:51:36.662-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:52:06.663-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:52:06.663-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:52:06.663-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:52:06.663-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:52:36.664-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:52:36.664-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:52:36.664-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:52:36.664-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:53:06.665-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:53:06.665-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:53:06.665-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:53:06.665-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:53:36.666-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:53:36.666-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:53:36.666-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:53:36.666-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:54:06.667-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:54:06.667-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:54:06.667-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:54:06.667-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:54:36.668-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:54:36.668-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:54:36.668-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:54:36.668-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:55:06.669-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:55:06.669-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:55:06.669-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:55:06.669-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:55:36.670-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:55:36.670-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:55:36.671-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:55:36.671-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:56:06.672-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:56:06.672-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:56:06.672-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:56:06.672-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:56:36.673-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:56:36.673-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:56:36.673-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:56:36.673-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:57:06.674-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:57:06.674-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:57:06.674-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:57:06.674-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:57:36.675-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:57:36.675-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:57:36.675-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:57:36.675-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:58:06.676-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:58:06.676-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:58:06.676-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:58:06.676-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:58:36.677-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:58:36.677-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:58:36.677-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:58:36.677-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:59:06.678-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:59:06.678-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:59:06.678-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:59:06.678-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:59:36.679-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:59:36.679-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T17:59:36.679-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T17:59:36.679-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T18:00:06.680-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T18:00:06.680-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T18:00:06.680-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction.
[ns_server:debug,2016-05-11T18:00:06.680-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s
[ns_server:debug,2016-05-11T18:00:36.681-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction.
[ns_server:debug,2016-05-11T18:00:36.681-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:00:36.681-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:00:36.681-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:01:06.682-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:01:06.682-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:01:06.682-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:01:06.682-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:01:36.683-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:01:36.683-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T18:01:36.683-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:01:36.683-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:02:06.684-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:02:06.684-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:02:06.684-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:02:06.684-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:02:36.685-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:02:36.685-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:02:36.685-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. 
[ns_server:debug,2016-05-11T18:02:36.685-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:03:06.686-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:03:06.686-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:03:06.686-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:03:06.686-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:03:36.687-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:03:36.687-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:03:36.687-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:03:36.687-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T18:04:06.688-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:04:06.688-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:04:06.688-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:04:06.688-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:04:36.689-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:04:36.689-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:04:36.689-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:04:36.689-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:05:06.690-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. 
[ns_server:debug,2016-05-11T18:05:06.690-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:05:06.690-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:05:06.690-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:05:36.691-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:05:36.691-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:05:36.691-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:05:36.691-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s [ns_server:debug,2016-05-11T18:06:06.692-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_kv. Rescheduling compaction. [ns_server:debug,2016-05-11T18:06:06.692-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_kv too soon. 
Next run will be in 30s [ns_server:debug,2016-05-11T18:06:06.692-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_new_daemon:process_scheduler_message:1248]No buckets to compact for compact_views. Rescheduling compaction. [ns_server:debug,2016-05-11T18:06:06.692-07:00,ns_1@127.0.0.1:compaction_new_daemon<0.375.0>:compaction_scheduler:schedule_next:60]Finished compaction for compact_views too soon. Next run will be in 30s