Diagnostic Report (info level only)

Log

Event | Module Code | Server Node | Time
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 14:56:57 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 14:54:58 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{timeout,
{gen_server,call,[ns_doctor,get_nodes]}}},
{trace,
[{gen_server,call,2},
{ns_doctor,build_tasks_list,1},
{menelaus_web,handle_tasks,1},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}] (repeated 1 times)
menelaus_web019 ns_1@10.5.2.11 14:54:23 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{timeout,
{gen_server,call,[ns_doctor,get_nodes]}}},
{trace,
[{gen_server,call,2},
{ns_doctor,build_tasks_list,1},
{menelaus_web,handle_tasks,1},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:53:36 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{timeout,
{gen_server,call,
[ns_node_disco,nodes_wanted]}}},
{trace,
[{gen_server,call,2},
{menelaus_web,handle_tasks,1},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}] (repeated 2 times)
menelaus_web019 ns_1@10.5.2.11 14:53:29 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_event,call,
[ns_node_disco_events,
{menelaus_event,ns_node_disco_events},
{register_watcher,<0.13047.50>}]}}},
{trace,
[{gen_event,call1,3},
{menelaus_event,register_watcher,1},
{menelaus_web,handle_pool_info,2},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:53:28 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_server,call,
[ns_node_disco,nodes_wanted]}}},
{trace,
[{gen_server,call,2},
{menelaus_web,build_pool_info,4},
{menelaus_web,handle_pool_info_wait,6},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:53:08 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. (repeated 1 times) mb_master000 ns_1@10.5.2.13 14:52:55 - Fri Jul 6, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.11' in 4 seconds. ns_memcached001 ns_1@10.5.2.11 14:52:48 - Fri Jul 6, 2012
Rebalance exited with reason {exited,
{'EXIT',<0.24185.49>,
{{badmatch,{error,timeout}},
{gen_server,call,
[{'ns_memcached-default','ns_1@10.5.2.11'},
{stats,<<"tap">>},
60000]}}}}
ns_orchestrator002 ns_1@10.5.2.11 14:52:43 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 14:52:35 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_event,call,
[ns_config_events,
{menelaus_event,ns_config_events},
{register_watcher,<0.4118.50>}]}}},
{trace,
[{gen_event,call1,3},
{menelaus_event,register_watcher,1},
{menelaus_web,handle_pool_info,2},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:52:33 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{timeout,
{gen_server,call,
[ns_node_disco,nodes_wanted]}}},
{trace,
[{gen_server,call,2},
{menelaus_web,handle_tasks,1},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:52:23 - Fri Jul 6, 2012
<0.24184.49> exited with {exited,
{'EXIT',<0.24185.49>,
{{badmatch,{error,timeout}},
{gen_server,call,
[{'ns_memcached-default','ns_1@10.5.2.11'},
{stats,<<"tap">>},
60000]}}}}
ns_vbucket_mover000 ns_1@10.5.2.11 14:52:21 - Fri Jul 6, 2012
IP address seems to have changed. Unable to listen on 'ns_1@10.5.2.11'. menelaus_web_alerts_srv001 ns_1@10.5.2.11 14:52:20 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,{gen_server,call,[ns_config,get]}}},
{trace,
[{diag_handler,diagnosing_timeouts,1},
{ns_bucket,get_buckets,0},
{ns_bucket,failover_warnings,0},
{menelaus_web,build_pool_info,4},
{menelaus_web,handle_pool_info_wait_tail,5},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5}]}]
menelaus_web019 ns_1@10.5.2.11 14:51:51 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default/tasks"},
{type,exit},
{what,
{timeout,{gen_server,call,[ns_config,get]}}},
{trace,
[{diag_handler,diagnosing_timeouts,1},
{menelaus_auth,check_auth,1},
{menelaus_auth,apply_auth_with_auth_data,4},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:51:49 - Fri Jul 6, 2012
Control connection to memcached on 'ns_1@10.5.2.11' disconnected: {badmatch,
{error,
timeout}}
ns_memcached004 ns_1@10.5.2.11 14:51:39 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_fsm,sync_send_all_state_event,
[mb_master,master_node]}}},
{trace,
[{gen_fsm,sync_send_all_state_event,2},
{ns_cluster_membership,
is_stop_rebalance_safe,0},
{menelaus_web,build_pool_info,4},
{menelaus_web,handle_pool_info_wait_tail,5},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}] (repeated 1 times)
menelaus_web019 ns_1@10.5.2.11 14:51:24 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 14:51:13 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_event,call,
[ns_config_events,
{menelaus_event,ns_config_events},
{unregister_watcher,<0.974.41>}]}}},
{trace,
[{gen_event,call1,3},
{menelaus_event,unregister_watcher,1},
{menelaus_web,handle_pool_info_wait_tail,5},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:51:08 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_fsm,sync_send_all_state_event,
[mb_master,master_node]}}},
{trace,
[{gen_fsm,sync_send_all_state_event,2},
{ns_cluster_membership,
is_stop_rebalance_safe,0},
{menelaus_web,build_pool_info,4},
{menelaus_web,handle_pool_info_wait_tail,5},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:51:06 - Fri Jul 6, 2012
Server error during processing: ["web request failed",
{path,"/pools/default"},
{type,exit},
{what,
{timeout,
{gen_server,call,
[ns_cookie_manager,cookie_get]}}},
{trace,
[{gen_server,call,2},
{menelaus_web,build_nodes_info_fun,3},
{menelaus_web,build_pool_info,4},
{menelaus_web,handle_pool_info_wait,6},
{menelaus_web,check_and_handle_pool_info,2},
{menelaus_web,loop,3},
{mochiweb_http,headers,5},
{proc_lib,init_p_do_apply,3}]}]
menelaus_web019 ns_1@10.5.2.11 14:51:06 - Fri Jul 6, 2012
Started rebalancing bucket default (repeated 1 times) ns_rebalancer000 ns_1@10.5.2.11 13:39:23 - Fri Jul 6, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.14'], EjectNodes = ['ns_1@10.5.2.15']
(repeated 1 times)
ns_orchestrator004 ns_1@10.5.2.11 13:38:23 - Fri Jul 6, 2012
Rebalance stopped by user.
ns_orchestrator007 ns_1@10.5.2.11 13:38:11 - Fri Jul 6, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 13:38:10 - Fri Jul 6, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.14'], EjectNodes = ['ns_1@10.5.2.15']
ns_orchestrator004 ns_1@10.5.2.11 13:38:05 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 12:50:57 - Fri Jul 6, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 01:17:40 - Fri Jul 6, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 00:59:14 - Fri Jul 6, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.15','ns_1@10.5.2.14'], EjectNodes = []
ns_orchestrator004 ns_1@10.5.2.11 00:58:43 - Fri Jul 6, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.14' in 0 seconds. ns_memcached001 ns_1@10.5.2.14 00:58:41 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.11 00:58:40 - Fri Jul 6, 2012
Started node add transaction by adding node 'ns_1@10.5.2.14' to nodes_wanted
ns_cluster000 ns_1@10.5.2.11 00:58:39 - Fri Jul 6, 2012
Deleting old data files of bucket "default" ns_storage_conf000 ns_1@10.5.2.14 00:58:08 - Fri Jul 6, 2012
Node ns_1@10.5.2.14 joined cluster ns_cluster003 ns_1@10.5.2.14 00:58:06 - Fri Jul 6, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@10.5.2.14'. menelaus_sup001 ns_1@10.5.2.14 00:58:05 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.14' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.14 00:58:05 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.14' saw that node 'ns_1@10.5.2.13' came up. ns_node_disco004 ns_1@10.5.2.14 00:58:05 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.13 00:58:05 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.15 00:58:05 - Fri Jul 6, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 00:50:02 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.14' went down. ns_node_disco005 ns_1@10.5.2.11 00:37:32 - Fri Jul 6, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 00:37:30 - Fri Jul 6, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.15'], EjectNodes = []
ns_orchestrator004 ns_1@10.5.2.11 00:37:29 - Fri Jul 6, 2012
Failed over 'ns_1@10.5.2.14': ok ns_orchestrator006 ns_1@10.5.2.11 00:37:25 - Fri Jul 6, 2012
Starting failing over 'ns_1@10.5.2.14' ns_orchestrator000 ns_1@10.5.2.11 00:37:24 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.14' went down. ns_node_disco005 ns_1@10.5.2.13 00:36:57 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.5.2.14' went down. ns_node_disco005 ns_1@10.5.2.15 00:36:57 - Fri Jul 6, 2012
Node 'ns_1@10.5.2.14' is leaving cluster. ns_cluster001 ns_1@10.5.2.14 00:36:54 - Fri Jul 6, 2012
Shutting down bucket "default" on 'ns_1@10.5.2.14' for deletion ns_memcached002 ns_1@10.5.2.14 00:36:50 - Fri Jul 6, 2012
Haven't heard from a higher priority node or a master, so I'm taking over. mb_master000 ns_1@10.5.2.13 19:53:21 - Thu Jul 5, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 15:43:08 - Thu Jul 5, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 15:37:03 - Thu Jul 5, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.14','ns_1@10.5.2.15'], EjectNodes = []
ns_orchestrator004 ns_1@10.5.2.11 15:37:01 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.11 15:36:59 - Thu Jul 5, 2012
Started node add transaction by adding node 'ns_1@10.5.2.14' to nodes_wanted
ns_cluster000 ns_1@10.5.2.11 15:36:59 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.11 15:36:48 - Thu Jul 5, 2012
Started node add transaction by adding node 'ns_1@10.5.2.15' to nodes_wanted
ns_cluster000 ns_1@10.5.2.11 15:36:47 - Thu Jul 5, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.14' in 0 seconds. ns_memcached001 ns_1@10.5.2.14 15:36:29 - Thu Jul 5, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.15' in 0 seconds. ns_memcached001 ns_1@10.5.2.15 15:36:28 - Thu Jul 5, 2012
Deleting old data files of bucket "default" ns_storage_conf000 ns_1@10.5.2.15 15:36:26 - Thu Jul 5, 2012
Node ns_1@10.5.2.14 joined cluster ns_cluster003 ns_1@10.5.2.14 15:36:24 - Thu Jul 5, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@10.5.2.14'. menelaus_sup001 ns_1@10.5.2.14 15:36:24 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.14' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.14 15:36:24 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.14' saw that node 'ns_1@10.5.2.13' came up. ns_node_disco004 ns_1@10.5.2.14 15:36:24 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.13 15:36:24 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.5.2.14' came up. ns_node_disco004 ns_1@10.5.2.15 15:36:24 - Thu Jul 5, 2012
Node ns_1@10.5.2.15 joined cluster ns_cluster003 ns_1@10.5.2.15 15:36:13 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.13 15:36:13 - Thu Jul 5, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@10.5.2.15'. menelaus_sup001 ns_1@10.5.2.15 15:36:13 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.5.2.13' came up. ns_node_disco004 ns_1@10.5.2.15 15:36:13 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.15' went down. ns_node_disco005 ns_1@10.5.2.11 15:27:27 - Thu Jul 5, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 15:27:27 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.15' went down. ns_node_disco005 ns_1@10.5.2.13 15:26:52 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.15' is leaving cluster. ns_cluster001 ns_1@10.5.2.15 15:26:52 - Thu Jul 5, 2012
Shutting down bucket "default" on 'ns_1@10.5.2.15' for deletion ns_memcached002 ns_1@10.5.2.15 15:26:52 - Thu Jul 5, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 15:22:33 - Thu Jul 5, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13'], EjectNodes = ['ns_1@10.5.2.15']
ns_orchestrator004 ns_1@10.5.2.11 15:22:33 - Thu Jul 5, 2012
Rebalance stopped by user.
ns_orchestrator007 ns_1@10.5.2.11 15:18:59 - Thu Jul 5, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 15:16:28 - Thu Jul 5, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13'], EjectNodes = ['ns_1@10.5.2.15']
ns_orchestrator004 ns_1@10.5.2.11 15:16:26 - Thu Jul 5, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 15:08:53 - Thu Jul 5, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 15:02:47 - Thu Jul 5, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13',
'ns_1@10.5.2.15'], EjectNodes = []
ns_orchestrator004 ns_1@10.5.2.11 15:02:46 - Thu Jul 5, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.15' in 0 seconds. ns_memcached001 ns_1@10.5.2.15 15:02:13 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.11 14:57:58 - Thu Jul 5, 2012
Started node add transaction by adding node 'ns_1@10.5.2.15' to nodes_wanted
ns_cluster000 ns_1@10.5.2.11 14:57:57 - Thu Jul 5, 2012
Node ns_1@10.5.2.15 joined cluster ns_cluster003 ns_1@10.5.2.15 14:57:24 - Thu Jul 5, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@10.5.2.15'. menelaus_sup001 ns_1@10.5.2.15 14:57:24 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.5.2.15' came up. ns_node_disco004 ns_1@10.5.2.13 14:57:23 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.5.2.13' came up. ns_node_disco004 ns_1@10.5.2.15 14:57:23 - Thu Jul 5, 2012
Rebalance completed successfully.
ns_orchestrator001 ns_1@10.5.2.11 14:38:42 - Thu Jul 5, 2012
Started rebalancing bucket default ns_rebalancer000 ns_1@10.5.2.11 14:29:36 - Thu Jul 5, 2012
Starting rebalance, KeepNodes = ['ns_1@10.5.2.11','ns_1@10.5.2.13'], EjectNodes = []
ns_orchestrator004 ns_1@10.5.2.11 14:29:36 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.13' came up. ns_node_disco004 ns_1@10.5.2.11 14:29:30 - Thu Jul 5, 2012
Started node add transaction by adding node 'ns_1@10.5.2.13' to nodes_wanted
ns_cluster000 ns_1@10.5.2.11 14:29:29 - Thu Jul 5, 2012
Renamed node. New name is 'ns_1@10.5.2.11'.
ns_cluster000 ns_1@10.5.2.11 14:29:29 - Thu Jul 5, 2012
I'm the only node, so I'm the master. mb_master000 ns_1@10.5.2.11 14:29:29 - Thu Jul 5, 2012
Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.5.2.11' came up. ns_node_disco004 ns_1@10.5.2.11 14:29:28 - Thu Jul 5, 2012
Node nonode@nohost saw that node 'ns_1@127.0.0.1' went down. ns_node_disco005 nonode@nohost 14:29:28 - Thu Jul 5, 2012
Decided to change address to "10.5.2.11"
ns_cluster000 ns_1@127.0.0.1 14:29:28 - Thu Jul 5, 2012
Bucket "default" loaded on node 'ns_1@10.5.2.13' in 0 seconds. ns_memcached001 ns_1@10.5.2.13 14:29:02 - Thu Jul 5, 2012
Node ns_1@10.5.2.13 joined cluster ns_cluster003 ns_1@10.5.2.13 14:28:55 - Thu Jul 5, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@10.5.2.13'. menelaus_sup001 ns_1@10.5.2.13 14:28:55 - Thu Jul 5, 2012
Bucket "default" loaded on node 'ns_1@127.0.0.1' in 0 seconds. ns_memcached001 ns_1@127.0.0.1 14:25:50 - Thu Jul 5, 2012
Created bucket "default" of type: membase
[{num_replicas,1},
{replica_index,false},
{ram_quota,748683264},
{auth_type,sasl}]
menelaus_web012 ns_1@127.0.0.1 14:25:50 - Thu Jul 5, 2012
Couchbase Server has started on web port 8091 on node 'ns_1@127.0.0.1'. menelaus_sup001 ns_1@127.0.0.1 13:42:23 - Thu Jul 5, 2012
I'm the only node, so I'm the master. mb_master000 ns_1@127.0.0.1 13:42:23 - Thu Jul 5, 2012
Initial otp cookie generated: horbvdnguspxysgw ns_cookie_manager003 ns_1@127.0.0.1 13:42:23 - Thu Jul 5, 2012
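The single-line entries above run all four table columns (Event, Module Code, Server Node, Time) together in one row, which makes the report hard to grep when triaging. A minimal sketch of splitting such a line back into its columns, assuming the layout observed above; the regex and the `parse_entry` helper are illustrative assumptions, not part of Couchbase Server:

```python
import re

# Assumed single-line layout, inferred from the entries above:
# "<event text> <module><3-digit code> <node> <HH:MM:SS> - <Day Mon D, YYYY>"
LINE_RE = re.compile(
    r"^(?P<event>.*?)\s+"
    r"(?P<module>[a-z_]+)(?P<code>\d{3})\s+"
    r"(?P<node>\S+@\S+)\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}) - (?P<date>\w{3} \w{3} \d{1,2}, \d{4})$"
)

def parse_entry(line):
    """Split a single-line diagnostic entry into its table columns,
    or return None for multi-line entries (e.g. Erlang stack traces)."""
    m = LINE_RE.match(line.strip())
    return m.groupdict() if m else None

entry = parse_entry(
    "Initial otp cookie generated: horbvdnguspxysgw "
    "ns_cookie_manager003 ns_1@127.0.0.1 13:42:23 - Thu Jul 5, 2012"
)
```

Multi-line entries (the "Server error during processing" traces) spread their metadata across several rows and would need to be joined before parsing; this sketch only handles the one-line case.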

About Couchbase Server: 2.0.0 enterprise edition (build-1409)

Configure Server

Step 1 of 5

Configure Disk Storage


---

---

Join Cluster / Start new Cluster

If you want to add this server to an existing Couchbase Cluster, select "Join a cluster now". Alternatively, you may create a new Couchbase Cluster by selecting "Start a new cluster".

If you start a new cluster the "Per Server RAM Quota" you set below will define the amount of RAM each server provides to the Couchbase Cluster. This value will be inherited by all servers subsequently joining the cluster, so please set appropriately.

MB (256 MB — MB)

Create Default Bucket

Step 3 of 5

Bucket Settings

Bucket Type:

Memory Size

Cluster quota (192 Gb)
This Bucket (5 Gb) Free (12 Gb)

Replicas

Notifications

Step 4 of 5

Update Notifications

What's this?

Enabling software update notifications allows notification in the Couchbase web console when a new version of Couchbase Server is available. Configuration information transferred in the update check is anonymous and does not include any stored key-value data.

Community Updates

Please provide your email address to join the community and receive news on coming events.

Product Registration

Register your Enterprise Edition of Couchbase Server below.

Sample Buckets

Step 2 of 5

Sample Data and MapReduce

Sample buckets are available to demonstrate the power of Couchbase Server. These samples contain data and sample MapReduce queries.

Installed Samples

    Available Samples

      Configure Server

      Step 5 of 5

      Secure this Server

      Please create an administrator account for this Server. If you want to join other servers to this one to form a cluster, you will need to use these administrator credentials in the "join cluster" process.