Couchbase Server: MB-7635

Rebalance exited with reason {pre_rebalance_config_synchronization_failed, NODE}, even though the node was ejected more than an hour before

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.1
    • Fix Version/s: 2.0.1
    • Component/s: ns_server
    • Security Level: Public
    • Labels:
      None
    • Environment:
      centos 64

      Description

      build version=2.0.1-145-rel
      http://qa.hq.northscale.net/view/2.0.1/job/ubuntu-64-2.0-new-rebalance-tests-P0/335/consoleFull

      ./testrunner -i /tmp/rebalance-tests.ini get-logs=True,wait_timeout=180,GROUP=P0,get-cbcollect-info=True -t rebalance.rebalanceout.RebalanceOutTests.rebalance_out_with_ops,nodes_out=3,GROUP=OUT;P0

      Test Input params:

      {'GROUP': 'P0', 'case_number': 10, 'cluster_name': 'rebalance-tests', 'conf_file': 'conf/py-newrebalance.conf', 'get-cbcollect-info': 'True', 'get-logs': 'True', 'ini': '/tmp/rebalance-tests.ini', 'nodes_out': '3', 'num_nodes': 6, 'spec': 'py-newrebalance', rebalance_out_with_ops (rebalance.rebalanceout.RebalanceOutTests) ... 'wait_timeout': '180'}

      [2013-01-29 04:48:06,977] - [basetestcase:55] INFO - ============== basetestcase setup was started for test #10 rebalance_out_with_ops==============
      [2013-01-29 04:48:07,995] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,019] - [rest_client:606] INFO - pools/default params : memoryQuota=2106&username=Administrator&password=password
      [2013-01-29 04:48:08,032] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,056] - [rest_client:606] INFO - pools/default params : memoryQuota=2106&username=Administrator&password=password
      [2013-01-29 04:48:08,077] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,095] - [rest_client:606] INFO - pools/default params : memoryQuota=2106&username=Administrator&password=password
      [2013-01-29 04:48:08,118] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,152] - [rest_client:606] INFO - pools/default params : memoryQuota=2054&username=Administrator&password=password
      [2013-01-29 04:48:08,170] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,191] - [rest_client:606] INFO - pools/default params : memoryQuota=2106&username=Administrator&password=password
      [2013-01-29 04:48:08,210] - [rest_client:595] INFO - settings/web params : username=Administrator&password=password&port=8091
      [2013-01-29 04:48:08,234] - [rest_client:606] INFO - pools/default params : memoryQuota=2054&username=Administrator&password=password
      [2013-01-29 04:48:09,241] - [task:232] INFO - adding node 10.5.2.14:8091 to cluster
      [2013-01-29 04:48:09,242] - [rest_client:712] INFO - adding remote node @10.5.2.14:8091 to this cluster @10.5.2.11:8091
      [2013-01-29 04:48:09,705] - [task:232] INFO - adding node 10.5.2.13:8091 to cluster
      [2013-01-29 04:48:09,706] - [rest_client:712] INFO - adding remote node @10.5.2.13:8091 to this cluster @10.5.2.11:8091
      [2013-01-29 04:48:10,621] - [task:232] INFO - adding node 10.3.121.106:8091 to cluster
      [2013-01-29 04:48:10,621] - [rest_client:712] INFO - adding remote node @10.3.121.106:8091 to this cluster @10.5.2.11:8091
      [2013-01-29 04:48:11,599] - [task:232] INFO - adding node 10.5.2.15:8091 to cluster
      [2013-01-29 04:48:11,599] - [rest_client:712] INFO - adding remote node @10.5.2.15:8091 to this cluster @10.5.2.11:8091
      [2013-01-29 04:48:12,510] - [task:232] INFO - adding node 10.3.121.107:8091 to cluster
      [2013-01-29 04:48:12,511] - [rest_client:712] INFO - adding remote node @10.3.121.107:8091 to this cluster @10.5.2.11:8091
      [2013-01-29 04:48:19,589] - [rest_client:795] INFO - rebalance params : password=password&ejectedNodes=&user=Administrator&knownNodes=ns_1%4010.5.2.14%2Cns_1%4010.3.121.106%2Cns_1%4010.3.121.107%2Cns_1%4010.5.2.13%2Cns_1%4010.5.2.11%2Cns_1%4010.5.2.15
      [2013-01-29 04:48:19,607] - [rest_client:799] INFO - rebalance operation started
      [2013-01-29 04:48:19,616] - [rest_client:896] INFO - rebalance percentage : 0 %
      [2013-01-29 04:48:29,633] - [rest_client:879] ERROR - {u'status': u'none', u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try rebalance again.'}
      - rebalance failed
      [2013-01-29 04:48:29,634] - [rest_client:880] INFO - Latest logs from UI:
      [2013-01-29 04:48:29,754] - [rest_client:881] ERROR - {u'node': u'ns_1@10.5.2.11', u'code': 2, u'text': u"Rebalance exited with reason {pre_rebalance_config_synchronization_failed,\n ['ns_1@10.3.121.107']}\n", u'shortText': u'message', u'module': u'ns_orchestrator', u'tstamp': 1359465438152, u'type': u'info'}
      [2013-01-29 04:48:29,755] - [rest_client:881] ERROR - {u'node': u'ns_1@10.5.2.11', u'code': 4, u'text': u"Starting rebalance, KeepNodes = ['ns_1@10.5.2.14','ns_1@10.3.121.106',\n 'ns_1@10.3.121.107','ns_1@10.5.2.13',\n 'ns_1@10.5.2.11','ns_1@10.5.2.15'], EjectNodes = []\n", u'shortText': u'message', u'module': u'ns_orchestrator', u'tstamp': 1359465431759, u'type': u'info'}
      [2013-01-29 04:48:29,755] - [rest_client:881] ERROR - {u'node': u'ns_1@10.3.121.107', u'code': 3, u'text': u'Node ns_1@10.3.121.107 joined cluster', u'shortText': u'message', u'module': u'ns_cluster', u'tstamp': 1359465429227, u'type': u'info'}
      [2013-01-29 04:48:29,756] - [rest_client:881] ERROR - {u'node': u'ns_1@10.3.121.107', u'code': 1, u'text': u"Couchbase Server has started on web port 8091 on node 'ns_1@10.3.121.107'.", u'shortText': u'web start ok', u'module': u'menelaus_sup', u'tstamp': 1359465427603, u'type': u'info'}
      [2013-01-29 04:48:29,756] - [rest_client:881] ERROR - {u'node': u'ns_1@10.5.2.15', u'code': 4, u'text': u"Node 'ns_1@10.5.2.15' saw that node 'ns_1@10.3.121.107' came up.", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1359465426106, u'type': u'info'}
      [2013-01-29 04:48:29,757] - [rest_client:881] ERROR - {u'node': u'ns_1@10.5.2.14', u'code': 4, u'text': u"Node 'ns_1@10.5.2.14' saw that node 'ns_1@10.3.121.107' came up.", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1359465426079, u'type': u'info'}
      [2013-01-29 04:48:29,758] - [rest_client:881] ERROR - {u'node': u'ns_1@10.3.121.107', u'code': 4, u'text': u"Node 'ns_1@10.3.121.107' saw that node 'ns_1@10.5.2.15' came up.", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1359465426070, u'type': u'info'}
      [2013-01-29 04:48:29,759] - [rest_client:881] ERROR - {u'node': u'ns_1@10.3.121.107', u'code': 4, u'text': u"Node 'ns_1@10.3.121.107' saw that node 'ns_1@10.5.2.14' came up.", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1359465426065, u'type': u'info'}
      [2013-01-29 04:48:29,759] - [rest_client:881] ERROR - {u'node': u'ns_1@10.5.2.13', u'code': 4, u'text': u"Node 'ns_1@10.5.2.13' saw that node 'ns_1@10.3.121.107' came up.", u'shortText': u'node up', u'module': u'ns_node_disco', u'tstamp': 1359465426063, u'type': u'info'}
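The failing node list can be pulled out of the UI error text mechanically. A minimal sketch (the helper name is mine, not part of testrunner; it only parses the `u'text'` field shown above):

```python
import re

def failed_sync_nodes(error_text):
    """Extract the node list from a pre_rebalance_config_synchronization_failed
    message, e.g. the u'text' field of the code-2 UI log entry above."""
    m = re.search(
        r"pre_rebalance_config_synchronization_failed,\s*\[([^\]]*)\]",
        error_text,
    )
    if not m:
        return []
    # list body looks like: 'ns_1@10.3.121.107','ns_1@10.5.2.13'
    return [n.strip().strip("'") for n in m.group(1).split(",")]

text = ("Rebalance exited with reason "
        "{pre_rebalance_config_synchronization_failed,\n ['ns_1@10.3.121.107']}\n")
print(failed_sync_nodes(text))  # ['ns_1@10.3.121.107']
```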

      Previously, 10.3.121.107 had been ejected more than an hour earlier:

      [2013-01-29 03:34:04,889] - [rest_client:795] INFO - rebalance params : password=password&ejectedNodes=ns_1%4010.5.2.14%2Cns_1%4010.3.121.106%2Cns_1%4010.3.121.107%2Cns_1%4010.5.2.13%2Cns_1%4010.5.2.15&user=Administrator&knownNodes=ns_1%4010.5.2.14%2Cns_1%4010.3.121.106%2Cns_1%4010.3.121.107%2Cns_1%4010.5.2.13%2Cns_1%4010.5.2.11%2Cns_1%4010.5.2.15
      [2013-01-29 03:34:04,908] - [rest_client:799] INFO - rebalance operation started
      [2013-01-29 03:34:04,919] - [rest_client:896] INFO - rebalance percentage : 0 %
      [2013-01-29 03:34:08,933] - [rest_client:855] INFO - rebalance progress took 4.02412295341 seconds
      [2013-01-29 03:34:08,933] - [rest_client:856] INFO - sleep for 4.02412295341 seconds after rebalance...
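The `rebalance params` bodies logged above are ordinary form-encoded strings: `knownNodes` and `ejectedNodes` are comma-joined otpNode names, with `@` escaped as `%40` and `,` as `%2C`. A minimal sketch of how such a body can be built (the helper name is mine; the real testrunner rest_client may differ):

```python
from urllib.parse import urlencode

def rebalance_body(known_nodes, ejected_nodes,
                   user="Administrator", password="password"):
    # knownNodes/ejectedNodes are comma-joined otpNode names;
    # urlencode() percent-escapes '@' as %40 and ',' as %2C
    return urlencode({
        "password": password,
        "ejectedNodes": ",".join(ejected_nodes),
        "user": user,
        "knownNodes": ",".join(known_nodes),
    })

known = ["ns_1@10.5.2.14", "ns_1@10.3.121.106", "ns_1@10.3.121.107",
         "ns_1@10.5.2.13", "ns_1@10.5.2.11", "ns_1@10.5.2.15"]
body = rebalance_body(known, [])
# reproduces the 'rebalance params' line logged at 04:48:19
```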

      From the server logs:

      2013-01-29 04:02:56.312 ns_orchestrator:4:info:message(ns_1@10.5.2.11) - Starting rebalance, KeepNodes = ['ns_1@10.5.2.11'], EjectNodes = ['ns_1@10.5.2.14',
      'ns_1@10.3.121.106',
      'ns_1@10.3.121.107',
      'ns_1@10.5.2.13',
      'ns_1@10.5.2.15']

      Then, more than an hour later:

      2013-01-29 05:17:09.227 ns_cluster:3:info:message(ns_1@10.3.121.107) - Node ns_1@10.3.121.107 joined cluster
      2013-01-29 05:17:11.759 ns_orchestrator:4:info:message(ns_1@10.5.2.11) - Starting rebalance, KeepNodes = ['ns_1@10.5.2.14','ns_1@10.3.121.106',
      'ns_1@10.3.121.107','ns_1@10.5.2.13',
      'ns_1@10.5.2.11','ns_1@10.5.2.15'], EjectNodes = []

      2013-01-29 05:17:18.152 ns_orchestrator:2:info:message(ns_1@10.5.2.11) - Rebalance exited with reason
      {pre_rebalance_config_synchronization_failed, ['ns_1@10.3.121.107']}
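The gap between the eject and the failed re-add can be checked directly from the server-log timestamps above; a quick sketch:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# eject of 10.3.121.107 started with the rebalance-out at 04:02:56;
# the failing rebalance-in exited at 05:17:18 (timestamps from the logs above)
ejected_at = datetime.strptime("2013-01-29 04:02:56.312", FMT)
failed_at = datetime.strptime("2013-01-29 05:17:18.152", FMT)

gap = failed_at - ejected_at  # 1 hour 14 minutes, i.e. well over an hour
print(gap)
```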

      logs_node (debug):


  Activity

        andreibaranouski Andrei Baranouski added a comment -
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.5.2.11-1292013-56-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.5.2.14-1292013-57-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.5.2.13-1292013-58-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.5.2.15-1292013-511-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.3.121.106-1292013-510-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/10.3.121.107-1292013-512-diag.zip
        https://s3.amazonaws.com/bugdb/jira/MB-7635/812e7375-1f2a-42f6-a186-0ccfbdd730cd-10.5.2.11-diag.txt.gz
        https://s3.amazonaws.com/bugdb/jira/MB-7635/812e7375-1f2a-42f6-a186-0ccfbdd730cd-10.3.121.107-diag.txt.gz
        farshid Farshid Ghods (Inactive) added a comment -

        Andrei,
        If this is a regression from 2.0, can you add "regression" to the labels?
        Also, did you happen to see timeouts in the orchestrator logs?

        Another request: please set the fix version to 2.0.1, otherwise we will not go through the bug in the scrub meeting.

        andreibaranouski Andrei Baranouski added a comment - reproduced on build 2.0.2-706-rel http://qa.hq.northscale.net/view/2.0.1/job/ubuntu-64-2.0-new-rebalance-tests-P0/336/consoleFull
        andreibaranouski Andrei Baranouski added a comment -

        logs are clean, without any timeouts
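A crude way to double-check such an excerpt for timeout mentions (both sample lines below are invented for illustration; they are not from the attached logs):

```python
import re

def timeout_lines(log_text):
    """Return log lines mentioning a timeout -- a quick sanity check of the
    'logs are clean, without any timeouts' claim."""
    return [line for line in log_text.splitlines()
            if re.search(r"\btimed?[ _]?out\b", line, re.I)]

# invented sample lines, only to exercise the filter
sample = "\n".join([
    "[ns_server:debug,2013-01-29T5:17:05.852] Replicating some config keys",
    "[ns_server:warn,2013-01-29T5:17:06.000] gen_server call timed out",
])
print(timeout_lines(sample))
```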

        [cluster:debug,2013-01-29T5:17:04.709,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:verify_otp_connectivity:398]port_please("ns_1", "10.3.121.107") = 21100

        [cluster:info,2013-01-29T5:17:04.712,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:node_add_transaction:503]Started node add transaction by adding node 'ns_1@10.3.121.107' to nodes_wanted

        [ns_server:debug,2013-01-29T5:17:04.717,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317]Replicating some config keys ([nodes_wanted,
        {node,'ns_1@10.3.121.107',membership}]..)
        [ns_server:debug,2013-01-29T5:17:04.722,ns_1@10.5.2.11:ns_config_events<0.6575.0>:ns_node_disco_conf_events:handle_event:44]ns_node_disco_conf_events config on nodes_wanted
        [ns_server:debug,2013-01-29T5:17:04.722,ns_1@10.5.2.11:mb_master<0.6624.0>:mb_master:update_peers:509]List of peers has changed from ['ns_1@10.3.121.106','ns_1@10.5.2.11',
        'ns_1@10.5.2.13','ns_1@10.5.2.14',
        'ns_1@10.5.2.15'] to ['ns_1@10.3.121.106',
        'ns_1@10.3.121.107',
        'ns_1@10.5.2.11',
        'ns_1@10.5.2.13',
        'ns_1@10.5.2.14',
        'ns_1@10.5.2.15']
        [ns_server:debug,2013-01-29T5:17:04.723,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        nodes_wanted ->
        ['ns_1@10.3.121.106','ns_1@10.3.121.107','ns_1@10.5.2.11','ns_1@10.5.2.13',
        'ns_1@10.5.2.14','ns_1@10.5.2.15']
        [ns_server:debug,2013-01-29T5:17:04.723,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',membership} ->
        inactiveAdded
        [ns_server:debug,2013-01-29T5:17:04.724,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_sync:115]ns_cookie_manager do_cookie_sync
        [ns_server:debug,2013-01-29T5:17:04.727,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_save:152]saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie"
        [cluster:debug,2013-01-29T5:17:04.727,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:do_add_node_engaged_inner:471]Posting the following to complete_join on "10.3.121.107:8091":
        {struct,
         [{<<"targetNode">>,'ns_1@10.3.121.107'},
          {availableStorage,
           {struct,
            [{hdd,
              [{struct,[{path,<<"/">>},{sizeKBytes,55007284},{usagePercent,13}]},
               {struct,[{path,<<"/boot">>},{sizeKBytes,101086},{usagePercent,21}]},
               {struct,[{path,<<"/dev/shm">>},{sizeKBytes,2021844},{usagePercent,0}]}]}]}},
          {memoryQuota,2106},
          {storageTotals,
           {struct,
            [{ram,
              {struct,
               [{total,4140740608},
                {quotaTotal,2208301056},
                {quotaUsed,0},
                {used,2612301824},
                {usedByData,0}]}},
             {hdd,
              {struct,
               [{total,56327458816},
                {quotaTotal,56327458816},
                {used,7322569646},
                {usedByData,0},
                {free,49004889170}]}}]}},
          {storage,
           {struct,
            [{ssd,[]},
             {hdd,
              [{struct,
                [{path,<<"/opt/couchbase/var/lib/couchbase/data">>},
                 {index_path,<<"/opt/couchbase/var/lib/couchbase/data">>},
                 {quotaMb,none},
                 {state,ok}]}]}]}},
          {systemStats,
           {struct,
            [{cpu_utilization_rate,7.106598984771574},
             {swap_total,6140452864},
             {swap_used,54947840}]}},
          {interestingStats,{struct,[]}},
          {uptime,<<"531">>},
          {memoryTotal,4140740608},
          {memoryFree,1528438784},
          {mcdMemoryReserved,3159},
          {mcdMemoryAllocated,3159},
          {couchApiBase,<<"http://10.5.2.11:8092/">>},
          {otpNode,<<"ns_1@10.5.2.11">>},
          {otpCookie,<<"iustxuacjffxggse">>},
          {clusterMembership,<<"active">>},
          {status,<<"healthy">>},
          {thisNode,true},
          {hostname,<<"10.5.2.11:8091">>},
          {clusterCompatibility,131072},
          {version,<<"2.0.1-145-rel-enterprise">>},
          {os,<<"x86_64-unknown-linux-gnu">>},
          {ports,{struct,[{proxy,11211},{direct,11210}]}}]}

        [ns_server:debug,2013-01-29T5:17:04.729,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:04.730,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:04.733,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_save:154]attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie": ok
        [ns_server:debug,2013-01-29T5:17:04.733,ns_1@10.5.2.11:<0.28483.1>:ns_node_disco:do_nodes_wanted_updated_fun:202]ns_node_disco: nodes_wanted updated: ['ns_1@10.3.121.106','ns_1@10.3.121.107',
        'ns_1@10.5.2.11','ns_1@10.5.2.13',
        'ns_1@10.5.2.14','ns_1@10.5.2.15'], with cookie: iustxuacjffxggse
        [ns_server:debug,2013-01-29T5:17:04.747,ns_1@10.5.2.11:<0.28483.1>:ns_node_disco:do_nodes_wanted_updated_fun:208]ns_node_disco: nodes_wanted pong: ['ns_1@10.3.121.106','ns_1@10.5.2.11',
        'ns_1@10.5.2.13','ns_1@10.5.2.14',
        'ns_1@10.5.2.15'], with cookie: iustxuacjffxggse
        [ns_server:debug,2013-01-29T5:17:05.143,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:05.145,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [user:info,2013-01-29T5:17:05.842,ns_1@10.5.2.11:ns_node_disco<0.6595.0>:ns_node_disco:handle_info:162]Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.3.121.107' came up.
        [ns_server:debug,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_node_disco_events<0.6594.0>:ns_node_disco_rep_events:handle_event:42]Detected a new nodes (['ns_1@10.3.121.107']). Moving config around.
        [ns_server:debug,2013-01-29T5:17:05.844,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:132]doing replicate_newnodes_docs
        [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_node_disco_events<0.6594.0>:ns_node_disco_log:handle_event:46]ns_node_disco_log: nodes changed: ['ns_1@10.3.121.106','ns_1@10.3.121.107',
        'ns_1@10.5.2.11','ns_1@10.5.2.13',
        'ns_1@10.5.2.14','ns_1@10.5.2.15']
        [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_info:220]Replicating config to/from:
        ['ns_1@10.3.121.107']
        [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_pull:341]Pulling config from: 'ns_1@10.3.121.107'

        [ns_server:debug,2013-01-29T5:17:05.845,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:replicate_change_to_node:160]Sending _design/_replicator_info to ns_1@10.3.121.107
        [ns_server:warn,2013-01-29T5:17:05.846,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:128]Remote server node
        {xdc_rdoc_replication_srv,'ns_1@10.3.121.107'} process down: noproc
        [ns_server:debug,2013-01-29T5:17:05.851,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',config_version} ->
        {2,0}
        [ns_server:debug,2013-01-29T5:17:05.851,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',isasl} ->
        [{path,"/opt/couchbase/var/lib/couchbase/data/isasl.pw"}]
        [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',memcached} ->
        [{mccouch_port,11213},
         {engines,
          [{membase,
            [{engine,"/opt/couchbase/lib/memcached/ep.so"},
             {static_config_string,"vb0=false;waitforwarmup=false;failpartialwarmup=false"}]},
           {memcached,
            [{engine,"/opt/couchbase/lib/memcached/default_engine.so"},
             {static_config_string,"vb0=true"}]}]},
         {log_path,"/opt/couchbase/var/lib/couchbase/logs"},
         {log_prefix,"memcached.log"},
         {log_generations,10},
         {log_cyclesize,104857600},
         {log_sleeptime,19},
         {log_rotation_period,39003},
         {dedicated_port,11209},
         {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"},
         {port,11210},
         {dedicated_port,11209},
         {admin_user,"_admin"},
         {admin_pass,"_admin"},
         {verbosity,[]}]
        [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',ns_log} ->
        [{filename,"/opt/couchbase/var/lib/couchbase/data/ns_log"}]
        [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',port_servers} ->
        [{moxi,"/opt/couchbase/bin/moxi",
          ["-Z",
           {"port_listen=~B,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200",
            [port]},
           "-z",
           {"url=http://127.0.0.1:~B/pools/default/saslBucketsStreaming",
            [{misc,this_node_rest_port,[]}]},
           "-p","0","-Y","y","-O","stderr",
           {"~s",[verbosity]}],
          [{env,[{"EVENT_NOSELECT","1"},
                 {"MOXI_SASL_PLAIN_USR",{"~s",[{ns_moxi_sup,rest_user,[]}]}},
                 {"MOXI_SASL_PLAIN_PWD",{"~s",[{ns_moxi_sup,rest_pass,[]}]}}]},
           use_stdio,exit_status,port_server_send_eol,stderr_to_stdout,stream]},
         {memcached,"/opt/couchbase/bin/memcached",
          ["-X","/opt/couchbase/lib/memcached/stdin_term_handler.so","-X",
           {"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=~B;sleeptime=~B;filename=~s/~s",
            [log_cyclesize,log_sleeptime,log_path,log_prefix]},
           "-l",
           {"0.0.0.0:~B,0.0.0.0:~B:1000",[port,dedicated_port]},
           "-p",
           {"~B",[port]},
           "-E","/opt/couchbase/lib/memcached/bucket_engine.so","-B",
           "binary","-r","-c","10000","-e",
           {"admin=~s;default_bucket_name=default;auto_create=false",[admin_user]},
           {"~s",[verbosity]}],
          [{env,[{"EVENT_NOSELECT","1"},
                 {"MEMCACHED_TOP_KEYS","100"},
                 {"ISASL_PWFILE",{"~s",[{isasl,path}]}},
                 {"ISASL_DB_CHECK_TIME","1"}]},
           use_stdio,stderr_to_stdout,exit_status,port_server_send_eol,stream]}]
        [ns_server:debug,2013-01-29T5:17:05.866,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_info:225]config pull_and_push done.

        [ns_server:debug,2013-01-29T5:17:06.127,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:06.127,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:06.276,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:06.279,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:07.119,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:07.119,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:08.582,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:08.584,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:08.771,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:08.772,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:08.912,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:132]doing replicate_newnodes_docs
        [ns_server:debug,2013-01-29T5:17:08.913,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:replicate_change_to_node:160]Sending _design/_replicator_info to ns_1@10.3.121.107
        [ns_server:debug,2013-01-29T5:17:09.110,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:09.112,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:09.189,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:09.190,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:09.601,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:09.602,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [cluster:debug,2013-01-29T5:17:11.735,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:do_add_node_engaged_inner:478]Reply from complete_join on "10.3.121.107:8091":
        {ok,[]}
        [cluster:debug,2013-01-29T5:17:11.735,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:handle_call:121]add_node("10.3.121.107", 8091, ..) ->
        {ok,'ns_1@10.3.121.107'}

        [ns_server:debug,2013-01-29T5:17:11.759,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.5.2.15',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.760,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317]Replicating some config keys ([{node,'ns_1@10.3.121.106',membership},
        {node,'ns_1@10.3.121.107',membership},
        {node,'ns_1@10.5.2.11',membership},
        {node,'ns_1@10.5.2.13',membership},
        {node,'ns_1@10.5.2.14',membership},
        {node,'ns_1@10.5.2.15',membership}]..)
        [user:info,2013-01-29T5:17:11.759,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:idle:492]Starting rebalance, KeepNodes = ['ns_1@10.5.2.14','ns_1@10.3.121.106',
        'ns_1@10.3.121.107','ns_1@10.5.2.13',
        'ns_1@10.5.2.11','ns_1@10.5.2.15'], EjectNodes = []

        [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.5.2.11',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.5.2.13',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.107',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.3.121.106',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        {node,'ns_1@10.5.2.14',membership} ->
        active
        [ns_server:debug,2013-01-29T5:17:11.774,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317]Replicating some config keys ([counters,rebalance_status,rebalancer_pid]..)
        [ns_server:debug,2013-01-29T5:17:11.783,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        counters ->
        [{rebalance_start,24},{rebalance_success,23}]
        [ns_server:debug,2013-01-29T5:17:11.793,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_call:113]Got full synchronization request from 'ns_1@10.5.2.11'
        [ns_server:debug,2013-01-29T5:17:11.793,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_call:119]Fully synchronized config in 11 us
        [ns_server:debug,2013-01-29T5:17:11.802,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        rebalancer_pid ->
        <0.28549.1>
        [ns_server:debug,2013-01-29T5:17:11.810,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        rebalance_status ->
        running
        [ns_server:debug,2013-01-29T5:17:11.815,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:11.815,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:12.660,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:12.661,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:13.202,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:13.203,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:13.991,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:13.992,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:14.325,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:14.326,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:info,2013-01-29T5:17:14.714,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:handle_info:282]Skipping janitor in state rebalancing:
        {rebalancing_state,<0.28549.1>,
            {dict,0,16,16,8,80,48,
                {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
            ['ns_1@10.5.2.14','ns_1@10.3.121.106',
             'ns_1@10.3.121.107','ns_1@10.5.2.13',
             'ns_1@10.5.2.11','ns_1@10.5.2.15'],
            [],[]}
        [ns_server:debug,2013-01-29T5:17:14.741,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:14.743,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:15.600,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:15.601,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:16.128,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:16.131,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:16.445,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:16.446,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:16.807,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:16.808,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:17.095,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:17.096,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:debug,2013-01-29T5:17:17.697,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386]No buckets to compact. Rescheduling compaction.
        [ns_server:debug,2013-01-29T5:17:17.698,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406]Finished compaction too soon. Next run will be in 30s
        [ns_server:error,2013-01-29T5:17:18.152,ns_1@10.5.2.11:<0.28549.1>:ns_config_rep:synchronize_remote:287]Failed to synchronize config to some nodes:
        ['ns_1@10.3.121.107']
        [user:info,2013-01-29T5:17:18.152,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:handle_info:319]Rebalance exited with reason {pre_rebalance_config_synchronization_failed, ['ns_1@10.3.121.107']}

        [ns_server:info,2013-01-29T5:17:18.156,ns_1@10.5.2.11:<0.28596.1>:diag_handler:log_all_tap_and_checkpoint_stats:127]logging tap & checkpoint stats
        [ns_server:debug,2013-01-29T5:17:18.162,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317]Replicating some config keys ([counters,rebalance_status,rebalancer_pid]..)
        [ns_server:info,2013-01-29T5:17:18.165,ns_1@10.5.2.11:<0.28596.1>:diag_handler:log_all_tap_and_checkpoint_stats:131]end of logging tap & checkpoint stats
        [ns_server:debug,2013-01-29T5:17:18.174,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        counters ->
        [{rebalance_fail,1},{rebalance_start,24},{rebalance_success,23}]
        [ns_server:debug,2013-01-29T5:17:18.191,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        rebalancer_pid ->
        undefined
        [ns_server:debug,2013-01-29T5:17:18.202,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111]config change:
        rebalance_status ->
        {none,<<"Rebalance failed. See logs for detailed reason. You can try rebalance again.">>}
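        The log sequence above shows the orchestrator aborting the rebalance because the freshly added node 'ns_1@10.3.121.107' did not acknowledge the pre-rebalance config push in time (synchronize_remote reports it as a bad node, and the rebalance exits with {pre_rebalance_config_synchronization_failed, BadNodes}). As a rough illustration only (this is a hypothetical Python sketch, not ns_server code; the function name and signature are invented), a synchronize-with-deadline check that returns the list of nodes that failed to ack has this shape:

```python
import time

def synchronize_config(nodes, push_fn, timeout_s=30.0):
    """Push a config revision to every node; return the sorted list of
    nodes that did not acknowledge it before the deadline.

    push_fn(node) is assumed to return True once `node` has acked the
    current revision (e.g. after polling its replicated config version).
    A non-empty return value corresponds to the
    {pre_rebalance_config_synchronization_failed, BadNodes} error above.
    """
    deadline = time.monotonic() + timeout_s
    pending = set(nodes)
    while pending and time.monotonic() < deadline:
        # Re-check every node that has not acked yet.
        for node in list(pending):
            if push_fn(node):
                pending.discard(node)
        if pending:
            time.sleep(0.1)  # back off briefly before polling again
    return sorted(pending)
```

With this shape, a node that joined only seconds before the rebalance started (as 'ns_1@10.3.121.107' did here, at 5:17:04 for a 5:17:11 rebalance) can still be mid-replication when the deadline expires, which matches the commenter's observation that the logs are otherwise clean.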
        andreibaranouski Andrei Baranouski added a comment - logs are clean, without any timeouts [cluster:debug,2013-01-29T5:17:04.709,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:verify_otp_connectivity:398] port_please("ns_1", "10.3.121.107") = 21100 [cluster:info,2013-01-29T5:17:04.712,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:node_add_transaction:503] Started node add transaction by adding node 'ns_1@10.3.121.107' to nodes_wanted [ns_server:debug,2013-01-29T5:17:04.717,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317] Replicating some config keys ([nodes_wanted, {node,'ns_1@10.3.121.107',membership}]..) [ns_server:debug,2013-01-29T5:17:04.722,ns_1@10.5.2.11:ns_config_events<0.6575.0>:ns_node_disco_conf_events:handle_event:44] ns_node_disco_conf_events config on nodes_wanted [ns_server:debug,2013-01-29T5:17:04.722,ns_1@10.5.2.11:mb_master<0.6624.0>:mb_master:update_peers:509] List of peers has changed from ['ns_1@10.3.121.106','ns_1@10.5.2.11', 'ns_1@10.5.2.13','ns_1@10.5.2.14', 'ns_1@10.5.2.15'] to ['ns_1@10.3.121.106', 'ns_1@10.3.121.107', 'ns_1@10.5.2.11', 'ns_1@10.5.2.13', 'ns_1@10.5.2.14', 'ns_1@10.5.2.15'] [ns_server:debug,2013-01-29T5:17:04.723,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: nodes_wanted -> ['ns_1@10.3.121.106','ns_1@10.3.121.107','ns_1@10.5.2.11','ns_1@10.5.2.13', 'ns_1@10.5.2.14','ns_1@10.5.2.15'] [ns_server:debug,2013-01-29T5:17:04.723,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change:{node,'ns_1@10.3.121.107',membership} -> inactiveAdded [ns_server:debug,2013-01-29T5:17:04.724,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_sync:115] ns_cookie_manager do_cookie_sync [ns_server:debug,2013-01-29T5:17:04.727,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_save:152] saving cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie" 
[cluster:debug,2013-01-29T5:17:04.727,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:do_add_node_engaged_inner:471] Posting the following to complete_join on "10.3.121.107:8091": {struct, [ {<<"targetNode">>,'ns_1@10.3.121.107'} , {availableStorage, {struct, [{hdd, [{struct, [ {path,<<"/">>} , {sizeKBytes,55007284} , {usagePercent,13} ]}, {struct, [ {path,<<"/boot">>} , {sizeKBytes,101086} , {usagePercent,21} ]}, {struct, [ {path,<<"/dev/shm">>} , {sizeKBytes,2021844} , {usagePercent,0} ]}]}]}}, {memoryQuota,2106} , {storageTotals, {struct, [{ram, {struct, [ {total,4140740608} , {quotaTotal,2208301056} , {quotaUsed,0} , {used,2612301824} , {usedByData,0}]}}, {hdd, {struct, [{total,56327458816}, {quotaTotal,56327458816}, {used,7322569646}, {usedByData,0} , {free,49004889170} ]}}]}}, {storage, {struct, [ {ssd,[]} , {hdd, [{struct, [ {path,<<"/opt/couchbase/var/lib/couchbase/data">>} , {index_path, <<"/opt/couchbase/var/lib/couchbase/data">>} , {quotaMb,none} , {state,ok} ]}]}]}}, {systemStats, {struct, [ {cpu_utilization_rate,7.106598984771574} , {swap_total,6140452864} , {swap_used,54947840} ]}}, {interestingStats,{struct,[]}}, {uptime,<<"531">>} , {memoryTotal,4140740608} , {memoryFree,1528438784} , {mcdMemoryReserved,3159} , {mcdMemoryAllocated,3159} , {couchApiBase,<<"http://10.5.2.11:8092/">>} , {otpNode,<<"ns_1@10.5.2.11">>} , {otpCookie,<<"iustxuacjffxggse">>} , {clusterMembership,<<"active">>} , {status,<<"healthy">>} , {thisNode,true} , {hostname,<<"10.5.2.11:8091">>} , {clusterCompatibility,131072} , {version,<<"2.0.1-145-rel-enterprise">>} , {os,<<"x86_64-unknown-linux-gnu">>} , {ports,{struct,[ {proxy,11211} , {direct,11210} ]}}]} [ns_server:debug,2013-01-29T5:17:04.729,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:04.730,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. 
Next run will be in 30s [ns_server:debug,2013-01-29T5:17:04.733,ns_1@10.5.2.11:ns_cookie_manager<0.6433.0>:ns_cookie_manager:do_cookie_save:154] attempted to save cookie to "/opt/couchbase/var/lib/couchbase/couchbase-server.cookie": ok [ns_server:debug,2013-01-29T5:17:04.733,ns_1@10.5.2.11:<0.28483.1>:ns_node_disco:do_nodes_wanted_updated_fun:202] ns_node_disco: nodes_wanted updated: ['ns_1@10.3.121.106','ns_1@10.3.121.107', 'ns_1@10.5.2.11','ns_1@10.5.2.13', 'ns_1@10.5.2.14','ns_1@10.5.2.15'], with cookie: iustxuacjffxggse [ns_server:debug,2013-01-29T5:17:04.747,ns_1@10.5.2.11:<0.28483.1>:ns_node_disco:do_nodes_wanted_updated_fun:208] ns_node_disco: nodes_wanted pong: ['ns_1@10.3.121.106','ns_1@10.5.2.11', 'ns_1@10.5.2.13','ns_1@10.5.2.14', 'ns_1@10.5.2.15'], with cookie: iustxuacjffxggse [ns_server:debug,2013-01-29T5:17:05.143,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:05.145,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [user:info,2013-01-29T5:17:05.842,ns_1@10.5.2.11:ns_node_disco<0.6595.0>:ns_node_disco:handle_info:162] Node 'ns_1@10.5.2.11' saw that node 'ns_1@10.3.121.107' came up. [ns_server:debug,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_node_disco_events<0.6594.0>:ns_node_disco_rep_events:handle_event:42] Detected a new nodes ( ['ns_1@10.3.121.107'] ). Moving config around. 
[ns_server:debug,2013-01-29T5:17:05.844,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:132] doing replicate_newnodes_docs [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_node_disco_events<0.6594.0>:ns_node_disco_log:handle_event:46] ns_node_disco_log: nodes changed: ['ns_1@10.3.121.106','ns_1@10.3.121.107', 'ns_1@10.5.2.11','ns_1@10.5.2.13', 'ns_1@10.5.2.14','ns_1@10.5.2.15'] [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_info:220] Replicating config to/from: ['ns_1@10.3.121.107'] [ns_server:info,2013-01-29T5:17:05.844,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_pull:341] Pulling config from: 'ns_1@10.3.121.107' [ns_server:debug,2013-01-29T5:17:05.845,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:replicate_change_to_node:160] Sending _design/_replicator_info to ns_1@10.3.121.107 [ns_server:warn,2013-01-29T5:17:05.846,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:128] Remote server node {xdc_rdoc_replication_srv,'ns_1@10.3.121.107'} process down: noproc [ns_server:debug,2013-01-29T5:17:05.851,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',config_version} -> {2,0} [ns_server:debug,2013-01-29T5:17:05.851,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',isasl} -> [ {path,"/opt/couchbase/var/lib/couchbase/data/isasl.pw"} ] [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',memcached} -> [ {mccouch_port,11213} , {engines, [{membase, [ {engine,"/opt/couchbase/lib/memcached/ep.so"} , {static_config_string, "vb0=false;waitforwarmup=false;failpartialwarmup=false"} ]}, {memcached, [ {engine,"/opt/couchbase/lib/memcached/default_engine.so"} , {static_config_string,"vb0=true"} 
]}]}, {log_path,"/opt/couchbase/var/lib/couchbase/logs"} , {log_prefix,"memcached.log"} , {log_generations,10} , {log_cyclesize,104857600} , {log_sleeptime,19} , {log_rotation_period,39003} , {dedicated_port,11209}, {bucket_engine,"/opt/couchbase/lib/memcached/bucket_engine.so"}, {port,11210}, {dedicated_port,11209} , {admin_user,"_admin"} , {admin_pass,"_admin"} , {verbosity,[]} ] [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',ns_log} -> [ {filename,"/opt/couchbase/var/lib/couchbase/data/ns_log"} ] [ns_server:debug,2013-01-29T5:17:05.852,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',port_servers} -> [{moxi,"/opt/couchbase/bin/moxi", ["-Z", {"port_listen=~B,default_bucket_name=default,downstream_max=1024,downstream_conn_max=4,connect_max_errors=5,connect_retry_interval=30000,connect_timeout=400,auth_timeout=100,cycle=200,downstream_conn_queue_timeout=200,downstream_timeout=5000,wait_queue_timeout=200", [port]} , "-z", {"url= http://127.0.0.1:~B/pools/default/saslBucketsStreaming ", [ {misc,this_node_rest_port,[]} ]}, "-p","0","-Y","y","-O","stderr", {"~s",[verbosity]}], [{env,[{"EVENT_NOSELECT","1"}, {"MOXI_SASL_PLAIN_USR",{"~s", [{ns_moxi_sup,rest_user,[]}] }}, {"MOXI_SASL_PLAIN_PWD",{"~s", [{ns_moxi_sup,rest_pass,[]}] }}]}, use_stdio,exit_status,port_server_send_eol,stderr_to_stdout,stream]}, {memcached,"/opt/couchbase/bin/memcached", ["-X","/opt/couchbase/lib/memcached/stdin_term_handler.so","-X", {"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=~B;sleeptime=~B;filename=~s/~s", [log_cyclesize,log_sleeptime,log_path,log_prefix]}, "-l", {"0.0.0.0:~B,0.0.0.0:~B:1000",[port,dedicated_port]}, "-p", {"~B",[port]}, "-E","/opt/couchbase/lib/memcached/bucket_engine.so","-B", "binary","-r","-c","10000","-e", {"admin=~s;default_bucket_name=default;auto_create=false", [admin_user]}, {"~s",[verbosity]} 
], [{env,[ {"EVENT_NOSELECT","1"} , {"MEMCACHED_TOP_KEYS","100"} , {"ISASL_PWFILE",{"~s",[ {isasl,path} ]}}, {"ISASL_DB_CHECK_TIME","1"} ]}, use_stdio,stderr_to_stdout,exit_status,port_server_send_eol, stream]}] [ns_server:debug,2013-01-29T5:17:05.866,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_info:225] config pull_and_push done. [ns_server:debug,2013-01-29T5:17:06.127,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:06.127,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:06.276,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:06.279,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:07.119,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:07.119,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:08.582,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:08.584,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:08.771,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. 
[ns_server:debug,2013-01-29T5:17:08.772,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:08.912,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:handle_info:132] doing replicate_newnodes_docs [ns_server:debug,2013-01-29T5:17:08.913,ns_1@10.5.2.11:xdc_rdoc_replication_srv<0.6706.0>:xdc_rdoc_replication_srv:replicate_change_to_node:160] Sending _design/_replicator_info to ns_1@10.3.121.107 [ns_server:debug,2013-01-29T5:17:09.110,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:09.112,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:09.189,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:09.190,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:09.601,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:09.602,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [cluster:debug,2013-01-29T5:17:11.735,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:do_add_node_engaged_inner:478] Reply from complete_join on "10.3.121.107:8091": {ok,[]} [cluster:debug,2013-01-29T5:17:11.735,ns_1@10.5.2.11:ns_cluster<0.6434.0>:ns_cluster:handle_call:121] add_node("10.3.121.107", 8091, ..) 
-> {ok,'ns_1@10.3.121.107'} [ns_server:debug,2013-01-29T5:17:11.759,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.5.2.15',membership} -> active [ns_server:debug,2013-01-29T5:17:11.760,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317] Replicating some config keys ([{node,'ns_1@10.3.121.106',membership}, {node,'ns_1@10.3.121.107',membership}, {node,'ns_1@10.5.2.11',membership}, {node,'ns_1@10.5.2.13',membership}, {node,'ns_1@10.5.2.14',membership}, {node,'ns_1@10.5.2.15',membership} ]..) [user:info,2013-01-29T5:17:11.759,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:idle:492] Starting rebalance, KeepNodes = ['ns_1@10.5.2.14','ns_1@10.3.121.106', 'ns_1@10.3.121.107','ns_1@10.5.2.13', 'ns_1@10.5.2.11','ns_1@10.5.2.15'], EjectNodes = [] [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.5.2.11',membership} -> active [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.5.2.13',membership} -> active [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.107',membership} -> active [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.3.121.106',membership} -> active [ns_server:debug,2013-01-29T5:17:11.772,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: {node,'ns_1@10.5.2.14',membership} -> active [ns_server:debug,2013-01-29T5:17:11.774,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317] Replicating some config keys ( [counters,rebalance_status,rebalancer_pid] ..) 
[ns_server:debug,2013-01-29T5:17:11.783,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: counters -> [ {rebalance_start,24},{rebalance_success,23}] [ns_server:debug,2013-01-29T5:17:11.793,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_call:113] Got full synchronization request from 'ns_1@10.5.2.11' [ns_server:debug,2013-01-29T5:17:11.793,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:handle_call:119] Fully synchronized config in 11 us [ns_server:debug,2013-01-29T5:17:11.802,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: rebalancer_pid -> <0.28549.1> [ns_server:debug,2013-01-29T5:17:11.810,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: rebalance_status -> running [ns_server:debug,2013-01-29T5:17:11.815,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:11.815,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:12.660,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:12.661,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:13.202,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:13.203,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:13.991,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. 
Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:13.992,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:14.325,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:14.326,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:info,2013-01-29T5:17:14.714,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:handle_info:282] Skipping janitor in state rebalancing: {rebalancing_state,<0.28549.1>, {dict,0,16,16,8,80,48, {[],[],[],[],[],[],[],[],[],[],[],[], [],[],[],[]}, {{[],[],[],[],[],[],[],[],[],[],[], [],[],[],[],[]}}}, ['ns_1@10.5.2.14','ns_1@10.3.121.106', 'ns_1@10.3.121.107','ns_1@10.5.2.13', 'ns_1@10.5.2.11','ns_1@10.5.2.15'], [],[]} [ns_server:debug,2013-01-29T5:17:14.741,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:14.743,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:15.600,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:15.601,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:16.128,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. 
[ns_server:debug,2013-01-29T5:17:16.131,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:16.445,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:16.446,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:16.807,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:16.808,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:17.095,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:17.096,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. Next run will be in 30s [ns_server:debug,2013-01-29T5:17:17.697,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:handle_info:386] No buckets to compact. Rescheduling compaction. [ns_server:debug,2013-01-29T5:17:17.698,ns_1@10.5.2.11:compaction_daemon<0.6702.0>:compaction_daemon:schedule_next_compaction:1406] Finished compaction too soon. 
Next run will be in 30s [ns_server:error,2013-01-29T5:17:18.152,ns_1@10.5.2.11:<0.28549.1>:ns_config_rep:synchronize_remote:287] Failed to synchronize config to some nodes: ['ns_1@10.3.121.107'] [user:info,2013-01-29T5:17:18.152,ns_1@10.5.2.11:<0.6628.0>:ns_orchestrator:handle_info:319] Rebalance exited with reason {pre_rebalance_config_synchronization_failed, ['ns_1@10.3.121.107']} [ns_server:info,2013-01-29T5:17:18.156,ns_1@10.5.2.11:<0.28596.1>:diag_handler:log_all_tap_and_checkpoint_stats:127] logging tap & checkpoint stats [ns_server:debug,2013-01-29T5:17:18.162,ns_1@10.5.2.11:ns_config_rep<0.6601.0>:ns_config_rep:do_push_keys:317] Replicating some config keys ( [counters,rebalance_status,rebalancer_pid] ..) [ns_server:info,2013-01-29T5:17:18.165,ns_1@10.5.2.11:<0.28596.1>:diag_handler:log_all_tap_and_checkpoint_stats:131] end of logging tap & checkpoint stats [ns_server:debug,2013-01-29T5:17:18.174,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: counters -> [{rebalance_fail,1},{rebalance_start,24} , {rebalance_success,23} ] [ns_server:debug,2013-01-29T5:17:18.191,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: rebalancer_pid -> undefined [ns_server:debug,2013-01-29T5:17:18.202,ns_1@10.5.2.11:ns_config_log<0.6583.0>:ns_config_log:log_common:111] config change: rebalance_status -> {none,<<"Rebalance failed. See logs for detailed reason. You can try rebalance again.">>}
        Aliaksey Artamonau added a comment - http://review.couchbase.org/24360

          People

          • Assignee: Aliaksey Artamonau
          • Reporter: Andrei Baranouski
          • Votes: 0
          • Watchers: 2

          Gerrit Reviews

          There are no open Gerrit changes