Details
Type: Bug
Resolution: Fixed
Priority: Blocker
Version: 2.0-beta
Security Level: Public
Description
2.0.0-1700-rel
http://qa.hq.northscale.net/job/centos-64-2.0-rebalance-regressions/31/consoleFull
./testrunner -i /tmp/rebalance_regression.ini get-logs=True -t rebalancetests.IncrementalRebalanceInTests.test_load,replica=1,delete-ratio=0.6,expiry-ratio=0.2,keys-count=1000000
[user:info,2012-09-09T7:38:31.658,ns_1@10.3.121.92:'ns_memcached-bucket-0':ns_memcached:terminate:656]Control connection to memcached on 'ns_1@10.3.121.92' disconnected: {{badmatch,
{error,
closed}},
[...]}
[error_logger:error,2012-09-09T7:38:31.658,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: erlang:apply/2
pid: <0.32623.216>
registered_name: []
exception error: no match of right hand side value {error,closed}
in function mc_binary:quick_stats_recv/3
in call from mc_binary:quick_stats_loop/5
in call from mc_binary:quick_stats/5
in call from ns_memcached:do_handle_call/3
in call from ns_memcached:worker_loop/3
ancestors: ['ns_memcached-bucket-0','single_bucket_sup-bucket-0',
<0.32599.216>]
messages: []
links: [<0.32620.216>]
dictionary: []
trap_exit: false
status: running
heap_size: 46368
stack_size: 24
reductions: 49193344
neighbours:
[user:info,2012-09-09T7:38:31.658,ns_1@10.3.121.92:ns_port_memcached:ns_port_server:handle_info:107]Port server memcached on node 'ns_1@10.3.121.92' exited with status 139. Restarting. Messages: Sun Sep 9 14:38:20.555057 3: Schedule cleanup of "eq_tapq:rebalance_594"
Sun Sep 9 14:38:20.556030 3: TAP (Producer) eq_tapq:replication_building_594_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:20.582064 3: TAP (Producer) eq_tapq:rebalance_594 - Clear the tap queues by force
Sun Sep 9 14:38:20.622910 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 595,
Sun Sep 9 14:38:20.623481 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 595
Sun Sep 9 14:38:20.718312 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds
Sun Sep 9 14:38:20.853729 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:20.853858 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:20.853881 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_VBUCKET_SET with vbucket 595 and state "pending"
Sun Sep 9 14:38:20.874955 3: TAP (Producer) eq_tapq:rebalance_595 - VBucket <595> is going dead to complete vbucket takeover.
Sun Sep 9 14:38:20.874981 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_VBUCKET_SET with vbucket 595 and state "active"
Sun Sep 9 14:38:20.876570 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_595>
Sun Sep 9 14:38:20.876632 3: TAP (Producer) eq_tapq:rebalance_595 - disconnected
Sun Sep 9 14:38:20.881370 3: Schedule cleanup of "eq_tapq:rebalance_595"
Sun Sep 9 14:38:20.883018 3: TAP (Producer) eq_tapq:rebalance_595 - Clear the tap queues by force
Sun Sep 9 14:38:20.890833 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Connection is closed by force.
Sun Sep 9 14:38:21.311618 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 596
Sun Sep 9 14:38:21.311752 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:21.311781 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:21.311799 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "initial_vbucket_stream" and vbucket 596
Sun Sep 9 14:38:21.312602 3: Schedule cleanup of "eq_tapq:anon_23683"
Sun Sep 9 14:38:21.314258 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:21.468165 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 596,
Sun Sep 9 14:38:21.470597 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 596
Sun Sep 9 14:38:21.639197 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds
Sun Sep 9 14:38:21.775106 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:21.775140 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:21.776101 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_VBUCKET_SET with vbucket 596 and state "pending"
Sun Sep 9 14:38:21.788389 3: TAP (Producer) eq_tapq:rebalance_596 - VBucket <596> is going dead to complete vbucket takeover.
Sun Sep 9 14:38:21.788460 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_VBUCKET_SET with vbucket 596 and state "active"
Sun Sep 9 14:38:21.789782 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_596>
Sun Sep 9 14:38:21.789878 3: TAP (Producer) eq_tapq:rebalance_596 - disconnected
Sun Sep 9 14:38:21.803562 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Connection is closed by force.
Sun Sep 9 14:38:21.805133 3: Schedule cleanup of "eq_tapq:rebalance_596"
Sun Sep 9 14:38:21.805227 3: TAP (Producer) eq_tapq:rebalance_596 - Clear the tap queues by force
Sun Sep 9 14:38:22.174541 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 597
Sun Sep 9 14:38:22.174902 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:22.174989 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:22.175142 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "initial_vbucket_stream" and vbucket 597
Sun Sep 9 14:38:22.176108 3: Schedule cleanup of "eq_tapq:anon_23684"
Sun Sep 9 14:38:22.176264 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:22.280135 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 597,
Sun Sep 9 14:38:22.280417 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 597
Sun Sep 9 14:38:22.463514 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds
Sun Sep 9 14:38:22.651163 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:22.651189 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:22.651200 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_VBUCKET_SET with vbucket 597 and state "pending"
Sun Sep 9 14:38:22.712049 3: TAP (Producer) eq_tapq:rebalance_597 - VBucket <597> is going dead to complete vbucket takeover.
Sun Sep 9 14:38:22.712071 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_VBUCKET_SET with vbucket 597 and state "active"
Sun Sep 9 14:38:22.716160 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_597>
Sun Sep 9 14:38:22.716837 3: TAP (Producer) eq_tapq:rebalance_597 - disconnected
Sun Sep 9 14:38:22.728137 3: Schedule cleanup of "eq_tapq:rebalance_597"
Sun Sep 9 14:38:22.728606 3: TAP (Producer) eq_tapq:rebalance_597 - Clear the tap queues by force
Sun Sep 9 14:38:22.748018 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Connection is closed by force.
Sun Sep 9 14:38:23.156533 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 598
Sun Sep 9 14:38:23.156585 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:23.156599 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:23.156609 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "initial_vbucket_stream" and vbucket 598
Sun Sep 9 14:38:23.156893 3: Schedule cleanup of "eq_tapq:anon_23685"
Sun Sep 9 14:38:23.157362 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:23.212888 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 598,
Sun Sep 9 14:38:23.212954 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 598
Sun Sep 9 14:38:23.384437 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds
Sun Sep 9 14:38:23.497782 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:23.497937 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:23.498040 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_VBUCKET_SET with vbucket 598 and state "pending"
Sun Sep 9 14:38:23.523915 3: TAP (Producer) eq_tapq:rebalance_598 - VBucket <598> is going dead to complete vbucket takeover.
Sun Sep 9 14:38:23.523939 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_VBUCKET_SET with vbucket 598 and state "active"
Sun Sep 9 14:38:23.613372 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_598>
Sun Sep 9 14:38:23.613612 3: TAP (Producer) eq_tapq:rebalance_598 - disconnected
Sun Sep 9 14:38:23.626782 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Connection is closed by force.
Sun Sep 9 14:38:23.954691 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 599
Sun Sep 9 14:38:23.955257 3: Schedule cleanup of "eq_tapq:anon_23686"
Sun Sep 9 14:38:23.955303 3: Schedule cleanup of "eq_tapq:rebalance_598"
Sun Sep 9 14:38:23.955779 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:23.955836 3: TAP (Producer) eq_tapq:rebalance_598 - Clear the tap queues by force
Sun Sep 9 14:38:23.955887 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:23.955906 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:23.955951 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "initial_vbucket_stream" and vbucket 599
Sun Sep 9 14:38:23.989855 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 599,
Sun Sep 9 14:38:23.989938 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 599
Sun Sep 9 14:38:24.094300 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds
Sun Sep 9 14:38:24.235823 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:24.235848 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:24.235859 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_VBUCKET_SET with vbucket 599 and state "pending"
Sun Sep 9 14:38:24.252356 3: TAP (Producer) eq_tapq:rebalance_599 - VBucket <599> is going dead to complete vbucket takeover.
Sun Sep 9 14:38:24.252389 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_VBUCKET_SET with vbucket 599 and state "active"
Sun Sep 9 14:38:24.260124 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_599>
Sun Sep 9 14:38:24.260222 3: TAP (Producer) eq_tapq:rebalance_599 - disconnected
Sun Sep 9 14:38:24.281377 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Connection is closed by force.
Sun Sep 9 14:38:24.726076 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 600
Sun Sep 9 14:38:24.726140 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
Sun Sep 9 14:38:24.726163 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
Sun Sep 9 14:38:24.726239 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "initial_vbucket_stream" and vbucket 600
Sun Sep 9 14:38:24.726440 3: Schedule cleanup of "eq_tapq:anon_23687"
Sun Sep 9 14:38:24.726476 3: Schedule cleanup of "eq_tapq:rebalance_599"
Sun Sep 9 14:38:24.727312 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Clear the tap queues by force
Sun Sep 9 14:38:24.764938 3: TAP (Producer) eq_tapq:rebalance_599 - Clear the tap queues by force
Sun Sep 9 14:38:24.952566 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 600,
Sun Sep 9 14:38:24.952642 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command "close_backfill" and vbucket 600
[error_logger:error,2012-09-09T7:38:31.669,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: erlang:apply/2
pid: <0.32624.216>
registered_name: []
exception error: no match of right hand side value {error,closed}
in function mc_client_binary:cmd_binary_vocal_recv/5
in call from mc_client_binary:get_vbucket/2
in call from ns_memcached:do_handle_call/3
in call from ns_memcached:worker_loop/3
ancestors: ['ns_memcached-bucket-0','single_bucket_sup-bucket-0',
<0.32599.216>]
messages: []
links: [<0.32620.216>,#Port<0.8495132>]
dictionary: []
trap_exit: false
status: running
heap_size: 75025
stack_size: 24
reductions: 214459846
neighbours:
[ns_server:info,2012-09-09T7:38:31.671,ns_1@10.3.121.92:<0.2237.99>:supervisor_cushion:handle_info:62]Cushion managed supervisor for memcached failed:
{abnormal,139}
[rebalance:warn,2012-09-09T7:38:31.673,ns_1@10.3.121.92:<0.28874.232>:ebucketmigrator_srv:do_confirm_sent_messages:662]Got error while trying to read close ack:{error,closed}
[error_logger:error,2012-09-09T7:38:31.680,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: erlang:apply/2
pid: <0.32625.216>
registered_name: []
exception error: no match of right hand side value {error,closed}
in function mc_binary:quick_stats_recv/3
in call from mc_binary:quick_stats_loop/5
in call from mc_binary:quick_stats/5
in call from ns_memcached:do_handle_call/3
in call from ns_memcached:worker_loop/3
ancestors: ['ns_memcached-bucket-0','single_bucket_sup-bucket-0',
<0.32599.216>]
messages: []
links: [<0.32620.216>]
dictionary: []
trap_exit: false
status: running
heap_size: 46368
stack_size: 24
reductions: 186836391
neighbours:
[error_logger:error,2012-09-09T7:38:31.681,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_msg:76]** Generic server 'ns_memcached-bucket-0' terminating
** Last message in was check_config
** When Server state == {state,3,0,0,
{[],[]},
{[],[]},
{[],[]},
warmed,
{1347,199568,893848},
"bucket-0",#Port<0.8495126>,
{interval,#Ref<0.0.1024.250643>},
[{<0.32626.216>,#Ref<0.0.1063.32105>}]}
** Reason for termination ==
** {{badmatch,{error,closed}},
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}
[error_logger:error,2012-09-09T7:38:31.683,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: ns_memcached:init/1
pid: <0.32620.216>
registered_name: 'ns_memcached-bucket-0'
exception exit: {{badmatch,{error,closed}},
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}
in function gen_server:terminate/6
ancestors: ['single_bucket_sup-bucket-0',<0.32599.216>]
messages: [{'$gen_call',{<0.2195.99>,#Ref<0.0.1072.132199>},connected},
{'EXIT',<0.32623.216>,{badmatch,{error,closed}}},
{'EXIT',<0.32624.216>,{badmatch,{error,closed}}},
{'EXIT',<0.32625.216>,{badmatch,{error,closed}}}]
links: [<0.32600.216>,<0.32626.216>,<0.57.0>]
dictionary: []
trap_exit: true
status: running
heap_size: 317811
stack_size: 24
reductions: 3227321
neighbours:
neighbour: [{pid,<0.32626.216>},
{registered_name,[]},
{initial_call,{erlang,apply,['Argument__1','Argument__2']}},
{current_function,{gen,do_call,4}},
{ancestors,['ns_memcached-bucket-0', 'single_bucket_sup-bucket-0',<0.32599.216>]},
{messages,[]},
{links,[<0.32620.216>,#Port<0.8495130>]},
{dictionary,[]},
{trap_exit,false},
{status,waiting},
{heap_size,75025},
{stack_size,24},
{reductions,109598496}]
[error_logger:error,2012-09-09T7:38:31.683,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.2239.99> terminating
** Last message in was {#Port<0.2697355>,{exit_status,139}}
** When Server state == {state,#Port<0.2697355>,memcached,
{["Sun Sep 9 14:38:24.952642 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 600", "Sun Sep 9 14:38:24.952566 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 600, ", "Sun Sep 9 14:38:24.764938 3: TAP (Producer) eq_tapq:rebalance_599 - Clear the tap queues by force", "Sun Sep 9 14:38:24.727312 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:24.726476 3: Schedule cleanup of \"eq_tapq:rebalance_599\"", "Sun Sep 9 14:38:24.726440 3: Schedule cleanup of \"eq_tapq:anon_23687\"", "Sun Sep 9 14:38:24.726239 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"initial_vbucket_stream\" and vbucket 600", "Sun Sep 9 14:38:24.726163 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:24.726140 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:24.726076 3: TAP (Producer) eq_tapq:replication_building_600_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 600", "Sun Sep 9 14:38:24.281377 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Connection is closed by force.", "Sun Sep 9 14:38:24.260222 3: TAP (Producer) eq_tapq:rebalance_599 - disconnected", "Sun Sep 9 14:38:24.260124 3: TAP takeover is completed. 
Disconnecting tap stream <eq_tapq:rebalance_599>", "Sun Sep 9 14:38:24.252389 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_VBUCKET_SET with vbucket 599 and state \"active\"", "Sun Sep 9 14:38:24.252356 3: TAP (Producer) eq_tapq:rebalance_599 - VBucket <599> is going dead to complete vbucket takeover.", "Sun Sep 9 14:38:24.235859 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_VBUCKET_SET with vbucket 599 and state \"pending\"", "Sun Sep 9 14:38:24.235848 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:24.235823 3: TAP (Producer) eq_tapq:rebalance_599 - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:24.094300 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds", "Sun Sep 9 14:38:23.989938 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 599", "Sun Sep 9 14:38:23.989855 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 599, ", "Sun Sep 9 14:38:23.955951 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"initial_vbucket_stream\" and vbucket 599", "Sun Sep 9 14:38:23.955906 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:23.955887 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:23.955836 3: TAP (Producer) eq_tapq:rebalance_598 - Clear the tap queues by force", "Sun Sep 9 14:38:23.955779 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:23.955303 3: Schedule cleanup of 
\"eq_tapq:rebalance_598\"", "Sun Sep 9 14:38:23.955257 3: Schedule cleanup of \"eq_tapq:anon_23686\"", "Sun Sep 9 14:38:23.954691 3: TAP (Producer) eq_tapq:replication_building_599_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 599", "Sun Sep 9 14:38:23.626782 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Connection is closed by force.", "Sun Sep 9 14:38:23.613612 3: TAP (Producer) eq_tapq:rebalance_598 - disconnected", "Sun Sep 9 14:38:23.613372 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_598>", "Sun Sep 9 14:38:23.523939 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_VBUCKET_SET with vbucket 598 and state \"active\"", "Sun Sep 9 14:38:23.523915 3: TAP (Producer) eq_tapq:rebalance_598 - VBucket <598> is going dead to complete vbucket takeover.", "Sun Sep 9 14:38:23.498040 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_VBUCKET_SET with vbucket 598 and state \"pending\"", "Sun Sep 9 14:38:23.497937 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:23.497782 3: TAP (Producer) eq_tapq:rebalance_598 - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:23.384437 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds", "Sun Sep 9 14:38:23.212954 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 598", "Sun Sep 9 14:38:23.212888 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 598, ", "Sun Sep 9 14:38:23.157362 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:23.156893 3: Schedule cleanup of \"eq_tapq:anon_23685\"", "Sun Sep 9 14:38:23.156609 3: TAP (Producer) 
eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"initial_vbucket_stream\" and vbucket 598", "Sun Sep 9 14:38:23.156599 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:23.156585 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:23.156533 3: TAP (Producer) eq_tapq:replication_building_598_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 598", "Sun Sep 9 14:38:22.748018 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Connection is closed by force.", "Sun Sep 9 14:38:22.728606 3: TAP (Producer) eq_tapq:rebalance_597 - Clear the tap queues by force", "Sun Sep 9 14:38:22.728137 3: Schedule cleanup of \"eq_tapq:rebalance_597\"", "Sun Sep 9 14:38:22.716837 3: TAP (Producer) eq_tapq:rebalance_597 - disconnected", "Sun Sep 9 14:38:22.716160 3: TAP takeover is completed. 
Disconnecting tap stream <eq_tapq:rebalance_597>", "Sun Sep 9 14:38:22.712071 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_VBUCKET_SET with vbucket 597 and state \"active\"", "Sun Sep 9 14:38:22.712049 3: TAP (Producer) eq_tapq:rebalance_597 - VBucket <597> is going dead to complete vbucket takeover.", "Sun Sep 9 14:38:22.651200 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_VBUCKET_SET with vbucket 597 and state \"pending\"", "Sun Sep 9 14:38:22.651189 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0"], ["Sun Sep 9 14:38:20.555057 3: Schedule cleanup of \"eq_tapq:rebalance_594\"", "Sun Sep 9 14:38:20.556030 3: TAP (Producer) eq_tapq:replication_building_594_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:20.582064 3: TAP (Producer) eq_tapq:rebalance_594 - Clear the tap queues by force", "Sun Sep 9 14:38:20.622910 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 595, ", "Sun Sep 9 14:38:20.623481 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 595", "Sun Sep 9 14:38:20.718312 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds", "Sun Sep 9 14:38:20.853729 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:20.853858 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:20.853881 3: TAP (Producer) eq_tapq:rebalance_595 - Sending TAP_VBUCKET_SET with vbucket 595 and state \"pending\"", "Sun Sep 9 14:38:20.874955 3: TAP (Producer) eq_tapq:rebalance_595 - VBucket <595> is going dead to complete vbucket takeover.", "Sun Sep 9 14:38:20.874981 3: TAP (Producer) eq_tapq:rebalance_595 - Sending 
TAP_VBUCKET_SET with vbucket 595 and state \"active\"", "Sun Sep 9 14:38:20.876570 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_595>", "Sun Sep 9 14:38:20.876632 3: TAP (Producer) eq_tapq:rebalance_595 - disconnected", "Sun Sep 9 14:38:20.881370 3: Schedule cleanup of \"eq_tapq:rebalance_595\"", "Sun Sep 9 14:38:20.883018 3: TAP (Producer) eq_tapq:rebalance_595 - Clear the tap queues by force", "Sun Sep 9 14:38:20.890833 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Connection is closed by force.", "Sun Sep 9 14:38:21.311618 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 596", "Sun Sep 9 14:38:21.311752 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:21.311781 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:21.311799 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"initial_vbucket_stream\" and vbucket 596", "Sun Sep 9 14:38:21.312602 3: Schedule cleanup of \"eq_tapq:anon_23683\"", "Sun Sep 9 14:38:21.314258 3: TAP (Producer) eq_tapq:replication_building_595_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:21.468165 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 596, ", "Sun Sep 9 14:38:21.470597 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 596", "Sun Sep 9 14:38:21.639197 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds", "Sun Sep 9 14:38:21.775106 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_OPAQUE with command 
\"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:21.775140 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:21.776101 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_VBUCKET_SET with vbucket 596 and state \"pending\"", "Sun Sep 9 14:38:21.788389 3: TAP (Producer) eq_tapq:rebalance_596 - VBucket <596> is going dead to complete vbucket takeover.", "Sun Sep 9 14:38:21.788460 3: TAP (Producer) eq_tapq:rebalance_596 - Sending TAP_VBUCKET_SET with vbucket 596 and state \"active\"", "Sun Sep 9 14:38:21.789782 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_596>", "Sun Sep 9 14:38:21.789878 3: TAP (Producer) eq_tapq:rebalance_596 - disconnected", "Sun Sep 9 14:38:21.803562 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Connection is closed by force.", "Sun Sep 9 14:38:21.805133 3: Schedule cleanup of \"eq_tapq:rebalance_596\"", "Sun Sep 9 14:38:21.805227 3: TAP (Producer) eq_tapq:rebalance_596 - Clear the tap queues by force", "Sun Sep 9 14:38:22.174541 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Schedule the backfill for vbucket 597", "Sun Sep 9 14:38:22.174902 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0", "Sun Sep 9 14:38:22.174989 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"enable_checkpoint_sync\" and vbucket 0", "Sun Sep 9 14:38:22.175142 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"initial_vbucket_stream\" and vbucket 597", "Sun Sep 9 14:38:22.176108 3: Schedule cleanup of \"eq_tapq:anon_23684\"", "Sun Sep 9 14:38:22.176264 3: TAP (Producer) eq_tapq:replication_building_596_'ns_1@10.3.121.94' - Clear the tap queues by force", "Sun Sep 9 14:38:22.280135 3: TAP 
(Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Backfill is completed with VBuckets 597, ", "Sun Sep 9 14:38:22.280417 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - Sending TAP_OPAQUE with command \"close_backfill\" and vbucket 597", "Sun Sep 9 14:38:22.463514 3: TAP (Producer) eq_tapq:replication_building_597_'ns_1@10.3.121.94' - disconnected, keep alive for 300 seconds", "Sun Sep 9 14:38:22.651163 3: TAP (Producer) eq_tapq:rebalance_597 - Sending TAP_OPAQUE with command \"opaque_enable_auto_nack\" and vbucket 0"]},
undefined,[],0,true}
** Reason for termination ==
** {abnormal,139}
[error_logger:error,2012-09-09T7:38:31.689,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: ns_port_server:init/1
pid: <0.2239.99>
registered_name: ns_port_memcached
exception exit: {abnormal,139}
in function gen_server:terminate/6
ancestors: [<0.2237.99>,ns_port_sup,ns_server_sup,ns_server_cluster_sup,
<0.60.0>]
messages: [{'EXIT',#Port<0.2697355>,normal}]
links: [<0.2237.99>]
dictionary: []
trap_exit: true
status: running
heap_size: 121393
stack_size: 24
reductions: 26128832
neighbours:
[ns_server:error,2012-09-09T7:38:31.691,ns_1@10.3.121.92:ns_doctor:ns_doctor:update_status:204]The following buckets became not ready on node 'ns_1@10.3.121.92': ["bucket-0"], those of them are active ["bucket-0"]
[ns_server:debug,2012-09-09T7:38:31.691,ns_1@10.3.121.92:<0.29688.232>:supervisor_cushion:init:43]starting ns_port_server with delay of 5000
[error_logger:error,2012-09-09T7:38:31.692,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,'single_bucket_sup-bucket-0'}
Context: child_terminated
Reason: {badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}
Offender: [{pid,<0.32620.216>},
{name,{ns_memcached,"bucket-0"}},
{mfargs,{ns_memcached,start_link,["bucket-0"]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[stats:error,2012-09-09T7:38:31.675,ns_1@10.3.121.92:<0.32632.216>:stats_collector:handle_info:106]Exception in stats collector: {exit,
{badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
['ns_memcached-bucket-0',
{stats,<<>>},
60000]}},
[{gen_server,call,3},
{ns_memcached,do_call,3},
{stats_collector,grab_all_stats,1},
{stats_collector,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]}
[couchdb:info,2012-09-09T7:38:31.694,ns_1@10.3.121.92:<0.29683.232>:couch_log:info:39]Native compactor output: Compacted /opt/couchbase/var/lib/couchbase/data/bucket-0/328.couch.27 -> /opt/co
[error_logger:error,2012-09-09T7:38:31.695,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.2237.99> terminating
** Last message in was {die,{abnormal,139}}
** When Server state == {state,memcached,5000,
{1347,150861,354415},
undefined,60000}
** Reason for termination ==
** {abnormal,139}
[couchdb:info,2012-09-09T7:38:31.695,ns_1@10.3.121.92:<0.29683.232>:couch_log:info:39]Native compactor output: uchbase/var/lib/couchbase/data/bucket-0/328.couch.27.compact
[error_logger:error,2012-09-09T7:38:31.696,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: supervisor_cushion:init/1
pid: <0.2237.99>
registered_name: []
exception exit: {abnormal,139}
in function gen_server:terminate/6
ancestors: [ns_port_sup,ns_server_sup,ns_server_cluster_sup,<0.60.0>]
messages: []
links: [<0.2231.99>]
dictionary: []
trap_exit: true
status: running
heap_size: 1597
stack_size: 24
reductions: 1677
neighbours:
[error_logger:error,2012-09-09T7:38:31.697,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,ns_port_sup}
Context: child_terminated
Reason: {abnormal,139}
Offender: [
{pid,<0.2237.99>},
{name,
{memcached,"/opt/couchbase/bin/memcached",
["-X",
"/opt/couchbase/lib/memcached/stdin_term_handler.so",
"-X",
"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=104857600;sleeptime=19;filename=/opt/couchbase/var/lib/couchbase/logs/memcached.log",
"-l","0.0.0.0:11210,0.0.0.0:11209:1000","-p",
"11210","-E",
"/opt/couchbase/lib/memcached/bucket_engine.so",
"-B","binary","-r","-c","10000","-e",
"admin=_admin;default_bucket_name=default;auto_create=false",
[]],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","100"},
{"ISASL_PWFILE", "/opt/couchbase/var/lib/couchbase/data/isasl.pw"},
{"ISASL_DB_CHECK_TIME","1"}]},
use_stdio,stderr_to_stdout,exit_status,
port_server_send_eol,stream]}},
{mfargs,
{erlang,apply,
[#Fun<ns_port_sup.3.119727222>,
[memcached,"/opt/couchbase/bin/memcached",
["-X",
"/opt/couchbase/lib/memcached/stdin_term_handler.so",
"-X",
"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=104857600;sleeptime=19;filename=/opt/couchbase/var/lib/couchbase/logs/memcached.log",
"-l","0.0.0.0:11210,0.0.0.0:11209:1000","-p",
"11210","-E",
"/opt/couchbase/lib/memcached/bucket_engine.so",
"-B","binary","-r","-c","10000","-e",
"admin=_admin;default_bucket_name=default;auto_create=false",
[]],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","100"},
{"ISASL_PWFILE", "/opt/couchbase/var/lib/couchbase/data/isasl.pw"},
{"ISASL_DB_CHECK_TIME","1"}]},
use_stdio,stderr_to_stdout,exit_status,
port_server_send_eol,stream]]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[ns_server:error,2012-09-09T7:38:31.698,ns_1@10.3.121.92:<0.28893.232>:misc:sync_shutdown_many_i_am_trapping_exits:1373]Shutdown of the following failed: [{<0.28907.232>,
{badmatch,{error,econnrefused}}}]
[ns_server:warn,2012-09-09T7:38:31.699,ns_1@10.3.121.92:'ns_memcached-bucket-0':ns_memcached:connect:1059]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[ns_server:error,2012-09-09T7:38:31.699,ns_1@10.3.121.92:<0.28893.232>:misc:try_with_maybe_ignorant_after:1409]Eating exception from ignorant after-block:
{error,{badmatch,[{<0.28907.232>,{badmatch,{error,econnrefused}}}]},
[{misc,sync_shutdown_many_i_am_trapping_exits,1},
{misc,try_with_maybe_ignorant_after,2},
{ns_single_vbucket_mover,mover,6},
{proc_lib,init_p_do_apply,3}]}
[error_logger:error,2012-09-09T7:38:31.699,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================SUPERVISOR REPORT=========================
Supervisor: {local,menelaus_sup}
Context: child_terminated
Reason: {badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,['ns_memcached-bucket-0',topkeys,60000]}}
Offender: [{pid,<0.2225.99>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[rebalance:error,2012-09-09T7:38:31.699,ns_1@10.3.121.92:<0.19689.229>:ns_vbucket_mover:handle_info:250]<0.28893.232> exited with {badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'ns_memcached-bucket-0','ns_1@10.3.121.92'},
{get_vbucket,600},
60000]}}
[error_logger:info,2012-09-09T7:38:31.700,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================PROGRESS REPORT=========================
supervisor: {local,menelaus_sup}
started: [{pid,<0.29693.232>},
{name,hot_keys_keeper},
{mfargs,{hot_keys_keeper,start_link,[]}},
{restart_type,permanent},
{shutdown,5000},
{child_type,worker}]
[couchdb:info,2012-09-09T7:38:31.702,ns_1@10.3.121.92:<0.29683.232>:couch_log:info:39]Native initial compact succeeded for "bucket-0/328"
[error_logger:info,2012-09-09T7:38:31.701,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================PROGRESS REPORT=========================
supervisor: {local,ns_port_sup}
started: [{pid,<0.29688.232>},
{name,
{memcached,"/opt/couchbase/bin/memcached",
["-X",
"/opt/couchbase/lib/memcached/stdin_term_handler.so",
"-X",
"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=104857600;sleeptime=19;filename=/opt/couchbase/var/lib/couchbase/logs/memcached.log",
"-l","0.0.0.0:11210,0.0.0.0:11209:1000","-p",
"11210","-E",
"/opt/couchbase/lib/memcached/bucket_engine.so",
"-B","binary","-r","-c","10000","-e",
"admin=_admin;default_bucket_name=default;auto_create=false",
[]],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","100"},
{"ISASL_PWFILE", "/opt/couchbase/var/lib/couchbase/data/isasl.pw"},
{"ISASL_DB_CHECK_TIME","1"}]},
use_stdio,stderr_to_stdout,exit_status,
port_server_send_eol,stream]}},
{mfargs,
{erlang,apply,
[#Fun<ns_port_sup.3.119727222>,
[memcached,"/opt/couchbase/bin/memcached",
["-X",
"/opt/couchbase/lib/memcached/stdin_term_handler.so",
"-X",
"/opt/couchbase/lib/memcached/file_logger.so,cyclesize=104857600;sleeptime=19;filename=/opt/couchbase/var/lib/couchbase/logs/memcached.log",
"-l","0.0.0.0:11210,0.0.0.0:11209:1000",
"-p","11210","-E",
"/opt/couchbase/lib/memcached/bucket_engine.so",
"-B","binary","-r","-c","10000","-e",
"admin=_admin;default_bucket_name=default;auto_create=false",
[]],
[{env,
[{"EVENT_NOSELECT","1"},
{"MEMCACHED_TOP_KEYS","100"},
{"ISASL_PWFILE", "/opt/couchbase/var/lib/couchbase/data/isasl.pw"},
{"ISASL_DB_CHECK_TIME","1"}
]},
use_stdio,stderr_to_stdout,exit_status,
port_server_send_eol,stream]]]}},
{restart_type,permanent},
{shutdown,86400000},
{child_type,worker}]
[error_logger:error,2012-09-09T7:38:31.703,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: erlang:apply/2
pid: <0.28907.232>
registered_name: []
exception error: no match of right hand side value
in function ns_replicas_builder_utils:kill_a_bunch_of_tap_names/3
in call from misc:try_with_maybe_ignorant_after/2
in call from ns_replicas_builder:build_replicas_main/6
ancestors: [<0.28893.232>,<0.19689.229>,<0.19650.229>]
messages: []
links: [<0.28893.232>]
dictionary: []
trap_exit: true
status: running
heap_size: 317811
stack_size: 24
reductions: 36953
neighbours:
[couchdb:info,2012-09-09T7:38:31.703,ns_1@10.3.121.92:<0.5231.217>:couch_log:info:39]CouchDB swapping files /opt/couchbase/var/lib/couchbase/data/bucket-0/328.couch.28 and /opt/couchbase/var/lib/couchbase/data/bucket-0/328.couch.27.compact.
[error_logger:error,2012-09-09T7:38:31.704,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_report:72]
=========================CRASH REPORT=========================
crasher:
initial call: ns_single_vbucket_mover:mover/6
pid: <0.28893.232>
registered_name: []
exception exit: {badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'ns_memcached-bucket-0','ns_1@10.3.121.92'},
{get_vbucket,600},
60000]}}
in function gen_server:call/3
in call from ns_memcached:do_call/3
in call from ns_single_vbucket_mover:run_mover/4
in call from ns_single_vbucket_mover:mover_inner_old_style/6
in call from misc:try_with_maybe_ignorant_after/2
in call from ns_single_vbucket_mover:mover/6
ancestors: [<0.19689.229>,<0.19650.229>]
messages: []
links: [<0.19689.229>]
dictionary: [{cleanup_list,[<0.28907.232>]}]
trap_exit: true
status: running
heap_size: 2584
stack_size: 24
reductions: 3986
neighbours:
[ns_server:info,2012-09-09T7:38:31.705,ns_1@10.3.121.92:'janitor_agent-bucket-0':janitor_agent:handle_info:646]Undoing temporary vbucket states caused by rebalance
[user:info,2012-09-09T7:38:31.706,ns_1@10.3.121.92:<0.2262.99>:ns_orchestrator:handle_info:295]Rebalance exited with reason {badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'ns_memcached-bucket-0', 'ns_1@10.3.121.92'},
{get_vbucket,600},
60000]}}
[ns_server:debug,2012-09-09T7:38:31.705,ns_1@10.3.121.92:<0.19693.229>:ns_pubsub:do_subscribe_link:134]Parent process of subscription {ns_node_disco_events,<0.19689.229>} exited with reason {{{badmatch,
{error,
closed}},
[{mc_client_binary, cmd_binary_vocal_recv, 5},
{mc_client_binary, select_bucket, 2},
{ns_memcached, ensure_bucket, 2},
{ns_memcached, handle_info, 2},
{gen_server, handle_msg, 5},
{proc_lib, init_p_do_apply, 3}]},
{gen_server,
call,
[{'ns_memcached-bucket-0', 'ns_1@10.3.121.92'},
{get_vbucket, 600},
60000]}}
[ns_server:debug,2012-09-09T7:38:31.706,ns_1@10.3.121.92:<0.19693.229>:ns_pubsub:do_subscribe_link:149]Deleting {ns_node_disco_events,<0.19689.229>} event handler: ok
[couchdb:info,2012-09-09T7:38:31.707,ns_1@10.3.121.92:<0.5231.217>:couch_log:info:39]Compaction for db "bucket-0/328" completed.
[ns_server:info,2012-09-09T7:38:31.708,ns_1@10.3.121.92:<0.30304.232>:compaction_daemon:spawn_vbucket_compactor:644]Compacting <<"bucket-0/329">>
[couchdb:info,2012-09-09T7:38:31.708,ns_1@10.3.121.92:<0.5224.217>:couch_log:info:39]Starting compaction for db "bucket-0/329"
[ns_server:debug,2012-09-09T7:38:31.711,ns_1@10.3.121.92:'capi_set_view_manager-bucket-0':capi_set_view_manager:handle_info:337]doing replicate_newnodes_docs
[ns_server:info,2012-09-09T7:38:31.714,ns_1@10.3.121.92:<0.30307.232>:diag_handler:log_all_tap_and_checkpoint_stats:126]logging tap & checkpoint stats
[ns_server:debug,2012-09-09T7:38:31.730,ns_1@10.3.121.92:ns_config_rep:ns_config_rep:do_push_keys:317]Replicating some config keys ([counters,rebalance_status,rebalancer_pid]..)
[ns_server:debug,2012-09-09T7:38:31.735,ns_1@10.3.121.92:ns_config_log:ns_config_log:log_common:111]config change:
counters ->
[{rebalance_fail,2},
{rebalance_start,33},
{rebalance_success,26},
{rebalance_stop,5}]
[ns_server:debug,2012-09-09T7:38:31.735,ns_1@10.3.121.92:'capi_set_view_manager-bucket-0':capi_set_view_manager:handle_info:337]doing replicate_newnodes_docs
[ns_server:debug,2012-09-09T7:38:31.748,ns_1@10.3.121.92:'capi_set_view_manager-bucket-0':capi_set_view_manager:handle_info:337]doing replicate_newnodes_docs
[couchdb:info,2012-09-09T7:38:31.748,ns_1@10.3.121.92:<0.30305.232>:couch_log:info:39]Native compactor output: Compacted /opt/couchbase/var/lib/couchbase/data/bucket-0/329.couch.27 -> /opt/co
[ns_server:debug,2012-09-09T7:38:31.757,ns_1@10.3.121.92:ns_config_log:ns_config_log:log_common:111]config change:
rebalancer_pid ->
undefined
[ns_server:debug,2012-09-09T7:38:31.757,ns_1@10.3.121.92:'capi_set_view_manager-bucket-0':capi_set_view_manager:handle_info:337]doing replicate_newnodes_docs
[error_logger:error,2012-09-09T7:38:31.705,ns_1@10.3.121.92:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.19689.229> terminating
** Last message in was {'EXIT',<0.28893.232>,
{badmatch,{error,closed,
[{mc_client_binary,cmd_binary_vocal_recv,5},
{mc_client_binary,select_bucket,2},
{ns_memcached,ensure_bucket,2},
{ns_memcached,handle_info,2},
{gen_server,handle_msg,5},
{proc_lib,init_p_do_apply,3}]},
{gen_server,call,
[{'ns_memcached-bucket-0','ns_1@10.3.121.92'},
{get_vbucket,600},
60000]}}}
** When Server state == {state,"bucket-0",<0.19693.229>,
{dict,2,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[], [],[],[]},
{{[],[],
[['ns_1@10.3.121.92'|1536]],
[],
[['ns_1@10.3.121.94'|512]],
[],[],[],[],[],[],[],[],[],[],[]}}},
1,
{array,1024,0,undefined,
{{{{['ns_1@10.3.121.92',