Couchbase Server / MB-7371

eacces error on Windows when creating directories or folding directory contents (was: [windows] couchdb process failed because of eacces and rebalance failed)


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: 2.0.1
    • Affects Version/s: 2.0
    • Component/s: storage-engine
    • Security Level: Public
    • Labels: None
    Description

      Test to reproduce:
      -t view.createdeleteview.CreateDeleteViewTests.rebalance_in_with_ddoc_ops,ddoc_ops=delete,test_with_view=True,nodes_in=3,num_ddocs=2,num_views_per_ddoc=3,items=200000

      Setup: 1 bucket, 2 development ddocs with 1 view each (default map function); the cluster is under load.
      Steps: rebalance from 1 to 4 nodes while trying to delete a ddoc; the rebalance fails.

      2012-12-05 03:18:19.886 ns_orchestrator:4:info:message(ns_1@10.3.3.38) - Starting rebalance, KeepNodes = ['ns_1@10.3.2.243','ns_1@10.3.2.239',
      'ns_1@10.3.3.39','ns_1@10.3.3.38'], EjectNodes = []
      ...
      [views:debug,2012-12-05T3:19:11.058,ns_1@10.3.3.38:capi_set_view_manager-default<0.26430.8>:capi_set_view_manager:apply_index_states:489]
      couch_set_view:mark_partitions_unindexable([<<"default">>,
      <<"_design/dev_ddoc1">>,
      [10,11,12,13,14,15,16,17,18]]) returned ok in 0ms
      [rebalance:info,2012-12-05T3:19:11.058,ns_1@10.3.3.38:<0.2085.9>:janitor_agent:get_replication_persistence_checkpoint_id:470]default: Doing get_replication_persistence_checkpoint_id call for vbucket 18 on ns_1@10.3.3.38
      [rebalance:info,2012-12-05T3:19:11.089,ns_1@10.3.3.38:<0.2103.9>:janitor_agent:wait_index_updated:459]default: Doing wait_index_updated call for ns_1@10.3.2.239 (vbucket 18)
      [ns_server:info,2012-12-05T3:19:11.401,ns_1@10.3.3.38:ns_port_memcached<0.436.0>:ns_port_server:log:171]memcached<0.436.0>: Wed Dec 05 03:19:11.122390 Pacific Standard Time 3: TAP (Producer) eq_tapq:replication_building_18_'ns_1@10.3.2.239' - disconnected, keep alive for 300 seconds
      memcached<0.436.0>: Wed Dec 05 03:19:11.278640 Pacific Standard Time 3: TAP (Producer) eq_tapq:replication_building_18_'ns_1@10.3.2.239' - Connection is closed by force.
      ...
      [ns_server:info,2012-12-05T3:19:11.401,ns_1@10.3.3.38:<0.2090.9>:ns_replicas_builder_utils:kill_a_bunch_of_tap_names:59]Killed the following tap names on 'ns_1@10.3.3.38': [<<"replication_building_18_'ns_1@10.3.2.239'">>]
      [ns_server:debug,2012-12-05T3:19:11.495,ns_1@10.3.3.38:<0.2085.9>:ns_single_vbucket_mover:spawn_ebucketmigrator_mover:283]Spawned mover "default" 18 'ns_1@10.3.3.38' -> 'ns_1@10.3.2.239': <0.2106.9>
      [ns_server:info,2012-12-05T3:19:11.573,ns_1@10.3.3.38:<0.2106.9>:ebucketmigrator_srv:init:492]Setting {"10.3.2.239",11209} vbucket 18 to state replica
      [couchdb:error,2012-12-05T3:19:11.917,ns_1@10.3.3.38:<0.1261.9>:couch_log:error:42]Set view `default`, main group `_design/dev_ddoc0`, terminating because linked PID <0.243.0> died with reason: killed
      [couchdb:error,2012-12-05T3:19:11.917,ns_1@10.3.3.38:<0.1238.9>:couch_log:error:42]Set view `default`, main group `_design/dev_ddoc1`, terminating because linked PID <0.243.0> died with reason: killed
      [error_logger:error,2012-12-05T3:19:11.917,ns_1@10.3.3.38:error_logger<0.6.0>:ale_error_logger_handler:log_report:72]
      =========================CRASH REPORT=========================
      crasher:
      initial call: couch_view:init/1
      pid: <0.2001.9>
      registered_name: []
      exception exit: badmatch,{error,eacces,
      [{couch_file,'-init_delete_dir/1-fun-0-',2},
      {filelib,do_fold_files2,8},
      {couch_view,init,1},
      {gen_server,init_it,6},
      {proc_lib,init_p_do_apply,3}]}
      in function gen_server:init_it/6
      ancestors: [couch_secondary_services,couch_server_sup,cb_couch_sup,
      ns_server_cluster_sup,<0.67.0>]
      messages: []
      links: [<0.230.0>,<0.2002.9>]
      dictionary: []
      trap_exit: true
      status: running
      heap_size: 1597
      stack_size: 24
      reductions: 5294
      neighbours:
      neighbour: [{pid,<0.2002.9>},
      {registered_name,[]},
      {initial_call,{couch_event_sup,init,['Argument__1']}},
      {current_function,{gen_server,loop,6}},
      {ancestors,[couch_view,couch_secondary_services, couch_server_sup,cb_couch_sup, ns_server_cluster_sup,<0.67.0>]},
      {messages,[]},
      {links,[<0.2001.9>,<0.224.0>]},
      {dictionary,[]},
      {trap_exit,false},
      {status,waiting},
      {heap_size,233},
      {stack_size,9},
      {reductions,32}]

      [error_logger:error,2012-12-05T3:19:12.089,ns_1@10.3.3.38:error_logger<0.6.0>:ale_error_logger_handler:log_report:72]
      =========================SUPERVISOR REPORT=========================
      Supervisor: {local,couch_secondary_services}
      Context: start_error
      Reason: badmatch,{error,eacces,
      [{couch_file,'-init_delete_dir/1-fun-0-',2},
      {filelib,do_fold_files2,8},
      {couch_view,init,1},
      {gen_server,init_it,6},
      {proc_lib,init_p_do_apply,3}]}
      Offender: [{pid,<0.253.0>},
      {name,view_manager},
      {mfargs,{couch_view,start_link,[]}},
      {restart_type,permanent},
      {shutdown,brutal_kill},
      {child_type,worker}]
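
      The stack trace points at the anonymous fun inside couch_file:init_delete_dir/1, which walks the .delete directory with filelib:fold_files/5 and pattern-matches ok against every file:delete/1. A minimal sketch of that shape (based on the trace, not necessarily the exact Couchbase source) shows why a single eacces, e.g. a file still held open or locked by another Windows process, crashes couch_view at init and takes the couchdb process down with it:

      %% Sketch only; the real couch_file:init_delete_dir/1 may differ in details.
      init_delete_dir(RootDir) ->
          Dir = filename:join(RootDir, ".delete"),
          filelib:ensure_dir(filename:join(Dir, "placeholder")),
          %% The fun below corresponds to the '-init_delete_dir/1-fun-0-'/2
          %% frame in the crash report; fold_files/5 calls it for every file
          %% under Dir.
          filelib:fold_files(Dir, ".*", true,
              fun(FileName, _Acc) ->
                  %% On Windows, file:delete/1 returns {error,eacces} when the
                  %% file is still open or locked by another process; the bare
                  %% match on 'ok' then raises badmatch and couch_view:init/1
                  %% exits.
                  ok = file:delete(FileName)
              end, ok).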

      ...

      [ns_server:info,2012-12-05T3:19:12.354,ns_1@10.3.3.38:ns_port_memcached<0.436.0>:ns_port_server:log:171]memcached<0.436.0>: Wed Dec 05 03:19:12.091140 Pacific Standard Time 3: Connection closed by mccouch
      memcached<0.436.0>: Wed Dec 05 03:19:12.091140 Pacific Standard Time 3: Resetting connection to mccouch, lastReceivedCommand = notify_vbucket_update lastSentCommand = notify_vbucket_update currentCommand =unknown
      memcached<0.436.0>: Wed Dec 05 03:19:12.106765 Pacific Standard Time 3: Trying to connect to mccouch: "localhost:11213"
      [ns_server:debug,2012-12-05T3:19:12.604,ns_1@10.3.3.38:<0.2106.9>:ebucketmigrator_srv:kill_tapname:966]killing tap named: rebalance_18
      [error_logger:error,2012-12-05T3:19:12.604,ns_1@10.3.3.38:error_logger<0.6.0>:ale_error_logger_handler:log_msg:76]** Generic server <0.1251.9> terminating
      ** Last message in was {'EXIT',<0.1238.9>,shutdown}
      ** When Server state == {state,
      {"C:\\\\Program Files\\\\Couchbase\\\\Server\\\\var\\\\lib\\\\couchbase\\\\data\\\\",
      <<"default">>,
      {set_view_group,
      <<221,171,174,219,140,126,2,7,211,255,118,50,68,113,
      26,249>>,
      nil,<<"default">>,<<"_design/dev_ddoc1">>,[],
      [{set_view,0,
      [<<"views2">>,<<"views1">>],
      <<"function (doc) { emit(doc.age, doc.first_name);}">>,
      nil,[],[],undefined}],
      nil,nil,
      {set_view_index_header,1,0,0,0,0,[],nil,[],false, [],nil,[]},
      nil,replica,nil,nil,nil,[]}},
      nil,
      {set_view_group,
      <<221,171,174,219,140,126,2,7,211,255,118,50,68,113,
      26,249>>,
      <0.1254.9>,<<"default">>,<<"_design/dev_ddoc1">>,[],
      [{set_view,0,
      [<<"views2">>,<<"views1">>],
      <<"function (doc) { emit(doc.age, doc.first_name);}">>,
      {btree,<0.1254.9>,nil, #Fun<couch_btree.3.59827385>, #Fun<couch_btree.4.7841881>, #Fun<couch_set_view_group.33.15465549>, #Fun<couch_set_view_group.32.18476503>,7168,6144, true},
      [],[],#Ref<0.0.184.165360>}],
      {btree,<0.1254.9>,nil,#Fun<couch_btree.3.59827385>, #Fun<couch_btree.4.7841881>, #Fun<couch_btree.5.72034400>, #Fun<couch_set_view_group.10.21071548>,7168,6144, true},
      <0.1258.9>,
      {set_view_index_header,1,1024,0,0,0,[],nil, [nil], false,[],nil,[]},
      <0.1259.9>,replica,nil,nil,nil,
      "c:/Program Files/Couchbase/Server/var/lib/couchbase/data/@indexes/default/replica_ddabaedb8c7e0207d3ff763244711af9.view.1"},
      nil,false,not_running,nil,nil,nil,[],nil,false,
      undefined,true,[],[],
      {dict,0,16,16,8,80,48,
      {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
      {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
      0,
      {dict,0,16,16,8,80,48,
      {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
      {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
      []}}}}
      ** Reason for termination ==
      ** {badarg,[{ets,delete,
      [couch_set_view_stats,
      {<<"default">>,<<"_design/dev_ddoc1">>, <<221,171,174,219,140,126,2,7,211,255,118,50,68,113,26,249>>, replica}]},
      {couch_set_view_group,terminate,2},
      {gen_server,terminate,6},
      {proc_lib,init_p_do_apply,3}]}
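
      This second crash looks like fallout from the first: couch_set_view_group:terminate/2 calls ets:delete/2 on the couch_set_view_stats table, and ets:delete/2 exits with badarg once that table no longer exists, presumably because its owning process had already gone down in the same cascade. A tiny illustration of that ets behavior (the names here are made up for the example):

      %% ets:delete/2 on a table whose owner has died exits with badarg.
      demo() ->
          Owner = spawn(fun() ->
                            ets:new(demo_stats, [named_table, public]),
                            receive stop -> ok end
                        end),
          timer:sleep(100),                 %% give the owner time to create the table
          Owner ! stop,                     %% owner exits; the named table is dropped
          timer:sleep(100),
          catch ets:delete(demo_stats, k).  %% -> {'EXIT',{badarg,_}}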

      [rebalance:error,2012-12-05T3:19:12.761,ns_1@10.3.3.38:<0.1013.9>:ns_vbucket_mover:handle_info:252]<0.2085.9> exited with {noproc,
      {gen_server,call,
      [{'ns_memcached-default','ns_1@10.3.3.38'},
      {get_vbucket,18},
      180000]}}
      [ns_server:debug,2012-12-05T3:19:12.761,ns_1@10.3.3.38:<0.1025.9>:ns_pubsub:do_subscribe_link:132]Parent process of subscription {ns_node_disco_events,<0.1013.9>} exited with reason {noproc,
      {gen_server,
      call,
      [{'ns_memcached-default', 'ns_1@10.3.3.38'},
      {get_vbucket, 18},
      180000]}}
      [ns_server:info,2012-12-05T3:19:12.776,ns_1@10.3.3.38:ns_port_memcached<0.436.0>:ns_port_server:log:171]memcached<0.436.0>: Wed Dec 05 03:19:12.434890 Pacific Standard Time 3: Schedule cleanup of "eq_tapq:anon_3115"
      memcached<0.436.0>: Wed Dec 05 03:19:12.434890 Pacific Standard Time 3: TAP (Producer) eq_tapq:replication_building_18_'ns_1@10.3.2.239' - Clear the tap queues by force
      memcached<0.436.0>: Wed Dec 05 03:19:12.653640 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - Sending TAP_OPAQUE with command "opaque_enable_auto_nack" and vbucket 0
      memcached<0.436.0>: Wed Dec 05 03:19:12.653640 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - Sending TAP_OPAQUE with command "enable_checkpoint_sync" and vbucket 0
      memcached<0.436.0>: Wed Dec 05 03:19:12.653640 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - Sending TAP_VBUCKET_SET with vbucket 18 and state "pending"
      memcached<0.436.0>: Wed Dec 05 03:19:12.669265 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - VBucket <18> is going dead to complete vbucket takeover.
      memcached<0.436.0>: Wed Dec 05 03:19:12.700515 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - Sending TAP_VBUCKET_SET with vbucket 18 and state "active"
      memcached<0.436.0>: Wed Dec 05 03:19:12.716140 Pacific Standard Time 3: TAP takeover is completed. Disconnecting tap stream <eq_tapq:rebalance_18>
      memcached<0.436.0>: Wed Dec 05 03:19:12.716140 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - disconnected
      memcached<0.436.0>: Wed Dec 05 03:19:12.716140 Pacific Standard Time 3: Schedule cleanup of "eq_tapq:rebalance_18"
      memcached<0.436.0>: Wed Dec 05 03:19:12.716140 Pacific Standard Time 3: TAP (Producer) eq_tapq:replication_building_18_'ns_1@10.3.2.243' - disconnected, keep alive for 300 seconds
      memcached<0.436.0>: Wed Dec 05 03:19:12.716140 Pacific Standard Time 3: TAP (Producer) eq_tapq:rebalance_18 - Clear the tap queues by force
      memcached<0.436.0>: Wed Dec 05 03:19:12.747390 Pacific Standard Time 3: TAP (Producer) eq_tapq:replication_building_18_'ns_1@10.3.2.243' - Connection is closed by force.

      ...

      [user:info,2012-12-05T3:19:12.776,ns_1@10.3.3.38:<0.389.0>:ns_orchestrator:handle_info:319]Rebalance exited with reason {noproc,
      {gen_server,call,
      [{'ns_memcached-default','ns_1@10.3.3.38'},
      {get_vbucket,18},
      180000]}}

      Attaching logs:
      https://s3.amazonaws.com/bugdb/jira/MB-7371/1da69735/10.3.3.38-diag.txt.gz
      https://s3.amazonaws.com/bugdb/jira/MB-7371/1da69735/10.3.3.39-diag.txt.gz
      https://s3.amazonaws.com/bugdb/jira/MB-7371/1da69735/10.3.2.239-diag.txt.gz
      https://s3.amazonaws.com/bugdb/jira/MB-7371/1da69735/10.3.2.243-diag.txt.gz


          People

            Assignee: Filipe Manana (Inactive)
            Reporter: iryna
            Votes: 0
            Watchers: 0
