Couchbase Server
MB-6781

Creating and querying one spatial index on Mac returns {error,system_limit} because the Mac file descriptor limit is not reset to 10k

    Details

      Description

      I am on Mac OS X (Mountain Lion). I hadn't really played with spatial yet, so I went to the docs and put in the reduce, and it's causing my Couchbase Server 2.0 Beta to crash. The docs I followed: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-views-writing-geo-views.html

      I attached a Diag Report as well... Chris says it might be because of Erlang + open file limits.

      My Document (there are only 3 docs in the bucket)

      { "type": "test", "timestamp": "2012-09-28 23:37:14 -0700", "dog": "Pug", "loc": [ 2, 3 ] }

      Map (Spatial)

      function (doc, meta) {
        if (doc.loc) {
          emit(
            { type: "Point", coordinates: [doc.loc[0], doc.loc[1]] },
            [meta.id, doc.loc]
          );
        }
      }

      Dev set result (only shown when I go directly to the link in a new tab)
      URL: http://127.0.0.1:8092/default/_design/dev_spatial/_spatial/points?bbox=-180%2C-90%2C180%2C90&connection_timeout=60000

      {
      error: "case_clause",
      reason: "{error,{exit,{aborted,{no_exists,['stats_archiver-default-minute']}}}}"
      }

      In Published View I get this:
      URL: http://127.0.0.1:8092/default/_design/spatial/_spatial/points?bbox=-180%2C-90%2C180%2C90&stale=update_after&connection_timeout=60000

      {
      total_rows: 0,
      rows: [ ],
      errors: [
      { from: "local", reason: "{error,system_limit}" },
      { from: "local", reason: "{error,system_limit}" },
      { from: "local", reason: "{error,system_limit}" },
      { from: "local", reason: "{error,system_limit}" },
      [... repeated 1000's of times ...]
      { from: "local", reason: "{error,system_limit}" }
      ]
      }


        Activity

        vmx Volker Mische added a comment -

        Filipe explained it very well, hence I'll just quote him:

        "This is irrelevant.

        On OS X, maximum allowed number of simultaneously open file
        descriptors is very limited. Here it was a single node case, 1024
        vbuckets, each with its own index file open. This is exactly the same
        architectural/design issue we had with regular indexes before the
        whole b-superstar tree thing - geo indexes will be similar in the
        future, but not for 2.0, where they are an experimental feature.

        So either use Linux (and ensure a high enough min/max for nofiles in
        /etc/security/limits.conf, etc), or reduce number of vbuckets (easy in
        development environment, but not sure if it's possible for package
        installations)."

        I'll close this as "Won't Fix" for now; I'm aware of it and will take care of it in coming versions of Couchbase.
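
        For reference, a minimal sketch of the /etc/security/limits.conf entries Filipe mentions above (Linux only; the user name and values are assumptions and should match the account the server actually runs as):

        # /etc/security/limits.conf -- illustrative entries, assuming the server runs as user "couchbase"
        couchbase  soft  nofile  10240
        couchbase  hard  nofile  10240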

        vmx Volker Mische added a comment -

        Just reopening it, so I can change the "fix version"

        vmx Volker Mische added a comment -

        See the comment when I previously closed the issue.

        vmx Volker Mische added a comment -

        A solution is either reducing the number of vBuckets or increasing the file descriptor limit. See MB-6783 as a follow-up.

        farshid Farshid Ghods (Inactive) added a comment -

        Regular views work fine on an OS X installation. Once MB-6783 is fixed we will rerun the test to see if spatial indexes still run into the file descriptor issue.

        farshid Farshid Ghods (Inactive) added a comment -

        Let's keep this open; QE will close this as fixed once MB-6783 is fixed and we are able to retest.

        abhinav Abhinav Dangeti added a comment -

        Able to query spatial views with a vBucket count of up to 256.
        However, with a vBucket count of 512 or 1024, spatial view tests fail with:
        error 500 reason: {read_loop_died,
        {problem_reopening_file,
        {error,system_limit},
        {read,199,{<0.4118.0>,#Ref<0.0.0.198592>}},
        <0.12593.0>,
        "/Users/abhinav/Library/Application Support/Couchbase/var/lib/couchdb/default/master.couch.1",
        10}}
        {"error":"{read_loop_died,\n {problem_reopening_file,\n {error,system_limit},\n {read,199,{<0.4118.0>,#Ref<0.0.0.198592>}},\n <0.12593.0>,\n \"/Users/abhinav/Library/Application Support/Couchbase/var/lib/couchdb/default/master.couch.1\",\n 10}}","reason":"{gen_server,call,[<0.12592.0>,{pread_iolist,199},infinity]}"}

        abhinav Abhinav Dangeti added a comment -

        Also noticed: with the file descriptor limit set at 10240, when trying to query a spatial view, system_limit is still reached even though the maximum number of file descriptors that beam.smp uses is only around 1040.
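
        For reference, one way to spot-check that number while a spatial query is running (a sketch; the pgrep pattern is an assumption and may need adjusting for the actual process name):

        # count file descriptors currently held by the Erlang VM
        lsof -p "$(pgrep -f beam.smp | head -1)" | wc -l
        # and confirm the per-process open-files limit the session reports
        ulimit -n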

        farshid Farshid Ghods (Inactive) added a comment -

        Jens,

        Based on this information, even though the submitted change raises the maximum number of file descriptors to 10k, it still fails after 1024 file descriptors are open.

        Alk has mentioned that there is a FreeBSD issue preventing us from setting this to a higher number.

        jens Jens Alfke added a comment -

        I verified that the setrlimit call worked, by having the start-couchbase.sh script run 'ulimit -a':

        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        file size (blocks, -f) unlimited
        max locked memory (kbytes, -l) unlimited
        max memory size (kbytes, -m) unlimited
        open files (-n) 10240
        pipe size (512 bytes, -p) 1
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) 709
        virtual memory (kbytes, -v) unlimited

        So I don't know why we would be running into limits...
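
        For reference, the per-process rlimit is not the only ceiling on OS X; the launchd and kernel limits can be checked as well (a sketch):

        # soft/hard open-file limits for the current launchd session
        launchctl limit maxfiles
        # kernel-wide open-file ceilings
        sysctl kern.maxfiles kern.maxfilesperproc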

        jens Jens Alfke added a comment -

        I've searched the web for info about Darwin-specific setrlimit and RLIMIT_NOFILE issues, and the only thing that comes up is that the call will fail if you set a value above OPEN_MAX (which is 10240). But we don't do that, and we aren't getting errors.

        I'm guessing that the actual limit being hit is something other than the max number of file descriptors. Maybe RLIMIT_NPROC, the maximum number of running processes per userid? According to "ulimit -a" it defaults to 709.

        Maybe someone familiar with the geo code could look into exactly what system call is failing and with what errno.
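
        For reference, a sketch of how that could be checked on OS X (the pgrep pattern is an assumption; dtruss needs root):

        # trace system calls of the Erlang VM while reproducing the query and look for failing calls
        sudo dtruss -p "$(pgrep -f beam.smp | head -1)" 2>&1 | grep -i 'err#'
        # count processes owned by the current user against the 709 "max user processes" limit
        ps ax -o user | grep -c "$(whoami)"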

        vmx Volker Mische added a comment -

        It could be another limit as well. In order to figure out which limit it may reach, here's what the geo index does. It works like on Apache CouchDB: as every vBucket is a database, for every design document a view is created for every vBucket. I thought we hit the file descriptor limit easily, but of course it could be something else. I don't know if Erlang does something like opening a huge number of processes when it works on so many files.
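
        Under that assumption (one open index file per vBucket per design document), the arithmetic is roughly as follows (numbers taken from the comments above):

        vbuckets=1024
        design_docs=1
        echo $(( vbuckets * design_docs ))   # ~1024 open index files, already at the default Erlang port limit of 1024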

        FilipeManana Filipe Manana (Inactive) added a comment -

        I remember having a similar issue in the past on OS X Snow Leopard. I tried several ways to increase the maximum allowed number of open files, but whatever value was set, in practice it didn't allow more than a few thousand (even if it reported allowing 10k or more).

        If I recall correctly, Dustin knew a lot of details about this. I think it's similar to what Farshid said above.

        dustin Dustin Sallings (Inactive) added a comment -

        There are a few different limits we're talking about here:

        1. rlimits.
        2. erlang limits
        3. limitations due to erlang using select() as an IO multiplexer.

        I don't think this is a file descriptor limit, as that gives {error,emfile}. system_limit generally refers to limits from #2, such as overrunning the maximum number of ports. This limit is also 1024 by default, and can be raised by setting ERL_MAX_PORTS: http://www.erlang.org/doc/efficiency_guide/advanced.html#ports

        That one seems likely.

        #3 above (select()) is a big issue on OS X. It seems like it should be trivial to fix (and may have been in some version by now), but I've seen that one recently as well. It's an easy limit to hit, and you can't do much about it other than make Erlang use kqueue (which it does, I believe, on FreeBSD, so I don't know why it wouldn't on OS X).
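
        For reference, a minimal sketch of raising that limit before the server starts (the value and start-script invocation are illustrative):

        # raise the Erlang port limit for the VM that Couchbase starts
        export ERL_MAX_PORTS=10240
        # then (re)start the server, e.g. via its start script
        ./start-couchbase.sh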

        vmx Volker Mische added a comment -

        Abhinav, can you try setting ERL_MAX_PORTS? I'll assign the bug to you in the meantime.

        Pathe Patrick added a comment -

        I'm having similar issues on a 32-bit Ubuntu box with the 2.0.0 beta .deb installation. I don't get the system_limit messages, but all Couchbase processes die when requesting a spatial view. Some output I found in the log file:

        10T21:50:45.178,ns_1@127.0.0.1:couch_spatial:couch_log:debug:36]Spawning new group server for spatial group _design/dev_products in database default/242.
        [couchdb:debug,2012-10-10T21:50:45.179,ns_1@127.0.0.1:<0.22359.1>:couch_log:debug:36]request_group {Pid, Seq} {<0.58.2>,0}
        [couchdb:debug,2012-10-10T21:50:45.179,ns_1@127.0.0.1:<0.58.2>:couch_log:debug:36](2) request_group handler: seqs: req: 0, group: 0
        [couchdb:debug,2012-10-10T21:50:45.180,ns_1@127.0.0.1:<0.75.2>:couch_log:debug:36]request_group {Pid, Seq} {<0.58.2>,0}
        [couchdb:debug,2012-10-10T21:50:45.180,ns_1@127.0.0.1:<0.58.2>:couch_log:debug:36](2) request_group handler: seqs: req: 0, group: 0

        but that doesn't tell much about why everything is crashing. Can't find any other log entries either.

        FilipeManana Filipe Manana (Inactive) added a comment -

        Patrick, can you attach the logs? Thanks.

        farshid Farshid Ghods (Inactive) added a comment -

        Hi Volker,

        Is this a dupe of some of the existing issues you and Filipe are working on now?

        FilipeManana Filipe Manana (Inactive) added a comment -

        These are unrelated. Other issues are about avoiding opening 2 file descriptors for empty spatial or dev views (per design document), and avoiding database file handle leaks when spatial views are used and bucket compaction happens.

        vmx Volker Mische added a comment -

        Patrick, can you always reproduce that? If yes, can you please post the exact steps to see the crash? I've an Ubuntu 32-bit desktop machine at home, so I might be able to reproduce it as well.

        vmx Volker Mische added a comment -

        Farshid, as Filipe says, it's not related to MB-6860. I'm assigning it back to you, so that you can assign it to someone to try out setting ERL_MAX_PORTS to some higher value.

        Pathe Patrick added a comment -

        Volker, the problem is easily reproducible by creating a new standard spatial view and having at least one document with a spatial point: location [41.386944,2.170025];

        function (doc) {
          if (doc.location) {
            emit(doc.location, null);
          }
        }

        The server crashes when trying to show results / show the view on 8092 (and being a bit impatient about the loading time, clicking multiple times). With a bit more patience and one click at a time the server seems to be stable, but I don't get any results in the view (there should be 10). The Linode I/O graph and CPU graph get high spikes and I got a warning email about excessive I/O from Linode (> 18k blocks / sec I/O). I tried dev and production views with multiple reloads.

        Filipe, this is a snap from /opt/couchbase/var/lib/couchbase/logs/debug.11 right after the crash. I can send you the full logs, if you like.

        [error_logger:error,2012-10-16T10:41:02.060,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72]
        =========================CRASH REPORT=========================
        crasher:
        initial call: couch_file:init/1
        pid: <0.30493.88>
        registered_name: []
        exception exit: {{badmatch,[41.39055252,2.162917375]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}
        in function gen_server:terminate/6
        in call from couch_file:init/1
        ancestors: [<0.30489.88>,couch_spatial,couch_secondary_services,
        couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
        <0.59.0>]
        messages: []
        links: [<0.30497.88>]
        dictionary: []
        trap_exit: true
        status: running
        heap_size: 610
        stack_size: 24
        reductions: 706
        neighbours:
        neighbour: [{pid,<0.30497.88>},
        {registered_name,[]},
        {initial_call,{couch_ref_counter,init,['Argument__1']}},
        {current_function,{gen_server,loop,6}},
        {ancestors,[<0.30489.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]},
        {messages,[]},
        {links,[<0.30493.88>]},
        {dictionary,[]},
        {trap_exit,false},
        {status,waiting},
        {heap_size,377},
        {stack_size,9},
        {reductions,110}]

        [ns_server:error,2012-10-16T10:41:08.896,ns_1@127.0.0.1:<0.7792.0>:ns_memcached:verify_report_long_call:274]call {stats,<<>>} took too long: 1319882 us
        [ns_server:error,2012-10-16T10:41:10.771,ns_1@127.0.0.1:<0.7791.0>:ns_memcached:verify_report_long_call:274]call {stats,<<"timings">>} took too long: 1522090 us
        [stats:warn,2012-10-16T10:41:10.805,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201]Dropped 4 ticks
        [ns_server:error,2012-10-16T10:41:11.629,ns_1@127.0.0.1:<0.7792.0>:ns_memcached:verify_report_long_call:274]call topkeys took too long: 866192 us
        [ns_server:error,2012-10-16T10:41:13.962,ns_1@127.0.0.1:<0.7791.0>:ns_memcached:verify_report_long_call:274]call {stats,<<>>} took too long: 1935259 us
        [ns_server:error,2012-10-16T10:41:14.311,ns_1@127.0.0.1:<0.7793.0>:ns_memcached:verify_report_long_call:274]call list_vbuckets took too long: 3488579 us
        [stats:warn,2012-10-16T10:41:18.375,ns_1@127.0.0.1:system_stats_collector:system_stats_collector:handle_info:133]lost 1 ticks
        [ns_server:error,2012-10-16T10:41:18.882,ns_1@127.0.0.1:ns_doctor:ns_doctor:update_status:204]The following buckets became not ready on node 'ns_1@127.0.0.1': ["default"], those of them are active ["default"]
        [ns_server:error,2012-10-16T10:41:18.889,ns_1@127.0.0.1:'ns_memcached-default':ns_memcached:handle_info:594]handle_info(ensure_bucket,..) took too long: 6014770 us
        [stats:warn,2012-10-16T10:41:19.422,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201]Dropped 7 ticks
        [ns_server:info,2012-10-16T10:41:19.422,ns_1@127.0.0.1:ns_doctor:ns_doctor:update_status:210]The following buckets became ready on node 'ns_1@127.0.0.1': ["default"]
        [ns_server:error,2012-10-16T10:41:22.886,ns_1@127.0.0.1:'ns_memcached-default':ns_memcached:handle_info:594]handle_info(ensure_bucket,..) took too long: 725717 us
        [stats:warn,2012-10-16T10:41:22.979,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201]Dropped 1 ticks
        [stats:warn,2012-10-16T10:41:24.777,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201]Dropped 1 ticks
        [error_logger:error,2012-10-16T10:41:25.815,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]Error in process <0.31516.88> on node 'ns_1@127.0.0.1' with exit value: {{badmatch,[4.138310e+01,2.181913e+00]},[{couch_spatial_updater,process_result,1},{couch_spatial_updater,'-process_results/1-fun-0-',2},{lists,foldl,3},{lists,map,2},{couch_spatial_updater,spatial_docs,4},{couch_spatial_updater,update,2}]}


        [error_logger:error,2012-10-16T10:41:25.821,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.31498.88> terminating
        ** Last message in was {'EXIT',<0.31516.88>,
        {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater, '-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}}
        ** When Server state == {group_state,<<"default/224">>,
        {"/opt/couchbase/var/lib/couchbase/data",
        <<"default/224">>,
        {spatial_group,
        <<155,78,10,252,86,203,121,94,106,184,229,209,71,
        187,100,242>>,
        nil,nil,<<"_design/dev_products">>,
        <<"javascript">>,[],
        [{spatial,nil,0,nil,0,
        <<"function (doc) {\n if (doc.location) {\n emit(doc.location, null);\n}\n}">>,
        [<<"index">>],
        0,0,0,nil}],
        {[]},
        nil,0,0}},
        {spatial_group,
        <<155,78,10,252,86,203,121,94,106,184,229,209,71,187,
        100,242>>,
        {db,<0.1373.0>,<0.1374.0>,nil,
        <<"1349906303013353">>,<0.1370.0>,<0.1375.0>,
        {db_header,10,1, <<0,0,0,0,34,36,0,0,0,0,0,83,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,1,246>>, <<0,0,0,0,34,119,0,0,0,0,0,85,0,0,0,0,1>>, <<0,0,0,0,224,91,0,0,0,0,0,93>>, 0,nil,nil},
        1,
        {btree,<0.1370.0>,
        {8740,<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,246>>,83},
        #Fun<couch_db_updater.7.89001503>,
        #Fun<couch_db_updater.8.75953275>,
        #Fun<couch_btree.5.72034400>,
        #Fun<couch_db_updater.9.14108461>,1279,true},
        {btree,<0.1370.0>,
        {8823,<<0,0,0,0,1>>,85},
        #Fun<couch_db_updater.10.50603258>,
        #Fun<couch_db_updater.11.85949495>,
        #Fun<couch_db_updater.6.41937156>,
        #Fun<couch_db_updater.12.107260449>,1279,true},
        {btree,<0.1370.0>,
        {57435,<<>>,93},
        #Fun<couch_btree.3.59827385>,
        #Fun<couch_btree.4.7841881>,
        #Fun<couch_btree.5.72034400>,nil,1279,true},
        1,<<"default/224">>,
        "/opt/couchbase/var/lib/couchbase/data/default/224.couch.1",
        [],nil,
        {user_ctx,null,[],undefined},
        nil,
        [before_header,after_header,on_file_open],
        []},
        <0.31502.88>,<<"_design/dev_products">>,
        <<"javascript">>,[],
        [{spatial,nil,0,nil,0,
        <<"function (doc) {\n if (doc.location) {n emit(doc.location, null);n}\n}">>,
        [<<"index">>],
        0,0,0,<0.31502.88>}],
        {[]},
        {btree,<0.31502.88>,nil, #Fun<couch_btree.0.59827385>, #Fun<couch_btree.1.7841881>, #Fun<couch_btree.2.72034400>,nil,1279,false},
        0,0},
        <0.31516.88>,nil,false,
        {{<0.31509.88>,#Ref<0.0.3655.71595>},1},
        <0.31507.88>}
        ** Reason for termination ==
        ** {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}

        [error_logger:error,2012-10-16T10:41:25.839,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72]
        =========================CRASH REPORT=========================
        crasher:
        initial call: couch_spatial_group:init/1
        pid: <0.31498.88>
        registered_name: []
        exception exit: {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}
        in function gen_server:terminate/6
        ancestors: [couch_spatial,couch_secondary_services,couch_server_sup,
        cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
        messages: []
        links: [<0.31502.88>,<0.7410.0>]
        dictionary: []
        trap_exit: true
        status: running
        heap_size: 1597
        stack_size: 24
        reductions: 3292
        neighbours:

        [error_logger:error,2012-10-16T10:41:25.840,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.31502.88> terminating
        ** Last message in was {'EXIT',<0.31498.88>,
        {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater, '-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}}
        ** When Server state == {file,<0.31505.88>,<0.31506.88>,39}
        ** Reason for termination ==
        ** {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}

        [error_logger:error,2012-10-16T10:41:25.845,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72]
        =========================CRASH REPORT=========================
        crasher:
        initial call: couch_file:init/1
        pid: <0.31502.88>
        registered_name: []
        exception exit: {{badmatch,[41.383103901,2.181912661]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}
        in function gen_server:terminate/6
        in call from couch_file:init/1
        ancestors: [<0.31498.88>,couch_spatial,couch_secondary_services,
        couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
        <0.59.0>]
        messages: []
        links: [<0.31507.88>]
        dictionary: []
        trap_exit: true
        status: running
        heap_size: 610
        stack_size: 24
        reductions: 706
        neighbours:
        neighbour: [{pid,<0.31507.88>},
        {registered_name,[]},
        {initial_call,{couch_ref_counter,init,['Argument__1']}},
        {current_function,{gen_server,loop,6}},
        {ancestors,[<0.31498.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]},
        {messages,[]},
        {links,[<0.31502.88>]},
        {dictionary,[]},
        {trap_exit,false},
        {status,waiting},
        {heap_size,377},
        {stack_size,9},
        {reductions,110}]

        [ns_doctor:debug,2012-10-16T10:41:30.265,ns_1@127.0.0.1:ns_doctor:ns_doctor:handle_info:136]Current node statuses:
        [{'ns_1@127.0.0.1',
        [{last_heard,{1350,384085,579005}},
        {outgoing_replications_safeness_level,[{"default",green}]},
        {incoming_replications_conf_hashes,[{"default",[]}]},
        {active_buckets,["default"]},
        {ready_buckets,["default"]},
        {local_tasks,[]},
        {memory,
        [{total,801721048},
        {processes,744843176},
        {processes_used,744841640},
        {system,56877872},
        {atom,947581},
        {atom_used,923632},
        {binary,16851464},
        {code,7533142},
        {ets,23608124}]},
        {system_memory_data,
        [{system_total_memory,520781824},
        {free_swap,36864},
        {total_swap,536866816},
        {cached_memory,7491584},
        {buffered_memory,352256},
        {free_memory,5218304},
        {total_memory,520781824}]},
        {node_storage_conf,
        [{db_path,"/opt/couchbase/var/lib/couchbase/data"},
        {index_path,"/opt/couchbase/var/lib/couchbase/data"}]},
        {statistics,
        [{wall_clock,{477786446,5173}},
        {context_switches,{1974640281,0}},
        {garbage_collection,{128000857,1904450423,0}},
        {io,input,590697858},{output,3673513125},
        {reductions,{1295735187,2577936}},
        {run_queue,1},
        {runtime,{4290737474,520}}]},
        {system_stats,
        [{cpu_utilization_rate,4.054054054054054},
        {swap_total,536866816},
        {swap_used,536862720}]},
        {interesting_stats,
        [{curr_items,10},{curr_items_tot,10},{vb_replica_curr_items,0}]},
        {cluster_compatibility_version,131072},
        {version,
        [{public_key,"0.13"},
        {lhttpc,"1.3.0"},
        {ale,"8cffe61"},
        {os_mon,"2.2.7"},
        {couch_set_view,"1.2.0a-5282953-git"},
        {mnesia,"4.5"},
        {inets,"5.7.1"},
        {couch,"1.2.0a-5282953-git"},
        {mapreduce,"1.0.0"},
        {couch_index_merger,"1.2.0a-5282953-git"},
        {kernel,"2.14.5"},
        {crypto,"2.0.4"},
        {ssl,"4.1.6"},
        {sasl,"2.1.10"},
        {couch_view_parser,"1.0.0"},
        {ns_server,"2.0.0-1723-rel-community"},
        {mochiweb,"1.4.1"},
        {oauth,"7d85d3ef"},
        {stdlib,"1.17.5"}]},
        {supported_compat_version,[2,0]},
        {system_arch,"i686-pc-linux-gnu"},
        {wall_clock,477786},
        {memory_data,{520781824,514686976,{<0.7803.0>,13463144}}},
        {disk_data,
        [{"/",10158008,25},
        {"/dev",254084,1},
        {"/run",50860,78},
        {"/run/lock",5120,0},
        {"/run/shm",254288,0}]},
        {meminfo, <<"MemTotal: 508576 kB\nMemFree: 5272 kB\nBuffers: 320 kB\nCached: 6128 kB\nSwapCached: 39364 kB\nActive: 236360 kB\nInactive: 236912 kB\nActive(anon): 233460 kB\nInactive(anon): 233664 kB\nActive(file): 2900 kB\nInactive(file): 3248 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 508576 kB\nLowFree: 5272 kB\nSwapTotal: 524284 kB\nSwapFree: 0 kB\nDirty: 0 kB\nWriteback: 0 kB\nAnonPages: 427852 kB\nMapped: 3772 kB\nShmem: 4 kB\nSlab: 14832 kB\nSReclaimable: 5900 kB\nSUnreclaim: 8932 kB\nKernelStack: 1144 kB\nPageTables: 3064 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 778572 kB\nCommitted_AS: 1429336 kB\nVmallocTotal: 329720 kB\nVmallocUsed: 3072 kB\nVmallocChunk: 324428 kB\nDirectMap4k: 532480 kB\nDirectMap2M: 0 kB\n">>}]}]
        [error_logger:error,2012-10-16T10:41:31.018,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]Error in process <0.32384.88> on node 'ns_1@127.0.0.1' with exit value: {{badmatch,[4.140051e+01,2.156376e+00]},[{couch_spatial_updater,process_result,1},{couch_spatial_updater,'-process_results/1-fun-0-',2},{lists,foldl,3},{lists,map,2},{couch_spatial_updater,spatial_docs,4},{couch_spatial_updater,update,2}]}


        [error_logger:error,2012-10-16T10:41:31.019,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.32354.88> terminating
        ** Last message in was {'EXIT',<0.32384.88>,
        {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater, '-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}}
        ** When Server state == {group_state,<<"default/129">>,
        {"/opt/couchbase/var/lib/couchbase/data",
        <<"default/129">>,
        {spatial_group,
        <<155,78,10,252,86,203,121,94,106,184,229,209,71,
        187,100,242>>,
        nil,nil,<<"_design/dev_products">>,
        <<"javascript">>,[],
        [{spatial,nil,0,nil,0,
        <<"function (doc) {\n if (doc.location) {\n emit(doc.location, null);\n}\n}">>,
        [<<"index">>],
        0,0,0,nil}],
        {[]},
        nil,0,0}},
        {spatial_group,
        <<155,78,10,252,86,203,121,94,106,184,229,209,71,187,
        100,242>>,
        {db,<0.631.0>,<0.632.0>,nil,<<"1349906302437662">>,
        <0.628.0>,<0.633.0>,
        {db_header,10,1, <<0,0,0,0,33,248,0,0,0,0,0,83,0,0,0,0,1,0,0,0,0,0, 0,0,0,0,1,202>>, <<0,0,0,0,34,75,0,0,0,0,0,85,0,0,0,0,1>>, <<0,0,0,0,224,91,0,0,0,0,0,93>>, 0,nil,nil},
        1,
        {btree,<0.628.0>,
        {8696,<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,202>>,83},
        #Fun<couch_db_updater.7.89001503>,
        #Fun<couch_db_updater.8.75953275>,
        #Fun<couch_btree.5.72034400>,
        #Fun<couch_db_updater.9.14108461>,1279,true},
        {btree,<0.628.0>,
        {8779,<<0,0,0,0,1>>,85},
        #Fun<couch_db_updater.10.50603258>,
        #Fun<couch_db_updater.11.85949495>,
        #Fun<couch_db_updater.6.41937156>,
        #Fun<couch_db_updater.12.107260449>,1279,true},
        {btree,<0.628.0>,
        {57435,<<>>,93},
        #Fun<couch_btree.3.59827385>,
        #Fun<couch_btree.4.7841881>,
        #Fun<couch_btree.5.72034400>,nil,1279,true},
        1,<<"default/129">>,
        "/opt/couchbase/var/lib/couchbase/data/default/129.couch.1",
        [],nil,
        {user_ctx,null,[],undefined},
        nil,
        [before_header,after_header,on_file_open],
        []},
        <0.32358.88>,<<"_design/dev_products">>,
        <<"javascript">>,[],
        [{spatial,nil,0,nil,0,
        <<"function (doc) {\n if (doc.location) {n emit(doc.location, null);n}\n}">>,
        [<<"index">>],
        0,0,0,<0.32358.88>}],
        {[]},
        {btree,<0.32358.88>,nil, #Fun<couch_btree.0.59827385>, #Fun<couch_btree.1.7841881>, #Fun<couch_btree.2.72034400>,nil,1279,false},
        0,0},
        <0.32384.88>,nil,false,
        {{<0.32371.88>,#Ref<0.0.3655.98068>},1},
        <0.32362.88>}
        ** Reason for termination ==
        ** {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}

        [error_logger:error,2012-10-16T10:41:31.023,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72]
        =========================CRASH REPORT=========================
        crasher:
        initial call: couch_spatial_group:init/1
        pid: <0.32354.88>
        registered_name: []
        exception exit: {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}
        in function gen_server:terminate/6
        ancestors: [couch_spatial,couch_secondary_services,couch_server_sup,
        cb_couch_sup,ns_server_cluster_sup,<0.59.0>]
        messages: []
        links: [<0.32358.88>,<0.7410.0>]
        dictionary: []
        trap_exit: true
        status: running
        heap_size: 1597
        stack_size: 24
        reductions: 3289
        neighbours:

        [error_logger:error,2012-10-16T10:41:31.029,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76]** Generic server <0.32358.88> terminating
        ** Last message in was {'EXIT',<0.32354.88>,
        {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater, '-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}}
        ** When Server state == {file,<0.32360.88>,<0.32361.88>,39}
        ** Reason for termination ==
        ** {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}

        [error_logger:error,2012-10-16T10:41:31.031,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72]
        =========================CRASH REPORT=========================
        crasher:
        initial call: couch_file:init/1
        pid: <0.32358.88>
        registered_name: []
        exception exit: {{badmatch,[41.40050888,2.156376361]},
        [{couch_spatial_updater,process_result,1},
        {couch_spatial_updater,'-process_results/1-fun-0-',2},
        {lists,foldl,3},
        {lists,map,2},
        {couch_spatial_updater,spatial_docs,4},
        {couch_spatial_updater,update,2}]}
        in function gen_server:terminate/6
        in call from couch_file:init/1
        ancestors: [<0.32354.88>,couch_spatial,couch_secondary_services,
        couch_server_sup,cb_couch_sup,ns_server_cluster_sup,
        <0.59.0>]
        messages: []
        links: [<0.32362.88>]
        dictionary: []
        trap_exit: true
        status: running
        heap_size: 610
        stack_size: 24
        reductions: 702
        neighbours:
        neighbour: [{pid,<0.32362.88>},
        {registered_name,[]},
        {initial_call,{couch_ref_counter,init,['Argument__1']}},
        {current_function,{gen_server,loop,6}},
        {ancestors,[<0.32354.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]},
        {messages,[]},
        {links,[<0.32358.88>]},
        {dictionary,[]},
        {trap_exit,false},
        {status,waiting},
        {heap_size,377},
        {stack_size,9},
        {reductions,110}]

        Show
        Pathe Patrick added a comment - Volker, the problem is easily reproducible by creating a new standard spatial view and having at least one document with a spatial point: location [41.386944,2.170025] ; function (doc) { if (doc.location) { emit(doc.location, null); } } The server crashes when trying to show results / show the view on 8092 (and being a bit impatient about the loading time, clicking multiple times). With a bit more patience and one click at a time the server seems to be stable, but I don't get any results in the view (there should be 10). The Linode I/O graph and CPU graph get high spikes and I got a warning email about excessive I/O from Linode (> 18k blocks / sec I/O). I tried dev and production views with multiple reloads. Filipe, this is a snap from /opt/couchbase/var/lib/couchbase/logs/debug.11 right after the crash. I can send you the full logs, if you like. [error_logger:error,2012-10-16T10:41:02.060,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72] =========================CRASH REPORT========================= crasher: initial call: couch_ file:init/1 pid: <0.30493.88> registered_name: [] exception exit: {{badmatch, [41.39055252,2.162917375] }, [ {couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} in function gen_server:terminate/6 in call from couch_ file:init/1 ancestors: [<0.30489.88>,couch_spatial,couch_secondary_services, couch_server_sup,cb_couch_sup,ns_server_cluster_sup, <0.59.0>] messages: [] links: [<0.30497.88>] dictionary: [] trap_exit: true status: running heap_size: 610 stack_size: 24 reductions: 706 neighbours: neighbour: [{pid,<0.30497.88>}, {registered_name,[]}, {initial_call,{couch_ref_counter,init, ['Argument__1'] }}, {current_function,{gen_server,loop,6}}, {ancestors,[<0.30489.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]}, {messages,[]}, {links,[<0.30493.88>]}, {dictionary,[]}, {trap_exit,false}, {status,waiting}, {heap_size,377}, {stack_size,9}, {reductions,110}] [ns_server:error,2012-10-16T10:41:08.896,ns_1@127.0.0.1:<0.7792.0>:ns_memcached:verify_report_long_call:274] call {stats,<<>>} took too long: 1319882 us [ns_server:error,2012-10-16T10:41:10.771,ns_1@127.0.0.1:<0.7791.0>:ns_memcached:verify_report_long_call:274] call {stats,<<"timings">>} took too long: 1522090 us [stats:warn,2012-10-16T10:41:10.805,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201] Dropped 4 ticks [ns_server:error,2012-10-16T10:41:11.629,ns_1@127.0.0.1:<0.7792.0>:ns_memcached:verify_report_long_call:274] call topkeys took too long: 866192 us [ns_server:error,2012-10-16T10:41:13.962,ns_1@127.0.0.1:<0.7791.0>:ns_memcached:verify_report_long_call:274] call {stats,<<>>} took too long: 1935259 us [ns_server:error,2012-10-16T10:41:14.311,ns_1@127.0.0.1:<0.7793.0>:ns_memcached:verify_report_long_call:274] call list_vbuckets took too long: 3488579 us [stats:warn,2012-10-16T10:41:18.375,ns_1@127.0.0.1:system_stats_collector:system_stats_collector:handle_info:133] lost 1 ticks [ns_server:error,2012-10-16T10:41:18.882,ns_1@127.0.0.1:ns_doctor:ns_doctor:update_status:204] The following buckets became not ready on node 'ns_1@127.0.0.1': ["default"] , those of them are active ["default"] [ns_server:error,2012-10-16T10:41:18.889,ns_1@127.0.0.1:'ns_memcached-default':ns_memcached:handle_info:594] handle_info(ensure_bucket,..) 
took too long: 6014770 us [stats:warn,2012-10-16T10:41:19.422,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201] Dropped 7 ticks [ns_server:info,2012-10-16T10:41:19.422,ns_1@127.0.0.1:ns_doctor:ns_doctor:update_status:210] The following buckets became ready on node 'ns_1@127.0.0.1': ["default"] [ns_server:error,2012-10-16T10:41:22.886,ns_1@127.0.0.1:'ns_memcached-default':ns_memcached:handle_info:594] handle_info(ensure_bucket,..) took too long: 725717 us [stats:warn,2012-10-16T10:41:22.979,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201] Dropped 1 ticks [stats:warn,2012-10-16T10:41:24.777,ns_1@127.0.0.1:<0.7799.0>:stats_collector:latest_tick:201] Dropped 1 ticks [error_logger:error,2012-10-16T10:41:25.815,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] Error in process <0.31516.88> on node 'ns_1@127.0.0.1' with exit value: {{badmatch, [4.138310e+01,2.181913e+00] },[{couch_spatial_updater,process_result,1} , {couch_spatial_updater,'-process_results/1-fun-0-',2},{lists,foldl,3},{lists,map,2},{couch_spatial_updater,spatial_docs,4},{couch_spatial_updater,update,2}]} [error_logger:error,2012-10-16T10:41:25.821,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] ** Generic server <0.31498.88> terminating ** Last message in was {'EXIT',<0.31516.88>, {{badmatch, [41.383103901,2.181912661] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater, '-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]}} ** When Server state == {group_state,<<"default/224">>, {"/opt/couchbase/var/lib/couchbase/data", <<"default/224">>, {spatial_group, <<155,78,10,252,86,203,121,94,106,184,229,209,71, 187,100,242>>, nil,nil,<<"_design/dev_products">>, <<"javascript">>,[], [{spatial,nil,0,nil,0, <<"function (doc) {\n if (doc.location) {\n emit(doc.location, null);\n}\n}">>, [<<"index">>] , 0,0,0,nil}], {[]}, nil,0,0}}, {spatial_group, <<155,78,10,252,86,203,121,94,106,184,229,209,71,187, 100,242>>, {db,<0.1373.0>,<0.1374.0>,nil, <<"1349906303013353">>,<0.1370.0>,<0.1375.0>, {db_header,10,1, <<0,0,0,0,34,36,0,0,0,0,0,83,0,0,0,0,1,0,0,0,0,0,0, 0,0,0,1,246>>, <<0,0,0,0,34,119,0,0,0,0,0,85,0,0,0,0,1>>, <<0,0,0,0,224,91,0,0,0,0,0,93>>, 0,nil,nil}, 1, {btree,<0.1370.0>, {8740,<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,246>>,83}, #Fun<couch_db_updater.7.89001503>, #Fun<couch_db_updater.8.75953275>, #Fun<couch_btree.5.72034400>, #Fun<couch_db_updater.9.14108461>,1279,true}, {btree,<0.1370.0>, {8823,<<0,0,0,0,1>>,85}, #Fun<couch_db_updater.10.50603258>, #Fun<couch_db_updater.11.85949495>, #Fun<couch_db_updater.6.41937156>, #Fun<couch_db_updater.12.107260449>,1279,true}, {btree,<0.1370.0>, {57435,<<>>,93}, #Fun<couch_btree.3.59827385>, #Fun<couch_btree.4.7841881>, #Fun<couch_btree.5.72034400>,nil,1279,true}, 1,<<"default/224">>, "/opt/couchbase/var/lib/couchbase/data/default/224.couch.1", [],nil, {user_ctx,null,[],undefined}, nil, [before_header,after_header,on_file_open] , []}, <0.31502.88>,<<"_design/dev_products">>, <<"javascript">>,[], [{spatial,nil,0,nil,0, <<"function (doc) {\n if (doc.location) {n emit(doc.location, null);n}\n}">>, [<<"index">>] , 0,0,0,<0.31502.88>}], {[]}, {btree,<0.31502.88>,nil, #Fun<couch_btree.0.59827385>, #Fun<couch_btree.1.7841881>, #Fun<couch_btree.2.72034400>,nil,1279,false}, 0,0}, <0.31516.88>,nil,false, {{<0.31509.88>,#Ref<0.0.3655.71595>},1} , <0.31507.88>} ** Reason for termination == ** {{badmatch, [41.383103901,2.181912661] }, 
[{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2} , {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} [error_logger:error,2012-10-16T10:41:25.839,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72] =========================CRASH REPORT========================= crasher: initial call: couch_spatial_group:init/1 pid: <0.31498.88> registered_name: [] exception exit: {{badmatch, [41.383103901,2.181912661] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3} , {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} in function gen_server:terminate/6 ancestors: [couch_spatial,couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>] messages: [] links: [<0.31502.88>,<0.7410.0>] dictionary: [] trap_exit: true status: running heap_size: 1597 stack_size: 24 reductions: 3292 neighbours: [error_logger:error,2012-10-16T10:41:25.840,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] ** Generic server <0.31502.88> terminating ** Last message in was {'EXIT',<0.31498.88>, {{badmatch, [41.383103901,2.181912661] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater, '-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2} , {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]}} ** When Server state == {file,<0.31505.88>,<0.31506.88>,39} ** Reason for termination == ** {{badmatch, [41.383103901,2.181912661] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4} , {couch_spatial_updater,update,2}]} [error_logger:error,2012-10-16T10:41:25.845,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72] =========================CRASH REPORT========================= crasher: initial call: couch_ file:init/1 pid: <0.31502.88> registered_name: [] exception exit: {{badmatch, [41.383103901,2.181912661] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2} ]} in function gen_server:terminate/6 in call from couch_ file:init/1 ancestors: [<0.31498.88>,couch_spatial,couch_secondary_services, couch_server_sup,cb_couch_sup,ns_server_cluster_sup, <0.59.0>] messages: [] links: [<0.31507.88>] dictionary: [] trap_exit: true status: running heap_size: 610 stack_size: 24 reductions: 706 neighbours: neighbour: [ {pid,<0.31507.88>} , {registered_name,[]}, {initial_call,{couch_ref_counter,init, ['Argument__1'] }}, {current_function,{gen_server,loop,6}}, {ancestors,[<0.31498.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]}, {messages,[]}, {links,[<0.31502.88>]}, {dictionary,[]}, {trap_exit,false}, {status,waiting}, {heap_size,377}, {stack_size,9}, {reductions,110}] [ns_doctor:debug,2012-10-16T10:41:30.265,ns_1@127.0.0.1:ns_doctor:ns_doctor:handle_info:136] Current node statuses: [{'ns_1@127.0.0.1', [{last_heard,{1350,384085,579005}}, {outgoing_replications_safeness_level, [{"default",green}] }, {incoming_replications_conf_hashes, [{"default",[]}] }, {active_buckets,["default"]}, {ready_buckets,["default"]}, {local_tasks,[]}, {memory, [{total,801721048}, {processes,744843176}, {processes_used,744841640}, 
{system,56877872}, {atom,947581}, {atom_used,923632}, {binary,16851464}, {code,7533142}, {ets,23608124}]}, {system_memory_data, [{system_total_memory,520781824}, {free_swap,36864}, {total_swap,536866816}, {cached_memory,7491584}, {buffered_memory,352256}, {free_memory,5218304}, {total_memory,520781824}]}, {node_storage_conf, [{db_path,"/opt/couchbase/var/lib/couchbase/data"}, {index_path,"/opt/couchbase/var/lib/couchbase/data"}]}, {statistics, [{wall_clock,{477786446,5173}}, {context_switches,{1974640281,0}}, {garbage_collection,{128000857,1904450423,0}}, {io, input,590697858},{output,3673513125} , {reductions,{1295735187,2577936}}, {run_queue,1}, {runtime,{4290737474,520}}]}, {system_stats, [{cpu_utilization_rate,4.054054054054054}, {swap_total,536866816}, {swap_used,536862720}]}, {interesting_stats, [{curr_items,10},{curr_items_tot,10},{vb_replica_curr_items,0}] }, {cluster_compatibility_version,131072}, {version, [{public_key,"0.13"}, {lhttpc,"1.3.0"}, {ale,"8cffe61"}, {os_mon,"2.2.7"}, {couch_set_view,"1.2.0a-5282953-git"}, {mnesia,"4.5"}, {inets,"5.7.1"}, {couch,"1.2.0a-5282953-git"}, {mapreduce,"1.0.0"}, {couch_index_merger,"1.2.0a-5282953-git"}, {kernel,"2.14.5"}, {crypto,"2.0.4"}, {ssl,"4.1.6"}, {sasl,"2.1.10"}, {couch_view_parser,"1.0.0"}, {ns_server,"2.0.0-1723-rel-community"}, {mochiweb,"1.4.1"}, {oauth,"7d85d3ef"}, {stdlib,"1.17.5"}]}, {supported_compat_version,[2,0]}, {system_arch,"i686-pc-linux-gnu"}, {wall_clock,477786}, {memory_data,{520781824,514686976,{<0.7803.0>,13463144}}}, {disk_data, [{"/",10158008,25}, {"/dev",254084,1}, {"/run",50860,78}, {"/run/lock",5120,0}, {"/run/shm",254288,0}]}, {meminfo, <<"MemTotal: 508576 kB\nMemFree: 5272 kB\nBuffers: 320 kB\nCached: 6128 kB\nSwapCached: 39364 kB\nActive: 236360 kB\nInactive: 236912 kB\nActive(anon): 233460 kB\nInactive(anon): 233664 kB\nActive(file): 2900 kB\nInactive(file): 3248 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 508576 kB\nLowFree: 5272 kB\nSwapTotal: 524284 kB\nSwapFree: 0 kB\nDirty: 0 kB\nWriteback: 0 kB\nAnonPages: 427852 kB\nMapped: 3772 kB\nShmem: 4 kB\nSlab: 14832 kB\nSReclaimable: 5900 kB\nSUnreclaim: 8932 kB\nKernelStack: 1144 kB\nPageTables: 3064 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 778572 kB\nCommitted_AS: 1429336 kB\nVmallocTotal: 329720 kB\nVmallocUsed: 3072 kB\nVmallocChunk: 324428 kB\nDirectMap4k: 532480 kB\nDirectMap2M: 0 kB\n">>}]}] [error_logger:error,2012-10-16T10:41:31.018,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] Error in process <0.32384.88> on node 'ns_1@127.0.0.1' with exit value: {{badmatch, [4.140051e+01,2.156376e+00] },[{couch_spatial_updater,process_result,1},{couch_spatial_updater,'-process_results/1-fun-0-',2},{lists,foldl,3},{lists,map,2},{couch_spatial_updater,spatial_docs,4},{couch_spatial_updater,update,2}]} [error_logger:error,2012-10-16T10:41:31.019,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] ** Generic server <0.32354.88> terminating ** Last message in was {'EXIT',<0.32384.88>, {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater, '-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]}} ** When Server state == {group_state,<<"default/129">>, {"/opt/couchbase/var/lib/couchbase/data", <<"default/129">>, {spatial_group, <<155,78,10,252,86,203,121,94,106,184,229,209,71, 187,100,242>>, nil,nil,<<"_design/dev_products">>, 
<<"javascript">>,[], [{spatial,nil,0,nil,0, <<"function (doc) {\n if (doc.location) {\n emit(doc.location, null);\n}\n}">>, [<<"index">>] , 0,0,0,nil}], {[]}, nil,0,0}}, {spatial_group, <<155,78,10,252,86,203,121,94,106,184,229,209,71,187, 100,242>>, {db,<0.631.0>,<0.632.0>,nil,<<"1349906302437662">>, <0.628.0>,<0.633.0>, {db_header,10,1, <<0,0,0,0,33,248,0,0,0,0,0,83,0,0,0,0,1,0,0,0,0,0, 0,0,0,0,1,202>>, <<0,0,0,0,34,75,0,0,0,0,0,85,0,0,0,0,1>>, <<0,0,0,0,224,91,0,0,0,0,0,93>>, 0,nil,nil}, 1, {btree,<0.628.0>, {8696,<<0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,202>>,83}, #Fun<couch_db_updater.7.89001503>, #Fun<couch_db_updater.8.75953275>, #Fun<couch_btree.5.72034400>, #Fun<couch_db_updater.9.14108461>,1279,true}, {btree,<0.628.0>, {8779,<<0,0,0,0,1>>,85}, #Fun<couch_db_updater.10.50603258>, #Fun<couch_db_updater.11.85949495>, #Fun<couch_db_updater.6.41937156>, #Fun<couch_db_updater.12.107260449>,1279,true}, {btree,<0.628.0>, {57435,<<>>,93}, #Fun<couch_btree.3.59827385>, #Fun<couch_btree.4.7841881>, #Fun<couch_btree.5.72034400>,nil,1279,true}, 1,<<"default/129">>, "/opt/couchbase/var/lib/couchbase/data/default/129.couch.1", [],nil, {user_ctx,null,[],undefined}, nil, [before_header,after_header,on_file_open] , []}, <0.32358.88>,<<"_design/dev_products">>, <<"javascript">>,[], [{spatial,nil,0,nil,0, <<"function (doc) {\n if (doc.location) {n emit(doc.location, null);n}\n}">>, [<<"index">>] , 0,0,0,<0.32358.88>}], {[]}, {btree,<0.32358.88>,nil, #Fun<couch_btree.0.59827385>, #Fun<couch_btree.1.7841881>, #Fun<couch_btree.2.72034400>,nil,1279,false}, 0,0}, <0.32384.88>,nil,false, {{<0.32371.88>,#Ref<0.0.3655.98068>},1} , <0.32362.88>} ** Reason for termination == ** {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} [error_logger:error,2012-10-16T10:41:31.023,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72] =========================CRASH REPORT========================= crasher: initial call: couch_spatial_group:init/1 pid: <0.32354.88> registered_name: [] exception exit: {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} in function gen_server:terminate/6 ancestors: [couch_spatial,couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>] messages: [] links: [<0.32358.88>,<0.7410.0>] dictionary: [] trap_exit: true status: running heap_size: 1597 stack_size: 24 reductions: 3289 neighbours: [error_logger:error,2012-10-16T10:41:31.029,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_msg:76] ** Generic server <0.32358.88> terminating ** Last message in was {'EXIT',<0.32354.88>, {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater, '-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]}} ** When Server state == {file,<0.32360.88>,<0.32361.88>,39} ** Reason for termination == ** {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} 
[error_logger:error,2012-10-16T10:41:31.031,ns_1@127.0.0.1:error_logger:ale_error_logger_handler:log_report:72] =========================CRASH REPORT========================= crasher: initial call: couch_ file:init/1 pid: <0.32358.88> registered_name: [] exception exit: {{badmatch, [41.40050888,2.156376361] }, [{couch_spatial_updater,process_result,1}, {couch_spatial_updater,'-process_results/1-fun-0-',2}, {lists,foldl,3}, {lists,map,2}, {couch_spatial_updater,spatial_docs,4}, {couch_spatial_updater,update,2}]} in function gen_server:terminate/6 in call from couch_ file:init/1 ancestors: [<0.32354.88>,couch_spatial,couch_secondary_services, couch_server_sup,cb_couch_sup,ns_server_cluster_sup, <0.59.0>] messages: [] links: [<0.32362.88>] dictionary: [] trap_exit: true status: running heap_size: 610 stack_size: 24 reductions: 702 neighbours: neighbour: [{pid,<0.32362.88>}, {registered_name,[]} , {initial_call,{couch_ref_counter,init, ['Argument__1'] }}, {current_function,{gen_server,loop,6}}, {ancestors,[<0.32354.88>,couch_spatial, couch_secondary_services,couch_server_sup, cb_couch_sup,ns_server_cluster_sup,<0.59.0>]} , {messages,[]} , {links,[<0.32358.88>]} , {dictionary,[]} , {trap_exit,false} , {status,waiting} , {heap_size,377} , {stack_size,9} , {reductions,110} ]
        Pathe Patrick added a comment -

        This is the latest output on Mac OS:

        {"error":"{read_loop_died,\n {problem_reopening_file,\n

        {error,system_limit},\n {read,99458,{<0.14639.0>,#Ref<0.0.85.154785>}},\n <0.7420.0>,\n \"/Users/patrick/Library/Application Support/Couchbase/var/lib/couchdb/default/master.couch.1\",\n 10}}","reason":"{gen_server,call,[<0.7419.0>,{pread_iolist,99458},infinity]}"},

        followed after a refresh by the system_limit-loop:

        {"total_rows":0,"rows":[
        ],
        "errors":[
        {"from":"local","reason":"{error,system_limit}

        "},
        {"from":"local","reason":"

        {error,system_limit}"},
        {"from":"local","reason":"{error,system_limit}

        "},
        {"from":"local","reason":"

        {error,system_limit}

        "},

        farshid Farshid Ghods (Inactive) added a comment -

        Volker

        are there any other experiments you want QE to run to make progress on this ticket?

        vmx Volker Mische added a comment -

        Patrick, could it be that not all your doc.locations are valid GeoJSON?

        vmx Volker Mische added a comment -

        Farshid: yes, please try it again with ERL_MAX_PORTS set to a higher value, as Dustin suggested.
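
        For anyone trying that, a minimal sketch of what it looks like in the shell that launches the server (the 10000 value is only an example, and the shell's open-file limit usually has to be raised at the same time):

        # sketch: raise limits before starting Couchbase Server from this shell
        export ERL_MAX_PORTS=10000
        ulimit -n 10000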

        Pathe Patrick added a comment -

        Volker, I tested with one document in the database, which only has location: [41.386944, 2.170025] set. In the other bucket there are 10 documents, all with an array of two values: location: [lat, lon].

        vmx Volker Mische added a comment -

        Patrick, good, then that's the problem: it's not valid GeoJSON. The emit in your map function would need to look like:

        emit({type: "Point", coordinates: doc.location}, null);

        Pathe Patrick added a comment -

        Oh crap. Sorry about that. I lost all my views with the 2.0 upgrade and didn't remember how I wrote them. Maybe a good idea to change the default spatial function to that emit in future releases?

        vmx Volker Mische added a comment -

        Patrick, the solution is better error messages instead of crashing. It's on my TODO list.

        For all others, Patrick's crashes were a different problem, though the original issue is still there.

        FilipeManana Filipe Manana (Inactive) added a comment -

        I've tried it out on my MacBook, running OS X Lion (10.7).
        Followed the instructions to raise the max open files limit from http://docs.basho.com/riak/latest/cookbooks/Open-Files-Limit/#Mac OS X and set and exported ERL_MAX_PORTS to 10000.

        Not sure why, but I get EMFILE errors (too many open files), even though ulimit -n reports 10000.

        This is all like mapreduce indexes were over a year ago: one small index per vbucket database, then results from all of them are merged at query time. For this case (single node) it means 1024 files open for the vbucket databases (1 to 1 mapping) plus 2 file descriptors per vbucket spatial index file. In other words, you need at least 3K open files to query a spatial view on a single node (+1 for the client to server TCP connection, etc).
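
        A quick way to sanity-check that count on a running single node is to look at the beam process's open descriptors, e.g. (a sketch; assumes the Erlang VM process is beam.smp and that pgrep/lsof are installed):

        # count open file descriptors held by the Couchbase Erlang VM
        lsof -p "$(pgrep -f beam.smp | head -n 1)" | wc -l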

        With DP4 and past versions, because the default number of vbuckets was 256 (and not 1024 like in beta and builds post DP4), this was probably doable in OS X or with default settings of most Linux distributions. If I recall correctly, on Ubuntu the default maximum for open files for a user is 1K or 2K.

        See:

        http://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user

        about how to increase it on Linux/Ubuntu.
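
        For reference, the usual shape of that change is a limits.conf entry plus a re-login, roughly as follows (the user name and the 10240 value are just examples; use the account the server actually runs as):

        # /etc/security/limits.conf
        couchbase  soft  nofile  10240
        couchbase  hard  nofile  10240

        # verify from a fresh shell for that user
        ulimit -n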

        vmx Volker Mische added a comment -

        So where do we go from here? I lean towards mentioning in the docs that you should decrease the number of vBuckets.

        farshid Farshid Ghods (Inactive) added a comment -

        We can change the number of vbuckets for the Mac release, but that has a bigger impact, such as on SDKs, and we'd need to make sure all scripts and external customers are aware.

        dipti Dipti Borkar added a comment -

        changing the number of vBuckets is a much bigger change.

        how common is this crash?

        vmx Volker Mische added a comment -

        I like the idea of changing the default number of vBuckets, as I think it's way too high if you run a single instance (also, rebalancing is really slow with 1024 vBuckets).

        Though I really see the point that changing it is too big a step. Hence I would rather spend time on making it easier to change the number of vBuckets and put up good documentation about it, so people who really want to use the spatial index can just decrease the number. That would then help on all platforms, as I think you could also easily hit the limit on Ubuntu if you have the default settings.

        FilipeManana Filipe Manana (Inactive) added a comment -

        @Dipti, perhaps my comments before were not very explicit.
        It means that with the default settings of OS X, Ubuntu, etc., you will never be able to query spatial views, at least for the single-node case, and likely for 2-node and maybe 3-node clusters.

        On Linux it's easy to change, so it gets documented. On OS X (and Windows almost for sure), someone more skilled in those OSes might know how far we can configure them to allow more open file descriptors.

        This shouldn't be a surprise for QE, as the exact same issue happened with regular map views over a year ago, and it affected DP1 and DP2 (and the demo for the first CouchConf SFO).

        steve Steve Yen added a comment -

        This looks like it will not make 2.0. Let's talk in next "daily" 2.0 bug scrub w/ PM mtg. .next?

        For regular map-reduce indexes, iirc, Filipe made fixes to limit # of file descriptors so it doesn't run into this problem.

        However, spatial indexes don't have that fix, it seems.

        FilipeManana Filipe Manana (Inactive) added a comment -

        I didn't make any fixes Steve.
        It was a side effect of the b-superstar design -> 1 index file vs N index files.

        steve Steve Yen added a comment -

        Tweaking the summary, as the comments indicate it's not just a Mac-only thing.

        farshid Farshid Ghods (Inactive) added a comment -

        Reminder for QE to try out this use case on a Windows installation.

        dipti Dipti Borkar added a comment -

        Reducing the number of vBuckets to 64 might be the best workaround for Mac OS X. The Dev and QE teams should consider this and provide feedback on whether it is possible, including testing, for 2.0.

        steve Steve Yen added a comment -

        From Farshid's / Dipti's comments, it seems like we need a recommendation from QE and dev – assigning to Farshid instead of Dipti.

        farshid Farshid Ghods (Inactive) added a comment -

        In order to change the number of vbuckets to 64, we need to change this file:

        https://github.com/couchbase/couchdbx-app/blob/master/Couchbase%20Server/start-couchbase.sh

        and the COUCHBASE_NUM_VBUCKETS
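
        Roughly, the change boils down to exporting the variable before ns_server starts, e.g. (a sketch; where exactly start-couchbase.sh sets it is the part that needs editing, and it only affects buckets created after the change):

        # sketch: run with 64 vbuckets on a dev box
        export COUCHBASE_NUM_VBUCKETS=64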

        steve Steve Yen added a comment -

        options...

        • ship with 64 vbuckets as default. And, need to add checks if a user attempts rebalance/XDCR between clusters with different # of vbuckets.
        • ship with 1024 vbuckets as default and have instructions on how to either change the vbucket count to a lower value (64?) or change system limits. And, document warnings about rebalance/XDCR between clusters of mismatched # vbuckets. (Damien favors this.)
        siri Sriram Melkote added a comment -

        On OS X, I bumped up kern.maxfiles and kern.maxfilesperproc in sysctl, and then increased the launchd maxfiles limit. Then my Erlang install (not from the Couchbase installer) ran into the FD_SETSIZE limit, at which point I gave up and switched to 64 vBuckets.
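
        For reference, the commands involved are roughly the following (values are examples only; as noted above, the Erlang VM's FD_SETSIZE ceiling still applies, so this alone may not be enough):

        # sketch: raise OS X file-descriptor limits system-wide and per process
        sudo sysctl -w kern.maxfiles=65536
        sudo sysctl -w kern.maxfilesperproc=65536
        sudo launchctl limit maxfiles 65536 65536
        ulimit -n 65536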

        dipti Dipti Borkar added a comment -

        Did it work well after switching to 64 vbuckets, Sriram?

        steve Steve Yen added a comment -

        reviewed in bug-scrub mtg...

        consensus is to set it to 64 vbuckets for OSX.

        steve Steve Yen added a comment -

        reassigning to Jens to set that OSX env variable, per bug scrub.

        siri Sriram Melkote added a comment -

        Dipti - yes, spatial view worked properly on OS X with 64 vBuckets

        jens Jens Alfke added a comment -

        Patch is out for review: http://review.couchbase.org/#/c/22242/

        vmx Volker Mische added a comment -

        John Zablocki just reported that GeoCouch also crashes on Windows. We might want to decrease the number of vBuckets there as well.

        http://www.couchbase.com/issues/browse/GC-4

        dipti Dipti Borkar added a comment -

        This needs to be documented. Jens, please assign the bug to MC once you have merged the change.

        MC, we will need to add big "warning" signs on the Mac OS X install doc pages, saying that it is not compatible with other platforms. Given that it is a developer-only platform, the number of vBuckets is set to 64 by default, so mixed clusters with other platforms will not work. Replicating data using XDCR to / from one cluster with 1024 vBuckets (Linux, Windows) to / from a cluster with 64 vBuckets will NOT work.

        farshid Farshid Ghods (Inactive) added a comment -

        merged http://review.couchbase.org/#/c/22242/

        Build 1939 will have this change.

        mikew Mike Wiederhold added a comment -

        MC, please see Dipti's comment for what needs to be documented.

        jens Jens Alfke added a comment -

        FYI, I have asked a question on an Apple mailing list about why we can't seem to set the file-descriptor limit high enough. Hopefully someone will have a good answer that will let me work around the problem.

        farshid Farshid Ghods (Inactive) added a comment - http://builds.hq.northscale.net/latestbuilds/couchbase-server-community_x86_64_2.0.0-1939-rel.zip
        vmx Volker Mische added a comment -

        I've tried to add a spatial view on Windows on the beer sample data set. It looks like:

        function (doc, meta) {
          if (doc.geo) {
            emit({"type": "Point", "coordinates": [doc.geo.lng, doc.geo.lat]}, meta.id);
          }
        }

        It worked without a problem. I tried it with another one (with some other emit value) and it was still working, so I'd say Windows is good to go. Perhaps we should try to find out how many spatial views you can have before Windows freaks out. It would be cool if it weren't me, as using Windows via remote desktop is really painfully slow.
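
        For anyone repeating the test, the view can be queried with a plain bbox request against port 8092, along these lines (the design document and view names are just examples; adjust bucket and host as needed):

        curl 'http://localhost:8092/beer-sample/_design/dev_geo/_spatial/points?bbox=-180,-90,180,90'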

        farshid Farshid Ghods (Inactive) added a comment -

        In a separate email Farshid has asked Iryna and Deep today to run similar tests on Windows. I will update the ticket by Wednesday.

        mccouch MC Brown (Inactive) added a comment -

        I've added a note to the Mac OS X requirements page noting the incompatibility and how to migrate data. I've also added a note to the Mac OS X install page and repeated the issue and warning.

        vmx Volker Mische added a comment -

        I dare to reopen it and assign it to myself. There still needs to be a decision made on how to handle Windows.

        farshid Farshid Ghods (Inactive) added a comment -

        Assigning this to Iryna pending results from testing on Windows.

        Iryna, if you are unable to access Windows on EC2 or mview, please let Deep know and he will run the tests.

        john John Zablocki (Inactive) added a comment - - edited

        I'm still seeing this problem on Windows and an Ubuntu cluster. All I'm doing is running the query referenced in GEO-4 through the admin console and I get the errors below and eventually the node crashes.

        {"total_rows":0,"rows":[
        ],
        "errors":[
        {"from":"local","reason":"{noproc,{gen_server,call,[<0.26111.0>,

        {request_group,0},infinity]}}"},
        {"from":"local","reason":"{noproc,{gen_server,call,[<0.26146.0>,{request_group,0}

        ,infinity]}}"},
        {"from":"local","reason":"{noproc,{gen_server,call,[<0.26183.0>,

        {request_group,0},infinity]}}"},
        {"from":"local","reason":"{noproc,{gen_server,call,[<0.25515.0>,{request_group,0}

        ,infinity]}}"},
        {"from":"local","reason":"{noproc,{gen_server,call,[<0.23414.0>,

        {request_group,0}

        ,infinity]}}"},

        ...

        farshid Farshid Ghods (Inactive) added a comment -

        Per discussion in bug scrubbing, we need to create a separate bug for use cases that involve more than one spatial index and different platforms.

        deepkaran.salooja Deepkaran Salooja added a comment -

        For Windows, I get the error "reason":"{{system_limit with the 5th ddoc/view while querying. Up to 4 ddocs (1 unique view per ddoc) it works fine.

        For Linux, up to 15 ddocs/views (1 unique view per ddoc), view queries worked OK.

        vmx Volker Mische added a comment -

        This sounds good enough for me. I would just document that on Windows the Spatial Views are limited to 4 design docs with 1 view each. This should be enough to play with it.

        For Linux 10+ is also good enough. There the documentation could mention that it's an experimental feature that might fall apart when too many Spatial Views are defined.

        steve Steve Yen added a comment - - edited

        Per bug-scrub mtg: Farshid will follow up with John Z on whether he saw a problem with a single ddoc/spatial-index.

        farshid Farshid Ghods (Inactive) added a comment -

        John, we opened http://www.couchbase.com/issues/browse/MB-7109 to track the issue you are seeing.

        jchrisa J Chris Anderson [X] (Inactive) added a comment -

        Once I saw a line of code in the Erlang VM file handler, specific to Mac, that essentially hardcodes the limit at 1024. So unless we want to fix that in the Erlang VM, we made the right choice here by lowering the vbucket count on Mac.

        FilipeManana Filipe Manana (Inactive) added a comment -

        Yes, that's a well-known issue that was debated several times on the erlang-questions list:

        http://erlang.2086793.n4.nabble.com/clipping-of-max-of-file-descriptors-by-erts-when-kernel-poll-is-enabled-td2108706.html

        It doesn't seem it will be addressed any time soon.

        kzeller kzeller added a comment -

        Added to RN: For Mac OS X, we limit the number of vBuckets to 64 (down from 1024) due to limitations on Mac OS X file descriptors. In the past, this resulted in crashes on OS X.


          People

          • Assignee:
            iryna iryna
            Reporter:
            farshid Farshid Ghods (Inactive)
          • Votes:
            0
            Watchers:
            6


              Gerrit Reviews

              There are no open Gerrit changes