Couchbase Server / MB-1504

libconflate spin loops if REST server isn't streaming JSON


Details

    • Type: Bug
    • Resolution: Fixed
    • Affects Version: 1.6.0 beta2
    • Fix Version: 1.6.0 beta4
    • Component: moxi
    • Labels: None
    • Environment: Operating System: All
      Platform: All

    Description

      See http://groups.google.com/group/moxi/browse_thread/thread/4d7ba4abd1ef2e07

      partially reproduced here...

      On Jul 13, 6:46 pm, "steve.yen" <steve....@gmail.com> wrote:

      > Hi, you've found two things...

      > One is a bug – moxi should stop (or at least back off) if it sees a
      > bad config file. I've entered this into the internal bug tracking
      > system.

      > Another is your config is wrong, and is missing the vBucketMap
      > section...

      > 11311 =
      > {
      >   "hashAlgorithm": "CRC",
      >   "numReplicas": 0,
      >   "serverList": ["localhost:11211"],
      >   "vBucketMap": [
      >     [0],
      >     [0]
      >   ]
      > }

      > Actually, this brings up an important clarifying question: are you
      > trying to use moxi to proxy to a membase server or to memcached
      > server?

      Regular memcached server for now. We need to decrease the number of
      established connections to memcached servers. I plan to also try
      membase, but not right now.

      Yeah, I noticed that too. I got it to work with:
      11311 =
      {
        "hashAlgorithm": "CRC",
        "numReplicas": 0,
        "serverList": ["127.0.0.1:11214","127.0.0.1:11213","127.0.0.1:11212","127.0.0.1:11211"],
        "vBucketMap": [
          [0]
        ]
      }

      With that config, moxi works fine if launching with the config file:
      $ ./moxi -vvv -z /var/www/pools/default/bucketsStreaming/default

      I have a test PHP client that hammers the moxi proxy, and it works.
      Even better, I managed to configure it so the hash is compatible with
      the hash the PHP client currently uses.

      But again, when run in REST mode, the config is requested hundreds of
      times per second, and moxi crashes when I run the same test PHP
      client against it. Is the config fetched over REST supposed to be in a
      different format from the file format?
      $ ./moxi -vvv -z auth=,url=http://localhost:80/pools/default/
      bucketsStreaming/default,#@ -p 11311

      slab class 1: chunk size 96 perslab 10922
      slab class 2: chunk size 120 perslab 8738
      slab class 3: chunk size 152 perslab 6898
      slab class 4: chunk size 192 perslab 5461
      slab class 5: chunk size 240 perslab 4369
      slab class 6: chunk size 304 perslab 3449
      slab class 7: chunk size 384 perslab 2730
      slab class 8: chunk size 480 perslab 2184
      slab class 9: chunk size 600 perslab 1747
      slab class 10: chunk size 752 perslab 1394
      slab class 11: chunk size 944 perslab 1110
      slab class 12: chunk size 1184 perslab 885
      slab class 13: chunk size 1480 perslab 708
      slab class 14: chunk size 1856 perslab 564
      slab class 15: chunk size 2320 perslab 451
      slab class 16: chunk size 2904 perslab 361
      slab class 17: chunk size 3632 perslab 288
      slab class 18: chunk size 4544 perslab 230
      slab class 19: chunk size 5680 perslab 184
      slab class 20: chunk size 7104 perslab 147
      slab class 21: chunk size 8880 perslab 118
      slab class 22: chunk size 11104 perslab 94
      slab class 23: chunk size 13880 perslab 75
      slab class 24: chunk size 17352 perslab 60
      slab class 25: chunk size 21696 perslab 48
      slab class 26: chunk size 27120 perslab 38
      slab class 27: chunk size 33904 perslab 30
      slab class 28: chunk size 42384 perslab 24
      slab class 29: chunk size 52984 perslab 19
      slab class 30: chunk size 66232 perslab 15
      slab class 31: chunk size 82792 perslab 12
      slab class 32: chunk size 103496 perslab 10
      slab class 33: chunk size 129376 perslab 8
      slab class 34: chunk size 161720 perslab 6
      slab class 35: chunk size 202152 perslab 5
      slab class 36: chunk size 252696 perslab 4
      slab class 37: chunk size 315872 perslab 3
      slab class 38: chunk size 394840 perslab 2
      slab class 39: chunk size 493552 perslab 2
      worker_libevent thread_id 140693837612816
      worker_libevent thread_id 140693854398224
      worker_libevent thread_id 140693862790928
      worker_libevent thread_id 140693846005520
      <38 server listening (auto-negotiate)
      <38 initialized conn_funcs to default
      <39 server listening (auto-negotiate)
      <39 initialized conn_funcs to default
      cproxy_init jid: host: http://localhost:80/pools/default/bucketsStreaming/default
      dbpath: /usr/local/var/lib/moxi/conflate-default.cfg
      cproxy_init_agent_start
      cproxy_init done
      38: drive_machine conn_listening
      <41 new auto-negotiating client connection
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      41: Client using the ascii protocol
      <41 set test_moxi_0 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_0
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_1 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_1
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_2 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_2
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_3 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_3
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_4 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_4
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_5 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_5e 0 5

      moxi5
      efault
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_6 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_6
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_7 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_7e 0 5

      moxi7
      efault
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_8 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_8: 0 5

      moxi8
      Host: l
      >41 STORED

      41: going from conn_nread to conn_write
      41: drive_machine conn_write
      41: drive_machine conn_write
      41: going from conn_write to conn_new_cmd
      41: drive_machine conn_new_cmd
      41: going from conn_new_cmd to conn_waiting
      41: drive_machine conn_waiting
      41: going from conn_waiting to conn_read
      41: drive_machine conn_read
      41: going from conn_read to conn_parse_cmd
      41: drive_machine conn_parse_cmd
      <41 set test_moxi_9 0 0 5
      41: going from conn_parse_cmd to conn_nread
      41: drive_machine conn_nread
      41: drive_machine conn_nread
      > NOT FOUND test_moxi_9: 0 5

      moxi9
      Host: l
      >41 STORED

      41: going from conn_nread to conn_write
      41: ...

      ------------------

      From: "steve.yen" <steve....@gmail.com>
      Date: Wed, 14 Jul 2010 08:57:18 -0700 (PDT)
      Local: Wed, Jul 14 2010 8:57 am
      Subject: Re: Moxi management channel

      Hi,
      Yes, you've hit a manifestation of the same bug, or at least it's in
      the same part of the code that interacts with libcurl.

      The latest libconflate has a quick fix that should make REST requests
      happen less often (once a second rather than as fast as possible), and
      I hope that will be helpful. Please see:
      http://github.com/northscale/libconflate/commit/420faf47fa6c1ee3564f2...

      A better fix would be to do something more intelligent (backoff, etc).
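
      For illustration only (this is not the actual libconflate change), a
      fetch loop that never polls faster than once a second and backs off
      exponentially on bad configs might look roughly like this in C;
      fetch_config() is a hypothetical stand-in for fetching and parsing the
      REST config:

      /* backoff_sketch.c -- illustrative only, not libconflate's actual code.
       * Idea: poll no faster than once per second, and back off exponentially
       * while the REST config keeps failing, instead of spinning. */
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Hypothetical stand-in for "GET and parse the REST config";
       * here it simply fails a few times and then succeeds. */
      static bool fetch_config(void) {
          static int attempts = 0;
          return ++attempts > 3;
      }

      int main(void) {
          unsigned delay = 1;            /* floor: one request per second */
          const unsigned max_delay = 60; /* cap for the backoff */

          for (int i = 0; i < 6; i++) {
              if (fetch_config()) {
                  printf("config ok, resetting delay\n");
                  delay = 1;
              } else {
                  delay *= 2;
                  if (delay > max_delay) delay = max_delay;
                  printf("bad config, backing off to %us\n", delay);
              }
              sleep(delay);              /* sleep instead of spinning */
          }
          return 0;
      }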

      The complete solution, by the way, is the approach membase takes: its
      REST/web server component keeps the HTTP/REST connection open, in
      so-called "streaming" fashion. When the membase cluster-management
      component sees a cluster configuration change, it can actively notify
      clients like moxi. The benefit is that clients like moxi do not need
      to poll the REST server continually.

      Steve
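
      For reference, here is a rough sketch of what such a streaming consumer
      looks like with libcurl. This is only an illustration of the idea, not
      moxi's code: the connection is held open and every chunk the server
      pushes arrives in the write callback, so the client reacts to pushed
      configuration changes instead of re-polling. The URL is the one from
      this thread, and a real client would buffer chunks until a complete
      JSON config has arrived.

      /* streaming_sketch.c -- illustration of the "streaming" idea, not moxi code.
       * libcurl holds the HTTP connection open; every chunk the server pushes
       * is delivered to on_chunk(), so the client never has to re-poll. */
      #include <stdio.h>
      #include <curl/curl.h>

      static size_t on_chunk(char *data, size_t size, size_t nmemb, void *userdata) {
          (void)userdata;
          size_t len = size * nmemb;
          /* A real client would accumulate data until a complete JSON config
           * has arrived, parse it, and hand the new server list / vBucket map
           * to the proxy. Here we just echo the chunk. */
          fwrite(data, 1, len, stdout);
          return len;                    /* returning less aborts the transfer */
      }

      int main(void) {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          CURL *curl = curl_easy_init();
          if (!curl) return 1;

          curl_easy_setopt(curl, CURLOPT_URL,
              "http://localhost:80/pools/default/bucketsStreaming/default");
          curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_chunk);

          /* Blocks for as long as the server keeps the connection open. */
          CURLcode rc = curl_easy_perform(curl);
          if (rc != CURLE_OK)
              fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

          curl_easy_cleanup(curl);
          curl_global_cleanup();
          return 0;
      }

      A quick way to check whether an endpoint really streams is to run
      curl -N against the URL: if it prints the config and exits immediately,
      the server is closing the connection rather than streaming, which is
      the situation where older libconflate would re-request the config in a
      tight loop.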

      On Jul 14, 2:09 am, Guille bisho <bishi...@gmail.com> wrote:

      (The remainder of the message quotes Guille's earlier message, which is
      reproduced in full above.)

      From: "steve.yen" <steve....@gmail.com>
      Date: Wed, 14 Jul 2010 09:30:54 -0700 (PDT)
      Local: Wed, Jul 14 2010 9:30 am
      Subject: Re: Moxi management channel

      Also, on the moxi crash, I'll try to replicate what you did, but the
      most useful things would be stack backtraces, etc., if you have them.
      Thanks!
      Steve


          People

            Assignee: Steve Yen (steve.yen@northscale.com)
            Reporter: Steve Yen (steve.yen@northscale.com)
