Couchbase Server / MB-7749

[system test] rebalance does not stop if one node is down

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 2.0.1
    • Fix Version/s: 2.1.0
    • Component/s: ns_server
    • Security Level: Public
    • Labels:
    • Environment:
      unix

      Description

      Environment:

      • Both the source and destination clusters are on 2.0.0 GA
      • 2-node cluster at source with 2 buckets, each with one design doc and 3 views per design doc
      • 2-node cluster at destination with 2 buckets

      Load 1M items into both buckets.
      Do an online upgrade of the source cluster using swap rebalance:
      add node ubu-2509 with build 2.0.1-152 to the source cluster and remove one 2.0.0 node.
      Rebalance. The step failed: node ubu-2509 went down because the operating system killed beam.smp.

      The rebalance did not stop while node ubu-2509 was down.

      collect_info output from all nodes of the source cluster:
      https://s3.amazonaws.com/packages.couchbase/collect_info/2_0_1/201302/3nodes-online-upgrade-src-os-kill-beam-201-node.tgz
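
      For reference, the swap-rebalance step can be driven through the ns_server REST API. The sketch below is illustrative only: /controller/addNode and /controller/rebalance are the standard endpoints, but the credentials and the choice of which 2.0.0 node to eject are placeholders, not the exact test setup.

      import base64
      import urllib.parse
      import urllib.request

      BASE = "http://cen-2501.hq.couchbase.com:8091"                # any node already in the cluster
      TOKEN = base64.b64encode(b"Administrator:password").decode()  # placeholder credentials

      def post(path, params):
          # POST form-encoded parameters to an ns_server REST endpoint.
          req = urllib.request.Request(BASE + path,
                                       data=urllib.parse.urlencode(params).encode())
          req.add_header("Authorization", "Basic " + TOKEN)
          return urllib.request.urlopen(req)

      # Step 1: add the new 2.0.1 node to the source cluster.
      post("/controller/addNode", {"hostname": "ubu-2509.hq.couchbase.com",
                                   "user": "Administrator",    # placeholder
                                   "password": "password"})    # placeholder

      # Step 2: swap rebalance -- eject one of the old 2.0.0 nodes in the same pass.
      post("/controller/rebalance", {
          "knownNodes": "ns_1@cen-2501.hq.couchbase.com,"
                        "ns_1@cen-2503.hq.couchbase.com,"
                        "ns_1@ubu-2509.hq.couchbase.com",
          "ejectedNodes": "ns_1@cen-2503.hq.couchbase.com",  # placeholder choice of node
      })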


        Activity

        farshid Farshid Ghods (Inactive) added a comment -

        Did the rebalance start? I see progress is set to 0 percent.

        Also, did you check back in a minute to see if it has stopped?
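
        One way to answer both questions from outside the UI is to poll the standard rebalance status endpoint. A minimal sketch, assuming a placeholder host and credentials; GET /pools/default/rebalanceProgress reports "running" (with per-node progress) while a rebalance is in flight and "none" once it stops:

        import base64
        import json
        import time
        import urllib.request

        BASE = "http://cen-2501.hq.couchbase.com:8091"                # any cluster node
        TOKEN = base64.b64encode(b"Administrator:password").decode()  # placeholder credentials

        def rebalance_status():
            # GET the rebalance progress endpoint with HTTP basic auth.
            req = urllib.request.Request(BASE + "/pools/default/rebalanceProgress")
            req.add_header("Authorization", "Basic " + TOKEN)
            return json.load(urllib.request.urlopen(req))

        for _ in range(10):            # check back every 30 s for five minutes
            status = rebalance_status()
            print(status.get("status"), status)
            time.sleep(30)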

        thuan Thuan Nguyen added a comment -

        It did start rebalancing, and the system stayed in that state for hours without the rebalance failing.

        thuan Thuan Nguyen added a comment -

        Node went down at 2013-02-13 17:19:02

        [ns_server:info,2013-02-13T17:19:02.177,ns_1@cen-2501.hq.couchbase.com:<0.10852.50>:misc:start_singleton:851]<0.10852.50> saw ns_tick exit (was pid <21092.2321.9>).
        [user:warn,2013-02-13T17:19:02.177,ns_1@cen-2501.hq.couchbase.com:ns_node_disco<0.13144.18>:ns_node_disco:handle_info:168]Node 'ns_1@cen-2501.hq.couchbase.com' saw that node 'ns_1@ubu-2509.hq.couchbase.com' went down.
        [ns_server:warn,2013-02-13T17:19:02.177,ns_1@cen-2501.hq.couchbase.com:capi_set_view_manager-sasl<0.13176.18>:capi_set_view_manager:handle_info:345]Remote server node {'capi_ddoc_replication_srv-sasl', 'ns_1@ubu-2509.hq.couchbase.com'} process down: noconnection
        [ns_server:info,2013-02-13T17:19:02.177,ns_1@cen-2501.hq.couchbase.com:<0.10854.50>:misc:start_singleton:851]<0.10854.50> saw auto_failover exit (was pid <21092.2322.9>).
        [ns_server:info,2013-02-13T17:19:02.178,ns_1@cen-2501.hq.couchbase.com:janitor_agent-sasl<0.27548.9>:janitor_agent:handle_info:676]Undoing temporary vbucket states caused by rebalance
        [rebalance:info,2013-02-13T17:19:02.180,ns_1@cen-2501.hq.couchbase.com:<0.10395.50>:ebucketmigrator_srv:do_confirm_sent_messages:684]Got close ack!

        [rebalance:info,2013-02-13T17:19:02.180,ns_1@cen-2501.hq.couchbase.com:<0.10578.50>:ebucketmigrator_srv:do_confirm_sent_messages:684]Got close ack!

        [rebalance:info,2013-02-13T17:19:02.187,ns_1@cen-2501.hq.couchbase.com:<0.10448.50>:ebucketmigrator_srv:do_confirm_sent_messages:684]Got close ack!

          • Until 2013-02-14 10:51:52, the cluster was still waiting for node ubu-2509 to come back up and had not failed the rebalance:

        [ns_server:error,2013-02-14T10:50:32.212,ns_1@cen-2501.hq.couchbase.com:<0.27104.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:error,2013-02-14T10:50:47.213,ns_1@cen-2501.hq.couchbase.com:<0.27180.68>:ns_janitor:cleanup_with_states:92]Bucket "default" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:error,2013-02-14T10:50:51.598,ns_1@cen-2501.hq.couchbase.com:<0.25381.9>:ns_memcached:verify_report_long_call:297]call topkeys took too long: 581304 us
        [ns_server:info,2013-02-14T10:50:53.224,ns_1@cen-2501.hq.couchbase.com:<0.10861.50>:ns_orchestrator:handle_info:282]Skipping janitor in state janitor_running: {janitor_state, ["sasl"], <0.27216.68>}
        [ns_server:error,2013-02-14T10:50:53.231,ns_1@cen-2501.hq.couchbase.com:<0.27216.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [stats:warn,2013-02-14T10:50:53.233,ns_1@cen-2501.hq.couchbase.com:system_stats_collector<0.25227.9>:system_stats_collector:handle_info:133]lost 1 ticks
        [stats:warn,2013-02-14T10:50:53.257,ns_1@cen-2501.hq.couchbase.com:<0.25430.9>:stats_collector:latest_tick:223]Dropped 1 ticks
        [stats:warn,2013-02-14T10:50:53.262,ns_1@cen-2501.hq.couchbase.com:<0.25397.9>:stats_collector:latest_tick:223]Dropped 1 ticks
        [ns_server:info,2013-02-14T10:50:53.275,ns_1@cen-2501.hq.couchbase.com:<0.27263.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `default`
        [ns_server:info,2013-02-14T10:50:53.280,ns_1@cen-2501.hq.couchbase.com:<0.27263.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket default with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:info,2013-02-14T10:50:53.322,ns_1@cen-2501.hq.couchbase.com:<0.27273.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `sasl`
        [ns_server:info,2013-02-14T10:50:53.325,ns_1@cen-2501.hq.couchbase.com:<0.27273.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket sasl with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:error,2013-02-14T10:50:53.422,ns_1@cen-2501.hq.couchbase.com:<0.25403.9>:ns_memcached:verify_report_long_call:297]call topkeys took too long: 1799687 us
        [ns_server:info,2013-02-14T10:51:03.701,ns_1@cen-2501.hq.couchbase.com:ns_config_rep<0.13986.18>:ns_config_rep:do_pull:341]Pulling config from: 'ns_1@cen-2503.hq.couchbase.com'

        [ns_server:error,2013-02-14T10:51:07.203,ns_1@cen-2501.hq.couchbase.com:<0.27331.68>:ns_janitor:cleanup_with_states:92]Bucket "default" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:51:12.196,ns_1@cen-2501.hq.couchbase.com:<0.10861.50>:ns_orchestrator:handle_info:282]Skipping janitor in state janitor_running: {janitor_state, ["sasl"], <0.27368.68>}
        [ns_server:error,2013-02-14T10:51:12.209,ns_1@cen-2501.hq.couchbase.com:<0.27368.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:51:13.029,ns_1@cen-2501.hq.couchbase.com:ns_config_rep<0.13986.18>:ns_config_rep:do_pull:341]Pulling config from: 'ns_1@cen-2503.hq.couchbase.com'

        [ns_server:error,2013-02-14T10:51:16.550,ns_1@cen-2501.hq.couchbase.com:<0.25381.9>:ns_memcached:verify_report_long_call:297]call topkeys took too long: 547077 us
        [ns_server:info,2013-02-14T10:51:23.373,ns_1@cen-2501.hq.couchbase.com:<0.27459.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `default`
        [ns_server:info,2013-02-14T10:51:23.376,ns_1@cen-2501.hq.couchbase.com:<0.27459.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket default with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:info,2013-02-14T10:51:23.428,ns_1@cen-2501.hq.couchbase.com:<0.27466.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `sasl`
        [ns_server:info,2013-02-14T10:51:23.431,ns_1@cen-2501.hq.couchbase.com:<0.27466.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket sasl with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:error,2013-02-14T10:51:27.205,ns_1@cen-2501.hq.couchbase.com:<0.27441.68>:ns_janitor:cleanup_with_states:92]Bucket "default" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:51:32.196,ns_1@cen-2501.hq.couchbase.com:<0.10861.50>:ns_orchestrator:handle_info:282]Skipping janitor in state janitor_running: {janitor_state, ["sasl"], <0.27489.68>}
        [ns_server:error,2013-02-14T10:51:32.240,ns_1@cen-2501.hq.couchbase.com:<0.27489.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:error,2013-02-14T10:51:47.241,ns_1@cen-2501.hq.couchbase.com:<0.27588.68>:ns_janitor:cleanup_with_states:92]Bucket "default" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:51:52.196,ns_1@cen-2501.hq.couchbase.com:<0.10861.50>:ns_orchestrator:handle_info:282]Skipping janitor in state janitor_running: {janitor_state, ["sasl"], <0.27622.68>}
        [ns_server:error,2013-02-14T10:51:52.248,ns_1@cen-2501.hq.couchbase.com:<0.27622.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:51:53.474,ns_1@cen-2501.hq.couchbase.com:<0.27664.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `default`
        [ns_server:info,2013-02-14T10:51:53.477,ns_1@cen-2501.hq.couchbase.com:<0.27664.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket default with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:info,2013-02-14T10:51:53.510,ns_1@cen-2501.hq.couchbase.com:<0.27671.68>:compaction_daemon:try_to_cleanup_indexes:439]Cleaning up indexes for bucket `sasl`
        [ns_server:info,2013-02-14T10:51:53.514,ns_1@cen-2501.hq.couchbase.com:<0.27671.68>:compaction_daemon:spawn_bucket_compactor:404]Compacting bucket sasl with config:
        [{database_fragmentation_threshold,{30,undefined}},
        {view_fragmentation_threshold,{30,undefined}}]
        [ns_server:info,2013-02-14T10:51:53.654,ns_1@cen-2501.hq.couchbase.com:ns_config_rep<0.13986.18>:ns_config_rep:do_pull:341]Pulling config from: 'ns_1@cen-2503.hq.couchbase.com'

        [ns_server:error,2013-02-14T10:52:07.215,ns_1@cen-2501.hq.couchbase.com:<0.27710.68>:ns_janitor:cleanup_with_states:92]Bucket "default" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
        [ns_server:info,2013-02-14T10:52:12.196,ns_1@cen-2501.hq.couchbase.com:<0.10861.50>:ns_orchestrator:handle_info:282]Skipping janitor in state janitor_running: {janitor_state, ["sasl"], <0.27745.68>}
        [ns_server:error,2013-02-14T10:52:12.226,ns_1@cen-2501.hq.couchbase.com:<0.27745.68>:ns_janitor:cleanup_with_states:92]Bucket "sasl" not yet ready on ['ns_1@ubu-2509.hq.couchbase.com']
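
        For context, the behavior the test expected can be sketched as an external watchdog: if the rebalance is still reported as running while a node is unhealthy, stop it instead of waiting indefinitely. This is illustrative only, not ns_server's internal logic; /pools/default, /pools/default/rebalanceProgress, and /controller/stopRebalance are the standard REST endpoints, while the host and credentials are placeholders.

        import base64
        import json
        import time
        import urllib.request

        BASE = "http://cen-2501.hq.couchbase.com:8091"                # any cluster node
        TOKEN = base64.b64encode(b"Administrator:password").decode()  # placeholder credentials

        def get_json(path):
            req = urllib.request.Request(BASE + path)
            req.add_header("Authorization", "Basic " + TOKEN)
            return json.load(urllib.request.urlopen(req))

        def stop_rebalance():
            # POST with an empty body to the standard stop endpoint.
            req = urllib.request.Request(BASE + "/controller/stopRebalance", data=b"")
            req.add_header("Authorization", "Basic " + TOKEN)
            urllib.request.urlopen(req)

        while True:
            running = get_json("/pools/default/rebalanceProgress").get("status") == "running"
            down = [n["hostname"] for n in get_json("/pools/default")["nodes"]
                    if n.get("status") != "healthy"]
            if running and down:
                print("Rebalance still running with unreachable nodes:", down)
                stop_rebalance()
                break
            time.sleep(10)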

        farshid Farshid Ghods (Inactive) added a comment -

        Tony, please update the ticket if you are able to reproduce this again on the latest 2.0.1 build

        thuan Thuan Nguyen added a comment -

        Tested an offline upgrade from 2.0.0-1976 to build 2.0.1-160. I am not able to reproduce this bug, so I will close it.

        thuan Thuan Nguyen added a comment -

        Cannot reproduce on the latest build, 2.0.1-160.


          People

           • Assignee: Ketaki Gangal
           • Reporter: Thuan Nguyen
           • Votes: 0
           • Watchers: 2

            Dates

            • Created:
              Updated:
              Resolved:

              Gerrit Reviews

              There are no open Gerrit changes