Details
- Type: Bug
- Resolution: Cannot Reproduce
- Priority: Major
- Affects Version/s: 6.6.2
- Attachments: couchbase-server-enterprise-6.5.2-centos7.x86_64.rpm, couchbase-server-enterprise-6.6.2-centos7.x86_64.rpm
- Triage: Untriaged
- Operating System: Centos 64-bit
- 1
- Is this a Regression?: Unknown
Description
Steps to reproduce:
- Install 6.5.2-6634-enterprise.
- Initialize the single-node cluster with the following CLI command:
/opt/couchbase/bin/couchbase-cli cluster-init -c 10.112.212.101 --cluster-username Administrator --cluster-password password --services data --cluster-ramsize 512
- Create a Couchbase bucket with default parameters.
- Perform an offline upgrade to 6.6.2-9588-enterprise with the command:
service couchbase-server stop && rpm -U /vagrant/couchbase-server-enterprise-6.6.2-centos7.x86_64.rpm
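For convenience, the steps above can be collected into one script. This is a sketch: the bucket-create invocation and its bucket name/quota are illustrative assumptions, since the report only says the bucket uses default parameters; the init and upgrade commands are taken from the report verbatim.

```shell
#!/bin/sh
# Sketch of the reproduction steps from this report.
# Node IP, credentials, and rpm path are the values given in the description;
# the bucket-create arguments are assumed ("default params" in the report).
set -eu

NODE=10.112.212.101

init_cluster() {
    # Step 2: initialize the single-node cluster (data service only, 512 MB quota)
    /opt/couchbase/bin/couchbase-cli cluster-init -c "$NODE" \
        --cluster-username Administrator --cluster-password password \
        --services data --cluster-ramsize 512
}

create_bucket() {
    # Step 3 (assumed invocation): a Couchbase-type bucket with default-style settings
    /opt/couchbase/bin/couchbase-cli bucket-create -c "$NODE" \
        -u Administrator -p password \
        --bucket default --bucket-type couchbase --bucket-ramsize 256
}

offline_upgrade() {
    # Step 4: offline upgrade -- stop the service, then upgrade the rpm in place
    service couchbase-server stop
    rpm -U /vagrant/couchbase-server-enterprise-6.6.2-centos7.x86_64.rpm
}

usage() { echo "usage: repro.sh {init|bucket|upgrade}"; }

# Select the phase by argument so each step can be run separately.
case "${1:-}" in
    init)    init_cluster ;;
    bucket)  create_bucket ;;
    upgrade) offline_upgrade ;;
    *)       usage ;;
esac
```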
Observation:
Post-upgrade, the cluster enters an uninitialized state, although all data directories on the node are left intact. The following errors appear in info.log:
[user:info,2022-01-04T18:59:27.898+05:30,ns_1@10.112.212.103:ns_server_sup<0.290.0>:menelaus_sup:start_link:48]Couchbase Server has started on web port 8091 on node 'ns_1@10.112.212.103'. Version: "6.6.2-9588-enterprise".
[ns_server:warn,2022-01-04T18:59:27.987+05:30,ns_1@10.112.212.103:<0.454.0>:ns_memcached:connect:1168]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[ns_server:info,2022-01-04T18:59:27.993+05:30,ns_1@10.112.212.103:<0.458.0>:ns_memcached_log_rotator:init:42]Starting log rotator on "/opt/couchbase/var/lib/couchbase/logs"/"memcached.log"* with an initial period of 39003ms
[ns_server:warn,2022-01-04T18:59:28.009+05:30,ns_1@10.112.212.103:<0.466.0>:ns_memcached:connect:1168]Unable to connect: {error,{badmatch,{error,econnrefused}}}, retrying.
[user:info,2022-01-04T18:59:28.078+05:30,ns_1@10.112.212.103:mb_master<0.523.0>:mb_master:init:106]I'm the only node, so I'm the master.
[ns_server:info,2022-01-04T18:59:28.100+05:30,ns_1@10.112.212.103:mb_master_sup<0.525.0>:misc:start_singleton:857]start_singleton(gen_server, start_link, [{via,leader_registry,ns_tick}, ns_tick,[],[]]): started as <0.535.0> on 'ns_1@10.112.212.103'
[ns_server:info,2022-01-04T18:59:28.104+05:30,ns_1@10.112.212.103:<0.532.0>:leader_lease_acquire_worker:handle_fresh_lease_acquired:302]Acquired lease from node 'ns_1@10.112.212.103' (lease uuid: <<"e38bcae96a0df77e9caa3e855cbfcd04">>)
[ns_server:info,2022-01-04T18:59:28.112+05:30,ns_1@10.112.212.103:ns_orchestrator_child_sup<0.539.0>:misc:start_singleton:857]start_singleton(gen_server, start_link, [{via,leader_registry,auto_reprovision}, auto_reprovision,[],[]]): started as <0.541.0> on 'ns_1@10.112.212.103'
[ns_server:info,2022-01-04T18:59:28.115+05:30,ns_1@10.112.212.103:ns_orchestrator_child_sup<0.539.0>:misc:start_singleton:857]start_singleton(gen_server, start_link, [{via,leader_registry,auto_rebalance}, auto_rebalance,[],[]]): started as <0.542.0> on 'ns_1@10.112.212.103'
[ns_server:info,2022-01-04T18:59:28.116+05:30,ns_1@10.112.212.103:ns_orchestrator_child_sup<0.539.0>:misc:start_singleton:857]start_singleton(gen_statem, start_link, [{via,leader_registry,ns_orchestrator}, ns_orchestrator,[],[]]): started as <0.543.0> on 'ns_1@10.112.212.103'
[user:info,2022-01-04T18:59:28.121+05:30,ns_1@10.112.212.103:<0.545.0>:auto_failover:handle_call:216]Enabled auto-failover with timeout 120 and max count 1
[ns_server:info,2022-01-04T18:59:28.128+05:30,ns_1@10.112.212.103:ns_orchestrator_sup<0.536.0>:misc:start_singleton:857]start_singleton(gen_server, start_link, [{via,leader_registry,auto_failover}, auto_failover,[],[]]): started as <0.545.0> on 'ns_1@10.112.212.103'
[ns_server:info,2022-01-04T18:59:28.129+05:30,ns_1@10.112.212.103:mb_master_sup<0.525.0>:misc:start_singleton:857]start_singleton(work_queue, start_link, [{via,leader_registry,collections}]): started as <0.548.0> on 'ns_1@10.112.212.103'
[ns_server:info,2022-01-04T18:59:28.213+05:30,ns_1@10.112.212.103:mb_master_sup<0.525.0>:misc:start_singleton:857]start_singleton(gen_server, start_link, [{via,leader_registry,license_reporting}, license_reporting,[],[]]): started as <0.555.0> on 'ns_1@10.112.212.103'
|
Note: This issue is not seen when the cluster is initialized manually through the UI by providing the hostname "10.112.212.101".
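One way to confirm the uninitialized state from the command line is to inspect the body of GET http://<node>:8091/pools: on an uninitialized node the "pools" list in the response is empty. The glob matching below is a rough sketch standing in for real JSON parsing, and the sample bodies are assumed shapes, not captures from this cluster.

```shell
# Rough sketch: classify a node from the JSON body of
#   curl -s http://<node>:8091/pools
# An uninitialized node reports "pools":[] in that body.
# (Shell glob matching stands in for a real JSON parser here.)
classify_node() {
    body="$1"
    case "$body" in
        *'"pools":[]'*) echo "uninitialized" ;;
        *)              echo "initialized"   ;;
    esac
}

classify_node '{"isAdminCreds":false,"pools":[]}'                 # -> uninitialized
classify_node '{"pools":[{"name":"default","uri":"/pools/default"}]}'  # -> initialized
```

Running this against the node right after the rpm upgrade should print "uninitialized" if it reproduces the reported state.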