Details
Description
Hi Jeff,
I ran into issues when I tried to switch the .NET sdkd tests to SSL.
I did nothing special in the sdkd-net framework;
in short, I create a BucketConfiguration with SSL enabled:
var bucketConfiguration = new BucketConfiguration();
bucketConfiguration.BucketName = jObj[StringConst.HANDLE_BUCKET].ToString();
bucketConfiguration.UseSsl = jObj[StringConst.HANDLE_OPTIONS][StringConst.HANDLE_SSL].ToBool();
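For context, a minimal sketch of how I'd expect that configuration to be consumed by the 2.0 client (assuming the public ClientConfiguration/Cluster API; the bucket name and server URI below are illustrative, not taken from sdkd-net):

```csharp
using System;
using System.Collections.Generic;
using Couchbase;
using Couchbase.Configuration.Client;

// Sketch only: assumes the Couchbase .NET client 2.0-style API.
var clientConfig = new ClientConfiguration
{
    UseSsl = true,  // cluster-wide SSL switch
    Servers = new List<Uri> { new Uri("http://10.3.121.134:8091/pools") }
};
clientConfig.BucketConfigs["default"] = new BucketConfiguration
{
    BucketName = "default",
    UseSsl = true   // per-bucket SSL switch
};

var cluster = new Cluster(clientConfig);
using (var bucket = cluster.OpenBucket("default"))
{
    // In the failing runs, bootstrap never completes when UseSsl is true,
    // so execution never reaches this point.
}
```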
Then I see that my tests get stuck on:
C:\jenkins\workspace-net2\sdkdclient-ng>call packages\sdkdclient\bin\brun.bat -I cluster_config.ini -I sdkd.args --variants HYBRID -d all:debug --no-upload
[0,14 INFO] (BRun run:435) Initializing history database
Will create SASL????true
Will create SASL????true
Will create SASL????true
Will create SASL????true
============================================================
Running Rb2Out-HYBRID
rebalance/mode=out
workload=HybridWorkloadGroup
rebalance/count=2
testcase=RebalanceScenario
Logging to C:\jenkins\workspace-net2\sdkdclient-ng\log\SDK-SDK\CB-3.0.0-933\Rb2Out-HYBRID\07-15-14\047482\e6fbe5361d1427bb464cf149d253458b
To re-run the test, copy/paste the following into the shell.
You may also copy/paste (except the first line) into an argfile
8<----------------------------------------
./stester \
    --rebalance-mode out --workload HybridWorkloadGroup \
    --rebalance-count 2 --testcase RebalanceScenario -C share\rexec \
    --rexec_path C:\temp\sdkd-out-debug\SdkdConsole.exe --rexec_port \
    8675 --cluster_node=10.3.121.134 --cluster_node=10.3.121.135 \
    --cluster_node=10.3.121.136 --cluster_node=10.3.3.206 \
    --cluster_ssh-username=root --cluster_ssh-password=couchbase \
    --cluster_useSSL=True
---------------------------------------->8
[1,41 WARN] (Drivers getDriver:76) 'rexec' is now mapped to local execution only. Use RemoteExecutingDriver for remote execution
Will create SASL????true
Will create SASL????true
Will create SASL????true
Will create SASL????true
[1,79 INFO] (RunContext run:102) Ramp for 30 seconds. Cluster modification: remove 2 nodes and rebalance. Rebound for 90 seconds.
[1,79 INFO] (RunContext run:124) Starting cluster and driver
[1,80 INFO] (CBCluster startCluster:360) Node http://10.3.121.134:8091 is master now
[1,80 INFO] (HostPortDriver launch:32) Invoking SDKD as 'C:\temp\sdkd-out-debug\SdkdConsole.exe'
[1,80 DEBUG] (CBCluster startCluster:366) Stopping any existing rebalance operations..
[1,99 INFO] (SDKD log:137) — Logging Self-Test —
[1,99 INFO] (SDKD log:137) [Sdkd.Main|Info] Info Message
[1,99 INFO] (SDKD log:137) [Sdkd.Main|Warn] Warn Message
[1,99 INFO] (SDKD log:137) [Sdkd.Main|Error] Error Message
[1,99 INFO] (SDKD log:137) [Sdkd.Main|Fatal] Fatal Message
[2,00 INFO] (SDKD log:137) [Sdkd.Main|Info] SDKD Listening on port 8675
[2,34 INFO] (SDKD log:137) [Sdkd.Control|Info] Got a new connection. Creating child handle
[2,35 DEBUG] (Handle sendMessageAsync:183) > INFO@0.0
[2,52 DEBUG] (Handle receiveMessage:158) < INFO@0.0 => {"CAPS": ,"COMPONENTS":{"SDK":"1.0.0.0","CLR":"4.0.30319.34014"}}
[7,37 DEBUG] (CBCluster clearSingleCluster:140) Failing over existing node <URI:10.3.121.135:8091,ns_1@10.3.121.135>
[7,82 DEBUG] (CBCluster clearSingleCluster:140) Failing over existing node <URI:10.3.121.136:8091,ns_1@10.3.121.136>
[8,28 DEBUG] (CBCluster clearSingleCluster:140) Failing over existing node <URI:10.3.3.206:8091,ns_1@10.3.3.206>
[8,74 DEBUG] (SSHConnection connect:99) Connecting with User[root] Pass[HASH=3b8cf05627127baaf3808913209aac84]
[8,74 DEBUG] (SSHConnection connect:99) Connecting with User[root] Pass[HASH=3b8cf05627127baaf3808913209aac84]
[10,58 INFO] (NodeHost createSSH:147) SSH Initialized for http://10.3.121.136:8091
[10,58 INFO] (NodeHost createSSH:147) SSH Initialized for http://10.3.121.135:8091
[10,59 INFO] (NodeHost createSSH:147) SSH Initialized for http://10.3.3.206:8091
[10,59 INFO] (NodeHost createSSH:147) SSH Initialized for http://10.3.121.134:8091
[10,61 DEBUG] (SSHCommand execute:75) Running /etc/init.d/couchbase-server start && pkill -CONT -f memcached && pkill -CONT -f beam.smp && iptables -F && iptables -t nat -F on 10.3.121.135
[10,61 DEBUG] (SSHCommand execute:75) Running /etc/init.d/couchbase-server start && pkill -CONT -f memcached && pkill -CONT -f beam.smp && iptables -F && iptables -t nat -F on 10.3.3.206
[10,61 DEBUG] (SSHCommand execute:75) Running /etc/init.d/couchbase-server start && pkill -CONT -f memcached && pkill -CONT -f beam.smp && iptables -F && iptables -t nat -F on 10.3.121.136
[10,61 DEBUG] (SSHCommand execute:75) Running /etc/init.d/couchbase-server start && pkill -CONT -f memcached && pkill -CONT -f beam.smp && iptables -F && iptables -t nat -F on 10.3.121.134
[11,33 DEBUG] (SSHCommand close:147) Closing channel com.jcraft.jsch.ChannelExec@10e975db
[11,33 DEBUG] (SSHCommand close:147) Closing channel com.jcraft.jsch.ChannelExec@4f388589
[11,33 DEBUG] (SSHCommand close:147) Closing channel com.jcraft.jsch.ChannelExec@3dbafe70
[11,34 DEBUG] (SSHCommand close:147) Closing channel com.jcraft.jsch.ChannelExec@729c078
[11,34 DEBUG] (CBCluster setupNewCluster:271) Provisioning initial node com.couchbase.cbadmin.client.CouchbaseAdmin@4ae6120d
[24,59 DEBUG] (CBCluster tryOnce:286) Adding node http://10.3.121.135:8091
[32,49 DEBUG] (CBCluster tryOnce:286) Adding node http://10.3.121.136:8091
[35,24 DEBUG] (CBCluster tryOnce:286) Adding node http://10.3.3.206:8091
[39,14 INFO] (CBCluster setupNewCluster:293) All nodes added. Will rebalance
[40,08 INFO] (RebalanceWaiter sweepOnce:33) Rebalance complete
[40,08 DEBUG] (CBCluster setupServerGroups:222) Not creating any groups
[40,31 INFO] (CBCluster setupMainBucket:209) Creating bucket default
[40,54 INFO] (CBCluster setupMainBucket:211) Bucket creation submitted
[47,78 INFO] (CBCluster waitForBucketReady:203) Bucket creation done
[50,40 INFO] (RunContext run:143) Driver and cluster initialized
[50,64 INFO] (RunContext call:167) Running scenario..
[50,64 INFO] (Scenario run:72) Starting RAMP phase
[50,64 INFO] (Workload setupDesign:63) Creating design test_design
[51,82 INFO] (Workload setupDesign:80) Design creation done
[51,82 INFO] (SDKD log:137) [Sdkd.Control|Info] Got a new connection. Creating child handle
[51,82 DEBUG] (Handle sendMessageAsync:183) > NEWHANDLE@101.1 => {Port=8091, Bucket=default, Options= , Hostname=10.3.121.134}
[51,83 INFO] (SDKD log:137) [Sdkd.Control|Info] Registering handle 101
[51,83 INFO] (SDKD log:137) ClientConfiguration!!!!!!!!!!!!{
[51,83 INFO] (SDKD log:137) "Port": 8091,
[51,84 INFO] (SDKD log:137) "Bucket": "default",
[51,84 INFO] (SDKD log:137) "Options":
,
[51,88 INFO] (SDKD log:137) "Hostname": "10.3.121.134"
[51,88 INFO] (SDKD log:137) }
[51,89 INFO] (SDKD log:137) [Sdkd.ClientFactory|Info] Creating new shared client for key 'U,P,H10.3.121.134:8091,Bdefault'
[51,89 INFO] (SDKD log:137) [Sdkd.Main|Info] Resolving Common.Logging.Log4Net
[51,89 INFO] (SDKD log:137) [Sdkd.Main|Info] Have assembly Common.Logging.Log4Net, Version=2.0.0.0, Culture=neutral, PublicKeyToken=af08829b84f0328e
[51,89 INFO] (SDKD log:137) [Sdkd.Main|Info] Resolving log4net
[51,90 INFO] (SDKD log:137) [Sdkd.Main|Info] Have assembly log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821
logs.txt contains only:
2014-07-15 16:12:13,631 [4] DEBUG Couchbase.Core.ClusterManager - !!Trying to boostrap with Couchbase.Configuration.Server.Providers.CarrierPublication.CarrierPublicationProvider.
2014-07-15 16:12:13,875 [4] DEBUG Couchbase.IO.ConnectionPool`1 - Acquire new: 2632b059-2d3d-4b03-910e-d048e4025cda - [0, 0]
I added some logging to ClusterManager to verify that ClusterManager.CreateBucket never returns a bucket:
--- a/Src/Couchbase/Core/ClusterManager.cs
+++ b/Src/Couchbase/Core/ClusterManager.cs
@@ -145,7 +145,7 @@ namespace Couchbase.Core
         {
             try
             {
-                Log.DebugFormat("Trying to boostrap with {0}.", provider);
+                Log.DebugFormat("!!Trying to boostrap with {0}.", provider);
                 var config = provider.GetConfig(bucketName, password);
                 switch (config.NodeLocator)
                 {
@@ -203,6 +203,10 @@ namespace Couchbase.Core
             {
                 throw new ConfigException("Could not bootstrap {0}. See log for details.", bucketName);
             }
+            else
+            {
+                Log.DebugFormat("!!success bootstrap {0}. See log for details.", bucketName);
+            }
             return bucket;
         }
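Reconstructed from the hunk, the patched region has roughly this shape (the if-condition itself is not visible in the diff, so the placeholder below is an assumption, not the SDK's actual check):

```csharp
// Shape of the patched region in ClusterManager.CreateBucket.
// The condition is not shown in the diff; "bucket == null" is a guess.
if (bucket == null /* placeholder: actual condition not visible in the diff */)
{
    throw new ConfigException("Could not bootstrap {0}. See log for details.", bucketName);
}
else
{
    // Added logging: only printed when bootstrap actually succeeds.
    Log.DebugFormat("!!success bootstrap {0}. See log for details.", bucketName);
}
return bucket;
```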
So I never get "!!success bootstrap..." in the logs.
Jeff, could you take a look at it?