Building

Started building at 2021/03/11 05:30:48
Using pegged server, 4170 build
Calculating base
Updating mirror
Basing run on 7.0.0-4170 6ff6b16a0
Updating tree for run 11.03.2021-05.30
query is at a595c6a, changes since last good build: none
gometa is at b0e38b7, changes since last good build: 
fatal: Invalid revision range d9758e473a546470dfe667aa6f85ec158003c60e..HEAD

ns_server is at 944a9ca, changes since last good build: none
couchstore is at 011d5d3, changes since last good build: none
forestdb is at 9a5b9b0, changes since last good build: none
kv_engine is at e56a54d, changes since last good build: none
Switching indexing to unstable
indexing is at 1e36489, changes since last good build: none
Switching plasma to unstable
plasma is at 29acbb2, changes since last good build: none
Switching nitro to unstable
nitro is at 60138fb, changes since last good build: none
Switching gometa to master
gometa is at 25c4750, changes since last good build: 
fatal: Invalid revision range d9758e473a546470dfe667aa6f85ec158003c60e..HEAD

Switching testrunner to master
testrunner is at 771f3d6, changes since last good build: none
Pulling in uncommitted change 148151 at refs/changes/51/148151/2
Total 5 (delta 4), reused 5 (delta 4)
[unstable e29e9f25] MB-37240 : Redact UD from ClustMgr:handleUpdateTopologyForIndex logging
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Mar 10 13:04:26 2021 +0530
 1 file changed, 18 insertions(+)
Pulling in uncommitted change 148154 at refs/changes/54/148154/1
Total 6 (delta 5), reused 6 (delta 5)
[unstable 2005fcc8] MB-41289 Use pointer receivers for storage stats to avoid copy
 Author: Varun Velamuri 
 Date: Wed Mar 10 13:59:14 2021 +0530
 2 files changed, 5 insertions(+), 5 deletions(-)
Pulling in uncommitted change 148208 at refs/changes/08/148208/2
Total 17 (delta 15), reused 17 (delta 15)
[unstable 0a8d3be7] MB-43967 Part 2: getIndexStatus ETag for LocalIndexMetadata and stats subset
 Author: Kevin Cherkauer 
 Date: Wed Mar 10 10:59:49 2021 -0800
 5 files changed, 227 insertions(+), 84 deletions(-)
Pulling in uncommitted change 148091 at refs/changes/91/148091/3
Total 10 (delta 9), reused 10 (delta 9)
[unstable f398df2] MB-44723: stop smrWorker when trimming sCtx
 Author: jliang00 
 Date: Tue Mar 9 13:36:11 2021 -0800
 8 files changed, 183 insertions(+), 26 deletions(-)
Pulling in uncommitted change 148183 at refs/changes/83/148183/1
Total 4 (delta 3), reused 4 (delta 3)
[unstable 9a29e2e] MB-44749: Preserve compressed bit in SetOp
 Author: akhilmd 
 Date: Wed Mar 10 18:56:54 2021 +0530
 2 files changed, 89 insertions(+), 2 deletions(-)
Pulling in uncommitted change 148210 at refs/changes/10/148210/1
Total 8 (delta 6), reused 8 (delta 6)
[unstable 6417056] MB-44749: Use gCtx in freePtrs for capturing reclaim free stats
 Author: akhilmd 
 Date: Thu Mar 11 00:38:56 2021 +0530
 2 files changed, 3 insertions(+), 3 deletions(-)
Pulling in uncommitted change 147942 at refs/changes/42/147942/7
Total 4 (delta 3), reused 4 (delta 3)
[master 2bf9bcc] MB-44689 Double the size of packets chan whenever blocked
 Author: Varun Velamuri 
 Date: Mon Mar 8 22:27:28 2021 +0530
 1 file changed, 59 insertions(+), 6 deletions(-)
Building cmakefiles and deps
Building main product

... A new version of repo (2.12) is available.
... You should upgrade soon:
    cp /opt/build/.repo/repo/repo /home/buildbot/bin/repo



Testing

Started testing at 2021/03/11 05:47:48
Testing mode: sanity,unit,functional,integration
Using storage type: plasma
Setting ulimit to 200000

Simple Test

Mar 11 05:48:59 suite_setUp (rebalance.rebalancein.RebalanceInTests) ... ok
Mar 11 05:54:14 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Mar 11 05:58:53 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Mar 11 05:59:34 do_warmup_100k (memcapable.WarmUpMemcachedTest) ... ok
Mar 11 06:00:58 test_view_ops (view.createdeleteview.CreateDeleteViewTests) ... ok
Mar 11 06:10:44 test_employee_dataset_startkey_endkey_queries_rebalance_in (view.viewquerytests.ViewQueryTests) ... ok
Mar 11 06:11:23 b'-->result: '
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:11:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:11:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:11:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:11:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Mar 11 06:11:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Mar 11 06:11:23 b"{'replicas': '1', 'items': '10000', 'value_size': '128', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 7, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_05-48-11/test_7'}"
ok
Mar 11 06:14:51 load_with_ops (xdcr.uniXDCR.unidirectional) ... ok
Mar 11 06:18:27 load_with_failover (xdcr.uniXDCR.unidirectional) ... ok
Mar 11 06:21:03 suite_tearDown (xdcr.uniXDCR.unidirectional) ... ok
Mar 11 06:21:04 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:21:04 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:21:04 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Mar 11 06:21:04 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Mar 11 06:21:04 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 1 , fail 0'
Mar 11 06:21:04 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_failover,replicas=1,items=10000,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,failover=source,get-logs-cluster-run=True'
Mar 11 06:21:04 b"{'replicas': '1', 'items': '10000', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'failover': 'source', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 8, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_05-48-11/test_8'}"
Mar 11 06:21:04 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Mar 11 06:21:04 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Mar 11 06:21:04 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Mar 11 06:21:04 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Mar 11 06:21:04 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 2 , fail 0'
Mar 11 06:21:04 b'Run after suite setup for xdcr.uniXDCR.unidirectional.load_with_failover'
Mar 11 06:21:04 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Mar 11 06:21:04 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Mar 11 06:21:04 b"('memcapable.WarmUpMemcachedTest.do_warmup_100k', ' pass')"
Mar 11 06:21:04 b"('view.createdeleteview.CreateDeleteViewTests.test_view_ops', ' pass')"
Mar 11 06:21:04 b"('view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in', ' pass')"
Mar 11 06:21:04 b"('view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification', ' pass')"
Mar 11 06:21:04 b"('xdcr.uniXDCR.unidirectional.load_with_ops', ' pass')"
Mar 11 06:21:04 b"('xdcr.uniXDCR.unidirectional.load_with_failover', ' pass')"

Unit tests

=== RUN   TestMerger
--- PASS: TestMerger (0.02s)
=== RUN   TestInsert
--- PASS: TestInsert (0.00s)
=== RUN   TestInsertPerf
16000 items took 17.296184ms -> 925059.5391445883 items/s conflicts 2
--- PASS: TestInsertPerf (0.02s)
=== RUN   TestGetPerf
16000 items took 5.783226ms -> 2.7666219511393816e+06 items/s
--- PASS: TestGetPerf (0.01s)
=== RUN   TestGetRangeSplitItems
{
"node_count":             1000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3450,
"memory_used":            45520,
"node_allocs":            1000,
"node_frees":             0,
"level_node_distribution":{
"level0": 747,
"level1": 181,
"level2": 56,
"level3": 13,
"level4": 2,
"level5": 1,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Split range keys [105 161 346 379 434 523 713]
No of items in each range [105 56 185 33 55 89 190 287]
--- PASS: TestGetRangeSplitItems (0.00s)
=== RUN   TestBuilder
{
"node_count":             50000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3368,
"memory_used":            2269408,
"node_allocs":            50000,
"node_frees":             0,
"level_node_distribution":{
"level0": 37380,
"level1": 9466,
"level2": 2370,
"level3": 578,
"level4": 152,
"level5": 40,
"level6": 9,
"level7": 4,
"level8": 1,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Took 8.434682ms to build 50000 items, 5.9279055e+06 items/sec
Took 832.507µs to iterate 50000 items
--- PASS: TestBuilder (0.01s)
PASS
ok  	github.com/couchbase/nitro/skiplist	0.072s
Initializing write barrier = 8000
=== RUN   TestAutoTunerWriteUsageStats
----------- running TestAutoTunerWriteUsageStats
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerWriteUsageStats_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1
LSS test.mvcc.TestAutoTunerWriteUsageStats_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerWriteUsageStats_1) to LSS (test.mvcc.TestAutoTunerWriteUsageStats_1) and RecoveryLSS (test.mvcc.TestAutoTunerWriteUsageStats_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestAutoTunerWriteUsageStats_1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1 to LSS test.mvcc.TestAutoTunerWriteUsageStats_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1], Shared [false]
LSS test.mvcc.TestAutoTunerWriteUsageStats_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [84.279µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1], Shared [false]. Built [0] plasmas, took [124.389µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerWriteUsageStats_1(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerWriteUsageStats_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1 started
TOTAL 29327360 ACTUAL 29327360 MAX BANDWIDTH 0 ACTUAL BANDWIDTH 0 TOTAL DISK 29327360 TOTAL USED 29327360 TOTAL DATA 13524570
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1_2
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerWriteUsageStats_1_2) to LSS (test.mvcc.TestAutoTunerWriteUsageStats_1_2) and RecoveryLSS (test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestAutoTunerWriteUsageStats_1_2
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 to LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1_2], Shared [false]
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.943µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1_2], Shared [false]. Built [0] plasmas, took [126.158µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1_3
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerWriteUsageStats_1_3) to LSS (test.mvcc.TestAutoTunerWriteUsageStats_1_3) and RecoveryLSS (test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.mvcc.TestAutoTunerWriteUsageStats_1_3
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 to LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1_3], Shared [false]
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.949µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerWriteUsageStats_1_3], Shared [false]. Built [0] plasmas, took [115.793µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 started
TOTAL 88137728 ACTUAL 88137728 MAX BANDWIDTH 9801728 ACTUAL BANDWIDTH 4900864 TOTAL DISK 88137728 TOTAL DATA 40641836
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 closed
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 closed
LSS test.mvcc.TestAutoTunerWriteUsageStats_1(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1 stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerWriteUsageStats_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1 closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerWriteUsageStats_1 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestAutoTunerWriteUsageStats_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_2 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerWriteUsageStats_1_3 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestAutoTunerWriteUsageStats (10.37s)
=== RUN   TestAutoTunerReadUsageStats
----------- running TestAutoTunerReadUsageStats
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerReadUsageStats_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1
LSS test.mvcc.TestAutoTunerReadUsageStats_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerReadUsageStats_1) to LSS (test.mvcc.TestAutoTunerReadUsageStats_1) and RecoveryLSS (test.mvcc.TestAutoTunerReadUsageStats_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestAutoTunerReadUsageStats_1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1 to LSS test.mvcc.TestAutoTunerReadUsageStats_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1], Shared [false]
LSS test.mvcc.TestAutoTunerReadUsageStats_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.527µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1], Shared [false]. Built [0] plasmas, took [97.296µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerReadUsageStats_1(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerReadUsageStats_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1_2
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerReadUsageStats_1_2) to LSS (test.mvcc.TestAutoTunerReadUsageStats_1_2) and RecoveryLSS (test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestAutoTunerReadUsageStats_1_2
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1_2 to LSS test.mvcc.TestAutoTunerReadUsageStats_1_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1_2], Shared [false]
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.709µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1_2], Shared [false]. Built [0] plasmas, took [95.329µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1_3
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerReadUsageStats_1_3) to LSS (test.mvcc.TestAutoTunerReadUsageStats_1_3) and RecoveryLSS (test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.mvcc.TestAutoTunerReadUsageStats_1_3
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerReadUsageStats_1_3 to LSS test.mvcc.TestAutoTunerReadUsageStats_1_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1_3], Shared [false]
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.187µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerReadUsageStats_1_3], Shared [false]. Built [0] plasmas, took [124.261µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_3 started
TOTAL 88305664 ACTUAL 88305664 MAX BANDWIDTH 0 ACTUAL BANDWIDTH 0 TOTAL DISK 88305664 TOTAL DATA 40710456
TOTAL 88354816 ACTUAL 88354816 MAX BANDWIDTH 24576 ACTUAL BANDWIDTH 12288 TOTAL DISK 88305664 TOTAL DATA 40710456
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_3 stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_3 closed
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_2 stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_2 closed
LSS test.mvcc.TestAutoTunerReadUsageStats_1(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1 stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerReadUsageStats_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1 closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerReadUsageStats_1 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestAutoTunerReadUsageStats_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerReadUsageStats_1_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.mvcc.TestAutoTunerReadUsageStats_1_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_2 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerReadUsageStats_1_3 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.mvcc.TestAutoTunerReadUsageStats_1_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerReadUsageStats_1_3 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestAutoTunerReadUsageStats (7.66s)
=== RUN   TestAutoTunerCleanerUsageStats
----------- running TestAutoTunerCleanerUsageStats
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerCleanerUsageStats_1) to LSS (test.mvcc.TestAutoTunerCleanerUsageStats_1) and RecoveryLSS (test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestAutoTunerCleanerUsageStats_1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1 to LSS test.mvcc.TestAutoTunerCleanerUsageStats_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1], Shared [false]
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.175µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1], Shared [false]. Built [0] plasmas, took [105.637µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1_2
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerCleanerUsageStats_1_2) to LSS (test.mvcc.TestAutoTunerCleanerUsageStats_1_2) and RecoveryLSS (test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 to LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1_2], Shared [false]
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.627µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1_2], Shared [false]. Built [0] plasmas, took [118.871µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1_3
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestAutoTunerCleanerUsageStats_1_3) to LSS (test.mvcc.TestAutoTunerCleanerUsageStats_1_3) and RecoveryLSS (test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 to LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1_3], Shared [false]
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [73.071µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery], Data log [test.mvcc.TestAutoTunerCleanerUsageStats_1_3], Shared [false]. Built [0] plasmas, took [110.48µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : all daemons started
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 started
TOTAL 88289280 ACTUAL 88289280 MAX BANDWIDTH 0 ACTUAL BANDWIDTH 0 TOTAL DISK 88289280 TOTAL DATA 40709378
usage is 13368
usage is 13368
usage is 13410
usage is 13410
usage is 13368
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : logCleaner: starting... frag 53, data: 13483236, used: 28803072 log:(0 - 28803072)
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : logCleaner: completed... frag 53, data: 13483236, used: 28776292, relocated: 1, retries: 0, skipped: 1 log:(0 - 28807168) run:1 duration:6 ms
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : logCleaner: starting... frag 53, data: 13349350, used: 28504064 log:(0 - 28504064)
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : logCleaner: completed... frag 53, data: 13349350, used: 28477284, relocated: 1, retries: 0, skipped: 1 log:(0 - 28508160) run:1 duration:7 ms
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : logCleaner: starting... frag 53, data: 13349350, used: 28504064 log:(0 - 28504064)
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : logCleaner: completed... frag 53, data: 13349350, used: 28490698, relocated: 1, retries: 0, skipped: 0 log:(0 - 28508160) run:1 duration:7 ms
TOTAL 88399872 ACTUAL 88313856 MAX BANDWIDTH 36864 ACTUAL BANDWIDTH 4096 TOTAL DISK 88234642 TOTAL DATA 40709570
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 closed
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 closed
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : all daemons stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1 stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1(shard1) : LSSCleaner stopped
LSS test.mvcc.TestAutoTunerCleanerUsageStats_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1 closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerCleanerUsageStats_1 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestAutoTunerCleanerUsageStats_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_2 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestAutoTunerCleanerUsageStats_1_3 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestAutoTunerCleanerUsageStats (8.72s)
=== RUN   TestAutoTunerDiskStats
----------- running TestAutoTunerDiskStats
--- PASS: TestAutoTunerDiskStats (2.50s)
=== RUN   TestAutoTunerTargetFragRatio
----------- running TestAutoTunerTargetFragRatio
--- PASS: TestAutoTunerTargetFragRatio (0.00s)
=== RUN   TestAutoTunerExcessUsedSpace
----------- running TestAutoTunerExcessUsedSpace
--- PASS: TestAutoTunerExcessUsedSpace (0.00s)
=== RUN   TestAutoTunerUsedSpaceRatio
----------- running TestAutoTunerUsedSpaceRatio
--- PASS: TestAutoTunerUsedSpaceRatio (0.00s)
=== RUN   TestAutoTunerAdjustFragRatio
----------- running TestAutoTunerAdjustFragRatio
--- PASS: TestAutoTunerAdjustFragRatio (0.00s)
=== RUN   TestBloom
----------- running TestBloom
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloom(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloom
LSS test.mvcc.TestBloom/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloom/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloom) to LSS (test.mvcc.TestBloom) and RecoveryLSS (test.mvcc.TestBloom/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloom
Shard shards/shard1(1) : Add instance test.mvcc.TestBloom to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloom to LSS test.mvcc.TestBloom
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloom/recovery], Data log [test.mvcc.TestBloom], Shared [false]
LSS test.mvcc.TestBloom/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [82.238µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloom/recovery], Data log [test.mvcc.TestBloom], Shared [false]. Built [0] plasmas, took [127.677µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloom(shard1) : all daemons started
LSS test.mvcc.TestBloom/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloom started
LSS test.mvcc.TestBloom(shard1) : all daemons stopped
LSS test.mvcc.TestBloom/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloom stopped
LSS test.mvcc.TestBloom(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloom/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloom closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloom ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloom ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloom successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloom (4.62s)
=== RUN   TestBloomDisableEnable
----------- running TestBloomDisableEnable
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomDisableEnable(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomDisableEnable
LSS test.mvcc.TestBloomDisableEnable/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomDisableEnable/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomDisableEnable) to LSS (test.mvcc.TestBloomDisableEnable) and RecoveryLSS (test.mvcc.TestBloomDisableEnable/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomDisableEnable
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomDisableEnable to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomDisableEnable to LSS test.mvcc.TestBloomDisableEnable
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomDisableEnable/recovery], Data log [test.mvcc.TestBloomDisableEnable], Shared [false]
LSS test.mvcc.TestBloomDisableEnable/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.795µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomDisableEnable/recovery], Data log [test.mvcc.TestBloomDisableEnable], Shared [false]. Built [0] plasmas, took [125.022µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomDisableEnable(shard1) : all daemons started
LSS test.mvcc.TestBloomDisableEnable/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisableEnable started
LSS test.mvcc.TestBloomDisableEnable(shard1) : all daemons stopped
LSS test.mvcc.TestBloomDisableEnable/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisableEnable stopped
LSS test.mvcc.TestBloomDisableEnable(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomDisableEnable/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisableEnable closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomDisableEnable ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomDisableEnable ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisableEnable successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomDisableEnable (3.76s)
=== RUN   TestBloomDisable
----------- running TestBloomDisable
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomDisable(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomDisable
LSS test.mvcc.TestBloomDisable/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomDisable/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomDisable) to LSS (test.mvcc.TestBloomDisable) and RecoveryLSS (test.mvcc.TestBloomDisable/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomDisable
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomDisable to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomDisable to LSS test.mvcc.TestBloomDisable
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomDisable/recovery], Data log [test.mvcc.TestBloomDisable], Shared [false]
LSS test.mvcc.TestBloomDisable/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.931µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomDisable/recovery], Data log [test.mvcc.TestBloomDisable], Shared [false]. Built [0] plasmas, took [100.601µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomDisable(shard1) : all daemons started
LSS test.mvcc.TestBloomDisable/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisable started
LSS test.mvcc.TestBloomDisable(shard1) : all daemons stopped
LSS test.mvcc.TestBloomDisable/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisable stopped
LSS test.mvcc.TestBloomDisable(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomDisable/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisable closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomDisable ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomDisable ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomDisable successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomDisable (0.03s)
=== RUN   TestBloomFreeDuringLookup
----------- running TestBloomFreeDuringLookup
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomFreeDuringLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomFreeDuringLookup
LSS test.mvcc.TestBloomFreeDuringLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomFreeDuringLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomFreeDuringLookup) to LSS (test.mvcc.TestBloomFreeDuringLookup) and RecoveryLSS (test.mvcc.TestBloomFreeDuringLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomFreeDuringLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomFreeDuringLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomFreeDuringLookup to LSS test.mvcc.TestBloomFreeDuringLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomFreeDuringLookup], Shared [false]
LSS test.mvcc.TestBloomFreeDuringLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.092µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomFreeDuringLookup], Shared [false]. Built [0] plasmas, took [113.302µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomFreeDuringLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomFreeDuringLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomFreeDuringLookup started
LSS test.mvcc.TestBloomFreeDuringLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomFreeDuringLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomFreeDuringLookup stopped
LSS test.mvcc.TestBloomFreeDuringLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomFreeDuringLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomFreeDuringLookup closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomFreeDuringLookup ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomFreeDuringLookup ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomFreeDuringLookup successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomFreeDuringLookup (0.02s)
=== RUN   TestBloomRecoveryFreeDuringLookup
----------- running TestBloomRecoveryFreeDuringLookup
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryFreeDuringLookup
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoveryFreeDuringLookup) to LSS (test.mvcc.TestBloomRecoveryFreeDuringLookup) and RecoveryLSS (test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoveryFreeDuringLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryFreeDuringLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryFreeDuringLookup to LSS test.mvcc.TestBloomRecoveryFreeDuringLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomRecoveryFreeDuringLookup], Shared [false]
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [161.005µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomRecoveryFreeDuringLookup], Shared [false]. Built [0] plasmas, took [211.32µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup started
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup closed
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryFreeDuringLookup
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoveryFreeDuringLookup) to LSS (test.mvcc.TestBloomRecoveryFreeDuringLookup) and RecoveryLSS (test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoveryFreeDuringLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryFreeDuringLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryFreeDuringLookup to LSS test.mvcc.TestBloomRecoveryFreeDuringLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomRecoveryFreeDuringLookup], Shared [false]
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [672.935µs]
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [757.784µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery], Data log [test.mvcc.TestBloomRecoveryFreeDuringLookup], Shared [false]. Built [1] plasmas, took [780.965µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup started
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoveryFreeDuringLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomRecoveryFreeDuringLookup ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomRecoveryFreeDuringLookup ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryFreeDuringLookup successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomRecoveryFreeDuringLookup (0.07s)
=== RUN   TestBloomRecoverySwapInLookup
----------- running TestBloomRecoverySwapInLookup
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapInLookup
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapInLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoverySwapInLookup) to LSS (test.mvcc.TestBloomRecoverySwapInLookup) and RecoveryLSS (test.mvcc.TestBloomRecoverySwapInLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoverySwapInLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapInLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapInLookup to LSS test.mvcc.TestBloomRecoverySwapInLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoverySwapInLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapInLookup], Shared [false]
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.293µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoverySwapInLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapInLookup], Shared [false]. Built [0] plasmas, took [86.083µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup started
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup closed
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapInLookup
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapInLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoverySwapInLookup) to LSS (test.mvcc.TestBloomRecoverySwapInLookup) and RecoveryLSS (test.mvcc.TestBloomRecoverySwapInLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoverySwapInLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapInLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapInLookup to LSS test.mvcc.TestBloomRecoverySwapInLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoverySwapInLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapInLookup], Shared [false]
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [215.758µs]
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [727.803µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoverySwapInLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapInLookup], Shared [false]. Built [1] plasmas, took [792.742µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup started
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoverySwapInLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomRecoverySwapInLookup ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomRecoverySwapInLookup ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapInLookup successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomRecoverySwapInLookup (0.10s)
=== RUN   TestBloomRecoverySwapOutLookup
----------- running TestBloomRecoverySwapOutLookup
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapOutLookup
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapOutLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoverySwapOutLookup) to LSS (test.mvcc.TestBloomRecoverySwapOutLookup) and RecoveryLSS (test.mvcc.TestBloomRecoverySwapOutLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoverySwapOutLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapOutLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapOutLookup to LSS test.mvcc.TestBloomRecoverySwapOutLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoverySwapOutLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapOutLookup], Shared [false]
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.438µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoverySwapOutLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapOutLookup], Shared [false]. Built [0] plasmas, took [80.255µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup started
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup closed
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapOutLookup
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoverySwapOutLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoverySwapOutLookup) to LSS (test.mvcc.TestBloomRecoverySwapOutLookup) and RecoveryLSS (test.mvcc.TestBloomRecoverySwapOutLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoverySwapOutLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapOutLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoverySwapOutLookup to LSS test.mvcc.TestBloomRecoverySwapOutLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoverySwapOutLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapOutLookup], Shared [false]
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [616.542µs]
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [741.394µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoverySwapOutLookup/recovery], Data log [test.mvcc.TestBloomRecoverySwapOutLookup], Shared [false]. Built [1] plasmas, took [804.965µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup started
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoverySwapOutLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomRecoverySwapOutLookup ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomRecoverySwapOutLookup ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoverySwapOutLookup successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomRecoverySwapOutLookup (0.05s)
=== RUN   TestBloomRecoveryInserts
----------- running TestBloomRecoveryInserts
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryInserts
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryInserts/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoveryInserts) to LSS (test.mvcc.TestBloomRecoveryInserts) and RecoveryLSS (test.mvcc.TestBloomRecoveryInserts/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoveryInserts
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryInserts to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryInserts to LSS test.mvcc.TestBloomRecoveryInserts
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoveryInserts/recovery], Data log [test.mvcc.TestBloomRecoveryInserts], Shared [false]
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.048µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoveryInserts/recovery], Data log [test.mvcc.TestBloomRecoveryInserts], Shared [false]. Built [0] plasmas, took [86.807µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts started
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts stopped
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts closed
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryInserts
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecoveryInserts/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecoveryInserts) to LSS (test.mvcc.TestBloomRecoveryInserts) and RecoveryLSS (test.mvcc.TestBloomRecoveryInserts/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecoveryInserts
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryInserts to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecoveryInserts to LSS test.mvcc.TestBloomRecoveryInserts
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecoveryInserts/recovery], Data log [test.mvcc.TestBloomRecoveryInserts], Shared [false]
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [721.059µs]
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [837.858µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecoveryInserts/recovery], Data log [test.mvcc.TestBloomRecoveryInserts], Shared [false]. Built [1] plasmas, took [860.093µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : all daemons started
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts started
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts stopped
LSS test.mvcc.TestBloomRecoveryInserts(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecoveryInserts/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomRecoveryInserts ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomRecoveryInserts ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecoveryInserts successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomRecoveryInserts (0.10s)
=== RUN   TestBloomRecovery
----------- running TestBloomRecovery
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecovery) to LSS (test.mvcc.TestBloomRecovery) and RecoveryLSS (test.mvcc.TestBloomRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to LSS test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.769µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]. Built [0] plasmas, took [80.71µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons started
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery started
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery stopped
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery closed
Test building bloom filter for lookup of a non-existent key
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecovery) to LSS (test.mvcc.TestBloomRecovery) and RecoveryLSS (test.mvcc.TestBloomRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to LSS test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [803.05µs]
LSS test.mvcc.TestBloomRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [904.251µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]. Built [1] plasmas, took [927.365µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons started
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery started
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery stopped
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery closed
Test building bloom filter for lookup of a swapped-out key
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomRecovery) to LSS (test.mvcc.TestBloomRecovery) and RecoveryLSS (test.mvcc.TestBloomRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomRecovery to LSS test.mvcc.TestBloomRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [28672]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [20480] took [365.749µs]
LSS test.mvcc.TestBloomRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [28672] replayOffset [20480]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [441.034µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomRecovery/recovery], Data log [test.mvcc.TestBloomRecovery], Shared [false]. Built [1] plasmas, took [509.748µs]
Plasma: doInit: data UsedSpace 28672 recovery UsedSpace 24808
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons started
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery started
LSS test.mvcc.TestBloomRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery stopped
LSS test.mvcc.TestBloomRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomRecovery ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomRecovery ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomRecovery successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomRecovery (0.09s)
=== RUN   TestBloomStats
----------- running TestBloomStats
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStats
LSS test.mvcc.TestBloomStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomStats) to LSS (test.mvcc.TestBloomStats) and RecoveryLSS (test.mvcc.TestBloomStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomStats
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStats to LSS test.mvcc.TestBloomStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomStats/recovery], Data log [test.mvcc.TestBloomStats], Shared [false]
LSS test.mvcc.TestBloomStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [83.633µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomStats/recovery], Data log [test.mvcc.TestBloomStats], Shared [false]. Built [0] plasmas, took [131.154µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomStats(shard1) : all daemons started
LSS test.mvcc.TestBloomStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStats started
LSS test.mvcc.TestBloomStats(shard1) : all daemons stopped
LSS test.mvcc.TestBloomStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStats stopped
LSS test.mvcc.TestBloomStats(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomStats closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomStats successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomStats (3.60s)
=== RUN   TestBloomStatsRecovery
----------- running TestBloomStatsRecovery
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestBloomStatsRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStatsRecovery
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStatsRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomStatsRecovery) to LSS (test.mvcc.TestBloomStatsRecovery) and RecoveryLSS (test.mvcc.TestBloomStatsRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomStatsRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStatsRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStatsRecovery to LSS test.mvcc.TestBloomStatsRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomStatsRecovery/recovery], Data log [test.mvcc.TestBloomStatsRecovery], Shared [false]
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [492.785µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomStatsRecovery/recovery], Data log [test.mvcc.TestBloomStatsRecovery], Shared [false]. Built [0] plasmas, took [531.717µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestBloomStatsRecovery(shard1) : all daemons started
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery started
LSS test.mvcc.TestBloomStatsRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery stopped
LSS test.mvcc.TestBloomStatsRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery closed
LSS test.mvcc.TestBloomStatsRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStatsRecovery
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestBloomStatsRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestBloomStatsRecovery) to LSS (test.mvcc.TestBloomStatsRecovery) and RecoveryLSS (test.mvcc.TestBloomStatsRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestBloomStatsRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStatsRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestBloomStatsRecovery to LSS test.mvcc.TestBloomStatsRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestBloomStatsRecovery/recovery], Data log [test.mvcc.TestBloomStatsRecovery], Shared [false]
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [90112]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [1069056] took [4.260699ms]
LSS test.mvcc.TestBloomStatsRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [1073152] replayOffset [1069056]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [4.364776ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestBloomStatsRecovery/recovery], Data log [test.mvcc.TestBloomStatsRecovery], Shared [false]. Built [1] plasmas, took [4.525817ms]
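The recovery offsets logged above fit together: the recovery-log replay yields replayOffset [1069056], and the data-log replay then only scans from that offset up to the data log's tailOffset [1073152]. A quick sanity check of the arithmetic, with the values copied from the log lines above (illustrative Go, not plasma code):

```go
package main

import "fmt"

func main() {
	// Offsets reported by Shard.doRecovery / recoverFromDataReplay above.
	var (
		dataTail     uint64 = 1073152 // data log tailOffset
		replayOffset uint64 = 1069056 // offset recovered from the recovery log
	)
	// Only the bytes past replayOffset need replaying from the data log.
	fmt.Println(dataTail - replayOffset) // 4096, i.e. one 4 KB block
}
```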
Plasma: doInit: data UsedSpace 1073152 recovery UsedSpace 92770
LSS test.mvcc.TestBloomStatsRecovery(shard1) : all daemons started
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery started
LSS test.mvcc.TestBloomStatsRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery stopped
LSS test.mvcc.TestBloomStatsRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestBloomStatsRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestBloomStatsRecovery ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestBloomStatsRecovery ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestBloomStatsRecovery successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBloomStatsRecovery (0.79s)
=== RUN   TestBloomFilterSimple
--- PASS: TestBloomFilterSimple (0.00s)
=== RUN   TestBloomFilterConcurrent
Generating 5000000 sets and unsets...
numSetters,duration,AddsPerSec
1,1.840864594s,2444429.109380
2,1.180744102s,3811039.997894
3,940.151214ms,4786318.342190
4,870.963746ms,5166533.074041
5,896.377289ms,5020054.674768
6,902.451862ms,4986263.743783
fragAutoTuner: FragRatio at 100. MaxFragRatio 100, MaxBandwidth 0. BandwidthUsage 0. AvailDisk 0. TotalUsed 0. BandwidthRatio 1. UsedSpaceRatio 1. CleanerBandwidth 0. Duration 0.
7,910.479304ms,4942301.247520
8,897.283822ms,5014982.873502
--- PASS: TestBloomFilterConcurrent (22.31s)
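For context on what TestBloomFilterConcurrent is measuring, here is a minimal sketch of a lock-free Bloom filter over an atomic bit array. The names (bloom, Add, Test), sizing, and FNV hashing are illustrative assumptions, not plasma's actual BloomFilter code. Each Add sets k bits via compare-and-swap, so adds/sec scales with concurrent setters up to a point and then flattens, much like the CSV rows above:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
	"sync/atomic"
	"time"
)

// bloom is an illustrative lock-free Bloom filter (not plasma's implementation).
type bloom struct {
	bits []uint64
	k    int // number of hash functions
}

func newBloom(nbits, k int) *bloom {
	return &bloom{bits: make([]uint64, (nbits+63)/64), k: k}
}

// hash derives the i-th bit position for key using FNV-1a (an assumption here).
func (b *bloom) hash(key []byte, i int) uint64 {
	h := fnv.New64a()
	h.Write(key)
	h.Write([]byte{byte(i)})
	return h.Sum64() % uint64(len(b.bits)*64)
}

// Add sets k bits; the CAS loop makes concurrent setters safe without locks.
func (b *bloom) Add(key []byte) {
	for i := 0; i < b.k; i++ {
		pos := b.hash(key, i)
		word, bit := pos/64, pos%64
		for {
			old := atomic.LoadUint64(&b.bits[word])
			if old&(1<<bit) != 0 || atomic.CompareAndSwapUint64(&b.bits[word], old, old|1<<bit) {
				break
			}
		}
	}
}

// Test reports possible membership (no false negatives, possible false positives).
func (b *bloom) Test(key []byte) bool {
	for i := 0; i < b.k; i++ {
		pos := b.hash(key, i)
		if atomic.LoadUint64(&b.bits[pos/64])&(1<<(pos%64)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	bf := newBloom(1<<20, 4)
	const n = 200000
	for _, setters := range []int{1, 2, 4, 8} {
		var wg sync.WaitGroup
		per := n / setters
		start := time.Now()
		for s := 0; s < setters; s++ {
			wg.Add(1)
			go func(s int) {
				defer wg.Done()
				for j := 0; j < per; j++ {
					bf.Add([]byte(fmt.Sprintf("key-%d", s*per+j)))
				}
			}(s)
		}
		wg.Wait()
		d := time.Since(start)
		fmt.Printf("%d,%v,%.0f\n", setters, d, float64(setters*per)/d.Seconds())
	}
}
```

Absolute throughput numbers will differ from the benchmark above; only the scaling shape is the point of the sketch.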
=== RUN   TestBitArrayConcurrent
Generating 5000000 sets and unsets...
numSetters,duration,setsPerSec
1,60.839713ms,73969842.691401
2,28.563806ms,157552673.477757
3,25.725846ms,174933178.096456
4,21.95097ms,205016179.239460
5,23.132304ms,194546293.356684
6,22.392904ms,200970093.025898
7,24.864073ms,180996251.096914
8,24.816307ms,181344629.561522
--- PASS: TestBitArrayConcurrent (0.57s)
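The bit-array benchmark above runs roughly 40x more operations per second than the Bloom benchmark, which is expected: a Bloom Add computes k hashes and touches k words, while a bit-array set is a single hash-free atomic read-modify-write. A hedged sketch of such a concurrent bit array (illustrative names, not plasma's actual bit-array code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// bitArray is a fixed-size concurrent bit set. Set uses a CAS loop so
// concurrent writers never lose each other's bits.
type bitArray struct{ words []uint64 }

func newBitArray(nbits int) *bitArray {
	return &bitArray{words: make([]uint64, (nbits+63)/64)}
}

func (b *bitArray) Set(pos uint64) {
	w := &b.words[pos/64]
	mask := uint64(1) << (pos % 64)
	for {
		old := atomic.LoadUint64(w)
		if old&mask != 0 || atomic.CompareAndSwapUint64(w, old, old|mask) {
			return
		}
	}
}

func (b *bitArray) IsSet(pos uint64) bool {
	return atomic.LoadUint64(&b.words[pos/64])&(1<<(pos%64)) != 0
}

func main() {
	const nbits = 1 << 20
	ba := newBitArray(nbits)
	var wg sync.WaitGroup
	for s := 0; s < 8; s++ { // 8 concurrent setters, like the last CSV row
		wg.Add(1)
		go func(s int) {
			defer wg.Done()
			for pos := uint64(s); pos < nbits; pos += 8 {
				ba.Set(pos)
			}
		}(s)
	}
	wg.Wait()
	fmt.Println(ba.IsSet(12345)) // prints true: every bit was set by some goroutine
}
```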
=== RUN   TestBloomCapacity
--- PASS: TestBloomCapacity (0.00s)
=== RUN   TestBloomNumHashFuncs
--- PASS: TestBloomNumHashFuncs (0.00s)
=== RUN   TestBloomTestAndAdd
--- PASS: TestBloomTestAndAdd (0.22s)
=== RUN   TestBloomReset
--- PASS: TestBloomReset (0.00s)
=== RUN   TestDiag
----------- running TestDiag
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestDiag(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestDiag
LSS test.default.TestDiag/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestDiag/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestDiag) to LSS (test.default.TestDiag) and RecoveryLSS (test.default.TestDiag/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestDiag
Shard shards/shard1(1) : Add instance test.default.TestDiag to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestDiag to LSS test.default.TestDiag
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestDiag/recovery], Data log [test.default.TestDiag], Shared [false]
LSS test.default.TestDiag/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.143µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestDiag/recovery], Data log [test.default.TestDiag], Shared [false]. Built [0] plasmas, took [91.004µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestDiag(shard1) : all daemons started
LSS test.default.TestDiag/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestDiag started
LSS test.default.TestDiag(shard1) : all daemons stopped
LSS test.default.TestDiag/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestDiag stopped
LSS test.default.TestDiag(shard1) : LSSCleaner stopped
LSS test.default.TestDiag/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestDiag closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestDiag ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestDiag ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestDiag successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestDiag (0.03s)
=== RUN   TestDumpLog
----------- running TestDumpLog
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestDumpLog(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestDumpLog
LSS test.default.TestDumpLog/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestDumpLog/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestDumpLog) to LSS (test.default.TestDumpLog) and RecoveryLSS (test.default.TestDumpLog/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestDumpLog
Shard shards/shard1(1) : Add instance test.default.TestDumpLog to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestDumpLog to LSS test.default.TestDumpLog
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestDumpLog/recovery], Data log [test.default.TestDumpLog], Shared [false]
LSS test.default.TestDumpLog/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [46.871µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestDumpLog/recovery], Data log [test.default.TestDumpLog], Shared [false]. Built [0] plasmas, took [79.171µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestDumpLog(shard1) : all daemons started
LSS test.default.TestDumpLog/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestDumpLog started
LSS test.default.TestDumpLog(shard1) : all daemons stopped
LSS test.default.TestDumpLog/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestDumpLog stopped
LSS test.default.TestDumpLog(shard1) : LSSCleaner stopped
LSS test.default.TestDumpLog/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestDumpLog closed
offset 0 typ lssMaxSn plasmaid 0 - MaxSn 360002
offset 4096 typ lssRecoveryPoints plasmaid 0
Recovery Points - Version 1
offset 4118 typ lssPageData plasmaid 0 - state 8002 key minItem
-------delta------ op:opMetaDelta
Delta op:3, itm:item key:(key-400400400) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-399399399) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-398398398) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-397397397) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-396396396) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-395395395) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-394394394) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-393393393) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-392392392) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-391391391) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-390390390) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-389389389) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-388388388) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-387387387) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-386386386) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-385385385) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-384384384) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-383383383) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-382382382) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-381381381) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-380380380) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-379379379) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-378378378) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-377377377) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-376376376) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-375375375) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-374374374) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-373373373) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-372372372) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-371371371) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-370370370) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-369369369) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-368368368) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-367367367) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-366366366) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-365365365) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-364364364) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-363363363) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-362362362) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-361361361) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-360360360) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-359359359) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-358358358) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-357357357) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-356356356) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-355355355) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-354354354) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-353353353) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-352352352) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-351351351) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-350350350) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-349349349) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-348348348) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-347347347) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-346346346) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-345345345) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-344344344) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-343343343) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-342342342) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-341341341) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-340340340) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-339339339) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-338338338) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-337337337) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-336336336) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-335335335) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-334334334) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-333333333) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-332332332) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-331331331) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-330330330) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-329329329) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-328328328) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-327327327) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-326326326) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-325325325) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-324324324) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-323323323) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-322322322) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-321321321) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-320320320) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-319319319) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-318318318) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-317317317) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-316316316) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-315315315) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-314314314) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-313313313) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-312312312) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-311311311) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-310310310) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-309309309) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-308308308) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-307307307) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-306306306) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-305305305) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-304304304) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-303303303) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-302302302) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-301301301) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-300300300) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-299299299) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-298298298) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-297297297) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-296296296) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-295295295) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-294294294) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-293293293) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-292292292) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-291291291) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-290290290) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-289289289) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-288288288) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-287287287) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-286286286) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-285285285) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-284284284) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-283283283) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-282282282) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-281281281) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-280280280) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-279279279) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-278278278) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-277277277) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-276276276) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-275275275) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-274274274) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-273273273) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-272272272) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-271271271) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-270270270) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-269269269) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-268268268) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-267267267) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-266266266) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-265265265) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-264264264) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-263263263) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-262262262) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-261261261) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-260260260) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-259259259) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-258258258) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-257257257) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-256256256) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-255255255) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-254254254) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-253253253) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-252252252) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-251251251) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-250250250) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-249249249) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-248248248) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-247247247) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-246246246) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-245245245) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-244244244) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-243243243) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-242242242) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-241241241) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-240240240) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-239239239) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-238238238) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-237237237) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-236236236) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-235235235) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-234234234) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-233233233) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-232232232) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-231231231) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-230230230) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-229229229) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-228228228) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-227227227) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-226226226) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-225225225) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-224224224) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-223223223) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-222222222) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-221221221) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-220220220) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-219219219) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-218218218) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-217217217) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-216216216) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-215215215) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-214214214) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-213213213) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-212212212) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-211211211) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-210210210) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-209209209) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-208208208) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-207207207) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-206206206) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-205205205) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-204204204) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-203203203) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-202202202) val:((nil)) sn:1 insert: true
Delta op:3, itm:item key:(key-201201201) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-000) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-100100100) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-101010) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-101101101) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-102102102) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-103103103) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-104104104) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-105105105) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-106106106) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-107107107) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-108108108) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-109109109) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-110110110) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-111) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-111111) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-111111111) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-112112112) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-113113113) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-114114114) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-115115115) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-116116116) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-117117117) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-118118118) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-119119119) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-120120120) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-121121121) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-121212) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-122122122) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-123123123) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-124124124) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-125125125) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-126126126) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-127127127) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-128128128) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-129129129) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-130130130) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-131131131) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-131313) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-132132132) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-133133133) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-134134134) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-135135135) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-136136136) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-137137137) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-138138138) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-139139139) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-140140140) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-141141141) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-141414) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-142142142) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-143143143) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-144144144) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-145145145) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-146146146) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-147147147) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-148148148) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-149149149) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-150150150) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-151151151) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-151515) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-152152152) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-153153153) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-154154154) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-155155155) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-156156156) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-157157157) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-158158158) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-159159159) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-160160160) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-161161161) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-161616) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-162162162) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-163163163) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-164164164) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-165165165) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-166166166) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-167167167) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-168168168) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-169169169) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-170170170) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-171171171) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-171717) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-172172172) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-173173173) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-174174174) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-175175175) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-176176176) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-177177177) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-178178178) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-179179179) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-180180180) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-181181181) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-181818) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-182182182) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-183183183) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-184184184) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-185185185) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-186186186) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-187187187) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-188188188) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-189189189) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-190190190) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-191191191) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-191919) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-192192192) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-193193193) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-194194194) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-195195195) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-196196196) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-197197197) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-198198198) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-199199199) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-200200200) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-202020) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-212121) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-222) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-222222) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-232323) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-242424) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-252525) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-262626) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-272727) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-282828) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-292929) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-303030) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-313131) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-323232) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-333) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-333333) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-343434) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-353535) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-363636) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-373737) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-383838) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-393939) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-404040) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-414141) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-424242) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-434343) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-444) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-444444) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-454545) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-464646) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-474747) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-484848) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-494949) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-505050) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-515151) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-525252) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-535353) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-545454) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-555) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-555555) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-565656) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-575757) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-585858) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-595959) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-606060) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-616161) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-626262) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-636363) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-646464) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-656565) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-666) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-666666) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-676767) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-686868) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-696969) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-707070) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-717171) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-727272) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-737373) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-747474) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-757575) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-767676) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-777) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-777777) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-787878) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-797979) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-808080) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-818181) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-828282) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-838383) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-848484) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-858585) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-868686) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-878787) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-888) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-888888) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-898989) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-909090) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-919191) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-929292) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-939393) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-949494) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-959595) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-969696) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-979797) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-989898) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-999) val:((nil)) sn:1 insert: true
Basepage itm:item key:(key-999999) val:((nil)) sn:1 insert: true

offset,typ,version,blockdata
0,lssMaxSn,1,0,[0 0 0 0 0 5 126 66]
4096,lssRecoveryPoints,1,0,[0 1 0 0 0 0 0 0 0 0 0 0]
4118,lssPageData,1,0,[249 78 80 0 2 1 0 200 0 201 2 0 3 13 0 0 128 107 101 121 45 52 48 48 9 3 4 1 0 13 1 21 27 8 51 57 57 9 3 78 27 0 8 56 51 57 1 3 78 27 0 8 55 51 57 1 3 78 27 0 8 54 51 57 1 3 78 27 0 8 53 51 57 1 3 78 27 0 8 52 51 57 1 3 78 27 0 8 51 51 57 1 3 78 27 0 8 50 51 57 1 3 78 27 0 8 49 51 57 1 3 78 27 0 8 48 51 57 1 3 74 27 0 8 56 57 51 5 3 78 27 0 8 56 51 56 1 3 78 27 0 8 55 51 56 1 3 78 27 0 8 54 51 56 1 3 78 27 0 8 53 51 56 1 3 78 27 0 8 52 51 56 1 3 78 27 0 8 51 51 56 1 3 78 27 0 8 50 51 56 1 3 78 27 0 8 49 51 56 1 3 78 27 0 8 48 51 56 1 3 74 27 0 8 55 57 51 5 3 78 27 0 8 56 51 55 1 3 78 27 0 8 55 51 55 1 3 78 27 0 8 54 51 55 1 3 78 27 0 8 53 51 55 1 3 78 27 0 8 52 51 55 1 3 78 27 0 8 51 51 55 1 3 78 27 0 8 50 51 55 1 3 78 27 0 8 49 51 55 1 3 78 27 0 8 48 51 55 1 3 74 27 0 8 54 57 51 5 3 78 27 0 8 56 51 54 1 3 78 27 0 8 55 51 54 1 3 78 27 0 8 54 51 54 1 3 78 27 0 8 53 51 54 1 3 78 27 0 8 52 51 54 1 3 78 27 0 8 51 51 54 1 3 78 27 0 8 50 51 54 1 3 78 27 0 8 49 51 54 1 3 78 27 0 8 48 51 54 1 3 74 27 0 8 53 57 51 5 3 78 27 0 8 56 51 53 1 3 78 27 0 8 55 51 53 1 3 78 27 0 8 54 51 53 1 3 78 27 0 8 53 51 53 1 3 78 27 0 8 52 51 53 1 3 78 27 0 8 51 51 53 1 3 78 27 0 8 50 51 53 1 3 78 27 0 8 49 51 53 1 3 78 27 0 8 48 51 53 1 3 74 27 0 8 52 57 51 5 3 78 27 0 8 56 51 52 1 3 78 27 0 8 55 51 52 1 3 78 27 0 8 54 51 52 1 3 78 27 0 8 53 51 52 1 3 78 27 0 8 52 51 52 1 3 78 27 0 8 51 51 52 1 3 78 27 0 8 50 51 52 1 3 78 27 0 8 49 51 52 1 3 78 27 0 8 48 51 52 1 3 74 27 0 169 176 4 51 57 78 27 0 133 189 4 51 56 78 27 0 101 202 4 51 55 78 27 0 69 215 4 51 54 78 27 0 37 228 4 51 53 78 27 0 5 241 4 51 52 78 27 0 0 51 9 1 78 27 0 8 50 51 51 1 3 78 27 0 8 49 51 51 1 3 78 27 0 8 48 51 51 1 3 74 27 0 8 50 57 51 5 3 78 27 0 8 56 51 50 1 3 78 27 0 8 55 51 50 1 3 78 27 0 8 54 51 50 1 3 78 27 0 8 53 51 50 1 3 78 27 0 8 52 51 50 1 3 78 27 0 9 242 0 51 78 27 0 8 50 51 50 1 3 78 27 0 8 49 51 50 1 3 78 27 0 8 48 51 50 1 3 74 27 0 8 49 57 51 5 3 78 27 0 8 56 51 49 1 3 78 27 0 8 55 51 
49 1 3 78 27 0 8 54 51 49 1 3 78 27 0 8 53 51 49 1 3 78 27 0 8 52 51 49 1 3 78 27 0 41 229 78 14 1 8 49 50 51 5 3 78 54 0 8 49 51 49 1 3 78 27 0 8 48 51 49 1 3 74 27 0 8 48 57 51 5 3 78 27 0 8 56 51 48 1 3 78 27 0 8 55 51 48 1 3 78 27 0 8 54 51 48 1 3 78 27 0 8 53 51 48 1 3 78 27 0 8 52 51 48 1 3 78 27 0 73 216 78 14 1 8 48 50 51 5 3 78 54 0 8 49 51 48 1 3 78 27 0 8 48 51 48 1 3 70 27 0 8 50 57 57 9 3 78 27 0 8 56 50 57 1 3 78 27 0 8 55 50 57 1 3 78 27 0 8 54 50 57 1 3 78 27 0 8 53 50 57 1 3 78 27 0 8 52 50 57 1 3 78 27 0 105 203 74 14 1 8 50 57 50 9 3 78 54 0 8 49 50 57 1 3 78 27 0 8 48 50 57 1 3 74 27 0 8 56 57 50 5 3 78 27 0 8 56 50 56 1 3 78 27 0 8 55 50 56 1 3 78 27 0 8 54 50 56 1 3 78 27 0 8 53 50 56 1 3 78 27 0 8 52 50 56 1 3 78 27 0 137 190 78 14 1 8 56 50 50 5 3 78 54 0 8 49 50 56 1 3 78 27 0 8 48 50 56 1 3 74 27 0 8 55 57 50 5 3 78 27 0 8 56 50 55 1 3 78 27 0 8 55 50 55 1 3 78 27 0 8 54 50 55 1 3 78 27 0 8 53 50 55 1 3 78 27 0 8 52 50 55 1 3 78 27 0 169 177 78 14 1 8 55 50 50 5 3 78 54 0 8 49 50 55 1 3 78 27 0 8 48 50 55 1 3 74 27 0 8 54 57 50 5 3 78 27 0 8 56 50 54 1 3 78 27 0 8 55 50 54 1 3 78 27 0 8 54 50 54 1 3 78 27 0 8 53 50 54 1 3 78 27 0 8 52 50 54 1 3 78 27 0 201 164 78 14 1 8 54 50 50 5 3 78 54 0 8 49 50 54 1 3 78 27 0 8 48 50 54 1 3 74 27 0 8 53 57 50 5 3 78 27 0 8 56 50 53 1 3 78 27 0 8 55 50 53 1 3 78 27 0 8 54 50 53 1 3 78 27 0 8 53 50 53 1 3 78 27 0 8 52 50 53 1 3 78 27 0 233 151 78 14 1 8 53 50 50 5 3 78 54 0 8 49 50 53 1 3 78 27 0 8 48 50 53 1 3 74 27 0 8 52 57 50 5 3 78 27 0 8 56 50 52 1 3 78 27 0 8 55 50 52 1 3 78 27 0 8 54 50 52 1 3 78 27 0 8 53 50 52 1 3 78 27 0 8 52 50 52 1 3 78 27 0 22 138 8 78 14 1 8 52 50 50 5 3 78 54 0 8 49 50 52 1 3 78 27 0 8 48 50 52 1 3 74 27 0 22 33 16 78 140 10 14 43 15 18 49 15 78 54 0 18 59 14 78 140 10 14 69 13 18 75 13 78 54 0 18 85 12 78 140 10 14 95 11 18 101 11 78 54 0 18 111 10 0 51 78 14 1 22 124 9 4 51 50 78 54 0 18 137 8 4 51 49 78 27 0 229 150 4 51 48 74 27 0 201 163 0 50 74 154 11 173 176 0 50 
74 154 11 141 189 0 50 74 154 11 109 202 0 50 74 154 11 77 215 0 50 74 154 11 45 228 0 50 74 154 11 18 109 10 1 247 78 189 0 0 50 9 1 78 27 0 8 49 50 50 1 3 78 27 0 8 48 50 50 1 3 74 27 0 8 49 57 50 5 3 78 27 0 8 56 50 49 1 3 78 27 0 8 55 50 49 1 3 78 27 0 8 54 50 49 1 3 78 27 0 8 53 50 49 1 3 78 27 0 8 52 50 49 1 3 78 27 0 22 99 11 78 28 2 1 239 1 245 78 54 0 8 49 50 49 1 3 78 27 0 8 48 50 49 1 3 74 27 0 8 48 57 50 5 3 78 27 0 8 56 50 48 1 3 78 27 0 8 55 50 48 1 3 78 27 0 8 54 50 48 1 3 78 27 0 8 53 50 48 1 3 78 27 0 8 52 50 48 1 3 78 27 0 22 86 12 78 14 1 33 226 33 232 78 54 0 8 49 50 48 1 3 21 27 4 1 7 26 24 21 4 48 48 117 36 30 43 21 8 49 48 48 9 3 17 46 0 10 13 44 4 49 48 1 2 17 22 25 47 8 49 49 48 1 3 70 25 0 41 104 117 187 25 50 22 13 12 21 197 25 25 8 52 49 48 1 3 70 75 0 8 53 49 48 1 3 70 25 0 8 54 49 48 1 3 70 25 0 8 55 49 48 1 3 70 25 0 8 56 49 48 1 3 70 25 0 8 57 49 48 1 3 66 25 0 9 223 0 49 53 41 49 60 4 49 49 149 123 53 35 0 49 1 1 70 66 0 1 22 29 47 21 241 65 163 65 169 70 50 0 22 75 13 70 35 1 8 49 52 49 5 3 70 50 0 8 53 49 49 1 3 70 25 0 8 54 49 49 1 3 70 25 0 8 55 49 49 1 3 70 25 0 8 56 49 49 1 3 70 25 0 8 57 49 49 1 3 66 25 0 73 95 0 50 53 35 21 225 1 221 97 137 17 50 53 41 12 50 49 50 49 70 26 2 97 206 129 198 17 47 25 72 22 118 14 70 16 1 233 17 185 72 25 50 18 56 8 185 124 25 25 18 95 9 185 176 25 25 18 134 10 185 228 25 25 18 173 11 217 24 25 25 18 212 12 217 76 21 25 22 251 13 217 128 25 25 18 34 15 217 180 53 16 12 51 49 51 49 70 222 0 141 252 70 41 1 26 161 15 74 50 0 18 173 18 38 242 17 25 122 18 212 19 249 154 25 25 18 251 20 38 90 18 25 25 18 34 22 38 2 8 25 25 18 73 23 38 194 18 25 25 18 112 24 38 106 8 21 25 8 52 48 49 5 3 66 216 1 77 206 70 73 3 12 52 49 52 49 245 29 25 72 201 39 70 16 1 14 201 16 14 207 16 70 97 0 8 52 49 52 1 3 70 25 0 8 53 49 52 1 3 70 25 0 8 54 49 52 1 3 70 25 0 8 55 49 52 1 3 70 25 0 8 56 49 52 1 3 70 25 0 8 57 49 52 1 3 66 25 0 8 53 48 49 5 3 70 25 0 105 197 70 16 1 12 53 49 53 49 34 72 8 53 16 225 79 225 85 70 
72 0 22 247 17 70 32 2 8 53 52 49 5 3 70 50 0 8 53 49 53 1 3 70 25 0 8 54 49 53 1 3 70 25 0 8 55 49 53 1 3 70 25 0 8 56 49 53 1 3 70 25 0 8 57 49 53 1 3 66 25 0 8 54 48 49 5 3 70 25 0 137 188 70 16 1 12 54 49 54 49 34 115 9 53 16 14 122 8 14 128 8 70 72 0 22 34 19 70 16 1 8 54 52 49 5 3 70 50 0 8 53 49 54 1 3 70 25 0 8 54 49 54 1 3 70 25 0 8 55 49 54 1 3 70 25 0 8 56 49 54 1 3 70 25 0 8 57 49 54 1 3 66 25 0 8 55 48 49 5 3 70 25 0 169 179 70 16 1 12 55 49 55 49 34 158 10 53 16 14 165 9 14 171 9 70 72 0 22 77 20 70 16 1 8 55 52 49 5 3 70 50 0 8 53 49 55 1 3 70 25 0 8 54 49 55 1 3 70 25 0 8 55 49 55 1 3 70 25 0 8 56 49 55 1 3 70 25 0 8 57 49 55 1 3 66 25 0 8 56 48 49 5 3 70 25 0 201 170 70 16 1 12 56 49 56 49 34 201 11 53 16 14 208 10 14 214 10 70 72 0 22 120 21 70 16 1 8 56 52 49 5 3 70 50 0 8 53 49 56 1 3 70 25 0 8 54 49 56 1 3 70 25 0 8 55 49 56 1 3 70 25 0 8 56 49 56 1 3 70 25 0 8 57 49 56 1 3 66 25 0 8 57 48 49 5 3 70 25 0 233 161 70 16 1 12 57 49 57 49 34 244 12 53 16 26 254 11 70 80 5 26 163 22 70 16 1 8 57 52 49 5 3 70 122 0 8 53 49 57 1 3 70 25 0 8 54 49 57 1 3 70 25 0 8 55 49 57 1 3 70 25 0 8 56 49 57 1 3 70 25 0 8 57 49 57 1 3 62 25 0 8 50 48 48 9 3 17 25 241 87 4 50 48 1 2 66 22 0 14 124 8 53 35 30 207 9 4 50 50 53 7 21 63 18 110 13 66 63 0 12 51 50 51 50 53 26 21 44 8 52 50 52 38 135 8 21 22 8 53 50 53 38 132 8 21 22 8 54 50 54 38 129 8 21 22 8 55 50 55 38 126 8 21 22 8 56 50 56 38 123 8 21 22 8 57 50 57 38 120 8 17 22 4 51 48 1 2 62 176 0 18 91 8 21 239 21 44 1 197 21 242 49 5 0 51 38 133 15 21 41 18 40 26 66 85 0 8 52 51 52 38 102 8 21 44 8 53 51 53 38 99 8 21 22 8 54 51 54 38 96 8 21 22 8 55 51 55 38 93 8 21 22 8 56 51 56 38 90 8 21 22 8 57 51 57 38 87 8 17 22 4 52 48 1 2 62 154 0 18 58 8 66 239 0 37 158 21 239 21 66 1 197 53 203 49 5 8 52 52 52 66 85 0 0 52 1 1 66 22 0 12 53 52 53 52 245 150 21 85 12 54 52 54 52 213 156 21 22 12 55 52 55 52 181 162 21 22 12 56 52 56 52 149 168 21 22 12 57 52 57 52 117 174 17 22 4 53 48 1 2 62 132 0 18 25 8 66 239 0 69 
119 66 239 0 37 158 21 239 21 88 1 197 21 242 49 5 4 53 53 66 217 0 0 53 5 1 66 129 0 12 54 53 54 53 66 239 0 4 53 55 1 2 66 44 0 12 56 53 56 53 66 239 0 4 53 57 1 2 62 44 0 4 54 48 1 2 66 22 0 225 248 66 239 0 101 80 66 239 0 69 119 66 239 0 37 158 21 239 49 5 5 197 21 242 49 5 4 54 54 66 217 0 0 54 5 1 66 151 0 12 55 54 55 54 66 222 1 4 54 56 1 2 66 44 0 12 57 54 57 54 66 222 1 4 55 48 1 2 62 44 0 229 215 66 239 0 133 41 66 239 0 101 80 66 239 0 69 119 66 239 0 37 158 21 239 49 5 5 197 21 242 49 5 4 55 55 66 217 0 0 55 5 1 66 173 0 12 56 55 56 55 66 222 1 4 55 57 1 2 62 44 0 4 56 48 1 2 66 22 0 225 182 66 239 0 165 2 66 239 0 133 41 66 239 0 101 80 66 239 0 69 119 66 239 0 37 158 21 239 49 5 5 197 21 242 49 5 4 56 56 66 217 0 0 56 5 1 66 195 0 12 57 56 57 56 66 222 1 4 57 48 1 2 62 44 0 229 149 66 239 0 165 219 66 239 0 165 2 66 239 0 133 41 66 239 0 101 80 66 239 0 69 119 66 239 0 37 158 21 239 49 5 5 197 21 242 49 5 4 57 57 70 217 0 0 57 1 1 17 217]
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestDumpLog ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestDumpLog ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestDumpLog successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestDumpLog (0.05s)
=== RUN   TestExtrasN1
=== PAUSE TestExtrasN1
=== RUN   TestExtrasN2
=== PAUSE TestExtrasN2
=== RUN   TestExtrasN3
=== PAUSE TestExtrasN3
=== RUN   TestGMRecovery
----------- running TestGMRecovery
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestGMRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestGMRecovery
LSS test.mvcc.TestGMRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestGMRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestGMRecovery) to LSS (test.mvcc.TestGMRecovery) and RecoveryLSS (test.mvcc.TestGMRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestGMRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestGMRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestGMRecovery to LSS test.mvcc.TestGMRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestGMRecovery/recovery], Data log [test.mvcc.TestGMRecovery], Shared [false]
LSS test.mvcc.TestGMRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [102.935µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestGMRecovery/recovery], Data log [test.mvcc.TestGMRecovery], Shared [false]. Built [0] plasmas, took [143.817µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestGMRecovery(shard1) : all daemons started
LSS test.mvcc.TestGMRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery started
recovery_point sn:1001 meta:rp
LSS test.mvcc.TestGMRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestGMRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery stopped
LSS test.mvcc.TestGMRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestGMRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery closed
Reopening database...
LSS test.mvcc.TestGMRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestGMRecovery
LSS test.mvcc.TestGMRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestGMRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestGMRecovery) to LSS (test.mvcc.TestGMRecovery) and RecoveryLSS (test.mvcc.TestGMRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestGMRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestGMRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestGMRecovery to LSS test.mvcc.TestGMRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestGMRecovery/recovery], Data log [test.mvcc.TestGMRecovery], Shared [false]
LSS test.mvcc.TestGMRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [1085440]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [20779008] took [38.711403ms]
LSS test.mvcc.TestGMRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [20783104] replayOffset [20779008]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [38.829339ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestGMRecovery/recovery], Data log [test.mvcc.TestGMRecovery], Shared [false]. Built [1] plasmas, took [40.328077ms]
Plasma: doInit: data UsedSpace 20783104 recovery UsedSpace 1086802
LSS test.mvcc.TestGMRecovery(shard1) : all daemons started
LSS test.mvcc.TestGMRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery started
Rolled back to rp
{
"memory_quota":         5242880,
"count":                0,
"compacts":             0,
"purges":               0,
"splits":               0,
"merges":               0,
"inserts":              0,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          1094456,
"memory_size_index":    444056,
"allocated":            1412792,
"freed":                318336,
"reclaimed":            318336,
"reclaim_pending":      0,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      444056,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          1000000,
"total_records":        1000000,
"num_rec_allocs":       0,
"num_rec_frees":        0,
"num_rec_swapout":      1000000,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       0,
"lss_data_size":        10747277,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        21178018,
"est_recovery_size":    1745770,
"lss_num_reads":        0,
"lss_read_bs":          36,
"lss_blk_read_bs":      4096,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           0,
"cache_misses":         0,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00000,
"mvcc_purge_ratio":     1.00000,
"currSn":               360004,
"gcSn":                 360003,
"gcSnIntervals":       "[0 1001 360004]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            23,
"num_free_wctxs":       21,
"num_readers":          0,
"num_writers":          0,
"page_bytes":           0,
"page_cnt":             0,
"page_itemcnt":         0,
"avg_item_size":        0,
"avg_page_size":        0,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":89571784,
"page_bytes_compressed":21063562,
"compression_ratio":    4.25245,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    875600,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          0,
    "lss_data_size":        11989237,
    "lss_used_space":       22958080,
    "lss_disk_size":        22958080,
    "lss_fragmentation":    47,
    "lss_num_reads":        0,
    "lss_read_bs":          1034792,
    "lss_blk_read_bs":      1089536,
    "bytes_written":        1089536,
    "bytes_incoming":       0,
    "write_amp":            0.00,
    "write_amp_avg":        0.00,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      21217280,
    "num_sctxs":            35,
    "num_free_sctxs":       24,
    "num_swapperWriter":    32
  }
}
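The stats dump above includes derived fields alongside the raw counters. As a sketch, the formulas below are inferred from the numbers in this dump (not taken from the plasma source): `lss_fragmentation` appears to be the integer percentage of LSS used space not occupied by live data, and `compression_ratio` appears to be marshalled bytes divided by compressed bytes.

```go
package main

import "fmt"

// lssFragPercent reproduces the value reported as "lss_fragmentation":
// the percentage of lss_used_space not accounted for by lss_data_size,
// truncated to an integer (assumed formula).
func lssFragPercent(usedSpace, dataSize int64) int64 {
	return (usedSpace - dataSize) * 100 / usedSpace
}

// compressionRatio reproduces the value reported as "compression_ratio":
// page_bytes_marshalled divided by page_bytes_compressed (assumed formula).
func compressionRatio(marshalled, compressed float64) float64 {
	return marshalled / compressed
}

func main() {
	// Inputs taken from the TestGMRecovery stats dump above.
	fmt.Println(lssFragPercent(22958080, 11989237))             // 47, matching "lss_fragmentation"
	fmt.Printf("%.5f\n", compressionRatio(89571784, 21063562)) // 4.25245, matching "compression_ratio"
}
```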
LSS test.mvcc.TestGMRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestGMRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery stopped
LSS test.mvcc.TestGMRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestGMRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestGMRecovery ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestGMRecovery ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestGMRecovery successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestGMRecovery (9.08s)
=== RUN   TestIteratorSimple
----------- running TestIteratorSimple
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestIteratorSimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSimple
LSS test.basic.TestIteratorSimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestIteratorSimple) to LSS (test.basic.TestIteratorSimple) and RecoveryLSS (test.basic.TestIteratorSimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestIteratorSimple
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSimple to LSS test.basic.TestIteratorSimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestIteratorSimple/recovery], Data log [test.basic.TestIteratorSimple], Shared [false]
LSS test.basic.TestIteratorSimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.12µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestIteratorSimple/recovery], Data log [test.basic.TestIteratorSimple], Shared [false]. Built [0] plasmas, took [104.948µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestIteratorSimple(shard1) : all daemons started
LSS test.basic.TestIteratorSimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestIteratorSimple started
LSS test.basic.TestIteratorSimple(shard1) : all daemons stopped
LSS test.basic.TestIteratorSimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSimple stopped
LSS test.basic.TestIteratorSimple(shard1) : LSSCleaner stopped
LSS test.basic.TestIteratorSimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSimple closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestIteratorSimple ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestIteratorSimple ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestIteratorSimple successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestIteratorSimple (5.10s)
=== RUN   TestIteratorSeek
----------- running TestIteratorSeek
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestIteratorSeek(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSeek
LSS test.basic.TestIteratorSeek/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSeek/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestIteratorSeek) to LSS (test.basic.TestIteratorSeek) and RecoveryLSS (test.basic.TestIteratorSeek/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestIteratorSeek
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSeek to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSeek to LSS test.basic.TestIteratorSeek
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestIteratorSeek/recovery], Data log [test.basic.TestIteratorSeek], Shared [false]
LSS test.basic.TestIteratorSeek/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.29µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestIteratorSeek/recovery], Data log [test.basic.TestIteratorSeek], Shared [false]. Built [0] plasmas, took [109.484µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestIteratorSeek(shard1) : all daemons started
LSS test.basic.TestIteratorSeek/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestIteratorSeek started
LSS test.basic.TestIteratorSeek(shard1) : all daemons stopped
LSS test.basic.TestIteratorSeek/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSeek stopped
LSS test.basic.TestIteratorSeek(shard1) : LSSCleaner stopped
LSS test.basic.TestIteratorSeek/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSeek closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestIteratorSeek ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestIteratorSeek ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestIteratorSeek successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestIteratorSeek (6.01s)
=== RUN   TestPlasmaIteratorSwapin
----------- running TestPlasmaIteratorSwapin
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaIteratorSwapin(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaIteratorSwapin
LSS test.basic.TestPlasmaIteratorSwapin/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaIteratorSwapin/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaIteratorSwapin) to LSS (test.basic.TestPlasmaIteratorSwapin) and RecoveryLSS (test.basic.TestPlasmaIteratorSwapin/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaIteratorSwapin
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaIteratorSwapin to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaIteratorSwapin to LSS test.basic.TestPlasmaIteratorSwapin
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaIteratorSwapin/recovery], Data log [test.basic.TestPlasmaIteratorSwapin], Shared [false]
LSS test.basic.TestPlasmaIteratorSwapin/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.022µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaIteratorSwapin/recovery], Data log [test.basic.TestPlasmaIteratorSwapin], Shared [false]. Built [0] plasmas, took [100.349µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaIteratorSwapin(shard1) : all daemons started
LSS test.basic.TestPlasmaIteratorSwapin/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaIteratorSwapin started
LSS test.basic.TestPlasmaIteratorSwapin(shard1) : all daemons stopped
LSS test.basic.TestPlasmaIteratorSwapin/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaIteratorSwapin stopped
LSS test.basic.TestPlasmaIteratorSwapin(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaIteratorSwapin/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaIteratorSwapin closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaIteratorSwapin ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaIteratorSwapin ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaIteratorSwapin successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaIteratorSwapin (4.96s)
=== RUN   TestIteratorSetEnd
----------- running TestIteratorSetEnd
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestIteratorSetEnd(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSetEnd
LSS test.basic.TestIteratorSetEnd/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestIteratorSetEnd/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestIteratorSetEnd) to LSS (test.basic.TestIteratorSetEnd) and RecoveryLSS (test.basic.TestIteratorSetEnd/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestIteratorSetEnd
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSetEnd to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestIteratorSetEnd to LSS test.basic.TestIteratorSetEnd
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestIteratorSetEnd/recovery], Data log [test.basic.TestIteratorSetEnd], Shared [false]
LSS test.basic.TestIteratorSetEnd/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.804µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestIteratorSetEnd/recovery], Data log [test.basic.TestIteratorSetEnd], Shared [false]. Built [0] plasmas, took [114.083µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestIteratorSetEnd(shard1) : all daemons started
LSS test.basic.TestIteratorSetEnd/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestIteratorSetEnd started
LSS test.basic.TestIteratorSetEnd(shard1) : all daemons stopped
LSS test.basic.TestIteratorSetEnd/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSetEnd stopped
LSS test.basic.TestIteratorSetEnd(shard1) : LSSCleaner stopped
LSS test.basic.TestIteratorSetEnd/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestIteratorSetEnd closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestIteratorSetEnd ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestIteratorSetEnd ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestIteratorSetEnd successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestIteratorSetEnd (0.49s)
=== RUN   TestIterHiItm
----------- running TestIterHiItm
TestIterHiItm
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestIterHiItm(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestIterHiItm
LSS test.default.TestIterHiItm/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestIterHiItm/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestIterHiItm) to LSS (test.default.TestIterHiItm) and RecoveryLSS (test.default.TestIterHiItm/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestIterHiItm
Shard shards/shard1(1) : Add instance test.default.TestIterHiItm to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestIterHiItm to LSS test.default.TestIterHiItm
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestIterHiItm/recovery], Data log [test.default.TestIterHiItm], Shared [false]
LSS test.default.TestIterHiItm/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.808µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestIterHiItm/recovery], Data log [test.default.TestIterHiItm], Shared [false]. Built [0] plasmas, took [115.585µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestIterHiItm(shard1) : all daemons started
LSS test.default.TestIterHiItm/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestIterHiItm started
LSS test.default.TestIterHiItm(shard1) : all daemons stopped
LSS test.default.TestIterHiItm/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestIterHiItm stopped
LSS test.default.TestIterHiItm(shard1) : LSSCleaner stopped
LSS test.default.TestIterHiItm/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestIterHiItm closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestIterHiItm ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestIterHiItm ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestIterHiItm successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestIterHiItm (1.81s)
=== RUN   TestIterDeleteSplitMerge
----------- running TestIterDeleteSplitMerge
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestIterDeleteSplitMerge(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestIterDeleteSplitMerge
LSS test.default.TestIterDeleteSplitMerge/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestIterDeleteSplitMerge/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestIterDeleteSplitMerge) to LSS (test.default.TestIterDeleteSplitMerge) and RecoveryLSS (test.default.TestIterDeleteSplitMerge/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestIterDeleteSplitMerge
Shard shards/shard1(1) : Add instance test.default.TestIterDeleteSplitMerge to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestIterDeleteSplitMerge to LSS test.default.TestIterDeleteSplitMerge
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestIterDeleteSplitMerge/recovery], Data log [test.default.TestIterDeleteSplitMerge], Shared [false]
LSS test.default.TestIterDeleteSplitMerge/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.633µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestIterDeleteSplitMerge/recovery], Data log [test.default.TestIterDeleteSplitMerge], Shared [false]. Built [0] plasmas, took [88.338µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestIterDeleteSplitMerge(shard1) : all daemons started
LSS test.default.TestIterDeleteSplitMerge/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestIterDeleteSplitMerge started
LSS test.default.TestIterDeleteSplitMerge(shard1) : all daemons stopped
LSS test.default.TestIterDeleteSplitMerge/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestIterDeleteSplitMerge stopped
LSS test.default.TestIterDeleteSplitMerge(shard1) : LSSCleaner stopped
LSS test.default.TestIterDeleteSplitMerge/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestIterDeleteSplitMerge closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestIterDeleteSplitMerge ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestIterDeleteSplitMerge ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestIterDeleteSplitMerge successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestIterDeleteSplitMerge (0.04s)
=== RUN   TestLogOperation
--- PASS: TestLogOperation (61.56s)
=== RUN   TestLogLargeSize
--- PASS: TestLogLargeSize (0.36s)
=== RUN   TestLogTrim
--- PASS: TestLogTrim (60.55s)
=== RUN   TestLogSuperblockCorruption
--- PASS: TestLogSuperblockCorruption (60.26s)
=== RUN   TestLogTrimHolePunch
--- PASS: TestLogTrimHolePunch (50.62s)
=== RUN   TestShardLSSCleaning
----------- running TestShardLSSCleaning
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaning_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardLSSCleaning_1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaning_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaning_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.155µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [122.095µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [132.119µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaning_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardLSSCleaning_2
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaning_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaning_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_2 started
before cleaning: log head offset 0 tail offset 622592
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 64, data: 222590, used: 622592 log:(0 - 622592)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 0, data: 222590, used: 10096, relocated: 98, retries: 0, skipped: 192 log:(0 - 630784) run:1 duration:46 ms
after cleaning: log head offset 620688 tail offset 839680
num pages for instance1 = 49, num pages for instance2 = 49
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_2 closed
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaning_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardLSSCleaning (0.22s)
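The before/after offsets in the run above let one cross-check the cleaner's bookkeeping: the post-clean head offset (620688) equals the log tail the cleaner reported at completion (630784) minus the used bytes remaining (10096). A minimal sketch of that relationship, inferred from the numbers in this log rather than stated by plasma itself:

```python
def head_after_clean(log_tail: int, used_after: int) -> int:
    # Inferred: the cleaner advances the head so that exactly `used_after`
    # bytes remain between the new head and the tail it saw at completion.
    return log_tail - used_after

# "logCleaner: completed... used: 10096 ... log:(0 - 630784)" followed by
# "after cleaning: log head offset 620688" in TestShardLSSCleaning above.
print(head_after_clean(630784, 10096))  # -> 620688
```

The same identity holds for the later cleaning tests in this log (e.g. 421888 - 7974 = 413914).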
=== RUN   TestShardLSSCleaningDeleteInstance
----------- running TestShardLSSCleaningDeleteInstance
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaningDeleteInstance_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardLSSCleaningDeleteInstance_1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningDeleteInstance_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningDeleteInstance_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.417µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [128.721µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [136.019µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaningDeleteInstance_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardLSSCleaningDeleteInstance_2
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningDeleteInstance_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningDeleteInstance_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_2 started
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_2 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardLSSCleaningDeleteInstance_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardLSSCleaningDeleteInstance_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_2 successfully destroyed
before cleaning: log head offset 0 tail offset 417792
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 73, data: 111364, used: 417792 log:(0 - 417792)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 0, data: 111295, used: 7974, relocated: 49, retries: 0, skipped: 47 log:(0 - 421888) run:1 duration:14 ms
after cleaning: log head offset 413914 tail offset 528384
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningDeleteInstance_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardLSSCleaningDeleteInstance (0.14s)
=== RUN   TestShardLSSCleaningCorruptInstance
----------- running TestShardLSSCleaningCorruptInstance
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaningCorruptInstance_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardLSSCleaningCorruptInstance_1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningCorruptInstance_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningCorruptInstance_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [61.683µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [106.981µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [114.198µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardLSSCleaningCorruptInstance_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardLSSCleaningCorruptInstance_2
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningCorruptInstance_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardLSSCleaningCorruptInstance_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_2 started
before cleaning: log head offset 0 tail offset 417792
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 46, data: 222728, used: 417792 log:(0 - 417792)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 0, data: 222659, used: 6595, relocated: 49, retries: 0, skipped: 47 log:(0 - 421888) run:1 duration:15 ms
after cleaning: log head offset 415293 tail offset 528384
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_2 closed
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardLSSCleaningCorruptInstance_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardLSSCleaningCorruptInstance (0.18s)
=== RUN   TestPlasmaLSSCleaner
----------- running TestPlasmaLSSCleaner
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaLSSCleaner(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaLSSCleaner
LSS test.basic.TestPlasmaLSSCleaner/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaLSSCleaner/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaLSSCleaner) to LSS (test.basic.TestPlasmaLSSCleaner) and RecoveryLSS (test.basic.TestPlasmaLSSCleaner/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaLSSCleaner
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaLSSCleaner to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaLSSCleaner to LSS test.basic.TestPlasmaLSSCleaner
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaLSSCleaner/recovery], Data log [test.basic.TestPlasmaLSSCleaner], Shared [false]
LSS test.basic.TestPlasmaLSSCleaner/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.967µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaLSSCleaner/recovery], Data log [test.basic.TestPlasmaLSSCleaner], Shared [false]. Built [0] plasmas, took [92.816µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaLSSCleaner(shard1) : all daemons started
LSS test.basic.TestPlasmaLSSCleaner/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaLSSCleaner started
LSSInfo: frag:48, ds:8129142, used:15728640
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             9949,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          16637592,
"memory_size_index":    264256,
"allocated":            113819304,
"freed":                97181712,
"reclaimed":            97142640,
"reclaim_pending":      39072,
"reclaim_list_size":    39072,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000000,
"num_rec_allocs":       5004271,
"num_rec_frees":        4004271,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       8000000,
"lss_data_size":        8129408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16314984,
"est_recovery_size":    537234,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           20097624,
"page_cnt":             9362,
"page_itemcnt":         2512203,
"avg_item_size":        8,
"avg_page_size":        2146,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16255290,
"page_bytes_compressed":16255290,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    597992,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3276,
    "lss_data_size":        8388136,
    "lss_used_space":       16891904,
    "lss_disk_size":        16891904,
    "lss_fragmentation":    50,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        16891904,
    "bytes_incoming":       8000000,
    "write_amp":            0.00,
    "write_amp_avg":        2.04,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      16351232,
    "num_sctxs":            28,
    "num_free_sctxs":       17,
    "num_swapperWriter":    32
  }
}

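The lss_stats block above reports lss_fragmentation next to lss_data_size and lss_used_space. Throughout this log the three are consistent with fragmentation being the truncated percentage of used log space that no longer holds live data. A minimal Python sketch of that relationship, deduced from the dumps printed here rather than from plasma documentation:

```python
def fragmentation(data_size: int, used_space: int) -> int:
    # Percentage of used log space not occupied by live data, truncated to int.
    return int(100 * (used_space - data_size) / used_space)

# Values copied from the lss_stats dump above:
print(fragmentation(8388136, 16891904))  # -> 50, matching "lss_fragmentation": 50
# The logCleaner lines print the same triple, e.g. "frag 64, data: 222590, used: 622592":
print(fragmentation(222590, 622592))     # -> 64
```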
Running iteration.. 0
LSS test.basic.TestPlasmaLSSCleaner(shard1) : logCleaner: starting... frag 56, data: 7129402, used: 16470016 log:(0 - 16470016)
LSS test.basic.TestPlasmaLSSCleaner(shard1) : logCleaner: completed... frag 9, data: 7949278, used: 8816744, relocated: 4242, retries: 45, skipped: 4243 log:(0 - 22761472) run:1 duration:259 ms
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             19899,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              2000000,
"deletes":              1000000,
"compact_conflicts":    2,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     1,
"swapin_conflicts":     0,
"persist_conflicts":    100,
"memory_size":          16715672,
"memory_size_index":    264256,
"allocated":            259265680,
"freed":                242550008,
"reclaimed":            242432808,
"reclaim_pending":      117200,
"reclaim_list_size":    117200,
"reclaim_list_count":   9,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000052,
"num_rec_allocs":       9999095,
"num_rec_frees":        8999043,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       24000000,
"lss_data_size":        6531314,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8166014,
"est_recovery_size":    1618870,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           53766008,
"page_cnt":             28459,
"page_itemcnt":         6720751,
"avg_item_size":        8,
"avg_page_size":        1889,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":30921270,
"page_bytes_compressed":30921270,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    710280,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          20028,
    "lss_data_size":        6739006,
    "lss_used_space":       9042784,
    "lss_disk_size":        32178176,
    "lss_fragmentation":    25,
    "lss_num_reads":        14085,
    "lss_read_bs":          23037352,
    "lss_blk_read_bs":      23166976,
    "bytes_written":        32178176,
    "bytes_incoming":       24000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.28,
    "lss_gc_num_reads":     14085,
    "lss_gc_reads_bs":      23037352,
    "lss_blk_gc_reads_bs":  23166976,
    "lss_gc_status":        "frag 9, data: 6853404, used: 7539552, relocated: 46832, retries: 315, skipped: 37619 log:(13943088 - 30674944) run:7 duration:6020 ms",
    "lss_head_offset":      23133752,
    "lss_tail_offset":      30674944,
    "num_sctxs":            29,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
Running iteration.. 1
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             29849,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              3000000,
"deletes":              2000000,
"compact_conflicts":    3,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    118,
"memory_size":          17020152,
"memory_size_index":    264256,
"allocated":            388226768,
"freed":                371206616,
"reclaimed":            371024952,
"reclaim_pending":      181664,
"reclaim_list_size":    181664,
"reclaim_list_count":   14,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1008758,
"num_rec_allocs":       13998944,
"num_rec_frees":        12990186,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       40000000,
"lss_data_size":        6725110,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8272944,
"est_recovery_size":    2158972,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           5000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           64409648,
"page_cnt":             36154,
"page_itemcnt":         8051206,
"avg_item_size":        8,
"avg_page_size":        1781,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":39295088,
"page_bytes_compressed":39295088,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    915480,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          20740,
    "lss_data_size":        6936962,
    "lss_used_space":       10400506,
    "lss_disk_size":        41848832,
    "lss_fragmentation":    33,
    "lss_num_reads":        19144,
    "lss_read_bs":          31321872,
    "lss_blk_read_bs":      31510528,
    "bytes_written":        41848832,
    "bytes_incoming":       40000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.99,
    "lss_gc_num_reads":     19144,
    "lss_gc_reads_bs":      31321872,
    "lss_blk_gc_reads_bs":  31510528,
    "lss_gc_status":        "frag 10, data: 7365478, used: 8225530, relocated: 127466, retries: 675, skipped: 90856 log:(13943088 - 39673856) run:15 duration:14003 ms",
    "lss_head_offset":      31280842,
    "lss_tail_offset":      39673856,
    "num_sctxs":            29,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
Running iteration.. 2
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             39799,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              4000000,
"deletes":              3000000,
"compact_conflicts":    3,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     4,
"swapin_conflicts":     0,
"persist_conflicts":    118,
"memory_size":          17160568,
"memory_size_index":    264256,
"allocated":            517480112,
"freed":                500319544,
"reclaimed":            499653304,
"reclaim_pending":      666240,
"reclaim_list_size":    666240,
"reclaim_list_count":   141,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1009862,
"num_rec_allocs":       18018491,
"num_rec_frees":        17008629,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       56000000,
"lss_data_size":        5960336,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8263690,
"est_recovery_size":    2698736,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           7000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36766664,
"page_cnt":             22921,
"page_itemcnt":         4595833,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":46948366,
"page_bytes_compressed":46948366,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1076488,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          21278,
    "lss_data_size":        6147592,
    "lss_used_space":       10128408,
    "lss_disk_size":        49303552,
    "lss_fragmentation":    39,
    "lss_num_reads":        23812,
    "lss_read_bs":          39021152,
    "lss_blk_read_bs":      39272448,
    "bytes_written":        49303552,
    "bytes_incoming":       56000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.84,
    "lss_gc_num_reads":     23812,
    "lss_gc_reads_bs":      39021152,
    "lss_blk_gc_reads_bs":  39272448,
    "lss_gc_status":        "frag 9, data: 6912578, used: 7617560, relocated: 236738, retries: 1035, skipped: 153959 log:(13943088 - 46792704) run:23 duration:22020 ms",
    "lss_head_offset":      38334070,
    "lss_tail_offset":      46792704,
    "num_sctxs":            29,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
Running iteration.. 3
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             49750,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              5000000,
"deletes":              4000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     3,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    164,
"memory_size":          17487192,
"memory_size_index":    264256,
"allocated":            646451768,
"freed":                628964576,
"reclaimed":            628640392,
"reclaim_pending":      324184,
"reclaim_list_size":    324184,
"reclaim_list_count":   25,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1018298,
"num_rec_allocs":       22016957,
"num_rec_frees":        20998659,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       72000000,
"lss_data_size":        7026168,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8561628,
"est_recovery_size":    3302342,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           9000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36792304,
"page_cnt":             22937,
"page_itemcnt":         4599038,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":56280826,
"page_bytes_compressed":56280826,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1306352,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          21298,
    "lss_data_size":        7244936,
    "lss_used_space":       11128672,
    "lss_disk_size":        59383808,
    "lss_fragmentation":    34,
    "lss_num_reads":        29272,
    "lss_read_bs":          48067428,
    "lss_blk_read_bs":      48386048,
    "bytes_written":        59383808,
    "bytes_incoming":       72000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.78,
    "lss_gc_num_reads":     29272,
    "lss_gc_reads_bs":      48067428,
    "lss_blk_gc_reads_bs":  48386048,
    "lss_gc_status":        "frag 8, data: 7264668, used: 7933792, relocated: 394676, retries: 1440, skipped: 236619 log:(13943088 - 56188928) run:32 duration:31019 ms",
    "lss_head_offset":      47750016,
    "lss_tail_offset":      56188928,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 4
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             59700,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              6000000,
"deletes":              5000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    258,
"memory_size":          17672760,
"memory_size_index":    264256,
"allocated":            775350808,
"freed":                757678048,
"reclaimed":            757289216,
"reclaim_pending":      388832,
"reclaim_list_size":    388832,
"reclaim_list_count":   30,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019672,
"num_rec_allocs":       26013188,
"num_rec_frees":        24993516,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       88000000,
"lss_data_size":        7127258,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8388472,
"est_recovery_size":    3832438,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           11000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34697208,
"page_cnt":             21631,
"page_itemcnt":         4337151,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":64714694,
"page_bytes_compressed":64714694,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1510048,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          21490,
    "lss_data_size":        7348574,
    "lss_used_space":       12386510,
    "lss_disk_size":        69427200,
    "lss_fragmentation":    40,
    "lss_num_reads":        34478,
    "lss_read_bs":          56819286,
    "lss_blk_read_bs":      57200640,
    "bytes_written":        69427200,
    "bytes_incoming":       88000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.75,
    "lss_gc_num_reads":     34478,
    "lss_gc_reads_bs":      56819286,
    "lss_blk_gc_reads_bs":  57200640,
    "lss_gc_status":        "frag 9, data: 7701614, used: 8519886, relocated: 564890, retries: 2056, skipped: 321586 log:(13943088 - 65560576) run:40 duration:39005 ms",
    "lss_head_offset":      56020674,
    "lss_tail_offset":      65560576,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 5
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             69650,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              7000000,
"deletes":              6000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     9,
"swapin_conflicts":     0,
"persist_conflicts":    285,
"memory_size":          17918680,
"memory_size_index":    264256,
"allocated":            904546360,
"freed":                886627680,
"reclaimed":            885804624,
"reclaim_pending":      823056,
"reclaim_list_size":    823056,
"reclaim_list_count":   141,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1027870,
"num_rec_allocs":       30029519,
"num_rec_frees":        29001649,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       104000000,
"lss_data_size":        6280118,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8420036,
"est_recovery_size":    4356818,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           13000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34830696,
"page_cnt":             21714,
"page_itemcnt":         4353837,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":72375500,
"page_bytes_compressed":72375500,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1662128,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          21492,
    "lss_data_size":        6473354,
    "lss_used_space":       12156634,
    "lss_disk_size":        77029376,
    "lss_fragmentation":    46,
    "lss_num_reads":        39101,
    "lss_read_bs":          64625114,
    "lss_blk_read_bs":      65069056,
    "bytes_written":        77029376,
    "bytes_incoming":       104000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.70,
    "lss_gc_num_reads":     39101,
    "lss_gc_reads_bs":      64625114,
    "lss_blk_gc_reads_bs":  65069056,
    "lss_gc_status":        "frag 10, data: 7138920, used: 7958234, relocated: 762857, retries: 2928, skipped: 417244 log:(13943088 - 72830976) run:48 duration:47021 ms",
    "lss_head_offset":      64531296,
    "lss_tail_offset":      72830976,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 6
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             79600,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              8000000,
"deletes":              7000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     10,
"swapin_conflicts":     0,
"persist_conflicts":    357,
"memory_size":          18116280,
"memory_size_index":    264256,
"allocated":            1033776216,
"freed":                1015659936,
"reclaimed":            1015072736,
"reclaim_pending":      587200,
"reclaim_list_size":    587200,
"reclaim_list_count":   59,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1028932,
"num_rec_allocs":       34043638,
"num_rec_frees":        33014706,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       120000000,
"lss_data_size":        7256298,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8758386,
"est_recovery_size":    4957248,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           15000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           32987856,
"page_cnt":             20565,
"page_itemcnt":         4123482,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":81921134,
"page_bytes_compressed":81921134,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1878768,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          21492,
    "lss_data_size":        7479174,
    "lss_used_space":       12970540,
    "lss_disk_size":        20144128,
    "lss_fragmentation":    42,
    "lss_num_reads":        44677,
    "lss_read_bs":          74003512,
    "lss_blk_read_bs":      74514432,
    "bytes_written":        87252992,
    "bytes_incoming":       120000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.69,
    "lss_gc_num_reads":     44677,
    "lss_gc_reads_bs":      74003512,
    "lss_blk_gc_reads_bs":  74514432,
    "lss_gc_status":        "frag 8, data: 7428566, used: 8096300, relocated: 1019731, retries: 3909, skipped: 537503 log:(13943088 - 82378752) run:57 duration:56022 ms",
    "lss_head_offset":      73939052,
    "lss_tail_offset":      82378752,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 7
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             89551,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              9000000,
"deletes":              8000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     11,
"swapin_conflicts":     0,
"persist_conflicts":    373,
"memory_size":          18393016,
"memory_size_index":    264256,
"allocated":            1162763120,
"freed":                1144370104,
"reclaimed":            1143774592,
"reclaim_pending":      595512,
"reclaim_list_size":    595512,
"reclaim_list_count":   46,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1036972,
"num_rec_allocs":       38045321,
"num_rec_frees":        37008349,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       136000000,
"lss_data_size":        7331510,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8758722,
"est_recovery_size":    5472428,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           17000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36710448,
"page_cnt":             22886,
"page_itemcnt":         4588806,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":90476150,
"page_bytes_compressed":90476150,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2063760,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          22052,
    "lss_data_size":        7554802,
    "lss_used_space":       14326664,
    "lss_disk_size":        30175232,
    "lss_fragmentation":    47,
    "lss_num_reads":        49806,
    "lss_read_bs":          82645638,
    "lss_blk_read_bs":      83222528,
    "bytes_written":        97284096,
    "bytes_incoming":       136000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.67,
    "lss_gc_num_reads":     49806,
    "lss_gc_reads_bs":      82645638,
    "lss_blk_gc_reads_bs":  83222528,
    "lss_gc_status":        "frag 10, data: 7857264, used: 8797064, relocated: 1277665, retries: 4781, skipped: 656202 log:(13943088 - 91754496) run:65 duration:64003 ms",
    "lss_head_offset":      82440024,
    "lss_tail_offset":      91754496,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 8
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             99501,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              10000000,
"deletes":              9000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     11,
"swapin_conflicts":     0,
"persist_conflicts":    373,
"memory_size":          18525016,
"memory_size_index":    264256,
"allocated":            1291674960,
"freed":                1273149944,
"reclaimed":            1272399160,
"reclaim_pending":      750784,
"reclaim_list_size":    90272,
"reclaim_list_count":   26,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1038496,
"num_rec_allocs":       42045572,
"num_rec_frees":        41007076,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       152000000,
"lss_data_size":        5994724,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8549304,
"est_recovery_size":    5925554,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           19000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36651104,
"page_cnt":             22849,
"page_itemcnt":         4581388,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":97682480,
"page_bytes_compressed":97682480,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2210952,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          22222,
    "lss_data_size":        6176832,
    "lss_used_space":       13424850,
    "lss_disk_size":        36728832,
    "lss_fragmentation":    53,
    "lss_num_reads":        54161,
    "lss_read_bs":          90078556,
    "lss_blk_read_bs":      90710016,
    "bytes_written":        103837696,
    "bytes_incoming":       152000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.64,
    "lss_gc_num_reads":     54161,
    "lss_gc_reads_bs":      90078556,
    "lss_blk_gc_reads_bs":  90710016,
    "lss_gc_status":        "frag 10, data: 6798762, used: 7592146, relocated: 1559363, retries: 5653, skipped: 788261 log:(13943088 - 98004992) run:73 duration:72022 ms",
    "lss_head_offset":      89718944,
    "lss_tail_offset":      98004992,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 9
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             109451,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              11000000,
"deletes":              10000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    388,
"memory_size":          18835576,
"memory_size_index":    264256,
"allocated":            1420883760,
"freed":                1402048184,
"reclaimed":            1401685752,
"reclaim_pending":      362432,
"reclaim_list_size":    362432,
"reclaim_list_count":   85,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1046332,
"num_rec_allocs":       46057883,
"num_rec_frees":        45011551,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       168000000,
"lss_data_size":        7272468,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8623914,
"est_recovery_size":    6489252,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           21000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36678296,
"page_cnt":             22866,
"page_itemcnt":         4584787,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":107631764,
"page_bytes_compressed":107631764,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2431056,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          22222,
    "lss_data_size":        7491548,
    "lss_used_space":       14617434,
    "lss_disk_size":        47861760,
    "lss_fragmentation":    48,
    "lss_num_reads":        59941,
    "lss_read_bs":          99988054,
    "lss_blk_read_bs":      100687872,
    "bytes_written":        114970624,
    "bytes_incoming":       168000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.65,
    "lss_gc_num_reads":     59941,
    "lss_gc_reads_bs":      99988054,
    "lss_blk_gc_reads_bs":  100687872,
    "lss_gc_status":        "frag 10, data: 7333200, used: 8178522, relocated: 1903658, retries: 6634, skipped: 956219 log:(13943088 - 108531712) run:82 duration:81026 ms",
    "lss_head_offset":      99308424,
    "lss_tail_offset":      108531712,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 10
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             119401,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              12000000,
"deletes":              11000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     13,
"swapin_conflicts":     0,
"persist_conflicts":    397,
"memory_size":          18985368,
"memory_size_index":    264256,
"allocated":            1550199312,
"freed":                1531213944,
"reclaimed":            1530812296,
"reclaim_pending":      401648,
"reclaim_list_size":    401648,
"reclaim_list_count":   81,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1046728,
"num_rec_allocs":       50078234,
"num_rec_frees":        49031506,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       184000000,
"lss_data_size":        7780560,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9215502,
"est_recovery_size":    7016382,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           23000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34965360,
"page_cnt":             21798,
"page_itemcnt":         4370670,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":116948894,
"page_bytes_compressed":116948894,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2605752,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          22222,
    "lss_data_size":        8014668,
    "lss_used_space":       16203020,
    "lss_disk_size":        58245120,
    "lss_fragmentation":    50,
    "lss_num_reads":        65016,
    "lss_read_bs":          108752568,
    "lss_blk_read_bs":      109518848,
    "bytes_written":        125353984,
    "bytes_incoming":       184000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.64,
    "lss_gc_num_reads":     65016,
    "lss_gc_reads_bs":      108752568,
    "lss_blk_gc_reads_bs":  109518848,
    "lss_gc_status":        "frag 10, data: 8144138, used: 9125132, relocated: 2235128, retries: 7506, skipped: 1121664 log:(13943088 - 118276096) run:90 duration:89009 ms",
    "lss_head_offset":      109149224,
    "lss_tail_offset":      118276096,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 11
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             129352,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              13000000,
"deletes":              12000000,
"compact_conflicts":    11,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     14,
"swapin_conflicts":     0,
"persist_conflicts":    422,
"memory_size":          19207736,
"memory_size_index":    264256,
"allocated":            1679251512,
"freed":                1660043776,
"reclaimed":            1659650792,
"reclaim_pending":      392984,
"reclaim_list_size":    392984,
"reclaim_list_count":   64,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1054386,
"num_rec_allocs":       54085946,
"num_rec_frees":        53031560,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       200000000,
"lss_data_size":        6905228,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8830042,
"est_recovery_size":    7521946,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           25000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35129472,
"page_cnt":             21900,
"page_itemcnt":         4391184,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":124781898,
"page_bytes_compressed":124781898,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2739160,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          23782,
    "lss_data_size":        7111360,
    "lss_used_space":       15846178,
    "lss_disk_size":        66154496,
    "lss_fragmentation":    55,
    "lss_num_reads":        69779,
    "lss_read_bs":          116990900,
    "lss_blk_read_bs":      117817344,
    "bytes_written":        133263360,
    "bytes_incoming":       200000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     69779,
    "lss_gc_reads_bs":      116990900,
    "lss_blk_gc_reads_bs":  117817344,
    "lss_gc_status":        "frag 10, data: 7540540, used: 8436514, relocated: 2592940, retries: 8378, skipped: 1300711 log:(13943088 - 125853696) run:98 duration:97040 ms",
    "lss_head_offset":      116716238,
    "lss_tail_offset":      125853696,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 12
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             139302,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              14000000,
"deletes":              13000000,
"compact_conflicts":    13,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     17,
"swapin_conflicts":     0,
"persist_conflicts":    439,
"memory_size":          19336696,
"memory_size_index":    264256,
"allocated":            1808444040,
"freed":                1789107344,
"reclaimed":            1788470944,
"reclaim_pending":      636400,
"reclaim_list_size":    636400,
"reclaim_list_count":   115,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1054960,
"num_rec_allocs":       58100468,
"num_rec_frees":        57045508,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       216000000,
"lss_data_size":        6710074,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8862088,
"est_recovery_size":    7999708,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           27000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34877352,
"page_cnt":             21743,
"page_itemcnt":         4359669,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":133427688,
"page_bytes_compressed":133427688,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2890800,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          24008,
    "lss_data_size":        6909966,
    "lss_used_space":       16421952,
    "lss_disk_size":        75423744,
    "lss_fragmentation":    57,
    "lss_num_reads":        74759,
    "lss_read_bs":          125651580,
    "lss_blk_read_bs":      126541824,
    "bytes_written":        142532608,
    "bytes_incoming":       216000000,
    "write_amp":            0.69,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     74759,
    "lss_gc_reads_bs":      125651580,
    "lss_blk_gc_reads_bs":  126541824,
    "lss_gc_status":        "frag 8, data: 7673980, used: 8389696, relocated: 2974855, retries: 9405, skipped: 1494484 log:(13943088 - 134500352) run:106 duration:105012 ms",
    "lss_head_offset":      125220010,
    "lss_tail_offset":      134500352,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 13
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             149252,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              15000000,
"deletes":              14000000,
"compact_conflicts":    15,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     8,
"delete_conflicts":     18,
"swapin_conflicts":     0,
"persist_conflicts":    533,
"memory_size":          19464152,
"memory_size_index":    264256,
"allocated":            1938968600,
"freed":                1919504448,
"reclaimed":            1918289568,
"reclaim_pending":      1214880,
"reclaim_list_size":    1214880,
"reclaim_list_count":   258,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1058760,
"num_rec_allocs":       62191370,
"num_rec_frees":        61132610,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       232000000,
"lss_data_size":        8197600,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9611480,
"est_recovery_size":    8655026,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           29000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35536704,
"page_cnt":             22153,
"page_itemcnt":         4442088,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":144408340,
"page_bytes_compressed":144408340,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2960696,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          24008,
    "lss_data_size":        8441120,
    "lss_used_space":       18164116,
    "lss_disk_size":        20500480,
    "lss_fragmentation":    53,
    "lss_num_reads":        80699,
    "lss_read_bs":          136059908,
    "lss_blk_read_bs":      137023488,
    "bytes_written":        154718208,
    "bytes_incoming":       232000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     80699,
    "lss_gc_reads_bs":      136059908,
    "lss_blk_gc_reads_bs":  137023488,
    "lss_gc_status":        "frag 10, data: 8444110, used: 9447828, relocated: 3437169, retries: 11101, skipped: 1728380 log:(13943088 - 146001920) run:115 duration:114017 ms",
    "lss_head_offset":      136027848,
    "lss_tail_offset":      146001920,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 14
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             159202,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              16000000,
"deletes":              15000000,
"compact_conflicts":    17,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     9,
"delete_conflicts":     19,
"swapin_conflicts":     0,
"persist_conflicts":    558,
"memory_size":          19472888,
"memory_size_index":    264256,
"allocated":            2068548504,
"freed":                2049075616,
"reclaimed":            2048113216,
"reclaim_pending":      962400,
"reclaim_list_size":    962400,
"reclaim_list_count":   171,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1057334,
"num_rec_allocs":       66230615,
"num_rec_frees":        65173281,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       248000000,
"lss_data_size":        7334804,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9336324,
"est_recovery_size":    9177800,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           31000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33886704,
"page_cnt":             21124,
"page_itemcnt":         4235838,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":152629080,
"page_bytes_compressed":152629080,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3008008,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          24256,
    "lss_data_size":        7552740,
    "lss_used_space":       17778370,
    "lss_disk_size":        28852224,
    "lss_fragmentation":    57,
    "lss_num_reads":        85692,
    "lss_read_bs":          144768856,
    "lss_blk_read_bs":      145793024,
    "bytes_written":        163069952,
    "bytes_incoming":       248000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     85692,
    "lss_gc_reads_bs":      144768856,
    "lss_blk_gc_reads_bs":  145793024,
    "lss_gc_status":        "frag 9, data: 7899860, used: 8709826, relocated: 3877217, retries: 12829, skipped: 1950270 log:(13943088 - 154001408) run:123 duration:122022 ms",
    "lss_head_offset":      144755506,
    "lss_tail_offset":      154001408,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 15
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             169153,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              17000000,
"deletes":              16000000,
"compact_conflicts":    22,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     13,
"delete_conflicts":     20,
"swapin_conflicts":     0,
"persist_conflicts":    568,
"memory_size":          19396984,
"memory_size_index":    264256,
"allocated":            2199541936,
"freed":                2180144952,
"reclaimed":            2178989440,
"reclaim_pending":      1155512,
"reclaim_list_size":    1155512,
"reclaim_list_count":   214,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1057952,
"num_rec_allocs":       70353702,
"num_rec_frees":        69295750,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       264000000,
"lss_data_size":        6994118,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9421244,
"est_recovery_size":    9728396,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           33000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29952072,
"page_cnt":             18671,
"page_itemcnt":         3744009,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":162057044,
"page_bytes_compressed":162057044,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2904032,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          25996,
    "lss_data_size":        7201446,
    "lss_used_space":       18596600,
    "lss_disk_size":        39223296,
    "lss_fragmentation":    61,
    "lss_num_reads":        91121,
    "lss_read_bs":          154285274,
    "lss_blk_read_bs":      155377664,
    "bytes_written":        173441024,
    "bytes_incoming":       264000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     91121,
    "lss_gc_reads_bs":      154285274,
    "lss_blk_gc_reads_bs":  155377664,
    "lss_gc_status":        "frag 10, data: 7897098, used: 8827640, relocated: 4345376, retries: 15441, skipped: 2185906 log:(13943088 - 163672064) run:131 duration:130010 ms",
    "lss_head_offset":      154842642,
    "lss_tail_offset":      163672064,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 16
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             179103,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              18000000,
"deletes":              17000000,
"compact_conflicts":    23,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     13,
"delete_conflicts":     22,
"swapin_conflicts":     0,
"persist_conflicts":    597,
"memory_size":          16800312,
"memory_size_index":    264256,
"allocated":            2344018008,
"freed":                2327217696,
"reclaimed":            2326673600,
"reclaim_pending":      544096,
"reclaim_list_size":    544096,
"reclaim_list_count":   42,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000718,
"num_rec_allocs":       75283402,
"num_rec_frees":        74282684,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       280000000,
"lss_data_size":        7812576,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9026780,
"est_recovery_size":    10886496,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           35000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36949120,
"page_cnt":             23022,
"page_itemcnt":         4618640,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":179206422,
"page_bytes_compressed":179206422,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    790296,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          26004,
    "lss_data_size":        8060776,
    "lss_used_space":       20062416,
    "lss_disk_size":        58707968,
    "lss_fragmentation":    59,
    "lss_num_reads":        101582,
    "lss_read_bs":          172246452,
    "lss_blk_read_bs":      173432832,
    "bytes_written":        192925696,
    "bytes_incoming":       280000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.65,
    "lss_gc_num_reads":     101582,
    "lss_gc_reads_bs":      172246452,
    "lss_blk_gc_reads_bs":  173432832,
    "lss_gc_status":        "frag 10, data: 8124862, used: 9060560, relocated: 4935104, retries: 18935, skipped: 2470214 log:(13943088 - 181923840) run:140 duration:139003 ms",
    "lss_head_offset":      171866536,
    "lss_tail_offset":      181923840,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 17
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             189053,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              19000000,
"deletes":              18000000,
"compact_conflicts":    27,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     17,
"delete_conflicts":     22,
"swapin_conflicts":     0,
"persist_conflicts":    605,
"memory_size":          17036984,
"memory_size_index":    264256,
"allocated":            2473242616,
"freed":                2456205632,
"reclaimed":            2455303904,
"reclaim_pending":      901728,
"reclaim_list_size":    901728,
"reclaim_list_count":   134,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1009404,
"num_rec_allocs":       79302949,
"num_rec_frees":        78293545,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       296000000,
"lss_data_size":        6359078,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8318480,
"est_recovery_size":    11328258,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           37000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           37186848,
"page_cnt":             23173,
"page_itemcnt":         4648356,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":186112170,
"page_bytes_compressed":186112170,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    927216,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          26004,
    "lss_data_size":        6559178,
    "lss_used_space":       18829890,
    "lss_disk_size":        65122304,
    "lss_fragmentation":    65,
    "lss_num_reads":        106233,
    "lss_read_bs":          179866952,
    "lss_blk_read_bs":      181108736,
    "bytes_written":        199340032,
    "bytes_incoming":       296000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.64,
    "lss_gc_num_reads":     106233,
    "lss_gc_reads_bs":      179866952,
    "lss_blk_gc_reads_bs":  181108736,
    "lss_gc_status":        "frag 7, data: 7000670, used: 7533122, relocated: 5419981, retries: 21901, skipped: 2707005 log:(13943088 - 188043264) run:147 duration:147010 ms",
    "lss_head_offset":      180009648,
    "lss_tail_offset":      188043264,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 18
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             199003,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              20000000,
"deletes":              19000000,
"compact_conflicts":    29,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     19,
"delete_conflicts":     23,
"swapin_conflicts":     0,
"persist_conflicts":    631,
"memory_size":          17204504,
"memory_size_index":    264256,
"allocated":            2603031864,
"freed":                2585827360,
"reclaimed":            2585306784,
"reclaim_pending":      520576,
"reclaim_list_size":    377880,
"reclaim_list_count":   106,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010026,
"num_rec_allocs":       83351842,
"num_rec_frees":        82341816,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       312000000,
"lss_data_size":        6813856,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        8936622,
"est_recovery_size":    11925580,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           39000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33166472,
"page_cnt":             20676,
"page_itemcnt":         4145809,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":195238136,
"page_bytes_compressed":195238136,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1119872,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          26004,
    "lss_data_size":        7027944,
    "lss_used_space":       20422284,
    "lss_disk_size":        75513856,
    "lss_fragmentation":    65,
    "lss_num_reads":        111498,
    "lss_read_bs":          188636090,
    "lss_blk_read_bs":      189943808,
    "bytes_written":        209731584,
    "bytes_incoming":       312000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     111498,
    "lss_gc_reads_bs":      188636090,
    "lss_blk_gc_reads_bs":  189943808,
    "lss_gc_status":        "frag 8, data: 7737906, used: 8416908, relocated: 6002946, retries: 25821, skipped: 2988546 log:(13943088 - 197726208) run:155 duration:155014 ms",
    "lss_head_offset":      188302224,
    "lss_tail_offset":      197726208,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
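Several fields in the stats dump above are derived ratios. As a sanity check, a minimal sketch recomputing a few of them from the iteration-18 values; the field semantics (hit ratio, compression ratio, per-page integer averages) are assumptions inferred from the numbers, not taken from plasma source.

```python
# Recompute derived ratios from the iteration-18 stats dump above.
# Field meanings are assumptions, not lifted from plasma source.

stats = {
    "cache_hits": 39000000,
    "cache_misses": 0,
    "page_bytes_marshalled": 195238136,
    "page_bytes_compressed": 195238136,
    "page_bytes": 33166472,
    "page_cnt": 20676,
    "page_itemcnt": 4145809,
}

# cache_hit_ratio = hits / (hits + misses)
hit_ratio = stats["cache_hits"] / (stats["cache_hits"] + stats["cache_misses"])

# compression_ratio = marshalled bytes / compressed bytes (1.0 => no gain)
compression_ratio = stats["page_bytes_marshalled"] / stats["page_bytes_compressed"]

# avg_page_size / avg_item_size appear to be integer averages over flushed pages
avg_page_size = stats["page_bytes"] // stats["page_cnt"]
avg_item_size = stats["page_bytes"] // stats["page_itemcnt"]

print(hit_ratio, compression_ratio, avg_page_size, avg_item_size)
```

Each value matches the corresponding field printed in the dump (1.00000, 1.00000, 1604, 8), which supports the assumed definitions.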
Running iteration.. 19
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             208954,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              21000000,
"deletes":              20000000,
"compact_conflicts":    31,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     21,
"delete_conflicts":     24,
"swapin_conflicts":     0,
"persist_conflicts":    678,
"memory_size":          17522424,
"memory_size_index":    264256,
"allocated":            2732554640,
"freed":                2715032216,
"reclaimed":            2714940592,
"reclaim_pending":      91624,
"reclaim_list_size":    91624,
"reclaim_list_count":   7,
"reclaim_threshold":    50,
"allocated_index":      264256,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1018050,
"num_rec_allocs":       87382669,
"num_rec_frees":        86364619,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       328000000,
"lss_data_size":        8219014,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        9842992,
"est_recovery_size":    12601306,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           41000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           37445104,
"page_cnt":             23343,
"page_itemcnt":         4680638,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":205176672,
"page_bytes_compressed":205176672,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1343928,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          26004,
    "lss_data_size":        8474910,
    "lss_used_space":       22760910,
    "lss_disk_size":        87117824,
    "lss_fragmentation":    62,
    "lss_num_reads":        117047,
    "lss_read_bs":          197865124,
    "lss_blk_read_bs":      199245824,
    "bytes_written":        221335552,
    "bytes_incoming":       328000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.64,
    "lss_gc_num_reads":     117047,
    "lss_gc_reads_bs":      197865124,
    "lss_blk_gc_reads_bs":  199245824,
    "lss_gc_status":        "frag 7, data: 7211088, used: 7794126, relocated: 6696396, retries: 30748, skipped: 3316302 log:(13943088 - 206368768) run:164 duration:164023 ms",
    "lss_head_offset":      197630376,
    "lss_tail_offset":      208592896,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
LSSInfo: frag:10, ds:8217528, used:9173676

frag from cleaner state = 10
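The LSSInfo line above reports frag alongside ds (live data size) and used space. A hedged sketch of the apparent relationship: fragmentation is the share of used log space not holding live data, floored to a whole percent. This is inferred from the printed values, not taken from plasma source.

```python
# Sketch: LSS fragmentation as the percentage of used log space that is
# garbage (reclaimable). Inferred from the LSSInfo line above
# (frag:10, ds:8217528, used:9173676); an assumption, not plasma source.

def lss_frag_percent(data_size: int, used_space: int) -> int:
    """Percent of the used log that does not hold live data."""
    return int(100 * (used_space - data_size) / used_space)

print(lss_frag_percent(8217528, 9173676))   # matches frag:10 above
print(lss_frag_percent(7027944, 20422284))  # matches lss_fragmentation 65 in iteration 18
```

The same formula reproduces the lss_fragmentation values in both iteration dumps, so the cleaner appears to trigger off this used-vs-live ratio.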
LSS test.basic.TestPlasmaLSSCleaner(shard1) : all daemons stopped
LSS test.basic.TestPlasmaLSSCleaner/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaLSSCleaner stopped
LSS test.basic.TestPlasmaLSSCleaner(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaLSSCleaner/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaLSSCleaner closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaLSSCleaner ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaLSSCleaner ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaLSSCleaner successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaLSSCleaner (175.98s)
=== RUN   TestLSSBasic
--- PASS: TestLSSBasic (0.09s)
=== RUN   TestLSSConcurrent
--- PASS: TestLSSConcurrent (0.76s)
=== RUN   TestLSSCleaner
--- PASS: TestLSSCleaner (12.01s)
=== RUN   TestLSSSuperBlock
--- PASS: TestLSSSuperBlock (0.97s)
=== RUN   TestLSSLargeSinglePayload
--- PASS: TestLSSLargeSinglePayload (0.77s)
=== RUN   TestLSSUnstableEnvironment
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
Plasma: (test.data) Unable to write - err bad fortune
--- PASS: TestLSSUnstableEnvironment (10.21s)
=== RUN   TestMem
Plasma: Adaptive memory quota tuning (decrementing): RSS:400801792, freePercent:92.56607203293552, currentQuota=1099511627776, newQuota=1099511522919
Plasma: Adaptive memory quota tuning (incrementing): RSS:380039168, freePercent: 92.69154520556361, currentQuota=0, newQuota=10995116277
--- PASS: TestMem (15.01s)
=== RUN   TestCpu
----------- running TestCpu
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestCpu(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestCpu
LSS test.basic.TestCpu/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestCpu/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestCpu) to LSS (test.basic.TestCpu) and RecoveryLSS (test.basic.TestCpu/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestCpu
Shard shards/shard1(1) : Add instance test.basic.TestCpu to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestCpu to LSS test.basic.TestCpu
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestCpu/recovery], Data log [test.basic.TestCpu], Shared [false]
LSS test.basic.TestCpu/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.759µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestCpu/recovery], Data log [test.basic.TestCpu], Shared [false]. Built [0] plasmas, took [118.031µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestCpu(shard1) : all daemons started
LSS test.basic.TestCpu/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestCpu started
cpu 0.48981271866639225 total cpu 16 percent 0.030613294916649516
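The TestCpu line above prints three values: process CPU usage (in cores), the machine's core count, and their ratio. Recomputing the ratio from the first two reproduces the third exactly; the field meanings are assumptions read off the line, not confirmed from the test source.

```python
# Sketch: the "percent" field in the TestCpu line is the per-core
# utilization, i.e. process CPU (in cores) divided by total cores.
# Field meanings are assumptions inferred from the logged values.

cpu_used_cores = 0.48981271866639225  # process CPU usage, in cores
total_cores = 16

percent = cpu_used_cores / total_cores
print(percent)  # 0.030613294916649516, as logged
```

Division by 16 only shifts the float's exponent, so the recomputed value is bit-identical to the logged one.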
LSS test.basic.TestCpu(shard1) : all daemons stopped
LSS test.basic.TestCpu/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestCpu stopped
LSS test.basic.TestCpu(shard1) : LSSCleaner stopped
LSS test.basic.TestCpu/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestCpu closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestCpu ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestCpu ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestCpu successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestCpu (14.83s)
=== RUN   TestTopTen20
----------- running TestTop10
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop100(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop100
LSS test.default.TestTop100/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop100/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop100) to LSS (test.default.TestTop100) and RecoveryLSS (test.default.TestTop100/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestTop100
Shard shards/shard1(1) : Add instance test.default.TestTop100 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop100 to LSS test.default.TestTop100
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop100/recovery], Data log [test.default.TestTop100], Shared [false]
LSS test.default.TestTop100/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [114.867µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop100/recovery], Data log [test.default.TestTop100], Shared [false]. Built [0] plasmas, took [264.9µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop100(shard1) : all daemons started
LSS test.default.TestTop100/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop100 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop101(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop101
LSS test.default.TestTop101/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop101/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop101) to LSS (test.default.TestTop101) and RecoveryLSS (test.default.TestTop101/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestTop101
Shard shards/shard1(1) : Add instance test.default.TestTop101 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop101 to LSS test.default.TestTop101
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop101/recovery], Data log [test.default.TestTop101], Shared [false]
LSS test.default.TestTop101/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [684.364µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop101/recovery], Data log [test.default.TestTop101], Shared [false]. Built [0] plasmas, took [720.2µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop101(shard1) : all daemons started
LSS test.default.TestTop101/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop101 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop102(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop102
LSS test.default.TestTop102/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop102/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop102) to LSS (test.default.TestTop102) and RecoveryLSS (test.default.TestTop102/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestTop102
Shard shards/shard1(1) : Add instance test.default.TestTop102 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop102 to LSS test.default.TestTop102
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop102/recovery], Data log [test.default.TestTop102], Shared [false]
LSS test.default.TestTop102/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.988µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop102/recovery], Data log [test.default.TestTop102], Shared [false]. Built [0] plasmas, took [94.639µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop102(shard1) : all daemons started
LSS test.default.TestTop102/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop102 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop103(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop103
LSS test.default.TestTop103/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop103/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop103) to LSS (test.default.TestTop103) and RecoveryLSS (test.default.TestTop103/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestTop103
Shard shards/shard1(1) : Add instance test.default.TestTop103 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop103 to LSS test.default.TestTop103
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop103/recovery], Data log [test.default.TestTop103], Shared [false]
LSS test.default.TestTop103/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [46.453µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop103/recovery], Data log [test.default.TestTop103], Shared [false]. Built [0] plasmas, took [78.989µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop103(shard1) : all daemons started
LSS test.default.TestTop103/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop103 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop104(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop104
LSS test.default.TestTop104/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop104/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop104) to LSS (test.default.TestTop104) and RecoveryLSS (test.default.TestTop104/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestTop104
Shard shards/shard1(1) : Add instance test.default.TestTop104 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop104 to LSS test.default.TestTop104
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop104/recovery], Data log [test.default.TestTop104], Shared [false]
LSS test.default.TestTop104/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [84.73µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop104/recovery], Data log [test.default.TestTop104], Shared [false]. Built [0] plasmas, took [118.642µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop104(shard1) : all daemons started
LSS test.default.TestTop104/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop104 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop105(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop105
LSS test.default.TestTop105/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop105/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop105) to LSS (test.default.TestTop105) and RecoveryLSS (test.default.TestTop105/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestTop105
Shard shards/shard1(1) : Add instance test.default.TestTop105 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop105 to LSS test.default.TestTop105
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop105/recovery], Data log [test.default.TestTop105], Shared [false]
LSS test.default.TestTop105/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.301µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop105/recovery], Data log [test.default.TestTop105], Shared [false]. Built [0] plasmas, took [82.713µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop105(shard1) : all daemons started
LSS test.default.TestTop105/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop105 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop106(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop106
LSS test.default.TestTop106/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop106/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop106) to LSS (test.default.TestTop106) and RecoveryLSS (test.default.TestTop106/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestTop106
Shard shards/shard1(1) : Add instance test.default.TestTop106 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop106 to LSS test.default.TestTop106
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop106/recovery], Data log [test.default.TestTop106], Shared [false]
LSS test.default.TestTop106/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.303µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop106/recovery], Data log [test.default.TestTop106], Shared [false]. Built [0] plasmas, took [83.477µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop106(shard1) : all daemons started
LSS test.default.TestTop106/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop106 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop107(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop107
LSS test.default.TestTop107/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop107/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop107) to LSS (test.default.TestTop107) and RecoveryLSS (test.default.TestTop107/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestTop107
Shard shards/shard1(1) : Add instance test.default.TestTop107 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop107 to LSS test.default.TestTop107
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop107/recovery], Data log [test.default.TestTop107], Shared [false]
LSS test.default.TestTop107/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.491µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop107/recovery], Data log [test.default.TestTop107], Shared [false]. Built [0] plasmas, took [83.498µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop107(shard1) : all daemons started
LSS test.default.TestTop107/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop107 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop108(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop108
LSS test.default.TestTop108/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop108/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop108) to LSS (test.default.TestTop108) and RecoveryLSS (test.default.TestTop108/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestTop108
Shard shards/shard1(1) : Add instance test.default.TestTop108 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop108 to LSS test.default.TestTop108
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop108/recovery], Data log [test.default.TestTop108], Shared [false]
LSS test.default.TestTop108/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.877µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop108/recovery], Data log [test.default.TestTop108], Shared [false]. Built [0] plasmas, took [82.63µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop108(shard1) : all daemons started
LSS test.default.TestTop108/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop108 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop109(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop109
LSS test.default.TestTop109/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop109/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop109) to LSS (test.default.TestTop109) and RecoveryLSS (test.default.TestTop109/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestTop109
Shard shards/shard1(1) : Add instance test.default.TestTop109 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop109 to LSS test.default.TestTop109
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop109/recovery], Data log [test.default.TestTop109], Shared [false]
LSS test.default.TestTop109/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [79.405µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop109/recovery], Data log [test.default.TestTop109], Shared [false]. Built [0] plasmas, took [119.22µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop109(shard1) : all daemons started
LSS test.default.TestTop109/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop109 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1010(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1010
LSS test.default.TestTop1010/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1010/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1010) to LSS (test.default.TestTop1010) and RecoveryLSS (test.default.TestTop1010/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestTop1010
Shard shards/shard1(1) : Add instance test.default.TestTop1010 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1010 to LSS test.default.TestTop1010
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1010/recovery], Data log [test.default.TestTop1010], Shared [false]
LSS test.default.TestTop1010/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.972µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1010/recovery], Data log [test.default.TestTop1010], Shared [false]. Built [0] plasmas, took [105.884µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1010(shard1) : all daemons started
LSS test.default.TestTop1010/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop1010 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1011(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1011
LSS test.default.TestTop1011/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1011/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1011) to LSS (test.default.TestTop1011) and RecoveryLSS (test.default.TestTop1011/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestTop1011
Shard shards/shard1(1) : Add instance test.default.TestTop1011 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1011 to LSS test.default.TestTop1011
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1011/recovery], Data log [test.default.TestTop1011], Shared [false]
LSS test.default.TestTop1011/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [44.652µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1011/recovery], Data log [test.default.TestTop1011], Shared [false]. Built [0] plasmas, took [78.267µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1011(shard1) : all daemons started
LSS test.default.TestTop1011/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop1011 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1012(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1012
LSS test.default.TestTop1012/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1012/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1012) to LSS (test.default.TestTop1012) and RecoveryLSS (test.default.TestTop1012/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestTop1012
Shard shards/shard1(1) : Add instance test.default.TestTop1012 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1012 to LSS test.default.TestTop1012
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1012/recovery], Data log [test.default.TestTop1012], Shared [false]
LSS test.default.TestTop1012/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.069µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1012/recovery], Data log [test.default.TestTop1012], Shared [false]. Built [0] plasmas, took [82.003µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1012(shard1) : all daemons started
LSS test.default.TestTop1012/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop1012 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1013(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1013
LSS test.default.TestTop1013/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1013/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1013) to LSS (test.default.TestTop1013) and RecoveryLSS (test.default.TestTop1013/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestTop1013
Shard shards/shard1(1) : Add instance test.default.TestTop1013 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1013 to LSS test.default.TestTop1013
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1013/recovery], Data log [test.default.TestTop1013], Shared [false]
LSS test.default.TestTop1013/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.713µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1013/recovery], Data log [test.default.TestTop1013], Shared [false]. Built [0] plasmas, took [105.866µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1013(shard1) : all deamons started
LSS test.default.TestTop1013/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1013 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1014(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1014
LSS test.default.TestTop1014/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1014/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1014) to LSS (test.default.TestTop1014) and RecoveryLSS (test.default.TestTop1014/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestTop1014
Shard shards/shard1(1) : Add instance test.default.TestTop1014 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1014 to LSS test.default.TestTop1014
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1014/recovery], Data log [test.default.TestTop1014], Shared [false]
LSS test.default.TestTop1014/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.262µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1014/recovery], Data log [test.default.TestTop1014], Shared [false]. Built [0] plasmas, took [85.546µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1014(shard1) : all deamons started
LSS test.default.TestTop1014/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1014 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1015(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1015
LSS test.default.TestTop1015/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1015/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1015) to LSS (test.default.TestTop1015) and RecoveryLSS (test.default.TestTop1015/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestTop1015
Shard shards/shard1(1) : Add instance test.default.TestTop1015 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1015 to LSS test.default.TestTop1015
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1015/recovery], Data log [test.default.TestTop1015], Shared [false]
LSS test.default.TestTop1015/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [44.664µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1015/recovery], Data log [test.default.TestTop1015], Shared [false]. Built [0] plasmas, took [77.79µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1015(shard1) : all deamons started
LSS test.default.TestTop1015/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1015 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1016(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1016
LSS test.default.TestTop1016/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1016/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1016) to LSS (test.default.TestTop1016) and RecoveryLSS (test.default.TestTop1016/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestTop1016
Shard shards/shard1(1) : Add instance test.default.TestTop1016 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1016 to LSS test.default.TestTop1016
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1016/recovery], Data log [test.default.TestTop1016], Shared [false]
LSS test.default.TestTop1016/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.281µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1016/recovery], Data log [test.default.TestTop1016], Shared [false]. Built [0] plasmas, took [84.193µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1016(shard1) : all deamons started
LSS test.default.TestTop1016/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1016 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1017(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1017
LSS test.default.TestTop1017/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1017/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1017) to LSS (test.default.TestTop1017) and RecoveryLSS (test.default.TestTop1017/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestTop1017
Shard shards/shard1(1) : Add instance test.default.TestTop1017 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1017 to LSS test.default.TestTop1017
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1017/recovery], Data log [test.default.TestTop1017], Shared [false]
LSS test.default.TestTop1017/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.858µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1017/recovery], Data log [test.default.TestTop1017], Shared [false]. Built [0] plasmas, took [97.161µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1017(shard1) : all deamons started
LSS test.default.TestTop1017/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1017 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1018(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1018
LSS test.default.TestTop1018/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1018/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1018) to LSS (test.default.TestTop1018) and RecoveryLSS (test.default.TestTop1018/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestTop1018
Shard shards/shard1(1) : Add instance test.default.TestTop1018 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1018 to LSS test.default.TestTop1018
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1018/recovery], Data log [test.default.TestTop1018], Shared [false]
LSS test.default.TestTop1018/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.264µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1018/recovery], Data log [test.default.TestTop1018], Shared [false]. Built [0] plasmas, took [97.298µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1018(shard1) : all deamons started
LSS test.default.TestTop1018/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1018 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop1019(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1019
LSS test.default.TestTop1019/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop1019/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop1019) to LSS (test.default.TestTop1019) and RecoveryLSS (test.default.TestTop1019/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestTop1019
Shard shards/shard1(1) : Add instance test.default.TestTop1019 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop1019 to LSS test.default.TestTop1019
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop1019/recovery], Data log [test.default.TestTop1019], Shared [false]
LSS test.default.TestTop1019/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.133µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop1019/recovery], Data log [test.default.TestTop1019], Shared [false]. Built [0] plasmas, took [81.017µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop1019(shard1) : all deamons started
LSS test.default.TestTop1019/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop1019 started
LSS test.default.TestTop1019(shard1) : all deamons stopped
LSS test.default.TestTop1019/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1019 stopped
LSS test.default.TestTop1019(shard1) : LSSCleaner stopped
LSS test.default.TestTop1019/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1019 closed
LSS test.default.TestTop1018(shard1) : all deamons stopped
LSS test.default.TestTop1018/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1018 stopped
LSS test.default.TestTop1018(shard1) : LSSCleaner stopped
LSS test.default.TestTop1018/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1018 closed
LSS test.default.TestTop1017(shard1) : all deamons stopped
LSS test.default.TestTop1017/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1017 stopped
LSS test.default.TestTop1017(shard1) : LSSCleaner stopped
LSS test.default.TestTop1017/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1017 closed
LSS test.default.TestTop1016(shard1) : all deamons stopped
LSS test.default.TestTop1016/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1016 stopped
LSS test.default.TestTop1016(shard1) : LSSCleaner stopped
LSS test.default.TestTop1016/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1016 closed
LSS test.default.TestTop1015(shard1) : all deamons stopped
LSS test.default.TestTop1015/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1015 stopped
LSS test.default.TestTop1015(shard1) : LSSCleaner stopped
LSS test.default.TestTop1015/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1015 closed
LSS test.default.TestTop1014(shard1) : all deamons stopped
LSS test.default.TestTop1014/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1014 stopped
LSS test.default.TestTop1014(shard1) : LSSCleaner stopped
LSS test.default.TestTop1014/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1014 closed
LSS test.default.TestTop1013(shard1) : all deamons stopped
LSS test.default.TestTop1013/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1013 stopped
LSS test.default.TestTop1013(shard1) : LSSCleaner stopped
LSS test.default.TestTop1013/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1013 closed
LSS test.default.TestTop1012(shard1) : all deamons stopped
LSS test.default.TestTop1012/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1012 stopped
LSS test.default.TestTop1012(shard1) : LSSCleaner stopped
LSS test.default.TestTop1012/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1012 closed
LSS test.default.TestTop1011(shard1) : all deamons stopped
LSS test.default.TestTop1011/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1011 stopped
LSS test.default.TestTop1011(shard1) : LSSCleaner stopped
LSS test.default.TestTop1011/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1011 closed
LSS test.default.TestTop1010(shard1) : all deamons stopped
LSS test.default.TestTop1010/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop1010 stopped
LSS test.default.TestTop1010(shard1) : LSSCleaner stopped
LSS test.default.TestTop1010/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop1010 closed
LSS test.default.TestTop109(shard1) : all deamons stopped
LSS test.default.TestTop109/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop109 stopped
LSS test.default.TestTop109(shard1) : LSSCleaner stopped
LSS test.default.TestTop109/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop109 closed
LSS test.default.TestTop108(shard1) : all deamons stopped
LSS test.default.TestTop108/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop108 stopped
LSS test.default.TestTop108(shard1) : LSSCleaner stopped
LSS test.default.TestTop108/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop108 closed
LSS test.default.TestTop107(shard1) : all deamons stopped
LSS test.default.TestTop107/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop107 stopped
LSS test.default.TestTop107(shard1) : LSSCleaner stopped
LSS test.default.TestTop107/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop107 closed
LSS test.default.TestTop106(shard1) : all deamons stopped
LSS test.default.TestTop106/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop106 stopped
LSS test.default.TestTop106(shard1) : LSSCleaner stopped
LSS test.default.TestTop106/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop106 closed
LSS test.default.TestTop105(shard1) : all deamons stopped
LSS test.default.TestTop105/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop105 stopped
LSS test.default.TestTop105(shard1) : LSSCleaner stopped
LSS test.default.TestTop105/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop105 closed
LSS test.default.TestTop104(shard1) : all deamons stopped
LSS test.default.TestTop104/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop104 stopped
LSS test.default.TestTop104(shard1) : LSSCleaner stopped
LSS test.default.TestTop104/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop104 closed
LSS test.default.TestTop103(shard1) : all deamons stopped
LSS test.default.TestTop103/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop103 stopped
LSS test.default.TestTop103(shard1) : LSSCleaner stopped
LSS test.default.TestTop103/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop103 closed
LSS test.default.TestTop102(shard1) : all deamons stopped
LSS test.default.TestTop102/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop102 stopped
LSS test.default.TestTop102(shard1) : LSSCleaner stopped
LSS test.default.TestTop102/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop102 closed
LSS test.default.TestTop101(shard1) : all deamons stopped
LSS test.default.TestTop101/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop101 stopped
LSS test.default.TestTop101(shard1) : LSSCleaner stopped
LSS test.default.TestTop101/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop101 closed
LSS test.default.TestTop100(shard1) : all deamons stopped
LSS test.default.TestTop100/recovery(shard1) : all deamons stopped
Shard shards/shard1(1) : instance test.default.TestTop100 stopped
LSS test.default.TestTop100(shard1) : LSSCleaner stopped
LSS test.default.TestTop100/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop100 closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : destroying instance test.default.TestTop1016 ...
Shard shards/shard1(1) : removed plasmaId 17 for instance test.default.TestTop1016 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1016 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1018 ...
Shard shards/shard1(1) : removed plasmaId 19 for instance test.default.TestTop1018 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1018 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1010 ...
Shard shards/shard1(1) : removed plasmaId 11 for instance test.default.TestTop1010 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1010 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1014 ...
Shard shards/shard1(1) : removed plasmaId 15 for instance test.default.TestTop1014 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1014 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1015 ...
Shard shards/shard1(1) : removed plasmaId 16 for instance test.default.TestTop1015 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1015 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1019 ...
Shard shards/shard1(1) : removed plasmaId 20 for instance test.default.TestTop1019 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1019 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop108 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.default.TestTop108 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop108 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1011 ...
Shard shards/shard1(1) : removed plasmaId 12 for instance test.default.TestTop1011 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1011 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1012 ...
Shard shards/shard1(1) : removed plasmaId 13 for instance test.default.TestTop1012 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1012 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop102 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.default.TestTop102 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop102 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop103 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.default.TestTop103 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop103 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop104 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.default.TestTop104 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop104 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop105 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.default.TestTop105 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop105 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop107 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.default.TestTop107 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop107 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1017 ...
Shard shards/shard1(1) : removed plasmaId 18 for instance test.default.TestTop1017 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1017 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop100 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestTop100 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop100 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop101 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestTop101 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop101 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop106 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.default.TestTop106 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop106 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop109 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.default.TestTop109 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop109 sucessfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop1013 ...
Shard shards/shard1(1) : removed plasmaId 14 for instance test.default.TestTop1013 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop1013 sucessfully destroyed
Shard shards/shard1(1) : All instance destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestTopTen20 (0.56s)
=== RUN   TestTopTen5
----------- running TestTop10
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop100(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop100
LSS test.default.TestTop100/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop100/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop100) to LSS (test.default.TestTop100) and RecoveryLSS (test.default.TestTop100/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestTop100
Shard shards/shard1(1) : Add instance test.default.TestTop100 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop100 to LSS test.default.TestTop100
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop100/recovery], Data log [test.default.TestTop100], Shared [false]
LSS test.default.TestTop100/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.036µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop100/recovery], Data log [test.default.TestTop100], Shared [false]. Built [0] plasmas, took [98.339µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop100(shard1) : all deamons started
LSS test.default.TestTop100/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop100 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop101(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop101
LSS test.default.TestTop101/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop101/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop101) to LSS (test.default.TestTop101) and RecoveryLSS (test.default.TestTop101/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestTop101
Shard shards/shard1(1) : Add instance test.default.TestTop101 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop101 to LSS test.default.TestTop101
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop101/recovery], Data log [test.default.TestTop101], Shared [false]
LSS test.default.TestTop101/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.583µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop101/recovery], Data log [test.default.TestTop101], Shared [false]. Built [0] plasmas, took [108.479µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop101(shard1) : all deamons started
LSS test.default.TestTop101/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop101 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop102(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop102
LSS test.default.TestTop102/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop102/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop102) to LSS (test.default.TestTop102) and RecoveryLSS (test.default.TestTop102/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestTop102
Shard shards/shard1(1) : Add instance test.default.TestTop102 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop102 to LSS test.default.TestTop102
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop102/recovery], Data log [test.default.TestTop102], Shared [false]
LSS test.default.TestTop102/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [62.141µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop102/recovery], Data log [test.default.TestTop102], Shared [false]. Built [0] plasmas, took [96.475µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop102(shard1) : all deamons started
LSS test.default.TestTop102/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop102 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop103(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop103
LSS test.default.TestTop103/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop103/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop103) to LSS (test.default.TestTop103) and RecoveryLSS (test.default.TestTop103/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestTop103
Shard shards/shard1(1) : Add instance test.default.TestTop103 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop103 to LSS test.default.TestTop103
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop103/recovery], Data log [test.default.TestTop103], Shared [false]
LSS test.default.TestTop103/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.081µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop103/recovery], Data log [test.default.TestTop103], Shared [false]. Built [0] plasmas, took [106.985µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop103(shard1) : all deamons started
LSS test.default.TestTop103/recovery(shard1) : all deamons started
Shard shards/shard1(1) : instance test.default.TestTop103 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestTop104(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop104
LSS test.default.TestTop104/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestTop104/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestTop104) to LSS (test.default.TestTop104) and RecoveryLSS (test.default.TestTop104/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestTop104
Shard shards/shard1(1) : Add instance test.default.TestTop104 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestTop104 to LSS test.default.TestTop104
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestTop104/recovery], Data log [test.default.TestTop104], Shared [false]
LSS test.default.TestTop104/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [65.342µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestTop104/recovery], Data log [test.default.TestTop104], Shared [false]. Built [0] plasmas, took [99.287µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestTop104(shard1) : all daemons started
LSS test.default.TestTop104/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestTop104 started
LSS test.default.TestTop104(shard1) : all daemons stopped
LSS test.default.TestTop104/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestTop104 stopped
LSS test.default.TestTop104(shard1) : LSSCleaner stopped
LSS test.default.TestTop104/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop104 closed
LSS test.default.TestTop103(shard1) : all daemons stopped
LSS test.default.TestTop103/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestTop103 stopped
LSS test.default.TestTop103(shard1) : LSSCleaner stopped
LSS test.default.TestTop103/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop103 closed
LSS test.default.TestTop102(shard1) : all daemons stopped
LSS test.default.TestTop102/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestTop102 stopped
LSS test.default.TestTop102(shard1) : LSSCleaner stopped
LSS test.default.TestTop102/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop102 closed
LSS test.default.TestTop101(shard1) : all daemons stopped
LSS test.default.TestTop101/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestTop101 stopped
LSS test.default.TestTop101(shard1) : LSSCleaner stopped
LSS test.default.TestTop101/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop101 closed
LSS test.default.TestTop100(shard1) : all daemons stopped
LSS test.default.TestTop100/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestTop100 stopped
LSS test.default.TestTop100(shard1) : LSSCleaner stopped
LSS test.default.TestTop100/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestTop100 closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestTop100 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestTop100 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop100 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop101 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestTop101 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop101 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop102 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.default.TestTop102 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop102 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop103 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.default.TestTop103 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop103 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestTop104 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.default.TestTop104 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestTop104 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestTopTen5 (0.14s)
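The Shard.doRecovery traces above follow a two-phase pattern: first replay the compact recovery log between its headOffset and tailOffset, which yields a replayOffset into the data log, then replay only the data-log tail past that offset. A minimal Go sketch of that idea (all type and function names here are illustrative, not the actual plasma API):

```go
package main

import "fmt"

// record is a toy log entry: its position in the data log plus a key/value.
type record struct {
	offset int
	key    string
	value  int
}

// replayRecoveryLog applies every checkpointed record and reports how far
// into the data log the checkpoint covers (the replayOffset).
func replayRecoveryLog(recovery []record, state map[string]int) int {
	replayOffset := 0
	for _, r := range recovery {
		state[r.key] = r.value
		if r.offset > replayOffset {
			replayOffset = r.offset
		}
	}
	return replayOffset
}

// replayDataTail applies only records written after replayOffset and
// returns how many tail records had to be replayed.
func replayDataTail(data []record, replayOffset int, state map[string]int) int {
	replayed := 0
	for _, r := range data {
		if r.offset > replayOffset {
			state[r.key] = r.value
			replayed++
		}
	}
	return replayed
}

func main() {
	state := map[string]int{}
	recovery := []record{{offset: 100, key: "a", value: 1}, {offset: 200, key: "b", value: 2}}
	data := []record{{offset: 100, key: "a", value: 1}, {offset: 200, key: "b", value: 2}, {offset: 300, key: "b", value: 3}}
	off := replayRecoveryLog(recovery, state)
	n := replayDataTail(data, off, state)
	fmt.Printf("replayOffset=%d tailRecords=%d state=%v\n", off, n, state)
}
```

With both logs empty, as in the freshly created shards above (headOffset [0] tailOffset [0]), both phases replay zero records, which is why recovery completes in microseconds with Built [0] plasmas.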
=== RUN   TestMVCCSimple
----------- running TestMVCCSimple
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCSimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCSimple
LSS test.mvcc.TestMVCCSimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCSimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCSimple) to LSS (test.mvcc.TestMVCCSimple) and RecoveryLSS (test.mvcc.TestMVCCSimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCSimple
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCSimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCSimple to LSS test.mvcc.TestMVCCSimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCSimple/recovery], Data log [test.mvcc.TestMVCCSimple], Shared [false]
LSS test.mvcc.TestMVCCSimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.972µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCSimple/recovery], Data log [test.mvcc.TestMVCCSimple], Shared [false]. Built [0] plasmas, took [85.278µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCSimple(shard1) : all daemons started
LSS test.mvcc.TestMVCCSimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCSimple started
LSS test.mvcc.TestMVCCSimple(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCSimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCSimple stopped
LSS test.mvcc.TestMVCCSimple(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCSimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCSimple closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCSimple ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCSimple ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCSimple successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCSimple (0.20s)
=== RUN   TestMVCCLookup
----------- running TestMVCCLookup
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCLookup(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCLookup
LSS test.mvcc.TestMVCCLookup/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCLookup/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCLookup) to LSS (test.mvcc.TestMVCCLookup) and RecoveryLSS (test.mvcc.TestMVCCLookup/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCLookup
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCLookup to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCLookup to LSS test.mvcc.TestMVCCLookup
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCLookup/recovery], Data log [test.mvcc.TestMVCCLookup], Shared [false]
LSS test.mvcc.TestMVCCLookup/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.306µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCLookup/recovery], Data log [test.mvcc.TestMVCCLookup], Shared [false]. Built [0] plasmas, took [108.756µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCLookup(shard1) : all daemons started
LSS test.mvcc.TestMVCCLookup/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCLookup started
LSS test.mvcc.TestMVCCLookup(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCLookup/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCLookup stopped
LSS test.mvcc.TestMVCCLookup(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCLookup/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCLookup closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCLookup ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCLookup ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCLookup successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCLookup (0.13s)
=== RUN   TestMVCCIteratorRefresh
----------- running TestMVCCIteratorRefresh
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCIteratorRefresh(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIteratorRefresh
LSS test.mvcc.TestMVCCIteratorRefresh/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIteratorRefresh/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCIteratorRefresh) to LSS (test.mvcc.TestMVCCIteratorRefresh) and RecoveryLSS (test.mvcc.TestMVCCIteratorRefresh/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCIteratorRefresh
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIteratorRefresh to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIteratorRefresh to LSS test.mvcc.TestMVCCIteratorRefresh
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCIteratorRefresh/recovery], Data log [test.mvcc.TestMVCCIteratorRefresh], Shared [false]
LSS test.mvcc.TestMVCCIteratorRefresh/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [107.723µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCIteratorRefresh/recovery], Data log [test.mvcc.TestMVCCIteratorRefresh], Shared [false]. Built [0] plasmas, took [145.729µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCIteratorRefresh(shard1) : all daemons started
LSS test.mvcc.TestMVCCIteratorRefresh/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefresh started
inserting into db...
creating snapshot...
iterating to middle...
compacting...
iterating to end...
LSS test.mvcc.TestMVCCIteratorRefresh(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCIteratorRefresh/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefresh stopped
LSS test.mvcc.TestMVCCIteratorRefresh(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCIteratorRefresh/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefresh closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCIteratorRefresh ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCIteratorRefresh ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefresh successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCIteratorRefresh (4.84s)
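TestMVCCIteratorRefresh interleaves a scan with compaction ("iterating to middle... compacting... iterating to end..."), which relies on the MVCC invariant that a reader pinned to a snapshot seqno keeps seeing the same versions even as superseded ones are compacted away underneath it. A toy model of that invariant (hypothetical names, not the plasma API):

```go
package main

import "fmt"

// version is one MVCC version of a key: the seqno it was written at plus a value.
type version struct {
	sn    int
	value string
}

// store maps each key to its list of versions, oldest first.
type store map[string][]version

func (s store) put(sn int, key, value string) {
	s[key] = append(s[key], version{sn: sn, value: value})
}

// readAt returns the value visible to a snapshot taken at seqno sn:
// the newest version with seqno <= sn.
func (s store) readAt(sn int, key string) (string, bool) {
	var best version
	found := false
	for _, v := range s[key] {
		if v.sn <= sn && (!found || v.sn > best.sn) {
			best, found = v, true
		}
	}
	return best.value, found
}

// compact drops versions superseded by a newer version at or below keepSn,
// so any snapshot at seqno >= keepSn reads exactly the same data afterwards.
func (s store) compact(keepSn int) {
	for key, vs := range s {
		kept := []version{}
		for i, v := range vs {
			superseded := false
			for _, w := range vs[i+1:] {
				if w.sn > v.sn && w.sn <= keepSn {
					superseded = true
				}
			}
			if !superseded {
				kept = append(kept, v)
			}
		}
		s[key] = kept
	}
}

func main() {
	s := store{}
	s.put(1, "k", "old")
	s.put(2, "k", "new")
	before, _ := s.readAt(2, "k")
	s.compact(2) // drops the sn=1 version; snapshot 2 is unaffected
	after, _ := s.readAt(2, "k")
	fmt.Println(before, after, len(s["k"]))
}
```

The refresh in the test then only needs to re-seek to the last key seen; snapshot visibility guarantees the remaining rows are unchanged.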
=== RUN   TestMVCCIteratorRefreshEveryRow
----------- running TestMVCCIteratorRefreshEveryRow
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIteratorRefreshEveryRow
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCIteratorRefreshEveryRow) to LSS (test.mvcc.TestMVCCIteratorRefreshEveryRow) and RecoveryLSS (test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCIteratorRefreshEveryRow
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIteratorRefreshEveryRow to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIteratorRefreshEveryRow to LSS test.mvcc.TestMVCCIteratorRefreshEveryRow
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery], Data log [test.mvcc.TestMVCCIteratorRefreshEveryRow], Shared [false]
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.605µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery], Data log [test.mvcc.TestMVCCIteratorRefreshEveryRow], Shared [false]. Built [0] plasmas, took [109.45µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow(shard1) : all daemons started
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefreshEveryRow started
inserting into db...
creating snapshot...
iterating to end...
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefreshEveryRow stopped
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCIteratorRefreshEveryRow/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefreshEveryRow closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCIteratorRefreshEveryRow ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCIteratorRefreshEveryRow ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIteratorRefreshEveryRow successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCIteratorRefreshEveryRow (0.89s)
=== RUN   TestMVCCGarbageCollection
----------- running TestMVCCGarbageCollection
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCGarbageCollection(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCGarbageCollection
LSS test.mvcc.TestMVCCGarbageCollection/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCGarbageCollection/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCGarbageCollection) to LSS (test.mvcc.TestMVCCGarbageCollection) and RecoveryLSS (test.mvcc.TestMVCCGarbageCollection/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCGarbageCollection
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCGarbageCollection to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCGarbageCollection to LSS test.mvcc.TestMVCCGarbageCollection
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCGarbageCollection/recovery], Data log [test.mvcc.TestMVCCGarbageCollection], Shared [false]
LSS test.mvcc.TestMVCCGarbageCollection/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.891µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCGarbageCollection/recovery], Data log [test.mvcc.TestMVCCGarbageCollection], Shared [false]. Built [0] plasmas, took [100.591µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCGarbageCollection(shard1) : all daemons started
LSS test.mvcc.TestMVCCGarbageCollection/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCGarbageCollection started
LSS test.mvcc.TestMVCCGarbageCollection(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCGarbageCollection/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCGarbageCollection stopped
LSS test.mvcc.TestMVCCGarbageCollection(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCGarbageCollection/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCGarbageCollection closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCGarbageCollection ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCGarbageCollection ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCGarbageCollection successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCGarbageCollection (0.09s)
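The gcSn values printed by these tests bound garbage collection: once no live snapshot is older than gcSn, any version that has been superseded by another version at or below gcSn can never be read again and is safe to purge. A small illustrative sketch of that rule for one key's version chain (not the plasma API):

```go
package main

import "fmt"

// version is one MVCC version of a key, identified by its seqno.
type version struct {
	sn int
}

// gc keeps a version only if it is either newer than gcSn (still needed by
// future snapshots) or the single newest version visible at gcSn (still
// needed by the oldest live snapshot). It returns the surviving chain and
// the number of purged versions.
func gc(versions []version, gcSn int) ([]version, int) {
	// Find the newest version visible at gcSn.
	newestVisible := -1
	for i, v := range versions {
		if v.sn <= gcSn && (newestVisible < 0 || v.sn > versions[newestVisible].sn) {
			newestVisible = i
		}
	}
	kept := []version{}
	for i, v := range versions {
		if v.sn > gcSn || i == newestVisible {
			kept = append(kept, v)
		}
	}
	return kept, len(versions) - len(kept)
}

func main() {
	vs := []version{{sn: 10}, {sn: 20}, {sn: 30}, {sn: 40}}
	kept, purged := gc(vs, 30) // no snapshot older than sn 30 remains
	fmt.Println(len(kept), purged)
}
```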
=== RUN   TestMVCCRecoveryPoint
----------- running TestMVCCRecoveryPoint
Start shard recovery: shardsDirectory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPoint) to LSS (test.mvcc.TestMVCCRecoveryPoint) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPoint/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to LSS test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.722µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]. Built [0] plasmas, took [81.206µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint started
(1) Recovery points gcSn:99, minRPSn:&[60 70 80 90]
recovery_point sn:60 meta:60000 count:60000
recovery_point sn:70 meta:70000 count:70000
recovery_point sn:80 meta:80000 count:80000
recovery_point sn:90 meta:90000 count:90000
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint stopped
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint closed
Reopening database...
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPoint) to LSS (test.mvcc.TestMVCCRecoveryPoint) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPoint/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to LSS test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [163840]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [2134296] took [3.733271ms]
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [2138112] replayOffset [2134296]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [3.837335ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]. Built [1] plasmas, took [4.018946ms]
Plasma: doInit: data UsedSpace 2138112 recovery UsedSpace 174214
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint started
(2) Recovery points gcSn:360002, minRPSn:&[60 70 80 90]
recovery_point sn:60 meta:60000 count:60000
recovery_point sn:70 meta:70000 count:70000
recovery_point sn:80 meta:80000 count:80000
recovery_point sn:90 meta:90000 count:90000
Rolled back to 80000 count:80000
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint stopped
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint closed
Reopening database...
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPoint/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPoint) to LSS (test.mvcc.TestMVCCRecoveryPoint) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPoint/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPoint to LSS test.mvcc.TestMVCCRecoveryPoint
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [397312]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4116480] took [8.638693ms]
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [4329472] replayOffset [4116480]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [12.563741ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRecoveryPoint], Shared [false]. Built [1] plasmas, took [12.812319ms]
Plasma: doInit: data UsedSpace 4329472 recovery UsedSpace 396586
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint started
(3) Recovery points gcSn:720003, minRPSn:&[60 70 80 360013 360023 360033 360043 360053 360063 360073 360083 360093]
recovery_point sn:60 meta:60000 count:60000
recovery_point sn:70 meta:70000 count:70000
recovery_point sn:80 meta:80000 count:80000
recovery_point sn:360013 meta:110000 count:90000
recovery_point sn:360023 meta:120000 count:100000
recovery_point sn:360033 meta:130000 count:110000
recovery_point sn:360043 meta:140000 count:120000
recovery_point sn:360053 meta:150000 count:130000
recovery_point sn:360063 meta:160000 count:140000
recovery_point sn:360073 meta:170000 count:150000
recovery_point sn:360083 meta:180000 count:160000
recovery_point sn:360093 meta:190000 count:170000
Rolled back to 160000 count:140000
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint stopped
LSS test.mvcc.TestMVCCRecoveryPoint(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPoint/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCRecoveryPoint ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCRecoveryPoint ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPoint successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCRecoveryPoint (1.83s)
=== RUN   TestMVCCRollbackItemsNotInSnapshot
----------- running testMVCCRollbackItemsNotInSnapshot
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.testMVCCRollbackItemsNotInSnapshot
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.testMVCCRollbackItemsNotInSnapshot) to LSS (test.mvcc.testMVCCRollbackItemsNotInSnapshot) and RecoveryLSS (test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.testMVCCRollbackItemsNotInSnapshot
Shard shards/shard1(1) : Add instance test.mvcc.testMVCCRollbackItemsNotInSnapshot to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.testMVCCRollbackItemsNotInSnapshot to LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery], Data log [test.mvcc.testMVCCRollbackItemsNotInSnapshot], Shared [false]
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.699µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery], Data log [test.mvcc.testMVCCRollbackItemsNotInSnapshot], Shared [false]. Built [0] plasmas, took [82.268µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot(shard1) : all daemons started
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.testMVCCRollbackItemsNotInSnapshot started
Inserted 5000 items, currSn=3
Deleted 5000 items, currSn=3
Rolled back to RecoveryPoint sn:1 count:2500
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot(shard1) : all daemons stopped
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.testMVCCRollbackItemsNotInSnapshot stopped
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot(shard1) : LSSCleaner stopped
LSS test.mvcc.testMVCCRollbackItemsNotInSnapshot/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.testMVCCRollbackItemsNotInSnapshot closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.testMVCCRollbackItemsNotInSnapshot ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.testMVCCRollbackItemsNotInSnapshot ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.testMVCCRollbackItemsNotInSnapshot successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCRollbackItemsNotInSnapshot (0.11s)
=== RUN   TestMVCCRecoveryPointRollbackedSnapshot
----------- running TestMVCCRecoveryPointRollbackedSnapshot
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot) to LSS (test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot to LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery], Data log [test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.144µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery], Data log [test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot], Shared [false]. Built [0] plasmas, took [115.312µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot started
Created snapshot with sn:100
Rolled back to 60000 count:60000 sn:60
Create recovery point with snapshot with sn:100
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot stopped
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointRollbackedSnapshot successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCRecoveryPointRollbackedSnapshot (0.85s)
=== RUN   TestMVCCRollbackBetweenRecoveryPoint
----------- running TestMVCCRollbackBetweenRecoveryPoint
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRollbackBetweenRecoveryPoint
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRollbackBetweenRecoveryPoint) to LSS (test.mvcc.TestMVCCRollbackBetweenRecoveryPoint) and RecoveryLSS (test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint to LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRollbackBetweenRecoveryPoint], Shared [false]
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.182µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery], Data log [test.mvcc.TestMVCCRollbackBetweenRecoveryPoint], Shared [false]. Built [0] plasmas, took [81.982µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint(shard1) : all daemons started
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint started
Created snapshot with sn:100
Create recovery point with snapshot with sn:100
Rolled back to 60000 count:60000 sn:60
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint stopped
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRollbackBetweenRecoveryPoint/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRollbackBetweenRecoveryPoint successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCRollbackBetweenRecoveryPoint (0.85s)
=== RUN   TestMVCCRecoveryPointCrash
----------- running TestMVCCRecoveryPointCrash
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointCrash
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointCrash/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPointCrash) to LSS (test.mvcc.TestMVCCRecoveryPointCrash) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPointCrash/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPointCrash
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointCrash to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointCrash to LSS test.mvcc.TestMVCCRecoveryPointCrash
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointCrash/recovery], Data log [test.mvcc.TestMVCCRecoveryPointCrash], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.293µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointCrash/recovery], Data log [test.mvcc.TestMVCCRecoveryPointCrash], Shared [false]. Built [0] plasmas, took [83.381µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash started
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash closed
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointCrash
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCRecoveryPointCrash/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCRecoveryPointCrash) to LSS (test.mvcc.TestMVCCRecoveryPointCrash) and RecoveryLSS (test.mvcc.TestMVCCRecoveryPointCrash/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCRecoveryPointCrash
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointCrash to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCRecoveryPointCrash to LSS test.mvcc.TestMVCCRecoveryPointCrash
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointCrash/recovery], Data log [test.mvcc.TestMVCCRecoveryPointCrash], Shared [false]
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8192] took [134.778µs]
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [12288] replayOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [218.564µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCRecoveryPointCrash/recovery], Data log [test.mvcc.TestMVCCRecoveryPointCrash], Shared [false]. Built [1] plasmas, took [299.858µs]
Plasma: doInit: data UsedSpace 12288 recovery UsedSpace 12418
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : all daemons started
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash started
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCRecoveryPointCrash/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCRecoveryPointCrash ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCRecoveryPointCrash ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCRecoveryPointCrash successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCRecoveryPointCrash (0.08s)
=== RUN   TestMVCCIntervalGC
----------- running TestMVCCIntervalGC
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCIntervalGC(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIntervalGC
LSS test.mvcc.TestMVCCIntervalGC/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCIntervalGC/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCIntervalGC) to LSS (test.mvcc.TestMVCCIntervalGC) and RecoveryLSS (test.mvcc.TestMVCCIntervalGC/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCIntervalGC
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIntervalGC to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCIntervalGC to LSS test.mvcc.TestMVCCIntervalGC
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCIntervalGC/recovery], Data log [test.mvcc.TestMVCCIntervalGC], Shared [false]
LSS test.mvcc.TestMVCCIntervalGC/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [96.114µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCIntervalGC/recovery], Data log [test.mvcc.TestMVCCIntervalGC], Shared [false]. Built [0] plasmas, took [113.993µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCIntervalGC(shard1) : all daemons started
LSS test.mvcc.TestMVCCIntervalGC/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIntervalGC started
LSS test.mvcc.TestMVCCIntervalGC(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCIntervalGC/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIntervalGC stopped
LSS test.mvcc.TestMVCCIntervalGC(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCIntervalGC/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIntervalGC closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCIntervalGC ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCIntervalGC ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCIntervalGC successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCIntervalGC (0.18s)
=== RUN   TestMVCCItemsCount
----------- running TestMVCCItemsCount
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestMVCCItemsCount(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCItemsCount
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCItemsCount/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCItemsCount) to LSS (test.mvcc.TestMVCCItemsCount) and RecoveryLSS (test.mvcc.TestMVCCItemsCount/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCItemsCount
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCItemsCount to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCItemsCount to LSS test.mvcc.TestMVCCItemsCount
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCItemsCount/recovery], Data log [test.mvcc.TestMVCCItemsCount], Shared [false]
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [252.429µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCItemsCount/recovery], Data log [test.mvcc.TestMVCCItemsCount], Shared [false]. Built [0] plasmas, took [296.71µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestMVCCItemsCount(shard1) : all daemons started
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount started
LSS test.mvcc.TestMVCCItemsCount(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount stopped
LSS test.mvcc.TestMVCCItemsCount(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount closed
Reopening db...
LSS test.mvcc.TestMVCCItemsCount(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCItemsCount
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestMVCCItemsCount/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestMVCCItemsCount) to LSS (test.mvcc.TestMVCCItemsCount) and RecoveryLSS (test.mvcc.TestMVCCItemsCount/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestMVCCItemsCount
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCItemsCount to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestMVCCItemsCount to LSS test.mvcc.TestMVCCItemsCount
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestMVCCItemsCount/recovery], Data log [test.mvcc.TestMVCCItemsCount], Shared [false]
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [86016]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [450560] took [1.677367ms]
LSS test.mvcc.TestMVCCItemsCount(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [454656] replayOffset [450560]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [1.769972ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestMVCCItemsCount/recovery], Data log [test.mvcc.TestMVCCItemsCount], Shared [false]. Built [1] plasmas, took [1.866481ms]
Plasma: doInit: data UsedSpace 454656 recovery UsedSpace 81238
LSS test.mvcc.TestMVCCItemsCount(shard1) : all daemons started
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount started
LSS test.mvcc.TestMVCCItemsCount(shard1) : all daemons stopped
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount stopped
LSS test.mvcc.TestMVCCItemsCount(shard1) : LSSCleaner stopped
LSS test.mvcc.TestMVCCItemsCount/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestMVCCItemsCount ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestMVCCItemsCount ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestMVCCItemsCount successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCItemsCount (0.32s)
=== RUN   TestLargeItems
----------- running TestLargeItems
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestLargeItems(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestLargeItems
LSS test.mvcc.TestLargeItems/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestLargeItems/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestLargeItems) to LSS (test.mvcc.TestLargeItems) and RecoveryLSS (test.mvcc.TestLargeItems/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestLargeItems
Shard shards/shard1(1) : Add instance test.mvcc.TestLargeItems to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestLargeItems to LSS test.mvcc.TestLargeItems
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestLargeItems/recovery], Data log [test.mvcc.TestLargeItems], Shared [false]
LSS test.mvcc.TestLargeItems/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [46.909µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestLargeItems/recovery], Data log [test.mvcc.TestLargeItems], Shared [false]. Built [0] plasmas, took [82.43µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestLargeItems(shard1) : all daemons started
LSS test.mvcc.TestLargeItems/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems started
LSS test.mvcc.TestLargeItems(shard1) : all daemons stopped
LSS test.mvcc.TestLargeItems/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems stopped
LSS test.mvcc.TestLargeItems(shard1) : LSSCleaner stopped
LSS test.mvcc.TestLargeItems/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems closed
LSS test.mvcc.TestLargeItems(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestLargeItems
LSS test.mvcc.TestLargeItems/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestLargeItems/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestLargeItems) to LSS (test.mvcc.TestLargeItems) and RecoveryLSS (test.mvcc.TestLargeItems/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestLargeItems
Shard shards/shard1(1) : Add instance test.mvcc.TestLargeItems to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestLargeItems to LSS test.mvcc.TestLargeItems
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestLargeItems/recovery], Data log [test.mvcc.TestLargeItems], Shared [false]
LSS test.mvcc.TestLargeItems/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [168.145µs]
LSS test.mvcc.TestLargeItems(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [9846784] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [852.647721ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestLargeItems/recovery], Data log [test.mvcc.TestLargeItems], Shared [false]. Built [1] plasmas, took [852.75078ms]
Plasma: doInit: data UsedSpace 9846784 recovery UsedSpace 8246
LSS test.mvcc.TestLargeItems(shard1) : all daemons started
LSS test.mvcc.TestLargeItems/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems started
LSS test.mvcc.TestLargeItems(shard1) : all daemons stopped
LSS test.mvcc.TestLargeItems/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems stopped
LSS test.mvcc.TestLargeItems(shard1) : LSSCleaner stopped
LSS test.mvcc.TestLargeItems/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestLargeItems ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestLargeItems ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestLargeItems successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestLargeItems (108.80s)
=== RUN   TestTooLargeKey
----------- running TestTooLargeKey
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestTooLargeKey(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestTooLargeKey
LSS test.basic.TestTooLargeKey/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestTooLargeKey/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestTooLargeKey) to LSS (test.basic.TestTooLargeKey) and RecoveryLSS (test.basic.TestTooLargeKey/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestTooLargeKey
Shard shards/shard1(1) : Add instance test.basic.TestTooLargeKey to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestTooLargeKey to LSS test.basic.TestTooLargeKey
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestTooLargeKey/recovery], Data log [test.basic.TestTooLargeKey], Shared [false]
LSS test.basic.TestTooLargeKey/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.856µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestTooLargeKey/recovery], Data log [test.basic.TestTooLargeKey], Shared [false]. Built [0] plasmas, took [93.553µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestTooLargeKey(shard1) : all daemons started
LSS test.basic.TestTooLargeKey/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestTooLargeKey started
LSS test.basic.TestTooLargeKey(shard1) : all daemons stopped
LSS test.basic.TestTooLargeKey/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestTooLargeKey stopped
LSS test.basic.TestTooLargeKey(shard1) : LSSCleaner stopped
LSS test.basic.TestTooLargeKey/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestTooLargeKey closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestTooLargeKey ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestTooLargeKey ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestTooLargeKey successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestTooLargeKey (2.68s)
=== RUN   TestMVCCItemUpdateSize
----------- running TestMVCCItemUpdateSize
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestMVCCItemUpdateSize(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestMVCCItemUpdateSize
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestMVCCItemUpdateSize/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestMVCCItemUpdateSize) to LSS (test.default.TestMVCCItemUpdateSize) and RecoveryLSS (test.default.TestMVCCItemUpdateSize/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestMVCCItemUpdateSize
Shard shards/shard1(1) : Add instance test.default.TestMVCCItemUpdateSize to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestMVCCItemUpdateSize to LSS test.default.TestMVCCItemUpdateSize
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestMVCCItemUpdateSize/recovery], Data log [test.default.TestMVCCItemUpdateSize], Shared [false]
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [81.562µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestMVCCItemUpdateSize/recovery], Data log [test.default.TestMVCCItemUpdateSize], Shared [false]. Built [0] plasmas, took [121.474µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestMVCCItemUpdateSize(shard1) : all daemons started
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize started
{
"memory_quota":         10995116277,
"count":                20000,
"compacts":             296,
"purges":               0,
"splits":               98,
"merges":               0,
"inserts":              20000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          874396,
"memory_size_index":    7028,
"allocated":            6167140,
"freed":                5292744,
"reclaimed":            4415116,
"reclaim_pending":      877628,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      7028,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            99,
"items_count":          0,
"total_records":        20000,
"num_rec_allocs":       118993,
"num_rec_frees":        98993,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       700000,
"lss_data_size":        165616,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        526482,
"est_recovery_size":    35718,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           20000,
"cache_misses":         0,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               3,
"gcSn":                 0,
"gcSnIntervals":       "[0 1]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       17,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           0,
"page_cnt":             0,
"page_itemcnt":         0,
"avg_item_size":        0,
"avg_page_size":        0,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":2274266,
"page_bytes_compressed":515320,
"compression_ratio":    4.41331,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    11848,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          38765,
    "lss_data_size":        182468,
    "lss_used_space":       557056,
    "lss_disk_size":        557056,
    "lss_fragmentation":    67,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        557056,
    "bytes_incoming":       700000,
    "write_amp":            0.00,
    "write_amp_avg":        0.75,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      524288,
    "num_sctxs":            28,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
LSS test.default.TestMVCCItemUpdateSize(shard1) : all daemons stopped
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize stopped
LSS test.default.TestMVCCItemUpdateSize(shard1) : LSSCleaner stopped
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize closed
LSS test.default.TestMVCCItemUpdateSize(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestMVCCItemUpdateSize
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestMVCCItemUpdateSize/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestMVCCItemUpdateSize) to LSS (test.default.TestMVCCItemUpdateSize) and RecoveryLSS (test.default.TestMVCCItemUpdateSize/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestMVCCItemUpdateSize
Shard shards/shard1(1) : Add instance test.default.TestMVCCItemUpdateSize to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestMVCCItemUpdateSize to LSS test.default.TestMVCCItemUpdateSize
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestMVCCItemUpdateSize/recovery], Data log [test.default.TestMVCCItemUpdateSize], Shared [false]
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [32768]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [1.402763ms]
LSS test.default.TestMVCCItemUpdateSize(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [524288] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [12.703564ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestMVCCItemUpdateSize/recovery], Data log [test.default.TestMVCCItemUpdateSize], Shared [false]. Built [1] plasmas, took [12.79989ms]
Plasma: doInit: data UsedSpace 524288 recovery UsedSpace 35718
LSS test.default.TestMVCCItemUpdateSize(shard1) : all daemons started
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize started
{
"memory_quota":         10995116277,
"count":                0,
"compacts":             0,
"purges":               0,
"splits":               0,
"merges":               0,
"inserts":              0,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          8884,
"memory_size_index":    6996,
"allocated":            25224,
"freed":                16340,
"reclaimed":            16340,
"reclaim_pending":      0,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      6996,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            99,
"items_count":          0,
"total_records":        20000,
"num_rec_allocs":       0,
"num_rec_frees":        0,
"num_rec_swapout":      20000,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       0,
"lss_data_size":        165616,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        530588,
"est_recovery_size":    43930,
"lss_num_reads":        0,
"lss_read_bs":          517108,
"lss_blk_read_bs":      520192,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           0,
"cache_misses":         0,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               360002,
"gcSn":                 360002,
"gcSnIntervals":       "[0 360003]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            3,
"num_free_wctxs":       1,
"num_readers":          0,
"num_writers":          0,
"page_bytes":           0,
"page_cnt":             0,
"page_itemcnt":         0,
"avg_item_size":        0,
"avg_page_size":        0,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":0,
"page_bytes_compressed":0,
"compression_ratio":    0.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    6336,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          10632,
    "lss_data_size":        182468,
    "lss_used_space":       565248,
    "lss_disk_size":        565248,
    "lss_fragmentation":    67,
    "lss_num_reads":        0,
    "lss_read_bs":          543466,
    "lss_blk_read_bs":      552960,
    "bytes_written":        8192,
    "bytes_incoming":       0,
    "write_amp":            0.00,
    "write_amp_avg":        0.00,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      528384,
    "num_sctxs":            13,
    "num_free_sctxs":       2,
    "num_swapperWriter":    32
  }
}
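The lss_stats sections in both dumps report lss_fragmentation of 67, which is consistent with the share of used log space not occupied by live data: (lss_used_space - lss_data_size) * 100 / lss_used_space, e.g. (565248 - 182468) * 100 / 565248. A small Go sketch under that assumption (the helper name is hypothetical):

```go
package main

import "fmt"

// lssFragmentation estimates log fragmentation as an integer percentage,
// following the derivation the dumps above suggest: the portion of used
// log-structured-storage space that is no longer live data.
func lssFragmentation(usedSpace, dataSize int64) int64 {
	if usedSpace == 0 {
		return 0
	}
	return (usedSpace - dataSize) * 100 / usedSpace
}

func main() {
	// lss_used_space and lss_data_size from the stats dump above.
	fmt.Println(lssFragmentation(565248, 182468)) // 67
}
```

The same formula reproduces the value in the first dump as well: (557056 - 182468) * 100 / 557056 also truncates to 67.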
LSS test.default.TestMVCCItemUpdateSize(shard1) : all daemons stopped
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize stopped
LSS test.default.TestMVCCItemUpdateSize(shard1) : LSSCleaner stopped
LSS test.default.TestMVCCItemUpdateSize/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestMVCCItemUpdateSize ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestMVCCItemUpdateSize ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestMVCCItemUpdateSize successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestMVCCItemUpdateSize (0.21s)
=== RUN   TestEvictionStats
----------- running TestEvictionStats
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestEvictionStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestEvictionStats
LSS test.mvcc.TestEvictionStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestEvictionStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestEvictionStats) to LSS (test.mvcc.TestEvictionStats) and RecoveryLSS (test.mvcc.TestEvictionStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestEvictionStats
Shard shards/shard1(1) : Add instance test.mvcc.TestEvictionStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestEvictionStats to LSS test.mvcc.TestEvictionStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestEvictionStats/recovery], Data log [test.mvcc.TestEvictionStats], Shared [false]
LSS test.mvcc.TestEvictionStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.77µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestEvictionStats/recovery], Data log [test.mvcc.TestEvictionStats], Shared [false]. Built [0] plasmas, took [81.349µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestEvictionStats(shard1) : all daemons started
LSS test.mvcc.TestEvictionStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestEvictionStats started
LSS test.mvcc.TestEvictionStats(shard1) : all daemons stopped
LSS test.mvcc.TestEvictionStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestEvictionStats stopped
LSS test.mvcc.TestEvictionStats(shard1) : LSSCleaner stopped
LSS test.mvcc.TestEvictionStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestEvictionStats closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestEvictionStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestEvictionStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestEvictionStats successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictionStats (0.43s)
=== RUN   TestReaderCacheStats
----------- running TestReaderCacheStats
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestReaderCacheStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestReaderCacheStats
LSS test.mvcc.TestReaderCacheStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestReaderCacheStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestReaderCacheStats) to LSS (test.mvcc.TestReaderCacheStats) and RecoveryLSS (test.mvcc.TestReaderCacheStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestReaderCacheStats
Shard shards/shard1(1) : Add instance test.mvcc.TestReaderCacheStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestReaderCacheStats to LSS test.mvcc.TestReaderCacheStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestReaderCacheStats/recovery], Data log [test.mvcc.TestReaderCacheStats], Shared [false]
LSS test.mvcc.TestReaderCacheStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.63µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestReaderCacheStats/recovery], Data log [test.mvcc.TestReaderCacheStats], Shared [false]. Built [0] plasmas, took [86.308µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestReaderCacheStats(shard1) : all daemons started
LSS test.mvcc.TestReaderCacheStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestReaderCacheStats started
Plasma: Warning: not enough memory to hold records in memory. MemStats: {"memory_size":5216,"memory_size_index":4320,"buf_memused":99364,"mvcc_purge_ratio":1.00000,"resident_ratio":0.00000,"alloc_size":2889708,"free_size":2880172,"items_count":10000,"recs_in_mem":0,"reclaimed":0,"reclaim_pending":2880172}
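The warning's MemStats show recs_in_mem 0 against items_count 10000, and the reported resident_ratio of 0.00000 is consistent with recs_in_mem / items_count (the stats dumps earlier behave the same way: resident_ratio 1.0 when nothing is swapped out, 0.0 when all records are). A Go sketch under that assumption (function name illustrative, not plasma's API):

```go
package main

import "fmt"

// residentRatio is the fraction of records still held in memory,
// consistent with the resident_ratio field in the MemStats warning above.
func residentRatio(recsInMem, itemsCount int64) float64 {
	if itemsCount == 0 {
		return 1.0 // an empty instance is trivially fully resident
	}
	return float64(recsInMem) / float64(itemsCount)
}

func main() {
	// recs_in_mem=0, items_count=10000 from the warning's MemStats.
	fmt.Printf("%.5f\n", residentRatio(0, 10000)) // 0.00000
}
```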

LSS test.mvcc.TestReaderCacheStats(shard1) : all daemons stopped
LSS test.mvcc.TestReaderCacheStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestReaderCacheStats stopped
LSS test.mvcc.TestReaderCacheStats(shard1) : LSSCleaner stopped
LSS test.mvcc.TestReaderCacheStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestReaderCacheStats closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestReaderCacheStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestReaderCacheStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestReaderCacheStats successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestReaderCacheStats (1.12s)
=== RUN   TestInvalidSnapshot
----------- running TestInvalidSnapshot
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestInvalidSnapshot(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestInvalidSnapshot
LSS test.mvcc.TestInvalidSnapshot/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestInvalidSnapshot/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestInvalidSnapshot) to LSS (test.mvcc.TestInvalidSnapshot) and RecoveryLSS (test.mvcc.TestInvalidSnapshot/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestInvalidSnapshot
Shard shards/shard1(1) : Add instance test.mvcc.TestInvalidSnapshot to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestInvalidSnapshot to LSS test.mvcc.TestInvalidSnapshot
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestInvalidSnapshot/recovery], Data log [test.mvcc.TestInvalidSnapshot], Shared [false]
LSS test.mvcc.TestInvalidSnapshot/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [58.959µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestInvalidSnapshot/recovery], Data log [test.mvcc.TestInvalidSnapshot], Shared [false]. Built [0] plasmas, took [105.208µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestInvalidSnapshot(shard1) : all daemons started
LSS test.mvcc.TestInvalidSnapshot/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestInvalidSnapshot started
LSS test.mvcc.TestInvalidSnapshot(shard1) : all daemons stopped
LSS test.mvcc.TestInvalidSnapshot/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestInvalidSnapshot stopped
LSS test.mvcc.TestInvalidSnapshot(shard1) : LSSCleaner stopped
LSS test.mvcc.TestInvalidSnapshot/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestInvalidSnapshot closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestInvalidSnapshot ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestInvalidSnapshot ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestInvalidSnapshot successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestInvalidSnapshot (0.91s)
=== RUN   TestEmptyKeyInsert
----------- running TestEmptyKeyInsert
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestEmptyKeyInsert(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestEmptyKeyInsert
LSS test.mvcc.TestEmptyKeyInsert/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestEmptyKeyInsert/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestEmptyKeyInsert) to LSS (test.mvcc.TestEmptyKeyInsert) and RecoveryLSS (test.mvcc.TestEmptyKeyInsert/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestEmptyKeyInsert
Shard shards/shard1(1) : Add instance test.mvcc.TestEmptyKeyInsert to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestEmptyKeyInsert to LSS test.mvcc.TestEmptyKeyInsert
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestEmptyKeyInsert/recovery], Data log [test.mvcc.TestEmptyKeyInsert], Shared [false]
LSS test.mvcc.TestEmptyKeyInsert/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.046µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestEmptyKeyInsert/recovery], Data log [test.mvcc.TestEmptyKeyInsert], Shared [false]. Built [0] plasmas, took [82.166µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestEmptyKeyInsert(shard1) : all daemons started
LSS test.mvcc.TestEmptyKeyInsert/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestEmptyKeyInsert started
LSS test.mvcc.TestEmptyKeyInsert(shard1) : all daemons stopped
LSS test.mvcc.TestEmptyKeyInsert/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestEmptyKeyInsert stopped
LSS test.mvcc.TestEmptyKeyInsert(shard1) : LSSCleaner stopped
LSS test.mvcc.TestEmptyKeyInsert/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestEmptyKeyInsert closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestEmptyKeyInsert ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestEmptyKeyInsert ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestEmptyKeyInsert successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEmptyKeyInsert (0.03s)
=== RUN   TestPageMergeCorrectness2
--- PASS: TestPageMergeCorrectness2 (0.00s)
=== RUN   TestPageMergeCorrectness
--- PASS: TestPageMergeCorrectness (0.01s)
=== RUN   TestPageMarshalFull
--- PASS: TestPageMarshalFull (0.01s)
=== RUN   TestPageMergeMarshal
--- PASS: TestPageMergeMarshal (0.00s)
=== RUN   TestPageOperations
--- PASS: TestPageOperations (0.04s)
=== RUN   TestPageIterator
--- PASS: TestPageIterator (0.00s)
=== RUN   TestPageMarshal
--- PASS: TestPageMarshal (0.02s)
=== RUN   TestPageMergeCorrectness3
--- PASS: TestPageMergeCorrectness3 (0.00s)
=== RUN   TestPlasmaPageVisitor
----------- running TestPlasmaPageVisitor
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaPageVisitor(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaPageVisitor
LSS test.basic.TestPlasmaPageVisitor/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaPageVisitor/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaPageVisitor) to LSS (test.basic.TestPlasmaPageVisitor) and RecoveryLSS (test.basic.TestPlasmaPageVisitor/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaPageVisitor
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaPageVisitor to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaPageVisitor to LSS test.basic.TestPlasmaPageVisitor
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaPageVisitor/recovery], Data log [test.basic.TestPlasmaPageVisitor], Shared [false]
LSS test.basic.TestPlasmaPageVisitor/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [142.255µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaPageVisitor/recovery], Data log [test.basic.TestPlasmaPageVisitor], Shared [false]. Built [0] plasmas, took [189.371µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaPageVisitor(shard1) : all daemons started
LSS test.basic.TestPlasmaPageVisitor/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaPageVisitor started
Partition counts [241 623 228 92 231 275 599 330 369 333 497 404 753 0 0 0]
LSS test.basic.TestPlasmaPageVisitor(shard1) : all daemons stopped
LSS test.basic.TestPlasmaPageVisitor/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaPageVisitor stopped
LSS test.basic.TestPlasmaPageVisitor(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaPageVisitor/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaPageVisitor closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaPageVisitor ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaPageVisitor ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaPageVisitor successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaPageVisitor (4.90s)
=== RUN   TestPageRingVisitor
----------- running TestPageRingVisitor
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPageRingVisitor(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPageRingVisitor
LSS test.basic.TestPageRingVisitor/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPageRingVisitor/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPageRingVisitor) to LSS (test.basic.TestPageRingVisitor) and RecoveryLSS (test.basic.TestPageRingVisitor/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPageRingVisitor
Shard shards/shard1(1) : Add instance test.basic.TestPageRingVisitor to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPageRingVisitor to LSS test.basic.TestPageRingVisitor
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPageRingVisitor/recovery], Data log [test.basic.TestPageRingVisitor], Shared [false]
LSS test.basic.TestPageRingVisitor/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.583µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPageRingVisitor/recovery], Data log [test.basic.TestPageRingVisitor], Shared [false]. Built [0] plasmas, took [105.582µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPageRingVisitor(shard1) : all daemons started
LSS test.basic.TestPageRingVisitor/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPageRingVisitor started
LSS test.basic.TestPageRingVisitor(shard1) : all daemons stopped
LSS test.basic.TestPageRingVisitor/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPageRingVisitor stopped
LSS test.basic.TestPageRingVisitor(shard1) : LSSCleaner stopped
LSS test.basic.TestPageRingVisitor/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPageRingVisitor closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPageRingVisitor ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPageRingVisitor ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPageRingVisitor successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageRingVisitor (4.76s)
=== RUN   TestCheckpointRecovery
----------- running TestCheckpointRecovery
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestCheckpointRecovery) to LSS (test.mvcc.TestCheckpointRecovery) and RecoveryLSS (test.mvcc.TestCheckpointRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to LSS test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [112.403µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]. Built [0] plasmas, took [151.645µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons started
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery started
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery stopped
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery closed
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestCheckpointRecovery) to LSS (test.mvcc.TestCheckpointRecovery) and RecoveryLSS (test.mvcc.TestCheckpointRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to LSS test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [1089536]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [20698860] took [47.570191ms]
LSS test.mvcc.TestCheckpointRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [20709376] replayOffset [20698860]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [47.790571ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]. Built [1] plasmas, took [49.63743ms]
Plasma: doInit: data UsedSpace 20709376 recovery UsedSpace 1090920
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons started
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery started
Checkpoint recovery took 58.39252ms
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery stopped
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery closed
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckpointRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestCheckpointRecovery) to LSS (test.mvcc.TestCheckpointRecovery) and RecoveryLSS (test.mvcc.TestCheckpointRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckpointRecovery to LSS test.mvcc.TestCheckpointRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [1097728]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [20713472] took [45.637464ms]
LSS test.mvcc.TestCheckpointRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [20717568] replayOffset [20713472]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [45.798286ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestCheckpointRecovery/recovery], Data log [test.mvcc.TestCheckpointRecovery], Shared [false]. Built [1] plasmas, took [47.656684ms]
Plasma: doInit: data UsedSpace 20717568 recovery UsedSpace 1099124
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons started
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery started
Regular recovery took 54.71878ms
LSS test.mvcc.TestCheckpointRecovery(shard1) : all daemons stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery stopped
LSS test.mvcc.TestCheckpointRecovery(shard1) : LSSCleaner stopped
LSS test.mvcc.TestCheckpointRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestCheckpointRecovery ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestCheckpointRecovery ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestCheckpointRecovery successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestCheckpointRecovery (9.57s)
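The recovery lines above show why checkpoint recovery is fast: the recovery (header) log is replayed in full, and the data log is only replayed from `replayOffset` to its tail. A sketch of that offset arithmetic, using the figures from the first recovery pass above (illustrative only; `dataReplayBytes` is not plasma's recovery code):

```go
package main

import "fmt"

// dataReplayBytes returns how much of the data log still needs replaying
// once the recovery log has been consumed: the span between the last
// offset the recovery log covers and the data log's tail.
func dataReplayBytes(replayOffset, tailOffset int64) int64 {
	if replayOffset > tailOffset {
		return 0 // recovery log already covers the whole data log
	}
	return tailOffset - replayOffset
}

func main() {
	// Offsets from the TestCheckpointRecovery run: tailOffset [20709376],
	// replayOffset [20698860] -> only ~10 KB of the ~20 MB data log replayed.
	fmt.Println(dataReplayBytes(20698860, 20709376)) // 10516
}
```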
=== RUN   TestPageCorruption
----------- running TestPageCorruption
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestPageCorruption(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestPageCorruption
LSS test.mvcc.TestPageCorruption/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestPageCorruption/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestPageCorruption) to LSS (test.mvcc.TestPageCorruption) and RecoveryLSS (test.mvcc.TestPageCorruption/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestPageCorruption
Shard shards/shard1(1) : Add instance test.mvcc.TestPageCorruption to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestPageCorruption to LSS test.mvcc.TestPageCorruption
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestPageCorruption/recovery], Data log [test.mvcc.TestPageCorruption], Shared [false]
LSS test.mvcc.TestPageCorruption/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [103.545µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestPageCorruption/recovery], Data log [test.mvcc.TestPageCorruption], Shared [false]. Built [0] plasmas, took [140.478µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestPageCorruption(shard1) : all daemons started
LSS test.mvcc.TestPageCorruption/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestPageCorruption started
LSS test.mvcc.TestPageCorruption(shard1) : all daemons stopped
LSS test.mvcc.TestPageCorruption/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestPageCorruption stopped
LSS test.mvcc.TestPageCorruption(shard1) : LSSCleaner stopped
LSS test.mvcc.TestPageCorruption/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestPageCorruption closed
Corrupting file at 2073672
Wrote [2950] bytes to file [test.mvcc.TestPageCorruption/log.00000000000000.data]
LSS test.mvcc.TestPageCorruption(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestPageCorruption
LSS test.mvcc.TestPageCorruption/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestPageCorruption/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestPageCorruption) to LSS (test.mvcc.TestPageCorruption) and RecoveryLSS (test.mvcc.TestPageCorruption/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestPageCorruption
Shard shards/shard1(1) : Add instance test.mvcc.TestPageCorruption to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestPageCorruption to LSS test.mvcc.TestPageCorruption
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestPageCorruption/recovery], Data log [test.mvcc.TestPageCorruption], Shared [false]
LSS test.mvcc.TestPageCorruption/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [126976]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [2066405] took [4.723763ms]
LSS test.mvcc.TestPageCorruption(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [2076672] replayOffset [2066405]
LSS test.mvcc.TestPageCorruption(shard1) : (fatal error - Failed to recover plasmaId 0 due to error: 'fatal: LSS Block is corrupted')
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestPageCorruption/recovery], Data log [test.mvcc.TestPageCorruption], Shared [false]. Built [0] plasmas, took [5.353722ms]
LSS test.mvcc.TestPageCorruption(shard1) : LSSCleaner stopped
LSS test.mvcc.TestPageCorruption/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestPageCorruption closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestPageCorruption ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestPageCorruption ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestPageCorruption successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageCorruption (0.83s)
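The test above overwrites bytes mid-file and expects recovery to fail with "LSS Block is corrupted" rather than silently rebuilding from bad data. The standard way to get that behavior is a per-block checksum verified on read. A minimal sketch of the idea (the block layout, `writeBlock`/`readBlock`, and the use of CRC32 are assumptions for illustration, not plasma's actual LSS block format):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"hash/crc32"
)

var errCorrupt = errors.New("fatal: LSS Block is corrupted")

// writeBlock prefixes the payload with a CRC32 of its contents.
func writeBlock(payload []byte) []byte {
	blk := make([]byte, 4+len(payload))
	binary.LittleEndian.PutUint32(blk, crc32.ChecksumIEEE(payload))
	copy(blk[4:], payload)
	return blk
}

// readBlock recomputes the checksum and fails if it no longer matches,
// which is how a flipped byte anywhere in the payload is detected.
func readBlock(blk []byte) ([]byte, error) {
	stored := binary.LittleEndian.Uint32(blk)
	payload := blk[4:]
	if crc32.ChecksumIEEE(payload) != stored {
		return nil, errCorrupt
	}
	return payload, nil
}

func main() {
	blk := writeBlock([]byte("page contents"))
	blk[7] ^= 0xff // corrupt one payload byte, as the test does to the log file
	if _, err := readBlock(blk); err != nil {
		fmt.Println(err) // fatal: LSS Block is corrupted
	}
}
```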
=== RUN   TestCheckPointRecoveryFollowCleaning
----------- running TestCheckPointRecoveryFollowCleaning
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckPointRecoveryFollowCleaning
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestCheckPointRecoveryFollowCleaning) to LSS (test.mvcc.TestCheckPointRecoveryFollowCleaning) and RecoveryLSS (test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestCheckPointRecoveryFollowCleaning
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckPointRecoveryFollowCleaning to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckPointRecoveryFollowCleaning to LSS test.mvcc.TestCheckPointRecoveryFollowCleaning
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery], Data log [test.mvcc.TestCheckPointRecoveryFollowCleaning], Shared [false]
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [83.148µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery], Data log [test.mvcc.TestCheckPointRecoveryFollowCleaning], Shared [false]. Built [0] plasmas, took [117.914µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : all daemons started
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning started
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : all daemons stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : LSSCleaner stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning closed
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckPointRecoveryFollowCleaning
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestCheckPointRecoveryFollowCleaning) to LSS (test.mvcc.TestCheckPointRecoveryFollowCleaning) and RecoveryLSS (test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestCheckPointRecoveryFollowCleaning
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckPointRecoveryFollowCleaning to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestCheckPointRecoveryFollowCleaning to LSS test.mvcc.TestCheckPointRecoveryFollowCleaning
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery], Data log [test.mvcc.TestCheckPointRecoveryFollowCleaning], Shared [false]
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [16384]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [53248] took [324.551µs]
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [61440] replayOffset [53248]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [426.856µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery], Data log [test.mvcc.TestCheckPointRecoveryFollowCleaning], Shared [false]. Built [1] plasmas, took [494.896µs]
Plasma: doInit: data UsedSpace 61440 recovery UsedSpace 16838
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : all daemons started
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning started
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : logCleaner: starting... frag 64, data: 24442, used: 69632 log:(0 - 69632)
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : logCleaner: completed... frag 26, data: 24046, used: 32750, relocated: 1, retries: 0, skipped: 3 log:(0 - 98304) run:1 duration:7 ms
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : all daemons stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning(shard1) : LSSCleaner stopped
LSS test.mvcc.TestCheckPointRecoveryFollowCleaning/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestCheckPointRecoveryFollowCleaning ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestCheckPointRecoveryFollowCleaning ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestCheckPointRecoveryFollowCleaning successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestCheckPointRecoveryFollowCleaning (0.14s)
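The logCleaner lines above report a frag/data/used triple before and after cleaning, where fragmentation is the share of used log space not occupied by live data. The arithmetic can be checked against the logged values (this reproduces the printed numbers only; it is not plasma's cleaner code):

```go
package main

import "fmt"

// fragPercent returns log fragmentation as an integer percentage:
// the fraction of used log space that is garbage (used - data),
// matching the frag/data/used triples in the logCleaner lines.
func fragPercent(data, used int64) int64 {
	if used == 0 {
		return 0
	}
	return (used - data) * 100 / used
}

func main() {
	// "starting... frag 64, data: 24442, used: 69632"
	fmt.Println(fragPercent(24442, 69632)) // 64
	// "completed... frag 26, data: 24046, used: 32750"
	fmt.Println(fragPercent(24046, 32750)) // 26
}
```

The starting value crossing the cleaner's threshold is what triggers the run; after relocation the garbage fraction drops back to 26%.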
=== RUN   TestFragmentationWithZeroItems
----------- running TestFragmentationWithZeroItems
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestFragmentationWithZeroItems
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestFragmentationWithZeroItems/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestFragmentationWithZeroItems) to LSS (test.mvcc.TestFragmentationWithZeroItems) and RecoveryLSS (test.mvcc.TestFragmentationWithZeroItems/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestFragmentationWithZeroItems
Shard shards/shard1(1) : Add instance test.mvcc.TestFragmentationWithZeroItems to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestFragmentationWithZeroItems to LSS test.mvcc.TestFragmentationWithZeroItems
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestFragmentationWithZeroItems/recovery], Data log [test.mvcc.TestFragmentationWithZeroItems], Shared [false]
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [143.866µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestFragmentationWithZeroItems/recovery], Data log [test.mvcc.TestFragmentationWithZeroItems], Shared [false]. Built [0] plasmas, took [179.016µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : all daemons started
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestFragmentationWithZeroItems started
fragAutoTuner: FragRatio at 30. MaxFragRatio 40, MaxBandwidth 2116. BandwidthUsage 1061. AvailDisk N/A. TotalUsed 0. BandwidthRatio 0.5014177693761814. UsedSpaceRatio 0. CleanerBandwidth 9223372036854775807. Duration 0.
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : recoveryCleaner: starting... frag 90, data: 8292, used: 86016 log:(0 - 86016)
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : recoveryCleaner: completed... frag 0, data: 8292, used: 7904, relocated: 1, retries: 0, skipped: 0 log:(0 - 90112) run:1 duration:4 ms
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : logCleaner: starting... frag 83, data: 8202, used: 49152 log:(0 - 49152)
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : logCleaner: completed... frag 0, data: 8204, used: 7904, relocated: 1, retries: 0, skipped: 0 log:(0 - 53248) run:1 duration:4 ms
data log: data 8204 used space 12000
recovery log: data 8292 used space 12000
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : all daemons stopped
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestFragmentationWithZeroItems stopped
LSS test.mvcc.TestFragmentationWithZeroItems(shard1) : LSSCleaner stopped
LSS test.mvcc.TestFragmentationWithZeroItems/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestFragmentationWithZeroItems closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestFragmentationWithZeroItems ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestFragmentationWithZeroItems ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestFragmentationWithZeroItems successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestFragmentationWithZeroItems (17.16s)
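The fragAutoTuner line above reports `BandwidthRatio 0.5014177693761814` alongside `MaxBandwidth 2116` and `BandwidthUsage 1061`, so the ratio is just usage over maximum. A sketch of that one computation, checked against the logged figures (`bandwidthRatio` is illustrative; the tuner's real inputs and smoothing are not shown here):

```go
package main

import "fmt"

// bandwidthRatio reproduces the BandwidthRatio figure from the
// fragAutoTuner log line: cleaner bandwidth actually consumed divided by
// the maximum available bandwidth observed.
func bandwidthRatio(usage, max float64) float64 {
	if max == 0 {
		return 0 // no bandwidth budget measured yet
	}
	return usage / max
}

func main() {
	// "MaxBandwidth 2116. BandwidthUsage 1061." from the log line above.
	fmt.Println(bandwidthRatio(1061, 2116)) // 0.5014177693761814
}
```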
=== RUN   TestEvictOnPersist
----------- running TestEvictOnPersist
Start shard recovery: shardsDirectory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictOnPersist(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictOnPersist
LSS test.default.TestEvictOnPersist/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictOnPersist/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictOnPersist) to LSS (test.default.TestEvictOnPersist) and RecoveryLSS (test.default.TestEvictOnPersist/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestEvictOnPersist
Shard shards/shard1(1) : Add instance test.default.TestEvictOnPersist to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictOnPersist to LSS test.default.TestEvictOnPersist
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictOnPersist/recovery], Data log [test.default.TestEvictOnPersist], Shared [false]
LSS test.default.TestEvictOnPersist/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.409µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictOnPersist/recovery], Data log [test.default.TestEvictOnPersist], Shared [false]. Built [0] plasmas, took [117.12µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictOnPersist(shard1) : all daemons started
LSS test.default.TestEvictOnPersist/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestEvictOnPersist started
TestEvictOnPersist: evict dirty page on persist
TestEvictOnPersist: do not evict read page on persist
TestEvictOnPersist: do not evict dirty page under threshold
LSS test.default.TestEvictOnPersist(shard1) : all daemons stopped
LSS test.default.TestEvictOnPersist/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestEvictOnPersist stopped
LSS test.default.TestEvictOnPersist(shard1) : LSSCleaner stopped
LSS test.default.TestEvictOnPersist/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictOnPersist closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestEvictOnPersist ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestEvictOnPersist ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictOnPersist successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictOnPersist (0.14s)
=== RUN   TestPlasmaSimple
----------- running TestPlasmaSimple
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaSimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaSimple
LSS test.basic.TestPlasmaSimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaSimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaSimple) to LSS (test.basic.TestPlasmaSimple) and RecoveryLSS (test.basic.TestPlasmaSimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaSimple
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaSimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaSimple to LSS test.basic.TestPlasmaSimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaSimple/recovery], Data log [test.basic.TestPlasmaSimple], Shared [false]
LSS test.basic.TestPlasmaSimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.983µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaSimple/recovery], Data log [test.basic.TestPlasmaSimple], Shared [false]. Built [0] plasmas, took [84.397µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaSimple(shard1) : all daemons started
LSS test.basic.TestPlasmaSimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaSimple started
{
"memory_quota":         1099511627776,
"count":                200000,
"compacts":             13929,
"purges":               0,
"splits":               4974,
"merges":               3979,
"inserts":              1000000,
"deletes":              800000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          3329496,
"memory_size_index":    53104,
"allocated":            153397040,
"freed":                150067544,
"reclaimed":            150040968,
"reclaim_pending":      26576,
"reclaim_list_size":    26576,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      267008,
"freed_index":          213904,
"reclaimed_index":      213744,
"num_pages":            996,
"items_count":          0,
"total_records":        200040,
"num_rec_allocs":       5808251,
"num_rec_frees":        5608211,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       14400000,
"lss_data_size":        1625848,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16410246,
"est_recovery_size":    736134,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3800000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          2000000,
"rcache_misses":        0,
"rcache_hit_ratio":     1.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            2,
"num_free_wctxs":       0,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           32066008,
"page_cnt":             22881,
"page_itemcnt":         4008251,
"avg_item_size":        8,
"avg_page_size":        1401,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16326684,
"page_bytes_compressed":16326684,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    121248,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3284,
    "lss_data_size":        1677624,
    "lss_used_space":       17313792,
    "lss_disk_size":        17313792,
    "lss_fragmentation":    90,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        17313792,
    "bytes_incoming":       14400000,
    "write_amp":            0.00,
    "write_amp_avg":        1.15,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      16510976,
    "num_sctxs":            12,
    "num_free_sctxs":       1,
    "num_swapperWriter":    32
  }
}
LSS test.basic.TestPlasmaSimple(shard1) : all daemons stopped
LSS test.basic.TestPlasmaSimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaSimple stopped
LSS test.basic.TestPlasmaSimple(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaSimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaSimple closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaSimple ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaSimple ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaSimple successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaSimple (14.44s)
=== RUN   TestPlasmaCompression
----------- running TestPlasmaCompression
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaCompression(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaCompression
LSS test.basic.TestPlasmaCompression/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaCompression/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaCompression) to LSS (test.basic.TestPlasmaCompression) and RecoveryLSS (test.basic.TestPlasmaCompression/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaCompression
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaCompression to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaCompression to LSS test.basic.TestPlasmaCompression
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaCompression/recovery], Data log [test.basic.TestPlasmaCompression], Shared [false]
LSS test.basic.TestPlasmaCompression/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.72µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaCompression/recovery], Data log [test.basic.TestPlasmaCompression], Shared [false]. Built [0] plasmas, took [108.759µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaCompression(shard1) : all daemons started
LSS test.basic.TestPlasmaCompression/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompression started
LSS test.basic.TestPlasmaCompression(shard1) : all daemons stopped
LSS test.basic.TestPlasmaCompression/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompression stopped
LSS test.basic.TestPlasmaCompression(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaCompression/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompression closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaCompression ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaCompression ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompression successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaCompression (0.03s)
=== RUN   TestPlasmaCompressionWrong
----------- running TestPlasmaCompressionWrong
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaCompressionWrong(shard1) : LSSCleaner initialized
LSS test.basic.TestPlasmaCompressionWrong(shard1) : (fatal error - Incorrect compression type 'fake', switching to 'snappy')
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaCompressionWrong
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : LSSCleaner initialized for recovery
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : (fatal error - Incorrect compression type 'fake', switching to 'snappy')
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaCompressionWrong/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaCompressionWrong) to LSS (test.basic.TestPlasmaCompressionWrong) and RecoveryLSS (test.basic.TestPlasmaCompressionWrong/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaCompressionWrong
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaCompressionWrong to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaCompressionWrong to LSS test.basic.TestPlasmaCompressionWrong
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaCompressionWrong/recovery], Data log [test.basic.TestPlasmaCompressionWrong], Shared [false]
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [98.284µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaCompressionWrong/recovery], Data log [test.basic.TestPlasmaCompressionWrong], Shared [false]. Built [0] plasmas, took [142.26µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaCompressionWrong(shard1) : all daemons started
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompressionWrong started
LSS test.basic.TestPlasmaCompressionWrong(shard1) : all daemons stopped
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompressionWrong stopped
LSS test.basic.TestPlasmaCompressionWrong(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaCompressionWrong/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompressionWrong closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaCompressionWrong ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaCompressionWrong ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaCompressionWrong successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaCompressionWrong (0.02s)
=== RUN   TestSpoiledConfig
----------- running TestSpoiledConfig
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestSpoiledConfig(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestSpoiledConfig
LSS test.basic.TestSpoiledConfig/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestSpoiledConfig/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestSpoiledConfig) to LSS (test.basic.TestSpoiledConfig) and RecoveryLSS (test.basic.TestSpoiledConfig/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestSpoiledConfig
Shard shards/shard1(1) : Add instance test.basic.TestSpoiledConfig to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestSpoiledConfig to LSS test.basic.TestSpoiledConfig
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestSpoiledConfig/recovery], Data log [test.basic.TestSpoiledConfig], Shared [false]
LSS test.basic.TestSpoiledConfig/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [61.029µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestSpoiledConfig/recovery], Data log [test.basic.TestSpoiledConfig], Shared [false]. Built [0] plasmas, took [104.997µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestSpoiledConfig(shard1) : all daemons started
LSS test.basic.TestSpoiledConfig/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig started
LSS test.basic.TestSpoiledConfig(shard1) : all daemons stopped
LSS test.basic.TestSpoiledConfig/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig stopped
LSS test.basic.TestSpoiledConfig(shard1) : LSSCleaner stopped
LSS test.basic.TestSpoiledConfig/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig closed
LSS test.basic.TestSpoiledConfig(shard1) : LSSCleaner initialized
LSS test.basic.TestSpoiledConfig(shard1) : (fatal error - Incorrect compression type 'fake', switching to 'snappy')
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestSpoiledConfig
LSS test.basic.TestSpoiledConfig/recovery(shard1) : LSSCleaner initialized for recovery
LSS test.basic.TestSpoiledConfig/recovery(shard1) : (fatal error - Incorrect compression type 'fake', switching to 'snappy')
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestSpoiledConfig/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestSpoiledConfig) to LSS (test.basic.TestSpoiledConfig) and RecoveryLSS (test.basic.TestSpoiledConfig/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestSpoiledConfig
Shard shards/shard1(1) : Add instance test.basic.TestSpoiledConfig to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestSpoiledConfig to LSS test.basic.TestSpoiledConfig
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestSpoiledConfig/recovery], Data log [test.basic.TestSpoiledConfig], Shared [false]
LSS test.basic.TestSpoiledConfig/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [15.133381ms]
LSS test.basic.TestSpoiledConfig(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [4096] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [15.239463ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestSpoiledConfig/recovery], Data log [test.basic.TestSpoiledConfig], Shared [false]. Built [1] plasmas, took [15.284509ms]
Plasma: doInit: data UsedSpace 4096 recovery UsedSpace 42
LSS test.basic.TestSpoiledConfig(shard1) : all daemons started
LSS test.basic.TestSpoiledConfig/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig started
LSS test.basic.TestSpoiledConfig(shard1) : all daemons stopped
LSS test.basic.TestSpoiledConfig/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig stopped
LSS test.basic.TestSpoiledConfig(shard1) : LSSCleaner stopped
LSS test.basic.TestSpoiledConfig/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestSpoiledConfig ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestSpoiledConfig ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestSpoiledConfig successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSpoiledConfig (0.04s)
=== RUN   TestPlasmaErrorFile
----------- running TestPlasmaErrorFile
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.testPlasmaErrorFile(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.testPlasmaErrorFile
LSS test.basic.testPlasmaErrorFile/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.testPlasmaErrorFile/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.testPlasmaErrorFile) to LSS (test.basic.testPlasmaErrorFile) and RecoveryLSS (test.basic.testPlasmaErrorFile/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.testPlasmaErrorFile
Shard shards/shard1(1) : Add instance test.basic.testPlasmaErrorFile to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.testPlasmaErrorFile to LSS test.basic.testPlasmaErrorFile
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.testPlasmaErrorFile/recovery], Data log [test.basic.testPlasmaErrorFile], Shared [false]
LSS test.basic.testPlasmaErrorFile/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [92.827µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.testPlasmaErrorFile/recovery], Data log [test.basic.testPlasmaErrorFile], Shared [false]. Built [0] plasmas, took [127.733µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.testPlasmaErrorFile(shard1) : all daemons started
LSS test.basic.testPlasmaErrorFile/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.testPlasmaErrorFile started
LSS test.basic.testPlasmaErrorFile(shard1) : all daemons stopped
LSS test.basic.testPlasmaErrorFile/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.testPlasmaErrorFile stopped
LSS test.basic.testPlasmaErrorFile(shard1) : LSSCleaner stopped
LSS test.basic.testPlasmaErrorFile/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.testPlasmaErrorFile closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.testPlasmaErrorFile ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.testPlasmaErrorFile ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.testPlasmaErrorFile successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaErrorFile (0.02s)
=== RUN   TestPlasmaPersistor
----------- running TestPlasmaPersistor
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaPersistor(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaPersistor
LSS test.basic.TestPlasmaPersistor/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaPersistor/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaPersistor) to LSS (test.basic.TestPlasmaPersistor) and RecoveryLSS (test.basic.TestPlasmaPersistor/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaPersistor
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaPersistor to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaPersistor to LSS test.basic.TestPlasmaPersistor
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaPersistor/recovery], Data log [test.basic.TestPlasmaPersistor], Shared [false]
LSS test.basic.TestPlasmaPersistor/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [92.67µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaPersistor/recovery], Data log [test.basic.TestPlasmaPersistor], Shared [false]. Built [0] plasmas, took [128.92µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaPersistor(shard1) : all daemons started
LSS test.basic.TestPlasmaPersistor/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaPersistor started
took 81.753365ms 29433856
took 40.714009ms 31920128
took 37.731717ms 34406400
{
"memory_quota":         1099511627776,
"count":                2200000,
"compacts":             18305,
"purges":               0,
"splits":               9152,
"merges":               0,
"inserts":              2200000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          49045784,
"memory_size_index":    488384,
"allocated":            227847320,
"freed":                178801536,
"reclaimed":            178346000,
"reclaim_pending":      455536,
"reclaim_list_size":    455536,
"reclaim_list_count":   47,
"reclaim_threshold":    50,
"allocated_index":      488384,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            9153,
"items_count":          0,
"total_records":        2200000,
"num_rec_allocs":       9567361,
"num_rec_frees":        7367361,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       17600000,
"lss_data_size":        19167416,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        34339636,
"est_recovery_size":    2028106,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           2200000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       17,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           57671792,
"page_cnt":             26866,
"page_itemcnt":         7208974,
"avg_item_size":        8,
"avg_page_size":        2146,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":34122334,
"page_bytes_compressed":34122334,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    16654552,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3284,
    "lss_data_size":        20574712,
    "lss_used_space":       36442112,
    "lss_disk_size":        36442112,
    "lss_fragmentation":    43,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        36442112,
    "bytes_incoming":       17600000,
    "write_amp":            0.00,
    "write_amp_avg":        1.95,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      34406400,
    "num_sctxs":            31,
    "num_free_sctxs":       21,
    "num_swapperWriter":    32
  }
}
LSS test.basic.TestPlasmaPersistor(shard1) : all daemons stopped
LSS test.basic.TestPlasmaPersistor/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaPersistor stopped
LSS test.basic.TestPlasmaPersistor(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaPersistor/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaPersistor closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaPersistor ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaPersistor ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaPersistor successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaPersistor (11.00s)
=== RUN   TestPlasmaEvictionLSSDataSize
----------- running TestPlasmaEvictionLSSDataSize
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaEvictionLSSDataSize(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaEvictionLSSDataSize
LSS test.basic.TestPlasmaEvictionLSSDataSize/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaEvictionLSSDataSize/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaEvictionLSSDataSize) to LSS (test.basic.TestPlasmaEvictionLSSDataSize) and RecoveryLSS (test.basic.TestPlasmaEvictionLSSDataSize/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaEvictionLSSDataSize
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaEvictionLSSDataSize to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaEvictionLSSDataSize to LSS test.basic.TestPlasmaEvictionLSSDataSize
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaEvictionLSSDataSize/recovery], Data log [test.basic.TestPlasmaEvictionLSSDataSize], Shared [false]
LSS test.basic.TestPlasmaEvictionLSSDataSize/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [79.878µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaEvictionLSSDataSize/recovery], Data log [test.basic.TestPlasmaEvictionLSSDataSize], Shared [false]. Built [0] plasmas, took [91.567µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaEvictionLSSDataSize(shard1) : all daemons started
LSS test.basic.TestPlasmaEvictionLSSDataSize/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaEvictionLSSDataSize started
LSS test.basic.TestPlasmaEvictionLSSDataSize(shard1) : all daemons stopped
LSS test.basic.TestPlasmaEvictionLSSDataSize/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaEvictionLSSDataSize stopped
LSS test.basic.TestPlasmaEvictionLSSDataSize(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaEvictionLSSDataSize/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaEvictionLSSDataSize closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaEvictionLSSDataSize ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaEvictionLSSDataSize ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaEvictionLSSDataSize successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaEvictionLSSDataSize (0.04s)
=== RUN   TestPlasmaEviction
----------- running TestPlasmaEviction
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaEviction(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaEviction
LSS test.basic.TestPlasmaEviction/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaEviction/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaEviction) to LSS (test.basic.TestPlasmaEviction) and RecoveryLSS (test.basic.TestPlasmaEviction/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaEviction
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaEviction to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaEviction to LSS test.basic.TestPlasmaEviction
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaEviction/recovery], Data log [test.basic.TestPlasmaEviction], Shared [false]
LSS test.basic.TestPlasmaEviction/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.993µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaEviction/recovery], Data log [test.basic.TestPlasmaEviction], Shared [false]. Built [0] plasmas, took [111.082µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaEviction(shard1) : all daemons started
LSS test.basic.TestPlasmaEviction/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaEviction started
LSS test.basic.TestPlasmaEviction(shard1) : all daemons stopped
LSS test.basic.TestPlasmaEviction/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaEviction stopped
LSS test.basic.TestPlasmaEviction(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaEviction/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaEviction closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaEviction ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaEviction ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaEviction successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaEviction (27.12s)
=== RUN   TestConcurrDelOps
----------- running TestConcurrDelOps
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestConcurrDelOps(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestConcurrDelOps
LSS test.basic.TestConcurrDelOps/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestConcurrDelOps/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestConcurrDelOps) to LSS (test.basic.TestConcurrDelOps) and RecoveryLSS (test.basic.TestConcurrDelOps/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestConcurrDelOps
Shard shards/shard1(1) : Add instance test.basic.TestConcurrDelOps to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestConcurrDelOps to LSS test.basic.TestConcurrDelOps
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestConcurrDelOps/recovery], Data log [test.basic.TestConcurrDelOps], Shared [false]
LSS test.basic.TestConcurrDelOps/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [81.36µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestConcurrDelOps/recovery], Data log [test.basic.TestConcurrDelOps], Shared [false]. Built [0] plasmas, took [265.155µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestConcurrDelOps(shard1) : all daemons started
LSS test.basic.TestConcurrDelOps/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestConcurrDelOps started
{
"memory_quota":         1099511627776,
"count":                0,
"compacts":             122593,
"purges":               0,
"splits":               33582,
"merges":               33582,
"inserts":              9999996,
"deletes":              9999996,
"compact_conflicts":    1656,
"split_conflicts":      826,
"merge_conflicts":      2,
"insert_conflicts":     4439,
"delete_conflicts":     1035,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          5288,
"memory_size_index":    0,
"allocated":            1653400776,
"freed":                1653395488,
"reclaimed":            1650449520,
"reclaim_pending":      2945968,
"reclaim_list_size":    2739992,
"reclaim_list_count":   248,
"reclaim_threshold":    50,
"allocated_index":      1791440,
"freed_index":          1791440,
"reclaimed_index":      1789216,
"num_pages":            1,
"items_count":          0,
"total_records":        158,
"num_rec_allocs":       61955063,
"num_rec_frees":        61954905,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       159999936,
"lss_data_size":        6350354,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        204441158,
"est_recovery_size":    5842756,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           19999993,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          1,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            2,
"num_free_wctxs":       0,
"num_readers":          1,
"num_writers":          12,
"page_bytes":           58086808,
"page_cnt":             48905,
"page_itemcnt":         7260851,
"avg_item_size":        8,
"avg_page_size":        1187,
"act_max_page_items":   400,
"act_min_page_items":   200,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":203836682,
"page_bytes_compressed":203836682,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3384,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          76956,
    "lss_data_size":        6525122,
    "lss_used_space":       214925312,
    "lss_disk_size":        214925312,
    "lss_fragmentation":    96,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        214925312,
    "bytes_incoming":       159999936,
    "write_amp":            0.00,
    "write_amp_avg":        1.30,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      208633856,
    "num_sctxs":            25,
    "num_free_sctxs":       2,
    "num_swapperWriter":    32
  }
}
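Several of the derived fields in the stats dump above can be cross-checked against the raw counters. The sketch below assumes the conventional definitions (hit ratio = hits / (hits + misses); averages = totals divided by counts); these formulas are inferred from the dumped values themselves, not taken from the plasma source.

```python
# Cross-check derived fields in the plasma stats dump above.
# The formulas are assumptions inferred from the dumped numbers.
stats = {
    "cache_hits": 19999993,
    "cache_misses": 0,
    "page_bytes": 58086808,
    "page_cnt": 48905,
    "page_itemcnt": 7260851,
    "page_bytes_marshalled": 203836682,
    "page_bytes_compressed": 203836682,
}

# hit ratio over all lookups
cache_hit_ratio = stats["cache_hits"] / (stats["cache_hits"] + stats["cache_misses"])
# integer averages, as the dump reports them
avg_page_size = stats["page_bytes"] // stats["page_cnt"]      # bytes per page
avg_item_size = stats["page_bytes"] // stats["page_itemcnt"]  # bytes per item
# marshalled vs compressed bytes (1.0 here: compression disabled)
compression_ratio = stats["page_bytes_marshalled"] / stats["page_bytes_compressed"]

print(cache_hit_ratio, avg_page_size, avg_item_size, compression_ratio)
```

These reproduce the dump's `cache_hit_ratio` 1.00000, `avg_page_size` 1187, `avg_item_size` 8, and `compression_ratio` 1.00000.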
LSS test.basic.TestConcurrDelOps(shard1) : all daemons stopped
LSS test.basic.TestConcurrDelOps/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestConcurrDelOps stopped
LSS test.basic.TestConcurrDelOps(shard1) : LSSCleaner stopped
LSS test.basic.TestConcurrDelOps/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestConcurrDelOps closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestConcurrDelOps ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestConcurrDelOps ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestConcurrDelOps successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestConcurrDelOps (55.38s)
=== RUN   TestPlasmaDataSize
----------- running TestPlasmaDataSize
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaDataSize(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaDataSize
LSS test.basic.TestPlasmaDataSize/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaDataSize/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaDataSize) to LSS (test.basic.TestPlasmaDataSize) and RecoveryLSS (test.basic.TestPlasmaDataSize/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaDataSize
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaDataSize to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaDataSize to LSS test.basic.TestPlasmaDataSize
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaDataSize/recovery], Data log [test.basic.TestPlasmaDataSize], Shared [false]
LSS test.basic.TestPlasmaDataSize/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.193µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaDataSize/recovery], Data log [test.basic.TestPlasmaDataSize], Shared [false]. Built [0] plasmas, took [87.565µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaDataSize(shard1) : all daemons started
LSS test.basic.TestPlasmaDataSize/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaDataSize started
Data size after persistence = 1610
LSS test.basic.TestPlasmaDataSize(shard1) : all daemons stopped
LSS test.basic.TestPlasmaDataSize/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaDataSize stopped
LSS test.basic.TestPlasmaDataSize(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaDataSize/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaDataSize closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaDataSize ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaDataSize ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaDataSize successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaDataSize (0.02s)
=== RUN   TestLargeBasePage
----------- running TestLargeBasePage
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestLargeBasePage(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeBasePage
LSS test.default.TestLargeBasePage/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeBasePage/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeBasePage) to LSS (test.default.TestLargeBasePage) and RecoveryLSS (test.default.TestLargeBasePage/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeBasePage
Shard shards/shard1(1) : Add instance test.default.TestLargeBasePage to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeBasePage to LSS test.default.TestLargeBasePage
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeBasePage/recovery], Data log [test.default.TestLargeBasePage], Shared [false]
LSS test.default.TestLargeBasePage/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.917µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeBasePage/recovery], Data log [test.default.TestLargeBasePage], Shared [false]. Built [0] plasmas, took [83.958µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestLargeBasePage(shard1) : all daemons started
LSS test.default.TestLargeBasePage/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestLargeBasePage started
Plasma: Adjusting SMO limits: Avg - Item Size 48 Avg Page Size 1807830: Old MaxPageItems 1000000 MinPageItems 25 MaxDeltaChainLen 1000 -> New MaxPageItems 2730 MinPageItems 273 MaxDeltaChainLen 1365
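The SMO-limit adjustment above (and the matching one later in the TestLargeValue run, where an average item size of 8218 yields MaxPageItems 15) is consistent with sizing a base page to roughly 128 KB of average-size items, with MinPageItems and MaxDeltaChainLen taken as fixed fractions of MaxPageItems. Below is a hedged sketch of that apparent heuristic; the 128 KB target and the divisors are inferences from the two log lines only, not confirmed against the plasma source.

```python
# Apparent SMO (structure-modification operation) limit heuristic, inferred
# from the two "Adjusting SMO limits" lines in this log. All constants here
# are assumptions fitted to those lines.
TARGET_PAGE_BYTES = 128 * 1024  # assumed ~128 KB base-page target

def adjust_smo_limits(avg_item_size):
    max_page_items = TARGET_PAGE_BYTES // avg_item_size
    min_page_items = max_page_items // 10   # assumed: 1/10 of max
    max_delta_chain = max_page_items // 2   # assumed: 1/2 of max
    return max_page_items, min_page_items, max_delta_chain

print(adjust_smo_limits(48))    # TestLargeBasePage line: 2730 / 273 / 1365
print(adjust_smo_limits(8218))  # TestLargeValue line:   15 / 1 / 7
```

Both logged adjustments match this rule exactly, which is why a flood of large values collapses MaxPageItems from 400 down to 15.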
LSS test.default.TestLargeBasePage(shard1) : all daemons stopped
LSS test.default.TestLargeBasePage/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestLargeBasePage stopped
LSS test.default.TestLargeBasePage(shard1) : LSSCleaner stopped
LSS test.default.TestLargeBasePage/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeBasePage closed
LSS test.default.TestLargeBasePage(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeBasePage
LSS test.default.TestLargeBasePage/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeBasePage/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeBasePage) to LSS (test.default.TestLargeBasePage) and RecoveryLSS (test.default.TestLargeBasePage/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeBasePage
Shard shards/shard1(1) : Add instance test.default.TestLargeBasePage to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeBasePage to LSS test.default.TestLargeBasePage
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeBasePage/recovery], Data log [test.default.TestLargeBasePage], Shared [false]
LSS test.default.TestLargeBasePage/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [246.007µs]
LSS test.default.TestLargeBasePage(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [2625536] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [35.708103ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeBasePage/recovery], Data log [test.default.TestLargeBasePage], Shared [false]. Built [1] plasmas, took [35.801569ms]
Plasma: doInit: data UsedSpace 2625536 recovery UsedSpace 12489
LSS test.default.TestLargeBasePage(shard1) : all daemons started
LSS test.default.TestLargeBasePage/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestLargeBasePage started
LSS test.default.TestLargeBasePage(shard1) : all daemons stopped
LSS test.default.TestLargeBasePage/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestLargeBasePage stopped
LSS test.default.TestLargeBasePage(shard1) : LSSCleaner stopped
LSS test.default.TestLargeBasePage/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeBasePage closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestLargeBasePage ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestLargeBasePage ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestLargeBasePage successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestLargeBasePage (70.27s)
=== RUN   TestLargeValue
----------- running TestLargeValue
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestLargeValue(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeValue) to LSS (test.default.TestLargeValue) and RecoveryLSS (test.default.TestLargeValue/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeValue
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to LSS test.default.TestLargeValue
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]
LSS test.default.TestLargeValue/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.547µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]. Built [0] plasmas, took [88.724µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestLargeValue(shard1) : all daemons started
LSS test.default.TestLargeValue/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue started
LSS test.default.TestLargeValue(shard1) : all daemons stopped
LSS test.default.TestLargeValue/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue stopped
LSS test.default.TestLargeValue(shard1) : LSSCleaner stopped
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeValue closed
LSS test.default.TestLargeValue(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeValue) to LSS (test.default.TestLargeValue) and RecoveryLSS (test.default.TestLargeValue/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeValue
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to LSS test.default.TestLargeValue
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]
LSS test.default.TestLargeValue/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [391.541µs]
LSS test.default.TestLargeValue(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [8364032] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [251.055917ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]. Built [1] plasmas, took [251.220355ms]
Plasma: doInit: data UsedSpace 8364032 recovery UsedSpace 15842
LSS test.default.TestLargeValue(shard1) : all daemons started
LSS test.default.TestLargeValue/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue started
LSS test.default.TestLargeValue(shard1) : all daemons stopped
LSS test.default.TestLargeValue/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue stopped
LSS test.default.TestLargeValue(shard1) : LSSCleaner stopped
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeValue closed
LSS test.default.TestLargeValue(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeValue) to LSS (test.default.TestLargeValue) and RecoveryLSS (test.default.TestLargeValue/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeValue
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to LSS test.default.TestLargeValue
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]
LSS test.default.TestLargeValue/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [28672]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [8368128] took [705.017µs]
LSS test.default.TestLargeValue(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [17965056] replayOffset [8368128]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [258.585106ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]. Built [1] plasmas, took [258.774477ms]
Plasma: doInit: data UsedSpace 17965056 recovery UsedSpace 32988
LSS test.default.TestLargeValue(shard1) : all daemons started
LSS test.default.TestLargeValue/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue started
fragAutoTuner: FragRatio at 10. MaxFragRatio 20, MaxBandwidth 163145. BandwidthUsage 82103. AvailDisk N/A. TotalUsed 0. BandwidthRatio 0.5032517086027767. UsedSpaceRatio 0. CleanerBandwidth 9223372036854775807. Duration 0.
LSS test.default.TestLargeValue(shard1) : logCleaner: starting... frag 46, data: 9568941, used: 17969152 log:(0 - 17969152)
LSS test.default.TestLargeValue(shard1) : logCleaner: completed... frag 0, data: 9505694, used: 9385434, relocated: 1, retries: 1, skipped: 96 log:(0 - 17969152) run:1 duration:744 ms
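The `frag` figure in the logCleaner lines above is reproducible as the percentage of used log space not occupied by live data, clamped at zero. A minimal sketch, assuming that definition (the clamp covers the "frag 0" case where, after relocation, the reported data size exceeds used space):

```python
# Log fragmentation as the LSS logCleaner lines appear to report it:
# the share of used log space that is not live data, as a truncated
# percentage, clamped at zero. Definition inferred from this log.
def lss_frag_percent(data_size, used_space):
    if used_space <= 0 or data_size >= used_space:
        return 0  # no reclaimable space (or nothing written yet)
    return int(100 * (used_space - data_size) / used_space)

print(lss_frag_percent(9568941, 17969152))  # "starting... frag 46" line
print(lss_frag_percent(9505694, 9385434))   # "completed... frag 0" line
```

The same formula reproduces the later `frag 12` and `frag 28` values in this run, so the cleaner trigger is evidently driven by this data/used ratio.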
LSS test.default.TestLargeValue(shard1) : all daemons stopped
LSS test.default.TestLargeValue/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue stopped
LSS test.default.TestLargeValue(shard1) : LSSCleaner stopped
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeValue closed
LSS test.default.TestLargeValue(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestLargeValue/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestLargeValue) to LSS (test.default.TestLargeValue) and RecoveryLSS (test.default.TestLargeValue/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestLargeValue
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestLargeValue to LSS test.default.TestLargeValue
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]
LSS test.default.TestLargeValue/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [45056]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [17969152] took [864.157µs]
LSS test.default.TestLargeValue(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [8364050] tailOffset [23126016] replayOffset [17969152]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [143.28345ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestLargeValue/recovery], Data log [test.default.TestLargeValue], Shared [false]. Built [1] plasmas, took [143.447843ms]
Plasma: doInit: data UsedSpace 14761966 recovery UsedSpace 50462
LSS test.default.TestLargeValue(shard1) : all daemons started
LSS test.default.TestLargeValue/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue started
Plasma: Adjusting SMO limits: Avg - Item Size 8218 Avg Page Size 2562646: Old MaxPageItems 400 MinPageItems 25 MaxDeltaChainLen 200 -> New MaxPageItems 15 MinPageItems 1 MaxDeltaChainLen 7
LSS test.default.TestLargeValue(shard1) : logCleaner: starting... frag 12, data: 78116727, used: 89087982 log:(8364050 - 97452032)
LSS test.default.TestLargeValue(shard1) : logCleaner: completed... frag 28, data: 305103312, used: 426905600, relocated: 93, retries: 3, skipped: 54 log:(8364050 - 518377472) run:1 duration:5844 ms
LSS test.default.TestLargeValue(shard1) : all daemons stopped
LSS test.default.TestLargeValue/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestLargeValue stopped
LSS test.default.TestLargeValue(shard1) : LSSCleaner stopped
LSS test.default.TestLargeValue/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestLargeValue closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestLargeValue ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestLargeValue ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestLargeValue successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestLargeValue (104.07s)
=== RUN   TestPlasmaTooLargeKey
----------- running TestPlasmaTooLargeKey
Start shard recovery from shardsDirectory 'shards': directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaTooLargeKey(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaTooLargeKey
LSS test.basic.TestPlasmaTooLargeKey/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaTooLargeKey/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaTooLargeKey) to LSS (test.basic.TestPlasmaTooLargeKey) and RecoveryLSS (test.basic.TestPlasmaTooLargeKey/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaTooLargeKey
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaTooLargeKey to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaTooLargeKey to LSS test.basic.TestPlasmaTooLargeKey
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaTooLargeKey/recovery], Data log [test.basic.TestPlasmaTooLargeKey], Shared [false]
LSS test.basic.TestPlasmaTooLargeKey/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.098µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaTooLargeKey/recovery], Data log [test.basic.TestPlasmaTooLargeKey], Shared [false]. Built [0] plasmas, took [107.226µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaTooLargeKey(shard1) : all daemons started
LSS test.basic.TestPlasmaTooLargeKey/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaTooLargeKey started
LSS test.basic.TestPlasmaTooLargeKey(shard1) : all daemons stopped
LSS test.basic.TestPlasmaTooLargeKey/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaTooLargeKey stopped
LSS test.basic.TestPlasmaTooLargeKey(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaTooLargeKey/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaTooLargeKey closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaTooLargeKey ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaTooLargeKey ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaTooLargeKey successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaTooLargeKey (3.27s)
=== RUN   TestEvictAfterMerge
----------- running TestEvictAfterMerge
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictAfterMerge(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictAfterMerge
LSS test.default.TestEvictAfterMerge/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictAfterMerge/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictAfterMerge) to LSS (test.default.TestEvictAfterMerge) and RecoveryLSS (test.default.TestEvictAfterMerge/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestEvictAfterMerge
Shard shards/shard1(1) : Add instance test.default.TestEvictAfterMerge to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictAfterMerge to LSS test.default.TestEvictAfterMerge
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictAfterMerge/recovery], Data log [test.default.TestEvictAfterMerge], Shared [false]
LSS test.default.TestEvictAfterMerge/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.882µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictAfterMerge/recovery], Data log [test.default.TestEvictAfterMerge], Shared [false]. Built [0] plasmas, took [84.163µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictAfterMerge(shard1) : all daemons started
LSS test.default.TestEvictAfterMerge/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestEvictAfterMerge started
TestEvictAfterMerge: evict dirty page after merge
TestEvictAfterMerge: evict read page after merge
LSS test.default.TestEvictAfterMerge(shard1) : all daemons stopped
LSS test.default.TestEvictAfterMerge/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestEvictAfterMerge stopped
LSS test.default.TestEvictAfterMerge(shard1) : LSSCleaner stopped
LSS test.default.TestEvictAfterMerge/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictAfterMerge closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestEvictAfterMerge ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestEvictAfterMerge ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictAfterMerge successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictAfterMerge (0.10s)
=== RUN   TestEvictDirty
----------- running TestEvictDirty
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictDirty(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictDirty
LSS test.default.TestEvictDirty/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictDirty/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictDirty) to LSS (test.default.TestEvictDirty) and RecoveryLSS (test.default.TestEvictDirty/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestEvictDirty
Shard shards/shard1(1) : Add instance test.default.TestEvictDirty to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictDirty to LSS test.default.TestEvictDirty
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictDirty/recovery], Data log [test.default.TestEvictDirty], Shared [false]
LSS test.default.TestEvictDirty/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.203µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictDirty/recovery], Data log [test.default.TestEvictDirty], Shared [false]. Built [0] plasmas, took [108.135µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictDirty(shard1) : all daemons started
LSS test.default.TestEvictDirty/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestEvictDirty started
TestEvictDirty:evictDirty=false. Evict persisted dirty page
TestEvictDirty:evictDirty=false. Evict persisted read page
TestEvictDirty:evictDirty=false. Do not evict dirty page before persist
TestEvictDirty:evictDirty=true. Persist dirty page during eviction
LSS test.default.TestEvictDirty(shard1) : all daemons stopped
LSS test.default.TestEvictDirty/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestEvictDirty stopped
LSS test.default.TestEvictDirty(shard1) : LSSCleaner stopped
LSS test.default.TestEvictDirty/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictDirty closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestEvictDirty ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestEvictDirty ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictDirty successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictDirty (0.15s)
=== RUN   TestEvictUnderQuota
----------- running TestEvictUnderQuota
TestEvictUnderQuota
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictUnderQuota(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictUnderQuota
LSS test.default.TestEvictUnderQuota/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictUnderQuota/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictUnderQuota) to LSS (test.default.TestEvictUnderQuota) and RecoveryLSS (test.default.TestEvictUnderQuota/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestEvictUnderQuota
Shard shards/shard1(1) : Add instance test.default.TestEvictUnderQuota to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictUnderQuota to LSS test.default.TestEvictUnderQuota
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictUnderQuota/recovery], Data log [test.default.TestEvictUnderQuota], Shared [false]
LSS test.default.TestEvictUnderQuota/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.398µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictUnderQuota/recovery], Data log [test.default.TestEvictUnderQuota], Shared [false]. Built [0] plasmas, took [79.48µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictUnderQuota(shard1) : all daemons started
LSS test.default.TestEvictUnderQuota/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestEvictUnderQuota started
Start Periodic Eviction. Quota 1070604 memInUse 535302
LSS test.default.TestEvictUnderQuota(shard1) : all daemons stopped
LSS test.default.TestEvictUnderQuota/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestEvictUnderQuota stopped
LSS test.default.TestEvictUnderQuota(shard1) : LSSCleaner stopped
LSS test.default.TestEvictUnderQuota/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictUnderQuota closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestEvictUnderQuota ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestEvictUnderQuota ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictUnderQuota successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictUnderQuota (60.10s)
=== RUN   TestEvictSetting
----------- running TestEvictSetting
TestEvictSetting
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictSetting(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictSetting
LSS test.default.TestEvictSetting/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictSetting/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictSetting) to LSS (test.default.TestEvictSetting) and RecoveryLSS (test.default.TestEvictSetting/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestEvictSetting
Shard shards/shard1(1) : Add instance test.default.TestEvictSetting to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictSetting to LSS test.default.TestEvictSetting
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictSetting/recovery], Data log [test.default.TestEvictSetting], Shared [false]
LSS test.default.TestEvictSetting/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [55.491µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictSetting/recovery], Data log [test.default.TestEvictSetting], Shared [false]. Built [0] plasmas, took [103.44µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictSetting(shard1) : all daemons started
LSS test.default.TestEvictSetting/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestEvictSetting started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestEvictSetting_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictSetting_2
LSS test.default.TestEvictSetting_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestEvictSetting_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestEvictSetting_2) to LSS (test.default.TestEvictSetting_2) and RecoveryLSS (test.default.TestEvictSetting_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestEvictSetting_2
Shard shards/shard1(1) : Add instance test.default.TestEvictSetting_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestEvictSetting_2 to LSS test.default.TestEvictSetting_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestEvictSetting_2/recovery], Data log [test.default.TestEvictSetting_2], Shared [false]
LSS test.default.TestEvictSetting_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [44.792µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestEvictSetting_2/recovery], Data log [test.default.TestEvictSetting_2], Shared [false]. Built [0] plasmas, took [79.407µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestEvictSetting_2(shard1) : all daemons started
LSS test.default.TestEvictSetting_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestEvictSetting_2 started
TestEvictSetting: evictPage (DGM)
TestEvictSetting: evictPage (non-DGM)
TestEvictSetting: memory estimate
LSS test.default.TestEvictSetting_2(shard1) : all daemons stopped
LSS test.default.TestEvictSetting_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestEvictSetting_2 stopped
LSS test.default.TestEvictSetting_2(shard1) : LSSCleaner stopped
LSS test.default.TestEvictSetting_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictSetting_2 closed
LSS test.default.TestEvictSetting(shard1) : all daemons stopped
LSS test.default.TestEvictSetting/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestEvictSetting stopped
LSS test.default.TestEvictSetting(shard1) : LSSCleaner stopped
LSS test.default.TestEvictSetting/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestEvictSetting closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestEvictSetting ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestEvictSetting ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictSetting successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestEvictSetting_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestEvictSetting_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestEvictSetting_2 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestEvictSetting (1.14s)
=== RUN   TestBasePageAfterCompaction
----------- running TestBasePageAfterCompaction
TestBasePageAfterCompaction
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestBasePageAfterCompaction(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestBasePageAfterCompaction
LSS test.default.TestBasePageAfterCompaction/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestBasePageAfterCompaction/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestBasePageAfterCompaction) to LSS (test.default.TestBasePageAfterCompaction) and RecoveryLSS (test.default.TestBasePageAfterCompaction/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestBasePageAfterCompaction
Shard shards/shard1(1) : Add instance test.default.TestBasePageAfterCompaction to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestBasePageAfterCompaction to LSS test.default.TestBasePageAfterCompaction
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestBasePageAfterCompaction/recovery], Data log [test.default.TestBasePageAfterCompaction], Shared [false]
LSS test.default.TestBasePageAfterCompaction/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.343µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestBasePageAfterCompaction/recovery], Data log [test.default.TestBasePageAfterCompaction], Shared [false]. Built [0] plasmas, took [83.995µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestBasePageAfterCompaction(shard1) : all daemons started
LSS test.default.TestBasePageAfterCompaction/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestBasePageAfterCompaction started
LSS test.default.TestBasePageAfterCompaction(shard1) : all daemons stopped
LSS test.default.TestBasePageAfterCompaction/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestBasePageAfterCompaction stopped
LSS test.default.TestBasePageAfterCompaction(shard1) : LSSCleaner stopped
LSS test.default.TestBasePageAfterCompaction/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestBasePageAfterCompaction closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestBasePageAfterCompaction ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestBasePageAfterCompaction ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestBasePageAfterCompaction successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestBasePageAfterCompaction (0.09s)
=== RUN   TestSwapout
----------- running TestSwapout
TestSwapout
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestSwapout(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestSwapout
LSS test.default.TestSwapout/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestSwapout/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestSwapout) to LSS (test.default.TestSwapout) and RecoveryLSS (test.default.TestSwapout/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestSwapout
Shard shards/shard1(1) : Add instance test.default.TestSwapout to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestSwapout to LSS test.default.TestSwapout
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestSwapout/recovery], Data log [test.default.TestSwapout], Shared [false]
LSS test.default.TestSwapout/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [79.29µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestSwapout/recovery], Data log [test.default.TestSwapout], Shared [false]. Built [0] plasmas, took [114.625µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestSwapout(shard1) : all daemons started
LSS test.default.TestSwapout/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestSwapout started
Plasma: 

"low:":         minItem,
"high:":        maxItem,
"chainLen:":    1,
"numItems:":    10,
"state:":       8004,
"flushed:":     true,
"evicted:":     false,
"compressed:":  false

 0 flush: op opRelocPageDelta NumRecords 10 NumSegments 1 bloomFilter: 
 1 base:
	    0: itm=[item key:(key-         0) val:((nil)) sn:2 insert: true]
	    1: itm=[item key:(key-         1) val:((nil)) sn:2 insert: true]
	    2: itm=[item key:(key-         2) val:((nil)) sn:2 insert: true]
	    3: itm=[item key:(key-         3) val:((nil)) sn:2 insert: true]
	    4: itm=[item key:(key-         4) val:((nil)) sn:2 insert: true]
	    5: itm=[item key:(key-         5) val:((nil)) sn:2 insert: true]
	    6: itm=[item key:(key-         6) val:((nil)) sn:2 insert: true]
	    7: itm=[item key:(key-         7) val:((nil)) sn:2 insert: true]
	    8: itm=[item key:(key-         8) val:((nil)) sn:2 insert: true]
	    9: itm=[item key:(key-         9) val:((nil)) sn:2 insert: true]

LSS test.default.TestSwapout(shard1) : all daemons stopped
LSS test.default.TestSwapout/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestSwapout stopped
LSS test.default.TestSwapout(shard1) : LSSCleaner stopped
LSS test.default.TestSwapout/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestSwapout closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestSwapout ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestSwapout ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestSwapout successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapout (0.04s)
=== RUN   TestSwapoutSplitBasePage
----------- running TestSwapoutSplitBasePage
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestSwapoutSplitBasePage(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestSwapoutSplitBasePage
LSS test.default.TestSwapoutSplitBasePage/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestSwapoutSplitBasePage/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestSwapoutSplitBasePage) to LSS (test.default.TestSwapoutSplitBasePage) and RecoveryLSS (test.default.TestSwapoutSplitBasePage/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestSwapoutSplitBasePage
Shard shards/shard1(1) : Add instance test.default.TestSwapoutSplitBasePage to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestSwapoutSplitBasePage to LSS test.default.TestSwapoutSplitBasePage
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestSwapoutSplitBasePage/recovery], Data log [test.default.TestSwapoutSplitBasePage], Shared [false]
LSS test.default.TestSwapoutSplitBasePage/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.771µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestSwapoutSplitBasePage/recovery], Data log [test.default.TestSwapoutSplitBasePage], Shared [false]. Built [0] plasmas, took [110.899µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestSwapoutSplitBasePage(shard1) : all daemons started
LSS test.default.TestSwapoutSplitBasePage/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestSwapoutSplitBasePage started
LSS test.default.TestSwapoutSplitBasePage(shard1) : all daemons stopped
LSS test.default.TestSwapoutSplitBasePage/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestSwapoutSplitBasePage stopped
LSS test.default.TestSwapoutSplitBasePage(shard1) : LSSCleaner stopped
LSS test.default.TestSwapoutSplitBasePage/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestSwapoutSplitBasePage closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestSwapoutSplitBasePage ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestSwapoutSplitBasePage ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestSwapoutSplitBasePage successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapoutSplitBasePage (0.03s)
=== RUN   TestCompactFullMarshal
----------- running TestCompactFullMarshal
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestCompactFullMarshal(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestCompactFullMarshal
LSS test.default.TestCompactFullMarshal/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestCompactFullMarshal/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestCompactFullMarshal) to LSS (test.default.TestCompactFullMarshal) and RecoveryLSS (test.default.TestCompactFullMarshal/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestCompactFullMarshal
Shard shards/shard1(1) : Add instance test.default.TestCompactFullMarshal to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestCompactFullMarshal to LSS test.default.TestCompactFullMarshal
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestCompactFullMarshal/recovery], Data log [test.default.TestCompactFullMarshal], Shared [false]
LSS test.default.TestCompactFullMarshal/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.148µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestCompactFullMarshal/recovery], Data log [test.default.TestCompactFullMarshal], Shared [false]. Built [0] plasmas, took [104.075µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestCompactFullMarshal(shard1) : all daemons started
LSS test.default.TestCompactFullMarshal/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestCompactFullMarshal started
Memory before compaction 815. Memory after compaction 111
LSS test.default.TestCompactFullMarshal(shard1) : all daemons stopped
LSS test.default.TestCompactFullMarshal/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestCompactFullMarshal stopped
LSS test.default.TestCompactFullMarshal(shard1) : LSSCleaner stopped
LSS test.default.TestCompactFullMarshal/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestCompactFullMarshal closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestCompactFullMarshal ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestCompactFullMarshal ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestCompactFullMarshal successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestCompactFullMarshal (0.06s)
=== RUN   TestPageStats
----------- running TestPageStats
Start shard recovery from shardsDirectory "shards": directory does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestPageStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageStats
LSS test.default.TestPageStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestPageStats) to LSS (test.default.TestPageStats) and RecoveryLSS (test.default.TestPageStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestPageStats
Shard shards/shard1(1) : Add instance test.default.TestPageStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestPageStats to LSS test.default.TestPageStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestPageStats/recovery], Data log [test.default.TestPageStats], Shared [false]
LSS test.default.TestPageStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.888µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestPageStats/recovery], Data log [test.default.TestPageStats], Shared [false]. Built [0] plasmas, took [115.262µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestPageStats(shard1) : all daemons started
LSS test.default.TestPageStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestPageStats started
sample count 249 num pages 2000
sample count 405 num pages 4000
sample count 627 num pages 6000
sample count 816 num pages 8000
sample count 1224 num pages 10000
sample count 1209 num pages 12000
sample count 1554 num pages 14000
sample count 1752 num pages 16000
sample count 1827 num pages 18000
sample count 2046 num pages 20000
LSS test.default.TestPageStats(shard1) : all daemons stopped
LSS test.default.TestPageStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestPageStats stopped
LSS test.default.TestPageStats(shard1) : LSSCleaner stopped
LSS test.default.TestPageStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestPageStats closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestPageStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestPageStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestPageStats successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageStats (1.95s)
=== RUN   TestPageCompress
----------- running TestPageCompress
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestPageCompress(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompress
LSS test.default.TestPageCompress/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompress/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestPageCompress) to LSS (test.default.TestPageCompress) and RecoveryLSS (test.default.TestPageCompress/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestPageCompress
Shard shards/shard1(1) : Add instance test.default.TestPageCompress to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestPageCompress to LSS test.default.TestPageCompress
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestPageCompress/recovery], Data log [test.default.TestPageCompress], Shared [false]
LSS test.default.TestPageCompress/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.376µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestPageCompress/recovery], Data log [test.default.TestPageCompress], Shared [false]. Built [0] plasmas, took [85.839µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestPageCompress(shard1) : all daemons started
LSS test.default.TestPageCompress/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestPageCompress started
Plasma page state:

  low:        minItem
  high:       maxItem
  chainLen:   12
  numItems:   0
  state:      8001
  flushed:    true
  evicted:    false
  compressed: true

 0 flush: op opFlushPageDelta NumRecords 10 NumSegments 2 bloomFilter: 
 1 insert/delete: op opInsertDelta itm=[item key:(key-         9) val:(val-         9) sn:1 insert: true]
 2 insert/delete: op opInsertDelta itm=[item key:(key-         8) val:(val-         8) sn:1 insert: true]
 3 insert/delete: op opInsertDelta itm=[item key:(key-         7) val:(val-         7) sn:1 insert: true]
 4 insert/delete: op opInsertDelta itm=[item key:(key-         6) val:(val-         6) sn:1 insert: true]
 5 insert/delete: op opInsertDelta itm=[item key:(key-         5) val:(val-         5) sn:1 insert: true]
 6 flush: op opRelocPageDelta NumRecords 5 NumSegments 1 bloomFilter: 
 7 compressDelta: op opCompressDelta compressed data len:80
 8 delta: op opMetaDelta 

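The listing above shows a page as a chain of delta records (insert deltas, flush deltas, a compress delta, a meta delta) layered over a base page, with chainLen counting the records in that chain. A minimal sketch of such a delta chain, under the assumption that each mutation prepends a node and readers walk newest-to-oldest; the type and constant names below are illustrative, not plasma's actual definitions:

```go
package main

import "fmt"

// deltaOp mirrors the op names printed in the dump above; the
// numeric values here are illustrative only.
type deltaOp int

const (
	opInsertDelta deltaOp = iota
	opFlushPageDelta
	opCompressDelta
	opMetaDelta
)

// pageDelta is a hypothetical delta-chain node: each mutation
// prepends a new node, so the head is the newest record.
type pageDelta struct {
	op   deltaOp
	key  string
	next *pageDelta
}

// chainLen walks the chain and counts the delta records, like the
// chainLen field printed in the page state above.
func chainLen(head *pageDelta) int {
	n := 0
	for d := head; d != nil; d = d.next {
		n++
	}
	return n
}

func main() {
	// Build a chain newest-first: two inserts over a compress delta
	// over a meta delta, mirroring entries 1-8 in the dump above.
	var head *pageDelta
	head = &pageDelta{op: opMetaDelta}
	head = &pageDelta{op: opCompressDelta, next: head}
	head = &pageDelta{op: opInsertDelta, key: "key-5", next: head}
	head = &pageDelta{op: opInsertDelta, key: "key-6", next: head}
	fmt.Println(chainLen(head)) // 4
}
```

Reads must traverse the whole chain, which is why long chains get flushed or compressed back into a compact form.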
LSS test.default.TestPageCompress(shard1) : all daemons stopped
LSS test.default.TestPageCompress/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestPageCompress stopped
LSS test.default.TestPageCompress(shard1) : LSSCleaner stopped
LSS test.default.TestPageCompress/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestPageCompress closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestPageCompress ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestPageCompress ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestPageCompress successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageCompress (0.04s)
=== RUN   TestPageCompressSwapin
----------- running TestPageCompressSwapin
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestPageCompressSwapin(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressSwapin
LSS test.default.TestPageCompressSwapin/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressSwapin/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestPageCompressSwapin) to LSS (test.default.TestPageCompressSwapin) and RecoveryLSS (test.default.TestPageCompressSwapin/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestPageCompressSwapin
Shard shards/shard1(1) : Add instance test.default.TestPageCompressSwapin to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestPageCompressSwapin to LSS test.default.TestPageCompressSwapin
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestPageCompressSwapin/recovery], Data log [test.default.TestPageCompressSwapin], Shared [false]
LSS test.default.TestPageCompressSwapin/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.435µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestPageCompressSwapin/recovery], Data log [test.default.TestPageCompressSwapin], Shared [false]. Built [0] plasmas, took [85.953µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestPageCompressSwapin(shard1) : all daemons started
LSS test.default.TestPageCompressSwapin/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestPageCompressSwapin started
Plasma page state:

  low:        minItem
  high:       maxItem
  chainLen:   17
  numItems:   0
  state:      8001
  flushed:    true
  evicted:    false
  compressed: true

 0 swapin: op opSwapinDelta 

	 0 compressDelta: op opCompressDelta compressed data len:80
	 1 delta: op opMetaDelta 
 1 insert/delete: op opInsertDelta itm=[item key:(key-        14) val:(val-        14) sn:1 insert: true]
 2 insert/delete: op opInsertDelta itm=[item key:(key-        13) val:(val-        13) sn:1 insert: true]
 3 insert/delete: op opInsertDelta itm=[item key:(key-        12) val:(val-        12) sn:1 insert: true]
 4 insert/delete: op opInsertDelta itm=[item key:(key-        11) val:(val-        11) sn:1 insert: true]
 5 insert/delete: op opInsertDelta itm=[item key:(key-        10) val:(val-        10) sn:1 insert: true]
 6 flush: op opFlushPageDelta NumRecords 10 NumSegments 2 bloomFilter: &{896 3}
 7 insert/delete: op opInsertDelta itm=[item key:(key-         9) val:(val-         9) sn:1 insert: true]
 8 insert/delete: op opInsertDelta itm=[item key:(key-         8) val:(val-         8) sn:1 insert: true]
 9 insert/delete: op opInsertDelta itm=[item key:(key-         7) val:(val-         7) sn:1 insert: true]
10 insert/delete: op opInsertDelta itm=[item key:(key-         6) val:(val-         6) sn:1 insert: true]
11 insert/delete: op opInsertDelta itm=[item key:(key-         5) val:(val-         5) sn:1 insert: true]
12 swapout: op opSwapoutDelta numRecords: 5, bloomFilter:&{896 3}

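The swapout delta above records a bloom filter printed as &{896 3}. Assuming (and this is an assumption, since the struct's field names are not shown in the log) that those two values are the bit count and the number of hash functions, a minimal bloom filter sketch with those parameters looks like this; it is not plasma's implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloomFilter is a hypothetical sketch: numBits bits probed by
// numHash hash functions, one plausible reading of &{896 3} above.
type bloomFilter struct {
	bits    []byte
	numBits uint32
	numHash uint32
}

func newBloomFilter(numBits, numHash uint32) *bloomFilter {
	return &bloomFilter{
		bits:    make([]byte, (numBits+7)/8),
		numBits: numBits,
		numHash: numHash,
	}
}

// positions derives numHash bit positions via double hashing over
// one 64-bit FNV-1a digest.
func (b *bloomFilter) positions(key string) []uint32 {
	h := fnv.New64a()
	h.Write([]byte(key))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)
	pos := make([]uint32, b.numHash)
	for i := uint32(0); i < b.numHash; i++ {
		pos[i] = (h1 + i*h2) % b.numBits
	}
	return pos
}

// Add sets the filter bits for key.
func (b *bloomFilter) Add(key string) {
	for _, p := range b.positions(key) {
		b.bits[p/8] |= 1 << (p % 8)
	}
}

// MayContain returns false only if key was definitely never added;
// true may be a false positive.
func (b *bloomFilter) MayContain(key string) bool {
	for _, p := range b.positions(key) {
		if b.bits[p/8]&(1<<(p%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	bf := newBloomFilter(896, 3)
	bf.Add("key-5")
	fmt.Println(bf.MayContain("key-5")) // true
}
```

Keeping a filter with the swapped-out records lets lookups skip a disk read when the key is definitely not on the evicted page.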
LSS test.default.TestPageCompressSwapin(shard1) : all daemons stopped
LSS test.default.TestPageCompressSwapin/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressSwapin stopped
LSS test.default.TestPageCompressSwapin(shard1) : LSSCleaner stopped
LSS test.default.TestPageCompressSwapin/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressSwapin closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestPageCompressSwapin ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestPageCompressSwapin ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestPageCompressSwapin successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageCompressSwapin (0.03s)
=== RUN   TestPageCompressStats
----------- running TestPageCompressStats
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestPageCompressStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressStats
LSS test.default.TestPageCompressStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestPageCompressStats) to LSS (test.default.TestPageCompressStats) and RecoveryLSS (test.default.TestPageCompressStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestPageCompressStats
Shard shards/shard1(1) : Add instance test.default.TestPageCompressStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestPageCompressStats to LSS test.default.TestPageCompressStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestPageCompressStats/recovery], Data log [test.default.TestPageCompressStats], Shared [false]
LSS test.default.TestPageCompressStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.197µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestPageCompressStats/recovery], Data log [test.default.TestPageCompressStats], Shared [false]. Built [0] plasmas, took [99.573µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestPageCompressStats(shard1) : all daemons started
LSS test.default.TestPageCompressStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestPageCompressStats started
LSS test.default.TestPageCompressStats(shard1) : all daemons stopped
LSS test.default.TestPageCompressStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressStats stopped
LSS test.default.TestPageCompressStats(shard1) : LSSCleaner stopped
LSS test.default.TestPageCompressStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressStats closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestPageCompressStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestPageCompressStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestPageCompressStats successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageCompressStats (0.58s)
=== RUN   TestPageCompressState
----------- running TestPageCompressState
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestPageCompressState(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressState
LSS test.default.TestPageCompressState/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestPageCompressState/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestPageCompressState) to LSS (test.default.TestPageCompressState) and RecoveryLSS (test.default.TestPageCompressState/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestPageCompressState
Shard shards/shard1(1) : Add instance test.default.TestPageCompressState to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestPageCompressState to LSS test.default.TestPageCompressState
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestPageCompressState/recovery], Data log [test.default.TestPageCompressState], Shared [false]
LSS test.default.TestPageCompressState/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [82.235µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestPageCompressState/recovery], Data log [test.default.TestPageCompressState], Shared [false]. Built [0] plasmas, took [118.425µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestPageCompressState(shard1) : all daemons started
LSS test.default.TestPageCompressState/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestPageCompressState started
LSS test.default.TestPageCompressState(shard1) : all daemons stopped
LSS test.default.TestPageCompressState/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressState stopped
LSS test.default.TestPageCompressState(shard1) : LSSCleaner stopped
LSS test.default.TestPageCompressState/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestPageCompressState closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestPageCompressState ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestPageCompressState ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestPageCompressState successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPageCompressState (0.03s)
=== RUN   TestWrittenDataSz
----------- running TestWrittenDataSz
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenDataSz) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenDataSz
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSz to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSz to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.774µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [104.559µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [114.981µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz started
Init: WrittenDataSz 8236
Loading: DataSz 723624 WrittenDataSz 10398868 usedSpace 10604544 diskSize 10604544 numCtx 18
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: recovering shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenDataSz) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenDataSz
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSz to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSz to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [2805760]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [10252288] took [49.876956ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [10604544] replayOffset [10252288]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [61.207433ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [1] plasmas, took [61.38745ms]
Plasma: doInit: data UsedSpace 10398868 recovery UsedSpace 2534420
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz started
Recovery: WrittenDataSz 10407104
Recovery: DataSz 723624 WrittenDataSz 10407104 usedSpace 10608640 diskSize 10608640 numCtx 3
before cleaning: log head offset 0 tail offset 10608640
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 93, data: 723624, used: 10608640 log:(0 - 10608640)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 0, data: 670606, used: 675810, relocated: 333, retries: 0, skipped: 3451 log:(0 - 11280384) run:1 duration:206 ms
after cleaning: log head offset 10604574 tail offset 11280384
Cleaning: DataSz 670606 WrittenDataSz 677976 usedSpace 675810 diskSize 11280384 numCtx 4
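The logCleaner lines above report frag as the share of used log space not occupied by live data: (10608640 - 723624) / 10608640 rounds down to 93%, matching the "frag 93" logged before cleaning, and the post-cleaning values give 0%. A small sketch of that calculation (the formula is inferred from the logged numbers, not taken from plasma's source):

```go
package main

import "fmt"

// fragPercent computes log fragmentation as the percentage of used
// log space that is garbage (used minus live data), which reproduces
// the frag values printed by the logCleaner lines above.
func fragPercent(dataSz, usedSpace int64) int64 {
	if usedSpace == 0 {
		return 0
	}
	return (usedSpace - dataSz) * 100 / usedSpace
}

func main() {
	fmt.Println(fragPercent(723624, 10608640)) // 93, as logged before cleaning
	fmt.Println(fragPercent(670606, 675810))   // 0, as logged after cleaning
}
```

The same formula also reproduces the recoveryCleaner's numbers later in this log, e.g. (2801664 - 138068) / 2801664 rounds down to 95%.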
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSz closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestWrittenDataSz (3.05s)
=== RUN   TestWrittenDataSzAfterRecoveryCleaning
----------- running TestWrittenDataSzAfterRecoveryCleaning
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenDataSzAfterRecoveryCleaning) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.136µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [89.439µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [96.9µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning started
Init: WrittenDataSz 8236
Loading: DataSz 729560 WrittenDataSz 10351258 usedSpace 10555392 diskSize 10555392 numCtx 18
before cleaning: log head offset 0 tail offset 2797568
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 93, data: 729560, used: 10555392 log:(0 - 10555392)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 89, data: 729560, used: 7038177, relocated: 0, retries: 0, skipped: 1048 log:(0 - 10559488) run:1 duration:42 ms
after cleaning: log head offset 0 tail offset 2801664
before cleaning: recovery log head offset 0 tail offset 2801664
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 95, data: 138068, used: 2801664 log:(0 - 2801664)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 93, data: 138068, used: 1992828, relocated: 0, retries: 0, skipped: 4434 log:(0 - 2801664) run:1 duration:15 ms
after cleaning: recovery log head offset 808836 tail offset 2801664
Cleaning: DataSz 729560 WrittenDataSz 6931870 usedSpace 7038177 diskSize 10559488 numCtx 18
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: recovering shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenDataSzAfterRecoveryCleaning) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [808836] tailOffset [2801664]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [10268672] took [37.392652ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [3521311] tailOffset [10559488] replayOffset [10268672]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [47.599789ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [1] plasmas, took [47.790709ms]
Plasma: doInit: data UsedSpace 6968932 recovery UsedSpace 1869958
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning started
Recovery: WrittenDataSz 6977168
Recovery: DataSz 729560 WrittenDataSz 6977168 usedSpace 7042273 diskSize 10563584 numCtx 3
Recovery: numRecords before recovery 100000 numRecords after recovery 100000
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenDataSzAfterRecoveryCleaning closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestWrittenDataSzAfterRecoveryCleaning (3.01s)
=== RUN   TestWrittenHdrSz
----------- running TestWrittenHdrSz
Start shard recovery from shardsDirectory [shards]: directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenHdrSz) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenHdrSz
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenHdrSz to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenHdrSz to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.72µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [99.693µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [110.038µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz started
Init: WrittenDataSz 8236
Loading: WrittenHdrSz 2556422 usedSpace 2801664 diskSize 2801664 numCtx 18
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestWrittenHdrSz) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestWrittenHdrSz
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenHdrSz to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestWrittenHdrSz to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [2801664]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [10280960] took [50.725775ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [10567680] replayOffset [10280960]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [61.628846ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [1] plasmas, took [61.831351ms]
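The second recovery pass above illustrates the two-phase design: headers are rebuilt from the recovery log first, and the data log is then replayed only from the recorded replayOffset up to its tail. With the offsets logged above, almost the entire data log is skipped; a quick check of that fraction (plain arithmetic on the logged values, no plasma APIs):

```go
package main

import "fmt"

func main() {
	// Offsets from the recovery messages above.
	var replayOffset, tailOffset float64 = 10280960, 10567680

	// Fraction of the data log that header replay already covered,
	// i.e. the portion doRecovery does not have to re-read.
	skipped := 100 * replayOffset / tailOffset
	fmt.Printf("%.1f%% of the data log skipped\n", skipped) // 97.3%
}
```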
Plasma: doInit: data UsedSpace 10365152 recovery UsedSpace 2556422
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz started
Recovery: WrittenHdrSz 2564658
Recovery: WrittenHdrSz 2564658 usedSpace 2805760 diskSize 2805760 numCtx 3
before cleaning: log head offset 0 tail offset 2805760
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 95, data: 137200, used: 2805760 log:(0 - 2805760)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 95, data: 137200, used: 2801630, relocated: 0, retries: 0, skipped: 0 log:(0 - 2805760) run:1 duration:0 ms
after cleaning: log head offset 4130 tail offset 2809856
Cleaning: WrittenHdrSz 2556422 usedSpace 2805726 diskSize 2809856 numCtx 3
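The cleaner's bookkeeping above is internally consistent: after cleaning, the reported usedSpace is just the distance between the log's head and tail offsets, while diskSize still reflects the tail. A minimal sketch of that arithmetic (variable names are illustrative, not plasma's):

```go
package main

import "fmt"

func main() {
	// "after cleaning: log head offset 4130 tail offset 2809856"
	var head, tail int64 = 4130, 2809856

	used := tail - head // bytes still live in the log
	fmt.Println(used)   // 2805726, matching the reported usedSpace
}
```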
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz stopped
Shard shards/shard1(1) : instance test.mvcc.TestWrittenHdrSz closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestWrittenHdrSz (2.86s)
=== RUN   TestPersistConfigUpgrade
----------- running TestPersistConfigUpgrade
--- PASS: TestPersistConfigUpgrade (0.00s)
=== RUN   TestLSSSegmentSize
----------- running TestLSSSegmentSize
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestLSSSegmentSize_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestLSSSegmentSize_1
Shard shards/shard1(1) : Add instance test.mvcc.TestLSSSegmentSize_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestLSSSegmentSize_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.769µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [89.885µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [96.849µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.mvcc.TestLSSSegmentSize_2) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.mvcc.TestLSSSegmentSize_2
Shard shards/shard2(2) : Add instance test.mvcc.TestLSSSegmentSize_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.mvcc.TestLSSSegmentSize_2 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.188µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [120.932µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [128.047µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard2/data(shard2) : all daemons started
LSS shards/shard2/data/recovery(shard2) : all daemons started
Shard shards/shard2(2) : Swapper started
Shard shards/shard2(2) : Instance added to swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestLSSSegmentSize_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestLSSSegmentSize_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestLSSSegmentSize_3
Shard shards/shard1(1) : Add instance test.mvcc.TestLSSSegmentSize_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestLSSSegmentSize_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_3 started
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_3 stopped
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_3 closed
Shard shards/shard2(2) : Instance removed from swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestLSSSegmentSize_2 stopped
Shard shards/shard2(2) : instance test.mvcc.TestLSSSegmentSize_2 closed
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_1 stopped
Shard shards/shard1(1) : instance test.mvcc.TestLSSSegmentSize_1 closed
Shard shards/shard1(1) : Swapper stopped
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : Swapper stopped
LSS shards/shard2/data(shard2) : all daemons stopped
LSS shards/shard2/data/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
--- PASS: TestLSSSegmentSize (0.28s)
=== RUN   TestRecoveryCleanerFragRatio
----------- running TestRecoveryCleanerFragRatio
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestRecoveryCleanerFragRatio(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerFragRatio
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerFragRatio/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestRecoveryCleanerFragRatio) to LSS (test.basic.TestRecoveryCleanerFragRatio) and RecoveryLSS (test.basic.TestRecoveryCleanerFragRatio/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestRecoveryCleanerFragRatio
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerFragRatio to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerFragRatio to LSS test.basic.TestRecoveryCleanerFragRatio
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestRecoveryCleanerFragRatio/recovery], Data log [test.basic.TestRecoveryCleanerFragRatio], Shared [false]
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.386µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestRecoveryCleanerFragRatio/recovery], Data log [test.basic.TestRecoveryCleanerFragRatio], Shared [false]. Built [0] plasmas, took [107.222µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestRecoveryCleanerFragRatio(shard1) : all daemons started
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerFragRatio started
LSSInfo: frag:48, ds:8129142, used:15728640
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             9949,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          16637592,
"memory_size_index":    265936,
"allocated":            113819304,
"freed":                97181712,
"reclaimed":            97142640,
"reclaim_pending":      39072,
"reclaim_list_size":    39072,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000000,
"num_rec_allocs":       5004271,
"num_rec_frees":        4004271,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       8000000,
"lss_data_size":        8129408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16314984,
"est_recovery_size":    537234,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           21099048,
"page_cnt":             9829,
"page_itemcnt":         2637381,
"avg_item_size":        8,
"avg_page_size":        2146,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16255290,
"page_bytes_compressed":16255290,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    597992,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3276,
    "lss_data_size":        8388136,
    "lss_used_space":       16891904,
    "lss_disk_size":        16891904,
    "lss_fragmentation":    50,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        16891904,
    "bytes_incoming":       8000000,
    "write_amp":            0.00,
    "write_amp_avg":        2.04,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      16351232,
    "num_sctxs":            28,
    "num_free_sctxs":       17,
    "num_swapperWriter":    32
  }
}
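The lss_fragmentation figure in the stats block above (50) follows directly from lss_data_size and lss_used_space: it is the percentage of used log space that no longer holds live data. This is an inferred formula that reproduces the values in every dump in this run (50, 76, 79, ...), not a quote of plasma's actual implementation:

```go
package main

import "fmt"

// lssFrag computes fragmentation as the percentage of used log space
// that no longer holds live data (inferred from the stats dumps above;
// the real implementation may differ).
func lssFrag(dataSize, usedSpace int64) int64 {
	if usedSpace == 0 {
		return 0
	}
	return (usedSpace - dataSize) * 100 / usedSpace
}

func main() {
	// Values from the lss_stats block above.
	fmt.Println(lssFrag(8388136, 16891904)) // 50, as reported
}
```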

Running iteration.. 0
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : recoveryCleaner: starting... frag 58, data: 226852, used: 544768 log:(0 - 544768)
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : recoveryCleaner: completed... frag 10, data: 226436, used: 254364, relocated: 3303, retries: 0, skipped: 2075 log:(0 - 544768) run:1 duration:14 ms
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             19899,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              2000000,
"deletes":              1000000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    29,
"memory_size":          16669144,
"memory_size_index":    265936,
"allocated":            242564456,
"freed":                225895312,
"reclaimed":            225778496,
"reclaim_pending":      116816,
"reclaim_list_size":    116816,
"reclaim_list_count":   9,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000080,
"num_rec_allocs":       8999296,
"num_rec_frees":        7999216,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       24000000,
"lss_data_size":        4801670,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        21135904,
"est_recovery_size":    288642,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           45624864,
"page_cnt":             23396,
"page_itemcnt":         5703108,
"avg_item_size":        8,
"avg_page_size":        1950,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":21058588,
"page_bytes_compressed":21058588,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    668488,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4954282,
    "lss_used_space":       21436504,
    "lss_disk_size":        22216704,
    "lss_fragmentation":    76,
    "lss_num_reads":        14122,
    "lss_read_bs":          717636,
    "lss_blk_read_bs":      811008,
    "bytes_written":        22216704,
    "bytes_incoming":       24000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.89,
    "lss_gc_num_reads":     14122,
    "lss_gc_reads_bs":      717636,
    "lss_blk_gc_reads_bs":  811008,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      21241856,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 1
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             29849,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              3000000,
"deletes":              2000000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    30,
"memory_size":          17035992,
"memory_size_index":    265936,
"allocated":            371486248,
"freed":                354450256,
"reclaimed":            354268560,
"reclaim_pending":      181696,
"reclaim_list_size":    181696,
"reclaim_list_count":   14,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010074,
"num_rec_allocs":       12994321,
"num_rec_frees":        11984247,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       40000000,
"lss_data_size":        6045368,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        30557560,
"est_recovery_size":    326234,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           5000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29068088,
"page_cnt":             18122,
"page_itemcnt":         3633511,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":30446062,
"page_bytes_compressed":30446062,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    915648,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6235420,
    "lss_used_space":       30906438,
    "lss_disk_size":        32309248,
    "lss_fragmentation":    79,
    "lss_num_reads":        24802,
    "lss_read_bs":          1294316,
    "lss_blk_read_bs":      1470464,
    "bytes_written":        32309248,
    "bytes_incoming":       40000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.77,
    "lss_gc_num_reads":     24802,
    "lss_gc_reads_bs":      1294316,
    "lss_blk_gc_reads_bs":  1470464,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      30674944,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 2
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             39799,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              4000000,
"deletes":              3000000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    68,
"memory_size":          17247096,
"memory_size_index":    265936,
"allocated":            500411176,
"freed":                483164080,
"reclaimed":            482917552,
"reclaim_pending":      246528,
"reclaim_list_size":    246528,
"reclaim_list_count":   19,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010138,
"num_rec_allocs":       16989346,
"num_rec_frees":        15979208,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       56000000,
"lss_data_size":        7326226,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        40103386,
"est_recovery_size":    347926,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           7000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           32667864,
"page_cnt":             20366,
"page_itemcnt":         4083483,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":39957412,
"page_bytes_compressed":39957412,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1165944,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7556266,
    "lss_used_space":       40571920,
    "lss_disk_size":        42598400,
    "lss_fragmentation":    81,
    "lss_num_reads":        35481,
    "lss_read_bs":          1870942,
    "lss_blk_read_bs":      2129920,
    "bytes_written":        42598400,
    "bytes_incoming":       56000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.72,
    "lss_gc_num_reads":     35481,
    "lss_gc_reads_bs":      1870942,
    "lss_blk_gc_reads_bs":  2129920,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      40304640,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 3
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             49750,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              5000000,
"deletes":              4000000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    68,
"memory_size":          17421656,
"memory_size_index":    265936,
"allocated":            629154048,
"freed":                611732392,
"reclaimed":            611408192,
"reclaim_pending":      324200,
"reclaim_list_size":    324200,
"reclaim_list_count":   25,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019924,
"num_rec_allocs":       20984597,
"num_rec_frees":        19964673,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       72000000,
"lss_data_size":        3838480,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        44873354,
"est_recovery_size":    288534,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           9000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36077968,
"page_cnt":             22492,
"page_itemcnt":         4509746,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":44710316,
"page_bytes_compressed":44710316,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1222520,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        3957708,
    "lss_used_space":       45221360,
    "lss_disk_size":        47697920,
    "lss_fragmentation":    91,
    "lss_num_reads":        43189,
    "lss_read_bs":          2287150,
    "lss_blk_read_bs":      2617344,
    "bytes_written":        47697920,
    "bytes_incoming":       72000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     43189,
    "lss_gc_reads_bs":      2287150,
    "lss_blk_gc_reads_bs":  2617344,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      45080576,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 4
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             59700,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              6000000,
"deletes":              5000000,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    127,
"memory_size":          17625496,
"memory_size_index":    265936,
"allocated":            758071680,
"freed":                740446184,
"reclaimed":            740057320,
"reclaim_pending":      388864,
"reclaim_list_size":    388864,
"reclaim_list_count":   30,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019990,
"num_rec_allocs":       24979622,
"num_rec_frees":        23959632,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       88000000,
"lss_data_size":        4940930,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        54341370,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           11000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           32005616,
"page_cnt":             19953,
"page_itemcnt":         4000702,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":54144540,
"page_bytes_compressed":54144540,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1465520,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5094218,
    "lss_used_space":       54807944,
    "lss_disk_size":        57880576,
    "lss_fragmentation":    90,
    "lss_num_reads":        53394,
    "lss_read_bs":          2838188,
    "lss_blk_read_bs":      3252224,
    "bytes_written":        57880576,
    "bytes_incoming":       88000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     53394,
    "lss_gc_reads_bs":      2838188,
    "lss_blk_gc_reads_bs":  3252224,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      54628352,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 5
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             69650,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              7000000,
"deletes":              6000000,
"compact_conflicts":    2,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     4,
"swapin_conflicts":     0,
"persist_conflicts":    199,
"memory_size":          17992952,
"memory_size_index":    265936,
"allocated":            886994304,
"freed":                869001352,
"reclaimed":            868548376,
"reclaim_pending":      452976,
"reclaim_list_size":    452976,
"reclaim_list_count":   35,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1029974,
"num_rec_allocs":       28974647,
"num_rec_frees":        27944673,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       104000000,
"lss_data_size":        6235620,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        64013436,
"est_recovery_size":    345548,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           13000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           28813048,
"page_cnt":             17963,
"page_itemcnt":         3601631,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":63782346,
"page_bytes_compressed":63782346,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1713432,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6427024,
    "lss_used_space":       64655000,
    "lss_disk_size":        68296704,
    "lss_fragmentation":    90,
    "lss_num_reads":        63153,
    "lss_read_bs":          3365142,
    "lss_blk_read_bs":      3854336,
    "bytes_written":        68296704,
    "bytes_incoming":       104000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     63153,
    "lss_gc_reads_bs":      3365142,
    "lss_blk_gc_reads_bs":  3854336,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      64409600,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 6
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             79600,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              8000000,
"deletes":              7000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     4,
"swapin_conflicts":     0,
"persist_conflicts":    256,
"memory_size":          18204408,
"memory_size_index":    265936,
"allocated":            1015919616,
"freed":                997715208,
"reclaimed":            997196984,
"reclaim_pending":      518224,
"reclaim_list_size":    518224,
"reclaim_list_count":   40,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1030040,
"num_rec_allocs":       32969672,
"num_rec_frees":        31939632,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       120000000,
"lss_data_size":        7557512,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        73802858,
"est_recovery_size":    374606,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           15000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29273400,
"page_cnt":             18250,
"page_itemcnt":         3659175,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":73537256,
"page_bytes_compressed":73537256,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1964032,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7789216,
    "lss_used_space":       74549322,
    "lss_disk_size":        78790656,
    "lss_fragmentation":    89,
    "lss_num_reads":        73481,
    "lss_read_bs":          3922814,
    "lss_blk_read_bs":      4489216,
    "bytes_written":        78790656,
    "bytes_incoming":       120000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.62,
    "lss_gc_num_reads":     73481,
    "lss_gc_reads_bs":      3922814,
    "lss_blk_gc_reads_bs":  4489216,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      74268672,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 7
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             89551,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              9000000,
"deletes":              8000000,
"compact_conflicts":    5,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    260,
"memory_size":          18383320,
"memory_size_index":    265936,
"allocated":            1144666952,
"freed":                1126283632,
"reclaimed":            1125688136,
"reclaim_pending":      595496,
"reclaim_list_size":    595496,
"reclaim_list_count":   46,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039822,
"num_rec_allocs":       36964922,
"num_rec_frees":        35925100,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       136000000,
"lss_data_size":        4103316,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        78807768,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           17000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31227472,
"page_cnt":             19468,
"page_itemcnt":         3903434,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":78524682,
"page_bytes_compressed":78524682,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2025024,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4227796,
    "lss_used_space":       79439940,
    "lss_disk_size":        84131840,
    "lss_fragmentation":    94,
    "lss_num_reads":        81192,
    "lss_read_bs":          4339184,
    "lss_blk_read_bs":      4976640,
    "bytes_written":        84131840,
    "bytes_incoming":       136000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     81192,
    "lss_gc_reads_bs":      4339184,
    "lss_blk_gc_reads_bs":  4976640,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      79290368,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 8
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             99501,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              10000000,
"deletes":              9000000,
"compact_conflicts":    6,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    278,
"memory_size":          18599864,
"memory_size_index":    265936,
"allocated":            1273597256,
"freed":                1254997392,
"reclaimed":            1254349840,
"reclaim_pending":      647552,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039892,
"num_rec_allocs":       40959947,
"num_rec_frees":        39920055,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       152000000,
"lss_data_size":        5571970,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        88842762,
"est_recovery_size":    315446,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           19000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31347968,
"page_cnt":             19543,
"page_itemcnt":         3918496,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":88524696,
"page_bytes_compressed":88524696,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2280656,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5740806,
    "lss_used_space":       89612246,
    "lss_disk_size":        94859264,
    "lss_fragmentation":    93,
    "lss_num_reads":        90686,
    "lss_read_bs":          4851828,
    "lss_blk_read_bs":      5566464,
    "bytes_written":        94859264,
    "bytes_incoming":       152000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     90686,
    "lss_gc_reads_bs":      4851828,
    "lss_blk_gc_reads_bs":  5566464,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      89391104,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 9
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             109451,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              11000000,
"deletes":              10000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    307,
"memory_size":          18974744,
"memory_size_index":    265936,
"allocated":            1402527304,
"freed":                1383552560,
"reclaimed":            1383487152,
"reclaim_pending":      65408,
"reclaim_list_size":    65408,
"reclaim_list_count":   5,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049872,
"num_rec_allocs":       44954972,
"num_rec_frees":        43905100,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       168000000,
"lss_data_size":        7104206,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        98950072,
"est_recovery_size":    332266,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           21000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29464272,
"page_cnt":             18369,
"page_itemcnt":         3683034,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":98597050,
"page_bytes_compressed":98597050,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2536056,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7317242,
    "lss_used_space":       99812142,
    "lss_disk_size":        105676800,
    "lss_fragmentation":    92,
    "lss_num_reads":        101231,
    "lss_read_bs":          5421218,
    "lss_blk_read_bs":      6213632,
    "bytes_written":        105676800,
    "bytes_incoming":       168000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     101231,
    "lss_gc_reads_bs":      5421218,
    "lss_blk_gc_reads_bs":  6213632,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      99553280,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 10
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             119401,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              12000000,
"deletes":              11000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    319,
"memory_size":          19190488,
"memory_size_index":    265936,
"allocated":            1531456712,
"freed":                1512266224,
"reclaimed":            1512136112,
"reclaim_pending":      130112,
"reclaim_list_size":    130112,
"reclaim_list_count":   10,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049946,
"num_rec_allocs":       48949997,
"num_rec_frees":        47900051,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       184000000,
"lss_data_size":        8569068,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        109079402,
"est_recovery_size":    455864,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           23000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31450432,
"page_cnt":             19607,
"page_itemcnt":         3931304,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":108691484,
"page_bytes_compressed":108691484,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2790832,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8825732,
    "lss_used_space":       110169906,
    "lss_disk_size":        116510720,
    "lss_fragmentation":    91,
    "lss_num_reads":        109386,
    "lss_read_bs":          5861540,
    "lss_blk_read_bs":      6729728,
    "bytes_written":        116510720,
    "bytes_incoming":       184000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.60,
    "lss_gc_num_reads":     109386,
    "lss_gc_reads_bs":      5861540,
    "lss_blk_gc_reads_bs":  6729728,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      109707264,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 11
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             129352,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              13000000,
"deletes":              12000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    331,
"memory_size":          19373272,
"memory_size_index":    265936,
"allocated":            1660207968,
"freed":                1640834696,
"reclaimed":            1640626784,
"reclaim_pending":      207912,
"reclaim_list_size":    207912,
"reclaim_list_count":   16,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059724,
"num_rec_allocs":       52945248,
"num_rec_frees":        51885524,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       200000000,
"lss_data_size":        5147994,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        114314972,
"est_recovery_size":    293232,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           25000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31482512,
"page_cnt":             19627,
"page_itemcnt":         3935314,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":113909204,
"page_bytes_compressed":113909204,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2855768,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5300606,
    "lss_used_space":       115174112,
    "lss_disk_size":        122118144,
    "lss_fragmentation":    95,
    "lss_num_reads":        119682,
    "lss_read_bs":          6417492,
    "lss_blk_read_bs":      7364608,
    "bytes_written":        122118144,
    "bytes_incoming":       200000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     119682,
    "lss_gc_reads_bs":      6417492,
    "lss_blk_gc_reads_bs":  7364608,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      114974720,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 12
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             139302,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              14000000,
"deletes":              13000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     10,
"swapin_conflicts":     0,
"persist_conflicts":    405,
"memory_size":          19591448,
"memory_size_index":    265936,
"allocated":            1789139872,
"freed":                1769548424,
"reclaimed":            1769275992,
"reclaim_pending":      272432,
"reclaim_list_size":    272432,
"reclaim_list_count":   21,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059796,
"num_rec_allocs":       56940273,
"num_rec_frees":        55880477,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       216000000,
"lss_data_size":        6695802,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        124628232,
"est_recovery_size":    312778,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           27000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31400904,
"page_cnt":             19576,
"page_itemcnt":         3925113,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":124187334,
"page_bytes_compressed":124187334,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3113000,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6894070,
    "lss_used_space":       125646534,
    "lss_disk_size":        133214208,
    "lss_fragmentation":    94,
    "lss_num_reads":        130401,
    "lss_read_bs":          6996278,
    "lss_blk_read_bs":      8024064,
    "bytes_written":        133214208,
    "bytes_incoming":       216000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     130401,
    "lss_gc_reads_bs":      6996278,
    "lss_blk_gc_reads_bs":  8024064,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      125407232,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 13
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             149252,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              15000000,
"deletes":              14000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    462,
"memory_size":          19964632,
"memory_size_index":    265936,
"allocated":            1918068320,
"freed":                1898103688,
"reclaimed":            1897766984,
"reclaim_pending":      336704,
"reclaim_list_size":    336704,
"reclaim_list_count":   26,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069774,
"num_rec_allocs":       60935298,
"num_rec_frees":        59865524,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       232000000,
"lss_data_size":        8226070,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        134934316,
"est_recovery_size":    355640,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           29000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31445816,
"page_cnt":             19604,
"page_itemcnt":         3930727,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":134458612,
"page_bytes_compressed":134458612,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3366736,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8467186,
    "lss_used_space":       136069240,
    "lss_disk_size":        144261120,
    "lss_fragmentation":    93,
    "lss_num_reads":        141090,
    "lss_read_bs":          7573444,
    "lss_blk_read_bs":      8683520,
    "bytes_written":        144261120,
    "bytes_incoming":       232000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     141090,
    "lss_gc_reads_bs":      7573444,
    "lss_blk_gc_reads_bs":  8683520,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      135794688,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 14
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             159202,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              16000000,
"deletes":              15000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    462,
"memory_size":          19997176,
"memory_size_index":    265936,
"allocated":            2046814496,
"freed":                2026817320,
"reclaimed":            2026415528,
"reclaim_pending":      401792,
"reclaim_list_size":    401792,
"reclaim_list_count":   31,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069848,
"num_rec_allocs":       64930323,
"num_rec_frees":        63860475,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       248000000,
"lss_data_size":        4642582,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        140196346,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           31000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29568536,
"page_cnt":             18434,
"page_itemcnt":         3696067,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":139702924,
"page_bytes_compressed":139702924,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3438304,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4778502,
    "lss_used_space":       141239910,
    "lss_disk_size":        149856256,
    "lss_fragmentation":    96,
    "lss_num_reads":        148357,
    "lss_read_bs":          7965846,
    "lss_blk_read_bs":      9142272,
    "bytes_written":        149856256,
    "bytes_incoming":       248000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     148357,
    "lss_gc_reads_bs":      7965846,
    "lss_blk_gc_reads_bs":  9142272,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      141062144,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 15
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             169153,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              17000000,
"deletes":              16000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    462,
"memory_size":          20363544,
"memory_size_index":    265936,
"allocated":            2175749400,
"freed":                2155385856,
"reclaimed":            2154906632,
"reclaim_pending":      479224,
"reclaim_list_size":    479224,
"reclaim_list_count":   37,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079622,
"num_rec_allocs":       68925572,
"num_rec_frees":        67845950,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       264000000,
"lss_data_size":        6246740,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        150675364,
"est_recovery_size":    309936,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           33000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31426360,
"page_cnt":             19592,
"page_itemcnt":         3928295,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":150146872,
"page_bytes_compressed":150146872,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3686896,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6427796,
    "lss_used_space":       151771822,
    "lss_disk_size":        160985088,
    "lss_fragmentation":    95,
    "lss_num_reads":        158590,
    "lss_read_bs":          8518396,
    "lss_blk_read_bs":      9777152,
    "bytes_written":        160985088,
    "bytes_incoming":       264000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     158590,
    "lss_gc_reads_bs":      8518396,
    "lss_blk_gc_reads_bs":  9777152,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      151556096,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 16
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             179103,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              18000000,
"deletes":              17000000,
"compact_conflicts":    11,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     8,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    491,
"memory_size":          20579832,
"memory_size_index":    265936,
"allocated":            2304679256,
"freed":                2284099424,
"reclaimed":            2283555360,
"reclaim_pending":      544064,
"reclaim_list_size":    544064,
"reclaim_list_count":   42,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079704,
"num_rec_allocs":       72920597,
"num_rec_frees":        71840893,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       280000000,
"lss_data_size":        7773316,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        161164410,
"est_recovery_size":    355872,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           35000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33224640,
"page_cnt":             20713,
"page_itemcnt":         4153080,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":160600980,
"page_bytes_compressed":160600980,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3942080,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7998364,
    "lss_used_space":       162362370,
    "lss_disk_size":        172163072,
    "lss_fragmentation":    95,
    "lss_num_reads":        168664,
    "lss_read_bs":          9062352,
    "lss_blk_read_bs":      10399744,
    "bytes_written":        172163072,
    "bytes_incoming":       280000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     168664,
    "lss_gc_reads_bs":      9062352,
    "lss_blk_gc_reads_bs":  10399744,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      162107392,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 17
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             189053,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              19000000,
"deletes":              18000000,
"compact_conflicts":    12,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     9,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    500,
"memory_size":          20769496,
"memory_size_index":    265936,
"allocated":            2433424280,
"freed":                2412654784,
"reclaimed":            2412046736,
"reclaim_pending":      608048,
"reclaim_list_size":    608048,
"reclaim_list_count":   47,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089672,
"num_rec_allocs":       76915622,
"num_rec_frees":        75825950,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       296000000,
"lss_data_size":        4148626,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        166497248,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           37000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34984240,
"page_cnt":             21810,
"page_itemcnt":         4373030,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":165916208,
"page_bytes_compressed":165916208,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4012456,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4267542,
    "lss_used_space":       167625392,
    "lss_disk_size":        177836032,
    "lss_fragmentation":    97,
    "lss_num_reads":        175694,
    "lss_read_bs":          9441956,
    "lss_blk_read_bs":      10838016,
    "bytes_written":        177836032,
    "bytes_incoming":       296000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     175694,
    "lss_gc_reads_bs":      9441956,
    "lss_blk_gc_reads_bs":  10838016,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      167469056,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 18
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             199003,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              20000000,
"deletes":              19000000,
"compact_conflicts":    14,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    560,
"memory_size":          20985944,
"memory_size_index":    265936,
"allocated":            2562354328,
"freed":                2541368384,
"reclaimed":            2541264144,
"reclaim_pending":      104240,
"reclaim_list_size":    13304,
"reclaim_list_count":   1,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089754,
"num_rec_allocs":       80910647,
"num_rec_frees":        79820893,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       312000000,
"lss_data_size":        5695168,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        177108520,
"est_recovery_size":    289114,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           39000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34974416,
"page_cnt":             21804,
"page_itemcnt":         4371802,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":176492524,
"page_bytes_compressed":176492524,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4267792,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5858232,
    "lss_used_space":       178370490,
    "lss_disk_size":        189222912,
    "lss_fragmentation":    96,
    "lss_num_reads":        186726,
    "lss_read_bs":          10037636,
    "lss_blk_read_bs":      11517952,
    "bytes_written":        189222912,
    "bytes_incoming":       312000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     186726,
    "lss_gc_reads_bs":      10037636,
    "lss_blk_gc_reads_bs":  11517952,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      178176000,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 19
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             208954,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              21000000,
"deletes":              20000000,
"compact_conflicts":    14,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     12,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    569,
"memory_size":          21353688,
"memory_size_index":    265936,
"allocated":            2691290736,
"freed":                2669937048,
"reclaimed":            2669845408,
"reclaim_pending":      91640,
"reclaim_list_size":    91640,
"reclaim_list_count":   7,
"reclaim_threshold":    50,
"allocated_index":      265936,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1099522,
"num_rec_allocs":       84905898,
"num_rec_frees":        83806376,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       328000000,
"lss_data_size":        7385556,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        187871116,
"est_recovery_size":    348390,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           41000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33181144,
"page_cnt":             20686,
"page_itemcnt":         4147643,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":187219912,
"page_bytes_compressed":187219912,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4517856,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7594952,
    "lss_used_space":       189229606,
    "lss_disk_size":        200687616,
    "lss_fragmentation":    95,
    "lss_num_reads":        197105,
    "lss_read_bs":          10598062,
    "lss_blk_read_bs":      12161024,
    "bytes_written":        200687616,
    "bytes_incoming":       328000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     197105,
    "lss_gc_reads_bs":      10598062,
    "lss_blk_gc_reads_bs":  12161024,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      188968960,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
LSSInfo: frag:10, ds:209396, used:235184

frag from cleaner state = 10
LSS test.basic.TestRecoveryCleanerFragRatio(shard1) : all daemons stopped
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerFragRatio stopped
LSS test.basic.TestRecoveryCleanerFragRatio(shard1) : LSSCleaner stopped
LSS test.basic.TestRecoveryCleanerFragRatio/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerFragRatio closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestRecoveryCleanerFragRatio ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestRecoveryCleanerFragRatio ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerFragRatio successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerFragRatio (182.03s)
=== RUN   TestRecoveryCleanerRelocation
----------- running TestRecoveryCleanerRelocation
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestRecoveryCleanerRelocation(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerRelocation
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerRelocation/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestRecoveryCleanerRelocation) to LSS (test.basic.TestRecoveryCleanerRelocation) and RecoveryLSS (test.basic.TestRecoveryCleanerRelocation/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestRecoveryCleanerRelocation
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerRelocation to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerRelocation to LSS test.basic.TestRecoveryCleanerRelocation
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestRecoveryCleanerRelocation/recovery], Data log [test.basic.TestRecoveryCleanerRelocation], Shared [false]
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [75.704µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestRecoveryCleanerRelocation/recovery], Data log [test.basic.TestRecoveryCleanerRelocation], Shared [false]. Built [0] plasmas, took [114.003µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestRecoveryCleanerRelocation(shard1) : all daemons started
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerRelocation started
LSSInfo: frag:48, ds:8129142, used:15728640
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             9949,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          16637592,
"memory_size_index":    265984,
"allocated":            113819304,
"freed":                97181712,
"reclaimed":            97142640,
"reclaim_pending":      39072,
"reclaim_list_size":    39072,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000000,
"num_rec_allocs":       5004271,
"num_rec_frees":        4004271,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       8000000,
"lss_data_size":        8129408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16314984,
"est_recovery_size":    537234,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           20168464,
"page_cnt":             9395,
"page_itemcnt":         2521058,
"avg_item_size":        8,
"avg_page_size":        2146,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16255290,
"page_bytes_compressed":16255290,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    597992,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3276,
    "lss_data_size":        8388136,
    "lss_used_space":       16891904,
    "lss_disk_size":        16891904,
    "lss_fragmentation":    50,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        16891904,
    "bytes_incoming":       8000000,
    "write_amp":            0.00,
    "write_amp_avg":        2.04,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      16351232,
    "num_sctxs":            28,
    "num_free_sctxs":       17,
    "num_swapperWriter":    32
  }
}

Running iteration.. 0
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : recoveryCleaner: starting... frag 58, data: 227840, used: 544768 log:(0 - 544768)
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : recoveryCleaner: completed... frag 10, data: 227320, used: 255336, relocated: 3276, retries: 0, skipped: 2084 log:(0 - 544768) run:1 duration:23 ms
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             19899,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              2000000,
"deletes":              1000000,
"compact_conflicts":    1,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     1,
"delete_conflicts":     1,
"swapin_conflicts":     0,
"persist_conflicts":    33,
"memory_size":          16669592,
"memory_size_index":    265984,
"allocated":            242564968,
"freed":                225895376,
"reclaimed":            225778560,
"reclaim_pending":      116816,
"reclaim_list_size":    116816,
"reclaim_list_count":   9,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000078,
"num_rec_allocs":       8999296,
"num_rec_frees":        7999218,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       24000000,
"lss_data_size":        4814758,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        21149570,
"est_recovery_size":    288642,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           47347560,
"page_cnt":             24470,
"page_itemcnt":         5918445,
"avg_item_size":        8,
"avg_page_size":        1934,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":21072206,
"page_bytes_compressed":21072206,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    668960,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4967786,
    "lss_used_space":       21431930,
    "lss_disk_size":        22220800,
    "lss_fragmentation":    76,
    "lss_num_reads":        14248,
    "lss_read_bs":          724760,
    "lss_blk_read_bs":      819200,
    "bytes_written":        22220800,
    "bytes_incoming":       24000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.89,
    "lss_gc_num_reads":     14248,
    "lss_gc_reads_bs":      724760,
    "lss_blk_gc_reads_bs":  819200,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      21241856,
    "num_sctxs":            29,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
Running iteration.. 1
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             29849,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              3000000,
"deletes":              2000000,
"compact_conflicts":    2,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    66,
"memory_size":          17050008,
"memory_size_index":    265984,
"allocated":            371500328,
"freed":                354450320,
"reclaimed":            354268624,
"reclaim_pending":      181696,
"reclaim_list_size":    181696,
"reclaim_list_count":   14,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010074,
"num_rec_allocs":       12994321,
"num_rec_frees":        11984247,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       40000000,
"lss_data_size":        6409246,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        30921810,
"est_recovery_size":    332266,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           5000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           29866880,
"page_cnt":             18620,
"page_itemcnt":         3733360,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":30808992,
"page_bytes_compressed":30808992,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    929648,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6610738,
    "lss_used_space":       31298080,
    "lss_disk_size":        32735232,
    "lss_fragmentation":    78,
    "lss_num_reads":        25321,
    "lss_read_bs":          1322662,
    "lss_blk_read_bs":      1503232,
    "bytes_written":        32735232,
    "bytes_incoming":       40000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.78,
    "lss_gc_num_reads":     25321,
    "lss_gc_reads_bs":      1322662,
    "lss_blk_gc_reads_bs":  1503232,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      31068160,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 2
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             39799,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              4000000,
"deletes":              3000000,
"compact_conflicts":    3,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    156,
"memory_size":          17268216,
"memory_size_index":    265984,
"allocated":            500432488,
"freed":                483164272,
"reclaimed":            482917744,
"reclaim_pending":      246528,
"reclaim_list_size":    246528,
"reclaim_list_count":   19,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010132,
"num_rec_allocs":       16989346,
"num_rec_frees":        15979214,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       56000000,
"lss_data_size":        7877620,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        40657926,
"est_recovery_size":    458822,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           7000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31670176,
"page_cnt":             19744,
"page_itemcnt":         3958772,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":40509954,
"page_bytes_compressed":40509954,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1187136,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8124976,
    "lss_used_space":       41400400,
    "lss_disk_size":        43278336,
    "lss_fragmentation":    80,
    "lss_num_reads":        32886,
    "lss_read_bs":          1731140,
    "lss_blk_read_bs":      1978368,
    "bytes_written":        43278336,
    "bytes_incoming":       56000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.73,
    "lss_gc_num_reads":     32886,
    "lss_gc_reads_bs":      1731140,
    "lss_blk_gc_reads_bs":  1978368,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      40935424,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 3
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             49750,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              5000000,
"deletes":              4000000,
"compact_conflicts":    3,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    156,
"memory_size":          17450072,
"memory_size_index":    265984,
"allocated":            629182592,
"freed":                611732520,
"reclaimed":            611408320,
"reclaim_pending":      324200,
"reclaim_list_size":    324200,
"reclaim_list_count":   25,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019924,
"num_rec_allocs":       20984597,
"num_rec_frees":        19964673,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       72000000,
"lss_data_size":        4583408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        45622526,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           9000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33431368,
"page_cnt":             20842,
"page_itemcnt":         4178921,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":45456812,
"page_bytes_compressed":45456812,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1250920,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4725776,
    "lss_used_space":       46073148,
    "lss_disk_size":        48578560,
    "lss_fragmentation":    89,
    "lss_num_reads":        43672,
    "lss_read_bs":          2313552,
    "lss_blk_read_bs":      2641920,
    "bytes_written":        48578560,
    "bytes_incoming":       72000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.64,
    "lss_gc_num_reads":     43672,
    "lss_gc_reads_bs":      2313552,
    "lss_blk_gc_reads_bs":  2641920,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      45903872,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 4
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             59700,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              6000000,
"deletes":              5000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    249,
"memory_size":          17667416,
"memory_size_index":    265984,
"allocated":            758113856,
"freed":                740446440,
"reclaimed":            740057576,
"reclaim_pending":      388864,
"reclaim_list_size":    388864,
"reclaim_list_count":   30,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019984,
"num_rec_allocs":       24979622,
"num_rec_frees":        23959638,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       88000000,
"lss_data_size":        6043720,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        55448748,
"est_recovery_size":    345142,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           11000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33405904,
"page_cnt":             20826,
"page_itemcnt":         4175738,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":55247964,
"page_bytes_compressed":55247964,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1507512,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6231224,
    "lss_used_space":       56076402,
    "lss_disk_size":        59170816,
    "lss_fragmentation":    88,
    "lss_num_reads":        53757,
    "lss_read_bs":          2858110,
    "lss_blk_read_bs":      3264512,
    "bytes_written":        59170816,
    "bytes_incoming":       88000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     53757,
    "lss_gc_reads_bs":      2858110,
    "lss_blk_gc_reads_bs":  3264512,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      55836672,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 5
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             69650,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              7000000,
"deletes":              6000000,
"compact_conflicts":    6,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     5,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    282,
"memory_size":          18044760,
"memory_size_index":    265984,
"allocated":            887046272,
"freed":                869001512,
"reclaimed":            868548536,
"reclaim_pending":      452976,
"reclaim_list_size":    452976,
"reclaim_list_count":   35,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1029974,
"num_rec_allocs":       28974647,
"num_rec_frees":        27944673,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       104000000,
"lss_data_size":        7609448,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        65392716,
"est_recovery_size":    363354,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           13000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33463656,
"page_cnt":             20862,
"page_itemcnt":         4182957,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":65156754,
"page_bytes_compressed":65156754,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1765216,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7843024,
    "lss_used_space":       66120964,
    "lss_disk_size":        69832704,
    "lss_fragmentation":    88,
    "lss_num_reads":        64323,
    "lss_read_bs":          3428634,
    "lss_blk_read_bs":      3919872,
    "bytes_written":        69832704,
    "bytes_incoming":       104000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     64323,
    "lss_gc_reads_bs":      3428634,
    "lss_blk_gc_reads_bs":  3919872,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      65843200,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 6
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             79600,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              8000000,
"deletes":              7000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    296,
"memory_size":          18079672,
"memory_size_index":    265984,
"allocated":            1015795072,
"freed":                997715400,
"reclaimed":            997197176,
"reclaim_pending":      518224,
"reclaim_list_size":    518224,
"reclaim_list_count":   40,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1030034,
"num_rec_allocs":       32969672,
"num_rec_frees":        31939638,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       120000000,
"lss_data_size":        4257006,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        70489286,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           15000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31689424,
"page_cnt":             19756,
"page_itemcnt":         3961178,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":70235360,
"page_bytes_compressed":70235360,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1839368,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4387518,
    "lss_used_space":       71114112,
    "lss_disk_size":        75280384,
    "lss_fragmentation":    93,
    "lss_num_reads":        72131,
    "lss_read_bs":          3850250,
    "lss_blk_read_bs":      4407296,
    "bytes_written":        75280384,
    "bytes_incoming":       120000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     72131,
    "lss_gc_reads_bs":      3850250,
    "lss_blk_gc_reads_bs":  4407296,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      70967296,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 7
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             89551,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              9000000,
"deletes":              8000000,
"compact_conflicts":    7,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    296,
"memory_size":          18450040,
"memory_size_index":    265984,
"allocated":            1144733752,
"freed":                1126283712,
"reclaimed":            1125688216,
"reclaim_pending":      595496,
"reclaim_list_size":    595496,
"reclaim_list_count":   46,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039822,
"num_rec_allocs":       36964921,
"num_rec_frees":        35925099,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       136000000,
"lss_data_size":        5891020,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        80601618,
"est_recovery_size":    316258,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           17000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           28086424,
"page_cnt":             17510,
"page_itemcnt":         3510803,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":80312268,
"page_bytes_compressed":80312268,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2091736,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6069736,
    "lss_used_space":       81307776,
    "lss_disk_size":        86036480,
    "lss_fragmentation":    92,
    "lss_num_reads":        81776,
    "lss_read_bs":          4371048,
    "lss_blk_read_bs":      5001216,
    "bytes_written":        86036480,
    "bytes_incoming":       136000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.60,
    "lss_gc_num_reads":     81776,
    "lss_gc_reads_bs":      4371048,
    "lss_blk_gc_reads_bs":  5001216,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      81088512,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 8
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             99501,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              10000000,
"deletes":              9000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    354,
"memory_size":          18671224,
"memory_size_index":    265984,
"allocated":            1273668728,
"freed":                1254997504,
"reclaimed":            1254414640,
"reclaim_pending":      582864,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039890,
"num_rec_allocs":       40959946,
"num_rec_frees":        39920056,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       152000000,
"lss_data_size":        7487008,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        90763162,
"est_recovery_size":    342126,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           19000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31694424,
"page_cnt":             19759,
"page_itemcnt":         3961803,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":90438394,
"page_bytes_compressed":90438394,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2352040,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7713876,
    "lss_used_space":       91583140,
    "lss_disk_size":        96935936,
    "lss_fragmentation":    91,
    "lss_num_reads":        92456,
    "lss_read_bs":          4947736,
    "lss_blk_read_bs":      5656576,
    "bytes_written":        96935936,
    "bytes_incoming":       152000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.60,
    "lss_gc_num_reads":     92456,
    "lss_gc_reads_bs":      4947736,
    "lss_blk_gc_reads_bs":  5656576,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      91324416,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 9
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             109451,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              11000000,
"deletes":              10000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    361,
"memory_size":          18862296,
"memory_size_index":    265984,
"allocated":            1402414904,
"freed":                1383552608,
"reclaimed":            1383487200,
"reclaim_pending":      65408,
"reclaim_list_size":    65408,
"reclaim_list_count":   5,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049872,
"num_rec_allocs":       44954971,
"num_rec_frees":        43905099,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       168000000,
"lss_data_size":        4057560,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        95889652,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           21000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35345136,
"page_cnt":             22035,
"page_itemcnt":         4418142,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":95547166,
"page_bytes_compressed":95547166,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2423608,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4179232,
    "lss_used_space":       96633134,
    "lss_disk_size":        102428672,
    "lss_fragmentation":    95,
    "lss_num_reads":        100021,
    "lss_read_bs":          5356222,
    "lss_blk_read_bs":      6131712,
    "bytes_written":        102428672,
    "bytes_incoming":       168000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     100021,
    "lss_gc_reads_bs":      5356222,
    "lss_blk_gc_reads_bs":  6131712,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      96473088,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 10
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             119401,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              12000000,
"deletes":              11000000,
"compact_conflicts":    9,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     9,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    403,
"memory_size":          19080792,
"memory_size_index":    265984,
"allocated":            1531347128,
"freed":                1512266336,
"reclaimed":            1512136224,
"reclaim_pending":      130112,
"reclaim_list_size":    130112,
"reclaim_list_count":   10,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049944,
"num_rec_allocs":       48949996,
"num_rec_frees":        47900052,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       184000000,
"lss_data_size":        5595246,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        106092600,
"est_recovery_size":    300482,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           23000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33142648,
"page_cnt":             20662,
"page_itemcnt":         4142831,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":105714954,
"page_bytes_compressed":105714954,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2681160,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5762834,
    "lss_used_space":       106978044,
    "lss_disk_size":        113401856,
    "lss_fragmentation":    94,
    "lss_num_reads":        110811,
    "lss_read_bs":          5938850,
    "lss_blk_read_bs":      6795264,
    "bytes_written":        113401856,
    "bytes_incoming":       184000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     110811,
    "lss_gc_reads_bs":      5938850,
    "lss_blk_gc_reads_bs":  6795264,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      106774528,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 11
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             129352,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              13000000,
"deletes":              12000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     10,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    439,
"memory_size":          19491032,
"memory_size_index":    265984,
"allocated":            1660325904,
"freed":                1640834872,
"reclaimed":            1640626960,
"reclaim_pending":      207912,
"reclaim_list_size":    207912,
"reclaim_list_count":   16,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059722,
"num_rec_allocs":       52945247,
"num_rec_frees":        51885525,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       200000000,
"lss_data_size":        8377118,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        117556804,
"est_recovery_size":    371590,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           25000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33157080,
"page_cnt":             20671,
"page_itemcnt":         4144635,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":117139978,
"page_bytes_compressed":117139978,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2973552,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8625462,
    "lss_used_space":       118618838,
    "lss_disk_size":        125685760,
    "lss_fragmentation":    92,
    "lss_num_reads":        121863,
    "lss_read_bs":          6535626,
    "lss_blk_read_bs":      7475200,
    "bytes_written":        125685760,
    "bytes_incoming":       200000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     121863,
    "lss_gc_reads_bs":      6535626,
    "lss_blk_gc_reads_bs":  7475200,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      118337536,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 12
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             139302,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              14000000,
"deletes":              13000000,
"compact_conflicts":    11,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    479,
"memory_size":          19523128,
"memory_size_index":    265984,
"allocated":            1789071696,
"freed":                1769548568,
"reclaimed":            1769276136,
"reclaim_pending":      272432,
"reclaim_list_size":    272432,
"reclaim_list_count":   21,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059794,
"num_rec_allocs":       56940272,
"num_rec_frees":        55880478,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       216000000,
"lss_data_size":        4823868,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        122749716,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           27000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31558104,
"page_cnt":             19674,
"page_itemcnt":         3944763,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":122315208,
"page_bytes_compressed":122315208,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3044696,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4966704,
    "lss_used_space":       123766708,
    "lss_disk_size":        131256320,
    "lss_fragmentation":    95,
    "lss_num_reads":        129106,
    "lss_read_bs":          6926724,
    "lss_blk_read_bs":      7929856,
    "bytes_written":        131256320,
    "bytes_incoming":       216000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     129106,
    "lss_gc_reads_bs":      6926724,
    "lss_blk_gc_reads_bs":  7929856,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      123584512,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 13
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             149252,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              15000000,
"deletes":              14000000,
"compact_conflicts":    11,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    486,
"memory_size":          19901752,
"memory_size_index":    265984,
"allocated":            1918005520,
"freed":                1898103768,
"reclaimed":            1897767064,
"reclaim_pending":      336704,
"reclaim_list_size":    336704,
"reclaim_list_count":   26,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069772,
"num_rec_allocs":       60935297,
"num_rec_frees":        59865525,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       232000000,
"lss_data_size":        6483982,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        133183430,
"est_recovery_size":    312836,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           29000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           27926032,
"page_cnt":             17410,
"page_itemcnt":         3490754,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":132713612,
"page_bytes_compressed":132713612,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3303888,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6674034,
    "lss_used_space":       134258182,
    "lss_disk_size":        142340096,
    "lss_fragmentation":    95,
    "lss_num_reads":        139267,
    "lss_read_bs":          7475386,
    "lss_blk_read_bs":      8560640,
    "bytes_written":        142340096,
    "bytes_incoming":       232000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     139267,
    "lss_gc_reads_bs":      7475386,
    "lss_blk_gc_reads_bs":  8560640,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      134037504,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 14
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             159202,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              16000000,
"deletes":              15000000,
"compact_conflicts":    12,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     12,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    600,
"memory_size":          20122776,
"memory_size_index":    265984,
"allocated":            2046940176,
"freed":                2026817400,
"reclaimed":            2026415608,
"reclaim_pending":      401792,
"reclaim_list_size":    401792,
"reclaim_list_count":   31,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069850,
"num_rec_allocs":       64930322,
"num_rec_frees":        63860472,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       248000000,
"lss_data_size":        8127184,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        143692138,
"est_recovery_size":    458184,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           31000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           27950104,
"page_cnt":             17425,
"page_itemcnt":         3493763,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":143186932,
"page_bytes_compressed":143186932,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3563872,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8365128,
    "lss_used_space":       145161440,
    "lss_disk_size":        153661440,
    "lss_fragmentation":    94,
    "lss_num_reads":        146416,
    "lss_read_bs":          7861408,
    "lss_blk_read_bs":      9007104,
    "bytes_written":        153661440,
    "bytes_incoming":       248000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     146416,
    "lss_gc_reads_bs":      7861408,
    "lss_blk_gc_reads_bs":  9007104,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      144691200,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 15
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             169153,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              17000000,
"deletes":              16000000,
"compact_conflicts":    13,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     12,
"delete_conflicts":     6,
"swapin_conflicts":     0,
"persist_conflicts":    635,
"memory_size":          20303672,
"memory_size_index":    265984,
"allocated":            2175689688,
"freed":                2155386016,
"reclaimed":            2154906792,
"reclaim_pending":      479224,
"reclaim_list_size":    479224,
"reclaim_list_count":   37,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079620,
"num_rec_allocs":       68925572,
"num_rec_frees":        67845952,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       264000000,
"lss_data_size":        4571120,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        148993392,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           33000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33495528,
"page_cnt":             20882,
"page_itemcnt":         4186941,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":148470498,
"page_bytes_compressed":148470498,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3627040,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4703608,
    "lss_used_space":       150205356,
    "lss_disk_size":        159358976,
    "lss_fragmentation":    96,
    "lss_num_reads":        157592,
    "lss_read_bs":          8464880,
    "lss_blk_read_bs":      9695232,
    "bytes_written":        159358976,
    "bytes_incoming":       264000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     157592,
    "lss_gc_reads_bs":      8464880,
    "lss_blk_gc_reads_bs":  9695232,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      150044672,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 16
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             179103,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              18000000,
"deletes":              17000000,
"compact_conflicts":    14,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     13,
"delete_conflicts":     6,
"swapin_conflicts":     0,
"persist_conflicts":    717,
"memory_size":          20522584,
"memory_size_index":    265984,
"allocated":            2304622184,
"freed":                2284099600,
"reclaimed":            2283555536,
"reclaim_pending":      544064,
"reclaim_list_size":    544064,
"reclaim_list_count":   42,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079702,
"num_rec_allocs":       72920598,
"num_rec_frees":        71840896,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       280000000,
"lss_data_size":        6169432,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        159556590,
"est_recovery_size":    294508,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           35000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35157472,
"page_cnt":             21918,
"page_itemcnt":         4394684,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":158998512,
"page_bytes_compressed":158998512,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3884848,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6348044,
    "lss_used_space":       160904414,
    "lss_disk_size":        170692608,
    "lss_fragmentation":    96,
    "lss_num_reads":        168426,
    "lss_read_bs":          9049876,
    "lss_blk_read_bs":      10366976,
    "bytes_written":        170692608,
    "bytes_incoming":       280000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     168426,
    "lss_gc_reads_bs":      9049876,
    "lss_blk_gc_reads_bs":  10366976,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      160702464,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 17
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             189053,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              19000000,
"deletes":              18000000,
"compact_conflicts":    15,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     13,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    768,
"memory_size":          20900760,
"memory_size_index":    265984,
"allocated":            2433555752,
"freed":                2412654992,
"reclaimed":            2412046944,
"reclaim_pending":      608048,
"reclaim_list_size":    608048,
"reclaim_list_count":   47,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089670,
"num_rec_allocs":       76915623,
"num_rec_frees":        75825953,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       296000000,
"lss_data_size":        7870962,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        170233326,
"est_recovery_size":    360802,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           37000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31402512,
"page_cnt":             19577,
"page_itemcnt":         3925314,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":169639962,
"page_bytes_compressed":169639962,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4143744,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8096582,
    "lss_used_space":       171745204,
    "lss_disk_size":        182120448,
    "lss_fragmentation":    95,
    "lss_num_reads":        178532,
    "lss_read_bs":          9595568,
    "lss_blk_read_bs":      10989568,
    "bytes_written":        182120448,
    "bytes_incoming":       296000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     178532,
    "lss_gc_reads_bs":      9595568,
    "lss_blk_gc_reads_bs":  10989568,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      171470848,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 18
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             199003,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              20000000,
"deletes":              19000000,
"compact_conflicts":    15,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     13,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    773,
"memory_size":          20932280,
"memory_size_index":    265984,
"allocated":            2562300712,
"freed":                2541368432,
"reclaimed":            2541328880,
"reclaim_pending":      39552,
"reclaim_list_size":    13304,
"reclaim_list_count":   1,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089756,
"num_rec_allocs":       80910648,
"num_rec_frees":        79820892,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       312000000,
"lss_data_size":        4171488,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        175577778,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           39000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33388064,
"page_cnt":             20815,
"page_itemcnt":         4173508,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":174966810,
"page_bytes_compressed":174966810,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4214096,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4290924,
    "lss_used_space":       176966134,
    "lss_disk_size":        187801600,
    "lss_fragmentation":    97,
    "lss_num_reads":        186400,
    "lss_read_bs":          10020416,
    "lss_blk_read_bs":      11485184,
    "bytes_written":        187801600,
    "bytes_incoming":       312000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     186400,
    "lss_gc_reads_bs":      10020416,
    "lss_blk_gc_reads_bs":  11485184,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      176824320,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 19
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             208954,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              21000000,
"deletes":              20000000,
"compact_conflicts":    17,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     14,
"delete_conflicts":     9,
"swapin_conflicts":     0,
"persist_conflicts":    860,
"memory_size":          21299832,
"memory_size_index":    265984,
"allocated":            2691237056,
"freed":                2669937224,
"reclaimed":            2669845584,
"reclaim_pending":      91640,
"reclaim_list_size":    91640,
"reclaim_list_count":   7,
"reclaim_threshold":    50,
"allocated_index":      265984,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1099520,
"num_rec_allocs":       84905899,
"num_rec_frees":        83806379,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       328000000,
"lss_data_size":        5844972,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        186328732,
"est_recovery_size":    304020,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           41000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33407312,
"page_cnt":             20827,
"page_itemcnt":         4175914,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":185682562,
"page_bytes_compressed":185682562,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4464016,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6010688,
    "lss_used_space":       187952652,
    "lss_disk_size":        199368704,
    "lss_fragmentation":    96,
    "lss_num_reads":        196316,
    "lss_read_bs":          10555848,
    "lss_blk_read_bs":      12099584,
    "bytes_written":        199368704,
    "bytes_incoming":       328000000,
    "write_amp":            0.56,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     196316,
    "lss_gc_reads_bs":      10555848,
    "lss_blk_gc_reads_bs":  12099584,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      187740160,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
LSS test.basic.TestRecoveryCleanerRelocation(shard1) : all daemons stopped
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerRelocation stopped
LSS test.basic.TestRecoveryCleanerRelocation(shard1) : LSSCleaner stopped
LSS test.basic.TestRecoveryCleanerRelocation/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerRelocation closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestRecoveryCleanerRelocation ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestRecoveryCleanerRelocation ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerRelocation successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerRelocation (178.70s)
=== RUN   TestRecoveryCleanerDataSize
----------- running TestRecoveryCleanerDataSize
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestRecoveryCleanerDataSize(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerDataSize
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestRecoveryCleanerDataSize/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestRecoveryCleanerDataSize) to LSS (test.basic.TestRecoveryCleanerDataSize) and RecoveryLSS (test.basic.TestRecoveryCleanerDataSize/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestRecoveryCleanerDataSize
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDataSize to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDataSize to LSS test.basic.TestRecoveryCleanerDataSize
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestRecoveryCleanerDataSize/recovery], Data log [test.basic.TestRecoveryCleanerDataSize], Shared [false]
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.829µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestRecoveryCleanerDataSize/recovery], Data log [test.basic.TestRecoveryCleanerDataSize], Shared [false]. Built [0] plasmas, took [101.521µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestRecoveryCleanerDataSize(shard1) : all daemons started
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDataSize started
LSSInfo: frag:48, ds:8129142, used:15728640
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             9949,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          16637592,
"memory_size_index":    265808,
"allocated":            113819304,
"freed":                97181712,
"reclaimed":            97142640,
"reclaim_pending":      39072,
"reclaim_list_size":    39072,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000000,
"num_rec_allocs":       5004271,
"num_rec_frees":        4004271,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       8000000,
"lss_data_size":        8129408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16314984,
"est_recovery_size":    537234,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           19965608,
"page_cnt":             9301,
"page_itemcnt":         2495701,
"avg_item_size":        8,
"avg_page_size":        2146,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16255290,
"page_bytes_compressed":16255290,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    597992,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          3276,
    "lss_data_size":        8388136,
    "lss_used_space":       16891904,
    "lss_disk_size":        16891904,
    "lss_fragmentation":    50,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        16891904,
    "bytes_incoming":       8000000,
    "write_amp":            0.00,
    "write_amp_avg":        2.04,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      16351232,
    "num_sctxs":            28,
    "num_free_sctxs":       17,
    "num_swapperWriter":    32
  }
}
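The `lss_fragmentation` figure in the dump above, and the `frag` value on the LSSInfo line before it, appear to be the truncated percentage of log space not holding live data, i.e. `(used - data) / used * 100`. This is an inference from matching the dumped numbers, not from plasma documentation; a minimal sketch checking it against the figures printed above:

```python
def lss_fragmentation(data_size: int, used_space: int) -> int:
    """Percentage of the log that is garbage, truncated to an int.

    Formula inferred from the stats dump: frag = (used - data) / used * 100.
    """
    return int((used_space - data_size) / used_space * 100)

# Figures taken verbatim from the output above:
# lss_stats: lss_data_size 8388136, lss_used_space 16891904 -> reported 50
print(lss_fragmentation(8388136, 16891904))   # 50
# LSSInfo line: frag:48, ds:8129142, used:15728640 -> reported 48
print(lss_fragmentation(8129142, 15728640))   # 48
```

Both reported values match under truncation (50.34 -> 50, 48.32 -> 48), which is why `int()` rather than `round()` is used in the sketch.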

Running iteration.. 0
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : recoveryCleaner: starting... frag 58, data: 228316, used: 544768 log:(0 - 544768)
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : recoveryCleaner: completed... frag 10, data: 227796, used: 255876, relocated: 3262, retries: 0, skipped: 2088 log:(0 - 544768) run:1 duration:19 ms
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             19899,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              2000000,
"deletes":              1000000,
"compact_conflicts":    1,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     1,
"swapin_conflicts":     0,
"persist_conflicts":    18,
"memory_size":          16666072,
"memory_size_index":    265808,
"allocated":            242561464,
"freed":                225895392,
"reclaimed":            225778576,
"reclaim_pending":      116816,
"reclaim_list_size":    116816,
"reclaim_list_count":   9,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000078,
"num_rec_allocs":       8999297,
"num_rec_frees":        7999219,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       24000000,
"lss_data_size":        4724760,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        21062504,
"est_recovery_size":    288642,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           45016952,
"page_cnt":             23017,
"page_itemcnt":         5627119,
"avg_item_size":        8,
"avg_page_size":        1955,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":20985470,
"page_bytes_compressed":20985470,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    665440,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4874928,
    "lss_used_space":       21343820,
    "lss_disk_size":        22122496,
    "lss_fragmentation":    77,
    "lss_num_reads":        14082,
    "lss_read_bs":          715764,
    "lss_blk_read_bs":      811008,
    "bytes_written":        22122496,
    "bytes_incoming":       24000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.88,
    "lss_gc_num_reads":     14082,
    "lss_gc_reads_bs":      715764,
    "lss_blk_gc_reads_bs":  811008,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      21151744,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 1
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             29849,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              3000000,
"deletes":              2000000,
"compact_conflicts":    1,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     1,
"delete_conflicts":     1,
"swapin_conflicts":     0,
"persist_conflicts":    20,
"memory_size":          17040632,
"memory_size_index":    265808,
"allocated":            371490936,
"freed":                354450304,
"reclaimed":            354268608,
"reclaim_pending":      181696,
"reclaim_list_size":    181696,
"reclaim_list_count":   14,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010074,
"num_rec_allocs":       12994322,
"num_rec_frees":        11984248,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       40000000,
"lss_data_size":        6166108,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        30680142,
"est_recovery_size":    309124,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           5000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31260760,
"page_cnt":             19489,
"page_itemcnt":         3907595,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":30568206,
"page_bytes_compressed":30568206,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    920280,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6359956,
    "lss_used_space":       31011288,
    "lss_disk_size":        32423936,
    "lss_fragmentation":    79,
    "lss_num_reads":        24956,
    "lss_read_bs":          1302920,
    "lss_blk_read_bs":      1478656,
    "bytes_written":        32423936,
    "bytes_incoming":       40000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.77,
    "lss_gc_num_reads":     24956,
    "lss_gc_reads_bs":      1302920,
    "lss_blk_gc_reads_bs":  1478656,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      30781440,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 2
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             39799,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              4000000,
"deletes":              3000000,
"compact_conflicts":    2,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    51,
"memory_size":          17256312,
"memory_size_index":    265808,
"allocated":            500420536,
"freed":                483164224,
"reclaimed":            482917696,
"reclaim_pending":      246528,
"reclaim_list_size":    246528,
"reclaim_list_count":   19,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1010134,
"num_rec_allocs":       16989347,
"num_rec_frees":        15979213,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       56000000,
"lss_data_size":        7567966,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        40349074,
"est_recovery_size":    341546,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           7000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31355792,
"page_cnt":             19548,
"page_itemcnt":         3919474,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":40202224,
"page_bytes_compressed":40202224,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1175208,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7805598,
    "lss_used_space":       40797120,
    "lss_disk_size":        42840064,
    "lss_fragmentation":    80,
    "lss_num_reads":        35748,
    "lss_read_bs":          1885648,
    "lss_blk_read_bs":      2146304,
    "bytes_written":        42840064,
    "bytes_incoming":       56000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.72,
    "lss_gc_num_reads":     35748,
    "lss_gc_reads_bs":      1885648,
    "lss_blk_gc_reads_bs":  2146304,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      40529920,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 3
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             49750,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              5000000,
"deletes":              4000000,
"compact_conflicts":    3,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     2,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    76,
"memory_size":          17436504,
"memory_size_index":    265808,
"allocated":            629169040,
"freed":                611732536,
"reclaimed":            611408336,
"reclaim_pending":      324200,
"reclaim_list_size":    324200,
"reclaim_list_count":   25,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019924,
"num_rec_allocs":       20984598,
"num_rec_frees":        19964674,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       72000000,
"lss_data_size":        4228518,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        45269826,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           9000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33182744,
"page_cnt":             20687,
"page_itemcnt":         4147843,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":45105384,
"page_bytes_compressed":45105384,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1237352,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4359862,
    "lss_used_space":       45651528,
    "lss_disk_size":        48128000,
    "lss_fragmentation":    90,
    "lss_num_reads":        43161,
    "lss_read_bs":          2285926,
    "lss_blk_read_bs":      2613248,
    "bytes_written":        48128000,
    "bytes_incoming":       72000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     43161,
    "lss_gc_reads_bs":      2285926,
    "lss_blk_gc_reads_bs":  2613248,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      45481984,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 4
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             59700,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              6000000,
"deletes":              5000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     3,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    135,
"memory_size":          17653400,
"memory_size_index":    265808,
"allocated":            758099856,
"freed":                740446456,
"reclaimed":            740057592,
"reclaim_pending":      388864,
"reclaim_list_size":    388864,
"reclaim_list_count":   30,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1019984,
"num_rec_allocs":       24979623,
"num_rec_frees":        23959639,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       88000000,
"lss_data_size":        5676658,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        55084044,
"est_recovery_size":    307500,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           11000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36953960,
"page_cnt":             23038,
"page_itemcnt":         4619245,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":54884574,
"page_bytes_compressed":54884574,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1493496,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5852774,
    "lss_used_space":       55585308,
    "lss_disk_size":        58683392,
    "lss_fragmentation":    89,
    "lss_num_reads":        53788,
    "lss_read_bs":          2859744,
    "lss_blk_read_bs":      3272704,
    "bytes_written":        58683392,
    "bytes_incoming":       88000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     53788,
    "lss_gc_reads_bs":      2859744,
    "lss_blk_gc_reads_bs":  3272704,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      55377920,
    "num_sctxs":            30,
    "num_free_sctxs":       19,
    "num_swapperWriter":    32
  }
}
Running iteration.. 5
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             69650,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              7000000,
"deletes":              6000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    136,
"memory_size":          18027032,
"memory_size_index":    265808,
"allocated":            887028496,
"freed":                869001464,
"reclaimed":            868548488,
"reclaim_pending":      452976,
"reclaim_list_size":    452976,
"reclaim_list_count":   35,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1029974,
"num_rec_allocs":       28974648,
"num_rec_frees":        27944674,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       104000000,
"lss_data_size":        7138518,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        64922360,
"est_recovery_size":    370256,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           13000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35037176,
"page_cnt":             21843,
"page_itemcnt":         4379647,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":64688066,
"page_bytes_compressed":64688066,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1747504,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7357638,
    "lss_used_space":       65511668,
    "lss_disk_size":        69177344,
    "lss_fragmentation":    88,
    "lss_num_reads":        63532,
    "lss_read_bs":          3385880,
    "lss_blk_read_bs":      3870720,
    "bytes_written":        69177344,
    "bytes_incoming":       104000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.63,
    "lss_gc_num_reads":     63532,
    "lss_gc_reads_bs":      3385880,
    "lss_blk_gc_reads_bs":  3870720,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      65232896,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 6
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             79600,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              8000000,
"deletes":              7000000,
"compact_conflicts":    4,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     3,
"swapin_conflicts":     0,
"persist_conflicts":    136,
"memory_size":          18056632,
"memory_size_index":    265808,
"allocated":            1015771856,
"freed":                997715224,
"reclaimed":            997197000,
"reclaim_pending":      518224,
"reclaim_list_size":    518224,
"reclaim_list_count":   40,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1030040,
"num_rec_allocs":       32969673,
"num_rec_frees":        31939633,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       120000000,
"lss_data_size":        3641358,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        69873276,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           15000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33160296,
"page_cnt":             20673,
"page_itemcnt":         4145037,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":69621528,
"page_bytes_compressed":69621528,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    1816256,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        3752994,
    "lss_used_space":       70334908,
    "lss_disk_size":        74452992,
    "lss_fragmentation":    94,
    "lss_num_reads":        71277,
    "lss_read_bs":          3804086,
    "lss_blk_read_bs":      4362240,
    "bytes_written":        74452992,
    "bytes_incoming":       120000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     71277,
    "lss_gc_reads_bs":      3804086,
    "lss_blk_gc_reads_bs":  4362240,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      70189056,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 7
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             89551,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              9000000,
"deletes":              8000000,
"compact_conflicts":    6,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     4,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    166,
"memory_size":          18422424,
"memory_size_index":    265808,
"allocated":            1144706136,
"freed":                1126283712,
"reclaimed":            1125688216,
"reclaim_pending":      595496,
"reclaim_list_size":    595496,
"reclaim_list_count":   46,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039822,
"num_rec_allocs":       36964923,
"num_rec_frees":        35925101,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       136000000,
"lss_data_size":        5150572,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        79861076,
"est_recovery_size":    293870,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           17000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34959984,
"page_cnt":             21795,
"page_itemcnt":         4369998,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":79574318,
"page_bytes_compressed":79574318,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2064128,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5306824,
    "lss_used_space":       80413832,
    "lss_disk_size":        85127168,
    "lss_fragmentation":    93,
    "lss_num_reads":        81440,
    "lss_read_bs":          4352848,
    "lss_blk_read_bs":      4993024,
    "bytes_written":        85127168,
    "bytes_incoming":       136000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     81440,
    "lss_gc_reads_bs":      4352848,
    "lss_blk_gc_reads_bs":  4993024,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      80228352,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 8
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             99501,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              10000000,
"deletes":              9000000,
"compact_conflicts":    8,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     5,
"swapin_conflicts":     0,
"persist_conflicts":    210,
"memory_size":          18636664,
"memory_size_index":    265808,
"allocated":            1273634200,
"freed":                1254997536,
"reclaimed":            1254349984,
"reclaim_pending":      647552,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1039890,
"num_rec_allocs":       40959948,
"num_rec_frees":        39920058,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       152000000,
"lss_data_size":        6560368,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        89837102,
"est_recovery_size":    357496,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           19000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36753448,
"page_cnt":             22913,
"page_itemcnt":         4594181,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":89515574,
"page_bytes_compressed":89515574,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2317480,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6759156,
    "lss_used_space":       90536618,
    "lss_disk_size":        95797248,
    "lss_fragmentation":    92,
    "lss_num_reads":        90799,
    "lss_read_bs":          4858202,
    "lss_blk_read_bs":      5570560,
    "bytes_written":        95797248,
    "bytes_incoming":       152000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.59,
    "lss_gc_num_reads":     90799,
    "lss_gc_reads_bs":      4858202,
    "lss_blk_gc_reads_bs":  5570560,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      90279936,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
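Across these dumps, the reported `lss_fragmentation` tracks the gap between `lss_used_space` and `lss_data_size`. A minimal sanity check, under the assumption (inferred from the numbers here, not documented in this log) that fragmentation is the unused fraction of LSS space truncated to a whole percent:

```python
# Iteration 8 values copied from the lss_stats block above.
lss = {
    "lss_data_size": 6759156,
    "lss_used_space": 90536618,
    "lss_fragmentation": 92,
}

# Assumed formula: percentage of used LSS space not holding live data,
# truncated to an integer percent.
derived = (lss["lss_used_space"] - lss["lss_data_size"]) * 100 // lss["lss_used_space"]
print(derived)  # 92, matching the reported lss_fragmentation for this iteration
```

The same arithmetic reproduces the reported values for the neighboring iterations as well (94 and 93), which is what suggests the relationship; treat it as an observation about this log, not a documented invariant.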
Running iteration.. 9
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             109451,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              11000000,
"deletes":              10000000,
"compact_conflicts":    10,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     6,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    268,
"memory_size":          19009336,
"memory_size_index":    265808,
"allocated":            1402562072,
"freed":                1383552736,
"reclaimed":            1383487328,
"reclaim_pending":      65408,
"reclaim_list_size":    65408,
"reclaim_list_count":   5,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049872,
"num_rec_allocs":       44954973,
"num_rec_frees":        43905101,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       168000000,
"lss_data_size":        8042300,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        99895820,
"est_recovery_size":    455864,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           21000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33057832,
"page_cnt":             20609,
"page_itemcnt":         4132229,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":99539540,
"page_bytes_compressed":99539540,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2570640,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8283468,
    "lss_used_space":       100907500,
    "lss_disk_size":        106643456,
    "lss_fragmentation":    91,
    "lss_num_reads":        98956,
    "lss_read_bs":          5298656,
    "lss_blk_read_bs":      6082560,
    "bytes_written":        106643456,
    "bytes_incoming":       168000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.60,
    "lss_gc_num_reads":     98956,
    "lss_gc_reads_bs":      5298656,
    "lss_blk_gc_reads_bs":  6082560,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      100442112,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 10
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             119401,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              12000000,
"deletes":              11000000,
"compact_conflicts":    11,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     7,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    313,
"memory_size":          19041944,
"memory_size_index":    265808,
"allocated":            1531308440,
"freed":                1512266496,
"reclaimed":            1512136384,
"reclaim_pending":      130112,
"reclaim_list_size":    130112,
"reclaim_list_count":   10,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1049940,
"num_rec_allocs":       48949998,
"num_rec_frees":        47900058,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       184000000,
"lss_data_size":        4544912,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        105046164,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           23000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33221248,
"page_cnt":             20711,
"page_itemcnt":         4152656,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":104672148,
"page_bytes_compressed":104672148,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2642360,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4681040,
    "lss_used_space":       105835650,
    "lss_disk_size":        112218112,
    "lss_fragmentation":    95,
    "lss_num_reads":        109978,
    "lss_read_bs":          5893812,
    "lss_blk_read_bs":      6766592,
    "bytes_written":        112218112,
    "bytes_incoming":       184000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     109978,
    "lss_gc_reads_bs":      5893812,
    "lss_blk_gc_reads_bs":  6766592,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      105672704,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 11
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             129352,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              13000000,
"deletes":              12000000,
"compact_conflicts":    12,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     8,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    321,
"memory_size":          19406456,
"memory_size_index":    265808,
"allocated":            1660241392,
"freed":                1640834936,
"reclaimed":            1640627024,
"reclaim_pending":      207912,
"reclaim_list_size":    207912,
"reclaim_list_count":   16,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059722,
"num_rec_allocs":       52945249,
"num_rec_frees":        51885527,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       200000000,
"lss_data_size":        6061826,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        115240254,
"est_recovery_size":    311386,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           25000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           36871952,
"page_cnt":             22987,
"page_itemcnt":         4608994,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":114831354,
"page_bytes_compressed":114831354,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    2888952,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6241530,
    "lss_used_space":       116139312,
    "lss_disk_size":        123117568,
    "lss_fragmentation":    94,
    "lss_num_reads":        120172,
    "lss_read_bs":          6444248,
    "lss_blk_read_bs":      7393280,
    "bytes_written":        123117568,
    "bytes_incoming":       200000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     120172,
    "lss_gc_reads_bs":      6444248,
    "lss_blk_gc_reads_bs":  7393280,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      115904512,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 12
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             139302,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              14000000,
"deletes":              13000000,
"compact_conflicts":    12,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     8,
"delete_conflicts":     7,
"swapin_conflicts":     0,
"persist_conflicts":    356,
"memory_size":          19622264,
"memory_size_index":    265808,
"allocated":            1789170944,
"freed":                1769548680,
"reclaimed":            1769276248,
"reclaim_pending":      272432,
"reclaim_list_size":    272432,
"reclaim_list_count":   21,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1059792,
"num_rec_allocs":       56940275,
"num_rec_frees":        55880483,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       216000000,
"lss_data_size":        7545668,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        125487512,
"est_recovery_size":    365210,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           27000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34960192,
"page_cnt":             21795,
"page_itemcnt":         4370024,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":125043704,
"page_bytes_compressed":125043704,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3143856,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7769104,
    "lss_used_space":       126506914,
    "lss_disk_size":        134107136,
    "lss_fragmentation":    93,
    "lss_num_reads":        130785,
    "lss_read_bs":          7017310,
    "lss_blk_read_bs":      8052736,
    "bytes_written":        134107136,
    "bytes_incoming":       216000000,
    "write_amp":            0.59,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     130785,
    "lss_gc_reads_bs":      7017310,
    "lss_blk_gc_reads_bs":  8052736,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      126234624,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 13
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             149252,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              15000000,
"deletes":              14000000,
"compact_conflicts":    13,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     8,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    385,
"memory_size":          19809688,
"memory_size_index":    265808,
"allocated":            1917913536,
"freed":                1898103848,
"reclaimed":            1897767144,
"reclaim_pending":      336704,
"reclaim_list_size":    336704,
"reclaim_list_count":   26,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069772,
"num_rec_allocs":       60935300,
"num_rec_frees":        59865528,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       232000000,
"lss_data_size":        3934734,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        130633038,
"est_recovery_size":    288592,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           29000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           33181352,
"page_cnt":             20686,
"page_itemcnt":         4147669,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":130171848,
"page_bytes_compressed":130171848,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3211800,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        4050062,
    "lss_used_space":       131567896,
    "lss_disk_size":        139628544,
    "lss_fragmentation":    96,
    "lss_num_reads":        138660,
    "lss_read_bs":          7442536,
    "lss_blk_read_bs":      8548352,
    "bytes_written":        139628544,
    "bytes_incoming":       232000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     138660,
    "lss_gc_reads_bs":      7442536,
    "lss_blk_gc_reads_bs":  8548352,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      131428352,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 14
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             159202,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              16000000,
"deletes":              15000000,
"compact_conflicts":    15,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     10,
"delete_conflicts":     8,
"swapin_conflicts":     0,
"persist_conflicts":    432,
"memory_size":          20026200,
"memory_size_index":    265808,
"allocated":            2046843840,
"freed":                2026817640,
"reclaimed":            2026415848,
"reclaim_pending":      401792,
"reclaim_list_size":    401792,
"reclaim_list_count":   31,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1069842,
"num_rec_allocs":       64930325,
"num_rec_frees":        63860483,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       248000000,
"lss_data_size":        5454196,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        141018322,
"est_recovery_size":    308950,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           31000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           34980832,
"page_cnt":             21808,
"page_itemcnt":         4372604,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":140522152,
"page_bytes_compressed":140522152,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3467392,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5613880,
    "lss_used_space":       142097164,
    "lss_disk_size":        150728704,
    "lss_fragmentation":    96,
    "lss_num_reads":        148455,
    "lss_read_bs":          7971434,
    "lss_blk_read_bs":      9154560,
    "bytes_written":        150728704,
    "bytes_incoming":       248000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     148455,
    "lss_gc_reads_bs":      7971434,
    "lss_blk_gc_reads_bs":  9154560,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      141885440,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 15
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             169153,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              17000000,
"deletes":              16000000,
"compact_conflicts":    16,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     10,
"delete_conflicts":     9,
"swapin_conflicts":     0,
"persist_conflicts":    456,
"memory_size":          20393304,
"memory_size_index":    265808,
"allocated":            2175779464,
"freed":                2155386160,
"reclaimed":            2154906936,
"reclaim_pending":      479224,
"reclaim_list_size":    479224,
"reclaim_list_count":   37,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079620,
"num_rec_allocs":       68925575,
"num_rec_frees":        67845955,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       264000000,
"lss_data_size":        7086306,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        151525190,
"est_recovery_size":    348970,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           33000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           35041784,
"page_cnt":             21846,
"page_itemcnt":         4380223,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":150993884,
"page_bytes_compressed":150993884,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3716648,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        7291698,
    "lss_used_space":       152696114,
    "lss_disk_size":        161923072,
    "lss_fragmentation":    95,
    "lss_num_reads":        158677,
    "lss_read_bs":          8523382,
    "lss_blk_read_bs":      9785344,
    "bytes_written":        161923072,
    "bytes_incoming":       264000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     158677,
    "lss_gc_reads_bs":      8523382,
    "lss_blk_gc_reads_bs":  9785344,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      152444928,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 16
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             179103,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              18000000,
"deletes":              17000000,
"compact_conflicts":    17,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     9,
"swapin_conflicts":     0,
"persist_conflicts":    476,
"memory_size":          20608472,
"memory_size_index":    265808,
"allocated":            2304708296,
"freed":                2284099824,
"reclaimed":            2283555760,
"reclaim_pending":      544064,
"reclaim_list_size":    544064,
"reclaim_list_count":   42,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1079694,
"num_rec_allocs":       72920600,
"num_rec_frees":        71840906,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       280000000,
"lss_data_size":        8586696,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        161987844,
"est_recovery_size":    398444,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           35000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31339552,
"page_cnt":             19538,
"page_itemcnt":         3917444,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":161421696,
"page_bytes_compressed":161421696,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    3970832,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8835300,
    "lss_used_space":       163239148,
    "lss_disk_size":        173056000,
    "lss_fragmentation":    94,
    "lss_num_reads":        168743,
    "lss_read_bs":          9066906,
    "lss_blk_read_bs":      10412032,
    "bytes_written":        173056000,
    "bytes_incoming":       280000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     168743,
    "lss_gc_reads_bs":      9066906,
    "lss_blk_gc_reads_bs":  10412032,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      162951168,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 17
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             189053,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              19000000,
"deletes":              18000000,
"compact_conflicts":    18,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     10,
"swapin_conflicts":     0,
"persist_conflicts":    476,
"memory_size":          20798552,
"memory_size_index":    265808,
"allocated":            2433453640,
"freed":                2412655088,
"reclaimed":            2412047040,
"reclaim_pending":      608048,
"reclaim_list_size":    608048,
"reclaim_list_count":   47,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089670,
"num_rec_allocs":       76915625,
"num_rec_frees":        75825955,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       296000000,
"lss_data_size":        4977618,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        167334494,
"est_recovery_size":    293580,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           37000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31384856,
"page_cnt":             19566,
"page_itemcnt":         3923107,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":166750706,
"page_bytes_compressed":166750706,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4041504,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        5120298,
    "lss_used_space":       168491390,
    "lss_disk_size":        178716672,
    "lss_fragmentation":    96,
    "lss_num_reads":        175724,
    "lss_read_bs":          9443856,
    "lss_blk_read_bs":      10850304,
    "bytes_written":        178716672,
    "bytes_incoming":       296000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     175724,
    "lss_gc_reads_bs":      9443856,
    "lss_blk_gc_reads_bs":  10850304,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      168300544,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 18
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             199003,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              20000000,
"deletes":              19000000,
"compact_conflicts":    18,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     10,
"swapin_conflicts":     0,
"persist_conflicts":    499,
"memory_size":          21015288,
"memory_size_index":    265808,
"allocated":            2562383976,
"freed":                2541368688,
"reclaimed":            2541277392,
"reclaim_pending":      91296,
"reclaim_list_size":    13304,
"reclaim_list_count":   1,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1089750,
"num_rec_allocs":       80910652,
"num_rec_frees":        79820902,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       312000000,
"lss_data_size":        6532270,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        177954904,
"est_recovery_size":    325190,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           39000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           31301272,
"page_cnt":             19514,
"page_itemcnt":         3912659,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":177336136,
"page_bytes_compressed":177336136,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4297176,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        6719306,
    "lss_used_space":       179200914,
    "lss_disk_size":        190066688,
    "lss_fragmentation":    96,
    "lss_num_reads":        186678,
    "lss_read_bs":          10035332,
    "lss_blk_read_bs":      11526144,
    "bytes_written":        190066688,
    "bytes_incoming":       312000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.57,
    "lss_gc_num_reads":     186678,
    "lss_gc_reads_bs":      10035332,
    "lss_blk_gc_reads_bs":  11526144,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      178970624,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
Running iteration.. 19
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             208954,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              21000000,
"deletes":              20000000,
"compact_conflicts":    19,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     11,
"delete_conflicts":     12,
"swapin_conflicts":     0,
"persist_conflicts":    562,
"memory_size":          21379064,
"memory_size_index":    265808,
"allocated":            2691316400,
"freed":                2669937336,
"reclaimed":            2669845696,
"reclaim_pending":      91640,
"reclaim_list_size":    91640,
"reclaim_list_count":   7,
"reclaim_threshold":    50,
"allocated_index":      265808,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1099520,
"num_rec_allocs":       84905902,
"num_rec_frees":        83806382,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       328000000,
"lss_data_size":        8117302,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        188616616,
"est_recovery_size":    360280,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           41000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           25738176,
"page_cnt":             16046,
"page_itemcnt":         3217272,
"avg_item_size":        8,
"avg_page_size":        1604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":187963012,
"page_bytes_compressed":187963012,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    4543232,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11468,
    "lss_data_size":        8347446,
    "lss_used_space":       190088034,
    "lss_disk_size":        201584640,
    "lss_fragmentation":    95,
    "lss_num_reads":        197502,
    "lss_read_bs":          10619796,
    "lss_blk_read_bs":      12193792,
    "bytes_written":        201584640,
    "bytes_incoming":       328000000,
    "write_amp":            0.55,
    "write_amp_avg":        0.58,
    "lss_gc_num_reads":     197502,
    "lss_gc_reads_bs":      10619796,
    "lss_blk_gc_reads_bs":  12193792,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      189820928,
    "num_sctxs":            31,
    "num_free_sctxs":       20,
    "num_swapperWriter":    32
  }
}
data size(258684) matches on disk
LSS test.basic.TestRecoveryCleanerDataSize(shard1) : all daemons stopped
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDataSize stopped
LSS test.basic.TestRecoveryCleanerDataSize(shard1) : LSSCleaner stopped
LSS test.basic.TestRecoveryCleanerDataSize/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDataSize closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestRecoveryCleanerDataSize ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestRecoveryCleanerDataSize ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDataSize successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerDataSize (182.43s)
=== RUN   TestRecoveryCleanerDeleteInstance
----------- running TestRecoveryCleanerDeleteInstance
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestRecoveryCleanerDeleteInstance) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestRecoveryCleanerDeleteInstance
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDeleteInstance to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDeleteInstance to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.033µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [116.028µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [166.045µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.basic.TestRecoveryCleanerDeleteInstance1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.basic.TestRecoveryCleanerDeleteInstance1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDeleteInstance1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestRecoveryCleanerDeleteInstance1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance1 started
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance1 stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance1 closed
Shard shards/shard1(1) : destroying instance test.basic.TestRecoveryCleanerDeleteInstance1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.basic.TestRecoveryCleanerDeleteInstance1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance1 successfully destroyed
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 98, data: 258684, used: 17707008 log:(0 - 17707008)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 0, data: 258684, used: 2336, relocated: 4975, retries: 0, skipped: 104712 log:(0 - 17707008) run:1 duration:973 ms
data size(258684) matches on disk
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance stopped
Shard shards/shard1(1) : instance test.basic.TestRecoveryCleanerDeleteInstance closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerDeleteInstance (422.55s)
=== RUN   TestRecoveryCleanerRecoveryPoint
----------- running TestRecoveryCleanerRecoveryPoint
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestRecoveryCleanerRecoveryPoint) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestRecoveryCleanerRecoveryPoint
Shard shards/shard1(1) : Add instance test.mvcc.TestRecoveryCleanerRecoveryPoint to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestRecoveryCleanerRecoveryPoint to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [438.938µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [510.253µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [520.882µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestRecoveryCleanerRecoveryPoint started
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 76, data: 1047437, used: 4530176 log:(0 - 4530176)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 0, data: 1047437, used: 1052734, relocated: 14507, retries: 0, skipped: 20636 log:(0 - 5582848) run:1 duration:170 ms
Shard shards/shard1(1) : instance test.mvcc.TestRecoveryCleanerRecoveryPoint stopped
Shard shards/shard1(1) : instance test.mvcc.TestRecoveryCleanerRecoveryPoint closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerRecoveryPoint (23.75s)
=== RUN   TestRecoveryCleanerCorruptInstance
----------- running TestRecoveryCleanerCorruptInstance
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestRecoveryCleanerCorruptInstance_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestRecoveryCleanerCorruptInstance_1
Shard shards/shard1(1) : Add instance test.default.TestRecoveryCleanerCorruptInstance_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestRecoveryCleanerCorruptInstance_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [56.389µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [98.683µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [105.5µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestRecoveryCleanerCorruptInstance_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestRecoveryCleanerCorruptInstance_2
Shard shards/shard1(1) : Add instance test.default.TestRecoveryCleanerCorruptInstance_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestRecoveryCleanerCorruptInstance_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_2 started
before cleaning: log head offset 0 tail offset 28672
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 12, data: 25222, used: 28672 log:(0 - 28672)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 0, data: 25222, used: 5514, relocated: 50, retries: 0, skipped: 47 log:(0 - 32768) run:1 duration:12 ms
after cleaning: log head offset 27254 tail offset 40960
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_2 stopped
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_2 closed
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_1 stopped
Shard shards/shard1(1) : instance test.default.TestRecoveryCleanerCorruptInstance_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestRecoveryCleanerCorruptInstance (0.13s)
=== RUN   TestPlasmaRecoverySimple
----------- running TestPlasmaRecoverySimple
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaRecoverySimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecoverySimple
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecoverySimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaRecoverySimple) to LSS (test.basic.TestPlasmaRecoverySimple) and RecoveryLSS (test.basic.TestPlasmaRecoverySimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaRecoverySimple
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecoverySimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecoverySimple to LSS test.basic.TestPlasmaRecoverySimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaRecoverySimple/recovery], Data log [test.basic.TestPlasmaRecoverySimple], Shared [false]
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [303.732µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaRecoverySimple/recovery], Data log [test.basic.TestPlasmaRecoverySimple], Shared [false]. Built [0] plasmas, took [342.147µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaRecoverySimple(shard1) : all daemons started
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple started
LSS test.basic.TestPlasmaRecoverySimple(shard1) : all daemons stopped
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple stopped
LSS test.basic.TestPlasmaRecoverySimple(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple closed
LSS test.basic.TestPlasmaRecoverySimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecoverySimple
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecoverySimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaRecoverySimple) to LSS (test.basic.TestPlasmaRecoverySimple) and RecoveryLSS (test.basic.TestPlasmaRecoverySimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaRecoverySimple
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecoverySimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecoverySimple to LSS test.basic.TestPlasmaRecoverySimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaRecoverySimple/recovery], Data log [test.basic.TestPlasmaRecoverySimple], Shared [false]
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [205.442µs]
LSS test.basic.TestPlasmaRecoverySimple(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [4096] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [295.402µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaRecoverySimple/recovery], Data log [test.basic.TestPlasmaRecoverySimple], Shared [false]. Built [1] plasmas, took [378.524µs]
Plasma: doInit: data UsedSpace 4096 recovery UsedSpace 42
LSS test.basic.TestPlasmaRecoverySimple(shard1) : all daemons started
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple started
LSS test.basic.TestPlasmaRecoverySimple(shard1) : all daemons stopped
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple stopped
LSS test.basic.TestPlasmaRecoverySimple(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaRecoverySimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaRecoverySimple ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaRecoverySimple ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecoverySimple successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaRecoverySimple (0.05s)
=== RUN   TestPlasmaRecovery
----------- running TestPlasmaRecovery
Shards directory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecovery
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaRecovery) to LSS (test.basic.TestPlasmaRecovery) and RecoveryLSS (test.basic.TestPlasmaRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaRecovery
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecovery to LSS test.basic.TestPlasmaRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaRecovery/recovery], Data log [test.basic.TestPlasmaRecovery], Shared [false]
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [91.177µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaRecovery/recovery], Data log [test.basic.TestPlasmaRecovery], Shared [false]. Built [0] plasmas, took [111.426µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaRecovery(shard1) : all daemons started
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery started
{
"memory_quota":         1099511627776,
"count":                100000,
"compacts":             14412,
"purges":               0,
"splits":               4971,
"merges":               4463,
"inserts":              1000000,
"deletes":              900000,
"compact_conflicts":    14,
"split_conflicts":      10,
"merge_conflicts":      0,
"insert_conflicts":     988,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          1716120,
"memory_size_index":    27168,
"allocated":            164165536,
"freed":                162449416,
"reclaimed":            162173080,
"reclaim_pending":      276336,
"reclaim_list_size":    276336,
"reclaim_list_count":   43,
"reclaim_threshold":    50,
"allocated_index":      264432,
"freed_index":          237264,
"reclaimed_index":      235968,
"num_pages":            509,
"items_count":          0,
"total_records":        101444,
"num_rec_allocs":       6268550,
"num_rec_frees":        6167106,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       15200000,
"lss_data_size":        819618,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        19366636,
"est_recovery_size":    830346,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1900000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            2,
"num_free_wctxs":       0,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           34091936,
"page_cnt":             20470,
"page_itemcnt":         4261492,
"avg_item_size":        8,
"avg_page_size":        1665,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":19280206,
"page_bytes_compressed":19280206,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    95960,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          29236,
    "lss_data_size":        846270,
    "lss_used_space":       18874368,
    "lss_disk_size":        18874368,
    "lss_fragmentation":    95,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        18874368,
    "bytes_incoming":       15200000,
    "write_amp":            0.00,
    "write_amp_avg":        1.24,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      18874368,
    "num_sctxs":            21,
    "num_free_sctxs":       3,
    "num_swapperWriter":    32
  }
}
{
"node_count":             508,
"soft_deletes":           0,
"read_conflicts":         3,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3425,
"memory_used":            23104,
"node_allocs":            4971,
"node_frees":             0,
"level_node_distribution":{
"level0": 379,
"level1": 94,
"level2": 28,
"level3": 5,
"level4": 1,
"level5": 1,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
LSS test.basic.TestPlasmaRecovery(shard1) : all daemons stopped
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery stopped
LSS test.basic.TestPlasmaRecovery(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery closed
***** reopen file *****
LSS test.basic.TestPlasmaRecovery(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecovery
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaRecovery/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaRecovery) to LSS (test.basic.TestPlasmaRecovery) and RecoveryLSS (test.basic.TestPlasmaRecovery/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaRecovery
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecovery to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaRecovery to LSS test.basic.TestPlasmaRecovery
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaRecovery/recovery], Data log [test.basic.TestPlasmaRecovery], Shared [false]
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [905216]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.058228ms]
LSS test.basic.TestPlasmaRecovery(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [19529728] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [427.564109ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaRecovery/recovery], Data log [test.basic.TestPlasmaRecovery], Shared [false]. Built [1] plasmas, took [427.828767ms]
Plasma: doInit: data UsedSpace 19529728 recovery UsedSpace 830802
LSS test.basic.TestPlasmaRecovery(shard1) : all daemons started
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery started
{
"memory_quota":         1099511627776,
"count":                0,
"compacts":             0,
"purges":               0,
"splits":               0,
"merges":               0,
"inserts":              0,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          36640,
"memory_size_index":    26624,
"allocated":            34952928,
"freed":                34916288,
"reclaimed":            34916288,
"reclaim_pending":      0,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      502672,
"freed_index":          476048,
"reclaimed_index":      476048,
"num_pages":            509,
"items_count":          0,
"total_records":        101444,
"num_rec_allocs":       2002452,
"num_rec_frees":        2002452,
"num_rec_swapout":      101444,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       0,
"lss_data_size":        826718,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        19377444,
"est_recovery_size":    830802,
"lss_num_reads":        0,
"lss_read_bs":          19373348,
"lss_blk_read_bs":      19496960,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           0,
"cache_misses":         0,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            2,
"num_free_wctxs":       0,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           0,
"page_cnt":             0,
"page_itemcnt":         0,
"avg_item_size":        0,
"avg_page_size":        0,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":0,
"page_bytes_compressed":0,
"compression_ratio":    0.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    32576,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          0,
    "lss_data_size":        853474,
    "lss_used_space":       20434944,
    "lss_disk_size":        20434944,
    "lss_fragmentation":    95,
    "lss_num_reads":        0,
    "lss_read_bs":          20200054,
    "lss_blk_read_bs":      20402176,
    "bytes_written":        0,
    "bytes_incoming":       0,
    "write_amp":            0.00,
    "write_amp_avg":        0.00,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      19529728,
    "num_sctxs":            13,
    "num_free_sctxs":       1,
    "num_swapperWriter":    32
  }
}
{
"node_count":             508,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.2756,
"memory_used":            22560,
"node_allocs":            9434,
"node_frees":             0,
"level_node_distribution":{
"level0": 403,
"level1": 77,
"level2": 24,
"level3": 3,
"level4": 0,
"level5": 0,
"level6": 1,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
LSS test.basic.TestPlasmaRecovery(shard1) : all daemons stopped
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery stopped
LSS test.basic.TestPlasmaRecovery(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaRecovery/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaRecovery ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaRecovery ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaRecovery successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaRecovery (24.83s)
=== RUN   TestShardRecoveryShared
----------- running TestShardRecoveryShared
Shards directory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryShared_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.692µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [121.788µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [132.469µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryShared_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryShared_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryShared_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryShared_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryShared_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryShared_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryShared_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryShared_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryShared_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryShared_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryShared_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryShared_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryShared_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryShared_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryShared_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryShared_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryShared_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryShared_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryShared_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards
Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryShared_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4038656]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [34074624] took [131.801746ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [34107392] replayOffset [34074624]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [132.333772ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [137.597333ms]
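The recovery sequence above proceeds in two phases: the compact recovery log is replayed first and yields a replayOffset into the data log (here 34074624), after which only the data-log tail past that offset is replayed rather than the full log. A minimal sketch of that idea, with hypothetical record and function names not taken from the plasma code:

```go
package main

import "fmt"

// logRecord is a hypothetical replay record: the data-log offset it was
// written at, plus the key it updates.
type logRecord struct {
	offset int64
	key    string
}

// replayRecoveryLog rebuilds state from the (small) recovery log and
// returns the highest data-log offset it has already covered.
func replayRecoveryLog(recLog []logRecord) (map[string]int64, int64) {
	state := make(map[string]int64)
	var replayOffset int64
	for _, r := range recLog {
		state[r.key] = r.offset
		if r.offset > replayOffset {
			replayOffset = r.offset
		}
	}
	return state, replayOffset
}

// replayDataLog applies only the records past replayOffset, so the bulk
// of the data log is skipped on restart.
func replayDataLog(dataLog []logRecord, state map[string]int64, replayOffset int64) int {
	applied := 0
	for _, r := range dataLog {
		if r.offset <= replayOffset {
			continue // already captured by the recovery log
		}
		state[r.key] = r.offset
		applied++
	}
	return applied
}

func main() {
	recLog := []logRecord{{100, "a"}, {200, "b"}}
	dataLog := []logRecord{{100, "a"}, {200, "b"}, {300, "c"}}
	state, off := replayRecoveryLog(recLog)
	applied := replayDataLog(dataLog, state, off)
	fmt.Println(off, applied, len(state))
}
```

Under these assumptions, only the one record beyond the replay offset is re-applied from the data log, which is why the data-log replay in the log above starts from replayOffset rather than headOffset 0.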
Plasma: doInit: data UsedSpace 1716583 recovery UsedSpace 239544
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryShared_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715562 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryShared_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712962 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryShared_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1724160 recovery UsedSpace 239764
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryShared_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1702877 recovery UsedSpace 222952
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryShared_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1711149 recovery UsedSpace 231028
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryShared_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1707891 recovery UsedSpace 222926
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryShared_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1726107 recovery UsedSpace 239654
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryShared_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1724049 recovery UsedSpace 239680
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryShared_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1717478 recovery UsedSpace 231222
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryShared_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1721827 recovery UsedSpace 239764
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryShared_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1708968 recovery UsedSpace 231112
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryShared_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1701920 recovery UsedSpace 222816
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryShared_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715846 recovery UsedSpace 231222
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryShared_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715308 recovery UsedSpace 231274
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryShared_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1723262 recovery UsedSpace 239324
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryShared_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715936 recovery UsedSpace 231112
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryShared_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712718 recovery UsedSpace 231222
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryShared_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715305 recovery UsedSpace 231552
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryShared_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryShared_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryShared_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715022 recovery UsedSpace 231222
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryShared_19 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryShared (12.17s)
=== RUN   TestShardRecoveryRecoveryLogAhead
----------- running TestShardRecoveryRecoveryLogAhead
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRecoveryLogAhead_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.991µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [103.829µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [110.778µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRecoveryLogAhead_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRecoveryLogAhead_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRecoveryLogAhead_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRecoveryLogAhead_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRecoveryLogAhead_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRecoveryLogAhead_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRecoveryLogAhead_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRecoveryLogAhead_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRecoveryLogAhead_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRecoveryLogAhead_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRecoveryLogAhead_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRecoveryLogAhead_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRecoveryLogAhead_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRecoveryLogAhead_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRecoveryLogAhead_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRecoveryLogAhead_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRecoveryLogAhead_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRecoveryLogAhead_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRecoveryLogAhead_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 started
Loading data...
Done loading some data. Copying data log...
Loading more data...
Done loading data.
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Copying back old data log...
Validating recovery...
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRecoveryLogAhead_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8069120]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [34217984] took [687.645183ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [68288512] replayOffset [34217984]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [706.203493ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [713.082943ms]
Plasma: doInit: data UsedSpace 1719286 recovery UsedSpace 467004
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRecoveryLogAhead_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712156 recovery UsedSpace 450056
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRecoveryLogAhead_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1711035 recovery UsedSpace 458734
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRecoveryLogAhead_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1713078 recovery UsedSpace 450386
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRecoveryLogAhead_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1710131 recovery UsedSpace 450166
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRecoveryLogAhead_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1714967 recovery UsedSpace 458650
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRecoveryLogAhead_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1722822 recovery UsedSpace 466894
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRecoveryLogAhead_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712283 recovery UsedSpace 458268
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRecoveryLogAhead_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1713230 recovery UsedSpace 458436
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRecoveryLogAhead_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712793 recovery UsedSpace 450192
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRecoveryLogAhead_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1719655 recovery UsedSpace 466894
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRecoveryLogAhead_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1709802 recovery UsedSpace 458184
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRecoveryLogAhead_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1711950 recovery UsedSpace 458734
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRecoveryLogAhead_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1707954 recovery UsedSpace 441786
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRecoveryLogAhead_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1715114 recovery UsedSpace 458818
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRecoveryLogAhead_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1704579 recovery UsedSpace 450276
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRecoveryLogAhead_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1723004 recovery UsedSpace 467004
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRecoveryLogAhead_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1711363 recovery UsedSpace 458462
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRecoveryLogAhead_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1724959 recovery UsedSpace 475656
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRecoveryLogAhead_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1725460 recovery UsedSpace 475436
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 started
Validating skiplog...
Loading even more data...
Done loading more data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRecoveryLogAhead_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [12066816]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [102424576] took [325.66579ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [102457344] replayOffset [102424576]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [326.474835ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [336.309496ms]
Plasma: doInit: data UsedSpace 3439029 recovery UsedSpace 702590
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRecoveryLogAhead_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3427426 recovery UsedSpace 677320
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRecoveryLogAhead_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3422833 recovery UsedSpace 685804
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRecoveryLogAhead_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3427222 recovery UsedSpace 677540
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRecoveryLogAhead_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3412174 recovery UsedSpace 668914
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRecoveryLogAhead_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3441275 recovery UsedSpace 694236
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRecoveryLogAhead_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3447611 recovery UsedSpace 702700
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRecoveryLogAhead_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3427802 recovery UsedSpace 685532
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRecoveryLogAhead_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3428223 recovery UsedSpace 685590
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRecoveryLogAhead_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3438682 recovery UsedSpace 685998
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRecoveryLogAhead_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3451380 recovery UsedSpace 710776
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRecoveryLogAhead_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3430838 recovery UsedSpace 693466
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRecoveryLogAhead_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3431995 recovery UsedSpace 694100
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRecoveryLogAhead_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3429930 recovery UsedSpace 677372
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRecoveryLogAhead_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3438564 recovery UsedSpace 694514
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRecoveryLogAhead_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3415920 recovery UsedSpace 677236
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRecoveryLogAhead_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3444559 recovery UsedSpace 702590
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRecoveryLogAhead_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3433580 recovery UsedSpace 694048
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRecoveryLogAhead_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3449039 recovery UsedSpace 711352
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogAhead_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRecoveryLogAhead_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogAhead_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3441383 recovery UsedSpace 702946
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogAhead_19 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryRecoveryLogAhead (37.70s)
=== RUN   TestShardRecoveryDataLogAhead
----------- running TestShardRecoveryDataLogAhead
Start shard recovery: shards directory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDataLogAhead_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [80.626µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [103.026µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [112.802µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDataLogAhead_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDataLogAhead_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDataLogAhead_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDataLogAhead_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDataLogAhead_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDataLogAhead_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDataLogAhead_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDataLogAhead_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDataLogAhead_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDataLogAhead_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDataLogAhead_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDataLogAhead_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDataLogAhead_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDataLogAhead_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDataLogAhead_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDataLogAhead_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDataLogAhead_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDataLogAhead_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDataLogAhead_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 started
Loading data...
Done loading some data. Copying recovery log...
Loading more data...
Done loading data.
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Copying back old recovery log...
Validating recovery...
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDataLogAhead_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4055040]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [34078720] took [129.698113ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [67960832] replayOffset [34078720]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [1.175046129s]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [1.187927306s]
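The recovery sequence logged above runs in two phases: the recovery log is replayed first (recoverFromHeaderReplay), which yields a replayOffset into the data log, and the data log is then replayed from that offset up to its tail (recoverFromDataReplay). A minimal sketch of that offset arithmetic follows; all type and function names here are hypothetical illustrations, not the actual plasma API.

```go
package main

import "fmt"

// headerRec stands in for a record in the recovery log; each one
// covers state up to some position in the data log.
type headerRec struct {
	dataLogOffset int64
}

// replayRecoveryLog scans the recovery log and returns the replay
// offset: the furthest data-log position already reflected in the
// recovered state (cf. "Done recovering from recovery log,
// replayOffset [34078720]" above).
func replayRecoveryLog(recs []headerRec) int64 {
	var replayOffset int64
	for _, r := range recs {
		if r.dataLogOffset > replayOffset {
			replayOffset = r.dataLogOffset
		}
	}
	return replayOffset
}

// replayDataLog returns how many bytes of the data log still need a
// full replay once the recovery log has been applied, i.e. the span
// from replayOffset to tailOffset.
func replayDataLog(tailOffset, replayOffset int64) int64 {
	if replayOffset > tailOffset {
		return 0
	}
	return tailOffset - replayOffset
}

func main() {
	// Offsets borrowed from the log lines above; the header records
	// themselves are made up for illustration.
	recs := []headerRec{{10485760}, {34078720}, {20971520}}
	replayOffset := replayRecoveryLog(recs)
	fmt.Println(replayOffset, replayDataLog(67960832, replayOffset))
	// prints "34078720 33882112"
}
```

This is why the data-log replay above starts at replayOffset [34078720] rather than headOffset [0]: everything before that point is already captured by the recovery log.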
Plasma: doInit: data UsedSpace 3430396 recovery UsedSpace 239654
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDataLogAhead_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3424503 recovery UsedSpace 223036
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDataLogAhead_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3426745 recovery UsedSpace 231028
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDataLogAhead_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3413431 recovery UsedSpace 222706
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDataLogAhead_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3421614 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDataLogAhead_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3441817 recovery UsedSpace 239434
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDataLogAhead_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3417942 recovery UsedSpace 222816
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDataLogAhead_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3435960 recovery UsedSpace 231138
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDataLogAhead_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3423239 recovery UsedSpace 222926
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDataLogAhead_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3434686 recovery UsedSpace 231248
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDataLogAhead_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3449968 recovery UsedSpace 239434
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDataLogAhead_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3434821 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDataLogAhead_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3423759 recovery UsedSpace 231028
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDataLogAhead_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3426201 recovery UsedSpace 231384
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDataLogAhead_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3433990 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDataLogAhead_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3427508 recovery UsedSpace 231138
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDataLogAhead_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3416021 recovery UsedSpace 223036
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDataLogAhead_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3436130 recovery UsedSpace 231358
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDataLogAhead_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3422499 recovery UsedSpace 222842
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogAhead_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDataLogAhead_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogAhead_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 3442688 recovery UsedSpace 239764
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogAhead_19 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDataLogAhead (24.72s)
=== RUN   TestShardRecoveryDestroyBlksInDataLog
----------- running TestShardRecoveryDestroyBlksInDataLog
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.791µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [104.277µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [111.49µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_5 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_6 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_7 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_8 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_9 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 ...
Shard shards/shard1(1) : removed plasmaId 11 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_10 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 ...
Shard shards/shard1(1) : removed plasmaId 12 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_11 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 ...
Shard shards/shard1(1) : removed plasmaId 13 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_12 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 ...
Shard shards/shard1(1) : removed plasmaId 14 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_13 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 ...
Shard shards/shard1(1) : removed plasmaId 15 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_14 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 ...
Shard shards/shard1(1) : removed plasmaId 16 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_15 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 ...
Shard shards/shard1(1) : removed plasmaId 17 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_16 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 ...
Shard shards/shard1(1) : removed plasmaId 18 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_17 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 ...
Shard shards/shard1(1) : removed plasmaId 19 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_18 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 ...
Shard shards/shard1(1) : removed plasmaId 20 for instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_19 successfully destroyed
Cleaning LSS at: [shards/shard1/data/recovery] ...
Before cleaning: log head offset [0] tail offset [4145152]
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: starting... frag 91, data: 369306, used: 4145152 log:(0 - 4145152)
LSS shards/shard1/data/recovery(shard1) : recoveryCleaner: completed... frag 90, data: 369306, used: 4083678, relocated: 0, retries: 0, skipped: 0 log:(0 - 4165632) run:1 duration:50 ms
After cleaning: log head offset [81954] tail offset [4165632]
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [81954] tailOffset [4165632]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [34115584] took [48.138618ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [34152448] replayOffset [34115584]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [48.350041ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [5] plasmas, took [49.529267ms]
Plasma: doInit: data UsedSpace 1730582 recovery UsedSpace 243638
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1737876 recovery UsedSpace 243832
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1724983 recovery UsedSpace 235316
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1729006 recovery UsedSpace 235426
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInDataLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInDataLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1728218 recovery UsedSpace 235646
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInDataLog_4 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDestroyBlksInDataLog (11.20s)
=== RUN   TestShardRecoveryDestroyBlksInRecoveryLog
----------- running TestShardRecoveryDestroyBlksInRecoveryLog
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.947µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [130.102µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [141.198µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_5 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_6 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_7 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_8 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_9 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 ...
Shard shards/shard1(1) : removed plasmaId 11 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_10 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 ...
Shard shards/shard1(1) : removed plasmaId 12 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_11 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 ...
Shard shards/shard1(1) : removed plasmaId 13 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_12 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 ...
Shard shards/shard1(1) : removed plasmaId 14 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_13 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 ...
Shard shards/shard1(1) : removed plasmaId 15 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_14 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 ...
Shard shards/shard1(1) : removed plasmaId 16 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_15 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 ...
Shard shards/shard1(1) : removed plasmaId 17 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_16 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 ...
Shard shards/shard1(1) : removed plasmaId 18 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_17 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 ...
Shard shards/shard1(1) : removed plasmaId 19 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_18 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 ...
Shard shards/shard1(1) : removed plasmaId 20 for instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_19 successfully destroyed
Cleaning LSS at: [shards/shard1/data] ...
Before cleaning: log head offset [0] tail offset [34127872]
LSS shards/shard1/data(shard1) : logCleaner: starting... frag 88, data: 4062396, used: 34127872 log:(0 - 34127872)
LSS shards/shard1/data(shard1) : logCleaner: completed... frag 0, data: 1650062, used: 1073030, relocated: 3725, retries: 0, skipped: 3743 log:(0 - 35196928) run:1 duration:693 ms
After cleaning: log head offset [34123898] tail offset [35844096]
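The logCleaner lines above report "frag 88" before cleaning (data 4062396 live bytes out of 34127872 used) and "frag 0" afterwards. A minimal sketch of how such a fragmentation percentage could be derived from those two figures; `fragPct` is a hypothetical helper, not plasma's actual code:

```go
package main

import "fmt"

// fragPct estimates log fragmentation as the share of used bytes that no
// longer hold live data, clamped to 0 when accounting makes data >= used
// (as in the post-cleaning line above). Hypothetical sketch only.
func fragPct(data, used int64) int64 {
	if used == 0 || data >= used {
		return 0 // nothing reclaimable
	}
	return (used - data) * 100 / used
}

func main() {
	// Figures from the "Before cleaning" logCleaner line: frag 88.
	fmt.Println(fragPct(4062396, 34127872)) // 88
	// Figures from the "completed" line: data exceeds used, so frag is 0.
	fmt.Println(fragPct(1650062, 1073030)) // 0
}
```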
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4923392]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [35840412] took [50.344347ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [34123898] tailOffset [35844096] replayOffset [35840412]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [50.450795ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [5] plasmas, took [51.424214ms]
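The Shard.doRecovery messages above trace a two-phase replay: the recovery log is scanned first and yields a replayOffset into the data log, so only the data-log tail past that offset still needs full replay. A sketch of that offset arithmetic using the figures logged above; `logRange` and `dataBytesToReplay` are hypothetical names, not plasma's actual API:

```go
package main

import "fmt"

// logRange mirrors the head/tail offsets reported in the log lines above.
type logRange struct{ head, tail int64 }

// dataBytesToReplay returns how much of the data log must be replayed after
// the recovery-log phase has already covered everything up to replayOffset.
func dataBytesToReplay(data logRange, replayOffset int64) int64 {
	if replayOffset < data.head {
		replayOffset = data.head // bytes before the head were cleaned away
	}
	return data.tail - replayOffset
}

func main() {
	// Offsets from the recoverFromDataReplay line above:
	// headOffset 34123898, tailOffset 35844096, replayOffset 35840412.
	data := logRange{head: 34123898, tail: 35844096}
	fmt.Println(dataBytesToReplay(data, 35840412)) // 3684
}
```

On these numbers only 3684 bytes of the data log remain to replay, which is consistent with the sub-millisecond gap between the two "Done recovering" timings above.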
Plasma: doInit: data UsedSpace 370498 recovery UsedSpace 401848
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 329119 recovery UsedSpace 393468
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 329983 recovery UsedSpace 393774
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 335481 recovery UsedSpace 393912
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 391859 recovery UsedSpace 393996
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInRecoveryLog_4 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDestroyBlksInRecoveryLog (11.75s)
=== RUN   TestShardRecoveryDestroyBlksInBothLog
----------- running TestShardRecoveryDestroyBlksInBothLog
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.913µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [221.698µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [235.373µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_5 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_6 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_7 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_8 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_9 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 ...
Shard shards/shard1(1) : removed plasmaId 11 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_10 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 ...
Shard shards/shard1(1) : removed plasmaId 12 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_11 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 ...
Shard shards/shard1(1) : removed plasmaId 13 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_12 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 ...
Shard shards/shard1(1) : removed plasmaId 14 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_13 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 ...
Shard shards/shard1(1) : removed plasmaId 15 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_14 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 ...
Shard shards/shard1(1) : removed plasmaId 16 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_15 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 ...
Shard shards/shard1(1) : removed plasmaId 17 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_16 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 ...
Shard shards/shard1(1) : removed plasmaId 18 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_17 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 ...
Shard shards/shard1(1) : removed plasmaId 19 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_18 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 ...
Shard shards/shard1(1) : removed plasmaId 20 for instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_19 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards
Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4026368]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [34060436] took [47.867038ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [34226176] replayOffset [34060436]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [48.327111ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [5] plasmas, took [49.284387ms]
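The Shard.doRecovery messages above show a two-phase replay: the recovery log is replayed first and yields a replayOffset into the data log, and only the data-log tail beyond that offset is then replayed. A minimal sketch of that idea follows; the record types and the doRecovery helper here are hypothetical illustrations, not the actual plasma implementation:

```go
package main

import "fmt"

// recoveryRecord is a hypothetical compact record in the recovery log;
// it carries the data-log offset it summarizes.
type recoveryRecord struct {
	dataOffset int64
}

// dataRecord is a hypothetical record in the main data log.
type dataRecord struct {
	offset int64
}

// doRecovery replays the recovery log end to end, tracking the highest
// data-log offset it covers (the replayOffset), then replays only the
// data-log records at or beyond that offset.
func doRecovery(recoveryLog []recoveryRecord, dataLog []dataRecord) (replayed int, replayOffset int64) {
	for _, r := range recoveryLog { // phase 1: recovery log
		replayed++
		if r.dataOffset > replayOffset {
			replayOffset = r.dataOffset
		}
	}
	for _, d := range dataLog { // phase 2: uncovered data-log tail only
		if d.offset >= replayOffset {
			replayed++
		}
	}
	return replayed, replayOffset
}

func main() {
	recoveryLog := []recoveryRecord{{dataOffset: 100}, {dataOffset: 250}}
	dataLog := []dataRecord{{offset: 0}, {offset: 100}, {offset: 250}, {offset: 400}}
	n, off := doRecovery(recoveryLog, dataLog)
	fmt.Println(n, off) // 4 250
}
```

The payoff is visible in the timings above: the recovery-log pass covers most of the data log (replayOffset 34060436 of a 34226176-byte tail), so the second pass replays only a small remainder.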
Plasma: doInit: data UsedSpace 1735251 recovery UsedSpace 247756
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1717222 recovery UsedSpace 231468
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1714824 recovery UsedSpace 231248
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1712873 recovery UsedSpace 231028
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyBlksInBothLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDestroyBlksInBothLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1711030 recovery UsedSpace 231028
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyBlksInBothLog_4 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDestroyBlksInBothLog (11.09s)
=== RUN   TestShardRecoveryRecoveryLogCorruption
----------- running TestShardRecoveryRecoveryLogCorruption
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRecoveryLogCorruption_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.384µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [135.164µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [142.573µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRecoveryLogCorruption_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRecoveryLogCorruption_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRecoveryLogCorruption_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRecoveryLogCorruption_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRecoveryLogCorruption_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRecoveryLogCorruption_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRecoveryLogCorruption_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRecoveryLogCorruption_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRecoveryLogCorruption_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRecoveryLogCorruption_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRecoveryLogCorruption_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRecoveryLogCorruption_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRecoveryLogCorruption_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRecoveryLogCorruption_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRecoveryLogCorruption_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRecoveryLogCorruption_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRecoveryLogCorruption_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRecoveryLogCorruption_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRecoveryLogCorruption_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Corrupting file at 2994176
Wrote [1048526] bytes to file [shards/shard1/data/recovery/log.00000000000000.data]
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRecoveryLogCorruption_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRecoveryLogCorruption_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRecoveryLogCorruption_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [4042752]
LSS shards/shard1/data/recovery(shard1) : (fatal error - Failed to recover plasmaId 7 due to error 'LSS shards/shard1/data/recovery(shard1) : fatal: LSS Block is corrupted')
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [107.012421ms]
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRecoveryLogCorruption_0 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryRecoveryLogCorruption (10.77s)
=== RUN   TestShardRecoveryDataLogCorruption
----------- running TestShardRecoveryDataLogCorruption
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDataLogCorruption_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.303µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [131.86µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [141.124µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryDataLogCorruption_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryDataLogCorruption_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryDataLogCorruption_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryDataLogCorruption_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryDataLogCorruption_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryDataLogCorruption_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryDataLogCorruption_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryDataLogCorruption_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryDataLogCorruption_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryDataLogCorruption_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryDataLogCorruption_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryDataLogCorruption_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryDataLogCorruption_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryDataLogCorruption_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryDataLogCorruption_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryDataLogCorruption_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryDataLogCorruption_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryDataLogCorruption_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryDataLogCorruption_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Corrupting file at 33181696
Wrote [1048526] bytes to file [shards/shard1/data/log.00000000000000.data]
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDataLogCorruption_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDataLogCorruption_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDataLogCorruption_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [34230272] replayOffset [0]
LSS shards/shard1/data(shard1) : (fatal error - Failed to recover plasmaId 12 due to error ': fatal: LSS Block is corrupted')
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [1.036931401s]
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDataLogCorruption_0 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDataLogCorruption (11.74s)
=== RUN   TestShardRecoverySharedNoRP
----------- running TestShardRecoverySharedNoRP
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoverySharedNoRP_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.914µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [113.981µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [121.483µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoverySharedNoRP_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoverySharedNoRP_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoverySharedNoRP_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoverySharedNoRP_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoverySharedNoRP_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoverySharedNoRP_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoverySharedNoRP_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoverySharedNoRP_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoverySharedNoRP_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoverySharedNoRP_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoverySharedNoRP_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoverySharedNoRP_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoverySharedNoRP_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoverySharedNoRP_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoverySharedNoRP_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoverySharedNoRP_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoverySharedNoRP_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoverySharedNoRP_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoverySharedNoRP_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoverySharedNoRP_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [2895872]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [81920] took [132.21248ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [27684864] replayOffset [81920]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [1.00882466s]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [1.011217788s]
Plasma: doInit: data UsedSpace 1362362 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoverySharedNoRP_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362076 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoverySharedNoRP_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362357 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoverySharedNoRP_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362141 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoverySharedNoRP_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362398 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoverySharedNoRP_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362203 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoverySharedNoRP_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362363 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoverySharedNoRP_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362252 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoverySharedNoRP_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362369 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoverySharedNoRP_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362301 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoverySharedNoRP_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362308 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoverySharedNoRP_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362365 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoverySharedNoRP_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362308 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoverySharedNoRP_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362413 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoverySharedNoRP_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362273 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoverySharedNoRP_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362445 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoverySharedNoRP_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362210 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoverySharedNoRP_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362450 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoverySharedNoRP_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1362167 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoverySharedNoRP_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoverySharedNoRP_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoverySharedNoRP_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 1366196 recovery UsedSpace 132548
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoverySharedNoRP_19 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoverySharedNoRP (11.43s)
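The doRecovery lines above show a two-phase replay: first the recovery log is replayed to a replayOffset, then the data log is replayed from that offset to its tail. The sketch below is a minimal, hypothetical illustration of that scheme; the `LogEntry` type and `replay` function are inventions for this example, not Plasma's actual API.

```go
package main

import "fmt"

// LogEntry is a hypothetical record in an append-only log.
type LogEntry struct {
	Offset int64
	Data   string
}

// replay applies entries whose offset falls in [head, tail) and
// returns the offset of the last entry applied.
func replay(entries []LogEntry, head, tail int64, apply func(LogEntry)) int64 {
	last := head
	for _, e := range entries {
		if e.Offset >= head && e.Offset < tail {
			apply(e)
			last = e.Offset
		}
	}
	return last
}

func main() {
	recoveryLog := []LogEntry{{0, "meta"}, {4096, "page"}}
	dataLog := []LogEntry{{0, "ins"}, {8192, "ins"}, {16384, "del"}}

	// Phase 1: replay the recovery log, noting how far the data
	// log had been checkpointed (the replayOffset in the log lines).
	replayOffset := replay(recoveryLog, 0, 8192, func(LogEntry) {})

	// Phase 2: replay the data log from replayOffset to its tail to
	// recover mutations not yet captured in the recovery log.
	n := 0
	replay(dataLog, replayOffset, 32768, func(LogEntry) { n++ })
	fmt.Println(n) // number of data-log entries replayed
}
```

When the recovery log is empty (headOffset and tailOffset both 0, as in some runs above), phase 1 contributes nothing and the entire data log is replayed from offset 0.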
=== RUN   TestShardRecoveryNotEnoughMem
----------- running TestShardRecoveryNotEnoughMem
Start shard recovery: shards directory "shards" does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryNotEnoughMem_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.445µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [70.984µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [77.847µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryNotEnoughMem_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryNotEnoughMem_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryNotEnoughMem_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryNotEnoughMem_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryNotEnoughMem_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryNotEnoughMem_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryNotEnoughMem_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryNotEnoughMem_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryNotEnoughMem_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryNotEnoughMem_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryNotEnoughMem_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryNotEnoughMem_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryNotEnoughMem_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryNotEnoughMem_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryNotEnoughMem_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryNotEnoughMem_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryNotEnoughMem_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryNotEnoughMem_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryNotEnoughMem_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 started
Loading data...
Done loading data. Validating recovery...
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Set memQuota to [3259088] and swapperEvictionTimeout to [1s]
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryNotEnoughMem_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [61841408] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [1.315499846s]
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 closed
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
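The run above lowers memQuota to 3259088 bytes before recovery and builds 0 plasmas, then (below) raises it to 1 TiB and rebuilds all 20. The gist is a per-instance admission check against the remaining quota during rebuild. The sketch below is a hypothetical illustration of such a check; `canBuildInMemory` and the ~2.9 MB per-instance size (taken from the doInit UsedSpace figures) are assumptions, not Plasma's real accounting.

```go
package main

import "fmt"

// canBuildInMemory reports whether rebuilding an instance of the given
// estimated size still fits under the memory quota. Hypothetical
// stand-in for a quota check during shard recovery.
func canBuildInMemory(used, estimated, quota int64) bool {
	return used+estimated <= quota
}

func main() {
	const instSize = 2_900_000 // ~2.9 MB per instance, per the doInit lines
	lowQuota := int64(3_259_088)
	highQuota := int64(1 << 40) // 1 TiB

	for _, quota := range []int64{lowQuota, highQuota} {
		var built int
		var used int64
		for i := 0; i < 20; i++ {
			if canBuildInMemory(used, instSize, quota) {
				used += instSize
				built++
			}
		}
		fmt.Printf("quota %d: built %d of 20 instances in memory\n", quota, built)
	}
}
```

Instances that do not fit under the quota would have to be recovered later or served from disk; the actual eviction and retry behavior (swapperEvictionTimeout above) is more involved than this admission check.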
Set memQuota to [1099511627776] and swapperEvictionTimeout to [5m0s]
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryNotEnoughMem_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [61841408] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [6.299855602s]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [6.338464011s]
Plasma: doInit: data UsedSpace 2883151 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryNotEnoughMem_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2861523 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryNotEnoughMem_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2861551 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryNotEnoughMem_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2861734 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryNotEnoughMem_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2861743 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryNotEnoughMem_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862075 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryNotEnoughMem_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2861920 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryNotEnoughMem_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862081 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryNotEnoughMem_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862207 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryNotEnoughMem_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862691 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryNotEnoughMem_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862295 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryNotEnoughMem_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862699 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryNotEnoughMem_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862456 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryNotEnoughMem_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862644 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryNotEnoughMem_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862735 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryNotEnoughMem_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862801 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryNotEnoughMem_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2863074 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryNotEnoughMem_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2862999 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryNotEnoughMem_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2863022 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 started
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 20.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryNotEnoughMem_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryNotEnoughMem_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryNotEnoughMem_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 2864161 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryNotEnoughMem_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Reset memQuota to [1099511627776] and swapperEvictionTimeout to [5m0s]
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryNotEnoughMem (23.12s)
=== RUN   TestShardRecoveryCleanup
----------- running TestShardRecoveryCleanup
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryCleanup_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.551µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [108.384µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [115.967µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryCleanup_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryCleanup_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryCleanup_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryCleanup_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryCleanup_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryCleanup_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryCleanup_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryCleanup_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryCleanup_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryCleanup_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryCleanup_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryCleanup_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryCleanup_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryCleanup_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryCleanup_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryCleanup_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryCleanup_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryCleanup_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryCleanup_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryCleanup_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [163840]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [81920] took [5.858763ms]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [163840] replayOffset [81920]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [6.153419ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [20] plasmas, took [6.296106ms]
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryCleanup_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryCleanup_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryCleanup_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryCleanup_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryCleanup_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryCleanup_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryCleanup_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 started
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryCleanup_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryCleanup_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryCleanup_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 32 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 started
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryCleanup_8 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryCleanup (0.42s)
=== RUN   TestShardRecoveryRebuildSharedLog
----------- running TestShardRecoveryRebuildSharedLog
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRebuildSharedLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [735.506µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [778.658µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [790.342µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRebuildSharedLog_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRebuildSharedLog_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRebuildSharedLog_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRebuildSharedLog_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRebuildSharedLog_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRebuildSharedLog_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRebuildSharedLog_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRebuildSharedLog_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRebuildSharedLog_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRebuildSharedLog_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRebuildSharedLog_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRebuildSharedLog_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRebuildSharedLog_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRebuildSharedLog_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRebuildSharedLog_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRebuildSharedLog_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRebuildSharedLog_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRebuildSharedLog_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRebuildSharedLog_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 15.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildSharedLog_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRebuildSharedLog_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildSharedLog_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [2129920] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [32.542675ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [15] plasmas, took [32.894666ms]
Plasma: doInit: data UsedSpace 106549 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 started
Plasma: Start rebuilding recovery log (shards/shard1/data/recovery) for instance test.default.TestShardRecoveryRebuildSharedLog_0
Plasma: Finish rebuilding recovery log (shards/shard1/data/recovery) for instance test.default.TestShardRecoveryRebuildSharedLog_0 (elapsed 5.4532ms)
num pages after recovery 24
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildSharedLog_0 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryRebuildSharedLog (1.11s)
=== RUN   TestShardRecoveryUpgradeWithCheckpoint
----------- running TestShardRecoveryUpgradeWithCheckpoint
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_0) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_0) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.242µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0], Shared [false]. Built [0] plasmas, took [124.657µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_1
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_1) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_1) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_1], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.5µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_1], Shared [false]. Built [0] plasmas, took [100.337µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_2
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_2) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_2) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_2], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.944µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_2], Shared [false]. Built [0] plasmas, took [92.956µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_3
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_3) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_3) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_3], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [451.003µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_3], Shared [false]. Built [0] plasmas, took [529.716µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_4
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_4) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_4) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_4], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.246µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_4], Shared [false]. Built [0] plasmas, took [133.94µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 closed
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 closed
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 closed
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 closed
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_0
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery. Instances to rebuild 1.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithCheckpoint_0) to LSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_0) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 to LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0], Shared [false]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithCheckpoint_0], Shared [false]. Built [0] plasmas, took [4.058µs]
Plasma: Recover from checkpoint test.default.TestShardRecoveryUpgradeWithCheckpoint_0/index/checkpoint.00000000000001
Plasma: checkpoint recovery replayOffset:102400 startOffset:0, endOffset:110592, recoverTime:896.402µs
Plasma: doInit: data UsedSpace 110592 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 started
Plasma: Start rebuilding recovery log (test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery) for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0
Plasma: Finish rebuilding recovery log (test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery) for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 (elapsed 14.598093ms)
num pages after recovery 24
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithCheckpoint_0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 closed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_0 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_0 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_0 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_1 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_1 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_1 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_2 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_2 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_2 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_3 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_3 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_3 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_4 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithCheckpoint_4 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithCheckpoint_4 destroyed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryUpgradeWithCheckpoint (0.43s)
=== RUN   TestShardRecoveryUpgradeWithLogReplay
----------- running TestShardRecoveryUpgradeWithLogReplay
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_0) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_0) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_0], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.525µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_0], Shared [false]. Built [0] plasmas, took [80.941µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_1
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_1) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_1) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_1], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.971µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_1], Shared [false]. Built [0] plasmas, took [84.473µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_2
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_2) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_2) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_2], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [135.518µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_2], Shared [false]. Built [0] plasmas, took [180.762µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_3
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_3) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_3) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_3], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.517µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_3], Shared [false]. Built [0] plasmas, took [104.734µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_4
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_4) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_4) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_4], Shared [false]
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.538µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_4], Shared [false]. Built [0] plasmas, took [90.127µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 closed
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 closed
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 closed
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 closed
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_0
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery. Instances to rebuild 1.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryUpgradeWithLogReplay_0) to LSS (test.default.TestShardRecoveryUpgradeWithLogReplay_0) and RecoveryLSS (test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryUpgradeWithLogReplay_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 to LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_0], Shared [false]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery], Data log [test.default.TestShardRecoveryUpgradeWithLogReplay_0], Shared [false]. Built [0] plasmas, took [3.827µs]
Plasma: doInit: data UsedSpace 102400 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 started
Plasma: Start rebuilding recovery log (test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery) for instance test.default.TestShardRecoveryUpgradeWithLogReplay_0
Plasma: Finish rebuilding recovery log (test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery) for instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 (elapsed 8.200071ms)
num pages after recovery 24
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : all daemons started
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : all daemons stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0(shard1) : LSSCleaner stopped
LSS test.default.TestShardRecoveryUpgradeWithLogReplay_0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 closed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_0 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_0 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_0 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_1 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_1 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_1 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_2 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_2 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_2 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_3 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_3 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_3 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_4 in . ...
Shard shards/shard1(1) : destroying instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardRecoveryUpgradeWithLogReplay_4 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryUpgradeWithLogReplay_4 destroyed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryUpgradeWithLogReplay (0.35s)
=== RUN   TestShardRecoveryRebuildAfterError
----------- running TestShardRecoveryRebuildAfterError
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRebuildAfterError_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.418µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [121.492µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [128.808µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardRecoveryRebuildAfterError_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardRecoveryRebuildAfterError_2
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardRecoveryRebuildAfterError_3
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardRecoveryRebuildAfterError_4
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_4 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_5) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardRecoveryRebuildAfterError_5
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_5 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardRecoveryRebuildAfterError_6
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_6 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_7) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardRecoveryRebuildAfterError_7
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_7 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardRecoveryRebuildAfterError_8
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_8 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_9) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardRecoveryRebuildAfterError_9
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_9 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardRecoveryRebuildAfterError_10
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_10 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_11) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardRecoveryRebuildAfterError_11
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_11 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardRecoveryRebuildAfterError_12
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_12 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_13) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardRecoveryRebuildAfterError_13
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_13 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardRecoveryRebuildAfterError_14
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_14 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_15) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardRecoveryRebuildAfterError_15
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_15 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardRecoveryRebuildAfterError_16
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_16 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_17) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardRecoveryRebuildAfterError_17
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_17 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardRecoveryRebuildAfterError_18
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_18 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_19) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardRecoveryRebuildAfterError_19
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_19 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_19 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_1 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_2 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_3 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_4 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_5 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_5 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_6 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_7 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_7 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_8 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_9 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_9 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_10 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_11 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_11 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_12 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_13 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_13 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_14 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_15 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_15 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_16 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_17 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_17 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_18 closed
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_19 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
Shard shards/shard1(1) : Recovery log is corrupted. Rebuild recovery log when creating recovery point.
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Recovery log is empty. Enable rebuilding recovery log shards/shard1/data/recovery. Instances to rebuild 15.
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryRebuildAfterError_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryRebuildAfterError_0
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryRebuildAfterError_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [2129920] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [35.936826ms]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [15] plasmas, took [36.292693ms]
Plasma: doInit: data UsedSpace 106549 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 started
Plasma: Start rebuilding recovery log (shards/shard1/data/recovery) for instance test.default.TestShardRecoveryRebuildAfterError_0
Plasma: Finish rebuilding recovery log (shards/shard1/data/recovery) for instance test.default.TestShardRecoveryRebuildAfterError_0 (elapsed 8.293102ms)
num pages after recovery 24
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryRebuildAfterError_0 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
--- PASS: TestShardRecoveryRebuildAfterError (1.15s)
=== RUN   TestShardRecoveryAfterDeleteInstance
----------- running TestShardRecoveryAfterDeleteInstance
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryAfterDeleteInstance_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryAfterDeleteInstance_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.817µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [116.131µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [124.08µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryAfterDeleteInstance_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardRecoveryAfterDeleteInstance_2(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryAfterDeleteInstance_2
LSS test.default.TestShardRecoveryAfterDeleteInstance_2/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryAfterDeleteInstance_2/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardRecoveryAfterDeleteInstance_2) to LSS (test.default.TestShardRecoveryAfterDeleteInstance_2) and RecoveryLSS (test.default.TestShardRecoveryAfterDeleteInstance_2/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardRecoveryAfterDeleteInstance_2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_2 to LSS test.default.TestShardRecoveryAfterDeleteInstance_2
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryAfterDeleteInstance_2/recovery], Data log [test.default.TestShardRecoveryAfterDeleteInstance_2], Shared [false]
LSS test.default.TestShardRecoveryAfterDeleteInstance_2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [61.349µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryAfterDeleteInstance_2/recovery], Data log [test.default.TestShardRecoveryAfterDeleteInstance_2], Shared [false]. Built [0] plasmas, took [96.355µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryAfterDeleteInstance_2(shard2) : all daemons started
LSS test.default.TestShardRecoveryAfterDeleteInstance_2/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardRecoveryAfterDeleteInstance_2 started
Shard shards/shard3(3) : Shard Created Successfully
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardRecoveryAfterDeleteInstance_3(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryAfterDeleteInstance_3
LSS test.default.TestShardRecoveryAfterDeleteInstance_3/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryAfterDeleteInstance_3/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardRecoveryAfterDeleteInstance_3) to LSS (test.default.TestShardRecoveryAfterDeleteInstance_3) and RecoveryLSS (test.default.TestShardRecoveryAfterDeleteInstance_3/recovery)
Shard shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardRecoveryAfterDeleteInstance_3
Shard shards/shard3(3) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_3 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardRecoveryAfterDeleteInstance_3 to LSS test.default.TestShardRecoveryAfterDeleteInstance_3
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryAfterDeleteInstance_3/recovery], Data log [test.default.TestShardRecoveryAfterDeleteInstance_3], Shared [false]
LSS test.default.TestShardRecoveryAfterDeleteInstance_3/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [46.606µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryAfterDeleteInstance_3/recovery], Data log [test.default.TestShardRecoveryAfterDeleteInstance_3], Shared [false]. Built [0] plasmas, took [78.843µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryAfterDeleteInstance_3(shard3) : all daemons started
LSS test.default.TestShardRecoveryAfterDeleteInstance_3/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardRecoveryAfterDeleteInstance_3 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryAfterDeleteInstance_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryAfterDeleteInstance_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
LSS test.default.TestShardRecoveryAfterDeleteInstance_2(shard2) : all daemons stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_2/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryAfterDeleteInstance_2 stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_2(shard2) : LSSCleaner stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_2/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryAfterDeleteInstance_2 closed
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
LSS test.default.TestShardRecoveryAfterDeleteInstance_3(shard3) : all daemons stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_3/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardRecoveryAfterDeleteInstance_3 stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_3(shard3) : LSSCleaner stopped
LSS test.default.TestShardRecoveryAfterDeleteInstance_3/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardRecoveryAfterDeleteInstance_3 closed
Shard shards/shard3(3) : All daemons stopped
Shard shards/shard3(3) : All instances closed
Shard shards/shard3(3) : Shutdown completed
Destroy instances matching prefix test.default.TestShardRecoveryAfterDeleteInstance_1 in ...
Start shard recovery from shards: Recover shard shard1 from metadata completed.
: Recover shard shard2 from metadata completed.
: Recover shard shard3 from metadata completed.
: destroying instance test.default.TestShardRecoveryAfterDeleteInstance_1 ...
: removed plasmaId 1 for instance test.default.TestShardRecoveryAfterDeleteInstance_1 ...
: metadata saved successfully
: instance test.default.TestShardRecoveryAfterDeleteInstance_1 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryAfterDeleteInstance_1 destroyed
Destroy instances matching prefix test.default.TestShardRecoveryAfterDeleteInstance_2 in ...
Start shard recovery from shards: Recover shard shard1 from metadata completed.
: Recover shard shard2 from metadata completed.
: Recover shard shard3 from metadata completed.
: destroying instance test.default.TestShardRecoveryAfterDeleteInstance_2 ...
: removed plasmaId 1 for instance test.default.TestShardRecoveryAfterDeleteInstance_2 ...
: metadata saved successfully
: instance test.default.TestShardRecoveryAfterDeleteInstance_2 successfully destroyed
All instances matching prefix test.default.TestShardRecoveryAfterDeleteInstance_2 destroyed
: All instances closed
: Shutdown completed
: All instances closed
: Shutdown completed
: All instances closed
: Shutdown completed
: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
: All instances closed
: destroying instance test.default.TestShardRecoveryAfterDeleteInstance_3 ...
: removed plasmaId 1 for instance test.default.TestShardRecoveryAfterDeleteInstance_3 ...
: metadata saved successfully
: instance test.default.TestShardRecoveryAfterDeleteInstance_3 successfully destroyed
: All instances destroyed
: Shard Destroyed Successfully
--- PASS: TestShardRecoveryAfterDeleteInstance (0.08s)
=== RUN   TestShardRecoveryDestroyShard
----------- running TestShardRecoveryDestroyShard
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyShard_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyShard_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.755µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [123.418µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [134.091µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_2
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_2/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_2) to LSS (test.default.TestShardRecoveryDestroyShard_2) and RecoveryLSS (test.default.TestShardRecoveryDestroyShard_2/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryDestroyShard_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryDestroyShard_2 to LSS test.default.TestShardRecoveryDestroyShard_2
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_2/recovery], Data log [test.default.TestShardRecoveryDestroyShard_2], Shared [false]
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.555µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_2/recovery], Data log [test.default.TestShardRecoveryDestroyShard_2], Shared [false]. Built [0] plasmas, took [94.801µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : all daemons started
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 started
Shard shards/shard3(3) : Shard Created Successfully
Shard shards/shard3(3) : metadata saved successfully
LSS shards/shard3/data(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data
LSS shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_3) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_3
Shard shards/shard3(3) : Add instance test.default.TestShardRecoveryDestroyShard_3 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardRecoveryDestroyShard_3 to LSS shards/shard3/data
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]
LSS shards/shard3/data/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.262µs]
LSS shards/shard3/data(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [95.9µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]. Built [0] plasmas, took [103.885µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard3/data(shard3) : all daemons started
LSS shards/shard3/data/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardRecoveryDestroyShard_3 started
Shard shards/shard4(4) : Shard Created Successfully
Shard shards/shard4(4) : metadata saved successfully
LSS test.default.TestShardRecoveryDestroyShard_4(shard4) : LSSCleaner initialized
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_4
LSS test.default.TestShardRecoveryDestroyShard_4/recovery(shard4) : LSSCleaner initialized for recovery
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_4/recovery
Shard shards/shard4(4) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_4) to LSS (test.default.TestShardRecoveryDestroyShard_4) and RecoveryLSS (test.default.TestShardRecoveryDestroyShard_4/recovery)
Shard shards/shard4(4) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_4
Shard shards/shard4(4) : Add instance test.default.TestShardRecoveryDestroyShard_4 to Shard shards/shard4
Shard shards/shard4(4) : Add instance test.default.TestShardRecoveryDestroyShard_4 to LSS test.default.TestShardRecoveryDestroyShard_4
Shard shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_4/recovery], Data log [test.default.TestShardRecoveryDestroyShard_4], Shared [false]
LSS test.default.TestShardRecoveryDestroyShard_4/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.096µs]
Shard shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_4/recovery], Data log [test.default.TestShardRecoveryDestroyShard_4], Shared [false]. Built [0] plasmas, took [83.366µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardRecoveryDestroyShard_4(shard4) : all daemons started
LSS test.default.TestShardRecoveryDestroyShard_4/recovery(shard4) : all daemons started
Shard shards/shard4(4) : instance test.default.TestShardRecoveryDestroyShard_4 started
Shard shards/shard5(5) : Shard Created Successfully
Shard shards/shard5(5) : metadata saved successfully
LSS shards/shard5/data(shard5) : LSSCleaner initialized
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data
LSS shards/shard5/data/recovery(shard5) : LSSCleaner initialized for recovery
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data/recovery
Shard shards/shard5(5) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_5) to LSS (shards/shard5/data) and RecoveryLSS (shards/shard5/data/recovery)
Shard shards/shard5(5) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_5
Shard shards/shard5(5) : Add instance test.default.TestShardRecoveryDestroyShard_5 to Shard shards/shard5
Shard shards/shard5(5) : Add instance test.default.TestShardRecoveryDestroyShard_5 to LSS shards/shard5/data
Shard shards/shard5(5) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]
LSS shards/shard5/data/recovery(shard5) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.617µs]
LSS shards/shard5/data(shard5) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from data and recovery log, took [88.32µs]
Shard shards/shard5(5) : Shard.doRecovery: Done recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]. Built [0] plasmas, took [94.793µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard5/data(shard5) : all daemons started
LSS shards/shard5/data/recovery(shard5) : all daemons started
Shard shards/shard5(5) : instance test.default.TestShardRecoveryDestroyShard_5 started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : all daemons stopped
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 stopped
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : LSSCleaner stopped
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 closed
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard3(3) : instance test.default.TestShardRecoveryDestroyShard_3 stopped
Shard shards/shard3(3) : instance test.default.TestShardRecoveryDestroyShard_3 closed
LSS shards/shard3/data(shard3) : all daemons stopped
LSS shards/shard3/data/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : All daemons stopped
LSS shards/shard3/data(shard3) : LSSCleaner stopped
LSS shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : All instances closed
Shard shards/shard3(3) : Shutdown completed
LSS test.default.TestShardRecoveryDestroyShard_4(shard4) : all daemons stopped
LSS test.default.TestShardRecoveryDestroyShard_4/recovery(shard4) : all daemons stopped
Shard shards/shard4(4) : instance test.default.TestShardRecoveryDestroyShard_4 stopped
LSS test.default.TestShardRecoveryDestroyShard_4(shard4) : LSSCleaner stopped
LSS test.default.TestShardRecoveryDestroyShard_4/recovery(shard4) : LSSCleaner stopped
Shard shards/shard4(4) : instance test.default.TestShardRecoveryDestroyShard_4 closed
Shard shards/shard4(4) : All daemons stopped
Shard shards/shard4(4) : All instances closed
Shard shards/shard4(4) : Shutdown completed
Shard shards/shard5(5) : instance test.default.TestShardRecoveryDestroyShard_5 stopped
Shard shards/shard5(5) : instance test.default.TestShardRecoveryDestroyShard_5 closed
LSS shards/shard5/data(shard5) : all daemons stopped
LSS shards/shard5/data/recovery(shard5) : all daemons stopped
Shard shards/shard5(5) : All daemons stopped
LSS shards/shard5/data(shard5) : LSSCleaner stopped
LSS shards/shard5/data/recovery(shard5) : LSSCleaner stopped
Shard shards/shard5(5) : All instances closed
Shard shards/shard5(5) : Shutdown completed
Start shard recovery from shards: Recover shard shard1 from metadata completed.
: Recover shard shard2 from metadata completed.
: Recover shard shard3 from metadata completed.
: Recover shard shard4 from metadata completed.
Failed to recover shard shards/shard5. Error: fatal: checksum mismatch when loading metadata file
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyShard_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardRecoveryDestroyShard_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [124.287µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [8192] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [197.101µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [1] plasmas, took [271.295µs]
Plasma: doInit: data UsedSpace 4150 recovery UsedSpace 8294
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 started
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_2
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardRecoveryDestroyShard_2/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardRecoveryDestroyShard_2) to LSS (test.default.TestShardRecoveryDestroyShard_2) and RecoveryLSS (test.default.TestShardRecoveryDestroyShard_2/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardRecoveryDestroyShard_2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryDestroyShard_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardRecoveryDestroyShard_2 to LSS test.default.TestShardRecoveryDestroyShard_2
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_2/recovery], Data log [test.default.TestShardRecoveryDestroyShard_2], Shared [false]
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [89.488µs]
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [8192] replayOffset [4096]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [154.565µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardRecoveryDestroyShard_2/recovery], Data log [test.default.TestShardRecoveryDestroyShard_2], Shared [false]. Built [1] plasmas, took [217.265µs]
Plasma: doInit: data UsedSpace 8192 recovery UsedSpace 8258
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 started
No instances in shard being recovered. Destroy shard shards/shard3: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
No instances in shard being recovered. Destroy shard shards/shard4: All instances closed
: destroying instance test.default.TestShardRecoveryDestroyShard_4 ...
: removed plasmaId 1 for instance test.default.TestShardRecoveryDestroyShard_4 ...
: metadata saved successfully
: instance test.default.TestShardRecoveryDestroyShard_4 successfully destroyed
: All instances destroyed
: Shard Destroyed Successfully
Destroy corrupt shard shards/shard5
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : all daemons started
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardRecoveryDestroyShard_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : all daemons stopped
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 stopped
LSS test.default.TestShardRecoveryDestroyShard_2(shard2) : LSSCleaner stopped
LSS test.default.TestShardRecoveryDestroyShard_2/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 closed
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : destroying instance test.default.TestShardRecoveryDestroyShard_2 ...
Shard shards/shard2(2) : removed plasmaId 1 for instance test.default.TestShardRecoveryDestroyShard_2 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardRecoveryDestroyShard_2 successfully destroyed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
--- PASS: TestShardRecoveryDestroyShard (0.17s)
=== RUN   TestSkipLogSimple
--- PASS: TestSkipLogSimple (0.00s)
=== RUN   TestShardMetadata
----------- running TestShardMetadata
Shard test.default.TestShardMetadata(0) : Shard Created Successfully
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 1 to instance test.default.TestShardMetadata-0
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-0 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 2 to instance test.default.TestShardMetadata-1
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-1 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 3 to instance test.default.TestShardMetadata-2
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-2 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 4 to instance test.default.TestShardMetadata-3
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-3 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 5 to instance test.default.TestShardMetadata-4
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-4 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 6 to instance test.default.TestShardMetadata-5
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-5 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 7 to instance test.default.TestShardMetadata-6
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-6 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 8 to instance test.default.TestShardMetadata-7
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-7 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 9 to instance test.default.TestShardMetadata-8
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-8 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : metadata saved successfully
Shard test.default.TestShardMetadata(0) : Assign plasmaId 10 to instance test.default.TestShardMetadata-9
Shard test.default.TestShardMetadata(0) : Add instance test.default.TestShardMetadata-9 to Shard test.default.TestShardMetadata
Shard test.default.TestShardMetadata(0) : All daemons stopped
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-0 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-1 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-2 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-3 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-4 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-5 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-6 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-7 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-8 closed
Shard test.default.TestShardMetadata(0) : instance test.default.TestShardMetadata-9 closed
Shard test.default.TestShardMetadata(0) : All instances closed
Shard test.default.TestShardMetadata(0) : Shutdown completed
Shard test.default.TestShardMetadata(0) : Shard Created Successfully
--- PASS: TestShardMetadata (0.00s)
=== RUN   TestPlasmaId
----------- running TestPlasmaId
Shard test.default.TestPlasmaId(0) : Shard Created Successfully
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 1 to instance test.default.TestPlasmaId-0
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-0 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 2 to instance test.default.TestPlasmaId-1
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-1 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 3 to instance test.default.TestPlasmaId-2
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-2 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 4 to instance test.default.TestPlasmaId-3
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-3 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 5 to instance test.default.TestPlasmaId-4
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-4 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 6 to instance test.default.TestPlasmaId-5
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-5 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 7 to instance test.default.TestPlasmaId-6
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-6 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 8 to instance test.default.TestPlasmaId-7
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-7 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 9 to instance test.default.TestPlasmaId-8
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-8 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : metadata saved successfully
Shard test.default.TestPlasmaId(0) : Assign plasmaId 10 to instance test.default.TestPlasmaId-9
Shard test.default.TestPlasmaId(0) : Add instance test.default.TestPlasmaId-9 to Shard test.default.TestPlasmaId
Shard test.default.TestPlasmaId(0) : instance test.default.TestPlasmaId-5 closed
--- PASS: TestPlasmaId (0.00s)
=== RUN   TestShardPersistence
----------- running TestShardPersistence
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardPersistence_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardPersistence_1
Shard shards/shard1(1) : Add instance test.default.TestShardPersistence_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardPersistence_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [54.819µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [95.731µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [102.851µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardPersistence_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardPersistence_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardPersistence_2
Shard shards/shard1(1) : Add instance test.default.TestShardPersistence_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardPersistence_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardPersistence_2 started
Shard shards/shard1(1) : instance test.default.TestShardPersistence_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardPersistence_2 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardPersistence_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardPersistence_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardPersistence_2 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardPersistence_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardPersistence_1 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardPersistence_1 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.default.TestShardPersistence_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardPersistence_1 successfully destroyed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardPersistence (0.18s)
=== RUN   TestShardDestroy
----------- running TestShardDestroy
Shard test.default.TestShardDestroy(0) : Shard Created Successfully
Shard test.default.TestShardDestroy(0) : metadata saved successfully
Shard test.default.TestShardDestroy(0) : Assign plasmaId 1 to instance test.default.TestShardDestroy-0
Shard test.default.TestShardDestroy(0) : Add instance test.default.TestShardDestroy-0 to Shard test.default.TestShardDestroy
Shard test.default.TestShardDestroy(0) : metadata saved successfully
Shard test.default.TestShardDestroy(0) : Assign plasmaId 2 to instance test.default.TestShardDestroy-1
Shard test.default.TestShardDestroy(0) : Add instance test.default.TestShardDestroy-1 to Shard test.default.TestShardDestroy
Shard test.default.TestShardDestroy(0) : All daemons stopped
Shard test.default.TestShardDestroy(0) : instance test.default.TestShardDestroy-0 closed
Shard test.default.TestShardDestroy(0) : instance test.default.TestShardDestroy-1 closed
Shard test.default.TestShardDestroy(0) : All instances closed
Shard test.default.TestShardDestroy(0) : Shutdown completed
Shard test.default.TestShardDestroy(0) : destroying instance test.default.TestShardDestroy-1 ...
Shard test.default.TestShardDestroy(0) : removed plasmaId 2 for instance test.default.TestShardDestroy-1 ...
Shard test.default.TestShardDestroy(0) : metadata saved successfully
Shard test.default.TestShardDestroy(0) : instance test.default.TestShardDestroy-1 successfully destroyed
Shard test.default.TestShardDestroy(0) : destroying instance test.default.TestShardDestroy-0 ...
Shard test.default.TestShardDestroy(0) : removed plasmaId 1 for instance test.default.TestShardDestroy-0 ...
Shard test.default.TestShardDestroy(0) : metadata saved successfully
Shard test.default.TestShardDestroy(0) : instance test.default.TestShardDestroy-0 successfully destroyed
Shard test.default.TestShardDestroy(0) : All daemons stopped
Shard test.default.TestShardDestroy(0) : All instances closed
Shard test.default.TestShardDestroy(0) : All instances destroyed
Shard test.default.TestShardDestroy(0) : Shard Destroyed Successfully
--- PASS: TestShardDestroy (0.00s)
=== RUN   TestShardClose
----------- running TestShardClose
Shard test.default.TestShardClose(0) : Shard Created Successfully
Shard test.default.TestShardClose(0) : metadata saved successfully
Shard test.default.TestShardClose(0) : Assign plasmaId 1 to instance test.default.TestShardClose-0
Shard test.default.TestShardClose(0) : Add instance test.default.TestShardClose-0 to Shard test.default.TestShardClose
Shard test.default.TestShardClose(0) : metadata saved successfully
Shard test.default.TestShardClose(0) : Assign plasmaId 2 to instance test.default.TestShardClose-1
Shard test.default.TestShardClose(0) : Add instance test.default.TestShardClose-1 to Shard test.default.TestShardClose
Shard test.default.TestShardClose(0) : instance test.default.TestShardClose-1 closed
Shard test.default.TestShardClose(0) : All daemons stopped
Shard test.default.TestShardClose(0) : instance test.default.TestShardClose-0 closed
Shard test.default.TestShardClose(0) : All instances closed
Shard test.default.TestShardClose(0) : Shutdown completed
--- PASS: TestShardClose (5.00s)
=== RUN   TestShardMgrRecovery
----------- running TestShardMgrRecovery
Shard test.default.TestShardMgrRecovery/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardMgrRecovery/shards/shard1(1) : metadata saved successfully
Shard test.default.TestShardMgrRecovery/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardMgrRecovery_1
Shard test.default.TestShardMgrRecovery/shards/shard1(1) : Add instance test.default.TestShardMgrRecovery_1 to Shard test.default.TestShardMgrRecovery/shards/shard1
Shard test.default.TestShardMgrRecovery/shards/shard2(2) : Shard Created Successfully
Shard test.default.TestShardMgrRecovery/shards/shard2(2) : metadata saved successfully
Shard test.default.TestShardMgrRecovery/shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardMgrRecovery_2
Shard test.default.TestShardMgrRecovery/shards/shard2(2) : Add instance test.default.TestShardMgrRecovery_2 to Shard test.default.TestShardMgrRecovery/shards/shard2
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard Created Successfully
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : metadata saved successfully
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardMgrRecovery_3
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Add instance test.default.TestShardMgrRecovery_3 to Shard test.default.TestShardMgrRecovery/shards/shard3
Start shard recovery from test.default.TestShardMgrRecovery/shards: Recover shard shard1 from metadata completed.
Recover shard shard2 from metadata completed.
Recover shard shard3 from metadata completed.
Start shard recovery from test.default.TestShardMgrRecovery/shards: Recover shard shard1 from metadata completed.
Fail to recover shard test.default.TestShardMgrRecovery/shards/shard2. Error: fatal: checksum mismatch when loading metadata file
Recover shard shard3 from metadata completed.
LSS test.default.TestShardMgrRecovery/shards/shard3/data(shard3) : LSSCleaner initialized
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardMgrRecovery/shards/shard3/data
LSS test.default.TestShardMgrRecovery/shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardMgrRecovery/shards/shard3/data/recovery
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Map plasma instance (test.default.TestShardMgrRecovery_3) to LSS (test.default.TestShardMgrRecovery/shards/shard3/data) and RecoveryLSS (test.default.TestShardMgrRecovery/shards/shard3/data/recovery)
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardMgrRecovery_3
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Add instance test.default.TestShardMgrRecovery_3 to Shard test.default.TestShardMgrRecovery/shards/shard3
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Add instance test.default.TestShardMgrRecovery_3 to LSS test.default.TestShardMgrRecovery/shards/shard3/data
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardMgrRecovery/shards/shard3/data/recovery], Data log [test.default.TestShardMgrRecovery/shards/shard3/data], Shared [true]
LSS test.default.TestShardMgrRecovery/shards/shard3/data/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [54.029µs]
LSS test.default.TestShardMgrRecovery/shards/shard3/data(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [96.468µs]
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardMgrRecovery/shards/shard3/data/recovery], Data log [test.default.TestShardMgrRecovery/shards/shard3/data], Shared [true]. Built [0] plasmas, took [107.718µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : instance test.default.TestShardMgrRecovery_3 started
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : instance test.default.TestShardMgrRecovery_3 stopped
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : instance test.default.TestShardMgrRecovery_3 closed
: All instances closed
: Shutdown completed
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : All daemons stopped
LSS test.default.TestShardMgrRecovery/shards/shard3/data(shard3) : LSSCleaner stopped
LSS test.default.TestShardMgrRecovery/shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : All instances closed
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shutdown completed
: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : All daemons stopped
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : All instances closed
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : All instances destroyed
Shard test.default.TestShardMgrRecovery/shards/shard3(3) : Shard Destroyed Successfully
--- PASS: TestShardMgrRecovery (0.02s)
=== RUN   TestShardDeadData
----------- running TestShardDeadData
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardDeadData_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardDeadData_1
Shard shards/shard1(1) : Add instance test.default.TestShardDeadData_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardDeadData_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.792µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [96.778µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [104.346µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardDeadData_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardDeadData_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardDeadData_2
Shard shards/shard1(1) : Add instance test.default.TestShardDeadData_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardDeadData_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardDeadData_2 started
Shard shards/shard1(1) : instance test.default.TestShardDeadData_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardDeadData_2 closed
Shard shards/shard1(1) : destroying instance test.default.TestShardDeadData_2 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardDeadData_2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardDeadData_2 successfully destroyed
Shard shards/shard1(1) : instance test.default.TestShardDeadData_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardDeadData_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardDeadData (0.17s)
=== RUN   TestShardConfigUpdate
----------- running TestShardConfigUpdate
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardConfigUpdate_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardConfigUpdate_1
Shard shards/shard1(1) : Add instance test.default.TestShardConfigUpdate_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardConfigUpdate_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [107.3µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [231.157µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [240.123µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestShardConfigUpdate_1 started
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.default.TestShardConfigUpdate_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardConfigUpdate_1 closed
Shard shards/shard1(1) : Swapper stopped
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardConfigUpdate (0.03s)
=== RUN   TestShardSelection
----------- running TestShardSelection
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardSelection_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardSelection_1
Shard shards/shard1(1) : Add instance test.default.TestShardSelection_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardSelection_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [83.168µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [139.619µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [150.224µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardSelection_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardSelection_2) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardSelection_2
Shard shards/shard2(2) : Add instance test.default.TestShardSelection_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardSelection_2 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.628µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [85.983µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [92.277µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard2/data(shard2) : all daemons started
LSS shards/shard2/data/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardSelection_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardSelection_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardSelection_3
Shard shards/shard1(1) : Add instance test.default.TestShardSelection_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardSelection_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardSelection_3 started
Shard shards/shard1(1) : instance test.default.TestShardSelection_3 stopped
Shard shards/shard1(1) : instance test.default.TestShardSelection_3 closed
Shard shards/shard2(2) : instance test.default.TestShardSelection_2 stopped
Shard shards/shard2(2) : instance test.default.TestShardSelection_2 closed
Shard shards/shard1(1) : instance test.default.TestShardSelection_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardSelection_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
LSS shards/shard2/data(shard2) : all daemons stopped
LSS shards/shard2/data/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
--- PASS: TestShardSelection (0.06s)
=== RUN   TestShardWriteAmp
----------- running TestShardWriteAmp
Start shard recovery from shards: Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardWriteAmp_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardWriteAmp_1
Shard shards/shard1(1) : Add instance test.default.TestShardWriteAmp_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardWriteAmp_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [56.532µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [98.415µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [105.066µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardWriteAmp_1 started
Shard shards/shard1(1) : instance test.default.TestShardWriteAmp_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardWriteAmp_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardWriteAmp (10.09s)
=== RUN   TestShardStats
----------- running TestShardStats
Start shard recovery from shards. Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardStats_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardStats_1
Shard shards/shard1(1) : Add instance test.default.TestShardStats_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardStats_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.265µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [122.675µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [133.056µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardStats_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardStats_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardStats_2
Shard shards/shard1(1) : Add instance test.default.TestShardStats_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardStats_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardStats_2 started
{
"num_instances":            2,
"num_shared_instances":     2,
"num_lssctx":               1,
"num_sctxs":                25,
"num_free_sctxs":           17,
"num_wctxs":                38,
"num_free_wctxs":           34,
"num_readers":              0,
"num_writers":              2,
"num_swapper_ctx":          0,
"memory_size":              801070,
"memory_size_index":        5070,
"memory_size_bloom_filter": 0,
"memory_size_delta":        21824,
"reclaim_pending":          3479682,
"buf_memused":              46957,
"num_pages":                73,
"items_count":              15000,
"total_records":            15000,
"inserts":                  15000,
"deletes":                  0,
"lss_data_size":            171059,
"lss_used_space":           315392,
"est_disk_size":            317438,
"est_recovery_size":        30362,
"lss_fragmentation":        45,
"lss_num_reads":            208896,
"lss_read_bs":              202321,
"lss_blk_read_bs":          0,
"rlss_num_reads":           0,
"lss_rdr_reads_bs":         0,
"lss_blk_rdr_reads_bs":     0,
"bytes_written":            315392,
"bytes_incoming":           660000,
"write_amp_avg":            0.48,
"avg_sweep_interval":       "0s",
"num_swapperWorker":        0,
"num_swapperWriter":        16,
"num_swapperPages":         0,
"cached_records":           15000,
"resident_ratio":           1.00000,
"cleaner_num_reads":        0,
"cleaner_read_bs":          0,
"cleaner_blk_read_bs":      0,
"recovery_num_lssctx":           1,
"recovery_num_sctxs":            5,
"recovery_num_free_sctxs":       1,
"recovery_buf_memused":          0,
"recovery_data_size":            22828,
"recovery_used_space":           28672,
"recovery_lss_blk_read_bs":      0,
"recovery_lss_read_bs":          0,
"recovery_lss__num_reads":       0,
"recovery_bytes_written":        28672,
"recovery_lss_frag":             45,
"recovery_cleaner_num_reads":    0,
"recovery_cleaner_read_bs":      0,
"recovery_cleaner_blk_read_bs":  0,
"mvcc_purge_ratio":         1.00000
}
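Several fields in the shard stats block above appear to be derived from the raw counters it also reports. As a hypothetical cross-check (assuming `write_amp_avg` is `bytes_written / bytes_incoming`, and `lss_fragmentation` is the percentage of used log space no longer holding live data), the logged numbers are self-consistent:

```python
# Cross-check of derived values in the shard stats dump above.
# The formulas here are assumptions, not taken from the plasma source.
stats = {
    "bytes_written":  315392,   # physical bytes persisted to the data LSS
    "bytes_incoming": 660000,   # logical bytes written by the test
    "lss_data_size":  171059,   # live data still referenced in the log
    "lss_used_space": 315392,   # total log space currently in use
}

# Average write amplification: physical bytes out per logical byte in.
write_amp_avg = stats["bytes_written"] / stats["bytes_incoming"]

# Fragmentation: percent of used log space that is stale (integer, as logged).
lss_fragmentation = (
    100 * (stats["lss_used_space"] - stats["lss_data_size"])
    // stats["lss_used_space"]
)

print(round(write_amp_avg, 2))   # 0.48, matching "write_amp_avg"
print(lss_fragmentation)         # 45, matching "lss_fragmentation"
```

The same fragmentation formula also reproduces the `lss_fragmentation` of 43 in the later `lss_stats` sub-block (193887 live bytes out of 344064 used).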
{
"memory_quota":         1099511627776,
"count":                10000,
"compacts":             97,
"purges":               0,
"splits":               48,
"merges":               0,
"inserts":              10000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          531960,
"memory_size_index":    3424,
"allocated":            2878476,
"freed":                2346516,
"reclaimed":            0,
"reclaim_pending":      2346516,
"reclaim_list_size":    0,
"reclaim_list_count":   0,
"reclaim_threshold":    50,
"allocated_index":      3424,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            49,
"items_count":          10000,
"total_records":        10000,
"num_rec_allocs":       48841,
"num_rec_frees":        38841,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       440000,
"lss_data_size":        111262,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        210889,
"est_recovery_size":    17606,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           10000,
"cache_misses":         0,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     1.00000,
"currSn":               1,
"gcSn":                 0,
"gcSnIntervals":       "[0 1]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       17,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           0,
"page_cnt":             0,
"page_itemcnt":         0,
"avg_item_size":        0,
"avg_page_size":        0,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":862758,
"page_bytes_compressed":200519,
"compression_ratio":    4.30262,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    11912,
"lss_stats":            
  {
    "shared":               true,
    "punch_hole_support":   true,
    "buf_memused":          46957,
    "lss_data_size":        193887,
    "lss_used_space":       344064,
    "lss_disk_size":        344064,
    "lss_fragmentation":    43,
    "lss_num_reads":        0,
    "lss_read_bs":          202321,
    "lss_blk_read_bs":      208896,
    "bytes_written":        344064,
    "bytes_incoming":       660000,
    "write_amp":            0.00,
    "write_amp_avg":        0.48,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      315392,
    "num_sctxs":            30,
    "num_free_sctxs":       18,
    "num_swapperWriter":    32
  }
}
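The per-instance block above reports a `compression_ratio` alongside the page byte counters it is presumably derived from. A quick sketch (assuming the ratio is marshalled bytes over compressed bytes; the field names are taken from the dump, the formula is an assumption) reproduces the logged value:

```python
# Hypothetical cross-check of the compression_ratio reported above,
# assuming it equals page_bytes_marshalled / page_bytes_compressed.
page_bytes_marshalled = 862758  # serialized page bytes before compression
page_bytes_compressed = 200519  # page bytes actually flushed after compression

compression_ratio = page_bytes_marshalled / page_bytes_compressed
print(f"{compression_ratio:.5f}")  # 4.30262, as logged
```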
Shard shards/shard1(1) : instance test.default.TestShardStats_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardStats_2 closed
Shard shards/shard1(1) : instance test.default.TestShardStats_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardStats_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardStats (0.14s)
=== RUN   TestShardMultipleWriters
----------- running TestShardMultipleWriters
Start shard recovery from shards. Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardMultipleWriters_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardMultipleWriters_1
Shard shards/shard1(1) : Add instance test.default.TestShardMultipleWriters_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardMultipleWriters_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.775µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [122.637µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [132.905µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardMultipleWriters_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardMultipleWriters_2
Shard shards/shard1(1) : Add instance test.default.TestShardMultipleWriters_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardMultipleWriters_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_2 started
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_2 closed
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardMultipleWriters_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardMultipleWriters (0.13s)
=== RUN   TestShardDestroyMultiple
----------- running TestShardDestroyMultiple
Start shard recovery from test.default.TestShardDestroyMultiple/shards. Directory test.default.TestShardDestroyMultiple/shards does not exist. Skip shard recovery.
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardDestroyMultiple/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/shards/shard1/data
LSS test.default.TestShardDestroyMultiple/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/shards/shard1/data/recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Map plasma instance (test.default.TestShardDestroyMultiple/temp.index/1) to LSS (test.default.TestShardDestroyMultiple/shards/shard1/data) and RecoveryLSS (test.default.TestShardDestroyMultiple/shards/shard1/data/recovery)
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardDestroyMultiple/temp.index/1
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/1 to Shard test.default.TestShardDestroyMultiple/shards/shard1
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/1 to LSS test.default.TestShardDestroyMultiple/shards/shard1/data
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardDestroyMultiple/shards/shard1/data/recovery], Data log [test.default.TestShardDestroyMultiple/shards/shard1/data], Shared [true]
LSS test.default.TestShardDestroyMultiple/shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.306µs]
LSS test.default.TestShardDestroyMultiple/shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [94.115µs]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardDestroyMultiple/shards/shard1/data/recovery], Data log [test.default.TestShardDestroyMultiple/shards/shard1/data], Shared [true]. Built [0] plasmas, took [101.527µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardDestroyMultiple/shards/shard1/data(shard1) : all daemons started
LSS test.default.TestShardDestroyMultiple/shards/shard1/data/recovery(shard1) : all daemons started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/1 started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardDestroyMultiple/temp.index/2(shard1) : LSSCleaner initialized
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/temp.index/2
LSS test.default.TestShardDestroyMultiple/temp.index/2/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/temp.index/2/recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Map plasma instance (test.default.TestShardDestroyMultiple/temp.index/2) to LSS (test.default.TestShardDestroyMultiple/temp.index/2) and RecoveryLSS (test.default.TestShardDestroyMultiple/temp.index/2/recovery)
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardDestroyMultiple/temp.index/2
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/2 to Shard test.default.TestShardDestroyMultiple/shards/shard1
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/2 to LSS test.default.TestShardDestroyMultiple/temp.index/2
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardDestroyMultiple/temp.index/2/recovery], Data log [test.default.TestShardDestroyMultiple/temp.index/2], Shared [false]
LSS test.default.TestShardDestroyMultiple/temp.index/2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.983µs]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardDestroyMultiple/temp.index/2/recovery], Data log [test.default.TestShardDestroyMultiple/temp.index/2], Shared [false]. Built [0] plasmas, took [83.3µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardDestroyMultiple/temp.index/2(shard1) : all daemons started
LSS test.default.TestShardDestroyMultiple/temp.index/2/recovery(shard1) : all daemons started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/2 started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardDestroyMultiple/temp.index/3(shard1) : LSSCleaner initialized
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/temp.index/3
LSS test.default.TestShardDestroyMultiple/temp.index/3/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardDestroyMultiple/temp.index/3/recovery
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Map plasma instance (test.default.TestShardDestroyMultiple/temp.index/3) to LSS (test.default.TestShardDestroyMultiple/temp.index/3) and RecoveryLSS (test.default.TestShardDestroyMultiple/temp.index/3/recovery)
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardDestroyMultiple/temp.index/3
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/3 to Shard test.default.TestShardDestroyMultiple/shards/shard1
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Add instance test.default.TestShardDestroyMultiple/temp.index/3 to LSS test.default.TestShardDestroyMultiple/temp.index/3
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardDestroyMultiple/temp.index/3/recovery], Data log [test.default.TestShardDestroyMultiple/temp.index/3], Shared [false]
LSS test.default.TestShardDestroyMultiple/temp.index/3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.212µs]
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardDestroyMultiple/temp.index/3/recovery], Data log [test.default.TestShardDestroyMultiple/temp.index/3], Shared [false]. Built [0] plasmas, took [107.712µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardDestroyMultiple/temp.index/3(shard1) : all daemons started
LSS test.default.TestShardDestroyMultiple/temp.index/3/recovery(shard1) : all daemons started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/3 started
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/1 stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/1 closed
LSS test.default.TestShardDestroyMultiple/temp.index/2(shard1) : all daemons stopped
LSS test.default.TestShardDestroyMultiple/temp.index/2/recovery(shard1) : all daemons stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/2 stopped
LSS test.default.TestShardDestroyMultiple/temp.index/2(shard1) : LSSCleaner stopped
LSS test.default.TestShardDestroyMultiple/temp.index/2/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/2 closed
LSS test.default.TestShardDestroyMultiple/temp.index/3(shard1) : all daemons stopped
LSS test.default.TestShardDestroyMultiple/temp.index/3/recovery(shard1) : all daemons stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/3 stopped
LSS test.default.TestShardDestroyMultiple/temp.index/3(shard1) : LSSCleaner stopped
LSS test.default.TestShardDestroyMultiple/temp.index/3/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/3 closed
Destroy instances matching prefix test.default.TestShardDestroyMultiple/temp.index/ in . ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : destroying instance test.default.TestShardDestroyMultiple/temp.index/1 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : removed plasmaId 1 for instance test.default.TestShardDestroyMultiple/temp.index/1 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/1 successfully destroyed
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : destroying instance test.default.TestShardDestroyMultiple/temp.index/2 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardDestroyMultiple/temp.index/2 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/2 successfully destroyed
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : destroying instance test.default.TestShardDestroyMultiple/temp.index/3 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : removed plasmaId 3 for instance test.default.TestShardDestroyMultiple/temp.index/3 ...
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : metadata saved successfully
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : instance test.default.TestShardDestroyMultiple/temp.index/3 successfully destroyed
All instances matching prefix test.default.TestShardDestroyMultiple/temp.index/ destroyed
LSS test.default.TestShardDestroyMultiple/shards/shard1/data(shard1) : all daemons stopped
LSS test.default.TestShardDestroyMultiple/shards/shard1/data/recovery(shard1) : all daemons stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : All daemons stopped
LSS test.default.TestShardDestroyMultiple/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardDestroyMultiple/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : All instances closed
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : All instances destroyed
Shard test.default.TestShardDestroyMultiple/shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestShardDestroyMultiple (0.10s)
=== RUN   TestShardBackupCorrupted
----------- running TestShardBackupCorrupted
Start shard recovery from test.default.TestShardBackupCorrupted/shards. Directory test.default.TestShardBackupCorrupted/shards does not exist. Skip shard recovery.
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardBackupCorrupted/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/shards/shard1/data
LSS test.default.TestShardBackupCorrupted/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/shards/shard1/data/recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Map plasma instance (test.default.TestShardBackupCorrupted/temp/1) to LSS (test.default.TestShardBackupCorrupted/shards/shard1/data) and RecoveryLSS (test.default.TestShardBackupCorrupted/shards/shard1/data/recovery)
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardBackupCorrupted/temp/1
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/temp/1 to Shard test.default.TestShardBackupCorrupted/shards/shard1
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/temp/1 to LSS test.default.TestShardBackupCorrupted/shards/shard1/data
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardBackupCorrupted/shards/shard1/data/recovery], Data log [test.default.TestShardBackupCorrupted/shards/shard1/data], Shared [true]
LSS test.default.TestShardBackupCorrupted/shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.239µs]
LSS test.default.TestShardBackupCorrupted/shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [120.122µs]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardBackupCorrupted/shards/shard1/data/recovery], Data log [test.default.TestShardBackupCorrupted/shards/shard1/data], Shared [true]. Built [0] plasmas, took [130.328µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardBackupCorrupted/shards/shard1/data(shard1) : all daemons started
LSS test.default.TestShardBackupCorrupted/shards/shard1/data/recovery(shard1) : all daemons started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/1 started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardBackupCorrupted/corrupted/2(shard1) : LSSCleaner initialized
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/corrupted/2
LSS test.default.TestShardBackupCorrupted/corrupted/2/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/corrupted/2/recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Map plasma instance (test.default.TestShardBackupCorrupted/corrupted/2) to LSS (test.default.TestShardBackupCorrupted/corrupted/2) and RecoveryLSS (test.default.TestShardBackupCorrupted/corrupted/2/recovery)
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardBackupCorrupted/corrupted/2
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/corrupted/2 to Shard test.default.TestShardBackupCorrupted/shards/shard1
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/corrupted/2 to LSS test.default.TestShardBackupCorrupted/corrupted/2
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardBackupCorrupted/corrupted/2/recovery], Data log [test.default.TestShardBackupCorrupted/corrupted/2], Shared [false]
LSS test.default.TestShardBackupCorrupted/corrupted/2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.649µs]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardBackupCorrupted/corrupted/2/recovery], Data log [test.default.TestShardBackupCorrupted/corrupted/2], Shared [false]. Built [0] plasmas, took [82.238µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardBackupCorrupted/corrupted/2(shard1) : all daemons started
LSS test.default.TestShardBackupCorrupted/corrupted/2/recovery(shard1) : all daemons started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/corrupted/2 started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardBackupCorrupted/temp/3(shard1) : LSSCleaner initialized
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/temp/3
LSS test.default.TestShardBackupCorrupted/temp/3/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorrupted/temp/3/recovery
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Map plasma instance (test.default.TestShardBackupCorrupted/temp/3) to LSS (test.default.TestShardBackupCorrupted/temp/3) and RecoveryLSS (test.default.TestShardBackupCorrupted/temp/3/recovery)
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardBackupCorrupted/temp/3
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/temp/3 to Shard test.default.TestShardBackupCorrupted/shards/shard1
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Add instance test.default.TestShardBackupCorrupted/temp/3 to LSS test.default.TestShardBackupCorrupted/temp/3
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardBackupCorrupted/temp/3/recovery], Data log [test.default.TestShardBackupCorrupted/temp/3], Shared [false]
LSS test.default.TestShardBackupCorrupted/temp/3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [89.888µs]
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardBackupCorrupted/temp/3/recovery], Data log [test.default.TestShardBackupCorrupted/temp/3], Shared [false]. Built [0] plasmas, took [133.956µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardBackupCorrupted/temp/3(shard1) : all daemons started
LSS test.default.TestShardBackupCorrupted/temp/3/recovery(shard1) : all daemons started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/3 started
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/1 stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/1 closed
LSS test.default.TestShardBackupCorrupted/corrupted/2(shard1) : all daemons stopped
LSS test.default.TestShardBackupCorrupted/corrupted/2/recovery(shard1) : all daemons stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/corrupted/2 stopped
LSS test.default.TestShardBackupCorrupted/corrupted/2(shard1) : LSSCleaner stopped
LSS test.default.TestShardBackupCorrupted/corrupted/2/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/corrupted/2 closed
LSS test.default.TestShardBackupCorrupted/temp/3(shard1) : all daemons stopped
LSS test.default.TestShardBackupCorrupted/temp/3/recovery(shard1) : all daemons stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/3 stopped
LSS test.default.TestShardBackupCorrupted/temp/3(shard1) : LSSCleaner stopped
LSS test.default.TestShardBackupCorrupted/temp/3/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : instance test.default.TestShardBackupCorrupted/temp/3 closed
LSS test.default.TestShardBackupCorrupted/shards/shard1/data(shard1) : all daemons stopped
LSS test.default.TestShardBackupCorrupted/shards/shard1/data/recovery(shard1) : all daemons stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : All daemons stopped
LSS test.default.TestShardBackupCorrupted/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardBackupCorrupted/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : All instances closed
Shard test.default.TestShardBackupCorrupted/shards/shard1(1) : Shutdown completed
Start shard recovery from test.default.TestShardBackupCorrupted/shards: Recover shard shard1 from metadata completed.
backup directory test.default.TestShardBackupCorrupted/backup/2
: destroying instance test.default.TestShardBackupCorrupted/corrupted/2 ...
: removed plasmaId 2 for instance test.default.TestShardBackupCorrupted/corrupted/2 ...
: metadata saved successfully
: instance test.default.TestShardBackupCorrupted/corrupted/2 successfully destroyed
backup directory test.default.TestShardBackupCorrupted/backup/shard1
: destroying instance test.default.TestShardBackupCorrupted/temp/1 ...
: removed plasmaId 1 for instance test.default.TestShardBackupCorrupted/temp/1 ...
: metadata saved successfully
: instance test.default.TestShardBackupCorrupted/temp/1 successfully destroyed
backup directory test.default.TestShardBackupCorrupted/backup/3
: destroying instance test.default.TestShardBackupCorrupted/temp/3 ...
: removed plasmaId 3 for instance test.default.TestShardBackupCorrupted/temp/3 ...
: metadata saved successfully
: instance test.default.TestShardBackupCorrupted/temp/3 successfully destroyed
: All instances closed
: Shutdown completed
: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
--- PASS: TestShardBackupCorrupted (0.08s)
=== RUN   TestShardBackupCorruptedShare
----------- running TestShardBackupCorruptedShare
Start shard recovery from test.default.TestShardBackupCorruptedShare/shards
Directory test.default.TestShardBackupCorruptedShare/shards does not exist. Skip shard recovery.
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorruptedShare/shards/shard1/data
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Map plasma instance (test.default.TestShardBackupCorruptedShare/temp/1) to LSS (test.default.TestShardBackupCorruptedShare/shards/shard1/data) and RecoveryLSS (test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery)
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardBackupCorruptedShare/temp/1
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Add instance test.default.TestShardBackupCorruptedShare/temp/1 to Shard test.default.TestShardBackupCorruptedShare/shards/shard1
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Add instance test.default.TestShardBackupCorruptedShare/temp/1 to LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery], Data log [test.default.TestShardBackupCorruptedShare/shards/shard1/data], Shared [true]
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.091µs]
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [91.553µs]
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery], Data log [test.default.TestShardBackupCorruptedShare/shards/shard1/data], Shared [true]. Built [0] plasmas, took [98.748µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data(shard1) : all daemons started
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery(shard1) : all daemons started
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/temp/1 started
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : metadata saved successfully
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Map plasma instance (test.default.TestShardBackupCorruptedShare/corrupted/2) to LSS (test.default.TestShardBackupCorruptedShare/shards/shard1/data) and RecoveryLSS (test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery)
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardBackupCorruptedShare/corrupted/2
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Add instance test.default.TestShardBackupCorruptedShare/corrupted/2 to Shard test.default.TestShardBackupCorruptedShare/shards/shard1
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Add instance test.default.TestShardBackupCorruptedShare/corrupted/2 to LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/corrupted/2 started
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/temp/1 stopped
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/temp/1 closed
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/corrupted/2 stopped
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : instance test.default.TestShardBackupCorruptedShare/corrupted/2 closed
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data(shard1) : all daemons stopped
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery(shard1) : all daemons stopped
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : All daemons stopped
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardBackupCorruptedShare/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : All instances closed
Shard test.default.TestShardBackupCorruptedShare/shards/shard1(1) : Shutdown completed
Start shard recovery from test.default.TestShardBackupCorruptedShare/shards: Recover shard shard1 from metadata completed.
backup directory test.default.TestShardBackupCorruptedShare/backup/shard1
: destroying instance test.default.TestShardBackupCorruptedShare/corrupted/2 ...
: removed plasmaId 2 for instance test.default.TestShardBackupCorruptedShare/corrupted/2 ...
: metadata saved successfully
: instance test.default.TestShardBackupCorruptedShare/corrupted/2 successfully destroyed
backup directory test.default.TestShardBackupCorruptedShare/backup/shard1
: destroying instance test.default.TestShardBackupCorruptedShare/temp/1 ...
: removed plasmaId 1 for instance test.default.TestShardBackupCorruptedShare/temp/1 ...
: metadata saved successfully
: instance test.default.TestShardBackupCorruptedShare/temp/1 successfully destroyed
: All instances closed
: Shutdown completed
: All instances closed
: All instances destroyed
: Shard Destroyed Successfully
--- PASS: TestShardBackupCorruptedShare (0.03s)
=== RUN   TestShardCorruption
----------- running TestShardCorruption
Shard test.default.TestShardCorruption(0) : Shard Created Successfully
Shard test.default.TestShardCorruption(0) : metadata saved successfully
LSS test.default.TestShardCorruption/data(shard0) : LSSCleaner initialized
Shard test.default.TestShardCorruption(0) : LSSCtx Created Successfully. Path=test.default.TestShardCorruption/data
LSS test.default.TestShardCorruption/data/recovery(shard0) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruption(0) : LSSCtx Created Successfully. Path=test.default.TestShardCorruption/data/recovery
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-0) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 1 to instance test.default.TestShardCorruption-0
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-0 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-0 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-1) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 2 to instance test.default.TestShardCorruption-1
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-1 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-1 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-2) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 3 to instance test.default.TestShardCorruption-2
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-2 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-2 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-3) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 4 to instance test.default.TestShardCorruption-3
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-3 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-3 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-4) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 5 to instance test.default.TestShardCorruption-4
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-4 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-4 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-5) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 6 to instance test.default.TestShardCorruption-5
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-5 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-5 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-6) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 7 to instance test.default.TestShardCorruption-6
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-6 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-6 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-7) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 8 to instance test.default.TestShardCorruption-7
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-7 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-7 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-8) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 9 to instance test.default.TestShardCorruption-8
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-8 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-8 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : metadata saved successfully
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-9) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 10 to instance test.default.TestShardCorruption-9
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-9 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-9 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : All daemons stopped
Shard test.default.TestShardCorruption(0) : Shard Created Successfully
LSS test.default.TestShardCorruption/data(shard0) : LSSCleaner initialized
Shard test.default.TestShardCorruption(0) : LSSCtx Created Successfully. Path=test.default.TestShardCorruption/data
LSS test.default.TestShardCorruption/data/recovery(shard0) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruption(0) : LSSCtx Created Successfully. Path=test.default.TestShardCorruption/data/recovery
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-0) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-1) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-2) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-3) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-4) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-5) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 6 to instance test.default.TestShardCorruption-5
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-5 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-5 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-6) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 7 to instance test.default.TestShardCorruption-6
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-6 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-6 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-7) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 8 to instance test.default.TestShardCorruption-7
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-7 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-7 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-8) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 9 to instance test.default.TestShardCorruption-8
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-8 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-8 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : Map plasma instance (test.default.TestShardCorruption-9) to LSS (test.default.TestShardCorruption/data) and RecoveryLSS (test.default.TestShardCorruption/data/recovery)
Shard test.default.TestShardCorruption(0) : Assign plasmaId 10 to instance test.default.TestShardCorruption-9
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-9 to Shard test.default.TestShardCorruption
Shard test.default.TestShardCorruption(0) : Add instance test.default.TestShardCorruption-9 to LSS test.default.TestShardCorruption/data
Shard test.default.TestShardCorruption(0) : All daemons stopped
Shard test.default.TestShardCorruption(0) : instance test.default.TestShardCorruption-5 closed
Shard test.default.TestShardCorruption(0) : instance test.default.TestShardCorruption-6 closed
Shard test.default.TestShardCorruption(0) : instance test.default.TestShardCorruption-7 closed
Shard test.default.TestShardCorruption(0) : instance test.default.TestShardCorruption-8 closed
Shard test.default.TestShardCorruption(0) : instance test.default.TestShardCorruption-9 closed
LSS test.default.TestShardCorruption/data(shard0) : LSSCleaner stopped
LSS test.default.TestShardCorruption/data/recovery(shard0) : LSSCleaner stopped
Shard test.default.TestShardCorruption(0) : All instances closed
Shard test.default.TestShardCorruption(0) : Shutdown completed
--- PASS: TestShardCorruption (0.01s)
=== RUN   TestShardCorruptionAddInstance
----------- running TestShardCorruptionAddInstance
Start shard recovery from test.default.TestShardCorruptionAddInstance/shards
Directory test.default.TestShardCorruptionAddInstance/shards does not exist. Skip shard recovery.
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard1/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard1/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Map plasma instance (test.default.TestShardCorruptionAddInstance-0) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard1/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard1/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-0
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Add instance test.default.TestShardCorruptionAddInstance-0 to Shard test.default.TestShardCorruptionAddInstance/shards/shard1
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Add instance test.default.TestShardCorruptionAddInstance-0 to LSS test.default.TestShardCorruptionAddInstance/shards/shard1/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard2/data(shard2) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard2/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard2/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Map plasma instance (test.default.TestShardCorruptionAddInstance-1) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard2/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard2/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-1
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Add instance test.default.TestShardCorruptionAddInstance-1 to Shard test.default.TestShardCorruptionAddInstance/shards/shard2
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Add instance test.default.TestShardCorruptionAddInstance-1 to LSS test.default.TestShardCorruptionAddInstance/shards/shard2/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard3/data(shard3) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard3/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard3/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Map plasma instance (test.default.TestShardCorruptionAddInstance-2) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard3/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard3/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-2
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Add instance test.default.TestShardCorruptionAddInstance-2 to Shard test.default.TestShardCorruptionAddInstance/shards/shard3
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Add instance test.default.TestShardCorruptionAddInstance-2 to LSS test.default.TestShardCorruptionAddInstance/shards/shard3/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard4/data(shard4) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard4/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard4/data/recovery(shard4) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard4/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Map plasma instance (test.default.TestShardCorruptionAddInstance-3) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard4/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard4/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-3
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Add instance test.default.TestShardCorruptionAddInstance-3 to Shard test.default.TestShardCorruptionAddInstance/shards/shard4
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Add instance test.default.TestShardCorruptionAddInstance-3 to LSS test.default.TestShardCorruptionAddInstance/shards/shard4/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard5/data(shard5) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard5/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard5/data/recovery(shard5) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard5/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Map plasma instance (test.default.TestShardCorruptionAddInstance-4) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard5/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard5/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-4
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Add instance test.default.TestShardCorruptionAddInstance-4 to Shard test.default.TestShardCorruptionAddInstance/shards/shard5
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Add instance test.default.TestShardCorruptionAddInstance-4 to LSS test.default.TestShardCorruptionAddInstance/shards/shard5/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data(shard6) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard6/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data/recovery(shard6) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard6/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Map plasma instance (test.default.TestShardCorruptionAddInstance-5) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard6/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard6/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-5
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Add instance test.default.TestShardCorruptionAddInstance-5 to Shard test.default.TestShardCorruptionAddInstance/shards/shard6
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Add instance test.default.TestShardCorruptionAddInstance-5 to LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : metadata saved successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Map plasma instance (test.default.TestShardCorruptionAddInstance-6) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard6/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard6/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Assign plasmaId 2 to instance test.default.TestShardCorruptionAddInstance-6
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Add instance test.default.TestShardCorruptionAddInstance-6 to Shard test.default.TestShardCorruptionAddInstance/shards/shard6
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Add instance test.default.TestShardCorruptionAddInstance-6 to LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data(shard7) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard7/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data/recovery(shard7) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard7/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Map plasma instance (test.default.TestShardCorruptionAddInstance-7) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard7/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard7/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-7
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Add instance test.default.TestShardCorruptionAddInstance-7 to Shard test.default.TestShardCorruptionAddInstance/shards/shard7
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Add instance test.default.TestShardCorruptionAddInstance-7 to LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : metadata saved successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Map plasma instance (test.default.TestShardCorruptionAddInstance-8) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard7/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard7/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Assign plasmaId 2 to instance test.default.TestShardCorruptionAddInstance-8
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Add instance test.default.TestShardCorruptionAddInstance-8 to Shard test.default.TestShardCorruptionAddInstance/shards/shard7
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Add instance test.default.TestShardCorruptionAddInstance-8 to LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Shard Created Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : metadata saved successfully
LSS test.default.TestShardCorruptionAddInstance/shards/shard8/data(shard8) : LSSCleaner initialized
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard8/data
LSS test.default.TestShardCorruptionAddInstance/shards/shard8/data/recovery(shard8) : LSSCleaner initialized for recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : LSSCtx Created Successfully. Path=test.default.TestShardCorruptionAddInstance/shards/shard8/data/recovery
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Map plasma instance (test.default.TestShardCorruptionAddInstance-9) to LSS (test.default.TestShardCorruptionAddInstance/shards/shard8/data) and RecoveryLSS (test.default.TestShardCorruptionAddInstance/shards/shard8/data/recovery)
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Assign plasmaId 1 to instance test.default.TestShardCorruptionAddInstance-9
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Add instance test.default.TestShardCorruptionAddInstance-9 to Shard test.default.TestShardCorruptionAddInstance/shards/shard8
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Add instance test.default.TestShardCorruptionAddInstance-9 to LSS test.default.TestShardCorruptionAddInstance/shards/shard8/data
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : instance test.default.TestShardCorruptionAddInstance-0 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard1(1) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : instance test.default.TestShardCorruptionAddInstance-1 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard2/data(shard2) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard2(2) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : instance test.default.TestShardCorruptionAddInstance-2 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard3/data(shard3) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard3(3) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : instance test.default.TestShardCorruptionAddInstance-3 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard4/data(shard4) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard4/data/recovery(shard4) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard4(4) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : instance test.default.TestShardCorruptionAddInstance-4 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard5/data(shard5) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard5/data/recovery(shard5) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard5(5) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : instance test.default.TestShardCorruptionAddInstance-5 closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : instance test.default.TestShardCorruptionAddInstance-6 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data(shard6) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard6/data/recovery(shard6) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard6(6) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : instance test.default.TestShardCorruptionAddInstance-7 closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : instance test.default.TestShardCorruptionAddInstance-8 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data(shard7) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard7/data/recovery(shard7) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard7(7) : Shard Destroyed Successfully
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : All daemons stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : instance test.default.TestShardCorruptionAddInstance-9 closed
LSS test.default.TestShardCorruptionAddInstance/shards/shard8/data(shard8) : LSSCleaner stopped
LSS test.default.TestShardCorruptionAddInstance/shards/shard8/data/recovery(shard8) : LSSCleaner stopped
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : All instance closed
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : All instance destroyed
Shard test.default.TestShardCorruptionAddInstance/shards/shard8(8) : Shard Destroyed Successfully
--- PASS: TestShardCorruptionAddInstance (0.07s)
=== RUN   TestShardCreateError
----------- running TestShardCreateError
Start shard recovery from test.default.TestShardCreateError/shards
Directory test.default.TestShardCreateError/shards does not exist. Skip shard recovery.
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard Created Successfully
Shard test.default.TestShardCreateError/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard1/data
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard1/data/recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : Map plasma instance (test.default.TestShardCreateError/1) to LSS (test.default.TestShardCreateError/shards/shard1/data) and RecoveryLSS (test.default.TestShardCreateError/shards/shard1/data/recovery)
Shard test.default.TestShardCreateError/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardCreateError/1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/1 to Shard test.default.TestShardCreateError/shards/shard1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/1 to LSS test.default.TestShardCreateError/shards/shard1/data
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/shards/shard1/data/recovery], Data log [test.default.TestShardCreateError/shards/shard1/data], Shared [true]
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.538µs]
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [114.895µs]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/shards/shard1/data/recovery], Data log [test.default.TestShardCreateError/shards/shard1/data], Shared [true]. Built [0] plasmas, took [122.557µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : all daemons started
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : all daemons started
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 started
Shard test.default.TestShardCreateError/shards/shard1(1) : (fatal error - PlasmaId found for new instance. Path=test.default.TestShardCreateError/1, plasmaId=1)
Shard test.default.TestShardCreateError/shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardCreateError/2(shard1) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2
LSS test.default.TestShardCreateError/2/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2/recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : Map plasma instance (test.default.TestShardCreateError/2) to LSS (test.default.TestShardCreateError/2) and RecoveryLSS (test.default.TestShardCreateError/2/recovery)
Shard test.default.TestShardCreateError/shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/2 to Shard test.default.TestShardCreateError/shards/shard1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/2 to LSS test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]
LSS test.default.TestShardCreateError/2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.445µs]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]. Built [0] plasmas, took [108.514µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardCreateError/2(shard1) : all daemons started
LSS test.default.TestShardCreateError/2/recovery(shard1) : all daemons started
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 started
Shard test.default.TestShardCreateError/shards/shard1(1) : (fatal error - PlasmaId found for new instance. Path=test.default.TestShardCreateError/2, plasmaId=2)
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 closed
LSS test.default.TestShardCreateError/2(shard1) : all daemons stopped
LSS test.default.TestShardCreateError/2/recovery(shard1) : all daemons stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 stopped
LSS test.default.TestShardCreateError/2(shard1) : LSSCleaner stopped
LSS test.default.TestShardCreateError/2/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 closed
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : all daemons stopped
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : all daemons stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : All daemons stopped
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : All instance closed
Shard test.default.TestShardCreateError/shards/shard1(1) : Shutdown completed
Start shard recovery from test.default.TestShardCreateError/shards: Recover shard shard1 from metadata completed.
Shard test.default.TestShardCreateError/shards/shard1(1) : (fatal error - PlasmaId found for new instance. Path=test.default.TestShardCreateError/1, plasmaId=1)
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard1/data
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard1/data/recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : Map plasma instance (test.default.TestShardCreateError/1) to LSS (test.default.TestShardCreateError/shards/shard1/data) and RecoveryLSS (test.default.TestShardCreateError/shards/shard1/data/recovery)
Shard test.default.TestShardCreateError/shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardCreateError/1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/1 to Shard test.default.TestShardCreateError/shards/shard1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/1 to LSS test.default.TestShardCreateError/shards/shard1/data
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/shards/shard1/data/recovery], Data log [test.default.TestShardCreateError/shards/shard1/data], Shared [true]
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [123.697µs]
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [8192] replayOffset [4096]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [232.542µs]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/shards/shard1/data/recovery], Data log [test.default.TestShardCreateError/shards/shard1/data], Shared [true]. Built [1] plasmas, took [319.691µs]
Plasma: doInit: data UsedSpace 4150 recovery UsedSpace 8294
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 started
Shard test.default.TestShardCreateError/shards/shard1(1) : (fatal error - PlasmaId found for new instance. Path=test.default.TestShardCreateError/2, plasmaId=2)
LSS test.default.TestShardCreateError/2(shard1) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2
LSS test.default.TestShardCreateError/2/recovery(shard1) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2/recovery
Shard test.default.TestShardCreateError/shards/shard1(1) : Map plasma instance (test.default.TestShardCreateError/2) to LSS (test.default.TestShardCreateError/2) and RecoveryLSS (test.default.TestShardCreateError/2/recovery)
Shard test.default.TestShardCreateError/shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/2 to Shard test.default.TestShardCreateError/shards/shard1
Shard test.default.TestShardCreateError/shards/shard1(1) : Add instance test.default.TestShardCreateError/2 to LSS test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]
LSS test.default.TestShardCreateError/2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [121.281µs]
LSS test.default.TestShardCreateError/2(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [8192] replayOffset [4096]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [206.153µs]
Shard test.default.TestShardCreateError/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]. Built [1] plasmas, took [289.895µs]
Plasma: doInit: data UsedSpace 8192 recovery UsedSpace 8258
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 started
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/1 closed
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 stopped
LSS test.default.TestShardCreateError/2(shard1) : LSSCleaner stopped
LSS test.default.TestShardCreateError/2/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : instance test.default.TestShardCreateError/2 closed
Shard test.default.TestShardCreateError/shards/shard1(1) : All daemons stopped
LSS test.default.TestShardCreateError/shards/shard1/data(shard1) : LSSCleaner stopped
LSS test.default.TestShardCreateError/shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard1(1) : All instance closed
Shard test.default.TestShardCreateError/shards/shard1(1) : Shutdown completed
Start shard recovery from test.default.TestShardCreateError/shards
Fail to recover shard test.default.TestShardCreateError/shards/shard1. Error: fatal: checksum mismatch when loading metadata file
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard Created Successfully
Shard test.default.TestShardCreateError/shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardCreateError/shards/shard2/data(shard2) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard2/data
LSS test.default.TestShardCreateError/shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/shards/shard2/data/recovery
Shard test.default.TestShardCreateError/shards/shard2(2) : Map plasma instance (test.default.TestShardCreateError/1) to LSS (test.default.TestShardCreateError/shards/shard2/data) and RecoveryLSS (test.default.TestShardCreateError/shards/shard2/data/recovery)
Shard test.default.TestShardCreateError/shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardCreateError/1
Shard test.default.TestShardCreateError/shards/shard2(2) : Add instance test.default.TestShardCreateError/1 to Shard test.default.TestShardCreateError/shards/shard2
Shard test.default.TestShardCreateError/shards/shard2(2) : Add instance test.default.TestShardCreateError/1 to LSS test.default.TestShardCreateError/shards/shard2/data
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/shards/shard2/data/recovery], Data log [test.default.TestShardCreateError/shards/shard2/data], Shared [true]
LSS test.default.TestShardCreateError/shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.781µs]
LSS test.default.TestShardCreateError/shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [119.788µs]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/shards/shard2/data/recovery], Data log [test.default.TestShardCreateError/shards/shard2/data], Shared [true]. Built [0] plasmas, took [129.573µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/1 started
Fail to find shard for instance test.default.TestShardCreateError/2 due to corrupted or missing shards. Reassign to a different shard for this instance.
Shard test.default.TestShardCreateError/shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardCreateError/2(shard2) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2
LSS test.default.TestShardCreateError/2/recovery(shard2) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2/recovery
Shard test.default.TestShardCreateError/shards/shard2(2) : Map plasma instance (test.default.TestShardCreateError/2) to LSS (test.default.TestShardCreateError/2) and RecoveryLSS (test.default.TestShardCreateError/2/recovery)
Shard test.default.TestShardCreateError/shards/shard2(2) : Assign plasmaId 2 to instance test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard2(2) : Add instance test.default.TestShardCreateError/2 to Shard test.default.TestShardCreateError/shards/shard2
Shard test.default.TestShardCreateError/shards/shard2(2) : Add instance test.default.TestShardCreateError/2 to LSS test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]
LSS test.default.TestShardCreateError/2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [16384]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [12288] took [145.739µs]
LSS test.default.TestShardCreateError/2(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [16384] replayOffset [12288]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [268.316µs]
Shard test.default.TestShardCreateError/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]. Built [1] plasmas, took [349.467µs]
Plasma: doInit: data UsedSpace 16384 recovery UsedSpace 16470
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/2 started
Shard test.default.TestShardCreateError/shards/shard2(2) : (fatal error - PlasmaId found for new instance. Path=test.default.TestShardCreateError/2, plasmaId=2)
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/1 stopped
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/1 closed
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/2 stopped
LSS test.default.TestShardCreateError/2(shard2) : LSSCleaner stopped
LSS test.default.TestShardCreateError/2/recovery(shard2) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard2(2) : instance test.default.TestShardCreateError/2 closed
Shard test.default.TestShardCreateError/shards/shard2(2) : All daemons stopped
LSS test.default.TestShardCreateError/shards/shard2/data(shard2) : LSSCleaner stopped
LSS test.default.TestShardCreateError/shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard2(2) : All instance closed
Shard test.default.TestShardCreateError/shards/shard2(2) : Shutdown completed
Start shard recovery from test.default.TestShardCreateError/shards
Fail to recover shard test.default.TestShardCreateError/shards/shard1. Error: fatal: checksum mismatch when loading metadata file
Recover shard shard2 from metadata completed.
Fail to find shard for instance test.default.TestShardCreateError/2 due to corrupted or missing shards. Reassign to a different shard for this instance.
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard Created Successfully
Shard test.default.TestShardCreateError/shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardCreateError/2(shard3) : LSSCleaner initialized
Shard test.default.TestShardCreateError/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2
LSS test.default.TestShardCreateError/2/recovery(shard3) : LSSCleaner initialized for recovery
Shard test.default.TestShardCreateError/shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardCreateError/2/recovery
Shard test.default.TestShardCreateError/shards/shard3(3) : Map plasma instance (test.default.TestShardCreateError/2) to LSS (test.default.TestShardCreateError/2) and RecoveryLSS (test.default.TestShardCreateError/2/recovery)
Shard test.default.TestShardCreateError/shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard3(3) : Add instance test.default.TestShardCreateError/2 to Shard test.default.TestShardCreateError/shards/shard3
Shard test.default.TestShardCreateError/shards/shard3(3) : Add instance test.default.TestShardCreateError/2 to LSS test.default.TestShardCreateError/2
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]
LSS test.default.TestShardCreateError/2/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [24576]
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [20480] took [1.069341ms]
LSS test.default.TestShardCreateError/2(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [24576] replayOffset [20480]
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [1.136768ms]
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardCreateError/2/recovery], Data log [test.default.TestShardCreateError/2], Shared [false]. Built [1] plasmas, took [1.183041ms]
Plasma: doInit: data UsedSpace 24576 recovery UsedSpace 24682
Shard test.default.TestShardCreateError/shards/shard3(3) : instance test.default.TestShardCreateError/2 started
Shard test.default.TestShardCreateError/shards/shard3(3) : instance test.default.TestShardCreateError/2 stopped
LSS test.default.TestShardCreateError/2(shard3) : LSSCleaner stopped
LSS test.default.TestShardCreateError/2/recovery(shard3) : LSSCleaner stopped
Shard test.default.TestShardCreateError/shards/shard3(3) : instance test.default.TestShardCreateError/2 closed
: All instance closed
: Shutdown completed
Shard test.default.TestShardCreateError/shards/shard3(3) : All daemons stopped
Shard test.default.TestShardCreateError/shards/shard3(3) : All instance closed
Shard test.default.TestShardCreateError/shards/shard3(3) : Shutdown completed
: All instance closed
: All instance destroyed
: Shard Destroyed Successfully
Shard test.default.TestShardCreateError/shards/shard3(3) : All daemons stopped
Shard test.default.TestShardCreateError/shards/shard3(3) : All instance closed
Shard test.default.TestShardCreateError/shards/shard3(3) : destroying instance test.default.TestShardCreateError/2 ...
Shard test.default.TestShardCreateError/shards/shard3(3) : removed plasmaId 1 for instance test.default.TestShardCreateError/2 ...
Shard test.default.TestShardCreateError/shards/shard3(3) : metadata saved successfully
Shard test.default.TestShardCreateError/shards/shard3(3) : instance test.default.TestShardCreateError/2 successfully destroyed
Shard test.default.TestShardCreateError/shards/shard3(3) : All instance destroyed
Shard test.default.TestShardCreateError/shards/shard3(3) : Shard Destroyed Successfully
--- PASS: TestShardCreateError (0.17s)
=== RUN   TestShardNumInsts
----------- running TestShardNumInsts
Start shard recovery from shards
Directory shards does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardNumInsts_0
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [375.647µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [403.898µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [413.611µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_1
LSS test.default.TestShardNumInsts_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_1) to LSS (test.default.TestShardNumInsts_1) and RecoveryLSS (test.default.TestShardNumInsts_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestShardNumInsts_1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_1 to LSS test.default.TestShardNumInsts_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_1/recovery], Data log [test.default.TestShardNumInsts_1], Shared [false]
LSS test.default.TestShardNumInsts_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.958µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_1/recovery], Data log [test.default.TestShardNumInsts_1], Shared [false]. Built [0] plasmas, took [77.926µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_1(shard1) : all daemons started
LSS test.default.TestShardNumInsts_1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestShardNumInsts_2
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_3
LSS test.default.TestShardNumInsts_3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_3/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_3) to LSS (test.default.TestShardNumInsts_3) and RecoveryLSS (test.default.TestShardNumInsts_3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestShardNumInsts_3
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_3 to LSS test.default.TestShardNumInsts_3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_3/recovery], Data log [test.default.TestShardNumInsts_3], Shared [false]
LSS test.default.TestShardNumInsts_3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.591µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_3/recovery], Data log [test.default.TestShardNumInsts_3], Shared [false]. Built [0] plasmas, took [81.85µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_3(shard1) : all daemons started
LSS test.default.TestShardNumInsts_3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestShardNumInsts_4
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_5
LSS test.default.TestShardNumInsts_5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_5/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_5) to LSS (test.default.TestShardNumInsts_5) and RecoveryLSS (test.default.TestShardNumInsts_5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.default.TestShardNumInsts_5
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_5 to LSS test.default.TestShardNumInsts_5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_5/recovery], Data log [test.default.TestShardNumInsts_5], Shared [false]
LSS test.default.TestShardNumInsts_5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.043µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_5/recovery], Data log [test.default.TestShardNumInsts_5], Shared [false]. Built [0] plasmas, took [87.176µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_5(shard1) : all daemons started
LSS test.default.TestShardNumInsts_5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_5 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_6) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.default.TestShardNumInsts_6
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_6 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_7
LSS test.default.TestShardNumInsts_7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_7/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_7) to LSS (test.default.TestShardNumInsts_7) and RecoveryLSS (test.default.TestShardNumInsts_7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.default.TestShardNumInsts_7
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_7 to LSS test.default.TestShardNumInsts_7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_7/recovery], Data log [test.default.TestShardNumInsts_7], Shared [false]
LSS test.default.TestShardNumInsts_7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [83.048µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_7/recovery], Data log [test.default.TestShardNumInsts_7], Shared [false]. Built [0] plasmas, took [96.903µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_7(shard1) : all daemons started
LSS test.default.TestShardNumInsts_7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_7 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_8) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.default.TestShardNumInsts_8
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_8 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_9
LSS test.default.TestShardNumInsts_9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_9/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_9) to LSS (test.default.TestShardNumInsts_9) and RecoveryLSS (test.default.TestShardNumInsts_9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.default.TestShardNumInsts_9
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_9 to LSS test.default.TestShardNumInsts_9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_9/recovery], Data log [test.default.TestShardNumInsts_9], Shared [false]
LSS test.default.TestShardNumInsts_9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.601µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_9/recovery], Data log [test.default.TestShardNumInsts_9], Shared [false]. Built [0] plasmas, took [106.09µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_9(shard1) : all daemons started
LSS test.default.TestShardNumInsts_9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_9 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_10) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 11 to instance test.default.TestShardNumInsts_10
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_10 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_10 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_10 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_11(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_11
LSS test.default.TestShardNumInsts_11/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_11/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_11) to LSS (test.default.TestShardNumInsts_11) and RecoveryLSS (test.default.TestShardNumInsts_11/recovery)
Shard shards/shard1(1) : Assign plasmaId 12 to instance test.default.TestShardNumInsts_11
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_11 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_11 to LSS test.default.TestShardNumInsts_11
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_11/recovery], Data log [test.default.TestShardNumInsts_11], Shared [false]
LSS test.default.TestShardNumInsts_11/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.52µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_11/recovery], Data log [test.default.TestShardNumInsts_11], Shared [false]. Built [0] plasmas, took [79.57µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_11(shard1) : all daemons started
LSS test.default.TestShardNumInsts_11/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_11 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_12) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 13 to instance test.default.TestShardNumInsts_12
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_12 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_12 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_12 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_13(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_13
LSS test.default.TestShardNumInsts_13/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_13/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_13) to LSS (test.default.TestShardNumInsts_13) and RecoveryLSS (test.default.TestShardNumInsts_13/recovery)
Shard shards/shard1(1) : Assign plasmaId 14 to instance test.default.TestShardNumInsts_13
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_13 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_13 to LSS test.default.TestShardNumInsts_13
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_13/recovery], Data log [test.default.TestShardNumInsts_13], Shared [false]
LSS test.default.TestShardNumInsts_13/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [43.889µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_13/recovery], Data log [test.default.TestShardNumInsts_13], Shared [false]. Built [0] plasmas, took [88.219µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_13(shard1) : all daemons started
LSS test.default.TestShardNumInsts_13/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_13 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_14) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 15 to instance test.default.TestShardNumInsts_14
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_14 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_14 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_14 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_15(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_15
LSS test.default.TestShardNumInsts_15/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_15/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_15) to LSS (test.default.TestShardNumInsts_15) and RecoveryLSS (test.default.TestShardNumInsts_15/recovery)
Shard shards/shard1(1) : Assign plasmaId 16 to instance test.default.TestShardNumInsts_15
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_15 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_15 to LSS test.default.TestShardNumInsts_15
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_15/recovery], Data log [test.default.TestShardNumInsts_15], Shared [false]
LSS test.default.TestShardNumInsts_15/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.895µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_15/recovery], Data log [test.default.TestShardNumInsts_15], Shared [false]. Built [0] plasmas, took [87.176µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_15(shard1) : all daemons started
LSS test.default.TestShardNumInsts_15/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_15 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_16) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 17 to instance test.default.TestShardNumInsts_16
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_16 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_16 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_16 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_17(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_17
LSS test.default.TestShardNumInsts_17/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_17/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_17) to LSS (test.default.TestShardNumInsts_17) and RecoveryLSS (test.default.TestShardNumInsts_17/recovery)
Shard shards/shard1(1) : Assign plasmaId 18 to instance test.default.TestShardNumInsts_17
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_17 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_17 to LSS test.default.TestShardNumInsts_17
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_17/recovery], Data log [test.default.TestShardNumInsts_17], Shared [false]
LSS test.default.TestShardNumInsts_17/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [73.919µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_17/recovery], Data log [test.default.TestShardNumInsts_17], Shared [false]. Built [0] plasmas, took [84.449µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_17(shard1) : all daemons started
LSS test.default.TestShardNumInsts_17/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_17 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_18) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 19 to instance test.default.TestShardNumInsts_18
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_18 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_18 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_18 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.default.TestShardNumInsts_19(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_19
LSS test.default.TestShardNumInsts_19/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_19/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNumInsts_19) to LSS (test.default.TestShardNumInsts_19) and RecoveryLSS (test.default.TestShardNumInsts_19/recovery)
Shard shards/shard1(1) : Assign plasmaId 20 to instance test.default.TestShardNumInsts_19
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_19 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNumInsts_19 to LSS test.default.TestShardNumInsts_19
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_19/recovery], Data log [test.default.TestShardNumInsts_19], Shared [false]
LSS test.default.TestShardNumInsts_19/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.689µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_19/recovery], Data log [test.default.TestShardNumInsts_19], Shared [false]. Built [0] plasmas, took [77.893µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_19(shard1) : all daemons started
LSS test.default.TestShardNumInsts_19/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_19 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_20) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardNumInsts_20
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_20 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_20 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.441µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [106.711µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [115.802µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard2/data(shard2) : all daemons started
LSS shards/shard2/data/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_20 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_21(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_21
LSS test.default.TestShardNumInsts_21/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_21/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_21) to LSS (test.default.TestShardNumInsts_21) and RecoveryLSS (test.default.TestShardNumInsts_21/recovery)
Shard shards/shard2(2) : Assign plasmaId 2 to instance test.default.TestShardNumInsts_21
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_21 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_21 to LSS test.default.TestShardNumInsts_21
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_21/recovery], Data log [test.default.TestShardNumInsts_21], Shared [false]
LSS test.default.TestShardNumInsts_21/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.026µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_21/recovery], Data log [test.default.TestShardNumInsts_21], Shared [false]. Built [0] plasmas, took [89.267µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_21(shard2) : all daemons started
LSS test.default.TestShardNumInsts_21/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_21 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_22) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 3 to instance test.default.TestShardNumInsts_22
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_22 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_22 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_22 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_23(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_23
LSS test.default.TestShardNumInsts_23/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_23/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_23) to LSS (test.default.TestShardNumInsts_23) and RecoveryLSS (test.default.TestShardNumInsts_23/recovery)
Shard shards/shard2(2) : Assign plasmaId 4 to instance test.default.TestShardNumInsts_23
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_23 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_23 to LSS test.default.TestShardNumInsts_23
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_23/recovery], Data log [test.default.TestShardNumInsts_23], Shared [false]
LSS test.default.TestShardNumInsts_23/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.425µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_23/recovery], Data log [test.default.TestShardNumInsts_23], Shared [false]. Built [0] plasmas, took [87.764µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_23(shard2) : all daemons started
LSS test.default.TestShardNumInsts_23/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_23 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_24) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 5 to instance test.default.TestShardNumInsts_24
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_24 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_24 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_24 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_25(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_25
LSS test.default.TestShardNumInsts_25/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_25/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_25) to LSS (test.default.TestShardNumInsts_25) and RecoveryLSS (test.default.TestShardNumInsts_25/recovery)
Shard shards/shard2(2) : Assign plasmaId 6 to instance test.default.TestShardNumInsts_25
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_25 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_25 to LSS test.default.TestShardNumInsts_25
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_25/recovery], Data log [test.default.TestShardNumInsts_25], Shared [false]
LSS test.default.TestShardNumInsts_25/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.892µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_25/recovery], Data log [test.default.TestShardNumInsts_25], Shared [false]. Built [0] plasmas, took [108.981µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_25(shard2) : all daemons started
LSS test.default.TestShardNumInsts_25/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_25 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_26) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 7 to instance test.default.TestShardNumInsts_26
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_26 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_26 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_26 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_27(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_27
LSS test.default.TestShardNumInsts_27/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_27/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_27) to LSS (test.default.TestShardNumInsts_27) and RecoveryLSS (test.default.TestShardNumInsts_27/recovery)
Shard shards/shard2(2) : Assign plasmaId 8 to instance test.default.TestShardNumInsts_27
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_27 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_27 to LSS test.default.TestShardNumInsts_27
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_27/recovery], Data log [test.default.TestShardNumInsts_27], Shared [false]
LSS test.default.TestShardNumInsts_27/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.978µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_27/recovery], Data log [test.default.TestShardNumInsts_27], Shared [false]. Built [0] plasmas, took [86.752µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_27(shard2) : all daemons started
LSS test.default.TestShardNumInsts_27/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_27 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_28) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 9 to instance test.default.TestShardNumInsts_28
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_28 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_28 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_28 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_29(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_29
LSS test.default.TestShardNumInsts_29/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_29/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_29) to LSS (test.default.TestShardNumInsts_29) and RecoveryLSS (test.default.TestShardNumInsts_29/recovery)
Shard shards/shard2(2) : Assign plasmaId 10 to instance test.default.TestShardNumInsts_29
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_29 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_29 to LSS test.default.TestShardNumInsts_29
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_29/recovery], Data log [test.default.TestShardNumInsts_29], Shared [false]
LSS test.default.TestShardNumInsts_29/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [57.459µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_29/recovery], Data log [test.default.TestShardNumInsts_29], Shared [false]. Built [0] plasmas, took [91.631µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_29(shard2) : all daemons started
LSS test.default.TestShardNumInsts_29/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_29 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_30) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 11 to instance test.default.TestShardNumInsts_30
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_30 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_30 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_30 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_31(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_31
LSS test.default.TestShardNumInsts_31/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_31/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_31) to LSS (test.default.TestShardNumInsts_31) and RecoveryLSS (test.default.TestShardNumInsts_31/recovery)
Shard shards/shard2(2) : Assign plasmaId 12 to instance test.default.TestShardNumInsts_31
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_31 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_31 to LSS test.default.TestShardNumInsts_31
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_31/recovery], Data log [test.default.TestShardNumInsts_31], Shared [false]
LSS test.default.TestShardNumInsts_31/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.525µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_31/recovery], Data log [test.default.TestShardNumInsts_31], Shared [false]. Built [0] plasmas, took [85.397µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_31(shard2) : all daemons started
LSS test.default.TestShardNumInsts_31/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_31 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_32) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 13 to instance test.default.TestShardNumInsts_32
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_32 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_32 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_32 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_33(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_33
LSS test.default.TestShardNumInsts_33/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_33/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_33) to LSS (test.default.TestShardNumInsts_33) and RecoveryLSS (test.default.TestShardNumInsts_33/recovery)
Shard shards/shard2(2) : Assign plasmaId 14 to instance test.default.TestShardNumInsts_33
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_33 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_33 to LSS test.default.TestShardNumInsts_33
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_33/recovery], Data log [test.default.TestShardNumInsts_33], Shared [false]
LSS test.default.TestShardNumInsts_33/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [45.51µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_33/recovery], Data log [test.default.TestShardNumInsts_33], Shared [false]. Built [0] plasmas, took [80.112µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_33(shard2) : all daemons started
LSS test.default.TestShardNumInsts_33/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_33 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_34) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 15 to instance test.default.TestShardNumInsts_34
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_34 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_34 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_34 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_35(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_35
LSS test.default.TestShardNumInsts_35/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_35/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_35) to LSS (test.default.TestShardNumInsts_35) and RecoveryLSS (test.default.TestShardNumInsts_35/recovery)
Shard shards/shard2(2) : Assign plasmaId 16 to instance test.default.TestShardNumInsts_35
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_35 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_35 to LSS test.default.TestShardNumInsts_35
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_35/recovery], Data log [test.default.TestShardNumInsts_35], Shared [false]
LSS test.default.TestShardNumInsts_35/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [85.877µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_35/recovery], Data log [test.default.TestShardNumInsts_35], Shared [false]. Built [0] plasmas, took [124.322µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_35(shard2) : all daemons started
LSS test.default.TestShardNumInsts_35/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_35 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_36) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 17 to instance test.default.TestShardNumInsts_36
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_36 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_36 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_36 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_37(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_37
LSS test.default.TestShardNumInsts_37/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_37/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_37) to LSS (test.default.TestShardNumInsts_37) and RecoveryLSS (test.default.TestShardNumInsts_37/recovery)
Shard shards/shard2(2) : Assign plasmaId 18 to instance test.default.TestShardNumInsts_37
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_37 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_37 to LSS test.default.TestShardNumInsts_37
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_37/recovery], Data log [test.default.TestShardNumInsts_37], Shared [false]
LSS test.default.TestShardNumInsts_37/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.874µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_37/recovery], Data log [test.default.TestShardNumInsts_37], Shared [false]. Built [0] plasmas, took [84.134µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_37(shard2) : all daemons started
LSS test.default.TestShardNumInsts_37/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_37 started
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_38) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 19 to instance test.default.TestShardNumInsts_38
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_38 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_38 to LSS shards/shard2/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_38 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.default.TestShardNumInsts_39(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_39
LSS test.default.TestShardNumInsts_39/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_39/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNumInsts_39) to LSS (test.default.TestShardNumInsts_39) and RecoveryLSS (test.default.TestShardNumInsts_39/recovery)
Shard shards/shard2(2) : Assign plasmaId 20 to instance test.default.TestShardNumInsts_39
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_39 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNumInsts_39 to LSS test.default.TestShardNumInsts_39
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_39/recovery], Data log [test.default.TestShardNumInsts_39], Shared [false]
LSS test.default.TestShardNumInsts_39/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.034µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_39/recovery], Data log [test.default.TestShardNumInsts_39], Shared [false]. Built [0] plasmas, took [88.565µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_39(shard2) : all daemons started
LSS test.default.TestShardNumInsts_39/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_39 started
Shard shards/shard3(3) : Shard Created Successfully
Shard shards/shard3(3) : metadata saved successfully
LSS shards/shard3/data(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data
LSS shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_40) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 1 to instance test.default.TestShardNumInsts_40
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_40 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_40 to LSS shards/shard3/data
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]
LSS shards/shard3/data/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.919µs]
LSS shards/shard3/data(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [95.97µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]. Built [0] plasmas, took [103.366µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard3/data(shard3) : all daemons started
LSS shards/shard3/data/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_40 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_41(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_41
LSS test.default.TestShardNumInsts_41/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_41/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_41) to LSS (test.default.TestShardNumInsts_41) and RecoveryLSS (test.default.TestShardNumInsts_41/recovery)
Shard shards/shard3(3) : Assign plasmaId 2 to instance test.default.TestShardNumInsts_41
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_41 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_41 to LSS test.default.TestShardNumInsts_41
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_41/recovery], Data log [test.default.TestShardNumInsts_41], Shared [false]
LSS test.default.TestShardNumInsts_41/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.571µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_41/recovery], Data log [test.default.TestShardNumInsts_41], Shared [false]. Built [0] plasmas, took [84.871µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_41(shard3) : all daemons started
LSS test.default.TestShardNumInsts_41/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_41 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_42) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 3 to instance test.default.TestShardNumInsts_42
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_42 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_42 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_42 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_43(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_43
LSS test.default.TestShardNumInsts_43/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_43/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_43) to LSS (test.default.TestShardNumInsts_43) and RecoveryLSS (test.default.TestShardNumInsts_43/recovery)
Shard shards/shard3(3) : Assign plasmaId 4 to instance test.default.TestShardNumInsts_43
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_43 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_43 to LSS test.default.TestShardNumInsts_43
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_43/recovery], Data log [test.default.TestShardNumInsts_43], Shared [false]
LSS test.default.TestShardNumInsts_43/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [54.879µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_43/recovery], Data log [test.default.TestShardNumInsts_43], Shared [false]. Built [0] plasmas, took [89.517µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_43(shard3) : all daemons started
LSS test.default.TestShardNumInsts_43/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_43 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_44) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 5 to instance test.default.TestShardNumInsts_44
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_44 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_44 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_44 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_45(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_45
LSS test.default.TestShardNumInsts_45/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_45/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_45) to LSS (test.default.TestShardNumInsts_45) and RecoveryLSS (test.default.TestShardNumInsts_45/recovery)
Shard shards/shard3(3) : Assign plasmaId 6 to instance test.default.TestShardNumInsts_45
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_45 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_45 to LSS test.default.TestShardNumInsts_45
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_45/recovery], Data log [test.default.TestShardNumInsts_45], Shared [false]
LSS test.default.TestShardNumInsts_45/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.725µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_45/recovery], Data log [test.default.TestShardNumInsts_45], Shared [false]. Built [0] plasmas, took [114.199µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_45(shard3) : all daemons started
LSS test.default.TestShardNumInsts_45/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_45 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_46) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 7 to instance test.default.TestShardNumInsts_46
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_46 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_46 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_46 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_47(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_47
LSS test.default.TestShardNumInsts_47/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_47/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_47) to LSS (test.default.TestShardNumInsts_47) and RecoveryLSS (test.default.TestShardNumInsts_47/recovery)
Shard shards/shard3(3) : Assign plasmaId 8 to instance test.default.TestShardNumInsts_47
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_47 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_47 to LSS test.default.TestShardNumInsts_47
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_47/recovery], Data log [test.default.TestShardNumInsts_47], Shared [false]
LSS test.default.TestShardNumInsts_47/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.766µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_47/recovery], Data log [test.default.TestShardNumInsts_47], Shared [false]. Built [0] plasmas, took [86.361µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_47(shard3) : all daemons started
LSS test.default.TestShardNumInsts_47/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_47 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_48) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 9 to instance test.default.TestShardNumInsts_48
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_48 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_48 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_48 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_49(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_49
LSS test.default.TestShardNumInsts_49/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_49/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_49) to LSS (test.default.TestShardNumInsts_49) and RecoveryLSS (test.default.TestShardNumInsts_49/recovery)
Shard shards/shard3(3) : Assign plasmaId 10 to instance test.default.TestShardNumInsts_49
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_49 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_49 to LSS test.default.TestShardNumInsts_49
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_49/recovery], Data log [test.default.TestShardNumInsts_49], Shared [false]
LSS test.default.TestShardNumInsts_49/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.419µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_49/recovery], Data log [test.default.TestShardNumInsts_49], Shared [false]. Built [0] plasmas, took [91.606µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_49(shard3) : all daemons started
LSS test.default.TestShardNumInsts_49/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_49 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_50) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 11 to instance test.default.TestShardNumInsts_50
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_50 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_50 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_50 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_51(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_51
LSS test.default.TestShardNumInsts_51/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_51/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_51) to LSS (test.default.TestShardNumInsts_51) and RecoveryLSS (test.default.TestShardNumInsts_51/recovery)
Shard shards/shard3(3) : Assign plasmaId 12 to instance test.default.TestShardNumInsts_51
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_51 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_51 to LSS test.default.TestShardNumInsts_51
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_51/recovery], Data log [test.default.TestShardNumInsts_51], Shared [false]
LSS test.default.TestShardNumInsts_51/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.391µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_51/recovery], Data log [test.default.TestShardNumInsts_51], Shared [false]. Built [0] plasmas, took [87.292µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_51(shard3) : all daemons started
LSS test.default.TestShardNumInsts_51/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_51 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_52) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 13 to instance test.default.TestShardNumInsts_52
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_52 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_52 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_52 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.default.TestShardNumInsts_53(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_53
LSS test.default.TestShardNumInsts_53/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.default.TestShardNumInsts_53/recovery
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_53) to LSS (test.default.TestShardNumInsts_53) and RecoveryLSS (test.default.TestShardNumInsts_53/recovery)
Shard shards/shard3(3) : Assign plasmaId 14 to instance test.default.TestShardNumInsts_53
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_53 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_53 to LSS test.default.TestShardNumInsts_53
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.default.TestShardNumInsts_53/recovery], Data log [test.default.TestShardNumInsts_53], Shared [false]
LSS test.default.TestShardNumInsts_53/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.01µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.default.TestShardNumInsts_53/recovery], Data log [test.default.TestShardNumInsts_53], Shared [false]. Built [0] plasmas, took [86.857µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.default.TestShardNumInsts_53(shard3) : all daemons started
LSS test.default.TestShardNumInsts_53/recovery(shard3) : all daemons started
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_53 started
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : Map plasma instance (test.default.TestShardNumInsts_54) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 15 to instance test.default.TestShardNumInsts_54
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_54 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.default.TestShardNumInsts_54 to LSS shards/shard3/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_54 started
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_0 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_0 closed
LSS test.default.TestShardNumInsts_1(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_1 stopped
LSS test.default.TestShardNumInsts_1(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_1 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_2 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_2 closed
LSS test.default.TestShardNumInsts_3(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_3 stopped
LSS test.default.TestShardNumInsts_3(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_3 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_4 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_4 closed
LSS test.default.TestShardNumInsts_5(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_5 stopped
LSS test.default.TestShardNumInsts_5(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_5 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_6 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_6 closed
LSS test.default.TestShardNumInsts_7(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_7 stopped
LSS test.default.TestShardNumInsts_7(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_7 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_8 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_8 closed
LSS test.default.TestShardNumInsts_9(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_9 stopped
LSS test.default.TestShardNumInsts_9(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_9 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_10 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_10 closed
LSS test.default.TestShardNumInsts_11(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_11/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_11 stopped
LSS test.default.TestShardNumInsts_11(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_11/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_11 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_12 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_12 closed
LSS test.default.TestShardNumInsts_13(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_13/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_13 stopped
LSS test.default.TestShardNumInsts_13(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_13/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_13 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_14 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_14 closed
LSS test.default.TestShardNumInsts_15(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_15/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_15 stopped
LSS test.default.TestShardNumInsts_15(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_15/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_15 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_16 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_16 closed
LSS test.default.TestShardNumInsts_17(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_17/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_17 stopped
LSS test.default.TestShardNumInsts_17(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_17/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_17 closed
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_18 stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_18 closed
LSS test.default.TestShardNumInsts_19(shard1) : all daemons stopped
LSS test.default.TestShardNumInsts_19/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_19 stopped
LSS test.default.TestShardNumInsts_19(shard1) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_19/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_19 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_20 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_20 closed
LSS test.default.TestShardNumInsts_21(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_21/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_21 stopped
LSS test.default.TestShardNumInsts_21(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_21/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_21 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_22 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_22 closed
LSS test.default.TestShardNumInsts_23(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_23/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_23 stopped
LSS test.default.TestShardNumInsts_23(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_23/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_23 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_24 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_24 closed
LSS test.default.TestShardNumInsts_25(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_25/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_25 stopped
LSS test.default.TestShardNumInsts_25(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_25/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_25 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_26 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_26 closed
LSS test.default.TestShardNumInsts_27(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_27/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_27 stopped
LSS test.default.TestShardNumInsts_27(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_27/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_27 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_28 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_28 closed
LSS test.default.TestShardNumInsts_29(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_29/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_29 stopped
LSS test.default.TestShardNumInsts_29(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_29/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_29 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_30 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_30 closed
LSS test.default.TestShardNumInsts_31(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_31/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_31 stopped
LSS test.default.TestShardNumInsts_31(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_31/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_31 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_32 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_32 closed
LSS test.default.TestShardNumInsts_33(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_33/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_33 stopped
LSS test.default.TestShardNumInsts_33(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_33/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_33 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_34 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_34 closed
LSS test.default.TestShardNumInsts_35(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_35/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_35 stopped
LSS test.default.TestShardNumInsts_35(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_35/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_35 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_36 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_36 closed
LSS test.default.TestShardNumInsts_37(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_37/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_37 stopped
LSS test.default.TestShardNumInsts_37(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_37/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_37 closed
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_38 stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_38 closed
LSS test.default.TestShardNumInsts_39(shard2) : all daemons stopped
LSS test.default.TestShardNumInsts_39/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_39 stopped
LSS test.default.TestShardNumInsts_39(shard2) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_39/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_39 closed
LSS shards/shard2/data(shard2) : all daemons stopped
LSS shards/shard2/data/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_40 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_40 closed
LSS test.default.TestShardNumInsts_41(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_41/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_41 stopped
LSS test.default.TestShardNumInsts_41(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_41/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_41 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_42 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_42 closed
LSS test.default.TestShardNumInsts_43(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_43/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_43 stopped
LSS test.default.TestShardNumInsts_43(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_43/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_43 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_44 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_44 closed
LSS test.default.TestShardNumInsts_45(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_45/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_45 stopped
LSS test.default.TestShardNumInsts_45(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_45/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_45 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_46 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_46 closed
LSS test.default.TestShardNumInsts_47(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_47/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_47 stopped
LSS test.default.TestShardNumInsts_47(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_47/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_47 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_48 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_48 closed
LSS test.default.TestShardNumInsts_49(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_49/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_49 stopped
LSS test.default.TestShardNumInsts_49(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_49/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_49 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_50 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_50 closed
LSS test.default.TestShardNumInsts_51(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_51/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_51 stopped
LSS test.default.TestShardNumInsts_51(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_51/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_51 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_52 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_52 closed
LSS test.default.TestShardNumInsts_53(shard3) : all daemons stopped
LSS test.default.TestShardNumInsts_53/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_53 stopped
LSS test.default.TestShardNumInsts_53(shard3) : LSSCleaner stopped
LSS test.default.TestShardNumInsts_53/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_53 closed
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_54 stopped
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_54 closed
LSS shards/shard3/data(shard3) : all daemons stopped
LSS shards/shard3/data/recovery(shard3) : all daemons stopped
Shard shards/shard3(3) : All daemons stopped
LSS shards/shard3/data(shard3) : LSSCleaner stopped
LSS shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : All instances closed
Shard shards/shard3(3) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_13 ...
Shard shards/shard1(1) : removed plasmaId 14 for instance test.default.TestShardNumInsts_13 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_13 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_15 ...
Shard shards/shard1(1) : removed plasmaId 16 for instance test.default.TestShardNumInsts_15 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_15 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.default.TestShardNumInsts_3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_3 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.default.TestShardNumInsts_7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_7 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_17 ...
Shard shards/shard1(1) : removed plasmaId 18 for instance test.default.TestShardNumInsts_17 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_17 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.default.TestShardNumInsts_5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_5 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_11 ...
Shard shards/shard1(1) : removed plasmaId 12 for instance test.default.TestShardNumInsts_11 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_11 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_19 ...
Shard shards/shard1(1) : removed plasmaId 20 for instance test.default.TestShardNumInsts_19 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_19 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.default.TestShardNumInsts_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_1 successfully destroyed
Shard shards/shard1(1) : destroying instance test.default.TestShardNumInsts_9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.default.TestShardNumInsts_9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.default.TestShardNumInsts_9 successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_37 ...
Shard shards/shard2(2) : removed plasmaId 18 for instance test.default.TestShardNumInsts_37 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_37 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_39 ...
Shard shards/shard2(2) : removed plasmaId 20 for instance test.default.TestShardNumInsts_39 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_39 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_27 ...
Shard shards/shard2(2) : removed plasmaId 8 for instance test.default.TestShardNumInsts_27 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_27 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_31 ...
Shard shards/shard2(2) : removed plasmaId 12 for instance test.default.TestShardNumInsts_31 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_31 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_33 ...
Shard shards/shard2(2) : removed plasmaId 14 for instance test.default.TestShardNumInsts_33 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_33 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_21 ...
Shard shards/shard2(2) : removed plasmaId 2 for instance test.default.TestShardNumInsts_21 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_21 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_29 ...
Shard shards/shard2(2) : removed plasmaId 10 for instance test.default.TestShardNumInsts_29 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_29 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_23 ...
Shard shards/shard2(2) : removed plasmaId 4 for instance test.default.TestShardNumInsts_23 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_23 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_25 ...
Shard shards/shard2(2) : removed plasmaId 6 for instance test.default.TestShardNumInsts_25 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_25 successfully destroyed
Shard shards/shard2(2) : destroying instance test.default.TestShardNumInsts_35 ...
Shard shards/shard2(2) : removed plasmaId 16 for instance test.default.TestShardNumInsts_35 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.default.TestShardNumInsts_35 successfully destroyed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
Shard shards/shard3(3) : All daemons stopped
Shard shards/shard3(3) : All instances closed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_53 ...
Shard shards/shard3(3) : removed plasmaId 14 for instance test.default.TestShardNumInsts_53 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_53 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_43 ...
Shard shards/shard3(3) : removed plasmaId 4 for instance test.default.TestShardNumInsts_43 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_43 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_47 ...
Shard shards/shard3(3) : removed plasmaId 8 for instance test.default.TestShardNumInsts_47 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_47 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_41 ...
Shard shards/shard3(3) : removed plasmaId 2 for instance test.default.TestShardNumInsts_41 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_41 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_45 ...
Shard shards/shard3(3) : removed plasmaId 6 for instance test.default.TestShardNumInsts_45 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_45 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_51 ...
Shard shards/shard3(3) : removed plasmaId 12 for instance test.default.TestShardNumInsts_51 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_51 successfully destroyed
Shard shards/shard3(3) : destroying instance test.default.TestShardNumInsts_49 ...
Shard shards/shard3(3) : removed plasmaId 10 for instance test.default.TestShardNumInsts_49 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.default.TestShardNumInsts_49 successfully destroyed
Shard shards/shard3(3) : All instances destroyed
Shard shards/shard3(3) : Shard Destroyed Successfully
--- PASS: TestShardNumInsts (1.17s)
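The teardown the log shows for TestShardNumInsts follows a fixed sequence per shard: each instance has its plasmaId removed and shard metadata re-saved, then the shard itself is destroyed. A minimal, hypothetical Go sketch of that sequence (the `Shard` type and method names here are illustrative, not the actual plasma API):

```go
package main

import "fmt"

// Shard is a simplified stand-in for a plasma shard: a path, a numeric id,
// and a map from instance name to its assigned plasmaId.
type Shard struct {
	path      string
	id        int
	instances map[string]int
}

// DestroyInstance mirrors the per-instance log lines: remove the plasmaId
// mapping, persist metadata, report success.
func (s *Shard) DestroyInstance(name string) {
	fmt.Printf("Shard %s(%d) : destroying instance %s ...\n", s.path, s.id, name)
	fmt.Printf("Shard %s(%d) : removed plasmaId %d for instance %s ...\n", s.path, s.id, s.instances[name], name)
	delete(s.instances, name)
	fmt.Printf("Shard %s(%d) : metadata saved successfully\n", s.path, s.id)
	fmt.Printf("Shard %s(%d) : instance %s successfully destroyed\n", s.path, s.id, name)
}

// Destroy tears down every remaining instance, then the shard itself.
func (s *Shard) Destroy() {
	for name := range s.instances {
		s.DestroyInstance(name)
	}
	fmt.Printf("Shard %s(%d) : All instances destroyed\n", s.path, s.id)
	fmt.Printf("Shard %s(%d) : Shard Destroyed Successfully\n", s.path, s.id)
}

func main() {
	s := &Shard{path: "shards/shard1", id: 1, instances: map[string]int{"inst_1": 2, "inst_3": 4}}
	s.Destroy()
}
```

Note the ordering in the log: metadata is saved after each individual plasmaId removal, so a crash mid-teardown leaves the shard metadata consistent with whichever instances were already destroyed.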
=== RUN   TestShardInstanceGroup
----------- running TestShardNInstanceGroup
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestShardNInstanceGroup_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestShardNInstanceGroup_1
Shard shards/shard1(1) : Add instance test.default.TestShardNInstanceGroup_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestShardNInstanceGroup_1 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.333µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [119.402µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [129.944µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestShardNInstanceGroup_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.default.TestShardNInstanceGroup_2) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.default.TestShardNInstanceGroup_2
Shard shards/shard2(2) : Add instance test.default.TestShardNInstanceGroup_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.default.TestShardNInstanceGroup_2 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [117.629µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [127.421µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard2/data(shard2) : all daemons started
LSS shards/shard2/data/recovery(shard2) : all daemons started
Shard shards/shard2(2) : instance test.default.TestShardNInstanceGroup_2 started
Shard shards/shard1(1) : instance test.default.TestShardNInstanceGroup_1 stopped
Shard shards/shard1(1) : instance test.default.TestShardNInstanceGroup_1 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : instance test.default.TestShardNInstanceGroup_2 stopped
Shard shards/shard2(2) : instance test.default.TestShardNInstanceGroup_2 closed
LSS shards/shard2/data(shard2) : all daemons stopped
LSS shards/shard2/data/recovery(shard2) : all daemons stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
--- PASS: TestShardInstanceGroup (0.08s)
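One detail worth noticing in the Shard.doRecovery lines above: instances on a shared LSS (Shared [true]) replay the recovery log and then the shared data log, while instances with a private LSS (Shared [false], as in TestShardLeak) replay only their recovery log before recovery completes. A hypothetical sketch of that branching, with made-up type and function names standing in for the real plasma recovery code:

```go
package main

import "fmt"

// RecoveryInfo captures the three values the doRecovery log lines report:
// the recovery log path, the data log path, and whether the LSS is shared.
type RecoveryInfo struct {
	RecoveryLog string
	DataLog     string
	Shared      bool
}

// doRecovery returns the replay steps performed, mirroring the log:
// the recovery log is always replayed; the data log only for a shared LSS.
func doRecovery(r RecoveryInfo) []string {
	steps := []string{fmt.Sprintf("replay recovery log %s", r.RecoveryLog)}
	if r.Shared {
		steps = append(steps, fmt.Sprintf("replay data log %s", r.DataLog))
	}
	return steps
}

func main() {
	shared := doRecovery(RecoveryInfo{"shards/shard1/data/recovery", "shards/shard1/data", true})
	private := doRecovery(RecoveryInfo{"test.mvcc.TestShardLeak_1/recovery", "test.mvcc.TestShardLeak_1", false})
	fmt.Println(len(shared), len(private))
}
```

This matches the log pairs: shared shards emit both a recoverFromHeaderReplay and a recoverFromDataReplay line, while the per-instance LSS paths emit only the header replay before "Done recovery".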
=== RUN   TestShardLeak
----------- running TestShardLeak
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestShardLeak_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestShardLeak_0
Shard shards/shard1(1) : Add instance test.mvcc.TestShardLeak_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardLeak_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.444µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [106.57µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [112.646µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestShardLeak_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_1
LSS test.mvcc.TestShardLeak_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestShardLeak_1) to LSS (test.mvcc.TestShardLeak_1) and RecoveryLSS (test.mvcc.TestShardLeak_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestShardLeak_1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardLeak_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardLeak_1 to LSS test.mvcc.TestShardLeak_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardLeak_1/recovery], Data log [test.mvcc.TestShardLeak_1], Shared [false]
LSS test.mvcc.TestShardLeak_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.404µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardLeak_1/recovery], Data log [test.mvcc.TestShardLeak_1], Shared [false]. Built [0] plasmas, took [106.5µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.mvcc.TestShardLeak_2) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.mvcc.TestShardLeak_2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardLeak_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardLeak_2 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [75.03µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [110.003µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [137.804µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : Swapper started
Shard shards/shard2(2) : Instance added to swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_2 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.mvcc.TestShardLeak_3(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_3
LSS test.mvcc.TestShardLeak_3/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_3/recovery
Shard shards/shard2(2) : Map plasma instance (test.mvcc.TestShardLeak_3) to LSS (test.mvcc.TestShardLeak_3) and RecoveryLSS (test.mvcc.TestShardLeak_3/recovery)
Shard shards/shard2(2) : Assign plasmaId 2 to instance test.mvcc.TestShardLeak_3
Shard shards/shard2(2) : Add instance test.mvcc.TestShardLeak_3 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardLeak_3 to LSS test.mvcc.TestShardLeak_3
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardLeak_3/recovery], Data log [test.mvcc.TestShardLeak_3], Shared [false]
LSS test.mvcc.TestShardLeak_3/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [82.662µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardLeak_3/recovery], Data log [test.mvcc.TestShardLeak_3], Shared [false]. Built [0] plasmas, took [121.293µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : Instance added to swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_3 started
Shard shards/shard3(3) : Shard Created Successfully
Shard shards/shard3(3) : metadata saved successfully
LSS shards/shard3/data(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data
LSS shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data/recovery
Shard shards/shard3(3) : Map plasma instance (test.mvcc.TestShardLeak_4) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 1 to instance test.mvcc.TestShardLeak_4
Shard shards/shard3(3) : Add instance test.mvcc.TestShardLeak_4 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.mvcc.TestShardLeak_4 to LSS shards/shard3/data
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]
LSS shards/shard3/data/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.181µs]
LSS shards/shard3/data(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [119.864µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]. Built [0] plasmas, took [130.569µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : Swapper started
Shard shards/shard3(3) : Instance added to swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_4 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.mvcc.TestShardLeak_5(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_5
LSS test.mvcc.TestShardLeak_5/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_5/recovery
Shard shards/shard3(3) : Map plasma instance (test.mvcc.TestShardLeak_5) to LSS (test.mvcc.TestShardLeak_5) and RecoveryLSS (test.mvcc.TestShardLeak_5/recovery)
Shard shards/shard3(3) : Assign plasmaId 2 to instance test.mvcc.TestShardLeak_5
Shard shards/shard3(3) : Add instance test.mvcc.TestShardLeak_5 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.mvcc.TestShardLeak_5 to LSS test.mvcc.TestShardLeak_5
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardLeak_5/recovery], Data log [test.mvcc.TestShardLeak_5], Shared [false]
LSS test.mvcc.TestShardLeak_5/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [92.25µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardLeak_5/recovery], Data log [test.mvcc.TestShardLeak_5], Shared [false]. Built [0] plasmas, took [140.669µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : Instance added to swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_5 started
Shard shards/shard4(4) : Shard Created Successfully
Shard shards/shard4(4) : metadata saved successfully
LSS shards/shard4/data(shard4) : LSSCleaner initialized
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=shards/shard4/data
LSS shards/shard4/data/recovery(shard4) : LSSCleaner initialized for recovery
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=shards/shard4/data/recovery
Shard shards/shard4(4) : Map plasma instance (test.mvcc.TestShardLeak_6) to LSS (shards/shard4/data) and RecoveryLSS (shards/shard4/data/recovery)
Shard shards/shard4(4) : Assign plasmaId 1 to instance test.mvcc.TestShardLeak_6
Shard shards/shard4(4) : Add instance test.mvcc.TestShardLeak_6 to Shard shards/shard4
Shard shards/shard4(4) : Add instance test.mvcc.TestShardLeak_6 to LSS shards/shard4/data
Shard shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard4/data/recovery], Data log [shards/shard4/data], Shared [true]
LSS shards/shard4/data/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [70.988µs]
LSS shards/shard4/data(shard4) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from data and recovery log, took [95.244µs]
Shard shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [shards/shard4/data/recovery], Data log [shards/shard4/data], Shared [true]. Built [0] plasmas, took [105.777µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard4(4) : Swapper started
Shard shards/shard4(4) : Instance added to swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_6 started
Shard shards/shard4(4) : metadata saved successfully
LSS test.mvcc.TestShardLeak_7(shard4) : LSSCleaner initialized
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_7
LSS test.mvcc.TestShardLeak_7/recovery(shard4) : LSSCleaner initialized for recovery
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_7/recovery
Shard shards/shard4(4) : Map plasma instance (test.mvcc.TestShardLeak_7) to LSS (test.mvcc.TestShardLeak_7) and RecoveryLSS (test.mvcc.TestShardLeak_7/recovery)
Shard shards/shard4(4) : Assign plasmaId 2 to instance test.mvcc.TestShardLeak_7
Shard shards/shard4(4) : Add instance test.mvcc.TestShardLeak_7 to Shard shards/shard4
Shard shards/shard4(4) : Add instance test.mvcc.TestShardLeak_7 to LSS test.mvcc.TestShardLeak_7
Shard shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardLeak_7/recovery], Data log [test.mvcc.TestShardLeak_7], Shared [false]
LSS test.mvcc.TestShardLeak_7/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.976µs]
Shard shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardLeak_7/recovery], Data log [test.mvcc.TestShardLeak_7], Shared [false]. Built [0] plasmas, took [117.141µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard4(4) : Instance added to swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_7 started
Shard shards/shard5(5) : Shard Created Successfully
Shard shards/shard5(5) : metadata saved successfully
LSS shards/shard5/data(shard5) : LSSCleaner initialized
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data
LSS shards/shard5/data/recovery(shard5) : LSSCleaner initialized for recovery
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data/recovery
Shard shards/shard5(5) : Map plasma instance (test.mvcc.TestShardLeak_8) to LSS (shards/shard5/data) and RecoveryLSS (shards/shard5/data/recovery)
Shard shards/shard5(5) : Assign plasmaId 1 to instance test.mvcc.TestShardLeak_8
Shard shards/shard5(5) : Add instance test.mvcc.TestShardLeak_8 to Shard shards/shard5
Shard shards/shard5(5) : Add instance test.mvcc.TestShardLeak_8 to LSS shards/shard5/data
Shard shards/shard5(5) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]
LSS shards/shard5/data/recovery(shard5) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.368µs]
LSS shards/shard5/data(shard5) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from data and recovery log, took [116.152µs]
Shard shards/shard5(5) : Shard.doRecovery: Done recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]. Built [0] plasmas, took [126.551µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard5(5) : Swapper started
Shard shards/shard5(5) : Instance added to swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_8 started
Shard shards/shard5(5) : metadata saved successfully
LSS test.mvcc.TestShardLeak_9(shard5) : LSSCleaner initialized
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_9
LSS test.mvcc.TestShardLeak_9/recovery(shard5) : LSSCleaner initialized for recovery
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=test.mvcc.TestShardLeak_9/recovery
Shard shards/shard5(5) : Map plasma instance (test.mvcc.TestShardLeak_9) to LSS (test.mvcc.TestShardLeak_9) and RecoveryLSS (test.mvcc.TestShardLeak_9/recovery)
Shard shards/shard5(5) : Assign plasmaId 2 to instance test.mvcc.TestShardLeak_9
Shard shards/shard5(5) : Add instance test.mvcc.TestShardLeak_9 to Shard shards/shard5
Shard shards/shard5(5) : Add instance test.mvcc.TestShardLeak_9 to LSS test.mvcc.TestShardLeak_9
Shard shards/shard5(5) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardLeak_9/recovery], Data log [test.mvcc.TestShardLeak_9], Shared [false]
LSS test.mvcc.TestShardLeak_9/recovery(shard5) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [117.811µs]
Shard shards/shard5(5) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardLeak_9/recovery], Data log [test.mvcc.TestShardLeak_9], Shared [false]. Built [0] plasmas, took [208.633µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard5(5) : Instance added to swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_9 started
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_0 stopped
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_0 closed
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_1 stopped
LSS test.mvcc.TestShardLeak_1(shard1) : LSSCleaner stopped
LSS test.mvcc.TestShardLeak_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_1 closed
Shard shards/shard2(2) : Instance removed from swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_2 stopped
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_2 closed
Shard shards/shard2(2) : Instance removed from swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_3 stopped
LSS test.mvcc.TestShardLeak_3(shard2) : LSSCleaner stopped
LSS test.mvcc.TestShardLeak_3/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_3 closed
Shard shards/shard3(3) : Instance removed from swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_4 stopped
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_4 closed
Shard shards/shard3(3) : Instance removed from swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_5 stopped
LSS test.mvcc.TestShardLeak_5(shard3) : LSSCleaner stopped
LSS test.mvcc.TestShardLeak_5/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_5 closed
Shard shards/shard4(4) : Instance removed from swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_6 stopped
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_6 closed
Shard shards/shard4(4) : Instance removed from swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_7 stopped
LSS test.mvcc.TestShardLeak_7(shard4) : LSSCleaner stopped
LSS test.mvcc.TestShardLeak_7/recovery(shard4) : LSSCleaner stopped
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_7 closed
Shard shards/shard5(5) : Instance removed from swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_8 stopped
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_8 closed
Shard shards/shard5(5) : Instance removed from swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_9 stopped
LSS test.mvcc.TestShardLeak_9(shard5) : LSSCleaner stopped
LSS test.mvcc.TestShardLeak_9/recovery(shard5) : LSSCleaner stopped
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_9 closed
Destroy instances matching prefix test.mvcc.TestShardLeak in . ...
Shard shards/shard1(1) : destroying instance test.mvcc.TestShardLeak_0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestShardLeak_0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_0 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestShardLeak_1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.mvcc.TestShardLeak_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestShardLeak_1 successfully destroyed
Shard shards/shard2(2) : destroying instance test.mvcc.TestShardLeak_2 ...
Shard shards/shard2(2) : removed plasmaId 1 for instance test.mvcc.TestShardLeak_2 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_2 successfully destroyed
Shard shards/shard2(2) : destroying instance test.mvcc.TestShardLeak_3 ...
Shard shards/shard2(2) : removed plasmaId 2 for instance test.mvcc.TestShardLeak_3 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.mvcc.TestShardLeak_3 successfully destroyed
Shard shards/shard3(3) : destroying instance test.mvcc.TestShardLeak_4 ...
Shard shards/shard3(3) : removed plasmaId 1 for instance test.mvcc.TestShardLeak_4 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_4 successfully destroyed
Shard shards/shard3(3) : destroying instance test.mvcc.TestShardLeak_5 ...
Shard shards/shard3(3) : removed plasmaId 2 for instance test.mvcc.TestShardLeak_5 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.mvcc.TestShardLeak_5 successfully destroyed
Shard shards/shard4(4) : destroying instance test.mvcc.TestShardLeak_6 ...
Shard shards/shard4(4) : removed plasmaId 1 for instance test.mvcc.TestShardLeak_6 ...
Shard shards/shard4(4) : metadata saved successfully
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_6 successfully destroyed
Shard shards/shard4(4) : destroying instance test.mvcc.TestShardLeak_7 ...
Shard shards/shard4(4) : removed plasmaId 2 for instance test.mvcc.TestShardLeak_7 ...
Shard shards/shard4(4) : metadata saved successfully
Shard shards/shard4(4) : instance test.mvcc.TestShardLeak_7 successfully destroyed
Shard shards/shard5(5) : destroying instance test.mvcc.TestShardLeak_8 ...
Shard shards/shard5(5) : removed plasmaId 1 for instance test.mvcc.TestShardLeak_8 ...
Shard shards/shard5(5) : metadata saved successfully
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_8 successfully destroyed
Shard shards/shard5(5) : destroying instance test.mvcc.TestShardLeak_9 ...
Shard shards/shard5(5) : removed plasmaId 2 for instance test.mvcc.TestShardLeak_9 ...
Shard shards/shard5(5) : metadata saved successfully
Shard shards/shard5(5) : instance test.mvcc.TestShardLeak_9 successfully destroyed
All instances matching prefix test.mvcc.TestShardLeak destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instance closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard3(3) : Swapper stopped
Shard shards/shard3(3) : All daemons stopped
LSS shards/shard3/data(shard3) : LSSCleaner stopped
LSS shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : All instance closed
Shard shards/shard3(3) : Shutdown completed
Shard shards/shard4(4) : Swapper stopped
Shard shards/shard4(4) : All daemons stopped
LSS shards/shard4/data(shard4) : LSSCleaner stopped
LSS shards/shard4/data/recovery(shard4) : LSSCleaner stopped
Shard shards/shard4(4) : All instance closed
Shard shards/shard4(4) : Shutdown completed
Shard shards/shard5(5) : Swapper stopped
Shard shards/shard5(5) : All daemons stopped
LSS shards/shard5/data(shard5) : LSSCleaner stopped
LSS shards/shard5/data/recovery(shard5) : LSSCleaner stopped
Shard shards/shard5(5) : All instance closed
Shard shards/shard5(5) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instance closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard3(3) : Swapper stopped
Shard shards/shard3(3) : All daemons stopped
Shard shards/shard3(3) : All instance closed
Shard shards/shard3(3) : Shutdown completed
Shard shards/shard4(4) : Swapper stopped
Shard shards/shard4(4) : All daemons stopped
Shard shards/shard4(4) : All instance closed
Shard shards/shard4(4) : Shutdown completed
Shard shards/shard5(5) : Swapper stopped
Shard shards/shard5(5) : All daemons stopped
Shard shards/shard5(5) : All instance closed
Shard shards/shard5(5) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : All instance destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instance closed
Shard shards/shard2(2) : All instance destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
Shard shards/shard3(3) : Swapper stopped
Shard shards/shard3(3) : All daemons stopped
Shard shards/shard3(3) : All instance closed
Shard shards/shard3(3) : All instance destroyed
Shard shards/shard3(3) : Shard Destroyed Successfully
Shard shards/shard4(4) : Swapper stopped
Shard shards/shard4(4) : All daemons stopped
Shard shards/shard4(4) : All instance closed
Shard shards/shard4(4) : All instance destroyed
Shard shards/shard4(4) : Shard Destroyed Successfully
Shard shards/shard5(5) : Swapper stopped
Shard shards/shard5(5) : All daemons stopped
Shard shards/shard5(5) : All instance closed
Shard shards/shard5(5) : All instance destroyed
Shard shards/shard5(5) : Shard Destroyed Successfully
--- PASS: TestShardLeak (1.75s)
=== RUN   TestShardMemLeak
----------- running TestShardMemLeak
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestShardMemLeak_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestShardMemLeak_0
Shard shards/shard1(1) : Add instance test.mvcc.TestShardMemLeak_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardMemLeak_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [54.523µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [96.963µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [104.14µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestShardMemLeak_1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_1
LSS test.mvcc.TestShardMemLeak_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_1/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestShardMemLeak_1) to LSS (test.mvcc.TestShardMemLeak_1) and RecoveryLSS (test.mvcc.TestShardMemLeak_1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.mvcc.TestShardMemLeak_1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardMemLeak_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestShardMemLeak_1 to LSS test.mvcc.TestShardMemLeak_1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardMemLeak_1/recovery], Data log [test.mvcc.TestShardMemLeak_1], Shared [false]
LSS test.mvcc.TestShardMemLeak_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [84.371µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardMemLeak_1/recovery], Data log [test.mvcc.TestShardMemLeak_1], Shared [false]. Built [0] plasmas, took [152.472µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_1 started
Shard shards/shard2(2) : Shard Created Successfully
Shard shards/shard2(2) : metadata saved successfully
LSS shards/shard2/data(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data
LSS shards/shard2/data/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=shards/shard2/data/recovery
Shard shards/shard2(2) : Map plasma instance (test.mvcc.TestShardMemLeak_2) to LSS (shards/shard2/data) and RecoveryLSS (shards/shard2/data/recovery)
Shard shards/shard2(2) : Assign plasmaId 1 to instance test.mvcc.TestShardMemLeak_2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardMemLeak_2 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardMemLeak_2 to LSS shards/shard2/data
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]
LSS shards/shard2/data/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [85.062µs]
LSS shards/shard2/data(shard2) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from data and recovery log, took [127.817µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [shards/shard2/data/recovery], Data log [shards/shard2/data], Shared [true]. Built [0] plasmas, took [134.577µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : Swapper started
Shard shards/shard2(2) : Instance added to swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_2 started
Shard shards/shard2(2) : metadata saved successfully
LSS test.mvcc.TestShardMemLeak_3(shard2) : LSSCleaner initialized
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_3
LSS test.mvcc.TestShardMemLeak_3/recovery(shard2) : LSSCleaner initialized for recovery
Shard shards/shard2(2) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_3/recovery
Shard shards/shard2(2) : Map plasma instance (test.mvcc.TestShardMemLeak_3) to LSS (test.mvcc.TestShardMemLeak_3) and RecoveryLSS (test.mvcc.TestShardMemLeak_3/recovery)
Shard shards/shard2(2) : Assign plasmaId 2 to instance test.mvcc.TestShardMemLeak_3
Shard shards/shard2(2) : Add instance test.mvcc.TestShardMemLeak_3 to Shard shards/shard2
Shard shards/shard2(2) : Add instance test.mvcc.TestShardMemLeak_3 to LSS test.mvcc.TestShardMemLeak_3
Shard shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardMemLeak_3/recovery], Data log [test.mvcc.TestShardMemLeak_3], Shared [false]
LSS test.mvcc.TestShardMemLeak_3/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.317µs]
Shard shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardMemLeak_3/recovery], Data log [test.mvcc.TestShardMemLeak_3], Shared [false]. Built [0] plasmas, took [82.055µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard2(2) : Instance added to swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_3 started
Shard shards/shard3(3) : Shard Created Successfully
Shard shards/shard3(3) : metadata saved successfully
LSS shards/shard3/data(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data
LSS shards/shard3/data/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=shards/shard3/data/recovery
Shard shards/shard3(3) : Map plasma instance (test.mvcc.TestShardMemLeak_4) to LSS (shards/shard3/data) and RecoveryLSS (shards/shard3/data/recovery)
Shard shards/shard3(3) : Assign plasmaId 1 to instance test.mvcc.TestShardMemLeak_4
Shard shards/shard3(3) : Add instance test.mvcc.TestShardMemLeak_4 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.mvcc.TestShardMemLeak_4 to LSS shards/shard3/data
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]
LSS shards/shard3/data/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.568µs]
LSS shards/shard3/data(shard3) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from data and recovery log, took [97.1µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [shards/shard3/data/recovery], Data log [shards/shard3/data], Shared [true]. Built [0] plasmas, took [107.719µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : Swapper started
Shard shards/shard3(3) : Instance added to swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_4 started
Shard shards/shard3(3) : metadata saved successfully
LSS test.mvcc.TestShardMemLeak_5(shard3) : LSSCleaner initialized
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_5
LSS test.mvcc.TestShardMemLeak_5/recovery(shard3) : LSSCleaner initialized for recovery
Shard shards/shard3(3) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_5/recovery
Shard shards/shard3(3) : Map plasma instance (test.mvcc.TestShardMemLeak_5) to LSS (test.mvcc.TestShardMemLeak_5) and RecoveryLSS (test.mvcc.TestShardMemLeak_5/recovery)
Shard shards/shard3(3) : Assign plasmaId 2 to instance test.mvcc.TestShardMemLeak_5
Shard shards/shard3(3) : Add instance test.mvcc.TestShardMemLeak_5 to Shard shards/shard3
Shard shards/shard3(3) : Add instance test.mvcc.TestShardMemLeak_5 to LSS test.mvcc.TestShardMemLeak_5
Shard shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardMemLeak_5/recovery], Data log [test.mvcc.TestShardMemLeak_5], Shared [false]
LSS test.mvcc.TestShardMemLeak_5/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [56.913µs]
Shard shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardMemLeak_5/recovery], Data log [test.mvcc.TestShardMemLeak_5], Shared [false]. Built [0] plasmas, took [70.559µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard3(3) : Instance added to swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_5 started
Shard shards/shard4(4) : Shard Created Successfully
Shard shards/shard4(4) : metadata saved successfully
LSS shards/shard4/data(shard4) : LSSCleaner initialized
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=shards/shard4/data
LSS shards/shard4/data/recovery(shard4) : LSSCleaner initialized for recovery
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=shards/shard4/data/recovery
Shard shards/shard4(4) : Map plasma instance (test.mvcc.TestShardMemLeak_6) to LSS (shards/shard4/data) and RecoveryLSS (shards/shard4/data/recovery)
Shard shards/shard4(4) : Assign plasmaId 1 to instance test.mvcc.TestShardMemLeak_6
Shard shards/shard4(4) : Add instance test.mvcc.TestShardMemLeak_6 to Shard shards/shard4
Shard shards/shard4(4) : Add instance test.mvcc.TestShardMemLeak_6 to LSS shards/shard4/data
Shard shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard4/data/recovery], Data log [shards/shard4/data], Shared [true]
LSS shards/shard4/data/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [77.742µs]
LSS shards/shard4/data(shard4) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from data and recovery log, took [127.236µs]
Shard shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [shards/shard4/data/recovery], Data log [shards/shard4/data], Shared [true]. Built [0] plasmas, took [138.67µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard4(4) : Swapper started
Shard shards/shard4(4) : Instance added to swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_6 started
Shard shards/shard4(4) : metadata saved successfully
LSS test.mvcc.TestShardMemLeak_7(shard4) : LSSCleaner initialized
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_7
LSS test.mvcc.TestShardMemLeak_7/recovery(shard4) : LSSCleaner initialized for recovery
Shard shards/shard4(4) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_7/recovery
Shard shards/shard4(4) : Map plasma instance (test.mvcc.TestShardMemLeak_7) to LSS (test.mvcc.TestShardMemLeak_7) and RecoveryLSS (test.mvcc.TestShardMemLeak_7/recovery)
Shard shards/shard4(4) : Assign plasmaId 2 to instance test.mvcc.TestShardMemLeak_7
Shard shards/shard4(4) : Add instance test.mvcc.TestShardMemLeak_7 to Shard shards/shard4
Shard shards/shard4(4) : Add instance test.mvcc.TestShardMemLeak_7 to LSS test.mvcc.TestShardMemLeak_7
Shard shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardMemLeak_7/recovery], Data log [test.mvcc.TestShardMemLeak_7], Shared [false]
LSS test.mvcc.TestShardMemLeak_7/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.749µs]
Shard shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardMemLeak_7/recovery], Data log [test.mvcc.TestShardMemLeak_7], Shared [false]. Built [0] plasmas, took [119.901µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard4(4) : Instance added to swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_7 started
Shard shards/shard5(5) : Shard Created Successfully
Shard shards/shard5(5) : metadata saved successfully
LSS shards/shard5/data(shard5) : LSSCleaner initialized
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data
LSS shards/shard5/data/recovery(shard5) : LSSCleaner initialized for recovery
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=shards/shard5/data/recovery
Shard shards/shard5(5) : Map plasma instance (test.mvcc.TestShardMemLeak_8) to LSS (shards/shard5/data) and RecoveryLSS (shards/shard5/data/recovery)
Shard shards/shard5(5) : Assign plasmaId 1 to instance test.mvcc.TestShardMemLeak_8
Shard shards/shard5(5) : Add instance test.mvcc.TestShardMemLeak_8 to Shard shards/shard5
Shard shards/shard5(5) : Add instance test.mvcc.TestShardMemLeak_8 to LSS shards/shard5/data
Shard shards/shard5(5) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]
LSS shards/shard5/data/recovery(shard5) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [111.434µs]
LSS shards/shard5/data(shard5) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from data and recovery log, took [314.888µs]
Shard shards/shard5(5) : Shard.doRecovery: Done recovery. Recovery log [shards/shard5/data/recovery], Data log [shards/shard5/data], Shared [true]. Built [0] plasmas, took [336.387µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard5(5) : Swapper started
Shard shards/shard5(5) : Instance added to swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_8 started
Shard shards/shard5(5) : metadata saved successfully
LSS test.mvcc.TestShardMemLeak_9(shard5) : LSSCleaner initialized
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_9
LSS test.mvcc.TestShardMemLeak_9/recovery(shard5) : LSSCleaner initialized for recovery
Shard shards/shard5(5) : LSSCtx Created Successfully. Path=test.mvcc.TestShardMemLeak_9/recovery
Shard shards/shard5(5) : Map plasma instance (test.mvcc.TestShardMemLeak_9) to LSS (test.mvcc.TestShardMemLeak_9) and RecoveryLSS (test.mvcc.TestShardMemLeak_9/recovery)
Shard shards/shard5(5) : Assign plasmaId 2 to instance test.mvcc.TestShardMemLeak_9
Shard shards/shard5(5) : Add instance test.mvcc.TestShardMemLeak_9 to Shard shards/shard5
Shard shards/shard5(5) : Add instance test.mvcc.TestShardMemLeak_9 to LSS test.mvcc.TestShardMemLeak_9
Shard shards/shard5(5) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestShardMemLeak_9/recovery], Data log [test.mvcc.TestShardMemLeak_9], Shared [false]
LSS test.mvcc.TestShardMemLeak_9/recovery(shard5) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard5(5) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.717µs]
Shard shards/shard5(5) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestShardMemLeak_9/recovery], Data log [test.mvcc.TestShardMemLeak_9], Shared [false]. Built [0] plasmas, took [104.776µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard5(5) : Instance added to swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_9 started
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_0 stopped
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_0 closed
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_1 stopped
LSS test.mvcc.TestShardMemLeak_1(shard1) : LSSCleaner stopped
LSS test.mvcc.TestShardMemLeak_1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_1 closed
Shard shards/shard2(2) : Instance removed from swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_2 stopped
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_2 closed
Shard shards/shard2(2) : Instance removed from swapper : shards/shard2
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_3 stopped
LSS test.mvcc.TestShardMemLeak_3(shard2) : LSSCleaner stopped
LSS test.mvcc.TestShardMemLeak_3/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_3 closed
Shard shards/shard3(3) : Instance removed from swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_4 stopped
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_4 closed
Shard shards/shard3(3) : Instance removed from swapper : shards/shard3
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_5 stopped
LSS test.mvcc.TestShardMemLeak_5(shard3) : LSSCleaner stopped
LSS test.mvcc.TestShardMemLeak_5/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_5 closed
Shard shards/shard4(4) : Instance removed from swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_6 stopped
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_6 closed
Shard shards/shard4(4) : Instance removed from swapper : shards/shard4
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_7 stopped
LSS test.mvcc.TestShardMemLeak_7(shard4) : LSSCleaner stopped
LSS test.mvcc.TestShardMemLeak_7/recovery(shard4) : LSSCleaner stopped
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_7 closed
Shard shards/shard5(5) : Instance removed from swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_8 stopped
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_8 closed
Shard shards/shard5(5) : Instance removed from swapper : shards/shard5
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_9 stopped
LSS test.mvcc.TestShardMemLeak_9(shard5) : LSSCleaner stopped
LSS test.mvcc.TestShardMemLeak_9/recovery(shard5) : LSSCleaner stopped
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_9 closed
Destroy instances matching prefix test.mvcc.TestShardMemLeak in . ...
Shard shards/shard1(1) : destroying instance test.mvcc.TestShardMemLeak_0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestShardMemLeak_0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_0 successfully destroyed
Shard shards/shard1(1) : destroying instance test.mvcc.TestShardMemLeak_1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.mvcc.TestShardMemLeak_1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestShardMemLeak_1 successfully destroyed
Shard shards/shard2(2) : destroying instance test.mvcc.TestShardMemLeak_2 ...
Shard shards/shard2(2) : removed plasmaId 1 for instance test.mvcc.TestShardMemLeak_2 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_2 successfully destroyed
Shard shards/shard2(2) : destroying instance test.mvcc.TestShardMemLeak_3 ...
Shard shards/shard2(2) : removed plasmaId 2 for instance test.mvcc.TestShardMemLeak_3 ...
Shard shards/shard2(2) : metadata saved successfully
Shard shards/shard2(2) : instance test.mvcc.TestShardMemLeak_3 successfully destroyed
Shard shards/shard3(3) : destroying instance test.mvcc.TestShardMemLeak_5 ...
Shard shards/shard3(3) : removed plasmaId 2 for instance test.mvcc.TestShardMemLeak_5 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_5 successfully destroyed
Shard shards/shard3(3) : destroying instance test.mvcc.TestShardMemLeak_4 ...
Shard shards/shard3(3) : removed plasmaId 1 for instance test.mvcc.TestShardMemLeak_4 ...
Shard shards/shard3(3) : metadata saved successfully
Shard shards/shard3(3) : instance test.mvcc.TestShardMemLeak_4 successfully destroyed
Shard shards/shard4(4) : destroying instance test.mvcc.TestShardMemLeak_6 ...
Shard shards/shard4(4) : removed plasmaId 1 for instance test.mvcc.TestShardMemLeak_6 ...
Shard shards/shard4(4) : metadata saved successfully
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_6 successfully destroyed
Shard shards/shard4(4) : destroying instance test.mvcc.TestShardMemLeak_7 ...
Shard shards/shard4(4) : removed plasmaId 2 for instance test.mvcc.TestShardMemLeak_7 ...
Shard shards/shard4(4) : metadata saved successfully
Shard shards/shard4(4) : instance test.mvcc.TestShardMemLeak_7 successfully destroyed
Shard shards/shard5(5) : destroying instance test.mvcc.TestShardMemLeak_8 ...
Shard shards/shard5(5) : removed plasmaId 1 for instance test.mvcc.TestShardMemLeak_8 ...
Shard shards/shard5(5) : metadata saved successfully
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_8 successfully destroyed
Shard shards/shard5(5) : destroying instance test.mvcc.TestShardMemLeak_9 ...
Shard shards/shard5(5) : removed plasmaId 2 for instance test.mvcc.TestShardMemLeak_9 ...
Shard shards/shard5(5) : metadata saved successfully
Shard shards/shard5(5) : instance test.mvcc.TestShardMemLeak_9 successfully destroyed
All instances matching prefix test.mvcc.TestShardMemLeak destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instance closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
LSS shards/shard2/data(shard2) : LSSCleaner stopped
LSS shards/shard2/data/recovery(shard2) : LSSCleaner stopped
Shard shards/shard2(2) : All instance closed
Shard shards/shard2(2) : Shutdown completed
Shard shards/shard3(3) : Swapper stopped
Shard shards/shard3(3) : All daemons stopped
LSS shards/shard3/data(shard3) : LSSCleaner stopped
LSS shards/shard3/data/recovery(shard3) : LSSCleaner stopped
Shard shards/shard3(3) : All instance closed
Shard shards/shard3(3) : Shutdown completed
Shard shards/shard4(4) : Swapper stopped
Shard shards/shard4(4) : All daemons stopped
LSS shards/shard4/data(shard4) : LSSCleaner stopped
LSS shards/shard4/data/recovery(shard4) : LSSCleaner stopped
Shard shards/shard4(4) : All instance closed
Shard shards/shard4(4) : Shutdown completed
Shard shards/shard5(5) : Swapper stopped
Shard shards/shard5(5) : All daemons stopped
LSS shards/shard5/data(shard5) : LSSCleaner stopped
LSS shards/shard5/data/recovery(shard5) : LSSCleaner stopped
Shard shards/shard5(5) : All instance closed
Shard shards/shard5(5) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
Shard shards/shard2(2) : Swapper stopped
Shard shards/shard2(2) : All daemons stopped
Shard shards/shard2(2) : All instances closed
Shard shards/shard2(2) : All instances destroyed
Shard shards/shard2(2) : Shard Destroyed Successfully
Shard shards/shard3(3) : Swapper stopped
Shard shards/shard3(3) : All daemons stopped
Shard shards/shard3(3) : All instances closed
Shard shards/shard3(3) : All instances destroyed
Shard shards/shard3(3) : Shard Destroyed Successfully
Shard shards/shard4(4) : Swapper stopped
Shard shards/shard4(4) : All daemons stopped
Shard shards/shard4(4) : All instances closed
Shard shards/shard4(4) : All instances destroyed
Shard shards/shard4(4) : Shard Destroyed Successfully
Shard shards/shard5(5) : Swapper stopped
Shard shards/shard5(5) : All daemons stopped
Shard shards/shard5(5) : All instances closed
Shard shards/shard5(5) : All instances destroyed
Shard shards/shard5(5) : Shard Destroyed Successfully
--- PASS: TestShardMemLeak (0.85s)
=== RUN   TestSMRSimple
----------- running TestSMRSimple
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestSMRSimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRSimple
LSS test.mvcc.TestSMRSimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRSimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestSMRSimple) to LSS (test.mvcc.TestSMRSimple) and RecoveryLSS (test.mvcc.TestSMRSimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestSMRSimple
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRSimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRSimple to LSS test.mvcc.TestSMRSimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestSMRSimple/recovery], Data log [test.mvcc.TestSMRSimple], Shared [false]
LSS test.mvcc.TestSMRSimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.711µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestSMRSimple/recovery], Data log [test.mvcc.TestSMRSimple], Shared [false]. Built [0] plasmas, took [118.296µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestSMRSimple(shard1) : all daemons started
LSS test.mvcc.TestSMRSimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple started
{
"memory_quota":         1099511627776,
"count":                1600,
"compacts":             9,
"purges":               0,
"splits":               2,
"merges":               1,
"inserts":              1600,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          23422,
"memory_size_index":    84,
"allocated":            246204,
"freed":                222782,
"reclaimed":            222782,
"reclaim_pending":      0,
"reclaim_list_size":    222782,
"reclaim_list_count":   9,
"reclaim_threshold":    50,
"allocated_index":      184,
"freed_index":          100,
"reclaimed_index":      100,
"num_pages":            2,
"items_count":          0,
"total_records":        390,
"num_rec_allocs":       4000,
"num_rec_frees":        3610,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       56000,
"lss_data_size":        8281,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        16621,
"est_recovery_size":    8678,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1600,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               3,
"gcSn":                 2,
"gcSnIntervals":       "[0 3]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            3,
"num_free_wctxs":       1,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           105600,
"page_cnt":             11,
"page_itemcnt":         2400,
"avg_item_size":        44,
"avg_page_size":        9600,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":35924,
"page_bytes_compressed":8387,
"compression_ratio":    4.28330,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    8080,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          38900,
    "lss_data_size":        16553,
    "lss_used_space":       8192,
    "lss_disk_size":        8192,
    "lss_fragmentation":    0,
    "lss_num_reads":        0,
    "lss_read_bs":          0,
    "lss_blk_read_bs":      0,
    "bytes_written":        8192,
    "bytes_incoming":       56000,
    "write_amp":            0.00,
    "write_amp_avg":        0.07,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      4096,
    "num_sctxs":            13,
    "num_free_sctxs":       2,
    "num_swapperWriter":    32
  }
}
LSS test.mvcc.TestSMRSimple(shard1) : all daemons stopped
LSS test.mvcc.TestSMRSimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple stopped
LSS test.mvcc.TestSMRSimple(shard1) : LSSCleaner stopped
LSS test.mvcc.TestSMRSimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple closed
Reopening database...
LSS test.mvcc.TestSMRSimple(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRSimple
LSS test.mvcc.TestSMRSimple/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRSimple/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestSMRSimple) to LSS (test.mvcc.TestSMRSimple) and RecoveryLSS (test.mvcc.TestSMRSimple/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestSMRSimple
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRSimple to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRSimple to LSS test.mvcc.TestSMRSimple
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestSMRSimple/recovery], Data log [test.mvcc.TestSMRSimple], Shared [false]
LSS test.mvcc.TestSMRSimple/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [8192]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [4096] took [164.347µs]
LSS test.mvcc.TestSMRSimple(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [16384] replayOffset [4096]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [475.387µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestSMRSimple/recovery], Data log [test.mvcc.TestSMRSimple], Shared [false]. Built [1] plasmas, took [555.285µs]
Plasma: doInit: data UsedSpace 16384 recovery UsedSpace 8850
LSS test.mvcc.TestSMRSimple(shard1) : all daemons started
LSS test.mvcc.TestSMRSimple/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple started
LSS test.mvcc.TestSMRSimple(shard1) : all daemons stopped
LSS test.mvcc.TestSMRSimple/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple stopped
LSS test.mvcc.TestSMRSimple(shard1) : LSSCleaner stopped
LSS test.mvcc.TestSMRSimple/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestSMRSimple ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestSMRSimple ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestSMRSimple successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSMRSimple (1.05s)
=== RUN   TestSMRConcurrent
----------- running TestSMRConcurrent
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestSMRConcurrent(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRConcurrent
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRConcurrent/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestSMRConcurrent) to LSS (test.mvcc.TestSMRConcurrent) and RecoveryLSS (test.mvcc.TestSMRConcurrent/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestSMRConcurrent
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRConcurrent to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRConcurrent to LSS test.mvcc.TestSMRConcurrent
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestSMRConcurrent/recovery], Data log [test.mvcc.TestSMRConcurrent], Shared [false]
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.301µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestSMRConcurrent/recovery], Data log [test.mvcc.TestSMRConcurrent], Shared [false]. Built [0] plasmas, took [79.176µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestSMRConcurrent(shard1) : all daemons started
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent started
{
"memory_quota":         5242880,
"count":                1000000,
"compacts":             5145,
"purges":               0,
"splits":               3055,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    808,
"split_conflicts":      759,
"merge_conflicts":      0,
"insert_conflicts":     3106,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    548,
"memory_size":          4939872,
"memory_size_index":    223308,
"allocated":            202652324,
"freed":                197712452,
"reclaimed":            195546048,
"reclaim_pending":      2166404,
"reclaim_list_size":    22446044,
"reclaim_list_count":   1995,
"reclaim_threshold":    50,
"allocated_index":      223308,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3056,
"items_count":          1000000,
"total_records":        1068910,
"num_rec_allocs":       4594320,
"num_rec_frees":        4511889,
"num_rec_swapout":      2821734,
"num_rec_swapin":       1835255,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       28000000,
"lss_data_size":        19700495,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        58758578,
"est_recovery_size":    3351222,
"lss_num_reads":        28167,
"lss_read_bs":          50183968,
"lss_blk_read_bs":      149028864,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           997612,
"cache_misses":         2388,
"cache_hit_ratio":      0.99891,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.07712,
"mvcc_purge_ratio":     1.06891,
"currSn":               13,
"gcSn":                 12,
"gcSnIntervals":       "[0 13]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            10,
"num_free_wctxs":       4,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           149811760,
"page_cnt":             18145,
"page_itemcnt":         5350420,
"avg_item_size":        28,
"avg_page_size":        8256,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":88247174,
"page_bytes_compressed":58545600,
"compression_ratio":    1.50732,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":369536,
"memory_size_delta":    1840864,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1074132,
    "lss_data_size":        20552047,
    "lss_used_space":       70156288,
    "lss_disk_size":        70156288,
    "lss_fragmentation":    70,
    "lss_num_reads":        28167,
    "lss_read_bs":          50183968,
    "lss_blk_read_bs":      149028864,
    "bytes_written":        70156288,
    "bytes_incoming":       28000000,
    "write_amp":            0.00,
    "write_amp_avg":        2.39,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      67006464,
    "num_sctxs":            29,
    "num_free_sctxs":       7,
    "num_swapperWriter":    32
  }
}
1000000 items insert took 4.318049834s -> 231586.02573922958 items/s
Starting update iteration  0
{
"memory_quota":         5242880,
"count":                3000000,
"compacts":             7478,
"purges":               0,
"splits":               3849,
"merges":               0,
"inserts":              3000000,
"deletes":              0,
"compact_conflicts":    941,
"split_conflicts":      804,
"merge_conflicts":      0,
"insert_conflicts":     3819,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    1407,
"memory_size":          4951892,
"memory_size_index":    282540,
"allocated":            579163456,
"freed":                574211564,
"reclaimed":            571749824,
"reclaim_pending":      2461740,
"reclaim_list_size":    20213016,
"reclaim_list_count":   2699,
"reclaim_threshold":    50,
"allocated_index":      282540,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3850,
"items_count":          1000000,
"total_records":        1216182,
"num_rec_allocs":       11796168,
"num_rec_frees":        11732731,
"num_rec_swapout":      9365432,
"num_rec_swapin":       8212687,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       96000000,
"lss_data_size":        21490207,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        179110746,
"est_recovery_size":    12442474,
"lss_num_reads":        122835,
"lss_read_bs":          179172894,
"lss_blk_read_bs":      657461248,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           2995493,
"cache_misses":         4507,
"cache_hit_ratio":      0.99923,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.05216,
"mvcc_purge_ratio":     1.21618,
"currSn":               28,
"gcSn":                 27,
"gcSnIntervals":       "[0 28]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            10,
"num_free_wctxs":       1,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           499061404,
"page_cnt":             57997,
"page_itemcnt":         15907660,
"avg_item_size":        31,
"avg_page_size":        8604,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":317493654,
"page_bytes_compressed":178344112,
"compression_ratio":    1.78023,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":484608,
"memory_size_delta":    1907416,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1185185,
    "lss_data_size":        22538763,
    "lss_used_space":       203264000,
    "lss_disk_size":        203264000,
    "lss_fragmentation":    88,
    "lss_num_reads":        122835,
    "lss_read_bs":          179172894,
    "lss_blk_read_bs":      657461248,
    "bytes_written":        203264000,
    "bytes_incoming":       96000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.99,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      190676992,
    "num_sctxs":            29,
    "num_free_sctxs":       4,
    "num_swapperWriter":    32
  }
}
1000000 items update took 8.867869301s -> 112766.65973045332 items/s
Starting update iteration  1
{
"memory_quota":         5242880,
"count":                5000000,
"compacts":             9015,
"purges":               0,
"splits":               3849,
"merges":               0,
"inserts":              5000000,
"deletes":              0,
"compact_conflicts":    1081,
"split_conflicts":      804,
"merge_conflicts":      0,
"insert_conflicts":     4457,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    2255,
"memory_size":          4919212,
"memory_size_index":    282540,
"allocated":            972258432,
"freed":                967339220,
"reclaimed":            966922044,
"reclaim_pending":      417176,
"reclaim_list_size":    25560776,
"reclaim_list_count":   4173,
"reclaim_threshold":    50,
"allocated_index":      282540,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3850,
"items_count":          1000000,
"total_records":        1208116,
"num_rec_allocs":       18638694,
"num_rec_frees":        18576216,
"num_rec_swapout":      15799936,
"num_rec_swapin":       14654298,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       164000000,
"lss_data_size":        21157799,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        296556503,
"est_recovery_size":    21646486,
"lss_num_reads":        220208,
"lss_read_bs":          302036060,
"lss_blk_read_bs":      1178046464,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           4993962,
"cache_misses":         6038,
"cache_hit_ratio":      0.99922,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.05172,
"mvcc_purge_ratio":     1.20812,
"currSn":               41,
"gcSn":                 40,
"gcSnIntervals":       "[0 41]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            10,
"num_free_wctxs":       8,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           759103260,
"page_cnt":             80581,
"page_itemcnt":         21274416,
"avg_item_size":        35,
"avg_page_size":        9420,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":575306386,
"page_bytes_compressed":295229721,
"compression_ratio":    1.94867,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":485248,
"memory_size_delta":    1907576,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1252742,
    "lss_data_size":        22199335,
    "lss_used_space":       333238272,
    "lss_disk_size":        333238272,
    "lss_fragmentation":    93,
    "lss_num_reads":        220208,
    "lss_read_bs":          302036060,
    "lss_blk_read_bs":      1178046464,
    "bytes_written":        333238272,
    "bytes_incoming":       164000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.90,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      312262656,
    "num_sctxs":            29,
    "num_free_sctxs":       11,
    "num_swapperWriter":    32
  }
}
1000000 items update took 8.482388804s -> 117891.31848429711 items/s
Starting update iteration  2
{
"memory_quota":         5242880,
"count":                7000000,
"compacts":             10989,
"purges":               0,
"splits":               3849,
"merges":               0,
"inserts":              7000000,
"deletes":              0,
"compact_conflicts":    1240,
"split_conflicts":      804,
"merge_conflicts":      0,
"insert_conflicts":     5080,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    3086,
"memory_size":          5429156,
"memory_size_index":    282540,
"allocated":            1347436752,
"freed":                1342007596,
"reclaimed":            1339900192,
"reclaim_pending":      2107404,
"reclaim_list_size":    33669080,
"reclaim_list_count":   6192,
"reclaim_threshold":    50,
"allocated_index":      282540,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3850,
"items_count":          1000000,
"total_records":        1211770,
"num_rec_allocs":       25129344,
"num_rec_frees":        25058147,
"num_rec_swapout":      21859937,
"num_rec_swapin":       20719364,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       232000000,
"lss_data_size":        21064988,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        406525827,
"est_recovery_size":    30081030,
"lss_num_reads":        309636,
"lss_read_bs":          416981668,
"lss_blk_read_bs":      1658077184,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           6992001,
"cache_misses":         7999,
"cache_hit_ratio":      0.99900,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.05875,
"mvcc_purge_ratio":     1.21177,
"currSn":               55,
"gcSn":                 54,
"gcSnIntervals":       "[0 55]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            10,
"num_free_wctxs":       2,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           803228412,
"page_cnt":             78707,
"page_itemcnt":         20195469,
"avg_item_size":        39,
"avg_page_size":        10205,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":817638712,
"page_bytes_compressed":404690399,
"compression_ratio":    2.02041,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":482816,
"memory_size_delta":    2051760,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1264467,
    "lss_data_size":        22094804,
    "lss_used_space":       455872512,
    "lss_disk_size":        455872512,
    "lss_fragmentation":    95,
    "lss_num_reads":        309636,
    "lss_read_bs":          416981668,
    "lss_blk_read_bs":      1658077184,
    "bytes_written":        455872512,
    "bytes_incoming":       232000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.83,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      425459712,
    "num_sctxs":            29,
    "num_free_sctxs":       5,
    "num_swapperWriter":    32
  }
}
1000000 items update took 8.24287822s -> 121316.84750281317 items/s
Starting update iteration  3
{
"memory_quota":         5242880,
"count":                9000000,
"compacts":             12628,
"purges":               0,
"splits":               3849,
"merges":               0,
"inserts":              9000000,
"deletes":              0,
"compact_conflicts":    1390,
"split_conflicts":      804,
"merge_conflicts":      0,
"insert_conflicts":     5718,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    3759,
"memory_size":          8108540,
"memory_size_index":    282540,
"allocated":            1768322504,
"freed":                1760213964,
"reclaimed":            1755398784,
"reclaim_pending":      4815180,
"reclaim_list_size":    41271608,
"reclaim_list_count":   7864,
"reclaim_threshold":    50,
"allocated_index":      282540,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3850,
"items_count":          1000000,
"total_records":        1196090,
"num_rec_allocs":       32557617,
"num_rec_frees":        32427830,
"num_rec_swapout":      28731830,
"num_rec_swapin":       27665527,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       300000000,
"lss_data_size":        20968748,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        451182390,
"est_recovery_size":    39133634,
"lss_num_reads":        400038,
"lss_read_bs":          549582393,
"lss_blk_read_bs":      2153119744,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           8990391,
"cache_misses":         9609,
"cache_hit_ratio":      0.99925,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.10851,
"mvcc_purge_ratio":     1.19609,
"currSn":               68,
"gcSn":                 67,
"gcSnIntervals":       "[0 68]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            15,
"num_free_wctxs":       1,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           787861452,
"page_cnt":             76237,
"page_itemcnt":         19733256,
"avg_item_size":        39,
"avg_page_size":        10334,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":1095719884,
"page_bytes_compressed":531139308,
"compression_ratio":    2.06296,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":451584,
"memory_size_delta":    1887800,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1656650,
    "lss_data_size":        21944800,
    "lss_used_space":       502998331,
    "lss_disk_size":        527990784,
    "lss_fragmentation":    95,
    "lss_num_reads":        454061,
    "lss_read_bs":          641978024,
    "lss_blk_read_bs":      2245869568,
    "bytes_written":        595099648,
    "bytes_incoming":       300000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.85,
    "lss_gc_num_reads":     74953,
    "lss_gc_reads_bs":      135672178,
    "lss_blk_gc_reads_bs":  214691840,
    "lss_gc_status":        "frag 95, data: 21093940, used: 427556864 log:(0 - 427556864)",
    "lss_head_offset":      91445654,
    "lss_tail_offset":      556060672,
    "num_sctxs":            30,
    "num_free_sctxs":       4,
    "num_swapperWriter":    32
  }
}
1000000 items update took 8.845557166s -> 113051.10364825152 items/s
Starting update iteration  4
{
"memory_quota":         5242880,
"count":                11000000,
"compacts":             13820,
"purges":               0,
"splits":               3849,
"merges":               0,
"inserts":              11000000,
"deletes":              0,
"compact_conflicts":    1495,
"split_conflicts":      804,
"merge_conflicts":      0,
"insert_conflicts":     6320,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    4454,
"memory_size":          10160628,
"memory_size_index":    282540,
"allocated":            2221188332,
"freed":                2211027704,
"reclaimed":            2208760852,
"reclaim_pending":      2266852,
"reclaim_list_size":    46162184,
"reclaim_list_count":   9027,
"reclaim_threshold":    50,
"allocated_index":      282540,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3850,
"items_count":          1000000,
"total_records":        1196140,
"num_rec_allocs":       40620857,
"num_rec_frees":        40454449,
"num_rec_swapout":      36275902,
"num_rec_swapin":       35246170,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       368000000,
"lss_data_size":        20958713,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        519686092,
"est_recovery_size":    49329460,
"lss_num_reads":        499956,
"lss_read_bs":          695223479,
"lss_blk_read_bs":      2699120640,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           10989212,
"cache_misses":         10788,
"cache_hit_ratio":      0.99941,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.13912,
"mvcc_purge_ratio":     1.19614,
"currSn":               82,
"gcSn":                 81,
"gcSnIntervals":       "[0 82]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            15,
"num_free_wctxs":       3,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           866875552,
"page_cnt":             83693,
"page_itemcnt":         21727006,
"avg_item_size":        39,
"avg_page_size":        10357,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":1399779460,
"page_bytes_compressed":670502554,
"compression_ratio":    2.08766,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":440320,
"memory_size_delta":    2354840,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          1674156,
    "lss_data_size":        21930377,
    "lss_used_space":       585402517,
    "lss_disk_size":        617070592,
    "lss_fragmentation":    96,
    "lss_num_reads":        610081,
    "lss_read_bs":          861087546,
    "lss_blk_read_bs":      2865680384,
    "bytes_written":        751288320,
    "bytes_incoming":       368000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.91,
    "lss_gc_num_reads":     159779,
    "lss_gc_reads_bs":      265007398,
    "lss_blk_gc_reads_bs":  453410816,
    "lss_gc_status":        "frag 95, data: 21093940, used: 427556864 log:(0 - 427556864)",
    "lss_head_offset":      165195411,
    "lss_tail_offset":      701763584,
    "num_sctxs":            32,
    "num_free_sctxs":       7,
    "num_swapperWriter":    32
  }
}
1000000 items update took 9.178676389s -> 108948.17047896251 items/s
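Several fields in the stats dump above are derived from the raw counters. As a sanity check (an illustrative script, not part of the plasma test suite — the field names simply mirror the JSON keys), the relationships that hold in this dump can be reproduced directly:

```python
# Sketch: recompute derived fields from the raw counters in the stats dump
# above. Illustrative only; not part of the plasma test suite.
stats = {
    "items_count": 1000000,
    "total_records": 1196140,
    "page_bytes": 866875552,
    "page_cnt": 83693,
    "page_itemcnt": 21727006,
    "page_bytes_marshalled": 1399779460,
    "page_bytes_compressed": 670502554,
}

# avg_page_size / avg_item_size: integer averages over resident pages
avg_page_size = stats["page_bytes"] // stats["page_cnt"]      # 10357
avg_item_size = stats["page_bytes"] // stats["page_itemcnt"]  # 39

# compression_ratio: marshalled bytes vs. their compressed size
compression_ratio = stats["page_bytes_marshalled"] / stats["page_bytes_compressed"]

# mvcc_purge_ratio: total records (live + old MVCC versions) per item
mvcc_purge_ratio = stats["total_records"] / stats["items_count"]

print(avg_page_size, avg_item_size,
      round(compression_ratio, 5), round(mvcc_purge_ratio, 5))

# The throughput line follows the same arithmetic:
rate = 1000000 / 9.178676389  # ~108948.17 items/s
```

The computed values (10357, 39, 2.08766, 1.19614) match `avg_page_size`, `avg_item_size`, `compression_ratio`, and `mvcc_purge_ratio` as reported.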
LSS test.mvcc.TestSMRConcurrent(shard1) : logCleaner: starting... frag 95, data: 21093940, used: 427556864 log:(0 - 427556864)
LSS test.mvcc.TestSMRConcurrent(shard1) : logCleaner: completed... frag 96, data: 21037028, used: 536254779, relocated: 18022, retries: 738, skipped: 6420 log:(0 - 702812160) run:1 duration:17964 ms
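The `frag N` figure in the logCleaner messages above is consistent with the fraction of used log space not occupied by live data. A minimal check (the helper name is hypothetical; the actual computation is internal to plasma):

```python
# Sketch: reproduce the "frag N" figure from the logCleaner lines above
# as the percentage of the used log region that is garbage (reclaimable).
def frag_pct(data_size: int, used_space: int) -> int:
    return int(100 * (used_space - data_size) / used_space)

# Values from the starting and completed logCleaner messages:
print(frag_pct(21093940, 427556864))  # starting:  frag 95
print(frag_pct(21037028, 536254779))  # completed: frag 96
```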
LSS test.mvcc.TestSMRConcurrent(shard1) : all daemons stopped
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent stopped
LSS test.mvcc.TestSMRConcurrent(shard1) : LSSCleaner stopped
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent closed
Reopening db....
LSS test.mvcc.TestSMRConcurrent(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRConcurrent
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestSMRConcurrent/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestSMRConcurrent) to LSS (test.mvcc.TestSMRConcurrent) and RecoveryLSS (test.mvcc.TestSMRConcurrent/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestSMRConcurrent
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRConcurrent to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestSMRConcurrent to LSS test.mvcc.TestSMRConcurrent
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestSMRConcurrent/recovery], Data log [test.mvcc.TestSMRConcurrent], Shared [false]
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [50286592]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [702812160] took [1.837982583s]
LSS test.mvcc.TestSMRConcurrent(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [166557381] tailOffset [706478080] replayOffset [702812160]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [1.910273132s]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestSMRConcurrent/recovery], Data log [test.mvcc.TestSMRConcurrent], Shared [false]. Built [1] plasmas, took [1.911422281s]
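Recovery above runs in two phases: the compact recovery log is replayed first, and the `replayOffset` it yields bounds how much of the much larger data log still needs scanning. A rough back-of-the-envelope on the offsets logged above (illustrative only; the actual replay logic is internal to plasma):

```python
# Offsets from the Shard.doRecovery messages above.
head_offset = 166557381    # data log head
tail_offset = 706478080    # data log tail
replay_offset = 702812160  # state already covered by the recovery log

total_live = tail_offset - head_offset   # live span of the data log
to_replay = tail_offset - replay_offset  # span not covered by the recovery log
print(to_replay, total_live, round(100 * to_replay / total_live, 2))
```

Only about 3.7 MB of a ~540 MB live data log (under 1%) remains past `replayOffset`; the 540 MB figure matches the `data UsedSpace 539920699` reported by `doInit` on the next line.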
Plasma: doInit: data UsedSpace 539920699 recovery UsedSpace 49624606
LSS test.mvcc.TestSMRConcurrent(shard1) : all daemons started
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent started
LSS test.mvcc.TestSMRConcurrent(shard1) : all daemons stopped
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent stopped
LSS test.mvcc.TestSMRConcurrent(shard1) : LSSCleaner stopped
LSS test.mvcc.TestSMRConcurrent/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestSMRConcurrent ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestSMRConcurrent ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestSMRConcurrent successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSMRConcurrent (50.15s)
=== RUN   TestSMRComplex
----------- running TestSMRComplex
Start shard recovery from smr_test/shards
Directory smr_test/shards does not exist.  Skip shard recovery.
Shard smr_test/shards/shard1(1) : Shard Created Successfully
Shard smr_test/shards/shard1(1) : metadata saved successfully
LSS smr_test/V_1(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_1
LSS smr_test/V_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_1/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_1) to LSS (smr_test/V_1) and RecoveryLSS (smr_test/V_1/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 1 to instance smr_test/V_1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_1 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_1 to LSS smr_test/V_1
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_1/recovery], Data log [smr_test/V_1], Shared [false]
LSS smr_test/V_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [350.386µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_1/recovery], Data log [smr_test/V_1], Shared [false]. Built [0] plasmas, took [397.522µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_1(shard1) : all daemons started
LSS smr_test/V_1/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Swapper started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 started
Shard smr_test/shards/shard2(2) : Shard Created Successfully
Shard smr_test/shards/shard2(2) : metadata saved successfully
LSS smr_test/V_2(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_2
LSS smr_test/V_2/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_2/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_2) to LSS (smr_test/V_2) and RecoveryLSS (smr_test/V_2/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 1 to instance smr_test/V_2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_2 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_2 to LSS smr_test/V_2
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_2/recovery], Data log [smr_test/V_2], Shared [false]
LSS smr_test/V_2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.31µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_2/recovery], Data log [smr_test/V_2], Shared [false]. Built [0] plasmas, took [63.017µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_2(shard2) : all daemons started
LSS smr_test/V_2/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Swapper started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 started
Shard smr_test/shards/shard3(3) : Shard Created Successfully
Shard smr_test/shards/shard3(3) : metadata saved successfully
LSS smr_test/V_3(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_3
LSS smr_test/V_3/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_3/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_3) to LSS (smr_test/V_3) and RecoveryLSS (smr_test/V_3/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 1 to instance smr_test/V_3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_3 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_3 to LSS smr_test/V_3
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_3/recovery], Data log [smr_test/V_3], Shared [false]
LSS smr_test/V_3/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.779µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_3/recovery], Data log [smr_test/V_3], Shared [false]. Built [0] plasmas, took [80.63µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_3(shard3) : all daemons started
LSS smr_test/V_3/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Swapper started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 started
Shard smr_test/shards/shard4(4) : Shard Created Successfully
Shard smr_test/shards/shard4(4) : metadata saved successfully
LSS smr_test/V_4(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_4
LSS smr_test/V_4/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_4/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_4) to LSS (smr_test/V_4) and RecoveryLSS (smr_test/V_4/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 1 to instance smr_test/V_4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_4 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_4 to LSS smr_test/V_4
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_4/recovery], Data log [smr_test/V_4], Shared [false]
LSS smr_test/V_4/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.735µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_4/recovery], Data log [smr_test/V_4], Shared [false]. Built [0] plasmas, took [65.864µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_4(shard4) : all daemons started
LSS smr_test/V_4/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Swapper started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 started
Shard smr_test/shards/shard1(1) : metadata saved successfully
LSS smr_test/V_5(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_5
LSS smr_test/V_5/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_5/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_5) to LSS (smr_test/V_5) and RecoveryLSS (smr_test/V_5/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 2 to instance smr_test/V_5
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_5 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_5 to LSS smr_test/V_5
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_5/recovery], Data log [smr_test/V_5], Shared [false]
LSS smr_test/V_5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.055µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_5/recovery], Data log [smr_test/V_5], Shared [false]. Built [0] plasmas, took [84.18µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_5(shard1) : all daemons started
LSS smr_test/V_5/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 started
Shard smr_test/shards/shard2(2) : metadata saved successfully
LSS smr_test/V_6(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_6
LSS smr_test/V_6/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_6/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_6) to LSS (smr_test/V_6) and RecoveryLSS (smr_test/V_6/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 2 to instance smr_test/V_6
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_6 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_6 to LSS smr_test/V_6
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_6/recovery], Data log [smr_test/V_6], Shared [false]
LSS smr_test/V_6/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.002µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_6/recovery], Data log [smr_test/V_6], Shared [false]. Built [0] plasmas, took [83.17µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_6(shard2) : all daemons started
LSS smr_test/V_6/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 started
Shard smr_test/shards/shard3(3) : metadata saved successfully
LSS smr_test/V_7(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_7
LSS smr_test/V_7/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_7/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_7) to LSS (smr_test/V_7) and RecoveryLSS (smr_test/V_7/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 2 to instance smr_test/V_7
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_7 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_7 to LSS smr_test/V_7
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_7/recovery], Data log [smr_test/V_7], Shared [false]
LSS smr_test/V_7/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.281µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_7/recovery], Data log [smr_test/V_7], Shared [false]. Built [0] plasmas, took [83.493µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_7(shard3) : all daemons started
LSS smr_test/V_7/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 started
Shard smr_test/shards/shard4(4) : metadata saved successfully
LSS smr_test/V_8(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_8
LSS smr_test/V_8/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_8/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_8) to LSS (smr_test/V_8) and RecoveryLSS (smr_test/V_8/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 2 to instance smr_test/V_8
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_8 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_8 to LSS smr_test/V_8
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_8/recovery], Data log [smr_test/V_8], Shared [false]
LSS smr_test/V_8/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.727µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_8/recovery], Data log [smr_test/V_8], Shared [false]. Built [0] plasmas, took [114.836µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_8(shard4) : all daemons started
LSS smr_test/V_8/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 started
Shard smr_test/shards/shard1(1) : metadata saved successfully
LSS smr_test/V_9(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_9
LSS smr_test/V_9/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_9/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_9) to LSS (smr_test/V_9) and RecoveryLSS (smr_test/V_9/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 3 to instance smr_test/V_9
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_9 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_9 to LSS smr_test/V_9
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_9/recovery], Data log [smr_test/V_9], Shared [false]
LSS smr_test/V_9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.876µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_9/recovery], Data log [smr_test/V_9], Shared [false]. Built [0] plasmas, took [98.308µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_9(shard1) : all daemons started
LSS smr_test/V_9/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 started
Shard smr_test/shards/shard2(2) : metadata saved successfully
LSS smr_test/V_10(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_10
LSS smr_test/V_10/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_10/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_10) to LSS (smr_test/V_10) and RecoveryLSS (smr_test/V_10/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 3 to instance smr_test/V_10
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_10 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_10 to LSS smr_test/V_10
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_10/recovery], Data log [smr_test/V_10], Shared [false]
LSS smr_test/V_10/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.437µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_10/recovery], Data log [smr_test/V_10], Shared [false]. Built [0] plasmas, took [84.023µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_10(shard2) : all daemons started
LSS smr_test/V_10/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 started
Shard smr_test/shards/shard3(3) : metadata saved successfully
LSS smr_test/V_11(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_11
LSS smr_test/V_11/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_11/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_11) to LSS (smr_test/V_11) and RecoveryLSS (smr_test/V_11/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 3 to instance smr_test/V_11
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_11 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_11 to LSS smr_test/V_11
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_11/recovery], Data log [smr_test/V_11], Shared [false]
LSS smr_test/V_11/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.034µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_11/recovery], Data log [smr_test/V_11], Shared [false]. Built [0] plasmas, took [98.874µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_11(shard3) : all daemons started
LSS smr_test/V_11/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 started
Shard smr_test/shards/shard4(4) : metadata saved successfully
LSS smr_test/V_12(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_12
LSS smr_test/V_12/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_12/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_12) to LSS (smr_test/V_12) and RecoveryLSS (smr_test/V_12/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 3 to instance smr_test/V_12
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_12 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_12 to LSS smr_test/V_12
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_12/recovery], Data log [smr_test/V_12], Shared [false]
LSS smr_test/V_12/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.396µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_12/recovery], Data log [smr_test/V_12], Shared [false]. Built [0] plasmas, took [104.895µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_12(shard4) : all daemons started
LSS smr_test/V_12/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 started
Shard smr_test/shards/shard1(1) : metadata saved successfully
LSS smr_test/V_13(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_13
LSS smr_test/V_13/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_13/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_13) to LSS (smr_test/V_13) and RecoveryLSS (smr_test/V_13/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 4 to instance smr_test/V_13
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_13 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_13 to LSS smr_test/V_13
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_13/recovery], Data log [smr_test/V_13], Shared [false]
LSS smr_test/V_13/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.178µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_13/recovery], Data log [smr_test/V_13], Shared [false]. Built [0] plasmas, took [90.388µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_13(shard1) : all daemons started
LSS smr_test/V_13/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 started
Shard smr_test/shards/shard2(2) : metadata saved successfully
LSS smr_test/V_14(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_14
LSS smr_test/V_14/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_14/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_14) to LSS (smr_test/V_14) and RecoveryLSS (smr_test/V_14/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 4 to instance smr_test/V_14
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_14 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_14 to LSS smr_test/V_14
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_14/recovery], Data log [smr_test/V_14], Shared [false]
LSS smr_test/V_14/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.93µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_14/recovery], Data log [smr_test/V_14], Shared [false]. Built [0] plasmas, took [85.809µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_14(shard2) : all daemons started
LSS smr_test/V_14/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 started
Shard smr_test/shards/shard3(3) : metadata saved successfully
LSS smr_test/V_15(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_15
LSS smr_test/V_15/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_15/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_15) to LSS (smr_test/V_15) and RecoveryLSS (smr_test/V_15/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 4 to instance smr_test/V_15
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_15 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_15 to LSS smr_test/V_15
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_15/recovery], Data log [smr_test/V_15], Shared [false]
LSS smr_test/V_15/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.904µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_15/recovery], Data log [smr_test/V_15], Shared [false]. Built [0] plasmas, took [102.137µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_15(shard3) : all daemons started
LSS smr_test/V_15/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 started
Shard smr_test/shards/shard4(4) : metadata saved successfully
LSS smr_test/V_16(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_16
LSS smr_test/V_16/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_16/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_16) to LSS (smr_test/V_16) and RecoveryLSS (smr_test/V_16/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 4 to instance smr_test/V_16
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_16 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_16 to LSS smr_test/V_16
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_16/recovery], Data log [smr_test/V_16], Shared [false]
LSS smr_test/V_16/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [55.956µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_16/recovery], Data log [smr_test/V_16], Shared [false]. Built [0] plasmas, took [101.052µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_16(shard4) : all daemons started
LSS smr_test/V_16/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 started
Shard smr_test/shards/shard1(1) : metadata saved successfully
LSS smr_test/V_17(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_17
LSS smr_test/V_17/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_17/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_17) to LSS (smr_test/V_17) and RecoveryLSS (smr_test/V_17/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 5 to instance smr_test/V_17
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_17 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_17 to LSS smr_test/V_17
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_17/recovery], Data log [smr_test/V_17], Shared [false]
LSS smr_test/V_17/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [49.813µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_17/recovery], Data log [smr_test/V_17], Shared [false]. Built [0] plasmas, took [84.23µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_17(shard1) : all daemons started
LSS smr_test/V_17/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 started
Shard smr_test/shards/shard2(2) : metadata saved successfully
LSS smr_test/V_18(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_18
LSS smr_test/V_18/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_18/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_18) to LSS (smr_test/V_18) and RecoveryLSS (smr_test/V_18/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 5 to instance smr_test/V_18
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_18 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_18 to LSS smr_test/V_18
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_18/recovery], Data log [smr_test/V_18], Shared [false]
LSS smr_test/V_18/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.671µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_18/recovery], Data log [smr_test/V_18], Shared [false]. Built [0] plasmas, took [80.25µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_18(shard2) : all daemons started
LSS smr_test/V_18/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 started
Shard smr_test/shards/shard3(3) : metadata saved successfully
LSS smr_test/V_19(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_19
LSS smr_test/V_19/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_19/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_19) to LSS (smr_test/V_19) and RecoveryLSS (smr_test/V_19/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 5 to instance smr_test/V_19
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_19 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_19 to LSS smr_test/V_19
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_19/recovery], Data log [smr_test/V_19], Shared [false]
LSS smr_test/V_19/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.953µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_19/recovery], Data log [smr_test/V_19], Shared [false]. Built [0] plasmas, took [85.641µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_19(shard3) : all daemons started
LSS smr_test/V_19/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 started
Shard smr_test/shards/shard4(4) : metadata saved successfully
LSS smr_test/V_20(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_20
LSS smr_test/V_20/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_20/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_20) to LSS (smr_test/V_20) and RecoveryLSS (smr_test/V_20/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 5 to instance smr_test/V_20
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_20 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_20 to LSS smr_test/V_20
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_20/recovery], Data log [smr_test/V_20], Shared [false]
LSS smr_test/V_20/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.396µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_20/recovery], Data log [smr_test/V_20], Shared [false]. Built [0] plasmas, took [100.655µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_20(shard4) : all daemons started
LSS smr_test/V_20/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 started
LSS smr_test/V_1(shard1) : all daemons stopped
LSS smr_test/V_1/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 stopped
LSS smr_test/V_1(shard1) : LSSCleaner stopped
LSS smr_test/V_1/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 closed
LSS smr_test/V_2(shard2) : all daemons stopped
LSS smr_test/V_2/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 stopped
LSS smr_test/V_2(shard2) : LSSCleaner stopped
LSS smr_test/V_2/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 closed
LSS smr_test/V_3(shard3) : all daemons stopped
LSS smr_test/V_3/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 stopped
LSS smr_test/V_3(shard3) : LSSCleaner stopped
LSS smr_test/V_3/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 closed
LSS smr_test/V_4(shard4) : all daemons stopped
LSS smr_test/V_4/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 stopped
LSS smr_test/V_4(shard4) : LSSCleaner stopped
LSS smr_test/V_4/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 closed
LSS smr_test/V_5(shard1) : all daemons stopped
LSS smr_test/V_5/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 stopped
LSS smr_test/V_5(shard1) : LSSCleaner stopped
LSS smr_test/V_5/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 closed
LSS smr_test/V_6(shard2) : all daemons stopped
LSS smr_test/V_6/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 stopped
LSS smr_test/V_6(shard2) : LSSCleaner stopped
LSS smr_test/V_6/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 closed
LSS smr_test/V_7(shard3) : all daemons stopped
LSS smr_test/V_7/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 stopped
LSS smr_test/V_7(shard3) : LSSCleaner stopped
LSS smr_test/V_7/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 closed
LSS smr_test/V_8(shard4) : all daemons stopped
LSS smr_test/V_8/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 stopped
LSS smr_test/V_8(shard4) : LSSCleaner stopped
LSS smr_test/V_8/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 closed
LSS smr_test/V_9(shard1) : all daemons stopped
LSS smr_test/V_9/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 stopped
LSS smr_test/V_9(shard1) : LSSCleaner stopped
LSS smr_test/V_9/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 closed
LSS smr_test/V_10(shard2) : all daemons stopped
LSS smr_test/V_10/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 stopped
LSS smr_test/V_10(shard2) : LSSCleaner stopped
LSS smr_test/V_10/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 closed
LSS smr_test/V_11(shard3) : all daemons stopped
LSS smr_test/V_11/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 stopped
LSS smr_test/V_11(shard3) : LSSCleaner stopped
LSS smr_test/V_11/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 closed
LSS smr_test/V_12(shard4) : all daemons stopped
LSS smr_test/V_12/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 stopped
LSS smr_test/V_12(shard4) : LSSCleaner stopped
LSS smr_test/V_12/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 closed
LSS smr_test/V_13(shard1) : all daemons stopped
LSS smr_test/V_13/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 stopped
LSS smr_test/V_13(shard1) : LSSCleaner stopped
LSS smr_test/V_13/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 closed
LSS smr_test/V_14(shard2) : all daemons stopped
LSS smr_test/V_14/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 stopped
LSS smr_test/V_14(shard2) : LSSCleaner stopped
LSS smr_test/V_14/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 closed
LSS smr_test/V_15(shard3) : all daemons stopped
LSS smr_test/V_15/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 stopped
LSS smr_test/V_15(shard3) : LSSCleaner stopped
LSS smr_test/V_15/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 closed
LSS smr_test/V_16(shard4) : all daemons stopped
LSS smr_test/V_16/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 stopped
LSS smr_test/V_16(shard4) : LSSCleaner stopped
LSS smr_test/V_16/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 closed
LSS smr_test/V_17(shard1) : all daemons stopped
LSS smr_test/V_17/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 stopped
LSS smr_test/V_17(shard1) : LSSCleaner stopped
LSS smr_test/V_17/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 closed
LSS smr_test/V_18(shard2) : all daemons stopped
LSS smr_test/V_18/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 stopped
LSS smr_test/V_18(shard2) : LSSCleaner stopped
LSS smr_test/V_18/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 closed
LSS smr_test/V_19(shard3) : all daemons stopped
LSS smr_test/V_19/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 stopped
LSS smr_test/V_19(shard3) : LSSCleaner stopped
LSS smr_test/V_19/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 closed
LSS smr_test/V_20(shard4) : all daemons stopped
LSS smr_test/V_20/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 stopped
LSS smr_test/V_20(shard4) : LSSCleaner stopped
LSS smr_test/V_20/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 closed
reclaimPending: 161 MB tmpBufs: 18 MB
LSS smr_test/V_1(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_1
LSS smr_test/V_1/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_1/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_1) to LSS (smr_test/V_1) and RecoveryLSS (smr_test/V_1/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 1 to instance smr_test/V_1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_1 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_1 to LSS smr_test/V_1
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_1/recovery], Data log [smr_test/V_1], Shared [false]
LSS smr_test/V_1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.26µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_1/recovery], Data log [smr_test/V_1], Shared [false]. Built [0] plasmas, took [106.473µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_1(shard1) : all daemons started
LSS smr_test/V_1/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 started
LSS smr_test/V_2(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_2
LSS smr_test/V_2/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_2/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_2) to LSS (smr_test/V_2) and RecoveryLSS (smr_test/V_2/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 1 to instance smr_test/V_2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_2 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_2 to LSS smr_test/V_2
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_2/recovery], Data log [smr_test/V_2], Shared [false]
LSS smr_test/V_2/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [65.476µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_2/recovery], Data log [smr_test/V_2], Shared [false]. Built [0] plasmas, took [100.568µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_2(shard2) : all daemons started
LSS smr_test/V_2/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 started
LSS smr_test/V_3(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_3
LSS smr_test/V_3/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_3/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_3) to LSS (smr_test/V_3) and RecoveryLSS (smr_test/V_3/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 1 to instance smr_test/V_3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_3 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_3 to LSS smr_test/V_3
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_3/recovery], Data log [smr_test/V_3], Shared [false]
LSS smr_test/V_3/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.981µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_3/recovery], Data log [smr_test/V_3], Shared [false]. Built [0] plasmas, took [96.08µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_3(shard3) : all daemons started
LSS smr_test/V_3/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 started
LSS smr_test/V_4(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_4
LSS smr_test/V_4/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_4/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_4) to LSS (smr_test/V_4) and RecoveryLSS (smr_test/V_4/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 1 to instance smr_test/V_4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_4 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_4 to LSS smr_test/V_4
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_4/recovery], Data log [smr_test/V_4], Shared [false]
LSS smr_test/V_4/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [54.84µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_4/recovery], Data log [smr_test/V_4], Shared [false]. Built [0] plasmas, took [96.42µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_4(shard4) : all daemons started
LSS smr_test/V_4/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 started
LSS smr_test/V_5(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_5
LSS smr_test/V_5/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_5/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_5) to LSS (smr_test/V_5) and RecoveryLSS (smr_test/V_5/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 2 to instance smr_test/V_5
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_5 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_5 to LSS smr_test/V_5
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_5/recovery], Data log [smr_test/V_5], Shared [false]
LSS smr_test/V_5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [435.113µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_5/recovery], Data log [smr_test/V_5], Shared [false]. Built [0] plasmas, took [452.603µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_5(shard1) : all daemons started
LSS smr_test/V_5/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 started
LSS smr_test/V_6(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_6
LSS smr_test/V_6/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_6/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_6) to LSS (smr_test/V_6) and RecoveryLSS (smr_test/V_6/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 2 to instance smr_test/V_6
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_6 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_6 to LSS smr_test/V_6
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_6/recovery], Data log [smr_test/V_6], Shared [false]
LSS smr_test/V_6/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [55.194µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_6/recovery], Data log [smr_test/V_6], Shared [false]. Built [0] plasmas, took [89.742µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_6(shard2) : all daemons started
LSS smr_test/V_6/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 started
LSS smr_test/V_7(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_7
LSS smr_test/V_7/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_7/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_7) to LSS (smr_test/V_7) and RecoveryLSS (smr_test/V_7/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 2 to instance smr_test/V_7
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_7 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_7 to LSS smr_test/V_7
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_7/recovery], Data log [smr_test/V_7], Shared [false]
LSS smr_test/V_7/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.899µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_7/recovery], Data log [smr_test/V_7], Shared [false]. Built [0] plasmas, took [117.673µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_7(shard3) : all daemons started
LSS smr_test/V_7/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 started
LSS smr_test/V_8(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_8
LSS smr_test/V_8/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_8/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_8) to LSS (smr_test/V_8) and RecoveryLSS (smr_test/V_8/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 2 to instance smr_test/V_8
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_8 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_8 to LSS smr_test/V_8
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_8/recovery], Data log [smr_test/V_8], Shared [false]
LSS smr_test/V_8/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [499.209µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_8/recovery], Data log [smr_test/V_8], Shared [false]. Built [0] plasmas, took [536.678µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_8(shard4) : all daemons started
LSS smr_test/V_8/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 started
LSS smr_test/V_9(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_9
LSS smr_test/V_9/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_9/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_9) to LSS (smr_test/V_9) and RecoveryLSS (smr_test/V_9/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 3 to instance smr_test/V_9
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_9 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_9 to LSS smr_test/V_9
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_9/recovery], Data log [smr_test/V_9], Shared [false]
LSS smr_test/V_9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [552.385µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_9/recovery], Data log [smr_test/V_9], Shared [false]. Built [0] plasmas, took [587.849µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_9(shard1) : all daemons started
LSS smr_test/V_9/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 started
LSS smr_test/V_10(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_10
LSS smr_test/V_10/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_10/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_10) to LSS (smr_test/V_10) and RecoveryLSS (smr_test/V_10/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 3 to instance smr_test/V_10
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_10 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_10 to LSS smr_test/V_10
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_10/recovery], Data log [smr_test/V_10], Shared [false]
LSS smr_test/V_10/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.046µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_10/recovery], Data log [smr_test/V_10], Shared [false]. Built [0] plasmas, took [88.181µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_10(shard2) : all daemons started
LSS smr_test/V_10/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 started
LSS smr_test/V_11(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_11
LSS smr_test/V_11/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_11/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_11) to LSS (smr_test/V_11) and RecoveryLSS (smr_test/V_11/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 3 to instance smr_test/V_11
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_11 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_11 to LSS smr_test/V_11
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_11/recovery], Data log [smr_test/V_11], Shared [false]
LSS smr_test/V_11/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.157µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_11/recovery], Data log [smr_test/V_11], Shared [false]. Built [0] plasmas, took [111.157µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_11(shard3) : all daemons started
LSS smr_test/V_11/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 started
LSS smr_test/V_12(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_12
LSS smr_test/V_12/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_12/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_12) to LSS (smr_test/V_12) and RecoveryLSS (smr_test/V_12/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 3 to instance smr_test/V_12
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_12 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_12 to LSS smr_test/V_12
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_12/recovery], Data log [smr_test/V_12], Shared [false]
LSS smr_test/V_12/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.589µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_12/recovery], Data log [smr_test/V_12], Shared [false]. Built [0] plasmas, took [107.329µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_12(shard4) : all daemons started
LSS smr_test/V_12/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 started
LSS smr_test/V_13(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_13
LSS smr_test/V_13/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_13/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_13) to LSS (smr_test/V_13) and RecoveryLSS (smr_test/V_13/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 4 to instance smr_test/V_13
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_13 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_13 to LSS smr_test/V_13
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_13/recovery], Data log [smr_test/V_13], Shared [false]
LSS smr_test/V_13/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [584.645µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_13/recovery], Data log [smr_test/V_13], Shared [false]. Built [0] plasmas, took [620.153µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_13(shard1) : all daemons started
LSS smr_test/V_13/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 started
LSS smr_test/V_14(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_14
LSS smr_test/V_14/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_14/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_14) to LSS (smr_test/V_14) and RecoveryLSS (smr_test/V_14/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 4 to instance smr_test/V_14
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_14 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_14 to LSS smr_test/V_14
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_14/recovery], Data log [smr_test/V_14], Shared [false]
LSS smr_test/V_14/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.581µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_14/recovery], Data log [smr_test/V_14], Shared [false]. Built [0] plasmas, took [85.631µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_14(shard2) : all daemons started
LSS smr_test/V_14/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 started
LSS smr_test/V_15(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_15
LSS smr_test/V_15/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_15/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_15) to LSS (smr_test/V_15) and RecoveryLSS (smr_test/V_15/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 4 to instance smr_test/V_15
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_15 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_15 to LSS smr_test/V_15
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_15/recovery], Data log [smr_test/V_15], Shared [false]
LSS smr_test/V_15/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [639.602µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_15/recovery], Data log [smr_test/V_15], Shared [false]. Built [0] plasmas, took [710.92µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_15(shard3) : all daemons started
LSS smr_test/V_15/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 started
LSS smr_test/V_16(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_16
LSS smr_test/V_16/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_16/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_16) to LSS (smr_test/V_16) and RecoveryLSS (smr_test/V_16/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 4 to instance smr_test/V_16
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_16 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_16 to LSS smr_test/V_16
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_16/recovery], Data log [smr_test/V_16], Shared [false]
LSS smr_test/V_16/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.48µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_16/recovery], Data log [smr_test/V_16], Shared [false]. Built [0] plasmas, took [113.521µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_16(shard4) : all daemons started
LSS smr_test/V_16/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 started
LSS smr_test/V_17(shard1) : LSSCleaner initialized
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_17
LSS smr_test/V_17/recovery(shard1) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard1(1) : LSSCtx Created Successfully. Path=smr_test/V_17/recovery
Shard smr_test/shards/shard1(1) : Map plasma instance (smr_test/V_17) to LSS (smr_test/V_17) and RecoveryLSS (smr_test/V_17/recovery)
Shard smr_test/shards/shard1(1) : Assign plasmaId 5 to instance smr_test/V_17
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_17 to Shard smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : Add instance smr_test/V_17 to LSS smr_test/V_17
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_17/recovery], Data log [smr_test/V_17], Shared [false]
LSS smr_test/V_17/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [72.838µs]
Shard smr_test/shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_17/recovery], Data log [smr_test/V_17], Shared [false]. Built [0] plasmas, took [109.585µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_17(shard1) : all daemons started
LSS smr_test/V_17/recovery(shard1) : all daemons started
Shard smr_test/shards/shard1(1) : Instance added to swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 started
LSS smr_test/V_18(shard2) : LSSCleaner initialized
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_18
LSS smr_test/V_18/recovery(shard2) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard2(2) : LSSCtx Created Successfully. Path=smr_test/V_18/recovery
Shard smr_test/shards/shard2(2) : Map plasma instance (smr_test/V_18) to LSS (smr_test/V_18) and RecoveryLSS (smr_test/V_18/recovery)
Shard smr_test/shards/shard2(2) : Assign plasmaId 5 to instance smr_test/V_18
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_18 to Shard smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : Add instance smr_test/V_18 to LSS smr_test/V_18
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_18/recovery], Data log [smr_test/V_18], Shared [false]
LSS smr_test/V_18/recovery(shard2) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [83.538µs]
Shard smr_test/shards/shard2(2) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_18/recovery], Data log [smr_test/V_18], Shared [false]. Built [0] plasmas, took [131.976µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_18(shard2) : all daemons started
LSS smr_test/V_18/recovery(shard2) : all daemons started
Shard smr_test/shards/shard2(2) : Instance added to swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 started
LSS smr_test/V_19(shard3) : LSSCleaner initialized
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_19
LSS smr_test/V_19/recovery(shard3) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard3(3) : LSSCtx Created Successfully. Path=smr_test/V_19/recovery
Shard smr_test/shards/shard3(3) : Map plasma instance (smr_test/V_19) to LSS (smr_test/V_19) and RecoveryLSS (smr_test/V_19/recovery)
Shard smr_test/shards/shard3(3) : Assign plasmaId 5 to instance smr_test/V_19
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_19 to Shard smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : Add instance smr_test/V_19 to LSS smr_test/V_19
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_19/recovery], Data log [smr_test/V_19], Shared [false]
LSS smr_test/V_19/recovery(shard3) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.458µs]
Shard smr_test/shards/shard3(3) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_19/recovery], Data log [smr_test/V_19], Shared [false]. Built [0] plasmas, took [107.733µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_19(shard3) : all daemons started
LSS smr_test/V_19/recovery(shard3) : all daemons started
Shard smr_test/shards/shard3(3) : Instance added to swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 started
LSS smr_test/V_20(shard4) : LSSCleaner initialized
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_20
LSS smr_test/V_20/recovery(shard4) : LSSCleaner initialized for recovery
Shard smr_test/shards/shard4(4) : LSSCtx Created Successfully. Path=smr_test/V_20/recovery
Shard smr_test/shards/shard4(4) : Map plasma instance (smr_test/V_20) to LSS (smr_test/V_20) and RecoveryLSS (smr_test/V_20/recovery)
Shard smr_test/shards/shard4(4) : Assign plasmaId 5 to instance smr_test/V_20
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_20 to Shard smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : Add instance smr_test/V_20 to LSS smr_test/V_20
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Starting recovery. Recovery log [smr_test/V_20/recovery], Data log [smr_test/V_20], Shared [false]
LSS smr_test/V_20/recovery(shard4) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [57.315µs]
Shard smr_test/shards/shard4(4) : Shard.doRecovery: Done recovery. Recovery log [smr_test/V_20/recovery], Data log [smr_test/V_20], Shared [false]. Built [0] plasmas, took [66.934µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS smr_test/V_20(shard4) : all daemons started
LSS smr_test/V_20/recovery(shard4) : all daemons started
Shard smr_test/shards/shard4(4) : Instance added to swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 started
Plasma: SMR reclaim pending is higher than expected: pending = 4330 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 144, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7451 KB (expected = 1536 KB), wCtxCnt = 17, objCnt 254, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 72 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7692 KB (expected = 1536 KB), wCtxCnt = 15, objCnt 260, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 81 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 6320 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 215, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 5668 KB (expected = 1536 KB), wCtxCnt = 17, objCnt 191, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 72 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7255 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 247, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 5528 KB (expected = 1536 KB), wCtxCnt = 18, objCnt 185, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 68 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 3462 KB (expected = 1536 KB), wCtxCnt = 15, objCnt 116, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 81 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7613 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 257, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 3425 KB (expected = 1536 KB), wCtxCnt = 17, objCnt 114, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 72 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 3945 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 131, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 4052 KB (expected = 1536 KB), wCtxCnt = 22, objCnt 135, changed reclaimList flush threshold from 50 to 1, changed reclaimSize flush threshold from 10240 KB to 55 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 6435 KB (expected = 1536 KB), wCtxCnt = 18, objCnt 218, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 68 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 5494 KB (expected = 1536 KB), wCtxCnt = 17, objCnt 185, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 72 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 4748 KB (expected = 1536 KB), wCtxCnt = 15, objCnt 158, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 81 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 6470 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 221, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7298 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 249, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 6840 KB (expected = 1536 KB), wCtxCnt = 17, objCnt 233, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 72 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 7901 KB (expected = 1536 KB), wCtxCnt = 20, objCnt 265, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 61 KB.
Plasma: SMR reclaim pending is higher than expected: pending = 8833 KB (expected = 1536 KB), wCtxCnt = 16, objCnt 302, changed reclaimList flush threshold from 50 to 2, changed reclaimSize flush threshold from 10240 KB to 76 KB.
LSS smr_test/V_1(shard1) : all daemons stopped
LSS smr_test/V_1/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 stopped
LSS smr_test/V_1(shard1) : LSSCleaner stopped
LSS smr_test/V_1/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 closed
LSS smr_test/V_2(shard2) : all daemons stopped
LSS smr_test/V_2/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 stopped
LSS smr_test/V_2(shard2) : LSSCleaner stopped
LSS smr_test/V_2/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 closed
LSS smr_test/V_3(shard3) : all daemons stopped
LSS smr_test/V_3/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 stopped
LSS smr_test/V_3(shard3) : LSSCleaner stopped
LSS smr_test/V_3/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 closed
LSS smr_test/V_4(shard4) : all daemons stopped
LSS smr_test/V_4/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 stopped
LSS smr_test/V_4(shard4) : LSSCleaner stopped
LSS smr_test/V_4/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 closed
LSS smr_test/V_5(shard1) : all daemons stopped
LSS smr_test/V_5/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 stopped
LSS smr_test/V_5(shard1) : LSSCleaner stopped
LSS smr_test/V_5/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 closed
LSS smr_test/V_6(shard2) : all daemons stopped
LSS smr_test/V_6/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 stopped
LSS smr_test/V_6(shard2) : LSSCleaner stopped
LSS smr_test/V_6/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 closed
LSS smr_test/V_7(shard3) : all daemons stopped
LSS smr_test/V_7/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 stopped
LSS smr_test/V_7(shard3) : LSSCleaner stopped
LSS smr_test/V_7/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 closed
LSS smr_test/V_8(shard4) : all daemons stopped
LSS smr_test/V_8/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 stopped
LSS smr_test/V_8(shard4) : LSSCleaner stopped
LSS smr_test/V_8/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 closed
LSS smr_test/V_9(shard1) : all daemons stopped
LSS smr_test/V_9/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 stopped
LSS smr_test/V_9(shard1) : LSSCleaner stopped
LSS smr_test/V_9/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 closed
LSS smr_test/V_10(shard2) : all daemons stopped
LSS smr_test/V_10/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 stopped
LSS smr_test/V_10(shard2) : LSSCleaner stopped
LSS smr_test/V_10/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 closed
LSS smr_test/V_11(shard3) : all daemons stopped
LSS smr_test/V_11/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 stopped
LSS smr_test/V_11(shard3) : LSSCleaner stopped
LSS smr_test/V_11/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 closed
LSS smr_test/V_12(shard4) : all daemons stopped
LSS smr_test/V_12/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 stopped
LSS smr_test/V_12(shard4) : LSSCleaner stopped
LSS smr_test/V_12/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 closed
LSS smr_test/V_13(shard1) : all daemons stopped
LSS smr_test/V_13/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 stopped
LSS smr_test/V_13(shard1) : LSSCleaner stopped
LSS smr_test/V_13/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 closed
LSS smr_test/V_14(shard2) : all daemons stopped
LSS smr_test/V_14/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 stopped
LSS smr_test/V_14(shard2) : LSSCleaner stopped
LSS smr_test/V_14/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 closed
LSS smr_test/V_15(shard3) : all daemons stopped
LSS smr_test/V_15/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 stopped
LSS smr_test/V_15(shard3) : LSSCleaner stopped
LSS smr_test/V_15/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 closed
LSS smr_test/V_16(shard4) : all daemons stopped
LSS smr_test/V_16/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 stopped
LSS smr_test/V_16(shard4) : LSSCleaner stopped
LSS smr_test/V_16/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 closed
LSS smr_test/V_17(shard1) : all daemons stopped
LSS smr_test/V_17/recovery(shard1) : all daemons stopped
Shard smr_test/shards/shard1(1) : Instance removed from swapper : smr_test/shards/shard1
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 stopped
LSS smr_test/V_17(shard1) : LSSCleaner stopped
LSS smr_test/V_17/recovery(shard1) : LSSCleaner stopped
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 closed
LSS smr_test/V_18(shard2) : all daemons stopped
LSS smr_test/V_18/recovery(shard2) : all daemons stopped
Shard smr_test/shards/shard2(2) : Instance removed from swapper : smr_test/shards/shard2
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 stopped
LSS smr_test/V_18(shard2) : LSSCleaner stopped
LSS smr_test/V_18/recovery(shard2) : LSSCleaner stopped
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 closed
LSS smr_test/V_19(shard3) : all daemons stopped
LSS smr_test/V_19/recovery(shard3) : all daemons stopped
Shard smr_test/shards/shard3(3) : Instance removed from swapper : smr_test/shards/shard3
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 stopped
LSS smr_test/V_19(shard3) : LSSCleaner stopped
LSS smr_test/V_19/recovery(shard3) : LSSCleaner stopped
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 closed
LSS smr_test/V_20(shard4) : all daemons stopped
LSS smr_test/V_20/recovery(shard4) : all daemons stopped
Shard smr_test/shards/shard4(4) : Instance removed from swapper : smr_test/shards/shard4
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 stopped
LSS smr_test/V_20(shard4) : LSSCleaner stopped
LSS smr_test/V_20/recovery(shard4) : LSSCleaner stopped
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 closed
reclaimPending: 5 MB tmpBufs: 18 MB
Shard smr_test/shards/shard1(1) : Swapper stopped
Shard smr_test/shards/shard1(1) : All daemons stopped
Shard smr_test/shards/shard1(1) : All instances closed
Shard smr_test/shards/shard1(1) : Shutdown completed
Shard smr_test/shards/shard2(2) : Swapper stopped
Shard smr_test/shards/shard2(2) : All daemons stopped
Shard smr_test/shards/shard2(2) : All instances closed
Shard smr_test/shards/shard2(2) : Shutdown completed
Shard smr_test/shards/shard3(3) : Swapper stopped
Shard smr_test/shards/shard3(3) : All daemons stopped
Shard smr_test/shards/shard3(3) : All instances closed
Shard smr_test/shards/shard3(3) : Shutdown completed
Shard smr_test/shards/shard4(4) : Swapper stopped
Shard smr_test/shards/shard4(4) : All daemons stopped
Shard smr_test/shards/shard4(4) : All instances closed
Shard smr_test/shards/shard4(4) : Shutdown completed
Shard smr_test/shards/shard1(1) : Swapper stopped
Shard smr_test/shards/shard1(1) : All daemons stopped
Shard smr_test/shards/shard1(1) : All instances closed
Shard smr_test/shards/shard1(1) : destroying instance smr_test/V_1 ...
Shard smr_test/shards/shard1(1) : removed plasmaId 1 for instance smr_test/V_1 ...
Shard smr_test/shards/shard1(1) : metadata saved successfully
Shard smr_test/shards/shard1(1) : instance smr_test/V_1 successfully destroyed
Shard smr_test/shards/shard1(1) : destroying instance smr_test/V_5 ...
Shard smr_test/shards/shard1(1) : removed plasmaId 2 for instance smr_test/V_5 ...
Shard smr_test/shards/shard1(1) : metadata saved successfully
Shard smr_test/shards/shard1(1) : instance smr_test/V_5 successfully destroyed
Shard smr_test/shards/shard1(1) : destroying instance smr_test/V_9 ...
Shard smr_test/shards/shard1(1) : removed plasmaId 3 for instance smr_test/V_9 ...
Shard smr_test/shards/shard1(1) : metadata saved successfully
Shard smr_test/shards/shard1(1) : instance smr_test/V_9 successfully destroyed
Shard smr_test/shards/shard1(1) : destroying instance smr_test/V_13 ...
Shard smr_test/shards/shard1(1) : removed plasmaId 4 for instance smr_test/V_13 ...
Shard smr_test/shards/shard1(1) : metadata saved successfully
Shard smr_test/shards/shard1(1) : instance smr_test/V_13 successfully destroyed
Shard smr_test/shards/shard1(1) : destroying instance smr_test/V_17 ...
Shard smr_test/shards/shard1(1) : removed plasmaId 5 for instance smr_test/V_17 ...
Shard smr_test/shards/shard1(1) : metadata saved successfully
Shard smr_test/shards/shard1(1) : instance smr_test/V_17 successfully destroyed
Shard smr_test/shards/shard1(1) : All instances destroyed
Shard smr_test/shards/shard1(1) : Shard Destroyed Successfully
Shard smr_test/shards/shard2(2) : Swapper stopped
Shard smr_test/shards/shard2(2) : All daemons stopped
Shard smr_test/shards/shard2(2) : All instances closed
Shard smr_test/shards/shard2(2) : destroying instance smr_test/V_14 ...
Shard smr_test/shards/shard2(2) : removed plasmaId 4 for instance smr_test/V_14 ...
Shard smr_test/shards/shard2(2) : metadata saved successfully
Shard smr_test/shards/shard2(2) : instance smr_test/V_14 successfully destroyed
Shard smr_test/shards/shard2(2) : destroying instance smr_test/V_18 ...
Shard smr_test/shards/shard2(2) : removed plasmaId 5 for instance smr_test/V_18 ...
Shard smr_test/shards/shard2(2) : metadata saved successfully
Shard smr_test/shards/shard2(2) : instance smr_test/V_18 successfully destroyed
Shard smr_test/shards/shard2(2) : destroying instance smr_test/V_2 ...
Shard smr_test/shards/shard2(2) : removed plasmaId 1 for instance smr_test/V_2 ...
Shard smr_test/shards/shard2(2) : metadata saved successfully
Shard smr_test/shards/shard2(2) : instance smr_test/V_2 successfully destroyed
Shard smr_test/shards/shard2(2) : destroying instance smr_test/V_6 ...
Shard smr_test/shards/shard2(2) : removed plasmaId 2 for instance smr_test/V_6 ...
Shard smr_test/shards/shard2(2) : metadata saved successfully
Shard smr_test/shards/shard2(2) : instance smr_test/V_6 successfully destroyed
Shard smr_test/shards/shard2(2) : destroying instance smr_test/V_10 ...
Shard smr_test/shards/shard2(2) : removed plasmaId 3 for instance smr_test/V_10 ...
Shard smr_test/shards/shard2(2) : metadata saved successfully
Shard smr_test/shards/shard2(2) : instance smr_test/V_10 successfully destroyed
Shard smr_test/shards/shard2(2) : All instances destroyed
Shard smr_test/shards/shard2(2) : Shard Destroyed Successfully
Shard smr_test/shards/shard3(3) : Swapper stopped
Shard smr_test/shards/shard3(3) : All daemons stopped
Shard smr_test/shards/shard3(3) : All instances closed
Shard smr_test/shards/shard3(3) : destroying instance smr_test/V_3 ...
Shard smr_test/shards/shard3(3) : removed plasmaId 1 for instance smr_test/V_3 ...
Shard smr_test/shards/shard3(3) : metadata saved successfully
Shard smr_test/shards/shard3(3) : instance smr_test/V_3 sucessfully destroyed
Shard smr_test/shards/shard3(3) : destroying instance smr_test/V_7 ...
Shard smr_test/shards/shard3(3) : removed plasmaId 2 for instance smr_test/V_7 ...
Shard smr_test/shards/shard3(3) : metadata saved successfully
Shard smr_test/shards/shard3(3) : instance smr_test/V_7 sucessfully destroyed
Shard smr_test/shards/shard3(3) : destroying instance smr_test/V_11 ...
Shard smr_test/shards/shard3(3) : removed plasmaId 3 for instance smr_test/V_11 ...
Shard smr_test/shards/shard3(3) : metadata saved successfully
Shard smr_test/shards/shard3(3) : instance smr_test/V_11 sucessfully destroyed
Shard smr_test/shards/shard3(3) : destroying instance smr_test/V_15 ...
Shard smr_test/shards/shard3(3) : removed plasmaId 4 for instance smr_test/V_15 ...
Shard smr_test/shards/shard3(3) : metadata saved successfully
Shard smr_test/shards/shard3(3) : instance smr_test/V_15 sucessfully destroyed
Shard smr_test/shards/shard3(3) : destroying instance smr_test/V_19 ...
Shard smr_test/shards/shard3(3) : removed plasmaId 5 for instance smr_test/V_19 ...
Shard smr_test/shards/shard3(3) : metadata saved successfully
Shard smr_test/shards/shard3(3) : instance smr_test/V_19 sucessfully destroyed
Shard smr_test/shards/shard3(3) : All instance destroyed
Shard smr_test/shards/shard3(3) : Shard Destroyed Successfully
Shard smr_test/shards/shard4(4) : Swapper stopped
Shard smr_test/shards/shard4(4) : All daemons stopped
Shard smr_test/shards/shard4(4) : All instances closed
Shard smr_test/shards/shard4(4) : destroying instance smr_test/V_4 ...
Shard smr_test/shards/shard4(4) : removed plasmaId 1 for instance smr_test/V_4 ...
Shard smr_test/shards/shard4(4) : metadata saved successfully
Shard smr_test/shards/shard4(4) : instance smr_test/V_4 successfully destroyed
Shard smr_test/shards/shard4(4) : destroying instance smr_test/V_8 ...
Shard smr_test/shards/shard4(4) : removed plasmaId 2 for instance smr_test/V_8 ...
Shard smr_test/shards/shard4(4) : metadata saved successfully
Shard smr_test/shards/shard4(4) : instance smr_test/V_8 successfully destroyed
Shard smr_test/shards/shard4(4) : destroying instance smr_test/V_12 ...
Shard smr_test/shards/shard4(4) : removed plasmaId 3 for instance smr_test/V_12 ...
Shard smr_test/shards/shard4(4) : metadata saved successfully
Shard smr_test/shards/shard4(4) : instance smr_test/V_12 successfully destroyed
Shard smr_test/shards/shard4(4) : destroying instance smr_test/V_16 ...
Shard smr_test/shards/shard4(4) : removed plasmaId 4 for instance smr_test/V_16 ...
Shard smr_test/shards/shard4(4) : metadata saved successfully
Shard smr_test/shards/shard4(4) : instance smr_test/V_16 successfully destroyed
Shard smr_test/shards/shard4(4) : destroying instance smr_test/V_20 ...
Shard smr_test/shards/shard4(4) : removed plasmaId 5 for instance smr_test/V_20 ...
Shard smr_test/shards/shard4(4) : metadata saved successfully
Shard smr_test/shards/shard4(4) : instance smr_test/V_20 successfully destroyed
Shard smr_test/shards/shard4(4) : All instances destroyed
Shard smr_test/shards/shard4(4) : Shard Destroyed Successfully
--- PASS: TestSMRComplex (107.98s)
=== RUN   TestDGMWithCASConflicts
----------- running TestDGMWithCASConflicts
Start shard recovery from shardsDirectory : shards directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestDGMWithCASConflicts
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.mvcc.TestDGMWithCASConflicts/recovery
Shard shards/shard1(1) : Map plasma instance (test.mvcc.TestDGMWithCASConflicts) to LSS (test.mvcc.TestDGMWithCASConflicts) and RecoveryLSS (test.mvcc.TestDGMWithCASConflicts/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.mvcc.TestDGMWithCASConflicts
Shard shards/shard1(1) : Add instance test.mvcc.TestDGMWithCASConflicts to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.mvcc.TestDGMWithCASConflicts to LSS test.mvcc.TestDGMWithCASConflicts
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.mvcc.TestDGMWithCASConflicts/recovery], Data log [test.mvcc.TestDGMWithCASConflicts], Shared [false]
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.922µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.mvcc.TestDGMWithCASConflicts/recovery], Data log [test.mvcc.TestDGMWithCASConflicts], Shared [false]. Built [0] plasmas, took [105.863µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : all daemons started
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestDGMWithCASConflicts started
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : recoveryCleaner: starting... frag 28, data: 1511846, used: 2101248 log:(0 - 2101248)
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : recoveryCleaner: completed... frag 15, data: 1558950, used: 1840804, relocated: 546, retries: 0, skipped: 481 log:(0 - 3096576) run:1 duration:44 ms
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : logCleaner: starting... frag 56, data: 5394270, used: 12537856 log:(0 - 12537856)
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : logCleaner: completed... frag 51, data: 20348014, used: 42247074, relocated: 479, retries: 0, skipped: 738 log:(0 - 54718464) run:1 duration:2941 ms
{
"memory_quota":         4194304,
"count":                1000000,
"compacts":             7054,
"purges":               0,
"splits":               3289,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    549,
"split_conflicts":      579,
"merge_conflicts":      0,
"insert_conflicts":     9102,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    408,
"memory_size":          2693442,
"memory_size_index":    1931046,
"allocated":            2520828800,
"freed":                2518135358,
"reclaimed":            2472933864,
"reclaim_pending":      45201494,
"reclaim_list_size":    43918402,
"reclaim_list_count":   327,
"reclaim_threshold":    50,
"allocated_index":      1931046,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            3290,
"items_count":          1000000,
"total_records":        1029093,
"num_rec_allocs":       4492441,
"num_rec_frees":        4491939,
"num_rec_swapout":      2581303,
"num_rec_swapin":       1552712,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       542000000,
"lss_data_size":        30621387,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        55471120,
"est_recovery_size":    11120618,
"lss_num_reads":        12755,
"lss_read_bs":          58449663,
"lss_blk_read_bs":      89436160,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           996615,
"cache_misses":         3385,
"cache_hit_ratio":      0.99763,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00049,
"mvcc_purge_ratio":     1.02909,
"currSn":               1,
"gcSn":                 0,
"gcSnIntervals":       "[0 1]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            13,
"num_free_wctxs":       7,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           2742005642,
"page_cnt":             17147,
"page_itemcnt":         5059051,
"avg_item_size":        542,
"avg_page_size":        159911,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":1518801942,
"page_bytes_compressed":82521394,
"compression_ratio":    18.40495,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":420864,
"memory_size_delta":    214640,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          17762368,
    "lss_data_size":        37618479,
    "lss_used_space":       71500591,
    "lss_disk_size":        127492096,
    "lss_fragmentation":    47,
    "lss_num_reads":        42860,
    "lss_read_bs":          114517538,
    "lss_blk_read_bs":      145747968,
    "bytes_written":        127492096,
    "bytes_incoming":       542000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.17,
    "lss_gc_num_reads":     33130,
    "lss_gc_reads_bs":      68759431,
    "lss_blk_gc_reads_bs":  80941056,
    "lss_gc_status":        "frag 51, data: 20348179, used: 42247074 log:(11454272 - 54718464)",
    "lss_head_offset":      30525317,
    "lss_tail_offset":      91918336,
    "num_sctxs":            29,
    "num_free_sctxs":       8,
    "num_swapperWriter":    32
  }
}
1000000 items insert took 6.990325573s -> 143054.8533908751 items/s
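Two of the figures in the stats dump above can be cross-checked by hand. This is a sketch under two assumptions inferred purely from the numbers in this log (not from plasma source code): that `compression_ratio` is `page_bytes_marshalled / page_bytes_compressed`, and that the items/s figure is simply count divided by elapsed seconds.

```python
# Sanity-check derived figures from the plasma stats dump above.

# Assumption: compression_ratio = page_bytes_marshalled / page_bytes_compressed.
page_bytes_marshalled = 1518801942
page_bytes_compressed = 82521394
compression_ratio = page_bytes_marshalled / page_bytes_compressed
print(round(compression_ratio, 5))  # 18.40495, as reported

# Assumption: the throughput line is count / elapsed seconds.
items = 1_000_000
elapsed_s = 6.990325573
rate = items / elapsed_s
print(rate)  # ~143054.85 items/s, as logged
```

The same ratio holds for the later dumps (e.g. 1961461138 / 110818489 rounds to the reported 17.69976), which supports the inferred formula.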
Starting delete phase of iteration  0
{
"memory_quota":         4194304,
"count":                2000000,
"compacts":             9706,
"purges":               0,
"splits":               3653,
"merges":               902,
"inserts":              2000000,
"deletes":              0,
"compact_conflicts":    873,
"split_conflicts":      682,
"merge_conflicts":      0,
"insert_conflicts":     11495,
"delete_conflicts":     0,
"swapin_conflicts":     9,
"persist_conflicts":    1051,
"memory_size":          2424296,
"memory_size_index":    1615274,
"allocated":            3047099954,
"freed":                3044675658,
"reclaimed":            3043064818,
"reclaim_pending":      1610840,
"reclaim_list_size":    1610840,
"reclaim_list_count":   224,
"reclaim_threshold":    50,
"allocated_index":      2144702,
"freed_index":          529428,
"reclaimed_index":      525936,
"num_pages":            2752,
"items_count":          0,
"total_records":        346722,
"num_rec_allocs":       6333712,
"num_rec_frees":        6330393,
"num_rec_swapout":      4132676,
"num_rec_swapin":       3789273,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       568000000,
"lss_data_size":        8346710,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        59461216,
"est_recovery_size":    20375472,
"lss_num_reads":        40274,
"lss_read_bs":          128306548,
"lss_blk_read_bs":      237662208,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           1993078,
"cache_misses":         6922,
"cache_hit_ratio":      0.99640,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00957,
"mvcc_purge_ratio":     0.00000,
"currSn":               1,
"gcSn":                 0,
"gcSnIntervals":       "[0 1]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            13,
"num_free_wctxs":       9,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           3884063204,
"page_cnt":             31536,
"page_itemcnt":         7297456,
"avg_item_size":        532,
"avg_page_size":        123162,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":1961461138,
"page_bytes_compressed":110818489,
"compression_ratio":    17.69976,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":346496,
"memory_size_delta":    295056,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          19060771,
    "lss_data_size":        17478268,
    "lss_used_space":       84350686,
    "lss_disk_size":        191954944,
    "lss_fragmentation":    79,
    "lss_num_reads":        159108,
    "lss_read_bs":          302736587,
    "lss_blk_read_bs":      412831744,
    "bytes_written":        259063808,
    "bytes_incoming":       568000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.22,
    "lss_gc_num_reads":     125182,
    "lss_gc_reads_bs":      194133379,
    "lss_blk_gc_reads_bs":  218624000,
    "lss_gc_status":        "frag 84, data: 10564124, used: 67483597 log:(54229247 - 122089472)",
    "lss_head_offset":      57802254,
    "lss_tail_offset":      123674624,
    "num_sctxs":            30,
    "num_free_sctxs":       12,
    "num_swapperWriter":    32
  }
}
1000000 items delete took 6.990325573s -> 143054.8533908751 items/s
Starting insert phase of iteration  0
{
"memory_quota":         4194304,
"count":                3000000,
"compacts":             15337,
"purges":               0,
"splits":               6095,
"merges":               2599,
"inserts":              3000000,
"deletes":              0,
"compact_conflicts":    1409,
"split_conflicts":      1166,
"merge_conflicts":      7,
"insert_conflicts":     20461,
"delete_conflicts":     0,
"swapin_conflicts":     57,
"persist_conflicts":    1541,
"memory_size":          2961914,
"memory_size_index":    2052480,
"allocated":            5142114340,
"freed":                5139152426,
"reclaimed":            5100143694,
"reclaim_pending":      39008732,
"reclaim_list_size":    32634696,
"reclaim_list_count":   275,
"reclaim_threshold":    50,
"allocated_index":      3578570,
"freed_index":          1526090,
"reclaimed_index":      1501310,
"num_pages":            3497,
"items_count":          1000000,
"total_records":        1027357,
"num_rec_allocs":       10067072,
"num_rec_frees":        10066357,
"num_rec_swapout":      6191876,
"num_rec_swapin":       5165234,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       1110000000,
"lss_data_size":        38951985,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        105253054,
"est_recovery_size":    25213414,
"lss_num_reads":        59793,
"lss_read_bs":          177112771,
"lss_blk_read_bs":      339353600,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           2988304,
"cache_misses":         11696,
"cache_hit_ratio":      0.99508,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00070,
"mvcc_purge_ratio":     1.02736,
"currSn":               2,
"gcSn":                 1,
"gcSnIntervals":       "[0 2]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            13,
"num_free_wctxs":       3,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           4746896940,
"page_cnt":             42778,
"page_itemcnt":         8919768,
"avg_item_size":        532,
"avg_page_size":        110965,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":3271603514,
"page_bytes_compressed":182371032,
"compression_ratio":    17.93927,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":447232,
"memory_size_delta":    227264,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          20709096,
    "lss_data_size":        53343063,
    "lss_used_space":       150860751,
    "lss_disk_size":        234459136,
    "lss_fragmentation":    64,
    "lss_num_reads":        258861,
    "lss_read_bs":          461197579,
    "lss_blk_read_bs":      624607232,
    "bytes_written":        435785728,
    "bytes_incoming":       1110000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.19,
    "lss_gc_num_reads":     208787,
    "lss_gc_reads_bs":      308108425,
    "lss_blk_gc_reads_bs":  346783744,
    "lss_gc_status":        "frag 71, data: 24923851, used: 87847069, relocated: 6047, retries: 27, skipped: 6324 log:(11454272 - 175276032) run:2 duration:13128 ms",
    "lss_head_offset":      86392645,
    "lss_tail_offset":      210853888,
    "num_sctxs":            30,
    "num_free_sctxs":       4,
    "num_swapperWriter":    32
  }
}
1000000 items insert took 7.454637781s -> 134144.68004719814 items/s
Starting delete phase of iteration  1
{
"memory_quota":         4194304,
"count":                4000000,
"compacts":             18664,
"purges":               0,
"splits":               6215,
"merges":               5512,
"inserts":              4000000,
"deletes":              0,
"compact_conflicts":    1804,
"split_conflicts":      1182,
"merge_conflicts":      7,
"insert_conflicts":     22978,
"delete_conflicts":     0,
"swapin_conflicts":     72,
"persist_conflicts":    2179,
"memory_size":          2127258,
"memory_size_index":    413018,
"allocated":            5780887914,
"freed":                5778760656,
"reclaimed":            5775366336,
"reclaim_pending":      3394320,
"reclaim_list_size":    3394320,
"reclaim_list_count":   262,
"reclaim_threshold":    50,
"allocated_index":      3648986,
"freed_index":          3235968,
"reclaimed_index":      3228968,
"num_pages":            704,
"items_count":          0,
"total_records":        124024,
"num_rec_allocs":       12228552,
"num_rec_frees":        12215480,
"num_rec_swapout":      7632638,
"num_rec_swapin":       7521686,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       1136000000,
"lss_data_size":        17821368,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        132288487,
"est_recovery_size":    50738002,
"lss_num_reads":        89997,
"lss_read_bs":          239535062,
"lss_blk_read_bs":      495165440,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           3982488,
"cache_misses":         17512,
"cache_hit_ratio":      0.99440,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.10540,
"mvcc_purge_ratio":     0.00000,
"currSn":               2,
"gcSn":                 1,
"gcSnIntervals":       "[0 2]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            13,
"num_free_wctxs":       11,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           4059899120,
"page_cnt":             48820,
"page_itemcnt":         7978522,
"avg_item_size":        508,
"avg_page_size":        83160,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":3696289632,
"page_bytes_compressed":209270397,
"compression_ratio":    17.66274,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":86016,
"memory_size_delta":    508416,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          20712470,
    "lss_data_size":        30362892,
    "lss_used_space":       208487375,
    "lss_disk_size":        292085760,
    "lss_fragmentation":    85,
    "lss_num_reads":        289065,
    "lss_read_bs":          523619870,
    "lss_blk_read_bs":      780419072,
    "bytes_written":        493412352,
    "bytes_incoming":       1136000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.21,
    "lss_gc_num_reads":     208787,
    "lss_gc_reads_bs":      308108425,
    "lss_blk_gc_reads_bs":  346783744,
    "lss_gc_status":        "frag 71, data: 24923851, used: 87847069, relocated: 6047, retries: 27, skipped: 6324 log:(11454272 - 175276032) run:2 duration:13128 ms",
    "lss_head_offset":      86392645,
    "lss_tail_offset":      241217536,
    "num_sctxs":            30,
    "num_free_sctxs":       12,
    "num_swapperWriter":    32
  }
}
1000000 items delete took 6.990325573s -> 143054.8533908751 items/s
Starting insert phase of iteration  1
{
"memory_quota":         4194304,
"count":                5000000,
"compacts":             25261,
"purges":               0,
"splits":               9093,
"merges":               5651,
"inserts":              5000000,
"deletes":              0,
"compact_conflicts":    2434,
"split_conflicts":      1776,
"merge_conflicts":      7,
"insert_conflicts":     36472,
"delete_conflicts":     0,
"swapin_conflicts":     75,
"persist_conflicts":    2606,
"memory_size":          2670342,
"memory_size_index":    2021388,
"allocated":            7911531670,
"freed":                7908861328,
"reclaimed":            7879360286,
"reclaim_pending":      29501042,
"reclaim_list_size":    29501042,
"reclaim_list_count":   285,
"reclaim_threshold":    50,
"allocated_index":      5338878,
"freed_index":          3317490,
"reclaimed_index":      3316908,
"num_pages":            3443,
"items_count":          1000000,
"total_records":        1026886,
"num_rec_allocs":       16019018,
"num_rec_frees":        16018771,
"num_rec_swapout":      9700642,
"num_rec_swapin":       8674003,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       1678000000,
"lss_data_size":        46638243,
"lss_recoverypt_size":  4096,
"lss_maxsn_size":       4096,
"checkpoint_used_space":0,
"est_disk_size":        199836217,
"est_recovery_size":    72064652,
"lss_num_reads":        103145,
"lss_read_bs":          283473009,
"lss_blk_read_bs":      565092352,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           4979116,
"cache_misses":         20884,
"cache_hit_ratio":      0.99647,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00024,
"mvcc_purge_ratio":     1.02689,
"currSn":               3,
"gcSn":                 2,
"gcSnIntervals":       "[0 3]",
"purger_running":       false,
"mem_throttled":        true,
"lss_throttled":        false,
"num_wctxs":            13,
"num_free_wctxs":       6,
"num_readers":          0,
"num_writers":          8,
"page_bytes":           5370086740,
"page_cnt":             50252,
"page_itemcnt":         10286834,
"avg_item_size":        522,
"avg_page_size":        106863,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":4932531508,
"page_bytes_compressed":276704457,
"compression_ratio":    17.82599,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":440704,
"memory_size_delta":    230200,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          20881115,
    "lss_data_size":        66809821,
    "lss_used_space":       308937679,
    "lss_disk_size":        392536064,
    "lss_fragmentation":    78,
    "lss_num_reads":        302213,
    "lss_read_bs":          567557817,
    "lss_blk_read_bs":      850345984,
    "bytes_written":        593862656,
    "bytes_incoming":       1678000000,
    "write_amp":            0.00,
    "write_amp_avg":        0.19,
    "lss_gc_num_reads":     208787,
    "lss_gc_reads_bs":      308108425,
    "lss_blk_gc_reads_bs":  346783744,
    "lss_gc_status":        "frag 71, data: 24923851, used: 87847069, relocated: 6047, retries: 27, skipped: 6324 log:(11454272 - 175276032) run:2 duration:13128 ms",
    "lss_head_offset":      86392645,
    "lss_tail_offset":      318599168,
    "num_sctxs":            30,
    "num_free_sctxs":       6,
    "num_swapperWriter":    32
  }
}
1000000 items insert took 6.229579379s -> 160524.4815357862 items/s
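The `mvcc_purge_ratio` in the dump above appears to be `total_records / items_count`; this is an inference from matching the figures in this log, not a statement about plasma internals.

```python
# Assumption: mvcc_purge_ratio = total_records / items_count,
# inferred by matching the stats dump above (1026886 / 1000000).
total_records = 1026886
items_count = 1_000_000
mvcc_purge_ratio = total_records / items_count
print(round(mvcc_purge_ratio, 5))  # 1.02689, as reported
```

Note the ratio drops to 0 after each delete phase, when `items_count` itself is 0 and the field is reported as 0.00000.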
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : all daemons stopped
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.mvcc.TestDGMWithCASConflicts stopped
LSS test.mvcc.TestDGMWithCASConflicts(shard1) : LSSCleaner stopped
LSS test.mvcc.TestDGMWithCASConflicts/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.mvcc.TestDGMWithCASConflicts closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.mvcc.TestDGMWithCASConflicts ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.mvcc.TestDGMWithCASConflicts ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.mvcc.TestDGMWithCASConflicts successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestDGMWithCASConflicts (31.58s)
=== RUN   TestMaxSMRPendingMem
----------- running TestMaxSMRPendingMem
--- PASS: TestMaxSMRPendingMem (0.00s)
=== RUN   TestStatsLogger
----------- running TestStatsLogger
Start shard recovery from shardsDirectory : shards directory does not exist. Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS shards/shard1/data(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data
LSS shards/shard1/data/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=shards/shard1/data/recovery
Shard shards/shard1(1) : Map plasma instance (test.default.TestStatsLogger_0) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.default.TestStatsLogger_0
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_0 to LSS shards/shard1/data
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]
LSS shards/shard1/data/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [196.63µs]
LSS shards/shard1/data(shard1) : recoverFromDataReplay: Begin recovering from data log, headOffset [0] tailOffset [0] replayOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from data and recovery log, took [242.62µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [shards/shard1/data/recovery], Data log [shards/shard1/data], Shared [true]. Built [0] plasmas, took [249.602µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS shards/shard1/data(shard1) : all daemons started
LSS shards/shard1/data/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.default.TestStatsLogger_0 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestStatsLogger_1) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.default.TestStatsLogger_1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_1 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestStatsLogger_1 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestStatsLogger_2) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.default.TestStatsLogger_2
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_2 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestStatsLogger_2 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestStatsLogger_3) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.default.TestStatsLogger_3
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_3 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestStatsLogger_3 started
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : Map plasma instance (test.default.TestStatsLogger_4) to LSS (shards/shard1/data) and RecoveryLSS (shards/shard1/data/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.default.TestStatsLogger_4
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.default.TestStatsLogger_4 to LSS shards/shard1/data
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
Shard shards/shard1(1) : instance test.default.TestStatsLogger_4 started
Shard shards/shard1(1) : instance test.default.TestStatsLogger_0 stopped
Shard shards/shard1(1) : instance test.default.TestStatsLogger_0 closed
Shard shards/shard1(1) : instance test.default.TestStatsLogger_1 stopped
Shard shards/shard1(1) : instance test.default.TestStatsLogger_1 closed
Shard shards/shard1(1) : instance test.default.TestStatsLogger_2 stopped
Shard shards/shard1(1) : instance test.default.TestStatsLogger_2 closed
Shard shards/shard1(1) : instance test.default.TestStatsLogger_3 stopped
Shard shards/shard1(1) : instance test.default.TestStatsLogger_3 closed
Shard shards/shard1(1) : instance test.default.TestStatsLogger_4 stopped
Shard shards/shard1(1) : instance test.default.TestStatsLogger_4 closed
LSS shards/shard1/data(shard1) : all daemons stopped
LSS shards/shard1/data/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : All daemons stopped
LSS shards/shard1/data(shard1) : LSSCleaner stopped
LSS shards/shard1/data/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestStatsLogger (20.32s)
=== RUN   TestStatsSamplePercentile
----------- running TestStatsSamplePercentile
Sample : Mean 50.5 StdDev 28.86607004772212
Sample 99 : ZScore 1.680173294106838 Percentile 0.95
Sample 95 : ZScore 1.541602300778439 Percentile 0.9
Sample 90 : ZScore 1.36838855911794 Percentile 0.9
Sample 85 : ZScore 1.1951748174574415 Percentile 0.85
Sample : Mean 50.5 StdDev 28.86607004772212
--- PASS: TestStatsSamplePercentile (0.00s)
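The z-scores printed by TestStatsSamplePercentile are consistent with a sample of the integers 1..100 (mean 50.5, population standard deviation ≈28.866). A minimal sketch under that assumption (the test's actual input is not shown in this log):

```python
import math

# Assumed sample: the integers 1..100 (hypothetical reconstruction,
# inferred from the Mean/StdDev values printed above).
sample = range(1, 101)
n = len(sample)
mean = sum(sample) / n                                     # 50.5
std = math.sqrt(sum((x - mean) ** 2 for x in sample) / n)  # ~28.8660700477

def zscore(x):
    return (x - mean) / std

print(zscore(99))  # ~1.680173, matching "Sample 99 : ZScore 1.680173..."
```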
=== RUN   TestPlasmaSwapper
----------- running TestPlasmaSwapper
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaSwapper(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaSwapper
LSS test.basic.TestPlasmaSwapper/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaSwapper/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaSwapper) to LSS (test.basic.TestPlasmaSwapper) and RecoveryLSS (test.basic.TestPlasmaSwapper/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaSwapper
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaSwapper to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaSwapper to LSS test.basic.TestPlasmaSwapper
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaSwapper/recovery], Data log [test.basic.TestPlasmaSwapper], Shared [false]
LSS test.basic.TestPlasmaSwapper/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.461µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaSwapper/recovery], Data log [test.basic.TestPlasmaSwapper], Shared [false]. Built [0] plasmas, took [91.221µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaSwapper(shard1) : all daemons started
LSS test.basic.TestPlasmaSwapper/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.basic.TestPlasmaSwapper started
{
"memory_quota":         1099511627776,
"count":                1000000,
"compacts":             9949,
"purges":               0,
"splits":               4974,
"merges":               0,
"inserts":              1000000,
"deletes":              0,
"compact_conflicts":    0,
"split_conflicts":      0,
"merge_conflicts":      0,
"insert_conflicts":     0,
"delete_conflicts":     0,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          358160,
"memory_size_index":    266592,
"allocated":            114177432,
"freed":                113819272,
"reclaimed":            113780200,
"reclaim_pending":      39072,
"reclaim_list_size":    39072,
"reclaim_list_count":   4,
"reclaim_threshold":    50,
"allocated_index":      266592,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            4975,
"items_count":          0,
"total_records":        1000000,
"num_rec_allocs":       5004271,
"num_rec_frees":        5004271,
"num_rec_swapout":      1000000,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       8000000,
"lss_data_size":        8129408,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        16314984,
"est_recovery_size":    537234,
"lss_num_reads":        1000202,
"lss_read_bs":          1636019696,
"lss_blk_read_bs":      15728640,
"lss_rdr_reads_bs":     1636019696,
"lss_blk_rdr_reads_bs": 15728640,
"cache_hits":           1000000,
"cache_misses":         1000000,
"cache_hit_ratio":      0.00000,
"rlss_num_reads":       1000202,
"rcache_hits":          0,
"rcache_misses":        1000000,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       0.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       16,
"num_readers":          0,
"num_writers":          1,
"page_bytes":           40034168,
"page_cnt":             19898,
"page_itemcnt":         5004271,
"avg_item_size":        8,
"avg_page_size":        2011,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":16255290,
"page_bytes_compressed":16255290,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    318368,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          11742,
    "lss_data_size":        8388136,
    "lss_used_space":       15728640,
    "lss_disk_size":        15728640,
    "lss_fragmentation":    46,
    "lss_num_reads":        1000202,
    "lss_read_bs":          1636019696,
    "lss_blk_read_bs":      15728640,
    "bytes_written":        15728640,
    "bytes_incoming":       8000000,
    "write_amp":            0.00,
    "write_amp_avg":        1.97,
    "lss_gc_num_reads":     0,
    "lss_gc_reads_bs":      0,
    "lss_blk_gc_reads_bs":  0,
    "lss_gc_status":        "",
    "lss_head_offset":      0,
    "lss_tail_offset":      15728640,
    "num_sctxs":            28,
    "num_free_sctxs":       17,
    "num_swapperWriter":    32
  }
}
LSS test.basic.TestPlasmaSwapper(shard1) : all daemons stopped
LSS test.basic.TestPlasmaSwapper/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaSwapper stopped
LSS test.basic.TestPlasmaSwapper(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaSwapper/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaSwapper closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaSwapper ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaSwapper ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaSwapper successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaSwapper (18.63s)
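In the stats block above, write_amp_avg appears to be bytes_written divided by bytes_incoming from the lss_stats section. This formula is an inference from the numbers, not confirmed from the plasma source, but it reproduces the 1.97 reported for TestPlasmaSwapper:

```python
# Values copied from the lss_stats block above; the formula itself is
# an assumption inferred from the reported figures.
bytes_written = 15728640    # bytes persisted to the LSS log
bytes_incoming = 8000000    # logical bytes produced by the workload
write_amp_avg = bytes_written / bytes_incoming
print(round(write_amp_avg, 2))  # 1.97, matching "write_amp_avg": 1.97
```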
=== RUN   TestPlasmaAutoSwapper
----------- running TestPlasmaAutoSwapper
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.basic.TestPlasmaAutoSwapper(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaAutoSwapper
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.basic.TestPlasmaAutoSwapper/recovery
Shard shards/shard1(1) : Map plasma instance (test.basic.TestPlasmaAutoSwapper) to LSS (test.basic.TestPlasmaAutoSwapper) and RecoveryLSS (test.basic.TestPlasmaAutoSwapper/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.basic.TestPlasmaAutoSwapper
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaAutoSwapper to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.basic.TestPlasmaAutoSwapper to LSS test.basic.TestPlasmaAutoSwapper
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.basic.TestPlasmaAutoSwapper/recovery], Data log [test.basic.TestPlasmaAutoSwapper], Shared [false]
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.922µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.basic.TestPlasmaAutoSwapper/recovery], Data log [test.basic.TestPlasmaAutoSwapper], Shared [false]. Built [0] plasmas, took [110.646µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.basic.TestPlasmaAutoSwapper(shard1) : all daemons started
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.basic.TestPlasmaAutoSwapper started
fragAutoTuner: FragRatio at 10. MaxFragRatio 100, MaxBandwidth 651583. BandwidthUsage 251263. AvailDisk N/A. TotalUsed 0. BandwidthRatio 0.3856193301544086. UsedSpaceRatio 0. CleanerBandwidth 9223372036854775807. Duration 0.
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : recoveryCleaner: starting... frag 16, data: 876652, used: 1048576 log:(0 - 1048576)
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : recoveryCleaner: completed... frag 10, data: 877080, used: 982708, relocated: 549, retries: 0, skipped: 563 log:(0 - 1048576) run:1 duration:4 ms
LSS test.basic.TestPlasmaAutoSwapper(shard1) : logCleaner: starting... frag 49, data: 27547590, used: 54525952 log:(0 - 54525952)
LSS test.basic.TestPlasmaAutoSwapper(shard1) : logCleaner: completed... frag 18, data: 34983932, used: 43124688, relocated: 16507, retries: 0, skipped: 16520 log:(0 - 97517568) run:1 duration:2502 ms
10000000 items insert took 26.231018768s -> 381228.04487484484 items/s
10000000 items update took 46.21305488s -> 216389.07070667992 items/s
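The throughput figures above are simply item count divided by elapsed wall-clock time; a quick check of the reported insert rate:

```python
# Recomputing the insert throughput printed in the log line above.
items = 10_000_000
insert_secs = 26.231018768        # elapsed time from the log
rate = items / insert_secs
print(rate)  # ~381228.04 items/s
```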
{
"memory_quota":         1073741824,
"count":                10000000,
"compacts":             198979,
"purges":               0,
"splits":               49747,
"merges":               0,
"inserts":              20000000,
"deletes":              10000000,
"compact_conflicts":    19,
"split_conflicts":      17,
"merge_conflicts":      0,
"insert_conflicts":     1027,
"delete_conflicts":     2,
"swapin_conflicts":     0,
"persist_conflicts":    0,
"memory_size":          166548952,
"memory_size_index":    2648832,
"allocated":            3478362080,
"freed":                3311813128,
"reclaimed":            3303351960,
"reclaim_pending":      8461168,
"reclaim_list_size":    8461168,
"reclaim_list_count":   875,
"reclaim_threshold":    50,
"allocated_index":      2648832,
"freed_index":          0,
"reclaimed_index":      0,
"num_pages":            49748,
"items_count":          0,
"total_records":        10001576,
"num_rec_allocs":       153171859,
"num_rec_frees":        143170283,
"num_rec_swapout":      0,
"num_rec_swapin":       0,
"num_rec_compressed":   0,
"compresses":           0,
"decompresses":         0,
"num_compressed_pages": 0,
"bytes_incoming":       240000000,
"lss_data_size":        71693406,
"lss_recoverypt_size":  0,
"lss_maxsn_size":       0,
"checkpoint_used_space":0,
"est_disk_size":        81191286,
"est_recovery_size":    3507292,
"lss_num_reads":        0,
"lss_read_bs":          0,
"lss_blk_read_bs":      0,
"lss_rdr_reads_bs":     0,
"lss_blk_rdr_reads_bs": 0,
"cache_hits":           30000000,
"cache_misses":         0,
"cache_hit_ratio":      1.00000,
"rlss_num_reads":       0,
"rcache_hits":          0,
"rcache_misses":        0,
"rcache_hit_ratio":     0.00000,
"resident_ratio":       1.00000,
"mvcc_purge_ratio":     0.00000,
"currSn":               0,
"gcSn":                 0,
"gcSnIntervals":       "[]",
"purger_running":       false,
"mem_throttled":        false,
"lss_throttled":        false,
"num_wctxs":            18,
"num_free_wctxs":       15,
"num_readers":          0,
"num_writers":          16,
"page_bytes":           166907400,
"page_cnt":             103878,
"page_itemcnt":         20863425,
"avg_item_size":        8,
"avg_page_size":        1606,
"act_max_page_items":   400,
"act_min_page_items":   25,
"act_max_delta_len":    200,
"est_resident_mem":     0,
"page_bytes_marshalled":748301578,
"page_bytes_compressed":748301578,
"compression_ratio":    1.00000,
"bloom_tests":          0,
"bloom_negatives":      0,
"bloom_false_positives":0,
"bloom_fp_rate":        0.00000,
"memory_size_bloom_filter":0,
"memory_size_delta":    6258976,
"lss_stats":            
  {
    "shared":               false,
    "punch_hole_support":   true,
    "buf_memused":          57168,
    "lss_data_size":        73974182,
    "lss_used_space":       83321200,
    "lss_disk_size":        202375168,
    "lss_fragmentation":    11,
    "lss_num_reads":        2406218,
    "lss_read_bs":          780348654,
    "lss_blk_read_bs":      790999040,
    "bytes_written":        873463808,
    "bytes_incoming":       240000000,
    "write_amp":            3.26,
    "write_amp_avg":        3.13,
    "lss_gc_num_reads":     2406218,
    "lss_gc_reads_bs":      780348654,
    "lss_blk_gc_reads_bs":  790999040,
    "lss_gc_status":        "frag 10, data: 71831710, used: 80684586, relocated: 9245444, retries: 316, skipped: 1866502 log:(53396760 - 750780416) run:39 duration:62789 ms",
    "lss_head_offset":      669928208,
    "lss_tail_offset":      750780416,
    "num_sctxs":            39,
    "num_free_sctxs":       13,
    "num_swapperWriter":    32
  }
}
LSSInfo: frag:11, ds:71693406, used:80684586
LSS test.basic.TestPlasmaAutoSwapper(shard1) : all daemons stopped
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.basic.TestPlasmaAutoSwapper stopped
LSS test.basic.TestPlasmaAutoSwapper(shard1) : LSSCleaner stopped
LSS test.basic.TestPlasmaAutoSwapper/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.basic.TestPlasmaAutoSwapper closed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.basic.TestPlasmaAutoSwapper ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.basic.TestPlasmaAutoSwapper ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.basic.TestPlasmaAutoSwapper successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestPlasmaAutoSwapper (72.71s)
=== RUN   TestSwapperAddInstance
----------- running TestSwapperAddInstance
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance0
LSS test.swapper.TestSwapperAddInstance0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance0) to LSS (test.swapper.TestSwapperAddInstance0) and RecoveryLSS (test.swapper.TestSwapperAddInstance0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperAddInstance0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance0 to LSS test.swapper.TestSwapperAddInstance0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance0/recovery], Data log [test.swapper.TestSwapperAddInstance0], Shared [false]
LSS test.swapper.TestSwapperAddInstance0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.987µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance0/recovery], Data log [test.swapper.TestSwapperAddInstance0], Shared [false]. Built [0] plasmas, took [113.683µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance0(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance1
LSS test.swapper.TestSwapperAddInstance1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance1) to LSS (test.swapper.TestSwapperAddInstance1) and RecoveryLSS (test.swapper.TestSwapperAddInstance1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperAddInstance1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance1 to LSS test.swapper.TestSwapperAddInstance1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance1/recovery], Data log [test.swapper.TestSwapperAddInstance1], Shared [false]
LSS test.swapper.TestSwapperAddInstance1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.832µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance1/recovery], Data log [test.swapper.TestSwapperAddInstance1], Shared [false]. Built [0] plasmas, took [97.871µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance1(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance2
LSS test.swapper.TestSwapperAddInstance2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance2/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance2) to LSS (test.swapper.TestSwapperAddInstance2) and RecoveryLSS (test.swapper.TestSwapperAddInstance2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.swapper.TestSwapperAddInstance2
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance2 to LSS test.swapper.TestSwapperAddInstance2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance2/recovery], Data log [test.swapper.TestSwapperAddInstance2], Shared [false]
LSS test.swapper.TestSwapperAddInstance2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.06µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance2/recovery], Data log [test.swapper.TestSwapperAddInstance2], Shared [false]. Built [0] plasmas, took [113.784µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance2(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance3
LSS test.swapper.TestSwapperAddInstance3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance3/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance3) to LSS (test.swapper.TestSwapperAddInstance3) and RecoveryLSS (test.swapper.TestSwapperAddInstance3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.swapper.TestSwapperAddInstance3
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance3 to LSS test.swapper.TestSwapperAddInstance3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance3/recovery], Data log [test.swapper.TestSwapperAddInstance3], Shared [false]
LSS test.swapper.TestSwapperAddInstance3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [57.412µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance3/recovery], Data log [test.swapper.TestSwapperAddInstance3], Shared [false]. Built [0] plasmas, took [106.002µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance3(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance4
LSS test.swapper.TestSwapperAddInstance4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance4/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance4) to LSS (test.swapper.TestSwapperAddInstance4) and RecoveryLSS (test.swapper.TestSwapperAddInstance4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.swapper.TestSwapperAddInstance4
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance4 to LSS test.swapper.TestSwapperAddInstance4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance4/recovery], Data log [test.swapper.TestSwapperAddInstance4], Shared [false]
LSS test.swapper.TestSwapperAddInstance4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.361µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance4/recovery], Data log [test.swapper.TestSwapperAddInstance4], Shared [false]. Built [0] plasmas, took [112.737µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance4(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance5
LSS test.swapper.TestSwapperAddInstance5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance5/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance5) to LSS (test.swapper.TestSwapperAddInstance5) and RecoveryLSS (test.swapper.TestSwapperAddInstance5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.swapper.TestSwapperAddInstance5
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance5 to LSS test.swapper.TestSwapperAddInstance5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance5/recovery], Data log [test.swapper.TestSwapperAddInstance5], Shared [false]
LSS test.swapper.TestSwapperAddInstance5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.459µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance5/recovery], Data log [test.swapper.TestSwapperAddInstance5], Shared [false]. Built [0] plasmas, took [112.606µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance5(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance5 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance6(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance6
LSS test.swapper.TestSwapperAddInstance6/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance6/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance6) to LSS (test.swapper.TestSwapperAddInstance6) and RecoveryLSS (test.swapper.TestSwapperAddInstance6/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.swapper.TestSwapperAddInstance6
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance6 to LSS test.swapper.TestSwapperAddInstance6
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance6/recovery], Data log [test.swapper.TestSwapperAddInstance6], Shared [false]
LSS test.swapper.TestSwapperAddInstance6/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [82.567µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance6/recovery], Data log [test.swapper.TestSwapperAddInstance6], Shared [false]. Built [0] plasmas, took [121.584µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance6(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance6/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance7
LSS test.swapper.TestSwapperAddInstance7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance7/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance7) to LSS (test.swapper.TestSwapperAddInstance7) and RecoveryLSS (test.swapper.TestSwapperAddInstance7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.swapper.TestSwapperAddInstance7
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance7 to LSS test.swapper.TestSwapperAddInstance7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance7/recovery], Data log [test.swapper.TestSwapperAddInstance7], Shared [false]
LSS test.swapper.TestSwapperAddInstance7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.579µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance7/recovery], Data log [test.swapper.TestSwapperAddInstance7], Shared [false]. Built [0] plasmas, took [91.517µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance7(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance7 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance8(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance8
LSS test.swapper.TestSwapperAddInstance8/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance8/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance8) to LSS (test.swapper.TestSwapperAddInstance8) and RecoveryLSS (test.swapper.TestSwapperAddInstance8/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.swapper.TestSwapperAddInstance8
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance8 to LSS test.swapper.TestSwapperAddInstance8
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance8/recovery], Data log [test.swapper.TestSwapperAddInstance8], Shared [false]
LSS test.swapper.TestSwapperAddInstance8/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.908µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance8/recovery], Data log [test.swapper.TestSwapperAddInstance8], Shared [false]. Built [0] plasmas, took [124.449µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance8(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance8/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperAddInstance9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance9
LSS test.swapper.TestSwapperAddInstance9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperAddInstance9/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperAddInstance9) to LSS (test.swapper.TestSwapperAddInstance9) and RecoveryLSS (test.swapper.TestSwapperAddInstance9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.swapper.TestSwapperAddInstance9
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperAddInstance9 to LSS test.swapper.TestSwapperAddInstance9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperAddInstance9/recovery], Data log [test.swapper.TestSwapperAddInstance9], Shared [false]
LSS test.swapper.TestSwapperAddInstance9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.322µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperAddInstance9/recovery], Data log [test.swapper.TestSwapperAddInstance9], Shared [false]. Built [0] plasmas, took [123.811µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperAddInstance9(shard1) : all daemons started
LSS test.swapper.TestSwapperAddInstance9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance9 started
LSS test.swapper.TestSwapperAddInstance0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance0 stopped
LSS test.swapper.TestSwapperAddInstance0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance0 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperAddInstance0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance0 successfully destroyed
LSS test.swapper.TestSwapperAddInstance1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance1 stopped
LSS test.swapper.TestSwapperAddInstance1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperAddInstance1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance1 successfully destroyed
LSS test.swapper.TestSwapperAddInstance2(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance2 stopped
LSS test.swapper.TestSwapperAddInstance2(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance2 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.swapper.TestSwapperAddInstance2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance2 successfully destroyed
LSS test.swapper.TestSwapperAddInstance3(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance3 stopped
LSS test.swapper.TestSwapperAddInstance3(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance3 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.swapper.TestSwapperAddInstance3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance3 successfully destroyed
LSS test.swapper.TestSwapperAddInstance4(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance4 stopped
LSS test.swapper.TestSwapperAddInstance4(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance4 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.swapper.TestSwapperAddInstance4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance4 successfully destroyed
LSS test.swapper.TestSwapperAddInstance5(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance5 stopped
LSS test.swapper.TestSwapperAddInstance5(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance5 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.swapper.TestSwapperAddInstance5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance5 successfully destroyed
LSS test.swapper.TestSwapperAddInstance6(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance6/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance6 stopped
LSS test.swapper.TestSwapperAddInstance6(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance6/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance6 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.swapper.TestSwapperAddInstance6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance6 successfully destroyed
LSS test.swapper.TestSwapperAddInstance7(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance7 stopped
LSS test.swapper.TestSwapperAddInstance7(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance7 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.swapper.TestSwapperAddInstance7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance7 successfully destroyed
LSS test.swapper.TestSwapperAddInstance8(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance8/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance8 stopped
LSS test.swapper.TestSwapperAddInstance8(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance8/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance8 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.swapper.TestSwapperAddInstance8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance8 successfully destroyed
LSS test.swapper.TestSwapperAddInstance9(shard1) : all daemons stopped
LSS test.swapper.TestSwapperAddInstance9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance9 stopped
LSS test.swapper.TestSwapperAddInstance9(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperAddInstance9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance9 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperAddInstance9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.swapper.TestSwapperAddInstance9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperAddInstance9 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperAddInstance (4.30s)
=== RUN   TestSwapperRemoveInstance
----------- running TestSwapperRemoveInstance
Start shard recovery: shardsDirectory [shards] does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance0
LSS test.swapper.TestSwapperRemoveInstance0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance0) to LSS (test.swapper.TestSwapperRemoveInstance0) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperRemoveInstance0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance0 to LSS test.swapper.TestSwapperRemoveInstance0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance0/recovery], Data log [test.swapper.TestSwapperRemoveInstance0], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.731µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance0/recovery], Data log [test.swapper.TestSwapperRemoveInstance0], Shared [false]. Built [0] plasmas, took [100.893µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance0(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance1
LSS test.swapper.TestSwapperRemoveInstance1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance1) to LSS (test.swapper.TestSwapperRemoveInstance1) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperRemoveInstance1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance1 to LSS test.swapper.TestSwapperRemoveInstance1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance1/recovery], Data log [test.swapper.TestSwapperRemoveInstance1], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [92.186µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance1/recovery], Data log [test.swapper.TestSwapperRemoveInstance1], Shared [false]. Built [0] plasmas, took [130.947µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance1(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance2
LSS test.swapper.TestSwapperRemoveInstance2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance2/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance2) to LSS (test.swapper.TestSwapperRemoveInstance2) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.swapper.TestSwapperRemoveInstance2
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance2 to LSS test.swapper.TestSwapperRemoveInstance2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance2/recovery], Data log [test.swapper.TestSwapperRemoveInstance2], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [73.842µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance2/recovery], Data log [test.swapper.TestSwapperRemoveInstance2], Shared [false]. Built [0] plasmas, took [125.808µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance2(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance3
LSS test.swapper.TestSwapperRemoveInstance3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance3/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance3) to LSS (test.swapper.TestSwapperRemoveInstance3) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.swapper.TestSwapperRemoveInstance3
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance3 to LSS test.swapper.TestSwapperRemoveInstance3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance3/recovery], Data log [test.swapper.TestSwapperRemoveInstance3], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [171.584µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance3/recovery], Data log [test.swapper.TestSwapperRemoveInstance3], Shared [false]. Built [0] plasmas, took [213.477µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance3(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance4
LSS test.swapper.TestSwapperRemoveInstance4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance4/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance4) to LSS (test.swapper.TestSwapperRemoveInstance4) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.swapper.TestSwapperRemoveInstance4
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance4 to LSS test.swapper.TestSwapperRemoveInstance4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance4/recovery], Data log [test.swapper.TestSwapperRemoveInstance4], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.49µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance4/recovery], Data log [test.swapper.TestSwapperRemoveInstance4], Shared [false]. Built [0] plasmas, took [96.136µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance4(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance5
LSS test.swapper.TestSwapperRemoveInstance5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance5/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance5) to LSS (test.swapper.TestSwapperRemoveInstance5) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.swapper.TestSwapperRemoveInstance5
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance5 to LSS test.swapper.TestSwapperRemoveInstance5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance5/recovery], Data log [test.swapper.TestSwapperRemoveInstance5], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.003µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance5/recovery], Data log [test.swapper.TestSwapperRemoveInstance5], Shared [false]. Built [0] plasmas, took [95.04µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance5(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance5 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance6(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance6
LSS test.swapper.TestSwapperRemoveInstance6/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance6/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance6) to LSS (test.swapper.TestSwapperRemoveInstance6) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance6/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.swapper.TestSwapperRemoveInstance6
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance6 to LSS test.swapper.TestSwapperRemoveInstance6
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance6/recovery], Data log [test.swapper.TestSwapperRemoveInstance6], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance6/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.224µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance6/recovery], Data log [test.swapper.TestSwapperRemoveInstance6], Shared [false]. Built [0] plasmas, took [96.616µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance6(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance6/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance7
LSS test.swapper.TestSwapperRemoveInstance7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance7/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance7) to LSS (test.swapper.TestSwapperRemoveInstance7) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.swapper.TestSwapperRemoveInstance7
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance7 to LSS test.swapper.TestSwapperRemoveInstance7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance7/recovery], Data log [test.swapper.TestSwapperRemoveInstance7], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [76.537µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance7/recovery], Data log [test.swapper.TestSwapperRemoveInstance7], Shared [false]. Built [0] plasmas, took [115.126µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance7(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance7 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance8(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance8
LSS test.swapper.TestSwapperRemoveInstance8/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance8/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance8) to LSS (test.swapper.TestSwapperRemoveInstance8) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance8/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.swapper.TestSwapperRemoveInstance8
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance8 to LSS test.swapper.TestSwapperRemoveInstance8
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance8/recovery], Data log [test.swapper.TestSwapperRemoveInstance8], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance8/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [65.407µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance8/recovery], Data log [test.swapper.TestSwapperRemoveInstance8], Shared [false]. Built [0] plasmas, took [101.494µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance8(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance8/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstance9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance9
LSS test.swapper.TestSwapperRemoveInstance9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstance9/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstance9) to LSS (test.swapper.TestSwapperRemoveInstance9) and RecoveryLSS (test.swapper.TestSwapperRemoveInstance9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.swapper.TestSwapperRemoveInstance9
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstance9 to LSS test.swapper.TestSwapperRemoveInstance9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstance9/recovery], Data log [test.swapper.TestSwapperRemoveInstance9], Shared [false]
LSS test.swapper.TestSwapperRemoveInstance9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.67µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstance9/recovery], Data log [test.swapper.TestSwapperRemoveInstance9], Shared [false]. Built [0] plasmas, took [87.358µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstance9(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstance9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance9 started
LSS test.swapper.TestSwapperRemoveInstance0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance0 stopped
LSS test.swapper.TestSwapperRemoveInstance0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance0 closed
LSS test.swapper.TestSwapperRemoveInstance1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance1 stopped
LSS test.swapper.TestSwapperRemoveInstance1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance1 closed
LSS test.swapper.TestSwapperRemoveInstance2(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance2 stopped
LSS test.swapper.TestSwapperRemoveInstance2(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance2 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperRemoveInstance0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance0 successfully destroyed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperRemoveInstance1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance1 successfully destroyed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.swapper.TestSwapperRemoveInstance2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance2 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance3(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance3 stopped
LSS test.swapper.TestSwapperRemoveInstance3(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance3 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.swapper.TestSwapperRemoveInstance3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance3 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance4(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance4 stopped
LSS test.swapper.TestSwapperRemoveInstance4(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance4 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.swapper.TestSwapperRemoveInstance4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance4 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance5(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance5 stopped
LSS test.swapper.TestSwapperRemoveInstance5(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance5 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.swapper.TestSwapperRemoveInstance5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance5 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance6(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance6/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance6 stopped
LSS test.swapper.TestSwapperRemoveInstance6(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance6/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance6 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.swapper.TestSwapperRemoveInstance6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance6 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance7(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance7 stopped
LSS test.swapper.TestSwapperRemoveInstance7(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance7 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.swapper.TestSwapperRemoveInstance7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance7 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance8(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance8/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance8 stopped
LSS test.swapper.TestSwapperRemoveInstance8(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance8/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance8 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.swapper.TestSwapperRemoveInstance8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance8 successfully destroyed
LSS test.swapper.TestSwapperRemoveInstance9(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstance9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance9 stopped
LSS test.swapper.TestSwapperRemoveInstance9(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstance9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance9 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstance9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.swapper.TestSwapperRemoveInstance9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstance9 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperRemoveInstance (4.05s)
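The TestSwapperRemoveInstance run above follows a fixed lifecycle per instance: create (assign a monotonically increasing plasmaId, map the instance to its LSS and recovery LSS), start daemons, register with the swapper, then stop, close, and destroy (release the plasmaId and persist shard metadata). A minimal Go sketch of that plasmaId bookkeeping follows; all names (`Shard`, `AddInstance`, `DestroyInstance`) are hypothetical stand-ins, not the actual plasma API.

```go
package main

import "fmt"

// Shard tracks plasma instances the way the log above does:
// plasmaIds are assigned monotonically on add and released on destroy.
type Shard struct {
	name   string
	nextID int
	insts  map[string]int // instance name -> plasmaId
}

func NewShard(name string) *Shard {
	return &Shard{name: name, nextID: 1, insts: map[string]int{}}
}

// AddInstance assigns the next plasmaId and registers the instance.
func (s *Shard) AddInstance(inst string) int {
	id := s.nextID
	s.nextID++
	s.insts[inst] = id
	fmt.Printf("Shard %s : Assign plasmaId %d to instance %s\n", s.name, id, inst)
	return id
}

// DestroyInstance releases the instance's plasmaId, mirroring the
// "removed plasmaId N for instance ..." lines in the log.
func (s *Shard) DestroyInstance(inst string) {
	id := s.insts[inst]
	delete(s.insts, inst)
	fmt.Printf("Shard %s : removed plasmaId %d for instance %s\n", s.name, id, inst)
}

func main() {
	s := NewShard("shards/shard1")
	for i := 0; i < 10; i++ {
		s.AddInstance(fmt.Sprintf("inst%d", i)) // plasmaIds 1..10, as in the log
	}
	for i := 0; i < 10; i++ {
		s.DestroyInstance(fmt.Sprintf("inst%d", i))
	}
	fmt.Println("live instances:", len(s.insts)) // 0 after full teardown
}
```

Note that plasmaIds are not reused within the run: the ten instances receive ids 1 through 10 even as earlier instances are destroyed, matching the assignments logged above.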
=== RUN   TestSwapperJoinContext
----------- running TestSwapperJoinContext
Start shard recovery: shards directory "shards" does not exist, skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext0
LSS test.swapper.TestSwapperJoinContext0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext0) to LSS (test.swapper.TestSwapperJoinContext0) and RecoveryLSS (test.swapper.TestSwapperJoinContext0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperJoinContext0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext0 to LSS test.swapper.TestSwapperJoinContext0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext0/recovery], Data log [test.swapper.TestSwapperJoinContext0], Shared [false]
LSS test.swapper.TestSwapperJoinContext0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [73.524µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext0/recovery], Data log [test.swapper.TestSwapperJoinContext0], Shared [false]. Built [0] plasmas, took [111.458µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext0(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext0 started
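Each instance above runs the same recovery sequence: replay the recovery log over the `[headOffset, tailOffset)` window, then report the final replay offset and elapsed time. For a fresh instance both offsets are 0, so nothing is replayed and 0 plasmas are built. A hypothetical sketch of that replay loop (not the actual `Shard.doRecovery` code) might look like this:

```go
package main

import (
	"fmt"
	"time"
)

// record is a stand-in for one entry in the recovery log.
type record struct{ offset int64 }

// replayRecoveryLog walks the log from head to tail, applying each record
// in the window, and returns the final replay offset plus the number of
// records replayed, mirroring the "Done recovering from recovery log,
// replayOffset [..]" lines in the log above.
func replayRecoveryLog(log []record, head, tail int64) (int64, int) {
	offset := head
	replayed := 0
	for _, r := range log {
		if r.offset < head || r.offset >= tail {
			continue // outside the [head, tail) replay window
		}
		offset = r.offset
		replayed++
	}
	return offset, replayed
}

func main() {
	start := time.Now()
	// Fresh instance: empty recovery log, headOffset [0], tailOffset [0].
	off, n := replayRecoveryLog(nil, 0, 0)
	fmt.Printf("Done recovering, replayOffset [%d], records [%d], took [%s]\n",
		off, n, time.Since(start))
}
```

Under this reading, the microsecond-scale "took [...]" figures in the log are expected: with an empty window the loop does no work, and the reported time is dominated by bookkeeping around the replay.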
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext1
LSS test.swapper.TestSwapperJoinContext1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext1) to LSS (test.swapper.TestSwapperJoinContext1) and RecoveryLSS (test.swapper.TestSwapperJoinContext1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperJoinContext1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext1 to LSS test.swapper.TestSwapperJoinContext1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext1/recovery], Data log [test.swapper.TestSwapperJoinContext1], Shared [false]
LSS test.swapper.TestSwapperJoinContext1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.762µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext1/recovery], Data log [test.swapper.TestSwapperJoinContext1], Shared [false]. Built [0] plasmas, took [131.037µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext1(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext2
LSS test.swapper.TestSwapperJoinContext2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext2/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext2) to LSS (test.swapper.TestSwapperJoinContext2) and RecoveryLSS (test.swapper.TestSwapperJoinContext2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.swapper.TestSwapperJoinContext2
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext2 to LSS test.swapper.TestSwapperJoinContext2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext2/recovery], Data log [test.swapper.TestSwapperJoinContext2], Shared [false]
LSS test.swapper.TestSwapperJoinContext2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.117µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext2/recovery], Data log [test.swapper.TestSwapperJoinContext2], Shared [false]. Built [0] plasmas, took [115.045µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext2(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext3
LSS test.swapper.TestSwapperJoinContext3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext3/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext3) to LSS (test.swapper.TestSwapperJoinContext3) and RecoveryLSS (test.swapper.TestSwapperJoinContext3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.swapper.TestSwapperJoinContext3
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext3 to LSS test.swapper.TestSwapperJoinContext3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext3/recovery], Data log [test.swapper.TestSwapperJoinContext3], Shared [false]
LSS test.swapper.TestSwapperJoinContext3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [52.921µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext3/recovery], Data log [test.swapper.TestSwapperJoinContext3], Shared [false]. Built [0] plasmas, took [89.292µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext3(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext4
LSS test.swapper.TestSwapperJoinContext4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext4/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext4) to LSS (test.swapper.TestSwapperJoinContext4) and RecoveryLSS (test.swapper.TestSwapperJoinContext4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.swapper.TestSwapperJoinContext4
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext4 to LSS test.swapper.TestSwapperJoinContext4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext4/recovery], Data log [test.swapper.TestSwapperJoinContext4], Shared [false]
LSS test.swapper.TestSwapperJoinContext4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [63.525µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext4/recovery], Data log [test.swapper.TestSwapperJoinContext4], Shared [false]. Built [0] plasmas, took [109.779µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext4(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext5
LSS test.swapper.TestSwapperJoinContext5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext5/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext5) to LSS (test.swapper.TestSwapperJoinContext5) and RecoveryLSS (test.swapper.TestSwapperJoinContext5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.swapper.TestSwapperJoinContext5
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext5 to LSS test.swapper.TestSwapperJoinContext5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext5/recovery], Data log [test.swapper.TestSwapperJoinContext5], Shared [false]
LSS test.swapper.TestSwapperJoinContext5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [75.943µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext5/recovery], Data log [test.swapper.TestSwapperJoinContext5], Shared [false]. Built [0] plasmas, took [128.25µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext5(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext5 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext6(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext6
LSS test.swapper.TestSwapperJoinContext6/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext6/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext6) to LSS (test.swapper.TestSwapperJoinContext6) and RecoveryLSS (test.swapper.TestSwapperJoinContext6/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.swapper.TestSwapperJoinContext6
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext6 to LSS test.swapper.TestSwapperJoinContext6
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext6/recovery], Data log [test.swapper.TestSwapperJoinContext6], Shared [false]
LSS test.swapper.TestSwapperJoinContext6/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.743µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext6/recovery], Data log [test.swapper.TestSwapperJoinContext6], Shared [false]. Built [0] plasmas, took [106.502µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext6(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext6/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext7
LSS test.swapper.TestSwapperJoinContext7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext7/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext7) to LSS (test.swapper.TestSwapperJoinContext7) and RecoveryLSS (test.swapper.TestSwapperJoinContext7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.swapper.TestSwapperJoinContext7
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext7 to LSS test.swapper.TestSwapperJoinContext7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext7/recovery], Data log [test.swapper.TestSwapperJoinContext7], Shared [false]
LSS test.swapper.TestSwapperJoinContext7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [94.363µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext7/recovery], Data log [test.swapper.TestSwapperJoinContext7], Shared [false]. Built [0] plasmas, took [130.907µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext7(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext7 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext8(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext8
LSS test.swapper.TestSwapperJoinContext8/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext8/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext8) to LSS (test.swapper.TestSwapperJoinContext8) and RecoveryLSS (test.swapper.TestSwapperJoinContext8/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.swapper.TestSwapperJoinContext8
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext8 to LSS test.swapper.TestSwapperJoinContext8
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext8/recovery], Data log [test.swapper.TestSwapperJoinContext8], Shared [false]
LSS test.swapper.TestSwapperJoinContext8/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [57.529µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext8/recovery], Data log [test.swapper.TestSwapperJoinContext8], Shared [false]. Built [0] plasmas, took [91.99µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext8(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext8/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperJoinContext9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext9
LSS test.swapper.TestSwapperJoinContext9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperJoinContext9/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperJoinContext9) to LSS (test.swapper.TestSwapperJoinContext9) and RecoveryLSS (test.swapper.TestSwapperJoinContext9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.swapper.TestSwapperJoinContext9
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperJoinContext9 to LSS test.swapper.TestSwapperJoinContext9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperJoinContext9/recovery], Data log [test.swapper.TestSwapperJoinContext9], Shared [false]
LSS test.swapper.TestSwapperJoinContext9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [229.347µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperJoinContext9/recovery], Data log [test.swapper.TestSwapperJoinContext9], Shared [false]. Built [0] plasmas, took [297.055µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperJoinContext9(shard1) : all daemons started
LSS test.swapper.TestSwapperJoinContext9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext9 started
LSS test.swapper.TestSwapperJoinContext0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext0 stopped
LSS test.swapper.TestSwapperJoinContext0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext0 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperJoinContext0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext0 successfully destroyed
LSS test.swapper.TestSwapperJoinContext1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext1 stopped
LSS test.swapper.TestSwapperJoinContext1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperJoinContext1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext1 successfully destroyed
LSS test.swapper.TestSwapperJoinContext2(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext2 stopped
LSS test.swapper.TestSwapperJoinContext2(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext2 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.swapper.TestSwapperJoinContext2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext2 successfully destroyed
LSS test.swapper.TestSwapperJoinContext3(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext3 stopped
LSS test.swapper.TestSwapperJoinContext3(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext3 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.swapper.TestSwapperJoinContext3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext3 successfully destroyed
LSS test.swapper.TestSwapperJoinContext4(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext4 stopped
LSS test.swapper.TestSwapperJoinContext4(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext4 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.swapper.TestSwapperJoinContext4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext4 successfully destroyed
LSS test.swapper.TestSwapperJoinContext5(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext5 stopped
LSS test.swapper.TestSwapperJoinContext5(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext5 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.swapper.TestSwapperJoinContext5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext5 successfully destroyed
LSS test.swapper.TestSwapperJoinContext6(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext6/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext6 stopped
LSS test.swapper.TestSwapperJoinContext6(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext6/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext6 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.swapper.TestSwapperJoinContext6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext6 successfully destroyed
LSS test.swapper.TestSwapperJoinContext7(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext7 stopped
LSS test.swapper.TestSwapperJoinContext7(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext7 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.swapper.TestSwapperJoinContext7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext7 successfully destroyed
LSS test.swapper.TestSwapperJoinContext8(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext8/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext8 stopped
LSS test.swapper.TestSwapperJoinContext8(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext8/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext8 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.swapper.TestSwapperJoinContext8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext8 successfully destroyed
LSS test.swapper.TestSwapperJoinContext9(shard1) : all daemons stopped
LSS test.swapper.TestSwapperJoinContext9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext9 stopped
LSS test.swapper.TestSwapperJoinContext9(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperJoinContext9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext9 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperJoinContext9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.swapper.TestSwapperJoinContext9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperJoinContext9 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperJoinContext (4.71s)
=== RUN   TestSwapperSplitContext
----------- running TestSwapperSplitContext
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext0
LSS test.swapper.TestSwapperSplitContext0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext0) to LSS (test.swapper.TestSwapperSplitContext0) and RecoveryLSS (test.swapper.TestSwapperSplitContext0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperSplitContext0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext0 to LSS test.swapper.TestSwapperSplitContext0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext0/recovery], Data log [test.swapper.TestSwapperSplitContext0], Shared [false]
LSS test.swapper.TestSwapperSplitContext0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [50.698µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext0/recovery], Data log [test.swapper.TestSwapperSplitContext0], Shared [false]. Built [0] plasmas, took [94.883µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext0(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext1
LSS test.swapper.TestSwapperSplitContext1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext1) to LSS (test.swapper.TestSwapperSplitContext1) and RecoveryLSS (test.swapper.TestSwapperSplitContext1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperSplitContext1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext1 to LSS test.swapper.TestSwapperSplitContext1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext1/recovery], Data log [test.swapper.TestSwapperSplitContext1], Shared [false]
LSS test.swapper.TestSwapperSplitContext1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [68.218µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext1/recovery], Data log [test.swapper.TestSwapperSplitContext1], Shared [false]. Built [0] plasmas, took [104.569µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext1(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext2
LSS test.swapper.TestSwapperSplitContext2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext2/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext2) to LSS (test.swapper.TestSwapperSplitContext2) and RecoveryLSS (test.swapper.TestSwapperSplitContext2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.swapper.TestSwapperSplitContext2
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext2 to LSS test.swapper.TestSwapperSplitContext2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext2/recovery], Data log [test.swapper.TestSwapperSplitContext2], Shared [false]
LSS test.swapper.TestSwapperSplitContext2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [65.282µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext2/recovery], Data log [test.swapper.TestSwapperSplitContext2], Shared [false]. Built [0] plasmas, took [101.288µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext2(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext3
LSS test.swapper.TestSwapperSplitContext3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext3/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext3) to LSS (test.swapper.TestSwapperSplitContext3) and RecoveryLSS (test.swapper.TestSwapperSplitContext3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.swapper.TestSwapperSplitContext3
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext3 to LSS test.swapper.TestSwapperSplitContext3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext3/recovery], Data log [test.swapper.TestSwapperSplitContext3], Shared [false]
LSS test.swapper.TestSwapperSplitContext3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [74.183µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext3/recovery], Data log [test.swapper.TestSwapperSplitContext3], Shared [false]. Built [0] plasmas, took [113.153µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext3(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext4
LSS test.swapper.TestSwapperSplitContext4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext4/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext4) to LSS (test.swapper.TestSwapperSplitContext4) and RecoveryLSS (test.swapper.TestSwapperSplitContext4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.swapper.TestSwapperSplitContext4
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext4 to LSS test.swapper.TestSwapperSplitContext4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext4/recovery], Data log [test.swapper.TestSwapperSplitContext4], Shared [false]
LSS test.swapper.TestSwapperSplitContext4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [59.973µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext4/recovery], Data log [test.swapper.TestSwapperSplitContext4], Shared [false]. Built [0] plasmas, took [96.002µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext4(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext5
LSS test.swapper.TestSwapperSplitContext5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext5/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext5) to LSS (test.swapper.TestSwapperSplitContext5) and RecoveryLSS (test.swapper.TestSwapperSplitContext5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.swapper.TestSwapperSplitContext5
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext5 to LSS test.swapper.TestSwapperSplitContext5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext5/recovery], Data log [test.swapper.TestSwapperSplitContext5], Shared [false]
LSS test.swapper.TestSwapperSplitContext5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [96.697µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext5/recovery], Data log [test.swapper.TestSwapperSplitContext5], Shared [false]. Built [0] plasmas, took [146.956µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext5(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext5 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext6(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext6
LSS test.swapper.TestSwapperSplitContext6/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext6/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext6) to LSS (test.swapper.TestSwapperSplitContext6) and RecoveryLSS (test.swapper.TestSwapperSplitContext6/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.swapper.TestSwapperSplitContext6
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext6 to LSS test.swapper.TestSwapperSplitContext6
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext6/recovery], Data log [test.swapper.TestSwapperSplitContext6], Shared [false]
LSS test.swapper.TestSwapperSplitContext6/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [147.079µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext6/recovery], Data log [test.swapper.TestSwapperSplitContext6], Shared [false]. Built [0] plasmas, took [214.093µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext6(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext6/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext7
LSS test.swapper.TestSwapperSplitContext7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext7/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext7) to LSS (test.swapper.TestSwapperSplitContext7) and RecoveryLSS (test.swapper.TestSwapperSplitContext7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.swapper.TestSwapperSplitContext7
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext7 to LSS test.swapper.TestSwapperSplitContext7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext7/recovery], Data log [test.swapper.TestSwapperSplitContext7], Shared [false]
LSS test.swapper.TestSwapperSplitContext7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [87.472µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext7/recovery], Data log [test.swapper.TestSwapperSplitContext7], Shared [false]. Built [0] plasmas, took [123.624µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext7(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext7 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext8(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext8
LSS test.swapper.TestSwapperSplitContext8/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext8/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext8) to LSS (test.swapper.TestSwapperSplitContext8) and RecoveryLSS (test.swapper.TestSwapperSplitContext8/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.swapper.TestSwapperSplitContext8
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext8 to LSS test.swapper.TestSwapperSplitContext8
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext8/recovery], Data log [test.swapper.TestSwapperSplitContext8], Shared [false]
LSS test.swapper.TestSwapperSplitContext8/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [64.549µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext8/recovery], Data log [test.swapper.TestSwapperSplitContext8], Shared [false]. Built [0] plasmas, took [116.759µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext8(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext8/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSplitContext9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext9
LSS test.swapper.TestSwapperSplitContext9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSplitContext9/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSplitContext9) to LSS (test.swapper.TestSwapperSplitContext9) and RecoveryLSS (test.swapper.TestSwapperSplitContext9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.swapper.TestSwapperSplitContext9
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSplitContext9 to LSS test.swapper.TestSwapperSplitContext9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSplitContext9/recovery], Data log [test.swapper.TestSwapperSplitContext9], Shared [false]
LSS test.swapper.TestSwapperSplitContext9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [116.499µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSplitContext9/recovery], Data log [test.swapper.TestSwapperSplitContext9], Shared [false]. Built [0] plasmas, took [137.204µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSplitContext9(shard1) : all daemons started
LSS test.swapper.TestSwapperSplitContext9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext9 started
LSS test.swapper.TestSwapperSplitContext0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext0 stopped
LSS test.swapper.TestSwapperSplitContext0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext0 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperSplitContext0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext0 successfully destroyed
LSS test.swapper.TestSwapperSplitContext1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext1 stopped
LSS test.swapper.TestSwapperSplitContext1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperSplitContext1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext1 successfully destroyed
LSS test.swapper.TestSwapperSplitContext2(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext2 stopped
LSS test.swapper.TestSwapperSplitContext2(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext2 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.swapper.TestSwapperSplitContext2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext2 successfully destroyed
LSS test.swapper.TestSwapperSplitContext3(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext3 stopped
LSS test.swapper.TestSwapperSplitContext3(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext3 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.swapper.TestSwapperSplitContext3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext3 successfully destroyed
LSS test.swapper.TestSwapperSplitContext4(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext4 stopped
LSS test.swapper.TestSwapperSplitContext4(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext4 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.swapper.TestSwapperSplitContext4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext4 successfully destroyed
LSS test.swapper.TestSwapperSplitContext5(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext5 stopped
LSS test.swapper.TestSwapperSplitContext5(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext5 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.swapper.TestSwapperSplitContext5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext5 successfully destroyed
LSS test.swapper.TestSwapperSplitContext6(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext6/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext6 stopped
LSS test.swapper.TestSwapperSplitContext6(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext6/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext6 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.swapper.TestSwapperSplitContext6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext6 successfully destroyed
LSS test.swapper.TestSwapperSplitContext7(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext7 stopped
LSS test.swapper.TestSwapperSplitContext7(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext7 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.swapper.TestSwapperSplitContext7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext7 successfully destroyed
LSS test.swapper.TestSwapperSplitContext8(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext8/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext8 stopped
LSS test.swapper.TestSwapperSplitContext8(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext8/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext8 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.swapper.TestSwapperSplitContext8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext8 successfully destroyed
LSS test.swapper.TestSwapperSplitContext9(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSplitContext9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext9 stopped
LSS test.swapper.TestSwapperSplitContext9(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSplitContext9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext9 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSplitContext9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.swapper.TestSwapperSplitContext9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSplitContext9 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperSplitContext (4.72s)
=== RUN   TestSwapperGlobalClock
----------- running TestSwapperGlobalClock
Start shard recovery: shardsDirectory shards does not exist. Skipping shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock0
LSS test.swapper.TestSwapperGlobalClock0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock0) to LSS (test.swapper.TestSwapperGlobalClock0) and RecoveryLSS (test.swapper.TestSwapperGlobalClock0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperGlobalClock0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock0 to LSS test.swapper.TestSwapperGlobalClock0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock0/recovery], Data log [test.swapper.TestSwapperGlobalClock0], Shared [false]
LSS test.swapper.TestSwapperGlobalClock0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [48.13µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock0/recovery], Data log [test.swapper.TestSwapperGlobalClock0], Shared [false]. Built [0] plasmas, took [89.72µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock0(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock1
LSS test.swapper.TestSwapperGlobalClock1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock1) to LSS (test.swapper.TestSwapperGlobalClock1) and RecoveryLSS (test.swapper.TestSwapperGlobalClock1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperGlobalClock1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock1 to LSS test.swapper.TestSwapperGlobalClock1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock1/recovery], Data log [test.swapper.TestSwapperGlobalClock1], Shared [false]
LSS test.swapper.TestSwapperGlobalClock1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.746µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock1/recovery], Data log [test.swapper.TestSwapperGlobalClock1], Shared [false]. Built [0] plasmas, took [108.443µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock1(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock1 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock2(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock2
LSS test.swapper.TestSwapperGlobalClock2/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock2/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock2) to LSS (test.swapper.TestSwapperGlobalClock2) and RecoveryLSS (test.swapper.TestSwapperGlobalClock2/recovery)
Shard shards/shard1(1) : Assign plasmaId 3 to instance test.swapper.TestSwapperGlobalClock2
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock2 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock2 to LSS test.swapper.TestSwapperGlobalClock2
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock2/recovery], Data log [test.swapper.TestSwapperGlobalClock2], Shared [false]
LSS test.swapper.TestSwapperGlobalClock2/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [47.656µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock2/recovery], Data log [test.swapper.TestSwapperGlobalClock2], Shared [false]. Built [0] plasmas, took [83.256µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock2(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock2/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock2 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock3(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock3
LSS test.swapper.TestSwapperGlobalClock3/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock3/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock3) to LSS (test.swapper.TestSwapperGlobalClock3) and RecoveryLSS (test.swapper.TestSwapperGlobalClock3/recovery)
Shard shards/shard1(1) : Assign plasmaId 4 to instance test.swapper.TestSwapperGlobalClock3
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock3 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock3 to LSS test.swapper.TestSwapperGlobalClock3
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock3/recovery], Data log [test.swapper.TestSwapperGlobalClock3], Shared [false]
LSS test.swapper.TestSwapperGlobalClock3/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.238µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock3/recovery], Data log [test.swapper.TestSwapperGlobalClock3], Shared [false]. Built [0] plasmas, took [105.386µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock3(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock3/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock3 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock4(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock4
LSS test.swapper.TestSwapperGlobalClock4/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock4/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock4) to LSS (test.swapper.TestSwapperGlobalClock4) and RecoveryLSS (test.swapper.TestSwapperGlobalClock4/recovery)
Shard shards/shard1(1) : Assign plasmaId 5 to instance test.swapper.TestSwapperGlobalClock4
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock4 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock4 to LSS test.swapper.TestSwapperGlobalClock4
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock4/recovery], Data log [test.swapper.TestSwapperGlobalClock4], Shared [false]
LSS test.swapper.TestSwapperGlobalClock4/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.234µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock4/recovery], Data log [test.swapper.TestSwapperGlobalClock4], Shared [false]. Built [0] plasmas, took [111.987µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock4(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock4/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock4 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock5(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock5
LSS test.swapper.TestSwapperGlobalClock5/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock5/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock5) to LSS (test.swapper.TestSwapperGlobalClock5) and RecoveryLSS (test.swapper.TestSwapperGlobalClock5/recovery)
Shard shards/shard1(1) : Assign plasmaId 6 to instance test.swapper.TestSwapperGlobalClock5
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock5 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock5 to LSS test.swapper.TestSwapperGlobalClock5
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock5/recovery], Data log [test.swapper.TestSwapperGlobalClock5], Shared [false]
LSS test.swapper.TestSwapperGlobalClock5/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [66.546µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock5/recovery], Data log [test.swapper.TestSwapperGlobalClock5], Shared [false]. Built [0] plasmas, took [105.846µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock5(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock5/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock5 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock6(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock6
LSS test.swapper.TestSwapperGlobalClock6/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock6/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock6) to LSS (test.swapper.TestSwapperGlobalClock6) and RecoveryLSS (test.swapper.TestSwapperGlobalClock6/recovery)
Shard shards/shard1(1) : Assign plasmaId 7 to instance test.swapper.TestSwapperGlobalClock6
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock6 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock6 to LSS test.swapper.TestSwapperGlobalClock6
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock6/recovery], Data log [test.swapper.TestSwapperGlobalClock6], Shared [false]
LSS test.swapper.TestSwapperGlobalClock6/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [78.131µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock6/recovery], Data log [test.swapper.TestSwapperGlobalClock6], Shared [false]. Built [0] plasmas, took [117.021µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock6(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock6/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock6 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock7(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock7
LSS test.swapper.TestSwapperGlobalClock7/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock7/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock7) to LSS (test.swapper.TestSwapperGlobalClock7) and RecoveryLSS (test.swapper.TestSwapperGlobalClock7/recovery)
Shard shards/shard1(1) : Assign plasmaId 8 to instance test.swapper.TestSwapperGlobalClock7
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock7 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock7 to LSS test.swapper.TestSwapperGlobalClock7
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock7/recovery], Data log [test.swapper.TestSwapperGlobalClock7], Shared [false]
LSS test.swapper.TestSwapperGlobalClock7/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [60.144µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock7/recovery], Data log [test.swapper.TestSwapperGlobalClock7], Shared [false]. Built [0] plasmas, took [100.381µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock7(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock7/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock7 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock8(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock8
LSS test.swapper.TestSwapperGlobalClock8/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock8/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock8) to LSS (test.swapper.TestSwapperGlobalClock8) and RecoveryLSS (test.swapper.TestSwapperGlobalClock8/recovery)
Shard shards/shard1(1) : Assign plasmaId 9 to instance test.swapper.TestSwapperGlobalClock8
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock8 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock8 to LSS test.swapper.TestSwapperGlobalClock8
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock8/recovery], Data log [test.swapper.TestSwapperGlobalClock8], Shared [false]
LSS test.swapper.TestSwapperGlobalClock8/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [59.325µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock8/recovery], Data log [test.swapper.TestSwapperGlobalClock8], Shared [false]. Built [0] plasmas, took [94.83µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock8(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock8/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock8 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperGlobalClock9(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock9
LSS test.swapper.TestSwapperGlobalClock9/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperGlobalClock9/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperGlobalClock9) to LSS (test.swapper.TestSwapperGlobalClock9) and RecoveryLSS (test.swapper.TestSwapperGlobalClock9/recovery)
Shard shards/shard1(1) : Assign plasmaId 10 to instance test.swapper.TestSwapperGlobalClock9
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock9 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperGlobalClock9 to LSS test.swapper.TestSwapperGlobalClock9
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperGlobalClock9/recovery], Data log [test.swapper.TestSwapperGlobalClock9], Shared [false]
LSS test.swapper.TestSwapperGlobalClock9/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [452.256µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperGlobalClock9/recovery], Data log [test.swapper.TestSwapperGlobalClock9], Shared [false]. Built [0] plasmas, took [488.746µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperGlobalClock9(shard1) : all daemons started
LSS test.swapper.TestSwapperGlobalClock9/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock9 started
LSS test.swapper.TestSwapperGlobalClock0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock0 stopped
LSS test.swapper.TestSwapperGlobalClock0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock0 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperGlobalClock0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock0 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock1 stopped
LSS test.swapper.TestSwapperGlobalClock1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperGlobalClock1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock1 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock2(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock2/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock2 stopped
LSS test.swapper.TestSwapperGlobalClock2(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock2/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock2 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock2 ...
Shard shards/shard1(1) : removed plasmaId 3 for instance test.swapper.TestSwapperGlobalClock2 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock2 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock3(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock3/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock3 stopped
LSS test.swapper.TestSwapperGlobalClock3(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock3/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock3 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock3 ...
Shard shards/shard1(1) : removed plasmaId 4 for instance test.swapper.TestSwapperGlobalClock3 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock3 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock4(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock4/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock4 stopped
LSS test.swapper.TestSwapperGlobalClock4(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock4/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock4 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock4 ...
Shard shards/shard1(1) : removed plasmaId 5 for instance test.swapper.TestSwapperGlobalClock4 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock4 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock5(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock5/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock5 stopped
LSS test.swapper.TestSwapperGlobalClock5(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock5/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock5 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock5 ...
Shard shards/shard1(1) : removed plasmaId 6 for instance test.swapper.TestSwapperGlobalClock5 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock5 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock6(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock6/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock6 stopped
LSS test.swapper.TestSwapperGlobalClock6(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock6/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock6 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock6 ...
Shard shards/shard1(1) : removed plasmaId 7 for instance test.swapper.TestSwapperGlobalClock6 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock6 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock7(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock7/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock7 stopped
LSS test.swapper.TestSwapperGlobalClock7(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock7/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock7 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock7 ...
Shard shards/shard1(1) : removed plasmaId 8 for instance test.swapper.TestSwapperGlobalClock7 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock7 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock8(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock8/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock8 stopped
LSS test.swapper.TestSwapperGlobalClock8(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock8/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock8 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock8 ...
Shard shards/shard1(1) : removed plasmaId 9 for instance test.swapper.TestSwapperGlobalClock8 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock8 successfully destroyed
LSS test.swapper.TestSwapperGlobalClock9(shard1) : all daemons stopped
LSS test.swapper.TestSwapperGlobalClock9/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock9 stopped
LSS test.swapper.TestSwapperGlobalClock9(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperGlobalClock9/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock9 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperGlobalClock9 ...
Shard shards/shard1(1) : removed plasmaId 10 for instance test.swapper.TestSwapperGlobalClock9 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperGlobalClock9 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperGlobalClock (19.84s)
=== RUN   TestSwapperConflict
----------- running TestSwapperConflict
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperConflict0(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperConflict0
LSS test.swapper.TestSwapperConflict0/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperConflict0/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperConflict0) to LSS (test.swapper.TestSwapperConflict0) and RecoveryLSS (test.swapper.TestSwapperConflict0/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperConflict0
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperConflict0 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperConflict0 to LSS test.swapper.TestSwapperConflict0
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperConflict0/recovery], Data log [test.swapper.TestSwapperConflict0], Shared [false]
LSS test.swapper.TestSwapperConflict0/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [51.126µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperConflict0/recovery], Data log [test.swapper.TestSwapperConflict0], Shared [false]. Built [0] plasmas, took [99.349µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperConflict0(shard1) : all daemons started
LSS test.swapper.TestSwapperConflict0/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict0 started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperConflict1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperConflict1
LSS test.swapper.TestSwapperConflict1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperConflict1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperConflict1) to LSS (test.swapper.TestSwapperConflict1) and RecoveryLSS (test.swapper.TestSwapperConflict1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperConflict1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperConflict1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperConflict1 to LSS test.swapper.TestSwapperConflict1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperConflict1/recovery], Data log [test.swapper.TestSwapperConflict1], Shared [false]
LSS test.swapper.TestSwapperConflict1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [56.619µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperConflict1/recovery], Data log [test.swapper.TestSwapperConflict1], Shared [false]. Built [0] plasmas, took [92.73µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperConflict1(shard1) : all daemons started
LSS test.swapper.TestSwapperConflict1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict1 started
LSS test.swapper.TestSwapperConflict0(shard1) : all daemons stopped
LSS test.swapper.TestSwapperConflict0/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict0 stopped
LSS test.swapper.TestSwapperConflict0(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperConflict0/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict0 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperConflict0 ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperConflict0 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict0 successfully destroyed
LSS test.swapper.TestSwapperConflict1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperConflict1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict1 stopped
LSS test.swapper.TestSwapperConflict1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperConflict1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperConflict1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperConflict1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperConflict1 successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperConflict (2.80s)
=== RUN   TestSwapperRemoveInstanceWait
----------- running TestSwapperRemoveInstanceWait
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperRemoveInstanceWait(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstanceWait
LSS test.swapper.TestSwapperRemoveInstanceWait/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperRemoveInstanceWait/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperRemoveInstanceWait) to LSS (test.swapper.TestSwapperRemoveInstanceWait) and RecoveryLSS (test.swapper.TestSwapperRemoveInstanceWait/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperRemoveInstanceWait
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstanceWait to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperRemoveInstanceWait to LSS test.swapper.TestSwapperRemoveInstanceWait
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperRemoveInstanceWait/recovery], Data log [test.swapper.TestSwapperRemoveInstanceWait], Shared [false]
LSS test.swapper.TestSwapperRemoveInstanceWait/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [67.327µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperRemoveInstanceWait/recovery], Data log [test.swapper.TestSwapperRemoveInstanceWait], Shared [false]. Built [0] plasmas, took [103.159µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperRemoveInstanceWait(shard1) : all daemons started
LSS test.swapper.TestSwapperRemoveInstanceWait/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstanceWait started
LSS test.swapper.TestSwapperRemoveInstanceWait(shard1) : all daemons stopped
LSS test.swapper.TestSwapperRemoveInstanceWait/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstanceWait stopped
LSS test.swapper.TestSwapperRemoveInstanceWait(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperRemoveInstanceWait/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstanceWait closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperRemoveInstanceWait ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperRemoveInstanceWait ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperRemoveInstanceWait successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperRemoveInstanceWait (3.39s)
=== RUN   TestSwapperStats
----------- running TestSwapperStats
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperStats(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperStats
LSS test.swapper.TestSwapperStats/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperStats/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperStats) to LSS (test.swapper.TestSwapperStats) and RecoveryLSS (test.swapper.TestSwapperStats/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperStats
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperStats to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperStats to LSS test.swapper.TestSwapperStats
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperStats/recovery], Data log [test.swapper.TestSwapperStats], Shared [false]
LSS test.swapper.TestSwapperStats/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [71.677µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperStats/recovery], Data log [test.swapper.TestSwapperStats], Shared [false]. Built [0] plasmas, took [110.787µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperStats(shard1) : all daemons started
LSS test.swapper.TestSwapperStats/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats started
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperStats1(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperStats1
LSS test.swapper.TestSwapperStats1/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperStats1/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperStats1) to LSS (test.swapper.TestSwapperStats1) and RecoveryLSS (test.swapper.TestSwapperStats1/recovery)
Shard shards/shard1(1) : Assign plasmaId 2 to instance test.swapper.TestSwapperStats1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperStats1 to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperStats1 to LSS test.swapper.TestSwapperStats1
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperStats1/recovery], Data log [test.swapper.TestSwapperStats1], Shared [false]
LSS test.swapper.TestSwapperStats1/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [69.831µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperStats1/recovery], Data log [test.swapper.TestSwapperStats1], Shared [false]. Built [0] plasmas, took [106.062µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperStats1(shard1) : all daemons started
LSS test.swapper.TestSwapperStats1/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats1 started
updateStatsOnNewClock
updateStatsOnRebalance
LSS test.swapper.TestSwapperStats1(shard1) : all daemons stopped
LSS test.swapper.TestSwapperStats1/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats1 stopped
LSS test.swapper.TestSwapperStats1(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperStats1/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats1 closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperStats1 ...
Shard shards/shard1(1) : removed plasmaId 2 for instance test.swapper.TestSwapperStats1 ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats1 successfully destroyed
LSS test.swapper.TestSwapperStats(shard1) : all daemons stopped
LSS test.swapper.TestSwapperStats/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats stopped
LSS test.swapper.TestSwapperStats(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperStats/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperStats ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperStats ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperStats successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperStats (1.07s)
=== RUN   TestSwapperSweepInterval
----------- running TestSwapperSweepInterval
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSwapperSweepInterval(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSweepInterval
LSS test.swapper.TestSwapperSweepInterval/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSwapperSweepInterval/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSwapperSweepInterval) to LSS (test.swapper.TestSwapperSweepInterval) and RecoveryLSS (test.swapper.TestSwapperSweepInterval/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSwapperSweepInterval
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSweepInterval to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSwapperSweepInterval to LSS test.swapper.TestSwapperSweepInterval
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSwapperSweepInterval/recovery], Data log [test.swapper.TestSwapperSweepInterval], Shared [false]
LSS test.swapper.TestSwapperSweepInterval/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [53.207µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSwapperSweepInterval/recovery], Data log [test.swapper.TestSwapperSweepInterval], Shared [false]. Built [0] plasmas, took [95.509µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSwapperSweepInterval(shard1) : all daemons started
LSS test.swapper.TestSwapperSweepInterval/recovery(shard1) : all daemons started
Shard shards/shard1(1) : Swapper started
Shard shards/shard1(1) : Instance added to swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSweepInterval started
new sweep interval 4m53.999981145s ratio 0.489999968575
LSS test.swapper.TestSwapperSweepInterval(shard1) : all daemons stopped
LSS test.swapper.TestSwapperSweepInterval/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : Instance removed from swapper : shards/shard1
Shard shards/shard1(1) : instance test.swapper.TestSwapperSweepInterval stopped
LSS test.swapper.TestSwapperSweepInterval(shard1) : LSSCleaner stopped
LSS test.swapper.TestSwapperSweepInterval/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSwapperSweepInterval closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSwapperSweepInterval ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSwapperSweepInterval ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSwapperSweepInterval successfully destroyed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : Swapper stopped
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSwapperSweepInterval (0.43s)
=== RUN   TestSweepCompress
----------- running TestSweepCompress
Start shard recovery from shardsDirectory shards does not exist.  Skip shard recovery.
Shard shards/shard1(1) : Shard Created Successfully
Shard shards/shard1(1) : metadata saved successfully
LSS test.swapper.TestSweepCompress(shard1) : LSSCleaner initialized
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSweepCompress
LSS test.swapper.TestSweepCompress/recovery(shard1) : LSSCleaner initialized for recovery
Shard shards/shard1(1) : LSSCtx Created Successfully. Path=test.swapper.TestSweepCompress/recovery
Shard shards/shard1(1) : Map plasma instance (test.swapper.TestSweepCompress) to LSS (test.swapper.TestSweepCompress) and RecoveryLSS (test.swapper.TestSweepCompress/recovery)
Shard shards/shard1(1) : Assign plasmaId 1 to instance test.swapper.TestSweepCompress
Shard shards/shard1(1) : Add instance test.swapper.TestSweepCompress to Shard shards/shard1
Shard shards/shard1(1) : Add instance test.swapper.TestSweepCompress to LSS test.swapper.TestSweepCompress
Shard shards/shard1(1) : Shard.doRecovery: Starting recovery. Recovery log [test.swapper.TestSweepCompress/recovery], Data log [test.swapper.TestSweepCompress], Shared [false]
LSS test.swapper.TestSweepCompress/recovery(shard1) : recoverFromHeaderReplay: Begin recovering from recovery log, headOffset [0] tailOffset [0]
Shard shards/shard1(1) : Shard.doRecovery: Done recovering from recovery log, replayOffset [0] took [81.738µs]
Shard shards/shard1(1) : Shard.doRecovery: Done recovery. Recovery log [test.swapper.TestSweepCompress/recovery], Data log [test.swapper.TestSweepCompress], Shared [false]. Built [0] plasmas, took [126.596µs]
Plasma: doInit: data UsedSpace 0 recovery UsedSpace 0
LSS test.swapper.TestSweepCompress(shard1) : all daemons started
LSS test.swapper.TestSweepCompress/recovery(shard1) : all daemons started
Shard shards/shard1(1) : instance test.swapper.TestSweepCompress started
LSS test.swapper.TestSweepCompress(shard1) : all daemons stopped
LSS test.swapper.TestSweepCompress/recovery(shard1) : all daemons stopped
Shard shards/shard1(1) : instance test.swapper.TestSweepCompress stopped
LSS test.swapper.TestSweepCompress(shard1) : LSSCleaner stopped
LSS test.swapper.TestSweepCompress/recovery(shard1) : LSSCleaner stopped
Shard shards/shard1(1) : instance test.swapper.TestSweepCompress closed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : Shutdown completed
Shard shards/shard1(1) : All daemons stopped
Shard shards/shard1(1) : All instances closed
Shard shards/shard1(1) : destroying instance test.swapper.TestSweepCompress ...
Shard shards/shard1(1) : removed plasmaId 1 for instance test.swapper.TestSweepCompress ...
Shard shards/shard1(1) : metadata saved successfully
Shard shards/shard1(1) : instance test.swapper.TestSweepCompress successfully destroyed
Shard shards/shard1(1) : All instances destroyed
Shard shards/shard1(1) : Shard Destroyed Successfully
--- PASS: TestSweepCompress (0.02s)
=== RUN   TestSCtx
----------- running TestSCtx
0 1 510804
0 2 466380
0 3 443395
0 4 503685
0 5 487259
0 6 485159
0 7 579867
0 8 465093
0 9 469734
0 10 426483
0 11 505484
0 12 596305
0 13 465639
0 14 449008
0 15 534613
0 16 521734
0 17 492339
0 18 436216
0 19 514694
0 20 512730
0 21 511941
0 22 544577
0 23 567347
0 24 569905
0 25 496032
0 26 493588
0 27 505700
0 28 433112
0 29 474557
0 30 628975
0 31 493226
0 32 432343
0 33 497941
0 34 541372
0 35 542310
0 36 528740
0 37 594599
0 38 544654
0 39 483732
0 40 488472
0 41 431775
0 42 439201
0 43 514928
0 44 526557
0 45 477383
0 46 511267
0 47 512265
0 48 586832
--- PASS: TestSCtx (15.71s)
=== RUN   TestWCtxGeneric
----------- running TestWCtxGeneric
fragAutoTuner: FragRatio at 19. MaxFragRatio 100, MaxBandwidth 651583. BandwidthUsage 125631. AvailDisk 0. TotalUsed 0. BandwidthRatio 0.19280889771525653. UsedSpaceRatio 1. CleanerBandwidth 525952. Duration 0.
0 1 897061
0 2 893711
0 3 853304
0 4 900742
0 5 902710
0 6 898407
0 7 901367
0 8 895722
0 9 899781
0 10 902462
0 11 907724
0 12 899868
0 13 901414
0 14 897300
0 15 903478
0 16 895924
0 17 897872
0 18 893397
0 19 901659
0 20 906108
0 21 897030
0 22 895530
0 23 896962
0 24 896540
0 25 902056
0 26 897575
0 27 904296
--- PASS: TestWCtxGeneric (57.04s)
=== RUN   TestWCtxWriter
----------- running TestWCtxWriter
fatal error: runtime: cannot allocate memory

goroutine 545242 [running]:
runtime.throw(0x8ec559, 0x1f)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/panic.go:774 +0x72 fp=0xc0005fbb00 sp=0xc0005fbad0 pc=0x431972
runtime.newArenaMayUnlock(0xc70de0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mheap.go:2035 +0xda fp=0xc0005fbb38 sp=0xc0005fbb00 pc=0x42a71a
runtime.newMarkBits(0x66, 0x100)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mheap.go:1955 +0xc3 fp=0xc0005fbb80 sp=0xc0005fbb38 pc=0x42a2f3
runtime.heapBits.initSpan(0x7f35c9bf4b00, 0x20307100000000, 0x7f35c9d94fff, 0x7f36807ce830)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mbitmap.go:792 +0x74 fp=0xc0005fbc00 sp=0xc0005fbb80 pc=0x4179f4
runtime.(*mcentral).grow(0xc7a278, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mcentral.go:264 +0x13e fp=0xc0005fbc40 sp=0xc0005fbc00 pc=0x41a0ae
runtime.(*mcentral).cacheSpan(0xc7a278, 0x203071)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mcentral.go:106 +0x2fe fp=0xc0005fbca0 sp=0xc0005fbc40 pc=0x419b0e
runtime.(*mcache).refill(0x7f36944df7e0, 0xc)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mcache.go:138 +0x85 fp=0xc0005fbcc0 sp=0xc0005fbca0 pc=0x4195b5
runtime.(*mcache).nextFree(0x7f36944df7e0, 0xc, 0xd0, 0x8b93c0, 0x7f369002ff01)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/malloc.go:854 +0x87 fp=0xc0005fbcf8 sp=0xc0005fbcc0 pc=0x40df87
runtime.mallocgc(0x50, 0x8931c0, 0x1, 0xc1c4bf4340)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/malloc.go:1022 +0x793 fp=0xc0005fbd98 sp=0xc0005fbcf8 pc=0x40e8c3
runtime.makeslice(0x8931c0, 0xa, 0xa, 0xc0008d8078)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/slice.go:49 +0x6c fp=0xc0005fbdc8 sp=0xc0005fbd98 pc=0x44786c
github.com/couchbase/plasma.newSCtx(...)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:150
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:639 +0xbc fp=0xc0005fbe50 sp=0xc0005fbdc8 pc=0x7365ec
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x12, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0005fbe90 sp=0xc0005fbe50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x12, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0005fbf38 sp=0xc0005fbe90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x12)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0005fbfc0 sp=0xc0005fbf38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0005fbfc8 sp=0xc0005fbfc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 1 [chan receive, 1 minutes]:
runtime.gopark(0x9058a8, 0xc2c99d8cb8, 0xc000c0170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000069b10 sp=0xc000069af0 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc2c99d8c60, 0xc000069c27, 0xc000000101, 0x4eeb30)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000069ba0 sp=0xc000069b10 pc=0x408078
runtime.chanrecv1(0xc2c99d8c60, 0xc000069c27)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:406 +0x2b fp=0xc000069bd0 sp=0xc000069ba0 pc=0x407d3b
testing.(*T).Run(0xc00afb2000, 0x8e2e37, 0xe, 0x9046f8, 0x481a01)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:961 +0x377 fp=0xc000069c80 sp=0xc000069bd0 pc=0x4eeb57
testing.runTests.func1(0xc00015e000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:1202 +0x78 fp=0xc000069cd0 sp=0xc000069c80 pc=0x4f23c8
testing.tRunner(0xc00015e000, 0xc000069dc0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:909 +0xc9 fp=0xc000069d30 sp=0xc000069cd0 pc=0x4ee789
testing.runTests(0xc000132020, 0xc37c60, 0xc5, 0xc5, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:1200 +0x2a7 fp=0xc000069df0 sp=0xc000069d30 pc=0x4f0067
testing.(*M).Run(0xc00015a000, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:1117 +0x176 fp=0xc000069ef8 sp=0xc000069df0 pc=0x4eefc6
main.main()
	_testmain.go:438 +0x135 fp=0xc000069f60 sp=0xc000069ef8 pc=0x7e69e5
runtime.main()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:203 +0x21e fp=0xc000069fe0 sp=0xc000069f60 pc=0x43330e
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x460951

goroutine 2 [force gc (idle), 29 minutes]:
runtime.gopark(0x9058a8, 0xc708f0, 0x1411, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000056fb0 sp=0xc000056f90 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.forcegchelper()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:253 +0xb7 fp=0xc000056fe0 sp=0xc000056fb0 pc=0x433597
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x460951
created by runtime.init.5
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:242 +0x35

goroutine 3 [GC sweep wait]:
runtime.gopark(0x9058a8, 0xc70ac0, 0x140c, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0000577a8 sp=0xc000057788 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.bgsweep(0xc00007e000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgcsweep.go:89 +0x131 fp=0xc0000577d8 sp=0xc0000577a8 pc=0x424b61
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000577e0 sp=0xc0000577d8 pc=0x460951
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:210 +0x5c

goroutine 4 [GC scavenge wait]:
runtime.gopark(0x9058a8, 0xc70e20, 0x140d, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000057f40 sp=0xc000057f20 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.bgscavenge(0xc00007e000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgcscavenge.go:374 +0x3b3 fp=0xc000057fd8 sp=0xc000057f40 pc=0x424423
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000057fe0 sp=0xc000057fd8 pc=0x460951
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:211 +0x7e

goroutine 5 [finalizer wait, 7 minutes]:
runtime.gopark(0x9058a8, 0xc904b8, 0xc000221410, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000058758 sp=0xc000058738 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.runfinq()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mfinal.go:175 +0xa3 fp=0xc0000587e0 sp=0xc000058758 pc=0x41a9c3
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x460951
created by runtime.createfing
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mfinal.go:156 +0x61

goroutine 6 [chan receive, 1 minutes]:
runtime.gopark(0x9058a8, 0xc00010c058, 0x82170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000065e98 sp=0xc000065e78 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00010c000, 0xc000065fb8, 0x1, 0x101)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000065f28 sp=0xc000065e98 pc=0x408078
runtime.chanrecv2(0xc00010c000, 0xc000065fb8, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:411 +0x2b fp=0xc000065f58 sp=0xc000065f28 pc=0x407d7b
github.com/couchbase/plasma.runCleanerAutoTuner()
	/opt/build/goproj/src/github.com/couchbase/plasma/auto_tuner.go:646 +0x194 fp=0xc000065fe0 sp=0xc000065f58 pc=0x6d0614
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000065fe8 sp=0xc000065fe0 pc=0x460951
created by github.com/couchbase/plasma.init.2
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:181 +0x10c

goroutine 7 [chan receive, 1 minutes]:
runtime.gopark(0x9058a8, 0xc000104058, 0xc000dd170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dd0e98 sp=0xc000dd0e78 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc000104000, 0xc000dd0fb8, 0xc000108001, 0xc90560)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000dd0f28 sp=0xc000dd0e98 pc=0x408078
runtime.chanrecv2(0xc000104000, 0xc000dd0fb8, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:411 +0x2b fp=0xc000dd0f58 sp=0xc000dd0f28 pc=0x407d7b
github.com/couchbase/plasma.singletonWorker()
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:2229 +0xb7 fp=0xc000dd0fe0 sp=0xc000dd0f58 pc=0x723d97
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dd0fe8 sp=0xc000dd0fe0 pc=0x460951
created by github.com/couchbase/plasma.init.2
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:182 +0x124

goroutine 8 [chan receive]:
runtime.gopark(0x9058a8, 0xc000130058, 0xc00010170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000059698 sp=0xc000059678 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc000130000, 0xc0000597b8, 0xc000000001, 0xc000059758)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000059728 sp=0xc000059698 pc=0x408078
runtime.chanrecv2(0xc000130000, 0xc0000597b8, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:411 +0x2b fp=0xc000059758 sp=0xc000059728 pc=0x407d7b
github.com/couchbase/plasma.updateMemUsed()
	/opt/build/goproj/src/github.com/couchbase/plasma/mem.go:309 +0xc0 fp=0xc0000597e0 sp=0xc000059758 pc=0x7b3930
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000597e8 sp=0xc0000597e0 pc=0x460951
created by github.com/couchbase/plasma.init.2
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:183 +0x13c

goroutine 9 [chan receive, 1 minutes]:
runtime.gopark(0x9058a8, 0xc000160058, 0x170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000059e68 sp=0xc000059e48 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc000160000, 0xc000059fb8, 0xc000166001, 0xc000160101)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000059ef8 sp=0xc000059e68 pc=0x408078
runtime.chanrecv2(0xc000160000, 0xc000059fb8, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:411 +0x2b fp=0xc000059f28 sp=0xc000059ef8 pc=0x407d7b
github.com/couchbase/plasma.AggregateAndLogStats()
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:2198 +0xc2 fp=0xc000059fe0 sp=0xc000059f28 pc=0x723b92
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000059fe8 sp=0xc000059fe0 pc=0x460951
created by github.com/couchbase/plasma.init.2
	/opt/build/goproj/src/github.com/couchbase/plasma/shard.go:184 +0x154

goroutine 18 [syscall]:
runtime.notetsleepg(0xc77c60, 0x56326ed5e, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000052760 sp=0xc000052730 pc=0x40d224
runtime.timerproc(0xc77c40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:311 +0x2f1 fp=0xc0000527d8 sp=0xc000052760 pc=0x44fc01
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000527e0 sp=0xc0000527d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 34 [syscall]:
runtime.notetsleepg(0xc77ce0, 0x3b9845d7, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000128760 sp=0xc000128730 pc=0x40d224
runtime.timerproc(0xc77cc0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:311 +0x2f1 fp=0xc0001287d8 sp=0xc000128760 pc=0x44fc01
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0001287e0 sp=0xc0001287d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 35 [chan receive]:
runtime.gopark(0x9058a8, 0xc00010c118, 0x170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000128e90 sp=0xc000128e70 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00010c0c0, 0xc000128fb0, 0xc000110001, 0xc000100101)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000128f20 sp=0xc000128e90 pc=0x408078
runtime.chanrecv2(0xc00010c0c0, 0xc000128fb0, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:411 +0x2b fp=0xc000128f50 sp=0xc000128f20 pc=0x407d7b
github.com/couchbase/plasma.(*CleanerAutoTuner).refreshCleanerBandwidth(0xc00012c000)
	/opt/build/goproj/src/github.com/couchbase/plasma/auto_tuner.go:657 +0x9f fp=0xc000128fd8 sp=0xc000128f50 pc=0x6d06ff
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000128fe0 sp=0xc000128fd8 pc=0x460951
created by github.com/couchbase/plasma.runCleanerAutoTuner
	/opt/build/goproj/src/github.com/couchbase/plasma/auto_tuner.go:643 +0x119

goroutine 50 [syscall]:
runtime.notetsleepg(0xc77d60, 0x3b9887c8, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000124760 sp=0xc000124730 pc=0x40d224
runtime.timerproc(0xc77d40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:311 +0x2f1 fp=0xc0001247d8 sp=0xc000124760 pc=0x44fc01
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0001247e0 sp=0xc0001247d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 453911 [chan receive, 44 minutes]:
runtime.gopark(0x9058a8, 0xc00015c118, 0xc00013170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007c0d78 sp=0xc0007c0d58 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00015c0c0, 0x0, 0x4ee401, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc0007c0e08 sp=0xc0007c0d78 pc=0x408078
runtime.chanrecv1(0xc00015c0c0, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:406 +0x2b fp=0xc0007c0e38 sp=0xc0007c0e08 pc=0x407d3b
testing.(*T).Parallel(0xc00015e300)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:814 +0x1d5 fp=0xc0007c0ec0 sp=0xc0007c0e38 pc=0x4ee4d5
github.com/couchbase/plasma.TestExtrasN1(0xc00015e300)
	/opt/build/goproj/src/github.com/couchbase/plasma/extras_test.go:25 +0x40 fp=0xc0007c0f70 sp=0xc0007c0ec0 pc=0x7450b0
testing.tRunner(0xc00015e300, 0x904220)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:909 +0xc9 fp=0xc0007c0fd0 sp=0xc0007c0f70 pc=0x4ee789
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007c0fd8 sp=0xc0007c0fd0 pc=0x460951
created by testing.(*T).Run
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:960 +0x350

goroutine 66 [syscall]:
runtime.notetsleepg(0xc77de0, 0x3c968bc0bf, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000170760 sp=0xc000170730 pc=0x40d224
runtime.timerproc(0xc77dc0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:311 +0x2f1 fp=0xc0001707d8 sp=0xc000170760 pc=0x44fc01
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0001707e0 sp=0xc0001707d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545226 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00db31912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007a1ce8 sp=0xc0007a1cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0007a1d50 sp=0xc0007a1ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0007a1d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0007a1d80 sp=0xc0007a1d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0007a1dc8 sp=0xc0007a1d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0007a1e50 sp=0xc0007a1dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x2, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0007a1e90 sp=0xc0007a1e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x2, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0007a1f38 sp=0xc0007a1e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x2)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0007a1fc0 sp=0xc0007a1f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007a1fc8 sp=0xc0007a1fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 453913 [chan receive, 44 minutes]:
runtime.gopark(0x9058a8, 0xc00015c118, 0xc00013170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007bc5d8 sp=0xc0007bc5b8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00015c0c0, 0x0, 0x4ee401, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc0007bc668 sp=0xc0007bc5d8 pc=0x408078
runtime.chanrecv1(0xc00015c0c0, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:406 +0x2b fp=0xc0007bc698 sp=0xc0007bc668 pc=0x407d3b
testing.(*T).Parallel(0xc00015e500)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:814 +0x1d5 fp=0xc0007bc720 sp=0xc0007bc698 pc=0x4ee4d5
github.com/couchbase/plasma.TestExtrasN3(0xc00015e500)
	/opt/build/goproj/src/github.com/couchbase/plasma/extras_test.go:40 +0x2b fp=0xc0007bc770 sp=0xc0007bc720 pc=0x74550b
testing.tRunner(0xc00015e500, 0x904230)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:909 +0xc9 fp=0xc0007bc7d0 sp=0xc0007bc770 pc=0x4ee789
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007bc7d8 sp=0xc0007bc7d0 pc=0x460951
created by testing.(*T).Run
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:960 +0x350

goroutine 539920 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370020, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007be760 sp=0xc0007be740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000028000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0007be7d8 sp=0xc0007be760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007be7e0 sp=0xc0007be7d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 10 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc77bc0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc00017bf60 sp=0xc00017bf40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc77bc0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc00017bfd8 sp=0xc00017bf60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00017bfe0 sp=0xc00017bfd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 544417 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc00090cd60, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000a9af60 sp=0xc000a9af40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00003aa00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000a9afd8 sp=0xc000a9af60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000a9afe0 sp=0xc000a9afd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 114 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc77e40, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000607f60 sp=0xc000607f40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc77e40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000607fd8 sp=0xc000607f60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000607fe0 sp=0xc000607fd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 492010 [chan receive, 6 minutes]:
runtime.gopark(0x9058a8, 0xc0014bfbb8, 0xc00155170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000724eb8 sp=0xc000724e98 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc0014bfb60, 0x0, 0x7d8801, 0xc0000c0000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000724f48 sp=0xc000724eb8 pc=0x408078
runtime.chanrecv1(0xc0014bfb60, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:406 +0x2b fp=0xc000724f78 sp=0xc000724f48 pc=0x407d3b
github.com/couchbase/plasma.testSMRConcurrent.func1(0xc001556120, 0x8, 0xc0014b5700, 0x8, 0x8, 0xc000f44478)
	/opt/build/goproj/src/github.com/couchbase/plasma/smr_test.go:110 +0x45 fp=0xc000724fb0 sp=0xc000724f78 pc=0x7d87d5
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000724fb8 sp=0xc000724fb0 pc=0x460951
created by github.com/couchbase/plasma.testSMRConcurrent
	/opt/build/goproj/src/github.com/couchbase/plasma/smr_test.go:103 +0x3ad

goroutine 130 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc77ec0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000608760 sp=0xc000608740 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc77ec0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc0006087d8 sp=0xc000608760 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006087e0 sp=0xc0006087d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545235 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00eae1912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dd5ce8 sp=0xc000dd5cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0x43f300, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000dd5d50 sp=0xc000dd5ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000dd5d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000dd5d80 sp=0xc000dd5d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000dd5dc8 sp=0xc000dd5d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000dd5e50 sp=0xc000dd5dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xb, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000dd5e90 sp=0xc000dd5e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xb, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000dd5f38 sp=0xc000dd5e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xb)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000dd5fc0 sp=0xc000dd5f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dd5fc8 sp=0xc000dd5fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 162 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc77f40, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000859f60 sp=0xc000859f40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc77f40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000859fd8 sp=0xc000859f60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000859fe0 sp=0xc000859fd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 544719 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370040, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0006f5760 sp=0xc0006f5740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00003f400)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0006f57d8 sp=0xc0006f5760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006f57e0 sp=0xc0006f57d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545239 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc0016e1912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0013e6ce8 sp=0xc0013e6cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0013e6d50 sp=0xc0013e6ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0013e6d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0013e6d80 sp=0xc0013e6d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0013e6dc8 sp=0xc0013e6d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0013e6e50 sp=0xc0013e6dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xf, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0013e6e90 sp=0xc0013e6e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xf, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0013e6f38 sp=0xc0013e6e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xf)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0013e6fc0 sp=0xc0013e6f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0013e6fc8 sp=0xc0013e6fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 453912 [chan receive, 44 minutes]:
runtime.gopark(0x9058a8, 0xc00015c118, 0xc00013170e, 0x3)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000604dd8 sp=0xc000604db8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00015c0c0, 0x0, 0x4ee401, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000604e68 sp=0xc000604dd8 pc=0x408078
runtime.chanrecv1(0xc00015c0c0, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/chan.go:406 +0x2b fp=0xc000604e98 sp=0xc000604e68 pc=0x407d3b
testing.(*T).Parallel(0xc00015e400)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:814 +0x1d5 fp=0xc000604f20 sp=0xc000604e98 pc=0x4ee4d5
github.com/couchbase/plasma.TestExtrasN2(0xc00015e400)
	/opt/build/goproj/src/github.com/couchbase/plasma/extras_test.go:34 +0x2b fp=0xc000604f70 sp=0xc000604f20 pc=0x74549b
testing.tRunner(0xc00015e400, 0x904228)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:909 +0xc9 fp=0xc000604fd0 sp=0xc000604f70 pc=0x4ee789
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000604fd8 sp=0xc000604fd0 pc=0x460951
created by testing.(*T).Run
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:960 +0x350

goroutine 3266 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc78040, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007bef60 sp=0xc0007bef40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc78040)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc0007befd8 sp=0xc0007bef60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007befe0 sp=0xc0007befd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 544720 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370080, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0006f5f60 sp=0xc0006f5f40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000044000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0006f5fd8 sp=0xc0006f5f60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006f5fe0 sp=0xc0006f5fd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 17346 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc782c0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e7f60 sp=0xc0004e7f40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc782c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc0004e7fd8 sp=0xc0004e7f60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e7fe0 sp=0xc0004e7fd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545225 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000161912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc00079ece8 sp=0xc00079ecc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc00079ed50 sp=0xc00079ece8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc00079ed00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc00079ed80 sp=0xc00079ed50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc00079edc8 sp=0xc00079ed80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc00079ee50 sp=0xc00079edc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x1, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc00079ee90 sp=0xc00079ee50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x1, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc00079ef38 sp=0xc00079ee90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x1)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc00079efc0 sp=0xc00079ef38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00079efc8 sp=0xc00079efc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545229 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00eae1912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dd4ce8 sp=0xc000dd4cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000dd4d50 sp=0xc000dd4ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000dd4d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000dd4d80 sp=0xc000dd4d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000dd4dc8 sp=0xc000dd4d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000dd4e50 sp=0xc000dd4dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x5, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000dd4e90 sp=0xc000dd4e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x5, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000dd4f38 sp=0xc000dd4e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x5)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000dd4fc0 sp=0xc000dd4f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dd4fc8 sp=0xc000dd4fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545230 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000161912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc00079dce8 sp=0xc00079dcc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc00079dd50 sp=0xc00079dce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc00079dd00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc00079dd80 sp=0xc00079dd50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc00079ddc8 sp=0xc00079dd80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc00079de50 sp=0xc00079ddc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x6, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc00079de90 sp=0xc00079de50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x6, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc00079df38 sp=0xc00079de90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x6)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc00079dfc0 sp=0xc00079df38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00079dfc8 sp=0xc00079dfc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545233 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000941912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000066ce8 sp=0xc000066cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000066d50 sp=0xc000066ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000066d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000066d80 sp=0xc000066d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000066dc8 sp=0xc000066d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000066e50 sp=0xc000066dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x9, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000066e90 sp=0xc000066e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x9, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000066f38 sp=0xc000066e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x9)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000066fc0 sp=0xc000066f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000066fc8 sp=0xc000066fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545237 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000821912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc00056bce8 sp=0xc00056bcc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0x43f301, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc00056bd50 sp=0xc00056bce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc00056bd01, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc00056bd80 sp=0xc00056bd50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc00056bdc8 sp=0xc00056bd80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc00056be50 sp=0xc00056bdc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xd, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc00056be90 sp=0xc00056be50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xd, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc00056bf38 sp=0xc00056be90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xd)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc00056bfc0 sp=0xc00056bf38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00056bfc8 sp=0xc00056bfc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 540687 [semacquire]:
runtime.gopark(0x9058a8, 0xc7fd80, 0xc000821912, 0x4)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007256d0 sp=0xc0007256b0 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0004943c8, 0x43ab00, 0x1, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000725738 sp=0xc0007256d0 pc=0x443b40
sync.runtime_Semacquire(0xc0004943c8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:56 +0x42 fp=0xc000725768 sp=0xc000725738 pc=0x443792
sync.(*WaitGroup).Wait(0xc0004943c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/waitgroup.go:130 +0x64 fp=0xc000725790 sp=0xc000725768 pc=0x46ea34
github.com/couchbase/plasma.testWCtxWriter(0xc00afb2000, 0xc8, 0x190, 0x19, 0x0, 0x904728, 0x9047a0, 0x904738, 0x0, 0x0, ...)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:254 +0x308 fp=0xc0007258b8 sp=0xc000725790 pc=0x7af188
github.com/couchbase/plasma.runTest(0xc00afb2000, 0x8e2e37, 0xe, 0x905190, 0x8e0773, 0x7, 0x53da3090000)
	/opt/build/goproj/src/github.com/couchbase/plasma/testing.go:382 +0x3b1 fp=0xc000725f28 sp=0xc0007258b8 pc=0x732e81
github.com/couchbase/plasma.TestWCtxWriter(0xc00afb2000)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:826 +0x68 fp=0xc000725f70 sp=0xc000725f28 pc=0x7b1fc8
testing.tRunner(0xc00afb2000, 0x9046f8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:909 +0xc9 fp=0xc000725fd0 sp=0xc000725f70 pc=0x4ee789
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000725fd8 sp=0xc000725fd0 pc=0x460951
created by testing.(*T).Run
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/testing/testing.go:960 +0x350

goroutine 544916 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc00027cfc0, 0xc0009a1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000a9ef60 sp=0xc000a9ef40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00002ef00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000a9efd8 sp=0xc000a9ef60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000a9efe0 sp=0xc000a9efd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 6386 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc781c0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007aa760 sp=0xc0007aa740 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc781c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc0007aa7d8 sp=0xc0007aa760 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007aa7e0 sp=0xc0007aa7d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545232 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc001511912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000722ce8 sp=0xc000722cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000722d50 sp=0xc000722ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000722d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000722d80 sp=0xc000722d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000722dc8 sp=0xc000722d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000722e50 sp=0xc000722dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x8, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000722e90 sp=0xc000722e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x8, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000722f38 sp=0xc000722e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x8)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000722fc0 sp=0xc000722f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000722fc8 sp=0xc000722fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 3282 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc77fc0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000892f60 sp=0xc000892f40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc77fc0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000892fd8 sp=0xc000892f60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000892fe0 sp=0xc000892fd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545241 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00f561912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dd3ce8 sp=0xc000dd3cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000dd3d50 sp=0xc000dd3ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000dd3d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000dd3d80 sp=0xc000dd3d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000dd3dc8 sp=0xc000dd3d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000dd3e50 sp=0xc000dd3dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x11, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000dd3e90 sp=0xc000dd3e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x11, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000dd3f38 sp=0xc000dd3e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x11)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000dd3fc0 sp=0xc000dd3f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dd3fc8 sp=0xc000dd3fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545228 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000f51912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0005f5ce8 sp=0xc0005f5cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0005f5d50 sp=0xc0005f5ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0005f5d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0005f5d80 sp=0xc0005f5d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0005f5dc8 sp=0xc0005f5d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0005f5e50 sp=0xc0005f5dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x4, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0005f5e90 sp=0xc0005f5e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x4, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0005f5f38 sp=0xc0005f5e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x4)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0005f5fc0 sp=0xc0005f5f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0005f5fc8 sp=0xc0005f5fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 3602 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc780c0, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000ab6f60 sp=0xc000ab6f40 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc780c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000ab6fd8 sp=0xc000ab6f60 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000ab6fe0 sp=0xc000ab6fd8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 4546 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc78140, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000673760 sp=0xc000673740 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc78140)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc0006737d8 sp=0xc000673760 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006737e0 sp=0xc0006737d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 545240 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00f561912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007a0ce8 sp=0xc0007a0cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0007a0d50 sp=0xc0007a0ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0007a0d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0007a0d80 sp=0xc0007a0d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0007a0dc8 sp=0xc0007a0d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0007a0e50 sp=0xc0007a0dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x10, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0007a0e90 sp=0xc0007a0e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x10, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0007a0f38 sp=0xc0007a0e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x10)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0007a0fc0 sp=0xc0007a0f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007a0fc8 sp=0xc0007a0fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 540686 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370070, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000625760 sp=0xc000625740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000036000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0006257d8 sp=0xc000625760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006257e0 sp=0xc0006257d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545231 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00cc21912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0013e1ce8 sp=0xc0013e1cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0013e1d50 sp=0xc0013e1ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0013e1d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0013e1d80 sp=0xc0013e1d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0013e1dc8 sp=0xc0013e1d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0013e1e50 sp=0xc0013e1dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x7, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0013e1e90 sp=0xc0013e1e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x7, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0013e1f38 sp=0xc0013e1e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x7)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0013e1fc0 sp=0xc0013e1f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0013e1fc8 sp=0xc0013e1fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 544932 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc0002d0020, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000a9f760 sp=0xc000a9f740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000048a00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000a9f7d8 sp=0xc000a9f760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000a9f7e0 sp=0xc000a9f7d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545238 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00cc21912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dcfce8 sp=0xc000dcfcc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000dcfd50 sp=0xc000dcfce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000dcfd00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000dcfd80 sp=0xc000dcfd50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000dcfdc8 sp=0xc000dcfd80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000dcfe50 sp=0xc000dcfdc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xe, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000dcfe90 sp=0xc000dcfe50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xe, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000dcff38 sp=0xc000dcfe90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xe)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000dcffc0 sp=0xc000dcff38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dcffc8 sp=0xc000dcffc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545244 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc0008d1912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0013e7ce8 sp=0xc0013e7cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0013e7d50 sp=0xc0013e7ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0013e7d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0013e7d80 sp=0xc0013e7d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0013e7dc8 sp=0xc0013e7d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0013e7e50 sp=0xc0013e7dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x14, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0013e7e90 sp=0xc0013e7e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x14, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0013e7f38 sp=0xc0013e7e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x14)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0013e7fc0 sp=0xc0013e7f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0013e7fc8 sp=0xc0013e7fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 18450 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc78240, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000b14760 sp=0xc000b14740 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc78240)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000b147d8 sp=0xc000b14760 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000b147e0 sp=0xc000b147d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 18466 [timer goroutine (idle)]:
runtime.gopark(0x9058a8, 0xc78340, 0x1415, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000b13760 sp=0xc000b13740 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.timerproc(0xc78340)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:303 +0x27b fp=0xc000b137d8 sp=0xc000b13760 pc=0x44fb8b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000b137e0 sp=0xc000b137d8 pc=0x460951
created by runtime.(*timersBucket).addtimerLocked
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/time.go:169 +0x10e

goroutine 544742 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc0002360f0, 0xc01f0c1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0006f2760 sp=0xc0006f2740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000046500)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0006f27d8 sp=0xc0006f2760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0006f27e0 sp=0xc0006f27d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 544918 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc00090cd50, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc001041f60 sp=0xc001041f40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000038500)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc001041fd8 sp=0xc001041f60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc001041fe0 sp=0xc001041fd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545234 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc00db31912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000619ce8 sp=0xc000619cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000619d50 sp=0xc000619ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000619d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000619d80 sp=0xc000619d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000619dc8 sp=0xc000619d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000619e50 sp=0xc000619dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xa, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000619e90 sp=0xc000619e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xa, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000619f38 sp=0xc000619e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xa)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000619fc0 sp=0xc000619f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000619fc8 sp=0xc000619fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 540843 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370010, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc001536760 sp=0xc001536740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000033900)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0015367d8 sp=0xc001536760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0015367e0 sp=0xc0015367d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545227 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000161912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0005face8 sp=0xc0005facc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0005fad50 sp=0xc0005face8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0005fad00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0005fad80 sp=0xc0005fad50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0005fadc8 sp=0xc0005fad80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0005fae50 sp=0xc0005fadc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x3, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0005fae90 sp=0xc0005fae50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x3, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0005faf38 sp=0xc0005fae90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x3)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0005fafc0 sp=0xc0005faf38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0005fafc8 sp=0xc0005fafc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545062 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370060, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007c1f60 sp=0xc0007c1f40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000041900)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0007c1fd8 sp=0xc0007c1f60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007c1fe0 sp=0xc0007c1fd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 544902 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc0002360d0, 0xc000101418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000b17f60 sp=0xc000b17f40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc000031400)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000b17fd8 sp=0xc000b17f60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000b17fe0 sp=0xc000b17fd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545245 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000821912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0003bdce8 sp=0xc0003bdcc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc0003bdd50 sp=0xc0003bdce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc0003bdd00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc0003bdd80 sp=0xc0003bdd50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc0003bddc8 sp=0xc0003bdd80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc0003bde50 sp=0xc0003bddc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x15, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc0003bde90 sp=0xc0003bde50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x15, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc0003bdf38 sp=0xc0003bde90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x15)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc0003bdfc0 sp=0xc0003bdf38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0003bdfc8 sp=0xc0003bdfc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 544934 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370050, 0xc000101418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0002e6760 sp=0xc0002e6740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00002a500)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0002e67d8 sp=0xc0002e6760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0002e67e0 sp=0xc0002e67d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545243 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc001001912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc00079bce8 sp=0xc00079bcc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc00079bd50 sp=0xc00079bce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc00079bd00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc00079bd80 sp=0xc00079bd50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc00079bdc8 sp=0xc00079bd80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc00079be50 sp=0xc00079bdc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x13, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc00079be90 sp=0xc00079be50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x13, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc00079bf38 sp=0xc00079be90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x13)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc00079bfc0 sp=0xc00079bf38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00079bfc8 sp=0xc00079bfc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545247 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc001001912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000dd1ce8 sp=0xc000dd1cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000dd1d50 sp=0xc000dd1ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000dd1d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000dd1d80 sp=0xc000dd1d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000dd1dc8 sp=0xc000dd1d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000dd1e50 sp=0xc000dd1dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x17, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000dd1e90 sp=0xc000dd1e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x17, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000dd1f38 sp=0xc000dd1e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x17)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000dd1fc0 sp=0xc000dd1f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000dd1fc8 sp=0xc000dd1fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 544416 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc00090cd30, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000a9b760 sp=0xc000a9b740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00002ca00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000a9b7d8 sp=0xc000a9b760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000a9b7e0 sp=0xc000a9b7d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545236 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc0261a1912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000613ce8 sp=0xc000613cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000613d50 sp=0xc000613ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000613d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000613d80 sp=0xc000613d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000613dc8 sp=0xc000613d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000613e50 sp=0xc000613dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0xc, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000613e90 sp=0xc000613e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0xc, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000613f38 sp=0xc000613e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0xc)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000613fc0 sp=0xc000613f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000613fc8 sp=0xc000613fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 545224 [semacquire]:
runtime.gopark(0x9058a8, 0xc7d5c0, 0xc000f51912, 0x5)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000614ce8 sp=0xc000614cc8 pc=0x4336e0
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:310
runtime.semacquire1(0xc0000da6bc, 0xc0000da600, 0x3, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000614d50 sp=0xc000614ce8 pc=0x443b40
sync.runtime_SemacquireMutex(0xc0000da6bc, 0xc000614d00, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/sema.go:71 +0x47 fp=0xc000614d80 sp=0xc000614d50 pc=0x4438a7
sync.(*Mutex).lockSlow(0xc0000da6b8)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:138 +0xfc fp=0xc000614dc8 sp=0xc000614d80 pc=0x46d11c
sync.(*Mutex).Lock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/sync/mutex.go:81
github.com/couchbase/plasma.(*LSSCtx).newSCtx(0xc0000da640, 0x1, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:636 +0x1d7 fp=0xc000614e50 sp=0xc000614dc8 pc=0x736707
github.com/couchbase/plasma.(*LSSCtx).getSCtx(0xc0000da640, 0x0, 0x1, 0x1000000009a4b20)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:652 +0xb3 fp=0xc000614e90 sp=0xc000614e50 pc=0x7367d3
github.com/couchbase/plasma.(*Plasma).getWCtx(0xc000224b00, 0x0, 0x1, 0x0, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx.go:471 +0x3b0 fp=0xc000614f38 sp=0xc000614e90 pc=0x735790
github.com/couchbase/plasma.testWCtxWriter.func2(0xc0004943c0, 0xc000224b00, 0xc00afb2000, 0x0)
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:244 +0xc8 fp=0xc000614fc0 sp=0xc000614f38 pc=0x7daec8
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000614fc8 sp=0xc000614fc0 pc=0x460951
created by github.com/couchbase/plasma.testWCtxWriter
	/opt/build/goproj/src/github.com/couchbase/plasma/wctx_test.go:240 +0x2d0

goroutine 544919 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc00027cfe0, 0xc00a421418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc000b16760 sp=0xc000b16740 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00003cf00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc000b167d8 sp=0xc000b16760 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000b167e0 sp=0xc000b167d8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77

goroutine 545142 [GC worker (idle)]:
runtime.gopark(0x905740, 0xc000370030, 0x1418, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/proc.go:304 +0xe0 fp=0xc0007bff60 sp=0xc0007bff40 pc=0x4336e0
runtime.gcBgMarkWorker(0xc00004af00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1846 +0xff fp=0xc0007bffd8 sp=0xc0007bff60 pc=0x41e3cf
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0007bffe0 sp=0xc0007bffd8 pc=0x460951
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/runtime/mgc.go:1794 +0x77
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7fc4ebd0385b m=9 sigcode=18446744073709551610

goroutine 0 [idle]:
runtime: unknown pc 0x7fc4ebd0385b
stack: frame={sp:0x7fc4e1ffa840, fp:0x0} stack=[0x7fc4e17fb288,0x7fc4e1ffae88)
00007fc4e1ffa740:  00007fc4e1ffa770  00007fc4e1ffa780 
00007fc4e1ffa750:  00007fc4ebee54f0  0000000000000000 
00007fc4e1ffa760:  0000000000000001  0000000000b50098 
00007fc4e1ffa770:  0000000000679694   00007fc4e1ffa870 
00007fc4e1ffa780:  00007fc4ebcd3260  00007fc4ebeae500 
00007fc4e1ffa790:  00007fc4ebeaea68  0000000000000004 
00007fc4e1ffa7a0:  0000000000000001  0000000000000000 
00007fc4e1ffa7b0:  0000000000000005  000000008ff9ac08 
00007fc4e1ffa7c0:  00007fc4ebee54f0  0000000000e5f078 
00007fc4e1ffa7d0:  0000000000b30d7a  00007fc4c0000da0 
00007fc4e1ffa7e0:  0000000000000000  0000000000b16e30 
00007fc4e1ffa7f0:  0000000000000000  00007fc4ebecab33 
00007fc4e1ffa800:  0000000000000005  0000000000000000 
00007fc4e1ffa810:  0000000000000001  00007fc4ebcd3260 
00007fc4e1ffa820:  00007fc4e1ffaa70  00007fc4ebed13e3 
00007fc4e1ffa830:  000000000000000a  00007fc4ebdb68a7 
00007fc4e1ffa840: <0000000000000000  00007fc4ebe88703 
00007fc4e1ffa850:  0000000000000000  0000000000000000 
00007fc4e1ffa860:  00007fc4ebe898b0  0000000000e5f0a8 
00007fc4e1ffa870:  000000000000037f  0000000000000000 
00007fc4e1ffa880:  0000000000000000  0000ffff00001fa0 
00007fc4e1ffa890:  0000000000000000  0000000000000000 
00007fc4e1ffa8a0:  0000000000000000  0000000000000000 
00007fc4e1ffa8b0:  0000000000000000  0000000000000000 
00007fc4e1ffa8c0:  fffffffe7fffffff  ffffffffffffffff 
00007fc4e1ffa8d0:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa8e0:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa8f0:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa900:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa910:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa920:  ffffffffffffffff  ffffffffffffffff 
00007fc4e1ffa930:  ffffffffffffffff  ffffffffffffffff 

goroutine 1 [semacquire, 46 minutes]:
runtime.gopark(0xa846c0, 0xeb7a20, 0xc0003b1912, 0x4)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0006037e0 sp=0xc0006037c0 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.semacquire1(0xc00028c274, 0x438d00, 0x1, 0x0)
	/usr/local/go/src/runtime/sema.go:144 +0x1c0 fp=0xc000603848 sp=0xc0006037e0 pc=0x441200
sync.runtime_Semacquire(0xc00028c274)
	/usr/local/go/src/runtime/sema.go:56 +0x42 fp=0xc000603878 sp=0xc000603848 pc=0x440e52
sync.(*WaitGroup).Wait(0xc00028c274)
	/usr/local/go/src/sync/waitgroup.go:130 +0x64 fp=0xc0006038a0 sp=0xc000603878 pc=0x47a154
cmd/go/internal/work.(*Builder).Do(0xc00050f220, 0xc0004152c0)
	/usr/local/go/src/cmd/go/internal/work/exec.go:186 +0x3c5 fp=0xc000603998 sp=0xc0006038a0 pc=0x83f8c5
cmd/go/internal/test.runTest(0xea3ec0, 0xc0000d8020, 0x3, 0x3)
	/usr/local/go/src/cmd/go/internal/test/test.go:770 +0xecc fp=0xc000603da0 sp=0xc000603998 pc=0x90998c
main.main()
	/usr/local/go/src/cmd/go/main.go:189 +0x57f fp=0xc000603f60 sp=0xc000603da0 pc=0x920e9f
runtime.main()
	/usr/local/go/src/runtime/proc.go:203 +0x21e fp=0xc000603fe0 sp=0xc000603f60 pc=0x43123e
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000603fe8 sp=0xc000603fe0 pc=0x45c711

goroutine 2 [force gc (idle), 2 minutes]:
runtime.gopark(0xa846c0, 0xead940, 0x1411, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000064fb0 sp=0xc000064f90 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.forcegchelper()
	/usr/local/go/src/runtime/proc.go:253 +0xb7 fp=0xc000064fe0 sp=0xc000064fb0 pc=0x4314c7
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000064fe8 sp=0xc000064fe0 pc=0x45c711
created by runtime.init.5
	/usr/local/go/src/runtime/proc.go:242 +0x35

goroutine 3 [GC sweep wait, 2 minutes]:
runtime.gopark(0xa846c0, 0xeadb60, 0x140c, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0000657a8 sp=0xc000065788 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.bgsweep(0xc00008c000)
	/usr/local/go/src/runtime/mgcsweep.go:89 +0x131 fp=0xc0000657d8 sp=0xc0000657a8 pc=0x424031
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000657e0 sp=0xc0000657d8 pc=0x45c711
created by runtime.gcenable
	/usr/local/go/src/runtime/mgc.go:210 +0x5c

goroutine 4 [GC scavenge wait, 2 minutes]:
runtime.gopark(0xa846c0, 0xeae300, 0x140d, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000065f40 sp=0xc000065f20 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.bgscavenge(0xc00008c000)
	/usr/local/go/src/runtime/mgcscavenge.go:374 +0x3b3 fp=0xc000065fd8 sp=0xc000065f40 pc=0x4238f3
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000065fe0 sp=0xc000065fd8 pc=0x45c711
created by runtime.gcenable
	/usr/local/go/src/runtime/mgc.go:211 +0x7e

goroutine 18 [finalizer wait, 44 minutes]:
runtime.gopark(0xa846c0, 0xecb880, 0xc0000f1410, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000064758 sp=0xc000064738 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.runfinq()
	/usr/local/go/src/runtime/mfinal.go:175 +0xa3 fp=0xc0000647e0 sp=0xc000064758 pc=0x419e93
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000647e8 sp=0xc0000647e0 pc=0x45c711
created by runtime.createfing
	/usr/local/go/src/runtime/mfinal.go:156 +0x61

goroutine 19 [syscall, 46 minutes]:
runtime.notetsleepg(0xecbd60, 0xffffffffffffffff, 0x0)
	/usr/local/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000060798 sp=0xc000060768 pc=0x40c674
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:147 +0x9c fp=0xc0000607c0 sp=0xc000060798 pc=0x44510c
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:23 +0x22 fp=0xc0000607e0 sp=0xc0000607c0 pc=0x59fe42
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x45c711
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 150 [syscall, 42 minutes]:
runtime.notetsleepg(0xeb30c0, 0xceec6f395d2, 0x0)
	/usr/local/go/src/runtime/lock_futex.go:227 +0x34 fp=0xc000239760 sp=0xc000239730 pc=0x40c674
runtime.timerproc(0xeb30a0)
	/usr/local/go/src/runtime/time.go:311 +0x2f1 fp=0xc0002397d8 sp=0xc000239760 pc=0x44d471
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0002397e0 sp=0xc0002397d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 15 [timer goroutine (idle), 42 minutes]:
runtime.gopark(0xa846c0, 0xeb3020, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000238760 sp=0xc000238740 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb3020)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0002387d8 sp=0xc000238760 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0002387e0 sp=0xc0002387d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 144 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb3220, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000235760 sp=0xc000235740 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb3220)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0002357d8 sp=0xc000235760 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0002357e0 sp=0xc0002357d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 57 [GC worker (idle)]:
runtime.gopark(0xa84560, 0xc00025dd80, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e8760 sp=0xc0004e8740 pc=0x431610
runtime.gcBgMarkWorker(0xc00003cf00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004e87d8 sp=0xc0004e8760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e87e0 sp=0xc0004e87d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 496 [timer goroutine (idle), 42 minutes]:
runtime.gopark(0xa846c0, 0xeb3120, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0005d7f60 sp=0xc0005d7f40 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb3120)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0005d7fd8 sp=0xc0005d7f60 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0005d7fe0 sp=0xc0005d7fd8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 55 [GC worker (idle)]:
runtime.systemstack_switch()
	/usr/local/go/src/runtime/asm_amd64.s:330 fp=0xc000061708 sp=0xc000061700 pc=0x45a640
runtime.gcMarkDone()
	/usr/local/go/src/runtime/mgc.go:1422 +0xbb fp=0xc000061760 sp=0xc000061708 pc=0x41c97b
runtime.gcBgMarkWorker(0xc000036000)
	/usr/local/go/src/runtime/mgc.go:1973 +0x29d fp=0xc0000617d8 sp=0xc000061760 pc=0x41da3d
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000617e0 sp=0xc0000617d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 82 [GC worker (idle)]:
runtime.gopark(0xa84560, 0xc00028d7e0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000237f60 sp=0xc000237f40 pc=0x431610
runtime.gcBgMarkWorker(0xc000038500)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc000237fd8 sp=0xc000237f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000237fe0 sp=0xc000237fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 98 [GC worker (idle), 2 minutes]:
runtime.gopark(0xa84560, 0xc0004f2000, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e4760 sp=0xc0004e4740 pc=0x431610
runtime.gcBgMarkWorker(0xc00003aa00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004e47d8 sp=0xc0004e4760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e47e0 sp=0xc0004e47d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 74 [GC worker (idle)]:
runtime.gopark(0xa84560, 0xc0004f2010, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000061f60 sp=0xc000061f40 pc=0x431610
runtime.gcBgMarkWorker(0xc00003f400)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc000061fd8 sp=0xc000061f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000061fe0 sp=0xc000061fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 114 [GC worker (idle), 2 minutes]:
runtime.gopark(0xa84560, 0xc00028d8c0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004fc760 sp=0xc0004fc740 pc=0x431610
runtime.gcBgMarkWorker(0xc000041900)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004fc7d8 sp=0xc0004fc760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004fc7e0 sp=0xc0004fc7d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 58 [GC worker (idle)]:
runtime.gopark(0xa84560, 0xc00025dd90, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e8f60 sp=0xc0004e8f40 pc=0x431610
runtime.gcBgMarkWorker(0xc000044000)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004e8fd8 sp=0xc0004e8f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e8fe0 sp=0xc0004e8fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 75 [GC worker (idle), 2 minutes]:
runtime.gopark(0xa84560, 0xc0004f2020, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000060f60 sp=0xc000060f40 pc=0x431610
runtime.gcBgMarkWorker(0xc000046500)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc000060fd8 sp=0xc000060f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000060fe0 sp=0xc000060fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 115 [runnable]:
runtime.gopark(0xa84560, 0xc00028d8d0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004fcf60 sp=0xc0004fcf40 pc=0x431610
runtime.gcBgMarkWorker(0xc000048a00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004fcfd8 sp=0xc0004fcf60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004fcfe0 sp=0xc0004fcfd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 59 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc00025dda0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e9760 sp=0xc0004e9740 pc=0x431610
runtime.gcBgMarkWorker(0xc00004af00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004e97d8 sp=0xc0004e9760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e97e0 sp=0xc0004e97d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 76 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc0004f2030, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004f8760 sp=0xc0004f8740 pc=0x431610
runtime.gcBgMarkWorker(0xc00004d400)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004f87d8 sp=0xc0004f8760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004f87e0 sp=0xc0004f87d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 116 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc00028d8e0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004fd760 sp=0xc0004fd740 pc=0x431610
runtime.gcBgMarkWorker(0xc00004f900)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004fd7d8 sp=0xc0004fd760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004fd7e0 sp=0xc0004fd7d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 60 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc00025ddb0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e9f60 sp=0xc0004e9f40 pc=0x431610
runtime.gcBgMarkWorker(0xc000052000)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004e9fd8 sp=0xc0004e9f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e9fe0 sp=0xc0004e9fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 77 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc0004f2040, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004f8f60 sp=0xc0004f8f40 pc=0x431610
runtime.gcBgMarkWorker(0xc000054500)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004f8fd8 sp=0xc0004f8f60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004f8fe0 sp=0xc0004f8fd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 117 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc00028d8f0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004fdf60 sp=0xc0004fdf40 pc=0x431610
runtime.gcBgMarkWorker(0xc000056a00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004fdfd8 sp=0xc0004fdf60 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004fdfe0 sp=0xc0004fdfd8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 61 [GC worker (idle), 46 minutes]:
runtime.gopark(0xa84560, 0xc00025ddc0, 0x1418, 0x0)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004ea760 sp=0xc0004ea740 pc=0x431610
runtime.gcBgMarkWorker(0xc000058f00)
	/usr/local/go/src/runtime/mgc.go:1846 +0xff fp=0xc0004ea7d8 sp=0xc0004ea760 pc=0x41d89f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004ea7e0 sp=0xc0004ea7d8 pc=0x45c711
created by runtime.gcBgMarkStartWorkers
	/usr/local/go/src/runtime/mgc.go:1794 +0x77

goroutine 247 [timer goroutine (idle), 44 minutes]:
runtime.gopark(0xa846c0, 0xeb31a0, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004eaf60 sp=0xc0004eaf40 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb31a0)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0004eafd8 sp=0xc0004eaf60 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004eafe0 sp=0xc0004eafd8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 119 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb32a0, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000067760 sp=0xc000067740 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb32a0)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0000677d8 sp=0xc000067760 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0000677e0 sp=0xc0000677d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 162 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb34a0, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0005d9f60 sp=0xc0005d9f40 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb34a0)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0005d9fd8 sp=0xc0005d9f60 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0005d9fe0 sp=0xc0005d9fd8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 367 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb33a0, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000063f60 sp=0xc000063f40 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb33a0)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc000063fd8 sp=0xc000063f60 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000063fe0 sp=0xc000063fd8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 88 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb3320, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004e5760 sp=0xc0004e5740 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb3320)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc0004e57d8 sp=0xc0004e5760 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004e57e0 sp=0xc0004e57d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 707 [timer goroutine (idle), 46 minutes]:
runtime.gopark(0xa846c0, 0xeb3520, 0x1415, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc00067c760 sp=0xc00067c740 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.timerproc(0xeb3520)
	/usr/local/go/src/runtime/time.go:303 +0x27b fp=0xc00067c7d8 sp=0xc00067c760 pc=0x44d3fb
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00067c7e0 sp=0xc00067c7d8 pc=0x45c711
created by runtime.(*timersBucket).addtimerLocked
	/usr/local/go/src/runtime/time.go:169 +0x10e

goroutine 512 [select, 46 minutes]:
runtime.gopark(0xa84700, 0x0, 0x1809, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000885dc8 sp=0xc000885da8 pc=0x431610
runtime.selectgo(0xc000885f68, 0xc000885f18, 0x2, 0xc000415180, 0xc000680f01)
	/usr/local/go/src/runtime/select.go:313 +0xc9b fp=0xc000885ef0 sp=0xc000885dc8 pc=0x44069b
cmd/go/internal/work.(*Builder).Do.func3(0xc00028c274, 0xc00050f220, 0xc00042d700)
	/usr/local/go/src/cmd/go/internal/work/exec.go:167 +0xf6 fp=0xc000885fc8 sp=0xc000885ef0 pc=0x8767f6
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000885fd0 sp=0xc000885fc8 pc=0x45c711
created by cmd/go/internal/work.(*Builder).Do
	/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 511 [select, 46 minutes]:
runtime.gopark(0xa84700, 0x0, 0x1809, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000887dc8 sp=0xc000887da8 pc=0x431610
runtime.selectgo(0xc000887f68, 0xc000887f18, 0x2, 0xc00046a000, 0xc000680701)
	/usr/local/go/src/runtime/select.go:313 +0xc9b fp=0xc000887ef0 sp=0xc000887dc8 pc=0x44069b
cmd/go/internal/work.(*Builder).Do.func3(0xc00028c274, 0xc00050f220, 0xc00042d700)
	/usr/local/go/src/cmd/go/internal/work/exec.go:167 +0xf6 fp=0xc000887fc8 sp=0xc000887ef0 pc=0x8767f6
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000887fd0 sp=0xc000887fc8 pc=0x45c711
created by cmd/go/internal/work.(*Builder).Do
	/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 509 [select, 46 minutes]:
runtime.gopark(0xa84700, 0x0, 0x1809, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000072dc8 sp=0xc000072da8 pc=0x431610
runtime.selectgo(0xc000072f68, 0xc000072f18, 0x2, 0xc00034d040, 0xc000062f01)
	/usr/local/go/src/runtime/select.go:313 +0xc9b fp=0xc000072ef0 sp=0xc000072dc8 pc=0x44069b
cmd/go/internal/work.(*Builder).Do.func3(0xc00028c274, 0xc00050f220, 0xc00042d700)
	/usr/local/go/src/cmd/go/internal/work/exec.go:167 +0xf6 fp=0xc000072fc8 sp=0xc000072ef0 pc=0x8767f6
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000072fd0 sp=0xc000072fc8 pc=0x45c711
created by cmd/go/internal/work.(*Builder).Do
	/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 903 [syscall, 46 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x6f0e, 0xc00067fe08, 0x1000004, 0x0, 0x0, 0x200, 0x0, 0x0)
	/usr/local/go/src/syscall/asm_linux_amd64.s:44 +0x5 fp=0xc00067fdb8 sp=0xc00067fdb0 pc=0x4aeda5
os.(*Process).blockUntilWaitable(0xc0003587b0, 0xc000374ea0, 0xb27de0, 0xc0002520f0)
	/usr/local/go/src/os/wait_waitid.go:31 +0x98 fp=0xc00067fea8 sp=0xc00067fdb8 pc=0x4d5028
os.(*Process).wait(0xc0003587b0, 0xb27280, 0xc000374ea0, 0xb27de0)
	/usr/local/go/src/os/exec_unix.go:22 +0x39 fp=0xc00067ff20 sp=0xc00067fea8 pc=0x4cccd9
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:125
os/exec.(*Cmd).Wait(0xc0004ae420, 0xa845b8, 0xc00067ffc8)
	/usr/local/go/src/os/exec/exec.go:501 +0x60 fp=0xc00067ff98 sp=0xc00067ff20 pc=0x50be30
cmd/go/internal/test.(*runCache).builderRunTest.func1(0xc0003c8de0, 0xc0004ae420)
	/usr/local/go/src/cmd/go/internal/test/test.go:1179 +0x2b fp=0xc00067ffd0 sp=0xc00067ff98 pc=0x915f5b
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc00067ffd8 sp=0xc00067ffd0 pc=0x45c711
created by cmd/go/internal/test.(*runCache).builderRunTest
	/usr/local/go/src/cmd/go/internal/test/test.go:1178 +0xd6d

goroutine 893 [select, 46 minutes, locked to thread]:
runtime.gopark(0xa84700, 0x0, 0x1809, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc0004fade0 sp=0xc0004fadc0 pc=0x431610
runtime.selectgo(0xc0004faf80, 0xc0004faf40, 0x2, 0x8, 0xc0006e1901)
	/usr/local/go/src/runtime/select.go:313 +0xc9b fp=0xc0004faf08 sp=0xc0004fade0 pc=0x44069b
runtime.ensureSigM.func1()
	/usr/local/go/src/runtime/signal_unix.go:549 +0x1e8 fp=0xc0004fafe0 sp=0xc0004faf08 pc=0x459d98
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc0004fafe8 sp=0xc0004fafe0 pc=0x45c711
created by runtime.ensureSigM
	/usr/local/go/src/runtime/signal_unix.go:532 +0xd5

goroutine 510 [select, 46 minutes]:
runtime.gopark(0xa84700, 0x0, 0x1809, 0x1)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000078930 sp=0xc000078910 pc=0x431610
runtime.selectgo(0xc000078d98, 0xc000078b78, 0x2, 0xc0004ae420, 0x27)
	/usr/local/go/src/runtime/select.go:313 +0xc9b fp=0xc000078a58 sp=0xc000078930 pc=0x44069b
cmd/go/internal/test.(*runCache).builderRunTest(0xc00021be50, 0xc00050f220, 0xc00034d400, 0x0, 0x0)
	/usr/local/go/src/cmd/go/internal/test/test.go:1182 +0xe11 fp=0xc000078df8 sp=0xc000078a58 pc=0x90e271
cmd/go/internal/test.(*runCache).builderRunTest-fm(0xc00050f220, 0xc00034d400, 0x0, 0x0)
	/usr/local/go/src/cmd/go/internal/test/test.go:1052 +0x3e fp=0xc000078e30 sp=0xc000078df8 pc=0x9162fe
cmd/go/internal/work.(*Builder).Do.func2(0xc00034d400)
	/usr/local/go/src/cmd/go/internal/work/exec.go:117 +0x36d fp=0xc000078ef0 sp=0xc000078e30 pc=0x87664d
cmd/go/internal/work.(*Builder).Do.func3(0xc00028c274, 0xc00050f220, 0xc00042d700)
	/usr/local/go/src/cmd/go/internal/work/exec.go:177 +0x79 fp=0xc000078fc8 sp=0xc000078ef0 pc=0x876779
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000078fd0 sp=0xc000078fc8 pc=0x45c711
created by cmd/go/internal/work.(*Builder).Do
	/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 902 [chan receive, 46 minutes]:
runtime.gopark(0xa846c0, 0xc00063b7f8, 0xc00048170e, 0x3)
	/usr/local/go/src/runtime/proc.go:304 +0xe0 fp=0xc000681ef8 sp=0xc000681ed8 pc=0x431610
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:310
runtime.chanrecv(0xc00063b7a0, 0x0, 0x50e301, 0x1)
	/usr/local/go/src/runtime/chan.go:524 +0x2e8 fp=0xc000681f88 sp=0xc000681ef8 pc=0x4075c8
runtime.chanrecv1(0xc00063b7a0, 0x0)
	/usr/local/go/src/runtime/chan.go:406 +0x2b fp=0xc000681fb8 sp=0xc000681f88 pc=0x40728b
cmd/go/internal/base.processSignals.func1(0xc00063b7a0)
	/usr/local/go/src/cmd/go/internal/base/signal.go:21 +0x34 fp=0xc000681fd8 sp=0xc000681fb8 pc=0x5a26b4
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000681fe0 sp=0xc000681fd8 pc=0x45c711
created by cmd/go/internal/base.processSignals
	/usr/local/go/src/cmd/go/internal/base/signal.go:20 +0x93

rax    0x0
rbx    0x6
rcx    0x7fc4ebd0385b
rdx    0x0
rdi    0x2
rsi    0x7fc4e1ffa840
rbp    0xb30d7a
rsp    0x7fc4e1ffa840
r8     0x0
r9     0x7fc4e1ffa840
r10    0x8
r11    0x246
r12    0x7fc4c0000da0
r13    0x0
r14    0xb16e30
r15    0x0
rip    0x7fc4ebd0385b
rflags 0x246
cs     0x33
fs     0x0
gs     0x0

-----

=== RUN   TestInteger
--- PASS: TestInteger (0.00s)
=== RUN   TestSmallDecimal
--- PASS: TestSmallDecimal (0.00s)
=== RUN   TestLargeDecimal
--- PASS: TestLargeDecimal (0.00s)
=== RUN   TestFloat
--- PASS: TestFloat (0.00s)
=== RUN   TestSuffixCoding
--- PASS: TestSuffixCoding (0.00s)
=== RUN   TestCodecLength
--- PASS: TestCodecLength (0.00s)
=== RUN   TestSpecialString
--- PASS: TestSpecialString (0.00s)
=== RUN   TestCodecNoLength
--- PASS: TestCodecNoLength (0.00s)
=== RUN   TestCodecJSON
--- PASS: TestCodecJSON (0.00s)
=== RUN   TestReference
--- PASS: TestReference (0.00s)
=== RUN   TestN1QLEncode
--- PASS: TestN1QLEncode (0.00s)
=== RUN   TestArrayExplodeJoin
--- PASS: TestArrayExplodeJoin (0.00s)
=== RUN   TestN1QLDecode
--- PASS: TestN1QLDecode (0.00s)
=== RUN   TestN1QLDecode2
--- PASS: TestN1QLDecode2 (0.00s)
=== RUN   TestArrayExplodeJoin2
--- PASS: TestArrayExplodeJoin2 (0.00s)
=== RUN   TestMB28956
--- PASS: TestMB28956 (0.00s)
=== RUN   TestFixEncodedInt
--- PASS: TestFixEncodedInt (0.00s)
=== RUN   TestN1QLDecodeLargeInt64
--- PASS: TestN1QLDecodeLargeInt64 (0.00s)
=== RUN   TestMixedModeFixEncodedInt
TESTING [4111686018427387900, -8223372036854775808, 822337203685477618] 
PASS 
TESTING [0] 
PASS 
TESTING [0.0] 
PASS 
TESTING [0.0000] 
PASS 
TESTING [0.0000000] 
PASS 
TESTING [-0] 
PASS 
TESTING [-0.0] 
PASS 
TESTING [-0.0000] 
PASS 
TESTING [-0.0000000] 
PASS 
TESTING [1] 
PASS 
TESTING [20] 
PASS 
TESTING [3456] 
PASS 
TESTING [7645000] 
PASS 
TESTING [9223372036854775807] 
PASS 
TESTING [9223372036854775806] 
PASS 
TESTING [9223372036854775808] 
PASS 
TESTING [92233720368547758071234000] 
PASS 
TESTING [92233720368547758071234987437653] 
PASS 
TESTING [12300000000000000000000000000000056] 
PASS 
TESTING [12300000000000000000000000000000000] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [12300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [210690] 
PASS 
TESTING [90000] 
PASS 
TESTING [123000000] 
PASS 
TESTING [3.60e2] 
PASS 
TESTING [36e2] 
PASS 
TESTING [1.9999999999e10] 
PASS 
TESTING [1.99999e10] 
PASS 
TESTING [1.99999e5] 
PASS 
TESTING [0.00000000000012e15] 
PASS 
TESTING [7.64507352e8] 
PASS 
TESTING [9.2233720368547758071234987437653e31] 
PASS 
TESTING [2650e-1] 
PASS 
TESTING [26500e-1] 
PASS 
TESTING [-1] 
PASS 
TESTING [-20] 
PASS 
TESTING [-3456] 
PASS 
TESTING [-7645000] 
PASS 
TESTING [-9223372036854775808] 
PASS 
TESTING [-9223372036854775807] 
PASS 
TESTING [-9223372036854775806] 
PASS 
TESTING [-9223372036854775809] 
PASS 
TESTING [-92233720368547758071234000] 
PASS 
TESTING [-92233720368547758071234987437653] 
PASS 
TESTING [-12300000000000000000000000000000056] 
PASS 
TESTING [-12300000000000000000000000000000000] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [-210690] 
PASS 
TESTING [-90000] 
PASS 
TESTING [-123000000] 
PASS 
TESTING [-3.60e2] 
PASS 
TESTING [-36e2] 
PASS 
TESTING [-1.9999999999e10] 
PASS 
TESTING [-1.99999e10] 
PASS 
TESTING [-1.99999e5] 
PASS 
TESTING [-0.00000000000012e15] 
PASS 
TESTING [-2650e-1] 
PASS 
TESTING [-26500e-1] 
PASS 
TESTING [0.03] 
PASS 
TESTING [198.60] 
PASS 
TESTING [2000045.178] 
PASS 
TESTING [1.7976931348623157e+308] 
PASS 
TESTING [0.000000000000000000890] 
PASS 
TESTING [257953786.9864236576] 
PASS 
TESTING [257953786.9864236576e8] 
PASS 
TESTING [36.912e3] 
PASS 
TESTING [2761.67e0] 
PASS 
TESTING [2761.67e00] 
PASS 
TESTING [2761.67e000] 
PASS 
TESTING [7676546.67e-3] 
PASS 
TESTING [-0.03] 
PASS 
TESTING [-198.60] 
PASS 
TESTING [-2000045.178] 
PASS 
TESTING [-1.7976931348623157e+308] 
PASS 
TESTING [-0.000000000000000000890] 
PASS 
TESTING [-257953786.9864236576] 
PASS 
TESTING [-257953786.9864236576e8] 
PASS 
TESTING [-36.912e3] 
PASS 
TESTING [-2761.67e0] 
PASS 
TESTING [-2761.67e00] 
PASS 
TESTING [-2761.67e000] 
PASS 
TESTING [-7676546.67e-3] 
PASS 
--- PASS: TestMixedModeFixEncodedInt (0.01s)
=== RUN   TestCodecDesc
--- PASS: TestCodecDesc (0.00s)
=== RUN   TestCodecDescPropLen
--- PASS: TestCodecDescPropLen (0.00s)
=== RUN   TestCodecDescSplChar
--- PASS: TestCodecDescSplChar (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/collatejson	0.034s
Initializing write barrier = 8000
=== RUN   TestForestDBIterator
2021-03-11T07:07:51.992+05:30 [INFO][FDB] Forestdb blockcache size 134217728 initialized in 5048 us

2021-03-11T07:07:51.994+05:30 [INFO][FDB] Forestdb opened database file test
2021-03-11T07:07:51.998+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIterator (0.02s)
=== RUN   TestForestDBIteratorSeek
2021-03-11T07:07:51.999+05:30 [INFO][FDB] Forestdb opened database file test
2021-03-11T07:07:52.002+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIteratorSeek (0.00s)
=== RUN   TestPrimaryIndexEntry
--- PASS: TestPrimaryIndexEntry (0.00s)
=== RUN   TestSecondaryIndexEntry
--- PASS: TestSecondaryIndexEntry (0.00s)
=== RUN   TestPrimaryIndexEntryMatch
--- PASS: TestPrimaryIndexEntryMatch (0.00s)
=== RUN   TestSecondaryIndexEntryMatch
--- PASS: TestSecondaryIndexEntryMatch (0.00s)
=== RUN   TestLongDocIdEntry
--- PASS: TestLongDocIdEntry (0.00s)
=== RUN   TestMemDBInsertionPerf
Maximum number of file descriptors = 200000
Set IO Concurrency: 7200
Initial build: 10000000 items took 1m48.848706046s -> 91870.63735763612 items/s
Incr build: 10000000 items took 1m54.280103897s -> 87504.29566473742 items/s
Main Index: {
"node_count":             18000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       5,
"next_pointers_per_node": 1.3333,
"memory_used":            1695887692,
"node_allocs":            18000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 13500190,
"level1": 3375550,
"level2": 843073,
"level3": 210794,
"level4": 52735,
"level5": 13348,
"level6": 3244,
"level7": 782,
"level8": 221,
"level9": 42,
"level10": 17,
"level11": 3,
"level12": 1,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Back Index 0 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 1 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 2 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 3 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 4 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 5 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 6 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 7 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 8 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 9 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 10 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 11 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 12 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 13 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 14 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 15 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
--- PASS: TestMemDBInsertionPerf (223.13s)
=== RUN   TestBasicsA
--- PASS: TestBasicsA (0.00s)
=== RUN   TestSizeA
--- PASS: TestSizeA (0.00s)
=== RUN   TestSizeWithFreelistA
--- PASS: TestSizeWithFreelistA (0.00s)
=== RUN   TestDequeueUptoSeqnoA
--- PASS: TestDequeueUptoSeqnoA (0.10s)
=== RUN   TestDequeueA
--- PASS: TestDequeueA (1.20s)
=== RUN   TestMultipleVbucketsA
--- PASS: TestMultipleVbucketsA (0.00s)
=== RUN   TestDequeueUptoFreelistA
--- PASS: TestDequeueUptoFreelistA (0.00s)
=== RUN   TestDequeueUptoFreelistMultVbA
--- PASS: TestDequeueUptoFreelistMultVbA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA
--- PASS: TestConcurrentEnqueueDequeueA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA1
--- PASS: TestConcurrentEnqueueDequeueA1 (10.00s)
=== RUN   TestEnqueueAppCh
--- PASS: TestEnqueueAppCh (2.00s)
=== RUN   TestDequeueN
--- PASS: TestDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN
--- PASS: TestConcurrentEnqueueDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN1
--- PASS: TestConcurrentEnqueueDequeueN1 (10.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/indexer	247.067s
=== RUN   TestConnPoolBasicSanity
2021-03-11T07:12:03.922+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 3 overflow 6 low WM 3 relConn batch size 1 ...
2021-03-11T07:12:04.128+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:12:04.923+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnPoolBasicSanity (5.00s)
=== RUN   TestConnRelease
2021-03-11T07:12:08.924+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Waiting for connections to get released
Waiting for more connections to get released
Waiting for further more connections to get released
2021-03-11T07:12:48.658+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:12:48.937+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnRelease (43.73s)
=== RUN   TestLongevity
2021-03-11T07:12:52.659+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Releasing 1 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 4 conns.
Getting 1 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 3 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 2 conns.
Getting 1 conns.
Releasing 0 conns.
Getting 1 conns.
Releasing 3 conns.
Getting 3 conns.
Releasing 2 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
2021-03-11T07:13:31.096+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:13:31.670+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLongevity (42.44s)
=== RUN   TestSustainedHighConns
2021-03-11T07:13:35.097+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Allocating 16 Connections
cp.curActConns = 0
Returning 3 Connections
cp.curActConns = 12
Returning 2 Connections
cp.curActConns = 11
Allocating 6 Connections
Returning 4 Connections
cp.curActConns = 13
Returning 1 Connections
Allocating 12 Connections
cp.curActConns = 24
Returning 1 Connections
Allocating 10 Connections
cp.curActConns = 24
Returning 1 Connections
cp.curActConns = 32
Returning 3 Connections
Allocating 15 Connections
cp.curActConns = 36
Returning 4 Connections
cp.curActConns = 40
Returning 3 Connections
Allocating 8 Connections
cp.curActConns = 45
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 46
Returning 4 Connections
Allocating 9 Connections
cp.curActConns = 48
Returning 3 Connections
cp.curActConns = 48
Allocating 7 Connections
Returning 1 Connections
cp.curActConns = 54
Returning 4 Connections
Allocating 24 Connections
cp.curActConns = 60
Returning 0 Connections
cp.curActConns = 72
Returning 0 Connections
cp.curActConns = 74
Returning 3 Connections
Allocating 13 Connections
cp.curActConns = 84
Returning 3 Connections
Allocating 5 Connections
cp.curActConns = 84
Returning 1 Connections
cp.curActConns = 85
Returning 0 Connections
Allocating 5 Connections
cp.curActConns = 90
Returning 1 Connections
Allocating 16 Connections
cp.curActConns = 96
Returning 3 Connections
cp.curActConns = 102
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 103
Returning 3 Connections
Allocating 21 Connections
cp.curActConns = 108
Returning 3 Connections
cp.curActConns = 118
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 119
Returning 3 Connections
Allocating 22 Connections
cp.curActConns = 119
Returning 4 Connections
cp.curActConns = 131
Returning 2 Connections
cp.curActConns = 132
Returning 3 Connections
Allocating 21 Connections
cp.curActConns = 143
Returning 0 Connections
cp.curActConns = 150
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 150
Returning 2 Connections
Allocating 8 Connections
cp.curActConns = 155
Returning 1 Connections
cp.curActConns = 155
Allocating 9 Connections
Returning 3 Connections
cp.curActConns = 161
Returning 3 Connections
Allocating 16 Connections
cp.curActConns = 166
Returning 2 Connections
cp.curActConns = 172
Returning 3 Connections
Allocating 11 Connections
cp.curActConns = 178
Returning 1 Connections
cp.curActConns = 179
Returning 2 Connections
Allocating 15 Connections
cp.curActConns = 190
Returning 3 Connections
cp.curActConns = 189
Returning 2 Connections
Allocating 18 Connections
cp.curActConns = 202
Returning 0 Connections
cp.curActConns = 205
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 202
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 201
Returning 2 Connections
Allocating 12 Connections
cp.curActConns = 211
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 211
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 211
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 212
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 210
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 212
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 212
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 211
Returning 4 Connections
Allocating 1 Connections
cp.curActConns = 208
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 207
Returning 4 Connections
Allocating 1 Connections
cp.curActConns = 204
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 207
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 210
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 211
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 211
Returning 0 Connections
Allocating 2 Connections
cp.curActConns = 213
Returning 4 Connections
Allocating 3 Connections
cp.curActConns = 212
Returning 3 Connections
Allocating 4 Connections
cp.curActConns = 213
Returning 3 Connections
Allocating 4 Connections
cp.curActConns = 214
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 214
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 213
Returning 4 Connections
Allocating 0 Connections
cp.curActConns = 209
Returning 1 Connections
Allocating 1 Connections
cp.curActConns = 209
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 208
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 211
Returning 4 Connections
Allocating 0 Connections
cp.curActConns = 207
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 205
Returning 0 Connections
Allocating 2 Connections
cp.curActConns = 207
Returning 3 Connections
Allocating 2 Connections
cp.curActConns = 206
Returning 1 Connections
Allocating 3 Connections
cp.curActConns = 208
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 206
Returning 3 Connections
Allocating 2 Connections
cp.curActConns = 205
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 205
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 203
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 203
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 204
Returning 2 Connections
cp.curActConns = 205
Allocating 2 Connections
Returning 0 Connections
cp.curActConns = 207
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 210
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 213
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 215
Returning 0 Connections
cp.curActConns = 217
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 214
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 212
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 212
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 214
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 217
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 216
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 215
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 218
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 217
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 216
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 220
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 220
Returning from startDeallocatorRoutine
Returning from startAllocatorRoutine
2021-03-11T07:14:30.134+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:14:31.113+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestSustainedHighConns (59.04s)
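The Allocating/Returning trace above is bookkeeping on a single counter: each "Allocating N Connections" line bumps `cp.curActConns` by N and each "Returning N Connections" line drops it by N. A minimal Go sketch of that bookkeeping (the `pool` type here is illustrative only; the real queryport client pool also manages a free list, overflow, and watermarks):

```go
package main

import "fmt"

// pool mirrors only the counter visible in the TestSustainedHighConns trace.
type pool struct {
	curActConns int
}

// alloc models an "Allocating N Connections" log line.
func (p *pool) alloc(n int) { p.curActConns += n }

// ret models a "Returning N Connections" log line.
func (p *pool) ret(n int) { p.curActConns -= n }

func main() {
	// Replay the last two steps of the trace: 220 -> return 2 -> allocate 2.
	p := &pool{curActConns: 220}
	p.ret(2)
	p.alloc(2)
	fmt.Println("cp.curActConns =", p.curActConns) // cp.curActConns = 220
}
```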
=== RUN   TestLowWM
2021-03-11T07:14:34.135+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 20 overflow 5 low WM 10 relConn batch size 2 ...
2021-03-11T07:15:34.150+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2021-03-11T07:16:34.167+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2021-03-11T07:16:39.642+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:16:40.168+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLowWM (129.51s)
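TestLowWM settles at "active conns 0, free conns 10" because the release routine trims surplus free connections toward the low watermark in `relConn batch size` steps. A sketch of that trimming loop, as inferred from the log line "poolsize 20 overflow 5 low WM 10 relConn batch size 2" (assumed behavior for illustration, not the actual releaseConnsRoutine):

```go
package main

import "fmt"

// release trims freeConns down to lowWM, at most batch connections per pass,
// mimicking how the pool's release routine appears to converge in TestLowWM.
func release(freeConns, lowWM, batch int) int {
	for freeConns > lowWM {
		n := batch
		if freeConns-lowWM < batch {
			n = freeConns - lowWM // final partial batch
		}
		freeConns -= n
	}
	return freeConns
}

func main() {
	// 20 idle connections with low WM 10 and batch size 2 end at the WM.
	fmt.Println(release(20, 10, 2)) // 10
}
```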
=== RUN   TestTotalConns
2021-03-11T07:16:43.644+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 120 overflow 5 low WM 10 relConn batch size 10 ...
2021-03-11T07:16:57.706+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:16:58.648+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestTotalConns (18.06s)
=== RUN   TestUpdateTickRate
2021-03-11T07:17:01.707+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 40 overflow 5 low WM 2 relConn batch size 2 ...
2021-03-11T07:17:22.527+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2021-03-11T07:17:22.713+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestUpdateTickRate (24.82s)
PASS
ok  	github.com/couchbase/indexing/secondary/queryport/client	322.634s
Starting server: attempt 1

Functional tests

2021/03/11 07:20:19 In TestMain()
2021/03/11 07:20:19 Changing config key queryport.client.settings.backfillLimit to value 0
2021/03/11 07:20:19 Changing config key indexer.api.enableTestServer to value true
2021/03/11 07:20:19 Changing config key indexer.settings.persisted_snapshot_init_build.moi.interval to value 60000
2021/03/11 07:20:19 Changing config key indexer.settings.persisted_snapshot.moi.interval to value 60000
2021/03/11 07:20:19 Changing config key indexer.settings.log_level to value info
2021/03/11 07:20:19 Changing config key indexer.settings.storage_mode.disable_upgrade to value true
2021/03/11 07:20:19 Using plasma for creating indexes
2021/03/11 07:20:19 Changing config key indexer.settings.storage_mode to value plasma
2021-03-11T07:20:19.777+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T07:20:19.778+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 07:20:24 Data file exists. Skipping download
2021/03/11 07:20:24 Data file exists. Skipping download
2021/03/11 07:20:25 In DropAllSecondaryIndexes()
2021/03/11 07:20:25 Emptying the default bucket
2021/03/11 07:20:29 Flush Enabled on bucket default, responseBody: 
2021/03/11 07:21:07 Flushed the bucket default, Response body: 
2021/03/11 07:21:07 Create Index On the empty default Bucket()
2021/03/11 07:21:10 Created the secondary index index_eyeColor. Waiting for it to become active
2021/03/11 07:21:10 Index is now active
2021/03/11 07:21:10 Populating the default bucket
=== RUN   TestScanAfterBucketPopulate
2021/03/11 07:21:19 In TestScanAfterBucketPopulate()
2021/03/11 07:21:19 Create an index on an empty bucket, populate the bucket, and run a scan on the index
2021/03/11 07:21:19 Using n1ql client
2021-03-11T07:21:19.966+05:30 [Info] creating GsiClient for 127.0.0.1:9000
2021-03-11T07:21:19.973+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":2,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":120000,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
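The "New settings received" blob above is one flat JSON object keyed by dotted setting names. Decoding such a blob maps naturally onto `map[string]interface{}`; the `loadSettings` helper below is hypothetical (shown with two keys excerpted from the blob), not the client's actual config code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// loadSettings decodes a flat dotted-key settings blob like the one the
// GsiClient logs on "New settings received".
func loadSettings(blob string) (map[string]interface{}, error) {
	var s map[string]interface{}
	err := json.Unmarshal([]byte(blob), &s)
	return s, err
}

func main() {
	blob := `{"indexer.settings.storage_mode":"plasma","queryport.client.settings.poolSize":5000}`
	s, err := loadSettings(blob)
	if err != nil {
		panic(err)
	}
	// JSON numbers decode as float64 in an interface{} map.
	fmt.Println(s["indexer.settings.storage_mode"]) // plasma
}
```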
2021-03-11T07:21:20.026+05:30 [Info] MetadataProvider.SetClusterStatus(): healthy nodes 1 failed node 0 unhealthy node 0 add node 0
2021-03-11T07:21:20.026+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:21:20.026+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:21:20.026+05:30 [Info] MetadataProvider.WatchMetadata(): indexer 127.0.0.1:9106
2021-03-11T07:21:20.062+05:30 [Info] WatchMetadata(): successfully reach indexer at 127.0.0.1:9106.
2021-03-11T07:21:20.064+05:30 [Info] MetadataProvider: Updating indexer version to 5
2021-03-11T07:21:20.064+05:30 [Info] initialized currmeta 3 force true 
2021-03-11T07:21:20.064+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] started poolsize 5000 overflow 30 low WM 1000 relConn batch size 100 ...
2021-03-11T07:21:20.064+05:30 [Info] [GsiScanClient:"127.0.0.1:9107"] started ...
2021-03-11T07:21:20.066+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] open new connection ...
2021-03-11T07:21:20.067+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:21:20.067+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:21:20.067+05:30 [Info] GSIC[default/default-_default-_default-1615427479962261874] started ...
2021/03/11 07:21:20 Expected and Actual scan responses are the same
--- PASS: TestScanAfterBucketPopulate (0.15s)
=== RUN   TestRestartNilSnapshot
2021/03/11 07:21:20 In TestRestartNilSnapshot()
2021/03/11 07:21:24 Created the secondary index idx_age. Waiting for it to become active
2021/03/11 07:21:24 Index is now active
2021/03/11 07:21:24 Restarting indexer process ...
2021/03/11 07:21:24 []
2021-03-11T07:21:24.784+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T07:21:24.784+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T07:21:24.785+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T07:21:24.785+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 07:24:44 Using n1ql client
2021-03-11T07:24:44.773+05:30 [Error] transport error between 127.0.0.1:37988->127.0.0.1:9107: write tcp 127.0.0.1:37988->127.0.0.1:9107: write: broken pipe
2021-03-11T07:24:44.773+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -4640144259173878255 request transport failed `write tcp 127.0.0.1:37988->127.0.0.1:9107: write: broken pipe`
2021-03-11T07:24:44.773+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 07:24:44 Expected and Actual scan responses are the same
--- PASS: TestRestartNilSnapshot (204.71s)
=== RUN   TestThreeIndexCreates
2021/03/11 07:24:44 In TestThreeIndexCreates()
2021/03/11 07:24:49 Created the secondary index index_balance. Waiting for it to become active
2021/03/11 07:24:49 Index is now active
2021/03/11 07:24:49 Create docs mutations
2021/03/11 07:24:49 Using n1ql client
2021/03/11 07:24:49 Expected and Actual scan responses are the same
2021/03/11 07:24:53 Created the secondary index index_email. Waiting for it to become active
2021/03/11 07:24:53 Index is now active
2021/03/11 07:24:53 Create docs mutations
2021/03/11 07:24:53 Using n1ql client
2021/03/11 07:24:53 Expected and Actual scan responses are the same
2021/03/11 07:24:57 Created the secondary index index_pin. Waiting for it to become active
2021/03/11 07:24:57 Index is now active
2021/03/11 07:24:57 Delete docs mutations
2021/03/11 07:24:57 Using n1ql client
2021/03/11 07:24:57 Expected and Actual scan responses are the same
--- PASS: TestThreeIndexCreates (13.14s)
=== RUN   TestMultipleIndexCreatesDropsWithMutations
2021/03/11 07:24:57 In TestMultipleIndexCreatesDropsWithMutations()
2021/03/11 07:25:02 Created the secondary index index_state. Waiting for it to become active
2021/03/11 07:25:02 Index is now active
2021/03/11 07:25:02 Create docs mutations
2021/03/11 07:25:02 Using n1ql client
2021/03/11 07:25:02 Expected and Actual scan responses are the same
2021/03/11 07:25:06 Created the secondary index index_registered. Waiting for it to become active
2021/03/11 07:25:06 Index is now active
2021/03/11 07:25:06 Create docs mutations
2021/03/11 07:25:06 Using n1ql client
2021/03/11 07:25:07 Expected and Actual scan responses are the same
2021/03/11 07:25:11 Created the secondary index index_gender. Waiting for it to become active
2021/03/11 07:25:11 Index is now active
2021/03/11 07:25:11 Create docs mutations
2021/03/11 07:25:11 Using n1ql client
2021/03/11 07:25:11 Expected and Actual scan responses are the same
2021/03/11 07:25:11 Dropping the secondary index index_registered
2021/03/11 07:25:11 Index dropped
2021/03/11 07:25:11 Create docs mutations
2021/03/11 07:25:11 Delete docs mutations
2021/03/11 07:25:12 Using n1ql client
2021/03/11 07:25:12 Expected and Actual scan responses are the same
2021/03/11 07:25:16 Created the secondary index index_longitude. Waiting for it to become active
2021/03/11 07:25:16 Index is now active
2021/03/11 07:25:16 Create docs mutations
2021/03/11 07:25:16 Using n1ql client
2021/03/11 07:25:16 Expected and Actual scan responses are the same
--- PASS: TestMultipleIndexCreatesDropsWithMutations (18.96s)
=== RUN   TestCreateDropScan
2021/03/11 07:25:16 In TestCreateDropScan()
2021/03/11 07:25:20 Created the secondary index index_cd. Waiting for it to become active
2021/03/11 07:25:20 Index is now active
2021/03/11 07:25:20 Using n1ql client
2021/03/11 07:25:21 Expected and Actual scan responses are the same
2021/03/11 07:25:21 Dropping the secondary index index_cd
2021/03/11 07:25:21 Index dropped
2021/03/11 07:25:21 Using n1ql client
2021/03/11 07:25:21 Scan failed as expected with error: Index Not Found - cause: GSI index index_cd not found.
--- PASS: TestCreateDropScan (4.26s)
=== RUN   TestCreateDropCreate
2021/03/11 07:25:21 In TestCreateDropCreate()
2021/03/11 07:25:25 Created the secondary index index_cdc. Waiting for it to become active
2021/03/11 07:25:25 Index is now active
2021/03/11 07:25:25 Using n1ql client
2021/03/11 07:25:25 Expected and Actual scan responses are the same
2021/03/11 07:25:25 Dropping the secondary index index_cdc
2021/03/11 07:25:25 Index dropped
2021/03/11 07:25:25 Using n1ql client
2021/03/11 07:25:25 Scan 2 failed as expected with error: Index Not Found - cause: GSI index index_cdc not found.
2021/03/11 07:25:29 Created the secondary index index_cdc. Waiting for it to become active
2021/03/11 07:25:29 Index is now active
2021/03/11 07:25:29 Using n1ql client
2021/03/11 07:25:30 Expected and Actual scan responses are the same
2021/03/11 07:25:30 (Inclusion 1) Lengths of expected and actual scan results are 5035 and 5035. Num of docs in bucket = 10402
2021/03/11 07:25:30 Using n1ql client
2021/03/11 07:25:30 Expected and Actual scan responses are the same
2021/03/11 07:25:30 (Inclusion 3) Lengths of expected and actual scan results are 5035 and 5035. Num of docs in bucket = 10402
--- PASS: TestCreateDropCreate (9.02s)
=== RUN   TestCreate2Drop1Scan2
2021/03/11 07:25:30 In TestCreate2Drop1Scan2()
2021/03/11 07:25:34 Created the secondary index index_i1. Waiting for it to become active
2021/03/11 07:25:34 Index is now active
2021/03/11 07:25:38 Created the secondary index index_i2. Waiting for it to become active
2021/03/11 07:25:38 Index is now active
2021/03/11 07:25:38 Using n1ql client
2021/03/11 07:25:38 Expected and Actual scan responses are the same
2021/03/11 07:25:38 Using n1ql client
2021/03/11 07:25:38 Expected and Actual scan responses are the same
2021/03/11 07:25:38 Dropping the secondary index index_i1
2021/03/11 07:25:38 Index dropped
2021/03/11 07:25:38 Using n1ql client
2021/03/11 07:25:38 Expected and Actual scan responses are the same
--- PASS: TestCreate2Drop1Scan2 (8.56s)
=== RUN   TestIndexNameCaseSensitivity
2021/03/11 07:25:38 In TestIndexNameCaseSensitivity()
2021/03/11 07:25:42 Created the secondary index index_age. Waiting for it to become active
2021/03/11 07:25:42 Index is now active
2021/03/11 07:25:42 Using n1ql client
2021/03/11 07:25:42 Expected and Actual scan responses are the same
2021/03/11 07:25:42 Using n1ql client
2021/03/11 07:25:42 Scan failed as expected with error: Index Not Found - cause: GSI index index_Age not found.
--- PASS: TestIndexNameCaseSensitivity (4.13s)
=== RUN   TestCreateDuplicateIndex
2021/03/11 07:25:42 In TestCreateDuplicateIndex()
2021/03/11 07:25:47 Created the secondary index index_di1. Waiting for it to become active
2021/03/11 07:25:47 Index is now active
2021/03/11 07:25:47 Index found:  index_di1
2021/03/11 07:25:47 Create failed as expected with error: Index index_di1 already exists.
--- PASS: TestCreateDuplicateIndex (4.28s)
=== RUN   TestDropNonExistingIndex
2021/03/11 07:25:47 In TestDropNonExistingIndex()
2021/03/11 07:25:47 Dropping the secondary index 123456
2021/03/11 07:25:47 Index drop failed as expected with error: Index does not exist.
--- PASS: TestDropNonExistingIndex (0.12s)
=== RUN   TestCreateIndexNonExistentBucket
2021/03/11 07:25:47 In TestCreateIndexNonExistentBucket()
2021-03-11T07:25:47.904+05:30 [Error] Encountered error during create index.  Error: Bucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time (err=Bucket Not Found).
2021-03-11T07:25:57.907+05:30 [Error] Fail to create index: Bucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time (err=Bucket Not Found).
2021/03/11 07:25:57 Index create failed as expected with error: Bucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time (err=Bucket Not Found).
--- PASS: TestCreateIndexNonExistentBucket (10.63s)
=== RUN   TestScanWithNoTimeout
2021/03/11 07:25:57 Create an index on an empty bucket, populate the bucket, and run a scan on the index
2021/03/11 07:25:57 Changing config key indexer.settings.scan_timeout to value 0
2021/03/11 07:25:58 Using n1ql client
2021-03-11T07:25:58.105+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 07:25:58 Expected and Actual scan responses are the same
--- PASS: TestScanWithNoTimeout (0.24s)
=== RUN   TestIndexingOnBinaryBucketMeta
2021/03/11 07:25:58 In TestIndexingOnBinaryBucketMeta()
2021/03/11 07:25:58 	 1. Populate a bucket with binary docs and create indexes on the `id`, `cas` and `expiration` fields of metadata
2021/03/11 07:25:58 	 2. Validate the test by comparing the items_count of indexes and the number of docs in the bucket for each of the fields
2021/03/11 07:26:01 Modified parameters of bucket default, responseBody: 
2021/03/11 07:26:01 Created bucket binaryBucket, responseBody: 
2021/03/11 07:26:19 Created the secondary index index_binary_meta_id. Waiting for it to become active
2021/03/11 07:26:19 Index is now active
2021/03/11 07:26:24 items_count stat is 10 for index index_binary_meta_id
2021/03/11 07:26:24 Dropping the secondary index index_binary_meta_id
2021/03/11 07:26:24 Index dropped
2021/03/11 07:26:27 Created the secondary index index_binary_meta_cas. Waiting for it to become active
2021/03/11 07:26:27 Index is now active
2021/03/11 07:26:32 items_count stat is 10 for index index_binary_meta_cas
2021/03/11 07:26:32 Dropping the secondary index index_binary_meta_cas
2021/03/11 07:26:32 Index dropped
2021/03/11 07:26:35 Created the secondary index index_binary_meta_expiration. Waiting for it to become active
2021/03/11 07:26:35 Index is now active
2021/03/11 07:26:40 items_count stat is 10 for index index_binary_meta_expiration
2021/03/11 07:26:40 Dropping the secondary index index_binary_meta_expiration
2021/03/11 07:26:40 Index dropped
2021/03/11 07:26:42 Deleted bucket binaryBucket, responseBody: 
2021/03/11 07:26:45 Modified parameters of bucket default, responseBody: 
--- PASS: TestIndexingOnBinaryBucketMeta (62.07s)
=== RUN   TestRetainDeleteXATTRBinaryDocs
2021/03/11 07:27:00 In TestRetainDeleteXATTRBinaryDocs()
2021/03/11 07:27:00 	 1. Populate a bucket with binary docs having system XATTRS
2021/03/11 07:27:00 	 2. Create index on the system XATTRS with "retain_deleted_xattr" attribute set to true
2021/03/11 07:27:00 	 3. Delete the documents in the bucket
2021/03/11 07:27:00 	 4. Query for the meta() information in the source bucket. The total number of results should be equivalent to the number of documents in the bucket before deletion of documents
2021/03/11 07:27:03 Modified parameters of bucket default, responseBody: 
2021/03/11 07:27:03 Created bucket binaryBucket, responseBody: 
2021/03/11 07:27:21 Created the secondary index index_system_xattr. Waiting for it to become active
2021/03/11 07:27:21 Index is now active
2021/03/11 07:27:26 Deleted all the documents in the bucket: binaryBucket successfully
2021/03/11 07:27:28 Deleted bucket binaryBucket, responseBody: 
2021/03/11 07:27:31 Modified parameters of bucket default, responseBody: 
--- PASS: TestRetainDeleteXATTRBinaryDocs (46.56s)
=== RUN   TestSimpleIndex_FloatDataType
2021/03/11 07:27:46 In TestSimpleIndex_FloatDataType()
2021/03/11 07:27:46 Index found:  index_age
2021/03/11 07:27:46 Using n1ql client
2021/03/11 07:27:46 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_FloatDataType (0.02s)
=== RUN   TestSimpleIndex_StringDataType
2021/03/11 07:27:46 In TestSimpleIndex_StringDataType()
2021/03/11 07:27:50 Created the secondary index index_company. Waiting for it to become active
2021/03/11 07:27:50 Index is now active
2021/03/11 07:27:51 Using n1ql client
2021/03/11 07:27:51 Expected and Actual scan responses are the same
2021/03/11 07:27:51 Using n1ql client
2021/03/11 07:27:51 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_StringDataType (4.35s)
=== RUN   TestSimpleIndex_FieldValueCaseSensitivity
2021/03/11 07:27:51 In TestSimpleIndex_FieldValueCaseSensitivity()
2021/03/11 07:27:51 Index found:  index_company
2021/03/11 07:27:51 Using n1ql client
2021/03/11 07:27:51 Expected and Actual scan responses are the same
2021/03/11 07:27:51 Using n1ql client
2021/03/11 07:27:51 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_FieldValueCaseSensitivity (0.06s)
=== RUN   TestSimpleIndex_BoolDataType
2021/03/11 07:27:51 In TestSimpleIndex_BoolDataType()
2021/03/11 07:27:55 Created the secondary index index_isActive. Waiting for it to become active
2021/03/11 07:27:55 Index is now active
2021/03/11 07:27:55 Using n1ql client
2021/03/11 07:27:55 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_BoolDataType (4.46s)
=== RUN   TestBasicLookup
2021/03/11 07:27:55 In TestBasicLookup()
2021/03/11 07:27:55 Index found:  index_company
2021/03/11 07:27:55 Using n1ql client
2021/03/11 07:27:55 Expected and Actual scan responses are the same
--- PASS: TestBasicLookup (0.01s)
=== RUN   TestIndexOnNonExistentField
2021/03/11 07:27:55 In TestIndexOnNonExistentField()
2021/03/11 07:27:59 Created the secondary index index_height. Waiting for it to become active
2021/03/11 07:27:59 Index is now active
2021/03/11 07:27:59 Using n1ql client
2021/03/11 07:27:59 Expected and Actual scan responses are the same
--- PASS: TestIndexOnNonExistentField (4.22s)
=== RUN   TestIndexPartiallyMissingField
2021/03/11 07:27:59 In TestIndexPartiallyMissingField()
2021/03/11 07:28:04 Created the secondary index index_nationality. Waiting for it to become active
2021/03/11 07:28:04 Index is now active
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestIndexPartiallyMissingField (4.15s)
=== RUN   TestScanNonMatchingDatatype
2021/03/11 07:28:04 In TestScanNonMatchingDatatype()
2021/03/11 07:28:04 Index found:  index_age
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestScanNonMatchingDatatype (0.01s)
=== RUN   TestInclusionNeither
2021/03/11 07:28:04 In TestInclusionNeither()
2021/03/11 07:28:04 Index found:  index_age
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestInclusionNeither (0.07s)
=== RUN   TestInclusionLow
2021/03/11 07:28:04 In TestInclusionLow()
2021/03/11 07:28:04 Index found:  index_age
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestInclusionLow (0.03s)
=== RUN   TestInclusionHigh
2021/03/11 07:28:04 In TestInclusionHigh()
2021/03/11 07:28:04 Index found:  index_age
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestInclusionHigh (0.02s)
=== RUN   TestInclusionBoth
2021/03/11 07:28:04 In TestInclusionBoth()
2021/03/11 07:28:04 Index found:  index_age
2021/03/11 07:28:04 Using n1ql client
2021/03/11 07:28:04 Expected and Actual scan responses are the same
--- PASS: TestInclusionBoth (0.02s)
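The four TestInclusion* cases above correspond to the usual GSI range-inclusion encoding, Neither=0, Low=1, High=2, Both=3, treated as a two-bit mask over the bounds. A sketch of a range predicate under that assumption (illustrative; the indexer evaluates this on encoded keys, not plain ints):

```go
package main

import "fmt"

// Inclusion values for range scans: a bit each for the low and high bound.
const (
	Neither = 0
	Low     = 1
	High    = 2
	Both    = 3
)

// inRange reports whether v falls in (lo, hi) with the given bounds included.
func inRange(v, lo, hi, incl int) bool {
	okLo := v > lo || (incl&Low != 0 && v == lo)
	okHi := v < hi || (incl&High != 0 && v == hi)
	return okLo && okHi
}

func main() {
	fmt.Println(inRange(30, 30, 40, Neither)) // false: low bound excluded
	fmt.Println(inRange(30, 30, 40, Low))     // true: low bound included
}
```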
=== RUN   TestNestedIndex_String
2021/03/11 07:28:04 In TestNestedIndex_String()
2021/03/11 07:28:08 Created the secondary index index_streetname. Waiting for it to become active
2021/03/11 07:28:08 Index is now active
2021/03/11 07:28:08 Using n1ql client
2021/03/11 07:28:08 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_String (4.38s)
=== RUN   TestNestedIndex_Float
2021/03/11 07:28:08 In TestNestedIndex_Float()
2021/03/11 07:28:12 Created the secondary index index_floor. Waiting for it to become active
2021/03/11 07:28:12 Index is now active
2021/03/11 07:28:12 Using n1ql client
2021/03/11 07:28:12 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_Float (4.19s)
=== RUN   TestNestedIndex_Bool
2021/03/11 07:28:12 In TestNestedIndex_Bool()
2021/03/11 07:28:17 Created the secondary index index_isresidential. Waiting for it to become active
2021/03/11 07:28:17 Index is now active
2021/03/11 07:28:17 Using n1ql client
2021/03/11 07:28:17 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_Bool (4.29s)
=== RUN   TestLookupJsonObject
2021/03/11 07:28:17 In TestLookupJsonObject()
2021/03/11 07:28:22 Created the secondary index index_streetaddress. Waiting for it to become active
2021/03/11 07:28:22 Index is now active
2021/03/11 07:28:22 Using n1ql client
2021/03/11 07:28:23 Count of docScanResults is 1
2021/03/11 07:28:23 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2021/03/11 07:28:23 Count of scanResults is 1
2021/03/11 07:28:23 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2021/03/11 07:28:23 Expected and Actual scan responses are the same
--- PASS: TestLookupJsonObject (5.96s)
=== RUN   TestLookupObjDifferentOrdering
2021/03/11 07:28:23 In TestLookupObjDifferentOrdering()
2021/03/11 07:28:23 Index found:  index_streetaddress
2021/03/11 07:28:23 Using n1ql client
2021/03/11 07:28:23 Count of docScanResults is 1
2021/03/11 07:28:23 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2021/03/11 07:28:23 Count of scanResults is 1
2021/03/11 07:28:23 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2021/03/11 07:28:23 Expected and Actual scan responses are the same
--- PASS: TestLookupObjDifferentOrdering (0.05s)
=== RUN   TestRangeJsonObject
2021/03/11 07:28:23 In TestRangeJsonObject()
2021/03/11 07:28:23 Index found:  index_streetaddress
2021/03/11 07:28:23 Using n1ql client
2021/03/11 07:28:23 Count of scanResults is 2
2021/03/11 07:28:23 Key: string Userbb48952f-f8d1-4e04-a0e1-96b9019706fb  Value: value.Values [{"buildingname":"Rosewood Gardens","doornumber":"514","floor":2,"streetname":"Karweg Place"}] false
2021/03/11 07:28:23 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2021/03/11 07:28:23 Count of docScanResults is 2
2021/03/11 07:28:23 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2021/03/11 07:28:23 Key: Userbb48952f-f8d1-4e04-a0e1-96b9019706fb  Value: [map[buildingname:Rosewood Gardens doornumber:514 floor:2 streetname:Karweg Place]]
2021/03/11 07:28:23 Expected and Actual scan responses are the same
--- PASS: TestRangeJsonObject (0.00s)
=== RUN   TestLookupFloatDiffForms
2021/03/11 07:28:23 In TestLookupFloatDiffForms()
2021/03/11 07:28:27 Created the secondary index index_latitude. Waiting for it to become active
2021/03/11 07:28:27 Index is now active
2021/03/11 07:28:27 Scan 1
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 2
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 3
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 4
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 5
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 6
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
--- PASS: TestLookupFloatDiffForms (4.65s)
=== RUN   TestRangeFloatInclVariations
2021/03/11 07:28:27 In TestRangeFloatInclVariations()
2021/03/11 07:28:27 Index found:  index_latitude
2021/03/11 07:28:27 Scan 1
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 2
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 3
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 4
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 5
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
2021/03/11 07:28:27 Scan 6
2021/03/11 07:28:27 Using n1ql client
2021/03/11 07:28:27 Expected and Actual scan responses are the same
--- PASS: TestRangeFloatInclVariations (0.06s)
=== RUN   TestScanAll
2021/03/11 07:28:27 In TestScanAll()
2021/03/11 07:28:31 Created the secondary index index_name. Waiting for it to become active
2021/03/11 07:28:31 Index is now active
2021/03/11 07:28:32 Length of docScanResults = 10502
2021/03/11 07:28:32 Using n1ql client
2021/03/11 07:28:32 Length of scanResults = 10502
2021/03/11 07:28:32 Expected and Actual scan responses are the same
--- PASS: TestScanAll (4.30s)
=== RUN   TestScanAllNestedField
2021/03/11 07:28:32 In TestScanAllNestedField()
2021/03/11 07:28:32 Index found:  index_streetname
2021/03/11 07:28:32 Length of docScanResults = 2
2021/03/11 07:28:32 Using n1ql client
2021/03/11 07:28:32 Length of scanResults = 2
2021/03/11 07:28:32 Expected and Actual scan responses are the same
--- PASS: TestScanAllNestedField (0.01s)
=== RUN   TestBasicPrimaryIndex
2021/03/11 07:28:32 In TestBasicPrimaryIndex()
2021/03/11 07:28:37 Created the secondary index index_p1. Waiting for it to become active
2021/03/11 07:28:37 Index is now active
2021-03-11T07:28:37.139+05:30 [Error] transport error between 127.0.0.1:37576->127.0.0.1:9107: write tcp 127.0.0.1:37576->127.0.0.1:9107: write: broken pipe
2021-03-11T07:28:37.139+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:37576->127.0.0.1:9107: write: broken pipe`
2021-03-11T07:28:37.139+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 07:28:37 Expected and Actual scan responses are the same
2021/03/11 07:28:37 CountRange() expected and actual is:  1896 and 1896
2021/03/11 07:28:37 lookupkey for CountLookup() = Userf4580e5a-dbc0-4a68-a7a6-3b943bcfe83c
2021/03/11 07:28:37 CountLookup() = 1
--- PASS: TestBasicPrimaryIndex (5.15s)
=== RUN   TestBasicNullDataType
2021/03/11 07:28:37 In TestBasicNullDataType()
2021/03/11 07:28:37 Index found:  index_email
2021/03/11 07:28:37 Using n1ql client
2021/03/11 07:28:37 Expected and Actual scan responses are the same
--- PASS: TestBasicNullDataType (0.01s)
=== RUN   TestBasicArrayDataType_ScanAll
2021/03/11 07:28:37 In TestBasicArrayDataType_ScanAll()
2021/03/11 07:28:41 Created the secondary index index_tags. Waiting for it to become active
2021/03/11 07:28:41 Index is now active
2021/03/11 07:28:41 Using n1ql client
2021/03/11 07:28:42 Expected and Actual scan responses are the same
--- PASS: TestBasicArrayDataType_ScanAll (4.78s)
=== RUN   TestBasicArrayDataType_Lookup
2021/03/11 07:28:42 In TestBasicArrayDataType_Lookup()
2021/03/11 07:28:44 Index found:  index_tags
2021/03/11 07:28:44 Count of scanResults is 1
2021/03/11 07:28:44 Key: string Usere46cea01-38f6-4e7b-92e5-69d64668ae75  Value: value.Values [["reprehenderit","tempor","officia","exercitation","labore","sunt","tempor"]] false
--- PASS: TestBasicArrayDataType_Lookup (2.00s)
=== RUN   TestArrayDataType_LookupMissingArrayValue
2021/03/11 07:28:44 In TestArrayDataType_LookupMissingArrayValue()
2021/03/11 07:28:44 Index found:  index_tags
2021/03/11 07:28:44 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupMissingArrayValue (0.00s)
=== RUN   TestArrayDataType_LookupWrongOrder
2021/03/11 07:28:44 In TestArrayDataType_LookupWrongOrder()
2021/03/11 07:28:44 Index found:  index_tags
2021/03/11 07:28:44 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupWrongOrder (0.00s)
=== RUN   TestArrayDataType_LookupSubset
2021/03/11 07:28:44 In TestArrayDataType_LookupSubset()
2021/03/11 07:28:44 Index found:  index_tags
2021/03/11 07:28:44 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupSubset (0.00s)
=== RUN   TestScanLimitParameter
2021/03/11 07:28:44 In TestScanLimitParameter()
2021/03/11 07:28:44 Index found:  index_age
2021/03/11 07:28:44 Using n1ql client
2021/03/11 07:28:44 Using n1ql client
--- PASS: TestScanLimitParameter (0.01s)
=== RUN   TestCountRange
2021/03/11 07:28:44 In TestCountRange()
2021/03/11 07:28:44 Index found:  index_age
2021/03/11 07:28:44 Count of expected and actual Range is:  2375 and 2375
2021/03/11 07:28:44 Count of expected and actual Range is: 10002 and 10002
2021/03/11 07:28:44 Count of expected and actual Range are: 0 and 0
2021/03/11 07:28:44 Count of expected and actual Range are: 494 and 494
2021/03/11 07:28:44 Testing CountRange() for key <= val
2021/03/11 07:28:44 Count of expected and actual CountRange for key <= 30 are: 5245 and 5245
2021/03/11 07:28:44 Testing CountRange() for key >= val
2021/03/11 07:28:44 Count of expected and actual CountRange for key >= 25 are: 7668 and 7668
2021/03/11 07:28:44 Testing CountRange() for null < key <= val
2021/03/11 07:28:44 Count of expected and actual CountRange for key > null && key <= 30 are: 5245 and 5245
2021/03/11 07:28:44 Testing CountRange() for val <= key < null 
2021/03/11 07:28:44 Count of expected and actual CountRange for key >= 25 && key < null are: 0 and 0
2021/03/11 07:28:44 Count of expected and actual Range are: 0 and 0
--- PASS: TestCountRange (0.07s)
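The CountRange checks above compare an expected count, computed directly over the documents, with the count the index returns for the same inclusive bounds. A minimal sketch of that expected-side computation, with an illustrative data set and function name rather than the actual testrunner helpers:

```go
package main

import "fmt"

// countRange mirrors the expected side of the CountRange checks: count how
// many keys fall inside the inclusive range [low, high]. The test then
// compares this with the count reported by the index for the same bounds.
func countRange(keys []int, low, high int) int {
	n := 0
	for _, k := range keys {
		if k >= low && k <= high {
			n++
		}
	}
	return n
}

func main() {
	ages := []int{18, 22, 25, 28, 30, 31, 40}
	fmt.Println(countRange(ages, 25, 30)) // 3 (25, 28, 30)
}
```

The log's one-sided forms ("key <= 30", "key >= 25") are the same check with one bound left unbounded.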
=== RUN   TestCountLookup
2021/03/11 07:28:44 In TestCountLookup()
2021/03/11 07:28:44 Index found:  index_age
2021/03/11 07:28:44 Count of expected and actual Range are: 497 and 497
2021/03/11 07:28:44 Count of expected and actual Range are: 0 and 0
--- PASS: TestCountLookup (0.01s)
=== RUN   TestRangeStatistics
2021/03/11 07:28:44 In TestRangeStatistics()
2021/03/11 07:28:44 Index found:  index_age
--- PASS: TestRangeStatistics (0.00s)
=== RUN   TestIndexCreateWithWhere
2021/03/11 07:28:44 In TestIndexCreateWithWhere()
2021/03/11 07:28:48 Created the secondary index index_ageabove30. Waiting for it to become active
2021/03/11 07:28:48 Index is now active
2021/03/11 07:28:48 Using n1ql client
2021/03/11 07:28:48 Expected and Actual scan responses are the same
2021/03/11 07:28:48 Lengths of expected and actual scanResults are:  4263 and 4263
2021/03/11 07:28:52 Created the secondary index index_ageteens. Waiting for it to become active
2021/03/11 07:28:52 Index is now active
2021/03/11 07:28:52 Using n1ql client
2021/03/11 07:28:52 Expected and Actual scan responses are the same
2021/03/11 07:28:52 Lengths of expected and actual scanResults are:  0 and 0
2021/03/11 07:28:56 Created the secondary index index_age35to45. Waiting for it to become active
2021/03/11 07:28:56 Index is now active
2021/03/11 07:28:56 Using n1ql client
2021/03/11 07:28:57 Expected and Actual scan responses are the same
2021/03/11 07:28:57 Lengths of expected and actual scanResults are:  2869 and 2869
--- PASS: TestIndexCreateWithWhere (12.95s)
=== RUN   TestDeferredIndexCreate
2021/03/11 07:28:57 In TestDeferredIndexCreate()
2021/03/11 07:28:57 Created the index index_deferred in deferred mode. Index state is INDEX_STATE_READY
2021/03/11 07:28:59 Build the deferred index index_deferred. Waiting for the index to become active
2021/03/11 07:28:59 Waiting for index to go active ...
2021/03/11 07:29:00 Waiting for index to go active ...
2021/03/11 07:29:01 Waiting for index to go active ...
2021/03/11 07:29:02 Index is now active
2021/03/11 07:29:02 Using n1ql client
2021/03/11 07:29:02 Expected and Actual scan responses are the same
--- PASS: TestDeferredIndexCreate (5.21s)
=== RUN   TestCompositeIndex_NumAndString
2021/03/11 07:29:02 In TestCompositeIndex()
2021/03/11 07:29:06 Created the secondary index index_composite1. Waiting for it to become active
2021/03/11 07:29:06 Index is now active
2021/03/11 07:29:06 Using n1ql client
2021/03/11 07:29:07 Using n1ql client
2021/03/11 07:29:07 Using n1ql client
2021/03/11 07:29:07 Expected and Actual scan responses are the same
--- PASS: TestCompositeIndex_NumAndString (5.46s)
=== RUN   TestCompositeIndex_TwoNumberFields
2021/03/11 07:29:07 In TestCompositeIndex()
2021/03/11 07:29:12 Created the secondary index index_composite2. Waiting for it to become active
2021/03/11 07:29:12 Index is now active
2021/03/11 07:29:12 Using n1ql client
--- PASS: TestCompositeIndex_TwoNumberFields (4.52s)
=== RUN   TestNumbers_Int64_Float64
2021/03/11 07:29:12 In TestNumbers_Int64_Float64()
2021/03/11 07:29:16 Created the secondary index idx_numbertest. Waiting for it to become active
2021/03/11 07:29:16 Index is now active
2021/03/11 07:29:16 
 ==== Int64 test #0
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #1
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #2
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #3
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #4
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #5
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #6
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #7
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Int64 test #8
2021/03/11 07:29:16 Using n1ql client
2021/03/11 07:29:16 Expected and Actual scan responses are the same
2021/03/11 07:29:16 
 ==== Float64 test #0
2021/03/11 07:29:17 Using n1ql client
2021/03/11 07:29:17 Expected and Actual scan responses are the same
2021/03/11 07:29:17 
 ==== Float64 test #1
2021/03/11 07:29:17 Using n1ql client
2021/03/11 07:29:17 Expected and Actual scan responses are the same
2021/03/11 07:29:17 
 ==== Float64 test #2
2021/03/11 07:29:17 Using n1ql client
2021/03/11 07:29:17 Expected and Actual scan responses are the same
2021/03/11 07:29:17 
 ==== Float64 test #3
2021/03/11 07:29:17 Using n1ql client
2021/03/11 07:29:17 Expected and Actual scan responses are the same
--- PASS: TestNumbers_Int64_Float64 (4.86s)
=== RUN   TestRestartIndexer
2021/03/11 07:29:17 In TestRestartIndexer()
2021/03/11 07:29:17 []
2021-03-11T07:29:17.208+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T07:29:17.208+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T07:29:17.210+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T07:29:17.210+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 07:29:37 Using n1ql client
2021-03-11T07:29:37.151+05:30 [Error] transport error between 127.0.0.1:38656->127.0.0.1:9107: write tcp 127.0.0.1:38656->127.0.0.1:9107: write: broken pipe
2021-03-11T07:29:37.151+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -105064333308839366 request transport failed `write tcp 127.0.0.1:38656->127.0.0.1:9107: write: broken pipe`
2021-03-11T07:29:37.151+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 07:29:37 Len of expected and actual scan results are :  10002 and 10002
2021/03/11 07:29:37 Expected and Actual scan responses are the same
--- PASS: TestRestartIndexer (20.10s)
=== RUN   TestCreateDocsMutation
2021/03/11 07:29:37 In TestCreateDocsMutation()
2021/03/11 07:29:37 Index found:  index_age
2021/03/11 07:29:37 Using n1ql client
2021/03/11 07:29:37 Len of expected and actual scan results are :  10002 and 10002
2021/03/11 07:29:37 Expected and Actual scan responses are the same
2021/03/11 07:29:37 Using n1ql client
2021/03/11 07:29:37 Index Scan after mutations took 161.844632ms
2021/03/11 07:29:37 Len of expected and actual scan results are :  10102 and 10102
2021/03/11 07:29:37 Expected and Actual scan responses are the same
--- PASS: TestCreateDocsMutation (0.51s)
=== RUN   TestRestartProjector
2021/03/11 07:29:37 In TestRestartProjector()
2021/03/11 07:29:37 []
2021/03/11 07:29:57 Using n1ql client
2021/03/11 07:29:57 Len of expected and actual scan results are :  10102 and 10102
2021/03/11 07:29:57 Expected and Actual scan responses are the same
--- PASS: TestRestartProjector (20.08s)
=== RUN   TestDeleteDocsMutation
2021/03/11 07:29:57 In TestDeleteDocsMutation()
2021/03/11 07:29:57 Index found:  index_age
2021/03/11 07:29:57 Using n1ql client
2021/03/11 07:29:57 Len of expected and actual scan results are :  10102 and 10102
2021/03/11 07:29:57 Expected and Actual scan responses are the same
2021/03/11 07:29:58 Using n1ql client
2021/03/11 07:29:58 Index Scan after mutations took 154.976774ms
2021/03/11 07:29:58 Len of expected and actual scan results are :  9902 and 9902
2021/03/11 07:29:58 Expected and Actual scan responses are the same
--- PASS: TestDeleteDocsMutation (0.46s)
=== RUN   TestUpdateDocsMutation
2021/03/11 07:29:58 In TestUpdateDocsMutation()
2021/03/11 07:29:58 Index found:  index_age
2021/03/11 07:29:58 Using n1ql client
2021/03/11 07:29:58 Len of expected and actual scan results are :  9433 and 9433
2021/03/11 07:29:58 Expected and Actual scan responses are the same
2021/03/11 07:29:58 Num of keysFromMutDocs: 100
2021/03/11 07:29:58 Updating number of documents: 99
2021/03/11 07:29:58 Using n1ql client
2021/03/11 07:29:58 Index Scan after mutations took 129.18713ms
2021/03/11 07:29:58 Len of expected and actual scan results are :  9425 and 9425
2021/03/11 07:29:58 Expected and Actual scan responses are the same
--- PASS: TestUpdateDocsMutation (0.38s)
=== RUN   TestLargeMutations
2021/03/11 07:29:58 In TestLargeMutations()
2021/03/11 07:29:58 In DropAllSecondaryIndexes()
2021/03/11 07:29:58 Index found:  index_p1
2021/03/11 07:29:58 Dropped index index_p1
2021/03/11 07:29:58 Index found:  index_longitude
2021/03/11 07:29:58 Dropped index index_longitude
2021/03/11 07:29:58 Index found:  idx_age
2021/03/11 07:29:58 Dropped index idx_age
2021/03/11 07:29:58 Index found:  index_company
2021/03/11 07:29:59 Dropped index index_company
2021/03/11 07:29:59 Index found:  index_composite1
2021/03/11 07:29:59 Dropped index index_composite1
2021/03/11 07:29:59 Index found:  index_height
2021/03/11 07:29:59 Dropped index index_height
2021/03/11 07:29:59 Index found:  index_latitude
2021/03/11 07:29:59 Dropped index index_latitude
2021/03/11 07:29:59 Index found:  index_pin
2021/03/11 07:29:59 Dropped index index_pin
2021/03/11 07:29:59 Index found:  index_tags
2021/03/11 07:29:59 Dropped index index_tags
2021/03/11 07:29:59 Index found:  index_name
2021/03/11 07:30:00 Dropped index index_name
2021/03/11 07:30:00 Index found:  index_isresidential
2021/03/11 07:30:00 Dropped index index_isresidential
2021/03/11 07:30:00 Index found:  idx_numbertest
2021/03/11 07:30:00 Dropped index idx_numbertest
2021/03/11 07:30:00 Index found:  index_state
2021/03/11 07:30:00 Dropped index index_state
2021/03/11 07:30:00 Index found:  index_email
2021/03/11 07:30:00 Dropped index index_email
2021/03/11 07:30:00 Index found:  index_streetname
2021/03/11 07:30:00 Dropped index index_streetname
2021/03/11 07:30:00 Index found:  index_isActive
2021/03/11 07:30:00 Dropped index index_isActive
2021/03/11 07:30:00 Index found:  index_ageabove30
2021/03/11 07:30:00 Dropped index index_ageabove30
2021/03/11 07:30:00 Index found:  index_ageteens
2021/03/11 07:30:01 Dropped index index_ageteens
2021/03/11 07:30:01 Index found:  index_balance
2021/03/11 07:30:01 Dropped index index_balance
2021/03/11 07:30:01 Index found:  index_eyeColor
2021/03/11 07:30:01 Dropped index index_eyeColor
2021/03/11 07:30:01 Index found:  index_gender
2021/03/11 07:30:01 Dropped index index_gender
2021/03/11 07:30:01 Index found:  index_age
2021/03/11 07:30:01 Dropped index index_age
2021/03/11 07:30:01 Index found:  index_nationality
2021/03/11 07:30:01 Dropped index index_nationality
2021/03/11 07:30:01 Index found:  index_age35to45
2021/03/11 07:30:01 Dropped index index_age35to45
2021/03/11 07:30:01 Index found:  index_di1
2021/03/11 07:30:01 Dropped index index_di1
2021/03/11 07:30:01 Index found:  index_floor
2021/03/11 07:30:01 Dropped index index_floor
2021/03/11 07:30:01 Index found:  index_cdc
2021/03/11 07:30:01 Dropped index index_cdc
2021/03/11 07:30:01 Index found:  index_composite2
2021/03/11 07:30:01 Dropped index index_composite2
2021/03/11 07:30:01 Index found:  index_streetaddress
2021/03/11 07:30:02 Dropped index index_streetaddress
2021/03/11 07:30:02 Index found:  index_deferred
2021/03/11 07:30:02 Dropped index index_deferred
2021/03/11 07:30:02 Index found:  index_i2
2021/03/11 07:30:02 Dropped index index_i2
2021/03/11 07:30:24 Created the secondary index indexmut_1. Waiting for it to become active
2021/03/11 07:30:24 Index is now active
2021/03/11 07:30:24 Using n1ql client
2021/03/11 07:30:25 Expected and Actual scan responses are the same
2021/03/11 07:30:25 Len of expected and actual scan results are :  29902 and 29902
2021/03/11 07:30:25 ITERATION 0
2021/03/11 07:30:44 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:30:44 Index is now active
2021/03/11 07:30:44 Using n1ql client
2021/03/11 07:30:44 Expected and Actual scan responses are the same
2021/03/11 07:30:44 Len of expected and actual scan results are :  39902 and 39902
2021/03/11 07:30:44 Using n1ql client
2021/03/11 07:30:44 Expected and Actual scan responses are the same
2021/03/11 07:30:44 Len of expected and actual scan results are :  39902 and 39902
2021/03/11 07:30:44 Dropping the secondary index indexmut_2
2021/03/11 07:30:44 Index dropped
2021/03/11 07:30:44 ITERATION 1
2021/03/11 07:31:03 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:31:03 Index is now active
2021/03/11 07:31:03 Using n1ql client
2021/03/11 07:31:03 Expected and Actual scan responses are the same
2021/03/11 07:31:03 Len of expected and actual scan results are :  49902 and 49902
2021/03/11 07:31:03 Using n1ql client
2021/03/11 07:31:03 Expected and Actual scan responses are the same
2021/03/11 07:31:03 Len of expected and actual scan results are :  49902 and 49902
2021/03/11 07:31:03 Dropping the secondary index indexmut_2
2021/03/11 07:31:04 Index dropped
2021/03/11 07:31:04 ITERATION 2
2021/03/11 07:31:23 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:31:23 Index is now active
2021/03/11 07:31:23 Using n1ql client
2021/03/11 07:31:23 Expected and Actual scan responses are the same
2021/03/11 07:31:23 Len of expected and actual scan results are :  59902 and 59902
2021/03/11 07:31:23 Using n1ql client
2021/03/11 07:31:24 Expected and Actual scan responses are the same
2021/03/11 07:31:24 Len of expected and actual scan results are :  59902 and 59902
2021/03/11 07:31:24 Dropping the secondary index indexmut_2
2021/03/11 07:31:24 Index dropped
2021/03/11 07:31:24 ITERATION 3
2021/03/11 07:31:43 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:31:43 Index is now active
2021/03/11 07:31:43 Using n1ql client
2021/03/11 07:31:44 Expected and Actual scan responses are the same
2021/03/11 07:31:44 Len of expected and actual scan results are :  69902 and 69902
2021/03/11 07:31:44 Using n1ql client
2021/03/11 07:31:45 Expected and Actual scan responses are the same
2021/03/11 07:31:45 Len of expected and actual scan results are :  69902 and 69902
2021/03/11 07:31:45 Dropping the secondary index indexmut_2
2021/03/11 07:31:45 Index dropped
2021/03/11 07:31:45 ITERATION 4
2021/03/11 07:32:05 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:32:05 Index is now active
2021/03/11 07:32:06 Using n1ql client
2021/03/11 07:32:06 Expected and Actual scan responses are the same
2021/03/11 07:32:06 Len of expected and actual scan results are :  79902 and 79902
2021/03/11 07:32:06 Using n1ql client
2021/03/11 07:32:07 Expected and Actual scan responses are the same
2021/03/11 07:32:07 Len of expected and actual scan results are :  79902 and 79902
2021/03/11 07:32:07 Dropping the secondary index indexmut_2
2021/03/11 07:32:07 Index dropped
2021/03/11 07:32:07 ITERATION 5
2021/03/11 07:32:28 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:32:28 Index is now active
2021/03/11 07:32:28 Using n1ql client
2021/03/11 07:32:29 Expected and Actual scan responses are the same
2021/03/11 07:32:29 Len of expected and actual scan results are :  89902 and 89902
2021/03/11 07:32:29 Using n1ql client
2021/03/11 07:32:30 Expected and Actual scan responses are the same
2021/03/11 07:32:30 Len of expected and actual scan results are :  89902 and 89902
2021/03/11 07:32:30 Dropping the secondary index indexmut_2
2021/03/11 07:32:30 Index dropped
2021/03/11 07:32:30 ITERATION 6
2021/03/11 07:32:51 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:32:51 Index is now active
2021/03/11 07:32:51 Using n1ql client
2021/03/11 07:32:52 Expected and Actual scan responses are the same
2021/03/11 07:32:52 Len of expected and actual scan results are :  99902 and 99902
2021/03/11 07:32:52 Using n1ql client
2021/03/11 07:32:52 Expected and Actual scan responses are the same
2021/03/11 07:32:52 Len of expected and actual scan results are :  99902 and 99902
2021/03/11 07:32:52 Dropping the secondary index indexmut_2
2021/03/11 07:32:52 Index dropped
2021/03/11 07:32:52 ITERATION 7
2021/03/11 07:33:15 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:33:15 Index is now active
2021/03/11 07:33:15 Using n1ql client
2021/03/11 07:33:16 Expected and Actual scan responses are the same
2021/03/11 07:33:16 Len of expected and actual scan results are :  109902 and 109902
2021/03/11 07:33:16 Using n1ql client
2021/03/11 07:33:17 Expected and Actual scan responses are the same
2021/03/11 07:33:17 Len of expected and actual scan results are :  109902 and 109902
2021/03/11 07:33:17 Dropping the secondary index indexmut_2
2021/03/11 07:33:17 Index dropped
2021/03/11 07:33:17 ITERATION 8
2021/03/11 07:33:38 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:33:38 Index is now active
2021/03/11 07:33:38 Using n1ql client
2021/03/11 07:33:40 Expected and Actual scan responses are the same
2021/03/11 07:33:40 Len of expected and actual scan results are :  119902 and 119902
2021/03/11 07:33:40 Using n1ql client
2021/03/11 07:33:40 Expected and Actual scan responses are the same
2021/03/11 07:33:40 Len of expected and actual scan results are :  119902 and 119902
2021/03/11 07:33:40 Dropping the secondary index indexmut_2
2021/03/11 07:33:40 Index dropped
2021/03/11 07:33:40 ITERATION 9
2021/03/11 07:34:02 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:34:02 Index is now active
2021/03/11 07:34:02 Using n1ql client
2021/03/11 07:34:03 Expected and Actual scan responses are the same
2021/03/11 07:34:03 Len of expected and actual scan results are :  129902 and 129902
2021/03/11 07:34:03 Using n1ql client
2021/03/11 07:34:04 Expected and Actual scan responses are the same
2021/03/11 07:34:04 Len of expected and actual scan results are :  129902 and 129902
2021/03/11 07:34:04 Dropping the secondary index indexmut_2
2021/03/11 07:34:05 Index dropped
2021/03/11 07:34:05 ITERATION 10
2021/03/11 07:34:28 Created the secondary index indexmut_2. Waiting for it to become active
2021/03/11 07:34:28 Index is now active
2021/03/11 07:34:29 Using n1ql client
2021/03/11 07:34:30 Expected and Actual scan responses are the same
2021/03/11 07:34:30 Len of expected and actual scan results are :  139902 and 139902
2021/03/11 07:34:30 Using n1ql client
2021/03/11 07:34:32 Expected and Actual scan responses are the same
2021/03/11 07:34:32 Len of expected and actual scan results are :  139902 and 139902
2021/03/11 07:34:32 Dropping the secondary index indexmut_2
2021/03/11 07:34:32 Index dropped
--- PASS: TestLargeMutations (273.50s)
=== RUN   TestPlanner
2021/03/11 07:34:32 In TestPlanner()
2021/03/11 07:34:32 -------------------------------------------
2021/03/11 07:34:32 initial placement - 20-50M, 10 index, 3 replica, 2x
2021-03-11T07:34:32.198+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:32.198+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:32.204+05:30 [Info] switched currmeta from 458 -> 458 force true 
2021-03-11T07:34:32.211+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:32.212+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:32.224+05:30 [Info] switched currmeta from 453 -> 455 force true 
2021-03-11T07:34:32.261+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:32.261+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:32.265+05:30 [Info] switched currmeta from 458 -> 458 force true 
2021-03-11T07:34:32.271+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:32.271+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:32.275+05:30 [Info] switched currmeta from 455 -> 455 force true 
2021-03-11T07:34:32.472+05:30 [Info] Score: 0.09853060129933422
2021-03-11T07:34:32.472+05:30 [Info] Memory Quota: 55303831648 (51.5057G)
2021-03-11T07:34:32.472+05:30 [Info] CPU Quota: 12
2021-03-11T07:34:32.472+05:30 [Info] Indexer Memory Mean 33635712627 (31.3257G)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Memory Deviation 3314146990 (3.08654G) (9.85%)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Memory Utilization 0.6082
2021-03-11T07:34:32.472+05:30 [Info] Indexer CPU Mean 9.8208
2021-03-11T07:34:32.472+05:30 [Info] Indexer CPU Deviation 1.87 (19.09%)
2021-03-11T07:34:32.472+05:30 [Info] Indexer CPU Utilization 0.8184
2021-03-11T07:34:32.472+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:32.472+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:32.472+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:32.472+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Data Size Mean 33635712627 (31.3257G)
2021-03-11T07:34:32.472+05:30 [Info] Indexer Data Size Deviation 3314146990 (3.08654G) (9.85%)
2021-03-11T07:34:32.472+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:32.472+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:32.472+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:32.472+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:32 -------------------------------------------
2021/03/11 07:34:32 initial placement - 20-50M, 30 index, 3 replica, 2x
2021-03-11T07:34:34.107+05:30 [Info] Score: 0.06488455464215707
2021-03-11T07:34:34.107+05:30 [Info] Memory Quota: 64927099972 (60.4681G)
2021-03-11T07:34:34.107+05:30 [Info] CPU Quota: 12
2021-03-11T07:34:34.107+05:30 [Info] Indexer Memory Mean 41520226104 (38.6687G)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Memory Deviation 2694021379 (2.509G) (6.49%)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Memory Utilization 0.6395
2021-03-11T07:34:34.107+05:30 [Info] Indexer CPU Mean 11.2582
2021-03-11T07:34:34.107+05:30 [Info] Indexer CPU Deviation 2.91 (25.87%)
2021-03-11T07:34:34.107+05:30 [Info] Indexer CPU Utilization 0.9382
2021-03-11T07:34:34.107+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:34.107+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:34.107+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:34.107+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Data Size Mean 41520226104 (38.6687G)
2021-03-11T07:34:34.107+05:30 [Info] Indexer Data Size Deviation 2694021379 (2.509G) (6.49%)
2021-03-11T07:34:34.107+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:34.107+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:34.107+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:34.107+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:34 -------------------------------------------
2021/03/11 07:34:34 initial placement - 20-50M, 30 index, 3 replica, 4x
2021-03-11T07:34:34.554+05:30 [Info] Score: 0.03181294089390763
2021-03-11T07:34:34.554+05:30 [Info] Memory Quota: 133584432860 (124.41G)
2021-03-11T07:34:34.554+05:30 [Info] CPU Quota: 24
2021-03-11T07:34:34.554+05:30 [Info] Indexer Memory Mean 82795529976 (77.1093G)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Memory Deviation 2633969301 (2.45308G) (3.18%)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Memory Utilization 0.6198
2021-03-11T07:34:34.554+05:30 [Info] Indexer CPU Mean 23.5481
2021-03-11T07:34:34.554+05:30 [Info] Indexer CPU Deviation 3.45 (14.63%)
2021-03-11T07:34:34.554+05:30 [Info] Indexer CPU Utilization 0.9812
2021-03-11T07:34:34.554+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:34.554+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:34.554+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:34.554+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Data Size Mean 82795529976 (77.1093G)
2021-03-11T07:34:34.554+05:30 [Info] Indexer Data Size Deviation 2633969301 (2.45308G) (3.18%)
2021-03-11T07:34:34.554+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:34.554+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:34.554+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:34.554+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:34 -------------------------------------------
2021/03/11 07:34:34 initial placement - 200-500M, 10 index, 3 replica, 2x
2021-03-11T07:34:36.875+05:30 [Info] Score: 0.005836593887747437
2021-03-11T07:34:36.875+05:30 [Info] Memory Quota: 576210709876 (536.638G)
2021-03-11T07:34:36.875+05:30 [Info] CPU Quota: 12
2021-03-11T07:34:36.875+05:30 [Info] Indexer Memory Mean 471097447329 (438.744G)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Memory Deviation 2749604481 (2.56077G) (0.58%)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Memory Utilization 0.8176
2021-03-11T07:34:36.875+05:30 [Info] Indexer CPU Mean 13.9951
2021-03-11T07:34:36.875+05:30 [Info] Indexer CPU Deviation 4.27 (30.54%)
2021-03-11T07:34:36.875+05:30 [Info] Indexer CPU Utilization 1.1663
2021-03-11T07:34:36.875+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:36.875+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:36.875+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:36.875+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Data Size Mean 471097447329 (438.744G)
2021-03-11T07:34:36.875+05:30 [Info] Indexer Data Size Deviation 2749604481 (2.56077G) (0.58%)
2021-03-11T07:34:36.876+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:36.876+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:36.876+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:36.876+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:36 -------------------------------------------
2021/03/11 07:34:36 initial placement - 200-500M, 30 index, 3 replica, 2x
2021-03-11T07:34:37.850+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T07:34:37.895+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:37.895+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:37.899+05:30 [Info] switched currmeta from 458 -> 458 force true 
2021-03-11T07:34:37.900+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:37.900+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:37.902+05:30 [Info] switched currmeta from 455 -> 455 force true 
2021-03-11T07:34:41.122+05:30 [Info] Score: 0.01728214416093081
2021-03-11T07:34:41.122+05:30 [Info] Memory Quota: 460301850856 (428.69G)
2021-03-11T07:34:41.122+05:30 [Info] CPU Quota: 12
2021-03-11T07:34:41.122+05:30 [Info] Indexer Memory Mean 374789544572 (349.05G)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Memory Deviation 6477166939 (6.03233G) (1.73%)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Memory Utilization 0.8142
2021-03-11T07:34:41.122+05:30 [Info] Indexer CPU Mean 10.6795
2021-03-11T07:34:41.122+05:30 [Info] Indexer CPU Deviation 2.33 (21.83%)
2021-03-11T07:34:41.122+05:30 [Info] Indexer CPU Utilization 0.8900
2021-03-11T07:34:41.122+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:41.122+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:41.122+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:41.122+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Data Size Mean 374789544572 (349.05G)
2021-03-11T07:34:41.122+05:30 [Info] Indexer Data Size Deviation 6477166939 (6.03233G) (1.73%)
2021-03-11T07:34:41.122+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:41.122+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:41.122+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:41.122+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:41 -------------------------------------------
2021/03/11 07:34:41 initial placement - mixed small/medium, 30 index, 3 replica, 1.5/4x
2021-03-11T07:34:41.682+05:30 [Info] Score: 0.01787778112272637
2021-03-11T07:34:41.682+05:30 [Info] Memory Quota: 321062683896 (299.013G)
2021-03-11T07:34:41.682+05:30 [Info] CPU Quota: 16
2021-03-11T07:34:41.682+05:30 [Info] Indexer Memory Mean 260104489573 (242.241G)
2021-03-11T07:34:41.682+05:30 [Info] Indexer Memory Deviation 4650091133 (4.33073G) (1.79%)
2021-03-11T07:34:41.682+05:30 [Info] Indexer Memory Utilization 0.8101
2021-03-11T07:34:41.682+05:30 [Info] Indexer CPU Mean 11.2520
2021-03-11T07:34:41.682+05:30 [Info] Indexer CPU Deviation 5.04 (44.83%)
2021-03-11T07:34:41.682+05:30 [Info] Indexer CPU Utilization 0.7033
2021-03-11T07:34:41.682+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:41.682+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:41.682+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:41.682+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:41.682+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:41.682+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:41.682+05:30 [Info] Indexer Data Size Mean 260104489573 (242.241G)
2021-03-11T07:34:41.683+05:30 [Info] Indexer Data Size Deviation 4650091133 (4.33073G) (1.79%)
2021-03-11T07:34:41.683+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:41.683+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:41.683+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:41.683+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:41 -------------------------------------------
2021/03/11 07:34:41 initial placement - mixed all, 30 index, 3 replica, 1.5/4x
2021-03-11T07:34:42.327+05:30 [Info] Score: 0.021656673689863848
2021-03-11T07:34:42.327+05:30 [Info] Memory Quota: 376218480667 (350.381G)
2021-03-11T07:34:42.327+05:30 [Info] CPU Quota: 24
2021-03-11T07:34:42.327+05:30 [Info] Indexer Memory Mean 303019181264 (282.209G)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Memory Deviation 6562387530 (6.1117G) (2.17%)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Memory Utilization 0.8054
2021-03-11T07:34:42.327+05:30 [Info] Indexer CPU Mean 10.9772
2021-03-11T07:34:42.327+05:30 [Info] Indexer CPU Deviation 3.85 (35.04%)
2021-03-11T07:34:42.327+05:30 [Info] Indexer CPU Utilization 0.4574
2021-03-11T07:34:42.327+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:42.327+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:42.327+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:42.327+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Data Size Mean 303019181264 (282.209G)
2021-03-11T07:34:42.327+05:30 [Info] Indexer Data Size Deviation 6562387530 (6.1117G) (2.17%)
2021-03-11T07:34:42.327+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:42.327+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:42.327+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:42.327+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:42 -------------------------------------------
2021/03/11 07:34:42 initial placement - 6 2M index, 1 replica, 2x
2021-03-11T07:34:42.537+05:30 [Info] Score: 0
2021-03-11T07:34:42.537+05:30 [Info] Memory Quota: 4848128000 (4.51517G)
2021-03-11T07:34:42.537+05:30 [Info] CPU Quota: 2
2021-03-11T07:34:42.537+05:30 [Info] Indexer Memory Mean 2080000000 (1.93715G)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Memory Utilization 0.4290
2021-03-11T07:34:42.537+05:30 [Info] Indexer CPU Mean 1.2000
2021-03-11T07:34:42.537+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Indexer CPU Utilization 0.6000
2021-03-11T07:34:42.537+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:42.537+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:42.537+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:42.537+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Data Size Mean 2080000000 (1.93715G)
2021-03-11T07:34:42.537+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:42.537+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:42.537+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:42.537+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:42 -------------------------------------------
2021/03/11 07:34:42 initial placement - 5 20M primary index, 2 replica, 2x
2021-03-11T07:34:42.984+05:30 [Info] Score: 0
2021-03-11T07:34:42.984+05:30 [Info] Memory Quota: 14310128000 (13.3273G)
2021-03-11T07:34:42.984+05:30 [Info] CPU Quota: 2
2021-03-11T07:34:42.984+05:30 [Info] Indexer Memory Mean 10960000000 (10.2073G)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Memory Utilization 0.7659
2021-03-11T07:34:42.984+05:30 [Info] Indexer CPU Mean 1.2000
2021-03-11T07:34:42.984+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Indexer CPU Utilization 0.6000
2021-03-11T07:34:42.984+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:42.984+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:42.984+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:42.984+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Data Size Mean 10960000000 (10.2073G)
2021-03-11T07:34:42.984+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:42.984+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:42.984+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:42.984+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:42 -------------------------------------------
2021/03/11 07:34:42 initial placement - 5 20M array index, 2 replica, 2x
2021-03-11T07:34:43.433+05:30 [Info] Score: 0
2021-03-11T07:34:43.433+05:30 [Info] Memory Quota: 237416768000 (221.112G)
2021-03-11T07:34:43.433+05:30 [Info] CPU Quota: 2
2021-03-11T07:34:43.433+05:30 [Info] Indexer Memory Mean 191440000000 (178.292G)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Memory Utilization 0.8063
2021-03-11T07:34:43.433+05:30 [Info] Indexer CPU Mean 1.2000
2021-03-11T07:34:43.433+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Indexer CPU Utilization 0.6000
2021-03-11T07:34:43.433+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:43.433+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:43.433+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:43.433+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Data Size Mean 191440000000 (178.292G)
2021-03-11T07:34:43.433+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:43.433+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:43.433+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:43.434+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:43 -------------------------------------------
2021/03/11 07:34:43 initial placement - 3 replica constraint, 2 index, 2x
2021-03-11T07:34:44.344+05:30 [Info] Score: 0
2021-03-11T07:34:44.344+05:30 [Info] Memory Quota: 530294000 (505.728M)
2021-03-11T07:34:44.344+05:30 [Info] CPU Quota: 2
2021-03-11T07:34:44.344+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Memory Utilization 0.0049
2021-03-11T07:34:44.344+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:34:44.344+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:34:44.344+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:44.344+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:44.344+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:44.344+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2021-03-11T07:34:44.344+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:44.344+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:44.344+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:44.344+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:44 -------------------------------------------
2021/03/11 07:34:44 incr placement - 20-50M, 5 2M index, 1 replica, 1x
2021-03-11T07:34:45.645+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T07:34:45.696+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:45.696+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:45.702+05:30 [Info] switched currmeta from 455 -> 455 force true 
2021-03-11T07:34:45.710+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T07:34:45.710+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T07:34:45.717+05:30 [Info] switched currmeta from 458 -> 458 force true 
2021-03-11T07:34:45.958+05:30 [Info] Score: 0.014379877136943393
2021-03-11T07:34:45.958+05:30 [Info] Memory Quota: 125233041042 (116.632G)
2021-03-11T07:34:45.958+05:30 [Info] CPU Quota: 27
2021-03-11T07:34:45.958+05:30 [Info] Indexer Memory Mean 71117238485 (66.2331G)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Memory Deviation 1022657151 (975.282M) (1.44%)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Memory Utilization 0.5679
2021-03-11T07:34:45.958+05:30 [Info] Indexer CPU Mean 20.1734
2021-03-11T07:34:45.958+05:30 [Info] Indexer CPU Deviation 1.87 (9.29%)
2021-03-11T07:34:45.958+05:30 [Info] Indexer CPU Utilization 0.7472
2021-03-11T07:34:45.958+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:45.958+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:45.958+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:45.958+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Data Size Mean 71117238485 (66.2331G)
2021-03-11T07:34:45.958+05:30 [Info] Indexer Data Size Deviation 1022657151 (975.282M) (1.44%)
2021-03-11T07:34:45.958+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:45.958+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:45.958+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:45.958+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:45 -------------------------------------------
2021/03/11 07:34:45 incr placement - mixed small/medium, 6 2M index, 1 replica, 1x
2021-03-11T07:34:48.930+05:30 [Info] Score: 0.0008743719299104943
2021-03-11T07:34:48.930+05:30 [Info] Memory Quota: 536870912000 (500G)
2021-03-11T07:34:48.930+05:30 [Info] CPU Quota: 20
2021-03-11T07:34:48.930+05:30 [Info] Indexer Memory Mean 393025602195 (366.034G)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Memory Deviation 343650554 (327.731M) (0.09%)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Memory Utilization 0.7321
2021-03-11T07:34:48.930+05:30 [Info] Indexer CPU Mean 14.2305
2021-03-11T07:34:48.930+05:30 [Info] Indexer CPU Deviation 0.95 (6.65%)
2021-03-11T07:34:48.930+05:30 [Info] Indexer CPU Utilization 0.7115
2021-03-11T07:34:48.930+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:48.930+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:48.930+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:48.930+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Data Size Mean 393025602195 (366.034G)
2021-03-11T07:34:48.930+05:30 [Info] Indexer Data Size Deviation 343650554 (327.731M) (0.09%)
2021-03-11T07:34:48.930+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:48.930+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:48.930+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:48.930+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:48 -------------------------------------------
2021/03/11 07:34:48 incr placement - 3 server group, 3 replica, 1x
2021-03-11T07:34:49.098+05:30 [Info] Score: 0
2021-03-11T07:34:49.098+05:30 [Info] Memory Quota: 530294000 (505.728M)
2021-03-11T07:34:49.098+05:30 [Info] CPU Quota: 16
2021-03-11T07:34:49.098+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Memory Utilization 0.0049
2021-03-11T07:34:49.098+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:34:49.098+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:34:49.098+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:49.098+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:49.098+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:49.098+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2021-03-11T07:34:49.098+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:49.098+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:49.098+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:49.098+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:49 -------------------------------------------
2021/03/11 07:34:49 incr placement - 2 server group, 3 replica, 1x
2021-03-11T07:34:49.102+05:30 [Warn] Index has more replicas than server groups. Index=index1 0 (replica 1) Bucket=bucket2 Scope= Collection=
2021-03-11T07:34:49.102+05:30 [Warn] Index has more replicas than server groups. Index=index1 0 (replica 2) Bucket=bucket2 Scope= Collection=
2021-03-11T07:34:49.102+05:30 [Warn] Index has more replicas than server groups. Index=index1 0 Bucket=bucket2 Scope= Collection=
2021-03-11T07:34:49.251+05:30 [Info] Score: 0
2021-03-11T07:34:49.251+05:30 [Info] Memory Quota: 530294000 (505.728M)
2021-03-11T07:34:49.251+05:30 [Info] CPU Quota: 16
2021-03-11T07:34:49.251+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Memory Utilization 0.0049
2021-03-11T07:34:49.251+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:34:49.251+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:34:49.251+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:49.251+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:49.251+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:49.251+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2021-03-11T07:34:49.251+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:49.251+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:49.251+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:49.251+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:49 -------------------------------------------
2021/03/11 07:34:49 rebalance - 20-50M, 90 index, 20% shuffle, 1x, utilization 90%+
2021-03-11T07:34:51.878+05:30 [Info] Score: 0.11107509891607209
2021-03-11T07:34:51.878+05:30 [Info] Memory Quota: 139586437120 (130G)
2021-03-11T07:34:51.878+05:30 [Info] CPU Quota: 30
2021-03-11T07:34:51.878+05:30 [Info] Indexer Memory Mean 88876568718 (82.7728G)
2021-03-11T07:34:51.878+05:30 [Info] Indexer Memory Deviation 2099022918 (1.95487G) (2.36%)
2021-03-11T07:34:51.878+05:30 [Info] Indexer Memory Utilization 0.6367
2021-03-11T07:34:51.878+05:30 [Info] Indexer CPU Mean 24.0538
2021-03-11T07:34:51.878+05:30 [Info] Indexer CPU Deviation 3.02 (12.55%)
2021-03-11T07:34:51.878+05:30 [Info] Indexer CPU Utilization 0.8018
2021-03-11T07:34:51.878+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:51.878+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:51.878+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:51.878+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:51.878+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:51.878+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:51.878+05:30 [Info] Indexer Data Size Mean 88876568718 (82.7728G)
2021-03-11T07:34:51.879+05:30 [Info] Indexer Data Size Deviation 2099022918 (1.95487G) (2.36%)
2021-03-11T07:34:51.879+05:30 [Info] Total Index Data (from non-deleted node) 993.273G
2021-03-11T07:34:51.879+05:30 [Info] Index Data Moved (exclude new node) 205.28G (20.67%)
2021-03-11T07:34:51.879+05:30 [Info] No. Index (from non-deleted node) 90
2021-03-11T07:34:51.879+05:30 [Info] No. Index Moved (exclude new node) 18 (20.00%)
2021/03/11 07:34:51 -------------------------------------------
2021/03/11 07:34:51 rebalance - mixed small/medium, 90 index, 20% shuffle, 1x
2021-03-11T07:34:54.579+05:30 [Info] Score: 0.14080053438474172
2021-03-11T07:34:54.579+05:30 [Info] Memory Quota: 536870912000 (500G)
2021-03-11T07:34:54.579+05:30 [Info] CPU Quota: 20
2021-03-11T07:34:54.579+05:30 [Info] Indexer Memory Mean 392505602195 (365.549G)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Memory Deviation 25887345744 (24.1095G) (6.60%)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Memory Utilization 0.7311
2021-03-11T07:34:54.579+05:30 [Info] Indexer CPU Mean 13.9305
2021-03-11T07:34:54.579+05:30 [Info] Indexer CPU Deviation 3.38 (24.24%)
2021-03-11T07:34:54.579+05:30 [Info] Indexer CPU Utilization 0.6965
2021-03-11T07:34:54.579+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:54.579+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:54.579+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:54.579+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Data Size Mean 392505602195 (365.549G)
2021-03-11T07:34:54.579+05:30 [Info] Indexer Data Size Deviation 25887345744 (24.1095G) (6.60%)
2021-03-11T07:34:54.579+05:30 [Info] Total Index Data (from non-deleted node) 4.28378T
2021-03-11T07:34:54.579+05:30 [Info] Index Data Moved (exclude new node) 1.31166T (30.62%)
2021-03-11T07:34:54.579+05:30 [Info] No. Index (from non-deleted node) 90
2021-03-11T07:34:54.580+05:30 [Info] No. Index Moved (exclude new node) 14 (15.56%)
2021/03/11 07:34:54 -------------------------------------------
2021/03/11 07:34:54 rebalance - travel sample, 10% shuffle, 1x
2021-03-11T07:34:54.660+05:30 [Info] Score: 0.031089424333813048
2021-03-11T07:34:54.660+05:30 [Info] Memory Quota: 536870912 (512M)
2021-03-11T07:34:54.661+05:30 [Info] CPU Quota: 8
2021-03-11T07:34:54.661+05:30 [Info] Indexer Memory Mean 17503138 (16.6923M)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Memory Deviation 1632487 (1.55686M) (9.33%)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Memory Utilization 0.0326
2021-03-11T07:34:54.661+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:34:54.661+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:34:54.661+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:54.661+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:54.661+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:54.661+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Data Size Mean 0 (0)
2021-03-11T07:34:54.661+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:54.661+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:54.661+05:30 [Info] No. Index (from non-deleted node) 10
2021-03-11T07:34:54.661+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:54 -------------------------------------------
2021/03/11 07:34:54 rebalance - 20-50M, 90 index, swap 2, 1x
2021-03-11T07:34:58.081+05:30 [Info] Score: 0.02726369278893342
2021-03-11T07:34:58.081+05:30 [Info] Memory Quota: 139586437120 (130G)
2021-03-11T07:34:58.081+05:30 [Info] CPU Quota: 30
2021-03-11T07:34:58.081+05:30 [Info] Indexer Memory Mean 88876568718 (82.7728G)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Memory Deviation 2423103465 (2.25669G) (2.73%)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Memory Utilization 0.6367
2021-03-11T07:34:58.081+05:30 [Info] Indexer CPU Mean 24.0538
2021-03-11T07:34:58.081+05:30 [Info] Indexer CPU Deviation 1.25 (5.19%)
2021-03-11T07:34:58.081+05:30 [Info] Indexer CPU Utilization 0.8018
2021-03-11T07:34:58.081+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:58.081+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:58.081+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:58.081+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Data Size Mean 88876568718 (82.7728G)
2021-03-11T07:34:58.081+05:30 [Info] Indexer Data Size Deviation 2423103465 (2.25669G) (2.73%)
2021-03-11T07:34:58.081+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:58.081+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:58.081+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:58.081+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:58 -------------------------------------------
2021/03/11 07:34:58 rebalance - mixed small/medium, 90 index, swap 2, 1x
2021-03-11T07:34:59.977+05:30 [Info] Score: 0.0025267253364071765
2021-03-11T07:34:59.977+05:30 [Info] Memory Quota: 536870912000 (500G)
2021-03-11T07:34:59.977+05:30 [Info] CPU Quota: 20
2021-03-11T07:34:59.977+05:30 [Info] Indexer Memory Mean 392505602195 (365.549G)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Memory Deviation 991753849 (945.81M) (0.25%)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Memory Utilization 0.7311
2021-03-11T07:34:59.977+05:30 [Info] Indexer CPU Mean 13.9305
2021-03-11T07:34:59.977+05:30 [Info] Indexer CPU Deviation 0.89 (6.37%)
2021-03-11T07:34:59.977+05:30 [Info] Indexer CPU Utilization 0.6965
2021-03-11T07:34:59.977+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:34:59.977+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:34:59.977+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:34:59.977+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Data Size Mean 392505602195 (365.549G)
2021-03-11T07:34:59.977+05:30 [Info] Indexer Data Size Deviation 991753849 (945.81M) (0.25%)
2021-03-11T07:34:59.977+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:34:59.977+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:34:59.977+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:34:59.977+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:34:59 -------------------------------------------
2021/03/11 07:34:59 rebalance - travel sample, swap 2, 1x
2021-03-11T07:35:01.506+05:30 [Info] Score: 0.0006385854742565169
2021-03-11T07:35:01.506+05:30 [Info] Memory Quota: 536870912 (512M)
2021-03-11T07:35:01.506+05:30 [Info] CPU Quota: 8
2021-03-11T07:35:01.506+05:30 [Info] Indexer Memory Mean 17503138 (16.6923M)
2021-03-11T07:35:01.506+05:30 [Info] Indexer Memory Deviation 22354 (21.8301K) (0.13%)
2021-03-11T07:35:01.507+05:30 [Info] Indexer Memory Utilization 0.0326
2021-03-11T07:35:01.507+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:35:01.507+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:35:01.507+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:35:01.507+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:35:01.507+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:35:01.507+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] Indexer Data Size Mean 0 (0)
2021-03-11T07:35:01.507+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:35:01.507+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:35:01.507+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:35:01.507+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:35:01 -------------------------------------------
2021/03/11 07:35:01 rebalance - 8 identical index, add 4, 1x
2021-03-11T07:35:02.151+05:30 [Info] Score: 0
2021-03-11T07:35:02.151+05:30 [Info] Memory Quota: 530294000 (505.728M)
2021-03-11T07:35:02.151+05:30 [Info] CPU Quota: 2
2021-03-11T07:35:02.151+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Memory Utilization 0.0049
2021-03-11T07:35:02.151+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:35:02.151+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:35:02.151+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:35:02.151+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:35:02.151+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:35:02.151+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2021-03-11T07:35:02.151+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] Total Index Data (from non-deleted node) 19.8364M
2021-03-11T07:35:02.151+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:35:02.151+05:30 [Info] No. Index (from non-deleted node) 8
2021-03-11T07:35:02.151+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:35:02 -------------------------------------------
2021/03/11 07:35:02 rebalance - 8 identical index, delete 2, 2x
2021-03-11T07:35:02.621+05:30 [Info] Score: 0
2021-03-11T07:35:02.621+05:30 [Info] Memory Quota: 1060588000 (1011.46M)
2021-03-11T07:35:02.621+05:30 [Info] CPU Quota: 4
2021-03-11T07:35:02.621+05:30 [Info] Indexer Memory Mean 10400000 (9.91821M)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Memory Utilization 0.0098
2021-03-11T07:35:02.621+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:35:02.621+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:35:02.621+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:35:02.621+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:35:02.621+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:35:02.621+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Data Size Mean 10400000 (9.91821M)
2021-03-11T07:35:02.621+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:35:02.621+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:35:02.621+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:35:02.621+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:35:02 -------------------------------------------
2021/03/11 07:35:02 rebalance - drop replica - 3 replica, 3 zone, delete 1, 2x
2021-03-11T07:35:02.622+05:30 [Warn] There are more replicas than available nodes. Will not move index replica (default,,,country) from ejected node 127.0.0.1:9003
2021-03-11T07:35:02.622+05:30 [Info] Score: 0
2021-03-11T07:35:02.622+05:30 [Info] Memory Quota: 536870912 (512M)
2021-03-11T07:35:02.622+05:30 [Info] CPU Quota: 16
2021-03-11T07:35:02.622+05:30 [Info] Indexer Memory Mean 0 (0)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Memory Utilization 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:35:02.622+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Data Size Mean 0 (0)
2021-03-11T07:35:02.622+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:35:02.622+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:35:02.622+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:35:02.622+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:35:02 -------------------------------------------
2021/03/11 07:35:02 rebalance - rebuild replica - 3 replica, 3 zone, add 1, delete 1, 1x
2021-03-11T07:35:02.623+05:30 [Info] Score: 0
2021-03-11T07:35:02.623+05:30 [Info] Memory Quota: 268435456 (256M)
2021-03-11T07:35:02.623+05:30 [Info] CPU Quota: 8
2021-03-11T07:35:02.623+05:30 [Info] Indexer Memory Mean 0 (0)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Memory Utilization 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer CPU Mean 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Indexer CPU Utilization 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer IO Mean 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Drain Rate Mean 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Scan Rate Mean 0.0000
2021-03-11T07:35:02.623+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Data Size Mean 0 (0)
2021-03-11T07:35:02.623+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] Total Index Data (from non-deleted node) 0
2021-03-11T07:35:02.623+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2021-03-11T07:35:02.623+05:30 [Info] No. Index (from non-deleted node) 0
2021-03-11T07:35:02.623+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2021/03/11 07:35:02 -------------------------------------------
2021/03/11 07:35:02 Minimum memory test 1: min memory = 0
2021/03/11 07:35:02 -------------------------------------------
2021/03/11 07:35:02 Minimum memory test 2: min memory > quota
2021-03-11T07:35:02.648+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=7us
2021-03-11T07:35:02.648+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=9us
2021-03-11T07:35:02.648+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=6us
2021/03/11 07:35:13 -------------------------------------------
2021/03/11 07:35:13 Minimum memory test 3: min memory < quota
2021-03-11T07:35:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T07:35:19.421+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T07:35:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T07:35:19.426+05:30 [Info] average scan response {1 ms}
2021-03-11T07:35:19.773+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T07:35:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T07:35:20.065+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T07:35:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T07:35:20.068+05:30 [Info] average scan response {517 ms}
2021-03-11T07:35:20.068+05:30 [Info] GSIC[default/default-_default-_default-1615427479962261874] logstats "default" {"gsi_scan_count":109,"gsi_scan_duration":10595784873,"gsi_throttle_duration":1112124756,"gsi_prime_duration":831010675,"gsi_blocked_duration":2100939770,"gsi_total_temp_files":0}
2021-03-11T07:35:20.389+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021/03/11 07:35:25 -------------------------------------------
2021/03/11 07:35:25 Minimum memory test 4: replica repair with min memory > quota
2021-03-11T07:35:25.355+05:30 [Info] Rebuilding lost replica for (default,,,country,0)
2021-03-11T07:35:25.355+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=9us
2021-03-11T07:35:25.355+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=5us
2021-03-11T07:35:25.355+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=4us
2021-03-11T07:35:25.355+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=194us
2021-03-11T07:35:25.355+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=97us
2021-03-11T07:35:25.356+05:30 [Info] Cannot rebuild lost replica due to resource constraints in the cluster.  Will not rebuild lost replica.
2021-03-11T07:35:25.356+05:30 [Warn] 
MemoryQuota: 200
CpuQuota: 8
--- Violations for index  (mem 130, cpu 0) at node 127.0.0.1:9003 
	Cannot move to 127.0.0.1:9001: ReplicaViolation (free mem 1.67772e+07T, free cpu 8)
	Cannot move to 127.0.0.1:9002: ReplicaViolation (free mem 1.67772e+07T, free cpu 8)
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 5: replica repair with min memory < quota
2021-03-11T07:35:29.868+05:30 [Info] Rebuilding lost replica for (default,,,country,0)
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 6: rebalance with min memory > quota
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=7us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=7us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=7us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=7.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=8.  Elapsed Time=5us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=9.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=10.  Elapsed Time=6us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=11.  Elapsed Time=7us
2021-03-11T07:35:29.956+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=12.  Elapsed Time=6us
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 7: rebalance-out with min memory > quota
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=5us
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=7us
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=7us
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=6us
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=7us
2021-03-11T07:35:29.957+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=6us
2021-03-11T07:35:29.957+05:30 [Warn] Unable to find a solution within resource constraints.  Relaxing resource constraint check.
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 8: plan with min memory > quota
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=856us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=6us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=4us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=4us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=4us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=4us
2021-03-11T07:35:29.967+05:30 [Warn] Unable to find a solution within resource constraints.  Relaxing resource constraint check.
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 9: single node rebalance with min memory > quota
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=4us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=7us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=6us
2021-03-11T07:35:29.967+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=6us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=8us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=4us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=7.  Elapsed Time=3us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=8.  Elapsed Time=3us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=9.  Elapsed Time=3us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=10.  Elapsed Time=3us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=11.  Elapsed Time=3us
2021-03-11T07:35:29.968+05:30 [Info] Planner::Failed to create plan satisfying constraint. Re-planning. Num of Try=12.  Elapsed Time=3us
2021/03/11 07:35:29 -------------------------------------------
2021/03/11 07:35:29 Minimum memory test 10: plan with partitioned index on empty cluster
--- PASS: TestPlanner (58.21s)
=== RUN   TestRestfulAPI
2021/03/11 07:35:30 In TestRestfulAPI()
2021/03/11 07:35:30 In DropAllSecondaryIndexes()
2021/03/11 07:35:30 Index found:  indexmut_1
2021/03/11 07:35:30 Dropped index indexmut_1
2021/03/11 07:35:30 Setting JSON docs in KV
2021/03/11 07:35:31 GET all indexes
2021/03/11 07:35:31 200 OK
2021/03/11 07:35:31 FOUND indexes: []
2021/03/11 07:35:31 DROP index: badindexid
2021/03/11 07:35:31 status: 400 Bad Request
2021/03/11 07:35:31 DROP index: 23544142
2021/03/11 07:35:31 status: 500 Internal Server Error
2021/03/11 07:35:31 TEST: malformed body
2021/03/11 07:35:31 400 Bad Request "invalid request body ({name:), unmarshal failed invalid character 'n' looking for beginning of object key string"

2021/03/11 07:35:31 TEST: missing field ``name``
2021/03/11 07:35:31 400 Bad Request "missing field name"
2021/03/11 07:35:31 TEST: empty field ``name``
2021/03/11 07:35:31 400 Bad Request "empty field name"
2021/03/11 07:35:31 TEST: missing field ``bucket``
2021/03/11 07:35:31 400 Bad Request "missing field bucket"
2021/03/11 07:35:31 TEST: empty field ``bucket``
2021/03/11 07:35:31 400 Bad Request "empty field bucket"
2021/03/11 07:35:31 TEST: missing field ``secExprs``
2021/03/11 07:35:31 400 Bad Request "missing field secExprs"
2021/03/11 07:35:31 TEST: empty field ``secExprs``
2021/03/11 07:35:31 400 Bad Request "empty field secExprs"
2021/03/11 07:35:31 TEST: incomplete field ``desc``
2021/03/11 07:35:31 400 Bad Request "incomplete desc information [true]"
2021/03/11 07:35:31 TEST: invalid field ``desc``
2021/03/11 07:35:31 400 Bad Request "incomplete desc information [1]"
2021/03/11 07:35:31 
2021/03/11 07:35:31 CREATE INDEX: idx1
2021/03/11 07:35:42 status : 201 Created
2021/03/11 07:35:42 {"id": "3193051120113075854"} 
2021/03/11 07:35:42 CREATE INDEX: idx2 (defer)
2021/03/11 07:35:42 status : 201 Created
2021/03/11 07:35:42 {"id": "2550532654978778518"} 
2021/03/11 07:35:42 CREATE INDEX: idx3 (defer)
2021/03/11 07:35:42 status : 201 Created
2021/03/11 07:35:42 {"id": "4614031699541313173"} 
2021/03/11 07:35:42 CREATE INDEX: idx4 (defer)
2021/03/11 07:35:43 status : 201 Created
2021/03/11 07:35:43 {"id": "17200484775279025233"} 
2021/03/11 07:35:43 CREATE INDEX: idx5
2021/03/11 07:35:53 status : 201 Created
2021/03/11 07:35:53 {"id": "4267816244453050979"} 
2021/03/11 07:35:53 BUILD single deferred index
2021/03/11 07:35:53 202 Accepted
2021/03/11 07:35:54 GET all indexes
2021/03/11 07:35:54 200 OK
2021/03/11 07:35:54 index idx1 in INDEX_STATE_ACTIVE
2021/03/11 07:35:54 GET all indexes
2021/03/11 07:35:54 200 OK
2021/03/11 07:35:54 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:35:55 GET all indexes
2021/03/11 07:35:55 200 OK
2021/03/11 07:35:55 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:35:56 GET all indexes
2021/03/11 07:35:56 200 OK
2021/03/11 07:35:56 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:35:57 GET all indexes
2021/03/11 07:35:57 200 OK
2021/03/11 07:35:57 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:35:58 GET all indexes
2021/03/11 07:35:58 200 OK
2021/03/11 07:35:58 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:35:59 GET all indexes
2021/03/11 07:35:59 200 OK
2021/03/11 07:35:59 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:36:00 GET all indexes
2021/03/11 07:36:00 200 OK
2021/03/11 07:36:00 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:36:01 GET all indexes
2021/03/11 07:36:01 200 OK
2021/03/11 07:36:01 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:36:02 GET all indexes
2021/03/11 07:36:02 200 OK
2021/03/11 07:36:02 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:36:03 GET all indexes
2021/03/11 07:36:03 200 OK
2021/03/11 07:36:03 index idx2 in INDEX_STATE_INITIAL
2021/03/11 07:36:04 GET all indexes
2021/03/11 07:36:04 200 OK
2021/03/11 07:36:04 index idx2 in INDEX_STATE_CATCHUP
2021/03/11 07:36:05 GET all indexes
2021/03/11 07:36:05 200 OK
2021/03/11 07:36:05 index idx2 in INDEX_STATE_ACTIVE
2021/03/11 07:36:05 BUILD many deferred index
2021/03/11 07:36:05 202 Accepted 
2021/03/11 07:36:06 GET all indexes
2021/03/11 07:36:06 200 OK
2021/03/11 07:36:06 index idx1 in INDEX_STATE_ACTIVE
2021/03/11 07:36:06 GET all indexes
2021/03/11 07:36:06 200 OK
2021/03/11 07:36:06 index idx2 in INDEX_STATE_ACTIVE
2021/03/11 07:36:06 GET all indexes
2021/03/11 07:36:06 200 OK
2021/03/11 07:36:06 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:07 GET all indexes
2021/03/11 07:36:07 200 OK
2021/03/11 07:36:07 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:08 GET all indexes
2021/03/11 07:36:08 200 OK
2021/03/11 07:36:08 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:09 GET all indexes
2021/03/11 07:36:09 200 OK
2021/03/11 07:36:09 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:10 GET all indexes
2021/03/11 07:36:10 200 OK
2021/03/11 07:36:10 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:11 GET all indexes
2021/03/11 07:36:11 200 OK
2021/03/11 07:36:11 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:12 GET all indexes
2021/03/11 07:36:12 200 OK
2021/03/11 07:36:12 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:13 GET all indexes
2021/03/11 07:36:13 200 OK
2021/03/11 07:36:13 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:14 GET all indexes
2021/03/11 07:36:14 200 OK
2021/03/11 07:36:14 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:15 GET all indexes
2021/03/11 07:36:15 200 OK
2021/03/11 07:36:15 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:16 GET all indexes
2021/03/11 07:36:16 200 OK
2021/03/11 07:36:16 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:17 GET all indexes
2021/03/11 07:36:17 200 OK
2021/03/11 07:36:17 index idx3 in INDEX_STATE_INITIAL
2021/03/11 07:36:18 GET all indexes
2021/03/11 07:36:18 200 OK
2021/03/11 07:36:18 index idx3 in INDEX_STATE_CATCHUP
2021/03/11 07:36:19 GET all indexes
2021/03/11 07:36:19 200 OK
2021/03/11 07:36:19 index idx3 in INDEX_STATE_ACTIVE
2021/03/11 07:36:19 GET all indexes
2021/03/11 07:36:19 200 OK
2021/03/11 07:36:19 index idx4 in INDEX_STATE_ACTIVE
2021/03/11 07:36:19 GET all indexes
2021/03/11 07:36:19 200 OK
2021/03/11 07:36:19 index idx5 in INDEX_STATE_ACTIVE
2021/03/11 07:36:19 GET all indexes
2021/03/11 07:36:19 200 OK
2021/03/11 07:36:19 CREATED indexes: [3193051120113075854 2550532654978778518 4614031699541313173 17200484775279025233 4267816244453050979]
2021/03/11 07:36:19 
2021/03/11 07:36:19 LOOKUP missing index
2021/03/11 07:36:19 status : 404 Not Found
2021/03/11 07:36:19 LOOKUP Pyongyang
2021/03/11 07:36:20 status : 200 OK
2021/03/11 07:36:20 number of entries 579
2021/03/11 07:36:20 Expected and Actual scan responses are the same
2021/03/11 07:36:20 LOOKUP with stale as false
2021/03/11 07:36:20 status : 200 OK
2021/03/11 07:36:20 number of entries 579
2021/03/11 07:36:20 Expected and Actual scan responses are the same
2021/03/11 07:36:20 LOOKUP with Rome
2021/03/11 07:36:20 status : 200 OK
2021/03/11 07:36:20 number of entries 552
2021/03/11 07:36:20 Expected and Actual scan responses are the same
2021/03/11 07:36:20 RANGE missing index
2021/03/11 07:36:20 Status : 404 Not Found
2021/03/11 07:36:20 RANGE cities - none
2021/03/11 07:36:20 Status : 200 OK
2021/03/11 07:36:26 number of entries 140902
2021/03/11 07:36:27 Expected and Actual scan responses are the same
2021/03/11 07:36:27 RANGE cities -low
2021/03/11 07:36:27 Status : 200 OK
2021/03/11 07:36:32 number of entries 140902
2021/03/11 07:36:33 Expected and Actual scan responses are the same
2021/03/11 07:36:33 RANGE cities -high
2021/03/11 07:36:33 Status : 200 OK
2021/03/11 07:36:38 number of entries 140902
2021/03/11 07:36:39 Expected and Actual scan responses are the same
2021/03/11 07:36:39 RANGE cities - both
2021/03/11 07:36:39 Status : 200 OK
2021/03/11 07:36:45 number of entries 140902
2021/03/11 07:36:46 Expected and Actual scan responses are the same
2021/03/11 07:36:46 RANGE missing cities
2021/03/11 07:36:46 Status : 200 OK
2021/03/11 07:36:46 number of entries 0
2021/03/11 07:36:46 Expected and Actual scan responses are the same
2021/03/11 07:36:46 
2021/03/11 07:36:46 SCANALL missing index
2021/03/11 07:36:46 {"limit":1000000,"stale":"ok"}
2021/03/11 07:36:46 Status : 404 Not Found
2021/03/11 07:36:46 SCANALL stale ok
2021/03/11 07:36:46 {"limit":1000000,"stale":"ok"}
2021/03/11 07:36:46 Status : 200 OK
2021/03/11 07:36:51 number of entries 140902
2021/03/11 07:36:52 Expected and Actual scan responses are the same
2021/03/11 07:36:52 SCANALL stale false
2021/03/11 07:36:52 {"limit":1000000,"stale":"false"}
2021/03/11 07:36:52 Status : 200 OK
2021/03/11 07:36:57 number of entries 140902
2021/03/11 07:36:58 Expected and Actual scan responses are the same
2021/03/11 07:36:58 
2021/03/11 07:36:58 COUNT missing index
2021/03/11 07:36:58 Status : 404 Not Found
2021/03/11 07:36:58 COUNT cities - none
2021/03/11 07:36:59 Status : 200 OK
2021/03/11 07:36:59 number of entries 140902
2021/03/11 07:36:59 COUNT cities -low
2021/03/11 07:36:59 Status : 200 OK
2021/03/11 07:36:59 number of entries 140902
2021/03/11 07:36:59 COUNT cities -high
2021/03/11 07:36:59 Status : 200 OK
2021/03/11 07:36:59 number of entries 140902
2021/03/11 07:36:59 COUNT cities - both
2021/03/11 07:37:00 Status : 200 OK
2021/03/11 07:37:00 number of entries 140902
2021/03/11 07:37:00 COUNT missing cities
2021/03/11 07:37:00 Status : 200 OK
2021/03/11 07:37:00 number of entries 0
2021/03/11 07:37:00 
2021/03/11 07:37:02 STATS: Testing URLs with valid authentication
2021/03/11 07:37:02 STATS: Testing URLs with invalid authentication
2021/03/11 07:37:02 STATS: Testing invalid URLs
2021/03/11 07:37:02 STATS: Testing unsupported methods
2021/03/11 07:37:02 
--- PASS: TestRestfulAPI (91.95s)
=== RUN   TestStatIndexInstFilter
2021/03/11 07:37:02 CREATE INDEX: statIdx1
2021/03/11 07:37:13 status : 201 Created
2021/03/11 07:37:13 {"id": "6026021882070080364"} 
2021/03/11 07:37:13 CREATE INDEX: statIdx2
2021/03/11 07:37:24 status : 201 Created
2021/03/11 07:37:24 {"id": "13701149729469632317"} 
2021/03/11 07:37:24 Instance Id for statIdx2 is 10578502817959856863 (common.IndexInstId)
--- PASS: TestStatIndexInstFilter (22.48s)
=== RUN   TestBucketDefaultDelete
2021-03-11T07:37:24.870+05:30 [Warn] servicesChangeNotifier: Connection terminated for collection manifest notifier instance of http://%40query-cbauth@127.0.0.1:9000, default, bucket: default, (invalid byte in chunk length)
2021-03-11T07:37:24.931+05:30 [Warn] servicesChangeNotifier: Connection terminated for pool notifier instance of http://%40query-cbauth@127.0.0.1:9000, default (Notifier invalidated due to internal error)
2021/03/11 07:37:27 Deleted bucket default, responseBody: 
2021/03/11 07:37:42 Created bucket default, responseBody: 
2021-03-11T07:37:42.098+05:30 [Warn] servicesChangeNotifier: Connection terminated for buckets notifier instance of http://%40query-cbauth@127.0.0.1:9000, default (Notifier invalidated due to internal error)
2021/03/11 07:37:58 Populating the default bucket
2021/03/11 07:38:06 Using n1ql client
2021-03-11T07:38:06.200+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:38:06.200+05:30 [Info] GSIC[default/default-_default-_default-1615428486197576213] started ...
2021/03/11 07:38:06 Scan failed as expected with error: Index Not Found - cause: GSI index index_isActive not found.
2021/03/11 07:38:06 Populating the default bucket after it was deleted
2021/03/11 07:38:15 Created the secondary index index_isActive. Waiting for it to become active
2021/03/11 07:38:15 Index is now active
2021/03/11 07:38:15 Using n1ql client
2021/03/11 07:38:15 Expected and Actual scan responses are the same
--- PASS: TestBucketDefaultDelete (51.08s)
=== RUN   TestMixedDatatypesScanAll
2021/03/11 07:38:15 In TestMixedDatatypesScanAll()
2021/03/11 07:38:15 Before test begin: Length of kv docs is 10002
2021/03/11 07:38:15 In DropAllSecondaryIndexes()
2021/03/11 07:38:15 Index found:  index_isActive
2021/03/11 07:38:16 Dropped index index_isActive
2021/03/11 07:38:16 Number of number fields is: 255
2021/03/11 07:38:16 Number of string fields is: 276
2021/03/11 07:38:16 Number of json fields is: 227
2021/03/11 07:38:16 Number of true bool fields is: 112
2021/03/11 07:38:16 Number of false bool fields is: 130
2021/03/11 07:38:16 After generate docs: Length of kv docs is 11002
2021/03/11 07:38:16 Setting mixed datatypes JSON docs in KV
2021/03/11 07:38:20 Created the secondary index index_mixeddt. Waiting for it to become active
2021/03/11 07:38:20 Index is now active
2021/03/11 07:38:20 Using n1ql client
2021/03/11 07:38:20 Expected and Actual scan responses are the same
2021/03/11 07:38:20 Lengths of expected and actual scan results are:  1000 and 1000
2021/03/11 07:38:20 End: Length of kv docs is 11002
--- PASS: TestMixedDatatypesScanAll (4.78s)
=== RUN   TestMixedDatatypesRange_Float
2021/03/11 07:38:20 In TestMixedDatatypesRange_Float()
2021/03/11 07:38:20 In DropAllSecondaryIndexes()
2021/03/11 07:38:20 Index found:  index_mixeddt
2021/03/11 07:38:20 Dropped index index_mixeddt
2021/03/11 07:38:21 Number of number fields is: 262
2021/03/11 07:38:21 Number of string fields is: 232
2021/03/11 07:38:21 Number of json fields is: 254
2021/03/11 07:38:21 Number of true bool fields is: 117
2021/03/11 07:38:21 Number of false bool fields is: 135
2021/03/11 07:38:21 Setting mixed datatypes JSON docs in KV
2021/03/11 07:38:25 Created the secondary index index_mixeddt. Waiting for it to become active
2021/03/11 07:38:25 Index is now active
2021/03/11 07:38:25 Using n1ql client
2021/03/11 07:38:25 Expected and Actual scan responses are the same
2021/03/11 07:38:25 Lengths of expected and actual scan results are:  17 and 17
2021/03/11 07:38:25 Using n1ql client
2021/03/11 07:38:25 Expected and Actual scan responses are the same
2021/03/11 07:38:25 Lengths of expected and actual scan results are:  1 and 1
2021/03/11 07:38:25 Length of kv docs is 12002
--- PASS: TestMixedDatatypesRange_Float (4.90s)
=== RUN   TestMixedDatatypesRange_String
2021/03/11 07:38:25 In TestMixedDatatypesRange_String()
2021/03/11 07:38:25 In DropAllSecondaryIndexes()
2021/03/11 07:38:25 Index found:  index_mixeddt
2021/03/11 07:38:25 Dropped index index_mixeddt
2021/03/11 07:38:26 Number of number fields is: 233
2021/03/11 07:38:26 Number of string fields is: 230
2021/03/11 07:38:26 Number of json fields is: 285
2021/03/11 07:38:26 Number of true bool fields is: 138
2021/03/11 07:38:26 Number of false bool fields is: 114
2021/03/11 07:38:26 Setting mixed datatypes JSON docs in KV
2021/03/11 07:38:31 Created the secondary index index_mixeddt. Waiting for it to become active
2021/03/11 07:38:31 Index is now active
2021/03/11 07:38:31 Using n1ql client
2021/03/11 07:38:31 Expected and Actual scan responses are the same
2021/03/11 07:38:31 Lengths of expected and actual scan results are:  206 and 206
2021/03/11 07:38:31 Length of kv docs is 13002
--- PASS: TestMixedDatatypesRange_String (5.92s)
=== RUN   TestMixedDatatypesRange_Json
2021/03/11 07:38:31 In TestMixedDatatypesRange_Json()
2021/03/11 07:38:31 In DropAllSecondaryIndexes()
2021/03/11 07:38:31 Index found:  index_mixeddt
2021/03/11 07:38:31 Dropped index index_mixeddt
2021/03/11 07:38:31 Number of number fields is: 255
2021/03/11 07:38:31 Number of string fields is: 242
2021/03/11 07:38:31 Number of json fields is: 248
2021/03/11 07:38:31 Number of true bool fields is: 120
2021/03/11 07:38:31 Number of false bool fields is: 135
2021/03/11 07:38:31 Setting mixed datatypes JSON docs in KV
2021/03/11 07:38:36 Created the secondary index index_mixeddt. Waiting for it to become active
2021/03/11 07:38:36 Index is now active
2021/03/11 07:38:36 Using n1ql client
2021/03/11 07:38:36 Expected and Actual scan responses are the same
2021/03/11 07:38:36 Lengths of expected and actual scan results are:  787 and 787
2021/03/11 07:38:36 Length of kv docs is 14002
--- PASS: TestMixedDatatypesRange_Json (4.93s)
=== RUN   TestMixedDatatypesScan_Bool
2021/03/11 07:38:36 In TestMixedDatatypesScan_Bool()
2021/03/11 07:38:36 In DropAllSecondaryIndexes()
2021/03/11 07:38:36 Index found:  index_mixeddt
2021/03/11 07:38:36 Dropped index index_mixeddt
2021/03/11 07:38:36 Number of number fields is: 269
2021/03/11 07:38:36 Number of string fields is: 242
2021/03/11 07:38:36 Number of json fields is: 270
2021/03/11 07:38:36 Number of true bool fields is: 98
2021/03/11 07:38:36 Number of false bool fields is: 121
2021/03/11 07:38:36 Setting mixed datatypes JSON docs in KV
2021/03/11 07:38:42 Created the secondary index index_mixeddt. Waiting for it to become active
2021/03/11 07:38:42 Index is now active
2021/03/11 07:38:42 Using n1ql client
2021/03/11 07:38:42 Expected and Actual scan responses are the same
2021/03/11 07:38:42 Lengths of expected and actual scan results are:  473 and 473
2021/03/11 07:38:42 Using n1ql client
2021/03/11 07:38:42 Expected and Actual scan responses are the same
2021/03/11 07:38:42 Lengths of expected and actual scan results are:  505 and 505
2021/03/11 07:38:42 Length of kv docs is 15002
--- PASS: TestMixedDatatypesScan_Bool (6.46s)
=== RUN   TestLargeSecondaryKeyLength
2021/03/11 07:38:42 In TestLargeSecondaryKeyLength()
2021/03/11 07:38:42 In DropAllSecondaryIndexes()
2021/03/11 07:38:42 Index found:  index_mixeddt
2021/03/11 07:38:42 Dropped index index_mixeddt
2021/03/11 07:38:43 Setting JSON docs in KV
2021/03/11 07:38:49 Created the secondary index index_LongSecField. Waiting for it to become active
2021/03/11 07:38:49 Index is now active
2021/03/11 07:38:49 Using n1ql client
2021/03/11 07:38:49 ScanAll: Lengths of expected and actual scan results are:  1000 and 1000
2021/03/11 07:38:49 Expected and Actual scan responses are the same
2021/03/11 07:38:49 Using n1ql client
2021/03/11 07:38:49 Range: Lengths of expected and actual scan results are:  817 and 817
2021/03/11 07:38:49 Expected and Actual scan responses are the same
2021/03/11 07:38:49 End: Length of kv docs is 16002
--- PASS: TestLargeSecondaryKeyLength (6.38s)
=== RUN   TestLargePrimaryKeyLength
2021/03/11 07:38:49 In TestLargePrimaryKeyLength()
2021/03/11 07:38:49 In DropAllSecondaryIndexes()
2021/03/11 07:38:49 Index found:  index_LongSecField
2021/03/11 07:38:49 Dropped index index_LongSecField
2021/03/11 07:38:49 Setting JSON docs in KV
2021/03/11 07:38:55 Created the secondary index index_LongPrimaryField. Waiting for it to become active
2021/03/11 07:38:55 Index is now active
2021/03/11 07:38:55 Using n1ql client
2021/03/11 07:38:55 Lengths of number of docs and scanResults are:  17002 and 17002
2021/03/11 07:38:55 End: Length of kv docs is 17002
--- PASS: TestLargePrimaryKeyLength (6.19s)
=== RUN   TestUpdateMutations_DeleteField
2021/03/11 07:38:55 In TestUpdateMutations_DeleteField()
2021/03/11 07:38:55 Setting JSON docs in KV
2021/03/11 07:39:03 Created the secondary index index_bal. Waiting for it to become active
2021/03/11 07:39:03 Index is now active
2021/03/11 07:39:03 Using n1ql client
2021/03/11 07:39:03 Expected and Actual scan responses are the same
2021/03/11 07:39:03 Using n1ql client
2021/03/11 07:39:03 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_DeleteField (8.07s)
=== RUN   TestUpdateMutations_AddField
2021/03/11 07:39:03 In TestUpdateMutations_AddField()
2021/03/11 07:39:03 Setting JSON docs in KV
2021/03/11 07:39:09 Created the secondary index index_newField. Waiting for it to become active
2021/03/11 07:39:09 Index is now active
2021/03/11 07:39:09 Using n1ql client
2021/03/11 07:39:09 Count of scan results before add field mutations:  0
2021/03/11 07:39:09 Expected and Actual scan responses are the same
2021/03/11 07:39:10 Using n1ql client
2021/03/11 07:39:10 Count of scan results after add field mutations:  300
2021/03/11 07:39:10 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_AddField (6.55s)
=== RUN   TestUpdateMutations_DataTypeChange
2021/03/11 07:39:10 In TestUpdateMutations_DataTypeChange()
2021/03/11 07:39:10 Setting JSON docs in KV
2021/03/11 07:39:18 Created the secondary index index_isUserActive. Waiting for it to become active
2021/03/11 07:39:18 Index is now active
2021/03/11 07:39:18 Using n1ql client
2021/03/11 07:39:18 Expected and Actual scan responses are the same
2021/03/11 07:39:19 Using n1ql client
2021/03/11 07:39:19 Expected and Actual scan responses are the same
2021/03/11 07:39:19 Using n1ql client
2021/03/11 07:39:19 Expected and Actual scan responses are the same
2021/03/11 07:39:19 Using n1ql client
2021/03/11 07:39:19 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_DataTypeChange (9.25s)
=== RUN   TestMultipleBuckets
2021/03/11 07:39:19 In TestMultipleBuckets()
2021/03/11 07:39:19 In DropAllSecondaryIndexes()
2021/03/11 07:39:19 Index found:  index_bal
2021/03/11 07:39:19 Dropped index index_bal
2021/03/11 07:39:19 Index found:  index_LongPrimaryField
2021/03/11 07:39:19 Dropped index index_LongPrimaryField
2021/03/11 07:39:19 Index found:  index_newField
2021/03/11 07:39:19 Dropped index index_newField
2021/03/11 07:39:19 Index found:  index_isUserActive
2021/03/11 07:39:20 Dropped index index_isUserActive
2021/03/11 07:39:59 Flushed the bucket default, Response body: 
2021/03/11 07:40:02 Modified parameters of bucket default, responseBody: 
2021/03/11 07:40:02 Created bucket testbucket2, responseBody: 
2021/03/11 07:40:02 Created bucket testbucket3, responseBody: 
2021/03/11 07:40:02 Created bucket testbucket4, responseBody: 
2021/03/11 07:40:17 Generating docs and populating all the buckets
2021/03/11 07:40:21 Created the secondary index bucket1_age. Waiting for it to become active
2021/03/11 07:40:21 Index is now active
2021/03/11 07:40:28 Created the secondary index bucket2_city. Waiting for it to become active
2021/03/11 07:40:28 Index is now active
2021/03/11 07:40:35 Created the secondary index bucket3_gender. Waiting for it to become active
2021/03/11 07:40:35 Index is now active
2021/03/11 07:40:42 Created the secondary index bucket4_balance. Waiting for it to become active
2021/03/11 07:40:42 Index is now active
2021/03/11 07:40:45 Using n1ql client
2021/03/11 07:40:45 Expected and Actual scan responses are the same
2021/03/11 07:40:45 Using n1ql client
2021-03-11T07:40:45.631+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:40:45.631+05:30 [Info] GSIC[default/testbucket2-_default-_default-1615428645627032597] started ...
2021/03/11 07:40:45 Expected and Actual scan responses are the same
2021/03/11 07:40:45 Using n1ql client
2021-03-11T07:40:45.641+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:40:45.641+05:30 [Info] GSIC[default/testbucket3-_default-_default-1615428645638131017] started ...
2021/03/11 07:40:45 Expected and Actual scan responses are the same
2021/03/11 07:40:45 Using n1ql client
2021-03-11T07:40:45.656+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:40:45.656+05:30 [Info] GSIC[default/testbucket4-_default-_default-1615428645649474336] started ...
2021/03/11 07:40:45 Expected and Actual scan responses are the same
2021/03/11 07:40:49 Deleted bucket testbucket2, responseBody: 
2021/03/11 07:40:52 Deleted bucket testbucket3, responseBody: 
2021/03/11 07:40:55 Deleted bucket testbucket4, responseBody: 
2021/03/11 07:40:58 Modified parameters of bucket default, responseBody: 
--- PASS: TestMultipleBuckets (113.88s)
=== RUN   TestBucketFlush
2021/03/11 07:41:13 In TestBucketFlush()
2021/03/11 07:41:13 In DropAllSecondaryIndexes()
2021/03/11 07:41:13 Index found:  bucket1_age
2021/03/11 07:41:13 Dropped index bucket1_age
2021/03/11 07:41:52 Flushed the bucket default, Response body: 
2021/03/11 07:41:56 Created the secondary index index_age. Waiting for it to become active
2021/03/11 07:41:56 Index is now active
2021/03/11 07:41:56 Using n1ql client
2021/03/11 07:41:57 Expected and Actual scan responses are the same
2021/03/11 07:42:01 Created the secondary index index_gender. Waiting for it to become active
2021/03/11 07:42:01 Index is now active
2021/03/11 07:42:01 Using n1ql client
2021/03/11 07:42:01 Expected and Actual scan responses are the same
2021/03/11 07:42:05 Created the secondary index index_city. Waiting for it to become active
2021/03/11 07:42:05 Index is now active
2021/03/11 07:42:05 Using n1ql client
2021/03/11 07:42:05 Expected and Actual scan responses are the same
2021/03/11 07:42:43 Flushed the bucket default, Response body: 
2021/03/11 07:42:43 TestBucketFlush:: Flushed the bucket
2021/03/11 07:42:43 Using n1ql client
2021/03/11 07:42:43 Using n1ql client
2021/03/11 07:42:43 Using n1ql client
--- PASS: TestBucketFlush (90.59s)
=== RUN   TestLargeDocumentSize
2021/03/11 07:42:43 In TestLargeDocumentSize()
2021/03/11 07:42:43 Data file exists. Skipping download
2021/03/11 07:42:43 Length of docs and largeDocs = 200 and 200
2021/03/11 07:42:48 Created the secondary index index_userscreenname. Waiting for it to become active
2021/03/11 07:42:48 Index is now active
2021/03/11 07:42:48 Using n1ql client
2021/03/11 07:42:48 Expected and Actual scan responses are the same
--- PASS: TestLargeDocumentSize (4.35s)
=== RUN   TestFieldsWithSpecialCharacters
2021/03/11 07:42:48 In TestFieldsWithSpecialCharacters()
2021/03/11 07:42:53 Created the secondary index index_specialchar. Waiting for it to become active
2021/03/11 07:42:53 Index is now active
2021/03/11 07:42:53 Looking up value šµŠßwvv
2021/03/11 07:42:53 Using n1ql client
2021/03/11 07:42:53 Expected and Actual scan responses are the same
--- PASS: TestFieldsWithSpecialCharacters (5.18s)
=== RUN   TestIndexNameValidation
2021/03/11 07:42:53 In TestIndexNameValidation()
2021/03/11 07:42:53 Setting JSON docs in KV
2021/03/11 07:42:54 Creation of index with invalid name ÌñÐÉx&(abc_% failed as expected
2021/03/11 07:42:58 Created the secondary index #primary-Index_test. Waiting for it to become active
2021/03/11 07:42:58 Index is now active
2021/03/11 07:42:58 Using n1ql client
2021/03/11 07:42:58 Expected and Actual scan responses are the same
--- PASS: TestIndexNameValidation (5.34s)
=== RUN   TestSameFieldNameAtDifferentLevels
2021/03/11 07:42:58 In TestSameFieldNameAtDifferentLevels()
2021/03/11 07:42:59 Setting JSON docs in KV
2021/03/11 07:43:03 Created the secondary index cityindex. Waiting for it to become active
2021/03/11 07:43:03 Index is now active
2021/03/11 07:43:03 Using n1ql client
2021/03/11 07:43:03 Expected and Actual scan responses are the same
--- PASS: TestSameFieldNameAtDifferentLevels (5.33s)
=== RUN   TestSameIndexNameInTwoBuckets
2021/03/11 07:43:03 In TestSameIndexNameInTwoBuckets()
2021/03/11 07:43:03 In DropAllSecondaryIndexes()
2021/03/11 07:43:03 Index found:  index_gender
2021/03/11 07:43:04 Dropped index index_gender
2021/03/11 07:43:04 Index found:  index_userscreenname
2021/03/11 07:43:04 Dropped index index_userscreenname
2021/03/11 07:43:04 Index found:  cityindex
2021/03/11 07:43:04 Dropped index cityindex
2021/03/11 07:43:04 Index found:  index_city
2021/03/11 07:43:04 Dropped index index_city
2021/03/11 07:43:04 Index found:  index_specialchar
2021/03/11 07:43:04 Dropped index index_specialchar
2021/03/11 07:43:04 Index found:  #primary-Index_test
2021/03/11 07:43:04 Dropped index #primary-Index_test
2021/03/11 07:43:04 Index found:  index_age
2021/03/11 07:43:04 Dropped index index_age
2021/03/11 07:43:43 Flushed the bucket default, Response body: 
2021/03/11 07:43:46 Modified parameters of bucket default, responseBody: 
2021/03/11 07:43:46 Created bucket buck2, responseBody: 
2021/03/11 07:44:01 Generating docs and populating all the buckets
2021/03/11 07:44:05 Created the secondary index b_idx. Waiting for it to become active
2021/03/11 07:44:05 Index is now active
2021/03/11 07:44:11 Created the secondary index b_idx. Waiting for it to become active
2021/03/11 07:44:11 Index is now active
2021/03/11 07:44:14 Using n1ql client
2021/03/11 07:44:14 Expected and Actual scan responses are the same
2021/03/11 07:44:14 Using n1ql client
2021-03-11T07:44:14.539+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:44:14.540+05:30 [Info] GSIC[default/buck2-_default-_default-1615428854537088338] started ...
2021/03/11 07:44:14 Expected and Actual scan responses are the same
2021/03/11 07:44:17 Modified parameters of bucket default, responseBody: 
2021/03/11 07:44:20 Deleted bucket buck2, responseBody: 
--- PASS: TestSameIndexNameInTwoBuckets (91.12s)
=== RUN   TestLargeKeysSplChars
2021/03/11 07:44:35 In TestLargeKeysSplChars()
2021/03/11 07:44:46 Created the secondary index idspl1. Waiting for it to become active
2021/03/11 07:44:46 Index is now active
2021/03/11 07:44:53 Created the secondary index idspl2. Waiting for it to become active
2021/03/11 07:44:53 Index is now active
2021/03/11 07:44:57 Created the secondary index idspl3. Waiting for it to become active
2021/03/11 07:44:57 Index is now active
2021/03/11 07:44:57 Using n1ql client
2021/03/11 07:44:58 Expected and Actual scan responses are the same
2021-03-11T07:44:58.010+05:30 [Error] transport error between 127.0.0.1:41130->127.0.0.1:9107: write tcp 127.0.0.1:41130->127.0.0.1:9107: write: broken pipe
2021-03-11T07:44:58.010+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:41130->127.0.0.1:9107: write: broken pipe`
2021-03-11T07:44:58.017+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 07:44:58 Expected and Actual scan responses are the same
2021/03/11 07:44:58 Using n1ql client
2021/03/11 07:44:58 Expected and Actual scan responses are the same
--- PASS: TestLargeKeysSplChars (23.40s)
=== RUN   TestVeryLargeIndexKey
2021/03/11 07:44:58 In DropAllSecondaryIndexes()
2021/03/11 07:44:58 Index found:  idspl3
2021/03/11 07:44:58 Dropped index idspl3
2021/03/11 07:44:58 Index found:  b_idx
2021/03/11 07:44:58 Dropped index b_idx
2021/03/11 07:44:58 Index found:  idspl2
2021/03/11 07:44:58 Dropped index idspl2
2021/03/11 07:44:58 Index found:  idspl1
2021/03/11 07:44:59 Dropped index idspl1
2021/03/11 07:45:37 Flushed the bucket default, Response body: 
2021/03/11 07:45:37 TestVeryLargeIndexKey:: Flushed the bucket
2021/03/11 07:45:38 clusterconfig.KVAddress = 127.0.0.1:9000
2021/03/11 07:45:43 Created the secondary index i1. Waiting for it to become active
2021/03/11 07:45:43 Index is now active
2021/03/11 07:45:43 Using n1ql client
2021/03/11 07:45:44 Expected and Actual scan responses are the same
2021/03/11 07:45:49 Created the secondary index i2. Waiting for it to become active
2021/03/11 07:45:49 Index is now active
2021/03/11 07:45:49 Using n1ql client
2021/03/11 07:45:50 Expected and Actual scan responses are the same
2021/03/11 07:45:50 In DropAllSecondaryIndexes()
2021/03/11 07:45:50 Index found:  i1
2021/03/11 07:45:51 Dropped index i1
2021/03/11 07:45:51 Index found:  i2
2021/03/11 07:45:51 Dropped index i2
2021/03/11 07:46:30 Flushed the bucket default, Response body: 
--- PASS: TestVeryLargeIndexKey (92.05s)
=== RUN   TestTempBufScanResult
2021/03/11 07:46:30 In DropAllSecondaryIndexes()
2021/03/11 07:47:09 Flushed the bucket default, Response body: 
2021/03/11 07:47:09 TestTempBufScanResult:: Flushed the bucket
2021/03/11 07:47:12 Created the secondary index index_idxKey. Waiting for it to become active
2021/03/11 07:47:12 Index is now active
2021/03/11 07:47:12 Using n1ql client
2021/03/11 07:47:13 Expected and Actual scan responses are the same
2021/03/11 07:47:13 In DropAllSecondaryIndexes()
2021/03/11 07:47:13 Index found:  index_idxKey
2021/03/11 07:47:13 Dropped index index_idxKey
2021/03/11 07:47:52 Flushed the bucket default, Response body: 
--- PASS: TestTempBufScanResult (82.02s)
=== RUN   TestBuildDeferredAnotherBuilding
2021/03/11 07:47:52 In TestBuildDeferredAnotherBuilding()
2021/03/11 07:47:52 In DropAllSecondaryIndexes()
2021/03/11 07:48:42 Setting JSON docs in KV
2021/03/11 07:50:42 Build the deferred index id_age1. Waiting for the index to become active
2021/03/11 07:50:42 Waiting for index to go active ...
2021/03/11 07:50:43 Waiting for index to go active ...
2021/03/11 07:50:44 Waiting for index to go active ...
2021/03/11 07:50:45 Waiting for index to go active ...
2021/03/11 07:50:46 Waiting for index to go active ...
2021/03/11 07:50:47 Waiting for index to go active ...
2021/03/11 07:50:48 Waiting for index to go active ...
2021/03/11 07:50:49 Waiting for index to go active ...
2021/03/11 07:50:50 Waiting for index to go active ...
2021/03/11 07:50:51 Waiting for index to go active ...
2021/03/11 07:50:52 Waiting for index to go active ...
2021/03/11 07:50:53 Waiting for index to go active ...
2021/03/11 07:50:54 Waiting for index to go active ...
2021/03/11 07:50:55 Waiting for index to go active ...
2021/03/11 07:50:56 Index is now active
2021/03/11 07:50:56 Build command issued for the deferred indexes [16132651459823605031]
2021/03/11 07:50:58 Build index failed as expected: Build index fails. Index id_age will retry building in the background for reason: Build Already In Progress. Keyspace default.
2021/03/11 07:50:58 Waiting for index to go active ...
2021/03/11 07:50:59 Waiting for index to go active ...
2021/03/11 07:51:00 Waiting for index to go active ...
2021/03/11 07:51:01 Waiting for index to go active ...
2021/03/11 07:51:02 Waiting for index to go active ...
2021/03/11 07:51:03 Waiting for index to go active ...
2021/03/11 07:51:04 Waiting for index to go active ...
2021/03/11 07:51:05 Waiting for index to go active ...
2021/03/11 07:51:06 Waiting for index to go active ...
2021/03/11 07:51:07 Waiting for index to go active ...
2021/03/11 07:51:08 Waiting for index to go active ...
2021/03/11 07:51:09 Waiting for index to go active ...
2021/03/11 07:51:10 Waiting for index to go active ...
2021/03/11 07:51:11 Index is now active
2021/03/11 07:51:11 Waiting for index to go active ...
2021/03/11 07:51:12 Waiting for index to go active ...
2021/03/11 07:51:13 Waiting for index to go active ...
2021/03/11 07:51:14 Waiting for index to go active ...
2021/03/11 07:51:15 Waiting for index to go active ...
2021/03/11 07:51:16 Waiting for index to go active ...
2021/03/11 07:51:17 Waiting for index to go active ...
2021/03/11 07:51:18 Waiting for index to go active ...
2021/03/11 07:51:19 Waiting for index to go active ...
2021/03/11 07:51:20 Waiting for index to go active ...
2021/03/11 07:51:21 Waiting for index to go active ...
2021/03/11 07:51:22 Waiting for index to go active ...
2021/03/11 07:51:23 Waiting for index to go active ...
2021/03/11 07:51:24 Waiting for index to go active ...
2021/03/11 07:51:25 Waiting for index to go active ...
2021/03/11 07:51:26 Waiting for index to go active ...
2021/03/11 07:51:27 Waiting for index to go active ...
2021/03/11 07:51:28 Waiting for index to go active ...
2021/03/11 07:51:29 Waiting for index to go active ...
2021/03/11 07:51:30 Waiting for index to go active ...
2021/03/11 07:51:31 Waiting for index to go active ...
2021/03/11 07:51:32 Waiting for index to go active ...
2021/03/11 07:51:33 Waiting for index to go active ...
2021/03/11 07:51:34 Waiting for index to go active ...
2021/03/11 07:51:35 Waiting for index to go active ...
2021/03/11 07:51:36 Waiting for index to go active ...
2021/03/11 07:51:37 Waiting for index to go active ...
2021/03/11 07:51:38 Waiting for index to go active ...
2021/03/11 07:51:39 Waiting for index to go active ...
2021/03/11 07:51:40 Waiting for index to go active ...
2021/03/11 07:51:41 Waiting for index to go active ...
2021/03/11 07:51:42 Index is now active
2021/03/11 07:51:42 Using n1ql client
2021/03/11 07:51:43 Expected and Actual scan responses are the same
2021/03/11 07:51:43 Using n1ql client
2021/03/11 07:51:43 Expected and Actual scan responses are the same
--- PASS: TestBuildDeferredAnotherBuilding (231.27s)
=== RUN   TestMultipleBucketsDeferredBuild
2021/03/11 07:51:43 In TestMultipleBucketsDeferredBuild()
2021/03/11 07:51:43 In DropAllSecondaryIndexes()
2021/03/11 07:51:43 Index found:  id_age
2021/03/11 07:51:43 Dropped index id_age
2021/03/11 07:51:43 Index found:  id_company
2021/03/11 07:51:44 Dropped index id_company
2021/03/11 07:51:44 Index found:  id_age1
2021/03/11 07:51:44 Dropped index id_age1
2021/03/11 07:52:22 Flushed the bucket default, Response body: 
2021/03/11 07:52:25 Modified parameters of bucket default, responseBody: 
2021/03/11 07:52:25 http://127.0.0.1:9000/pools/default/buckets/defertest_buck2
2021/03/11 07:52:25 &{DELETE http://127.0.0.1:9000/pools/default/buckets/defertest_buck2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 07:52:25 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:22:24 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc016bd90c0 31 [] false false map[] 0xc016796700 }
2021/03/11 07:52:25 DeleteBucket failed for bucket defertest_buck2 
2021/03/11 07:52:25 Deleted bucket defertest_buck2, responseBody: Requested resource not found.
2021/03/11 07:52:25 Created bucket defertest_buck2, responseBody: 
2021/03/11 07:52:40 Setting JSON docs in KV
2021/03/11 07:54:04 Build command issued for the deferred indexes [4173336860533582864]
2021/03/11 07:54:04 Build command issued for the deferred indexes [5140605857634189662 12798524357495822958]
2021/03/11 07:54:04 Index state of 12798524357495822958 is INDEX_STATE_READY
2021/03/11 07:54:04 Waiting for index to go active ...
2021/03/11 07:54:05 Waiting for index to go active ...
2021/03/11 07:54:06 Waiting for index to go active ...
2021/03/11 07:54:07 Waiting for index to go active ...
2021/03/11 07:54:08 Waiting for index to go active ...
2021/03/11 07:54:09 Waiting for index to go active ...
2021/03/11 07:54:10 Waiting for index to go active ...
2021/03/11 07:54:11 Waiting for index to go active ...
2021/03/11 07:54:12 Waiting for index to go active ...
2021/03/11 07:54:13 Index is now active
2021/03/11 07:54:13 Waiting for index to go active ...
2021/03/11 07:54:14 Waiting for index to go active ...
2021/03/11 07:54:15 Waiting for index to go active ...
2021/03/11 07:54:16 Waiting for index to go active ...
2021/03/11 07:54:17 Waiting for index to go active ...
2021/03/11 07:54:18 Waiting for index to go active ...
2021/03/11 07:54:19 Waiting for index to go active ...
2021/03/11 07:54:20 Waiting for index to go active ...
2021/03/11 07:54:21 Waiting for index to go active ...
2021/03/11 07:54:22 Waiting for index to go active ...
2021/03/11 07:54:23 Waiting for index to go active ...
2021/03/11 07:54:24 Waiting for index to go active ...
2021/03/11 07:54:25 Waiting for index to go active ...
2021/03/11 07:54:26 Waiting for index to go active ...
2021/03/11 07:54:27 Waiting for index to go active ...
2021/03/11 07:54:28 Waiting for index to go active ...
2021/03/11 07:54:29 Waiting for index to go active ...
2021/03/11 07:54:30 Waiting for index to go active ...
2021/03/11 07:54:31 Waiting for index to go active ...
2021/03/11 07:54:32 Waiting for index to go active ...
2021/03/11 07:54:33 Waiting for index to go active ...
2021/03/11 07:54:34 Waiting for index to go active ...
2021/03/11 07:54:35 Waiting for index to go active ...
2021/03/11 07:54:36 Waiting for index to go active ...
2021/03/11 07:54:37 Waiting for index to go active ...
2021/03/11 07:54:38 Waiting for index to go active ...
2021/03/11 07:54:39 Waiting for index to go active ...
2021/03/11 07:54:40 Index is now active
2021/03/11 07:54:40 Using n1ql client
2021/03/11 07:54:40 Expected and Actual scan responses are the same
2021/03/11 07:54:40 Using n1ql client
2021/03/11 07:54:40 Expected and Actual scan responses are the same
2021/03/11 07:54:41 Using n1ql client
2021-03-11T07:54:41.051+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T07:54:41.052+05:30 [Info] GSIC[default/defertest_buck2-_default-_default-1615429481047555738] started ...
2021/03/11 07:54:41 Expected and Actual scan responses are the same
2021/03/11 07:54:44 Modified parameters of bucket default, responseBody: 
2021/03/11 07:54:46 Deleted bucket defertest_buck2, responseBody: 
--- PASS: TestMultipleBucketsDeferredBuild (188.11s)
=== RUN   TestCreateDropCreateDeferredIndex
2021/03/11 07:54:51 In TestCreateDropCreateDeferredIndex()
2021/03/11 07:54:51 In DropAllSecondaryIndexes()
2021/03/11 07:54:51 Index found:  buck1_id1
2021/03/11 07:54:52 Dropped index buck1_id1
2021/03/11 07:54:52 Index found:  buck1_id2
2021/03/11 07:54:52 Dropped index buck1_id2
2021/03/11 07:54:54 Setting JSON docs in KV
2021/03/11 07:55:07 Created the secondary index id_company. Waiting for it to become active
2021/03/11 07:55:07 Index is now active
2021/03/11 07:55:08 Dropping the secondary index id_age
2021/03/11 07:55:08 Index dropped
2021/03/11 07:55:12 Setting JSON docs in KV
2021/03/11 07:55:21 Using n1ql client
2021/03/11 07:55:22 Expected and Actual scan responses are the same
--- PASS: TestCreateDropCreateDeferredIndex (30.05s)
=== RUN   TestMultipleDeferredIndexes_BuildTogether
2021/03/11 07:55:22 In TestMultipleDeferredIndexes_BuildTogether()
2021/03/11 07:55:22 In DropAllSecondaryIndexes()
2021/03/11 07:55:22 Index found:  id_company
2021/03/11 07:55:22 Dropped index id_company
2021/03/11 07:55:25 Setting JSON docs in KV
2021/03/11 07:55:39 Created the secondary index id_company. Waiting for it to become active
2021/03/11 07:55:39 Index is now active
2021/03/11 07:55:41 Build command issued for the deferred indexes [id_age id_gender id_isActive]
2021/03/11 07:55:41 Waiting for the index id_age to become active
2021/03/11 07:55:41 Waiting for index to go active ...
2021/03/11 07:55:42 Waiting for index to go active ...
2021/03/11 07:55:43 Waiting for index to go active ...
2021/03/11 07:55:44 Waiting for index to go active ...
2021/03/11 07:55:45 Waiting for index to go active ...
2021/03/11 07:55:46 Waiting for index to go active ...
2021/03/11 07:55:47 Waiting for index to go active ...
2021/03/11 07:55:48 Waiting for index to go active ...
2021/03/11 07:55:49 Waiting for index to go active ...
2021/03/11 07:55:50 Waiting for index to go active ...
2021/03/11 07:55:51 Waiting for index to go active ...
2021/03/11 07:55:52 Waiting for index to go active ...
2021/03/11 07:55:53 Waiting for index to go active ...
2021/03/11 07:55:54 Index is now active
2021/03/11 07:55:54 Waiting for the index id_gender to become active
2021/03/11 07:55:54 Index is now active
2021/03/11 07:55:54 Waiting for the index id_isActive to become active
2021/03/11 07:55:54 Index is now active
2021/03/11 07:55:54 Using n1ql client
2021/03/11 07:55:54 Expected and Actual scan responses are the same
2021/03/11 07:55:57 Setting JSON docs in KV
2021/03/11 07:56:07 Using n1ql client
2021/03/11 07:56:07 Expected and Actual scan responses are the same
2021/03/11 07:56:07 Using n1ql client
2021/03/11 07:56:07 Expected and Actual scan responses are the same
--- PASS: TestMultipleDeferredIndexes_BuildTogether (45.96s)
=== RUN   TestMultipleDeferredIndexes_BuildOneByOne
2021/03/11 07:56:07 In TestMultipleDeferredIndexes_BuildOneByOne()
2021/03/11 07:56:07 In DropAllSecondaryIndexes()
2021/03/11 07:56:07 Index found:  id_age
2021/03/11 07:56:08 Dropped index id_age
2021/03/11 07:56:08 Index found:  id_company
2021/03/11 07:56:08 Dropped index id_company
2021/03/11 07:56:08 Index found:  id_isActive
2021/03/11 07:56:08 Dropped index id_isActive
2021/03/11 07:56:08 Index found:  id_gender
2021/03/11 07:56:08 Dropped index id_gender
2021/03/11 07:56:11 Setting JSON docs in KV
2021/03/11 07:56:27 Created the secondary index id_company. Waiting for it to become active
2021/03/11 07:56:27 Index is now active
2021/03/11 07:56:28 Build command issued for the deferred indexes [id_age]
2021/03/11 07:56:28 Waiting for the index id_age to become active
2021/03/11 07:56:28 Waiting for index to go active ...
2021/03/11 07:56:29 Waiting for index to go active ...
2021/03/11 07:56:30 Waiting for index to go active ...
2021/03/11 07:56:31 Waiting for index to go active ...
2021/03/11 07:56:32 Waiting for index to go active ...
2021/03/11 07:56:33 Waiting for index to go active ...
2021/03/11 07:56:34 Waiting for index to go active ...
2021/03/11 07:56:35 Waiting for index to go active ...
2021/03/11 07:56:36 Waiting for index to go active ...
2021/03/11 07:56:37 Waiting for index to go active ...
2021/03/11 07:56:38 Waiting for index to go active ...
2021/03/11 07:56:39 Waiting for index to go active ...
2021/03/11 07:56:40 Index is now active
2021/03/11 07:56:40 Build command issued for the deferred indexes [id_gender]
2021/03/11 07:56:40 Waiting for the index id_gender to become active
2021/03/11 07:56:40 Waiting for index to go active ...
2021/03/11 07:56:41 Waiting for index to go active ...
2021/03/11 07:56:42 Waiting for index to go active ...
2021/03/11 07:56:43 Waiting for index to go active ...
2021/03/11 07:56:44 Waiting for index to go active ...
2021/03/11 07:56:45 Waiting for index to go active ...
2021/03/11 07:56:46 Waiting for index to go active ...
2021/03/11 07:56:47 Waiting for index to go active ...
2021/03/11 07:56:48 Waiting for index to go active ...
2021/03/11 07:56:49 Waiting for index to go active ...
2021/03/11 07:56:50 Index is now active
2021/03/11 07:56:50 Build command issued for the deferred indexes [id_isActive]
2021/03/11 07:56:50 Waiting for the index id_isActive to become active
2021/03/11 07:56:50 Waiting for index to go active ...
2021/03/11 07:56:51 Waiting for index to go active ...
2021/03/11 07:56:52 Waiting for index to go active ...
2021/03/11 07:56:53 Waiting for index to go active ...
2021/03/11 07:56:54 Waiting for index to go active ...
2021/03/11 07:56:55 Waiting for index to go active ...
2021/03/11 07:56:56 Waiting for index to go active ...
2021/03/11 07:56:57 Waiting for index to go active ...
2021/03/11 07:56:58 Waiting for index to go active ...
2021/03/11 07:56:59 Index is now active
2021/03/11 07:57:00 Using n1ql client
2021/03/11 07:57:00 Expected and Actual scan responses are the same
2021/03/11 07:57:03 Setting JSON docs in KV
2021/03/11 07:57:13 Using n1ql client
2021/03/11 07:57:14 Expected and Actual scan responses are the same
2021/03/11 07:57:14 Using n1ql client
2021/03/11 07:57:14 Expected and Actual scan responses are the same
--- PASS: TestMultipleDeferredIndexes_BuildOneByOne (66.46s)
=== RUN   TestDropDeferredIndexWhileOthersBuilding
2021/03/11 07:57:14 In TestDropDeferredIndexWhileOthersBuilding()
2021/03/11 07:57:14 In DropAllSecondaryIndexes()
2021/03/11 07:57:14 Index found:  id_company
2021/03/11 07:57:14 Dropped index id_company
2021/03/11 07:57:14 Index found:  id_gender
2021/03/11 07:57:14 Dropped index id_gender
2021/03/11 07:57:14 Index found:  id_age
2021/03/11 07:57:14 Dropped index id_age
2021/03/11 07:57:14 Index found:  id_isActive
2021/03/11 07:57:15 Dropped index id_isActive
2021/03/11 07:57:18 Setting JSON docs in KV
2021/03/11 07:57:34 Created the secondary index id_company. Waiting for it to become active
2021/03/11 07:57:34 Index is now active
2021/03/11 07:57:36 Build command issued for the deferred indexes [4136047891290058716 10463172124820784252]
2021/03/11 07:57:38 Dropping the secondary index id_isActive
2021/03/11 07:57:38 Index dropped
2021/03/11 07:57:38 Waiting for index to go active ...
2021/03/11 07:57:39 Waiting for index to go active ...
2021/03/11 07:57:40 Waiting for index to go active ...
2021/03/11 07:57:41 Waiting for index to go active ...
2021/03/11 07:57:42 Waiting for index to go active ...
2021/03/11 07:57:43 Waiting for index to go active ...
2021/03/11 07:57:44 Waiting for index to go active ...
2021/03/11 07:57:45 Waiting for index to go active ...
2021/03/11 07:57:46 Waiting for index to go active ...
2021/03/11 07:57:47 Waiting for index to go active ...
2021/03/11 07:57:48 Waiting for index to go active ...
2021/03/11 07:57:49 Waiting for index to go active ...
2021/03/11 07:57:50 Index is now active
2021/03/11 07:57:50 Index is now active
2021/03/11 07:57:51 Using n1ql client
2021/03/11 07:57:52 Expected and Actual scan responses are the same
2021/03/11 07:57:52 Using n1ql client
2021/03/11 07:57:52 Expected and Actual scan responses are the same
2021/03/11 07:57:54 Setting JSON docs in KV
2021/03/11 07:58:04 Using n1ql client
2021/03/11 07:58:05 Expected and Actual scan responses are the same
--- PASS: TestDropDeferredIndexWhileOthersBuilding (50.86s)
=== RUN   TestDropBuildingDeferredIndex
2021/03/11 07:58:05 In TestDropBuildingDeferredIndex()
2021/03/11 07:58:05 In DropAllSecondaryIndexes()
2021/03/11 07:58:05 Index found:  id_company
2021/03/11 07:58:05 Dropped index id_company
2021/03/11 07:58:05 Index found:  id_gender
2021/03/11 07:58:05 Dropped index id_gender
2021/03/11 07:58:05 Index found:  id_age
2021/03/11 07:58:05 Dropped index id_age
2021/03/11 07:58:08 Setting JSON docs in KV
2021/03/11 07:58:15 Build command issued for the deferred indexes [13281593373062542198 18146976828000446440]
2021/03/11 07:58:16 Dropping the secondary index id_age
2021/03/11 07:58:16 Index dropped
2021/03/11 07:58:16 Waiting for index to go active ...
2021/03/11 07:58:17 Waiting for index to go active ...
2021/03/11 07:58:18 Waiting for index to go active ...
2021/03/11 07:58:19 Waiting for index to go active ...
2021/03/11 07:58:20 Waiting for index to go active ...
2021/03/11 07:58:21 Waiting for index to go active ...
2021/03/11 07:58:22 Waiting for index to go active ...
2021/03/11 07:58:23 Waiting for index to go active ...
2021/03/11 07:58:24 Waiting for index to go active ...
2021/03/11 07:58:25 Waiting for index to go active ...
2021/03/11 07:58:26 Waiting for index to go active ...
2021/03/11 07:58:27 Index is now active
2021/03/11 07:58:27 Build command issued for the deferred indexes [id_gender]
2021/03/11 07:58:27 Waiting for the index id_gender to become active
2021/03/11 07:58:27 Waiting for index to go active ...
2021/03/11 07:58:28 Waiting for index to go active ...
2021/03/11 07:58:29 Waiting for index to go active ...
2021/03/11 07:58:30 Waiting for index to go active ...
2021/03/11 07:58:31 Waiting for index to go active ...
2021/03/11 07:58:32 Waiting for index to go active ...
2021/03/11 07:58:33 Waiting for index to go active ...
2021/03/11 07:58:34 Waiting for index to go active ...
2021/03/11 07:58:35 Waiting for index to go active ...
2021/03/11 07:58:36 Waiting for index to go active ...
2021/03/11 07:58:37 Waiting for index to go active ...
2021/03/11 07:58:38 Waiting for index to go active ...
2021/03/11 07:58:39 Waiting for index to go active ...
2021/03/11 07:58:40 Index is now active
2021/03/11 07:58:41 Using n1ql client
2021/03/11 07:58:41 Expected and Actual scan responses are the same
2021/03/11 07:58:42 Using n1ql client
2021/03/11 07:58:42 Expected and Actual scan responses are the same
2021/03/11 07:58:45 Setting JSON docs in KV
2021/03/11 07:58:55 Using n1ql client
2021/03/11 07:58:56 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingDeferredIndex (51.59s)
=== RUN   TestDropMultipleBuildingDeferredIndexes
2021/03/11 07:58:56 In TestDropMultipleBuildingDeferredIndexes()
2021/03/11 07:58:56 In DropAllSecondaryIndexes()
2021/03/11 07:58:56 Index found:  id_gender
2021/03/11 07:58:57 Dropped index id_gender
2021/03/11 07:58:57 Index found:  id_company
2021/03/11 07:58:57 Dropped index id_company
2021/03/11 07:59:06 Setting JSON docs in KV
2021/03/11 07:59:39 Created the secondary index id_company. Waiting for it to become active
2021/03/11 07:59:39 Index is now active
2021/03/11 07:59:41 Build command issued for the deferred indexes [6114523914028251413 6982464455244277686]
2021/03/11 07:59:42 Dropping the secondary index id_age
2021/03/11 07:59:42 Index dropped
2021/03/11 07:59:42 Dropping the secondary index id_gender
2021/03/11 07:59:58 Index dropped
2021/03/11 07:59:59 Build command issued for the deferred indexes [id_isActive]
2021/03/11 07:59:59 Waiting for the index id_isActive to become active
2021/03/11 07:59:59 Waiting for index to go active ...
2021/03/11 08:00:00 Waiting for index to go active ...
2021/03/11 08:00:01 Waiting for index to go active ...
2021/03/11 08:00:02 Waiting for index to go active ...
2021/03/11 08:00:03 Waiting for index to go active ...
2021/03/11 08:00:04 Waiting for index to go active ...
2021/03/11 08:00:05 Waiting for index to go active ...
2021/03/11 08:00:06 Waiting for index to go active ...
2021/03/11 08:00:07 Waiting for index to go active ...
2021/03/11 08:00:08 Waiting for index to go active ...
2021/03/11 08:00:09 Waiting for index to go active ...
2021/03/11 08:00:10 Waiting for index to go active ...
2021/03/11 08:00:11 Waiting for index to go active ...
2021/03/11 08:00:12 Waiting for index to go active ...
2021/03/11 08:00:13 Index is now active
2021/03/11 08:00:23 Using n1ql client
2021/03/11 08:00:24 Expected and Actual scan responses are the same
2021/03/11 08:00:24 Number of docScanResults and scanResults = 180000 and 180000
2021/03/11 08:00:24 Using n1ql client
2021/03/11 08:00:25 Expected and Actual scan responses are the same
2021/03/11 08:00:25 Number of docScanResults and scanResults = 180000 and 180000
--- PASS: TestDropMultipleBuildingDeferredIndexes (88.45s)
=== RUN   TestDropOneIndexSecondDeferBuilding
2021/03/11 08:00:25 In TestDropOneIndexSecondDeferBuilding()
2021/03/11 08:00:25 In DropAllSecondaryIndexes()
2021/03/11 08:00:25 Index found:  id_company
2021/03/11 08:00:25 Dropped index id_company
2021/03/11 08:00:25 Index found:  id_isActive
2021/03/11 08:00:25 Dropped index id_isActive
2021/03/11 08:00:28 Setting JSON docs in KV
2021/03/11 08:00:34 Build command issued for the deferred indexes [id_company]
2021/03/11 08:00:34 Waiting for the index id_company to become active
2021/03/11 08:00:34 Waiting for index to go active ...
2021/03/11 08:00:35 Waiting for index to go active ...
2021/03/11 08:00:36 Waiting for index to go active ...
2021/03/11 08:00:37 Waiting for index to go active ...
2021/03/11 08:00:38 Waiting for index to go active ...
2021/03/11 08:00:39 Waiting for index to go active ...
2021/03/11 08:00:40 Waiting for index to go active ...
2021/03/11 08:00:41 Waiting for index to go active ...
2021/03/11 08:00:42 Waiting for index to go active ...
2021/03/11 08:00:43 Waiting for index to go active ...
2021/03/11 08:00:44 Waiting for index to go active ...
2021/03/11 08:00:45 Waiting for index to go active ...
2021/03/11 08:00:46 Waiting for index to go active ...
2021/03/11 08:00:47 Waiting for index to go active ...
2021/03/11 08:00:48 Index is now active
2021/03/11 08:00:48 Build command issued for the deferred indexes [6733114825490316210]
2021/03/11 08:00:49 Dropping the secondary index id_company
2021/03/11 08:00:50 Index dropped
2021/03/11 08:01:02 Setting JSON docs in KV
2021/03/11 08:01:14 Index is now active
2021/03/11 08:01:14 Build command issued for the deferred indexes [id_gender]
2021/03/11 08:01:14 Waiting for the index id_gender to become active
2021/03/11 08:01:14 Waiting for index to go active ...
2021/03/11 08:01:15 Waiting for index to go active ...
2021/03/11 08:01:16 Waiting for index to go active ...
2021/03/11 08:01:17 Waiting for index to go active ...
2021/03/11 08:01:18 Waiting for index to go active ...
2021/03/11 08:01:19 Waiting for index to go active ...
2021/03/11 08:01:20 Waiting for index to go active ...
2021/03/11 08:01:21 Waiting for index to go active ...
2021/03/11 08:01:22 Waiting for index to go active ...
2021/03/11 08:01:23 Waiting for index to go active ...
2021/03/11 08:01:24 Waiting for index to go active ...
2021/03/11 08:01:25 Waiting for index to go active ...
2021/03/11 08:01:26 Waiting for index to go active ...
2021/03/11 08:01:27 Waiting for index to go active ...
2021/03/11 08:01:28 Waiting for index to go active ...
2021/03/11 08:01:29 Waiting for index to go active ...
2021/03/11 08:01:30 Index is now active
2021/03/11 08:01:30 Using n1ql client
2021/03/11 08:01:30 Expected and Actual scan responses are the same
2021/03/11 08:01:31 Using n1ql client
2021/03/11 08:01:32 Expected and Actual scan responses are the same
--- PASS: TestDropOneIndexSecondDeferBuilding (66.97s)
=== RUN   TestDropSecondIndexSecondDeferBuilding
2021/03/11 08:01:32 In TestDropSecondIndexSecondDeferBuilding()
2021/03/11 08:01:32 In DropAllSecondaryIndexes()
2021/03/11 08:01:32 Index found:  id_gender
2021/03/11 08:01:32 Dropped index id_gender
2021/03/11 08:01:32 Index found:  id_age
2021/03/11 08:01:32 Dropped index id_age
2021/03/11 08:01:35 Setting JSON docs in KV
2021/03/11 08:01:42 Build command issued for the deferred indexes [id_company]
2021/03/11 08:01:42 Waiting for the index id_company to become active
2021/03/11 08:01:42 Waiting for index to go active ...
2021/03/11 08:01:43 Waiting for index to go active ...
2021/03/11 08:01:44 Waiting for index to go active ...
2021/03/11 08:01:45 Waiting for index to go active ...
2021/03/11 08:01:46 Waiting for index to go active ...
2021/03/11 08:01:47 Waiting for index to go active ...
2021/03/11 08:01:48 Waiting for index to go active ...
2021/03/11 08:01:49 Waiting for index to go active ...
2021/03/11 08:01:50 Waiting for index to go active ...
2021/03/11 08:01:51 Waiting for index to go active ...
2021/03/11 08:01:52 Waiting for index to go active ...
2021/03/11 08:01:53 Waiting for index to go active ...
2021/03/11 08:01:54 Waiting for index to go active ...
2021/03/11 08:01:55 Waiting for index to go active ...
2021/03/11 08:01:56 Waiting for index to go active ...
2021/03/11 08:01:57 Waiting for index to go active ...
2021/03/11 08:01:58 Index is now active
2021/03/11 08:01:58 Build command issued for the deferred indexes [8769409810810974003]
2021/03/11 08:01:59 Dropping the secondary index id_age
2021/03/11 08:01:59 Index dropped
2021/03/11 08:02:02 Setting JSON docs in KV
2021/03/11 08:02:11 Build command issued for the deferred indexes [id_gender]
2021/03/11 08:02:11 Waiting for the index id_gender to become active
2021/03/11 08:02:11 Waiting for index to go active ...
2021/03/11 08:02:12 Waiting for index to go active ...
2021/03/11 08:02:13 Waiting for index to go active ...
2021/03/11 08:02:14 Waiting for index to go active ...
2021/03/11 08:02:15 Waiting for index to go active ...
2021/03/11 08:02:16 Waiting for index to go active ...
2021/03/11 08:02:17 Waiting for index to go active ...
2021/03/11 08:02:18 Waiting for index to go active ...
2021/03/11 08:02:19 Waiting for index to go active ...
2021/03/11 08:02:20 Waiting for index to go active ...
2021/03/11 08:02:21 Waiting for index to go active ...
2021/03/11 08:02:22 Waiting for index to go active ...
2021/03/11 08:02:23 Waiting for index to go active ...
2021/03/11 08:02:24 Waiting for index to go active ...
2021/03/11 08:02:25 Waiting for index to go active ...
2021/03/11 08:02:26 Waiting for index to go active ...
2021/03/11 08:02:27 Waiting for index to go active ...
2021/03/11 08:02:28 Index is now active
2021/03/11 08:02:29 Using n1ql client
2021/03/11 08:02:30 Expected and Actual scan responses are the same
2021/03/11 08:02:30 Using n1ql client
2021/03/11 08:02:32 Expected and Actual scan responses are the same
--- PASS: TestDropSecondIndexSecondDeferBuilding (59.81s)
=== RUN   TestCreateAfterDropWhileIndexBuilding
2021/03/11 08:02:32 In TestCreateAfterDropWhileIndexBuilding()
2021/03/11 08:02:32 In DropAllSecondaryIndexes()
2021/03/11 08:02:32 Index found:  id_company
2021/03/11 08:02:32 Dropped index id_company
2021/03/11 08:02:32 Index found:  id_gender
2021/03/11 08:02:32 Dropped index id_gender
2021/03/11 08:02:59 Setting JSON docs in KV
2021/03/11 08:04:04 Build command issued for the deferred indexes [10850924579691494102]
2021/03/11 08:04:05 Waiting for index to go active ...
2021/03/11 08:04:06 Waiting for index to go active ...
2021/03/11 08:04:07 Waiting for index to go active ...
2021/03/11 08:04:08 Waiting for index to go active ...
2021/03/11 08:04:09 Waiting for index to go active ...
2021/03/11 08:04:10 Waiting for index to go active ...
2021/03/11 08:04:11 Waiting for index to go active ...
2021/03/11 08:04:12 Waiting for index to go active ...
2021/03/11 08:04:13 Waiting for index to go active ...
2021/03/11 08:04:14 Waiting for index to go active ...
2021/03/11 08:04:15 Waiting for index to go active ...
2021/03/11 08:04:16 Waiting for index to go active ...
2021/03/11 08:04:17 Waiting for index to go active ...
2021/03/11 08:04:18 Waiting for index to go active ...
2021/03/11 08:04:19 Waiting for index to go active ...
2021/03/11 08:04:20 Waiting for index to go active ...
2021/03/11 08:04:21 Waiting for index to go active ...
2021/03/11 08:04:22 Waiting for index to go active ...
2021/03/11 08:04:23 Waiting for index to go active ...
2021/03/11 08:04:24 Waiting for index to go active ...
2021/03/11 08:04:25 Waiting for index to go active ...
2021/03/11 08:04:26 Index is now active
2021/03/11 08:04:26 Build command issued for the deferred indexes [10470906665930669047]
2021/03/11 08:04:27 Dropping the secondary index id_company
2021/03/11 08:04:27 Index dropped
2021/03/11 08:04:27 Dropping the secondary index id_age
2021/03/11 08:04:28 Index dropped
2021/03/11 08:04:33 Build command issued for the deferred indexes [id_gender]
2021/03/11 08:04:33 Waiting for the index id_gender to become active
2021/03/11 08:04:33 Waiting for index to go active ...
2021/03/11 08:04:34 Waiting for index to go active ...
2021/03/11 08:04:35 Waiting for index to go active ...
2021/03/11 08:04:36 Waiting for index to go active ...
2021/03/11 08:04:37 Waiting for index to go active ...
2021/03/11 08:04:38 Waiting for index to go active ...
2021/03/11 08:04:39 Waiting for index to go active ...
2021/03/11 08:04:40 Waiting for index to go active ...
2021/03/11 08:04:41 Waiting for index to go active ...
2021/03/11 08:04:42 Waiting for index to go active ...
2021/03/11 08:04:43 Waiting for index to go active ...
2021/03/11 08:04:44 Waiting for index to go active ...
2021/03/11 08:04:45 Waiting for index to go active ...
2021/03/11 08:04:46 Waiting for index to go active ...
2021/03/11 08:04:47 Waiting for index to go active ...
2021/03/11 08:04:48 Waiting for index to go active ...
2021/03/11 08:04:49 Waiting for index to go active ...
2021/03/11 08:04:50 Waiting for index to go active ...
2021/03/11 08:04:51 Waiting for index to go active ...
2021/03/11 08:04:52 Waiting for index to go active ...
2021/03/11 08:04:53 Waiting for index to go active ...
2021/03/11 08:04:54 Waiting for index to go active ...
2021/03/11 08:04:55 Index is now active
2021/03/11 08:04:56 Index is now active
2021/03/11 08:04:57 Using n1ql client
2021/03/11 08:04:58 Expected and Actual scan responses are the same
--- PASS: TestCreateAfterDropWhileIndexBuilding (146.51s)
=== RUN   TestDropBuildingIndex1
2021/03/11 08:04:58 In TestDropBuildingIndex1()
2021/03/11 08:04:58 In DropAllSecondaryIndexes()
2021/03/11 08:04:58 Index found:  id_gender
2021/03/11 08:04:58 Dropped index id_gender
2021/03/11 08:05:04 Setting JSON docs in KV
2021/03/11 08:05:40 Created the secondary index id_company. Waiting for it to become active
2021/03/11 08:05:40 Index is now active
2021/03/11 08:06:06 Dropping the secondary index id_age
2021/03/11 08:06:06 Index dropped
2021/03/11 08:06:29 Created the secondary index id_age. Waiting for it to become active
2021/03/11 08:06:29 Index is now active
2021/03/11 08:06:31 Setting JSON docs in KV
2021/03/11 08:06:41 Using n1ql client
2021/03/11 08:06:41 Expected and Actual scan responses are the same
2021/03/11 08:06:41 Using n1ql client
2021/03/11 08:06:42 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingIndex1 (103.74s)
=== RUN   TestDropBuildingIndex2
2021/03/11 08:06:42 In TestDropBuildingIndex2()
2021/03/11 08:06:42 In DropAllSecondaryIndexes()
2021/03/11 08:06:42 Index found:  id_age
2021/03/11 08:06:42 Dropped index id_age
2021/03/11 08:06:42 Index found:  id_company
2021/03/11 08:06:42 Dropped index id_company
2021/03/11 08:06:48 Setting JSON docs in KV
2021/03/11 08:07:27 Created the secondary index id_company. Waiting for it to become active
2021/03/11 08:07:27 Index is now active
2021/03/11 08:07:55 Dropping the secondary index id_company
2021/03/11 08:07:55 Index dropped
2021/03/11 08:07:55 Index is now active
2021/03/11 08:08:20 Created the secondary index id_company. Waiting for it to become active
2021/03/11 08:08:20 Index is now active
2021/03/11 08:08:22 Setting JSON docs in KV
2021/03/11 08:08:33 Using n1ql client
2021/03/11 08:08:33 Expected and Actual scan responses are the same
2021/03/11 08:08:34 Using n1ql client
2021/03/11 08:08:36 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingIndex2 (114.10s)
=== RUN   TestDropIndexWithDataLoad
2021/03/11 08:08:36 In TestDropIndexWithDataLoad()
2021/03/11 08:08:36 In DropAllSecondaryIndexes()
2021/03/11 08:08:36 Index found:  id_age
2021/03/11 08:08:36 Dropped index id_age
2021/03/11 08:08:36 Index found:  id_company
2021/03/11 08:08:36 Dropped index id_company
2021/03/11 08:08:39 Setting JSON docs in KV
2021/03/11 08:09:12 Created the secondary index id_company. Waiting for it to become active
2021/03/11 08:09:12 Index is now active
2021/03/11 08:09:40 Created the secondary index id_age. Waiting for it to become active
2021/03/11 08:09:40 Index is now active
2021/03/11 08:10:07 Created the secondary index id_gender. Waiting for it to become active
2021/03/11 08:10:07 Index is now active
2021/03/11 08:10:34 Created the secondary index id_isActive. Waiting for it to become active
2021/03/11 08:10:34 Index is now active
2021/03/11 08:10:43 Setting JSON docs in KV
2021/03/11 08:10:43 In LoadKVBucket
2021/03/11 08:10:43 Bucket name = default
2021/03/11 08:10:43 In DropIndexWhileKVLoad
2021/03/11 08:10:44 Dropping the secondary index id_company
2021/03/11 08:10:45 Index dropped
2021/03/11 08:11:13 Using n1ql client
2021/03/11 08:11:14 Expected and Actual scan responses are the same
2021/03/11 08:11:14 Number of docScanResults and scanResults = 96752 and 96752
2021/03/11 08:11:17 Using n1ql client
2021/03/11 08:11:19 Expected and Actual scan responses are the same
2021/03/11 08:11:19 Number of docScanResults and scanResults = 420000 and 420000
--- PASS: TestDropIndexWithDataLoad (163.04s)
=== RUN   TestDropAllIndexesWithDataLoad
2021/03/11 08:11:19 In TestDropAllIndexesWithDataLoad()
2021/03/11 08:11:19 In DropAllSecondaryIndexes()
2021/03/11 08:11:19 Index found:  id_age
2021/03/11 08:11:19 Dropped index id_age
2021/03/11 08:11:19 Index found:  id_gender
2021/03/11 08:11:19 Dropped index id_gender
2021/03/11 08:11:19 Index found:  id_isActive
2021/03/11 08:11:19 Dropped index id_isActive
2021/03/11 08:11:22 Setting JSON docs in KV
2021/03/11 08:11:57 Created the secondary index id_company. Waiting for it to become active
2021/03/11 08:11:57 Index is now active
2021/03/11 08:12:26 Created the secondary index id_age. Waiting for it to become active
2021/03/11 08:12:26 Index is now active
2021/03/11 08:12:54 Created the secondary index id_gender. Waiting for it to become active
2021/03/11 08:12:54 Index is now active
2021/03/11 08:13:24 Created the secondary index id_isActive. Waiting for it to become active
2021/03/11 08:13:24 Index is now active
2021/03/11 08:13:32 Setting JSON docs in KV
2021/03/11 08:13:32 In LoadKVBucket
2021/03/11 08:13:32 Bucket name = default
2021/03/11 08:13:32 In DropIndexWhileKVLoad
2021/03/11 08:13:32 In DropIndexWhileKVLoad
2021/03/11 08:13:32 In DropIndexWhileKVLoad
2021/03/11 08:13:32 In DropIndexWhileKVLoad
2021/03/11 08:13:33 Dropping the secondary index id_gender
2021/03/11 08:13:33 Dropping the secondary index id_age
2021/03/11 08:13:33 Dropping the secondary index id_company
2021/03/11 08:13:33 Dropping the secondary index id_isActive
2021/03/11 08:13:33 Index dropped
2021/03/11 08:13:33 Index dropped
2021/03/11 08:13:33 Index dropped
2021/03/11 08:13:33 Index dropped
2021/03/11 08:13:54 Using n1ql client
2021/03/11 08:13:54 Scan failed as expected with error: Index Not Found - cause: GSI index id_company not found.
--- PASS: TestDropAllIndexesWithDataLoad (154.56s)
=== RUN   TestCreateBucket_AnotherIndexBuilding
2021/03/11 08:13:54 In TestCreateBucket_AnotherIndexBuilding()
2021/03/11 08:13:54 In DropAllSecondaryIndexes()
2021/03/11 08:14:33 Flushed the bucket default, Response body: 
2021/03/11 08:14:36 Modified parameters of bucket default, responseBody: 
2021/03/11 08:14:36 http://127.0.0.1:9000/pools/default/buckets/multi_buck2
2021/03/11 08:14:36 &{DELETE http://127.0.0.1:9000/pools/default/buckets/multi_buck2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:14:36 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:44:35 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc0b1e16900 31 [] false false map[] 0xc00c220300 }
2021/03/11 08:14:36 DeleteBucket failed for bucket multi_buck2 
2021/03/11 08:14:36 Deleted bucket multi_buck2, responseBody: Requested resource not found.
2021/03/11 08:14:51 Setting JSON docs in KV
2021/03/11 08:16:53 Created bucket multi_buck2, responseBody: 
2021/03/11 08:17:18 Index is now active
2021/03/11 08:17:18 Index is now active
2021/03/11 08:17:18 Using n1ql client
2021-03-11T08:17:18.555+05:30 [Info] metadata provider version changed 1210 -> 1211
2021-03-11T08:17:18.555+05:30 [Info] switched currmeta from 1210 -> 1211 force false 
2021-03-11T08:17:18.555+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:17:18.555+05:30 [Info] GSIC[default/multi_buck2-_default-_default-1615430838508619263] started ...
2021/03/11 08:17:18 Expected and Actual scan responses are the same
2021/03/11 08:17:18 Number of docScanResults and scanResults = 10000 and 10000
2021/03/11 08:17:19 Using n1ql client
2021/03/11 08:17:21 Expected and Actual scan responses are the same
2021/03/11 08:17:21 Number of docScanResults and scanResults = 200000 and 200000
2021/03/11 08:17:24 Deleted bucket multi_buck2, responseBody: 
2021/03/11 08:18:03 Flushed the bucket default, Response body: 
--- PASS: TestCreateBucket_AnotherIndexBuilding (249.09s)
=== RUN   TestDropBucket2Index_Bucket1IndexBuilding
2021/03/11 08:18:03 In TestDropBucket2Index_Bucket1IndexBuilding()
2021/03/11 08:18:03 In DropAllSecondaryIndexes()
2021/03/11 08:18:03 Index found:  buck1_idx
2021/03/11 08:18:03 Dropped index buck1_idx
2021/03/11 08:18:42 Flushed the bucket default, Response body: 
2021/03/11 08:18:45 Modified parameters of bucket default, responseBody: 
2021/03/11 08:18:45 http://127.0.0.1:9000/pools/default/buckets/multibucket_test3
2021/03/11 08:18:45 &{DELETE http://127.0.0.1:9000/pools/default/buckets/multibucket_test3 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:18:45 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:48:44 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc0264fd780 31 [] false false map[] 0xc003ae3a00 }
2021/03/11 08:18:45 DeleteBucket failed for bucket multibucket_test3 
2021/03/11 08:18:45 Deleted bucket multibucket_test3, responseBody: Requested resource not found.
2021/03/11 08:18:45 Created bucket multibucket_test3, responseBody: 
2021/03/11 08:19:00 Setting JSON docs in KV
2021/03/11 08:20:03 Created the secondary index buck2_idx. Waiting for it to become active
2021/03/11 08:20:03 Index is now active
2021/03/11 08:20:16 Dropping the secondary index buck2_idx
2021/03/11 08:20:16 Index dropped
2021/03/11 08:20:16 Index is now active
2021/03/11 08:20:16 Using n1ql client
2021/03/11 08:20:17 Expected and Actual scan responses are the same
2021/03/11 08:20:17 Number of docScanResults and scanResults = 100000 and 100000
2021/03/11 08:20:20 Deleted bucket multibucket_test3, responseBody: 
2021/03/11 08:20:58 Flushed the bucket default, Response body: 
--- PASS: TestDropBucket2Index_Bucket1IndexBuilding (175.78s)
=== RUN   TestDeleteBucketWhileInitialIndexBuild
2021/03/11 08:20:58 In TestDeleteBucketWhileInitialIndexBuild()
2021/03/11 08:20:58 ============== DBG: Drop all indexes in all buckets
2021/03/11 08:20:58 In DropAllSecondaryIndexes()
2021/03/11 08:20:58 Index found:  buck1_idx
2021/03/11 08:20:59 Dropped index buck1_idx
2021/03/11 08:20:59 ============== DBG: Delete bucket default
2021/03/11 08:21:01 Deleted bucket default, responseBody: 
2021/03/11 08:21:01 ============== DBG: Create bucket default
2021/03/11 08:21:01 Created bucket default, responseBody: 
2021/03/11 08:21:04 Flush Enabled on bucket default, responseBody: 
2021/03/11 08:21:37 Flushed the bucket default, Response body: 
2021/03/11 08:21:37 ============== DBG: Delete bucket testbucket2
2021/03/11 08:21:37 http://127.0.0.1:9000/pools/default/buckets/testbucket2
2021/03/11 08:21:37 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:21:37 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:51:36 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc002f43ec0 31 [] false false map[] 0xc0028bdb00 }
2021/03/11 08:21:37 DeleteBucket failed for bucket testbucket2 
2021/03/11 08:21:37 Deleted bucket testbucket2, responseBody: Requested resource not found.
2021/03/11 08:21:37 ============== DBG: Create bucket testbucket2
2021/03/11 08:21:37 Created bucket testbucket2, responseBody: 
2021/03/11 08:21:40 Flush Enabled on bucket testbucket2, responseBody: 
2021/03/11 08:22:14 Flushed the bucket testbucket2, Response body: 
2021/03/11 08:22:14 ============== DBG: Delete bucket testbucket3
2021/03/11 08:22:14 http://127.0.0.1:9000/pools/default/buckets/testbucket3
2021/03/11 08:22:14 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket3 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:22:14 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:52:13 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc00427efc0 31 [] false false map[] 0xc003d4c600 }
2021/03/11 08:22:14 DeleteBucket failed for bucket testbucket3 
2021/03/11 08:22:14 Deleted bucket testbucket3, responseBody: Requested resource not found.
2021/03/11 08:22:14 ============== DBG: Create bucket testbucket3
2021/03/11 08:22:14 Created bucket testbucket3, responseBody: 
2021/03/11 08:22:17 Flush Enabled on bucket testbucket3, responseBody: 
2021/03/11 08:22:52 Flushed the bucket testbucket3, Response body: 
2021/03/11 08:22:52 ============== DBG: Delete bucket testbucket4
2021/03/11 08:22:52 http://127.0.0.1:9000/pools/default/buckets/testbucket4
2021/03/11 08:22:52 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket4 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:22:52 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:52:51 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc026e4cb00 31 [] false false map[] 0xc008f3e000 }
2021/03/11 08:22:52 DeleteBucket failed for bucket testbucket4 
2021/03/11 08:22:52 Deleted bucket testbucket4, responseBody: Requested resource not found.
2021/03/11 08:22:52 ============== DBG: Create bucket testbucket4
2021/03/11 08:22:52 Created bucket testbucket4, responseBody: 
2021/03/11 08:22:55 Flush Enabled on bucket testbucket4, responseBody: 
2021/03/11 08:23:30 Flushed the bucket testbucket4, Response body: 
2021/03/11 08:23:45 Generating docs and Populating all the buckets
2021/03/11 08:23:46 ============== DBG: Creating docs in bucket default
2021/03/11 08:23:46 ============== DBG: Creating index bucket1_age in bucket default
2021/03/11 08:23:51 Created the secondary index bucket1_age. Waiting for it to become active
2021/03/11 08:23:51 Index is now active
2021/03/11 08:23:51 ============== DBG: Creating index bucket1_gender in bucket default
2021/03/11 08:23:57 Created the secondary index bucket1_gender. Waiting for it to become active
2021/03/11 08:23:57 Index is now active
2021/03/11 08:23:58 ============== DBG: Creating docs in bucket testbucket2
2021/03/11 08:23:58 ============== DBG: Creating index bucket2_city in bucket testbucket2
2021/03/11 08:24:03 Created the secondary index bucket2_city. Waiting for it to become active
2021/03/11 08:24:03 Index is now active
2021/03/11 08:24:03 ============== DBG: Creating index bucket2_company in bucket testbucket2
2021/03/11 08:24:09 Created the secondary index bucket2_company. Waiting for it to become active
2021/03/11 08:24:09 Index is now active
2021/03/11 08:24:10 ============== DBG: Creating docs in bucket testbucket3
2021/03/11 08:24:11 ============== DBG: Creating index bucket3_gender in bucket testbucket3
2021/03/11 08:24:15 Created the secondary index bucket3_gender. Waiting for it to become active
2021/03/11 08:24:15 Index is now active
2021/03/11 08:24:15 ============== DBG: Creating index bucket3_address in bucket testbucket3
2021/03/11 08:24:22 Created the secondary index bucket3_address. Waiting for it to become active
2021/03/11 08:24:22 Index is now active
2021/03/11 08:24:22 ============== DBG: First bucket scan:: Scanning index bucket1_age in bucket default
2021/03/11 08:24:22 Using n1ql client
2021-03-11T08:24:22.932+05:30 [Info] metadata provider version changed 1275 -> 1276
2021-03-11T08:24:22.932+05:30 [Info] switched currmeta from 1275 -> 1276 force false 
2021-03-11T08:24:22.932+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:24:22.932+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] started ...
2021/03/11 08:24:22 ============== DBG: First bucket scan:: Expected results = 310 Actual results = 310
2021/03/11 08:24:22 Expected and Actual scan responses are the same
2021/03/11 08:24:37 ============== DBG: Creating 50K docs in bucket testbucket4
2021/03/11 08:25:12 ============== DBG: Creating index bucket4_balance asynchronously in bucket testbucket4
2021/03/11 08:25:22 ============== DBG: Deleting bucket testbucket4
2021/03/11 08:25:26 Deleted bucket testbucket4, responseBody: 
2021/03/11 08:25:26 ============== DBG: First bucket scan:: Scanning index bucket1_age in bucket default
2021/03/11 08:25:26 Using n1ql client
2021/03/11 08:25:26 ============== DBG: First bucket scan:: Expected results = 310 Actual results = 310
2021/03/11 08:25:26 Expected and Actual scan responses are the same
2021/03/11 08:25:26 ============== DBG: Second bucket scan:: Scanning index bucket2_city in bucket testbucket2
2021/03/11 08:25:26 Using n1ql client
2021-03-11T08:25:26.512+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:25:26.512+05:30 [Info] GSIC[default/testbucket2-_default-_default-1615431326505661470] started ...
2021/03/11 08:25:26 ============== DBG: Second bucket scan:: Expected results = 423 Actual results = 423
2021/03/11 08:25:26 Expected and Actual scan responses are the same
2021/03/11 08:25:26 ============== DBG: Third bucket scan:: Scanning index bucket3_gender in bucket testbucket3
2021/03/11 08:25:26 Using n1ql client
2021-03-11T08:25:26.526+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:25:26.526+05:30 [Info] GSIC[default/testbucket3-_default-_default-1615431326519861880] started ...
2021/03/11 08:25:26 ============== DBG: Third bucket scan:: Expected results = 521 Actual results = 521
2021/03/11 08:25:26 Expected and Actual scan responses are the same
2021/03/11 08:25:26 ============== DBG: Deleting buckets testbucket2 testbucket3 testbucket4
2021/03/11 08:25:29 Deleted bucket testbucket2, responseBody: 
2021/03/11 08:25:31 Deleted bucket testbucket3, responseBody: 
2021/03/11 08:25:31 http://127.0.0.1:9000/pools/default/buckets/testbucket4
2021/03/11 08:25:31 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket4 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc0000c6010}
2021/03/11 08:25:31 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Thu, 11 Mar 2021 02:55:30 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc0017dfb00 31 [] false false map[] 0xc009fffb00 }
2021/03/11 08:25:31 DeleteBucket failed for bucket testbucket4 
2021/03/11 08:25:31 Deleted bucket testbucket4, responseBody: Requested resource not found.
2021/03/11 08:25:34 Modified parameters of bucket default, responseBody: 
--- PASS: TestDeleteBucketWhileInitialIndexBuild (290.29s)
=== RUN   TestWherClause_UpdateDocument
2021/03/11 08:25:49 In TestWherClause_UpdateDocument()
2021/03/11 08:25:49 In DropAllSecondaryIndexes()
2021/03/11 08:25:49 Index found:  bucket1_age
2021/03/11 08:25:49 Dropped index bucket1_age
2021/03/11 08:25:49 Index found:  bucket1_gender
2021/03/11 08:25:49 Dropped index bucket1_gender
2021/03/11 08:26:28 Flushed the bucket default, Response body: 
2021/03/11 08:26:30 Setting JSON docs in KV
2021/03/11 08:26:40 Created the secondary index id_ageGreaterThan40. Waiting for it to become active
2021/03/11 08:26:40 Index is now active
2021/03/11 08:26:40 Using n1ql client
2021/03/11 08:26:40 Expected and Actual scan responses are the same
2021/03/11 08:26:40 Number of docScanResults and scanResults = 6008 and 6008
2021/03/11 08:26:45 Using n1ql client
2021/03/11 08:26:45 Expected and Actual scan responses are the same
2021/03/11 08:26:45 Number of docScanResults and scanResults = 2008 and 2008
--- PASS: TestWherClause_UpdateDocument (55.88s)
=== RUN   TestDeferFalse
2021/03/11 08:26:45 In TestDeferFalse()
2021/03/11 08:26:48 Setting JSON docs in KV
2021/03/11 08:27:01 Created the secondary index index_deferfalse1. Waiting for it to become active
2021/03/11 08:27:01 Index is now active
2021/03/11 08:27:01 Using n1ql client
2021/03/11 08:27:01 Expected and Actual scan responses are the same
--- PASS: TestDeferFalse (16.69s)
=== RUN   TestDeferFalse_CloseClientConnection
2021/03/11 08:27:01 In TestDeferFalse_CloseClientConnection()
2021/03/11 08:27:01 In CloseClientThread
2021/03/11 08:27:01 In CreateIndexThread
2021-03-11T08:27:03.945+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:58696->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 08:27:03 Create Index call failed as expected due to error : Terminate Request due to client termination
2021/03/11 08:27:04 Waiting for index to go active ...
2021/03/11 08:27:05 Waiting for index to go active ...
2021/03/11 08:27:06 Waiting for index to go active ...
2021/03/11 08:27:07 Waiting for index to go active ...
2021/03/11 08:27:08 Index is now active
2021/03/11 08:27:08 Using n1ql client
2021/03/11 08:27:08 Expected and Actual scan responses are the same
--- PASS: TestDeferFalse_CloseClientConnection (6.39s)
=== RUN   TestOrphanIndexCleanup
2021-03-11T08:27:08.193+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:58726->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 08:27:08 In DropAllSecondaryIndexes()
2021/03/11 08:27:08 Index found:  id_ageGreaterThan40
2021/03/11 08:27:08 Dropped index id_ageGreaterThan40
2021/03/11 08:27:08 Index found:  index_deferfalse2
2021/03/11 08:27:08 Dropped index index_deferfalse2
2021/03/11 08:27:08 Index found:  index_deferfalse1
2021/03/11 08:27:08 Dropped index index_deferfalse1
2021/03/11 08:27:22 Created the secondary index idx1_age_regular. Waiting for it to become active
2021/03/11 08:27:22 Index is now active
2021/03/11 08:27:27 Created the secondary index idx2_company_regular. Waiting for it to become active
2021/03/11 08:27:27 Index is now active
2021/03/11 08:27:37 Using n1ql client
2021/03/11 08:27:37 Query on idx1_age_regular is successful
2021/03/11 08:27:37 Using n1ql client
2021/03/11 08:27:37 Query on idx2_company_regular is successful
Restarting indexer process ...
2021/03/11 08:27:37 []
2021-03-11T08:27:37.580+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:27:37.580+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:27:37.580+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:27:37.580+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 08:27:57 Using n1ql client
2021-03-11T08:27:57.477+05:30 [Error] transport error between 127.0.0.1:41956->127.0.0.1:9107: write tcp 127.0.0.1:41956->127.0.0.1:9107: write: broken pipe
2021-03-11T08:27:57.477+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 3791170280761230963 request transport failed `write tcp 127.0.0.1:41956->127.0.0.1:9107: write: broken pipe`
2021-03-11T08:27:57.477+05:30 [Error] PickRandom: Failed to find indexer for all index partitions. Num partitions 1.  Partitions with instances 0
2021/03/11 08:27:57 Query on idx1_age_regular is successful - after indexer restart.
2021/03/11 08:27:57 Using n1ql client
2021/03/11 08:27:57 Query on idx2_company_regular is successful - after indexer restart.
--- PASS: TestOrphanIndexCleanup (49.32s)
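The broken-pipe errors above are expected after "Restarting indexer process": the scan client's cached connection to 127.0.0.1:9107 dies with the old process, the first scan attempt fails at the transport layer, and the query succeeds once the client re-establishes the connection. The retry shape is roughly as follows; scanOnce and scanWithRetry are illustrative stand-ins, not the actual GsiScanClient code.

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// scanOnce simulates a scan whose first attempt after an indexer
// restart fails with a stale connection (illustrative only).
func scanOnce(attempt int) error {
	if attempt == 0 {
		return fmt.Errorf("request transport failed: %w", syscall.EPIPE)
	}
	return nil
}

// scanWithRetry retries a scan when the transport reports a broken
// pipe, mirroring how a query client re-dials after indexer restart.
func scanWithRetry(maxRetries int) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = scanOnce(attempt); err == nil {
			return nil
		}
		if !errors.Is(err, syscall.EPIPE) {
			return err // not a stale connection; give up
		}
	}
	return err
}

func main() {
	if err := scanWithRetry(1); err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("Query is successful - after indexer restart.")
}
```

This is why the log shows a transport [Error] immediately followed by "Query ... is successful - after indexer restart": the error belongs to the discarded first attempt.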
=== RUN   TestOrphanPartitionCleanup
2021/03/11 08:28:02 Created the secondary index idx3_age_regular. Waiting for it to become active
2021/03/11 08:28:02 Index is now active
2021/03/11 08:28:12 Using n1ql client
2021/03/11 08:28:12 Query on idx3_age_regular is successful
Restarting indexer process ...
2021/03/11 08:28:12 []
2021-03-11T08:28:12.985+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:28:12.985+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:28:12.986+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:28:12.986+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 08:28:32 Using n1ql client
2021-03-11T08:28:32.942+05:30 [Error] transport error between 127.0.0.1:37964->127.0.0.1:9107: write tcp 127.0.0.1:37964->127.0.0.1:9107: write: broken pipe
2021-03-11T08:28:32.942+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -4119973075390943185 request transport failed `write tcp 127.0.0.1:37964->127.0.0.1:9107: write: broken pipe`
2021-03-11T08:28:32.942+05:30 [Error] PickRandom: Failed to find indexer for all index partitions. Num partitions 8.  Partitions with instances 0
2021/03/11 08:28:32 Query on idx3_age_regular is successful - after indexer restart.
--- PASS: TestOrphanPartitionCleanup (35.46s)
=== RUN   TestIndexerSettings
2021/03/11 08:28:32 In TestIndexerSettings()
2021/03/11 08:28:32 Changing config key indexer.settings.max_cpu_percent to value 300
2021-03-11T08:28:32.999+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:32 Changing config key indexer.settings.inmemory_snapshot.interval to value 300
2021-03-11T08:28:33.018+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.persisted_snapshot.interval to value 20000
2021-03-11T08:28:33.039+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":300,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":300,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":2,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.062+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.recovery.max_rollbacks to value 3
2021-03-11T08:28:33.077+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":300,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":300,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":20000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":2,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.089+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.log_level to value error
--- PASS: TestIndexerSettings (0.15s)
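Each "Changing config key" step above submits a single-key settings change, and the indexer echoes back the full merged settings map ("New settings received"), which is why each JSON blob differs from the previous one by exactly the key just changed. A minimal sketch of that merge-and-echo behaviour follows; applySetting is a hypothetical helper, not the actual indexer settings code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// applySetting folds one changed config key into the current settings
// map, the way the indexer merges a posted change into its full
// settings before broadcasting them (illustrative only).
func applySetting(current map[string]interface{}, key string, value interface{}) map[string]interface{} {
	current[key] = value
	return current
}

func main() {
	settings := map[string]interface{}{
		"indexer.settings.max_cpu_percent":            0,
		"indexer.settings.inmemory_snapshot.interval": 200,
		"indexer.settings.recovery.max_rollbacks":     2,
	}

	// Mirror the sequence from the log: raise the CPU cap, then the
	// snapshot interval, then the rollback limit.
	applySetting(settings, "indexer.settings.max_cpu_percent", 300)
	applySetting(settings, "indexer.settings.inmemory_snapshot.interval", 300)
	applySetting(settings, "indexer.settings.recovery.max_rollbacks", 3)

	// The full merged map is what gets logged as "New settings received".
	body, err := json.Marshal(settings)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Because the whole map is echoed on every change, consecutive blobs in the log can be diffed to confirm that only the intended key moved.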
=== RUN   TestRestoreDefaultSettings
2021/03/11 08:28:33 In TestIndexerSettings_RestoreDefault()
2021-03-11T08:28:33.116+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.max_cpu_percent to value 0
2021-03-11T08:28:33.130+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":300,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":300,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":20000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":3,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.138+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":300,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"error","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":300,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":20000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":3,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.141+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.inmemory_snapshot.interval to value 200
2021-03-11T08:28:33.171+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":300,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"error","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":20000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":3,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.173+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.persisted_snapshot.interval to value 5000
2021-03-11T08:28:33.182+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"error","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":20000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":3,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.208+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.recovery.max_rollbacks to value 5
2021-03-11T08:28:33.236+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"error","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":3,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.240+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:28:33 Changing config key indexer.settings.log_level to value info
2021-03-11T08:28:33.256+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"error","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
--- PASS: TestRestoreDefaultSettings (0.17s)
=== RUN   TestStat_ItemsCount
2021/03/11 08:28:33 In TestStat_ItemsCount()
2021/03/11 08:28:33 In DropAllSecondaryIndexes()
2021/03/11 08:28:33 Index found:  idx1_age_regular
2021-03-11T08:28:33.286+05:30 [Info] DropIndex 15045950942815761049 ...
2021-03-11T08:28:33.315+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:28:33.367+05:30 [Info] metadata provider version changed 1363 -> 1365
2021-03-11T08:28:33.367+05:30 [Info] switched currmeta from 1363 -> 1365 force false 
2021-03-11T08:28:33.367+05:30 [Info] DropIndex 15045950942815761049 - elapsed(80.996288ms), err()
2021/03/11 08:28:33 Dropped index idx1_age_regular
2021/03/11 08:28:33 Index found:  idx3_age_regular
2021-03-11T08:28:33.367+05:30 [Info] DropIndex 5151378705511894390 ...
2021-03-11T08:28:33.467+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.467+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.484+05:30 [Info] switched currmeta from 1365 -> 1366 force true 
2021-03-11T08:28:33.486+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.486+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.493+05:30 [Info] metadata provider version changed 1366 -> 1367
2021-03-11T08:28:33.493+05:30 [Info] switched currmeta from 1366 -> 1367 force false 
2021-03-11T08:28:33.493+05:30 [Info] DropIndex 5151378705511894390 - elapsed(125.40237ms), err()
2021/03/11 08:28:33 Dropped index idx3_age_regular
2021/03/11 08:28:33 Index found:  idx2_company_regular
2021-03-11T08:28:33.493+05:30 [Info] DropIndex 1180698035913475078 ...
2021-03-11T08:28:33.494+05:30 [Info] switched currmeta from 1360 -> 1364 force true 
2021-03-11T08:28:33.675+05:30 [Info] metadata provider version changed 1367 -> 1369
2021-03-11T08:28:33.675+05:30 [Info] switched currmeta from 1367 -> 1369 force false 
2021-03-11T08:28:33.675+05:30 [Info] DropIndex 1180698035913475078 - elapsed(181.922066ms), err()
2021/03/11 08:28:33 Dropped index idx2_company_regular
2021/03/11 08:28:33 Emptying the default bucket
2021-03-11T08:28:33.681+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.681+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.683+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.683+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.707+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:33.709+05:30 [Info] switched currmeta from 1364 -> 1366 force true 
2021-03-11T08:28:33.786+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:28:33.895+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.895+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.901+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021-03-11T08:28:33.904+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.904+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.924+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:33.983+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.983+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.986+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:33.988+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:33.988+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:33.991+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021-03-11T08:28:34.909+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:28:34.964+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:34.964+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:34.971+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:34.971+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:34.972+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021-03-11T08:28:34.974+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021/03/11 08:28:36 Flush Enabled on bucket default, responseBody: 
2021-03-11T08:28:36.805+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:28:36.998+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:36.998+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:37.008+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:37.008+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:37.011+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021-03-11T08:28:37.015+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:37.443+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:28:37.500+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:37.500+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:37.505+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:37.505+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:37.507+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021-03-11T08:28:37.514+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:45.730+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:28:45.792+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:45.792+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:45.833+05:30 [Info] switched currmeta from 1369 -> 1369 force true 
2021-03-11T08:28:45.837+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:28:45.837+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:28:45.842+05:30 [Info] switched currmeta from 1366 -> 1366 force true 
2021/03/11 08:29:15 Flushed the bucket default, Response body: 
2021/03/11 08:29:15 Generating JSON docs
2021/03/11 08:29:15 Setting initial JSON docs in KV
2021-03-11T08:29:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:29:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:29:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:29:19.426+05:30 [Info] average scan response {216 ms}
2021-03-11T08:29:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:29:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:29:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:29:20.068+05:30 [Info] average scan response {10 ms}
2021/03/11 08:29:20 Creating a 2i
2021-03-11T08:29:20.423+05:30 [Info] CreateIndex default _default _default index_test1 ...
2021-03-11T08:29:20.424+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:29:20.426+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_test1)
2021-03-11T08:29:20.426+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_test1
2021-03-11T08:29:20.426+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:29:20.548+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.549+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.551+05:30 [Info] switched currmeta from 1366 -> 1370 force true 
2021-03-11T08:29:20.556+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.556+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.562+05:30 [Info] switched currmeta from 1369 -> 1373 force true 
2021-03-11T08:29:20.619+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.619+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.625+05:30 [Info] switched currmeta from 1370 -> 1371 force true 
2021-03-11T08:29:20.640+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.640+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.653+05:30 [Info] switched currmeta from 1373 -> 1374 force true 
2021-03-11T08:29:20.725+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.725+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.728+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:20.728+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:20.732+05:30 [Info] switched currmeta from 1371 -> 1371 force true 
2021-03-11T08:29:20.736+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:21.361+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:29:22.371+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:29:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":12,"gsi_scan_duration":205972321,"gsi_throttle_duration":11633992,"gsi_prime_duration":80238905,"gsi_blocked_duration":2631874,"gsi_total_temp_files":0}
2021-03-11T08:29:24.126+05:30 [Info] CreateIndex 16184713894069550841 default _default _default/index_test1 using:plasma exprType:N1QL whereExpr:() secExprs:([`company`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.702189171s) err()
2021/03/11 08:29:24 Created the secondary index index_test1. Waiting for it to become active
2021-03-11T08:29:24.126+05:30 [Info] metadata provider version changed 1374 -> 1375
2021-03-11T08:29:24.126+05:30 [Info] switched currmeta from 1374 -> 1375 force false 
2021/03/11 08:29:24 Index is now active
2021-03-11T08:29:24.250+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:24.250+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:24.251+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:24.251+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:24.253+05:30 [Info] switched currmeta from 1371 -> 1372 force true 
2021-03-11T08:29:24.256+05:30 [Info] switched currmeta from 1375 -> 1375 force true 
2021-03-11T08:29:24.945+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:29:25.093+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:25.093+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:25.096+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:25.096+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:25.101+05:30 [Info] switched currmeta from 1375 -> 1375 force true 
2021-03-11T08:29:25.103+05:30 [Info] switched currmeta from 1372 -> 1372 force true 
2021/03/11 08:29:29 items_count stat is 10000
--- PASS: TestStat_ItemsCount (55.87s)
=== RUN   TestRangeArrayIndex_Distinct
2021/03/11 08:29:29 In TestRangeArrayIndex_Distinct()
2021/03/11 08:29:29 In DropAllSecondaryIndexes()
2021/03/11 08:29:29 Index found:  index_test1
2021-03-11T08:29:29.159+05:30 [Info] DropIndex 16184713894069550841 ...
2021-03-11T08:29:29.258+05:30 [Info] metadata provider version changed 1375 -> 1377
2021-03-11T08:29:29.258+05:30 [Info] switched currmeta from 1375 -> 1377 force false 
2021-03-11T08:29:29.258+05:30 [Info] DropIndex 16184713894069550841 - elapsed(99.569536ms), err()
2021/03/11 08:29:29 Dropped index index_test1
2021-03-11T08:29:29.371+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:29:29.412+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.412+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.422+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:29.449+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.449+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.459+05:30 [Info] switched currmeta from 1372 -> 1374 force true 
2021-03-11T08:29:29.578+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.578+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.590+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:29.601+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.601+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.606+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:29.667+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.667+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.679+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:29.685+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:29.685+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:29.689+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:29.948+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:29:30.064+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:30.064+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:30.069+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:30.073+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:30.073+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:30.076+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:31.463+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:29:31.533+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:31.533+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:31.545+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:31.547+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:31.547+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:31.551+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:31.624+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:29:31.700+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:31.700+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:31.710+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:31.729+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:31.729+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:31.734+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021-03-11T08:29:38.461+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:29:38.555+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:38.555+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:38.565+05:30 [Info] switched currmeta from 1374 -> 1374 force true 
2021-03-11T08:29:38.575+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:29:38.575+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:29:38.587+05:30 [Info] switched currmeta from 1377 -> 1377 force true 
2021/03/11 08:30:08 Flushed the bucket default, Response body: 
2021-03-11T08:30:08.928+05:30 [Info] CreateIndex default _default _default arridx_friends ...
2021-03-11T08:30:08.929+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:30:08.929+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arridx_friends)
2021-03-11T08:30:08.929+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arridx_friends
2021-03-11T08:30:08.930+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:30:09.036+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.036+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.043+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.043+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.045+05:30 [Info] switched currmeta from 1374 -> 1378 force true 
2021-03-11T08:30:09.049+05:30 [Info] switched currmeta from 1377 -> 1381 force true 
2021-03-11T08:30:09.135+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.136+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.144+05:30 [Info] switched currmeta from 1381 -> 1382 force true 
2021-03-11T08:30:09.156+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.156+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.163+05:30 [Info] switched currmeta from 1378 -> 1379 force true 
2021-03-11T08:30:09.234+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.235+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.237+05:30 [Info] switched currmeta from 1379 -> 1379 force true 
2021-03-11T08:30:09.243+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:09.243+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:09.244+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:11.874+05:30 [Info] CreateIndex 1248698162460142879 default _default _default/arridx_friends using:plasma exprType:N1QL whereExpr:() secExprs:([(distinct (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(2.946015047s) err()
2021/03/11 08:30:11 Created the secondary index arridx_friends. Waiting for it to become active
2021-03-11T08:30:11.874+05:30 [Info] metadata provider version changed 1382 -> 1383
2021-03-11T08:30:11.874+05:30 [Info] switched currmeta from 1382 -> 1383 force false 
2021/03/11 08:30:11 Index is now active
2021-03-11T08:30:11.908+05:30 [Error] transport error between 127.0.0.1:49782->127.0.0.1:9107: write tcp 127.0.0.1:49782->127.0.0.1:9107: write: broken pipe
2021-03-11T08:30:11.908+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:49782->127.0.0.1:9107: write: broken pipe`
2021-03-11T08:30:11.908+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] closing unhealthy connection "127.0.0.1:49782"
2021-03-11T08:30:11.912+05:30 [Warn] scan failed: requestId  queryport 127.0.0.1:9107 inst 8448316991172823919 partition [0]
2021-03-11T08:30:11.912+05:30 [Warn] Scan failed with error for index 1248698162460142879.  Trying scan again with replica, reqId: :  write tcp 127.0.0.1:49782->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2021-03-11T08:30:11.913+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021-03-11T08:30:11.913+05:30 [Warn] Fail to find indexers to satisfy query request.  Trying scan again for index 1248698162460142879, reqId: :  write tcp 127.0.0.1:49782->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2021-03-11T08:30:11.924+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] open new connection ...
2021/03/11 08:30:11 Expected and Actual scan responses are the same
2021-03-11T08:30:12.010+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:12.010+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:12.013+05:30 [Info] switched currmeta from 1383 -> 1383 force true 
2021-03-11T08:30:12.026+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:12.026+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:12.038+05:30 [Info] switched currmeta from 1379 -> 1380 force true 
2021/03/11 08:30:14 Expected and Actual scan responses are the same
--- PASS: TestRangeArrayIndex_Distinct (45.09s)
=== RUN   TestUpdateArrayIndex_Distinct
2021/03/11 08:30:14 In TestUpdateArrayIndex_Distinct()
2021/03/11 08:30:14 In DropAllSecondaryIndexes()
2021/03/11 08:30:14 Index found:  arridx_friends
2021-03-11T08:30:14.249+05:30 [Info] DropIndex 1248698162460142879 ...
2021-03-11T08:30:14.327+05:30 [Info] metadata provider version changed 1383 -> 1385
2021-03-11T08:30:14.327+05:30 [Info] switched currmeta from 1383 -> 1385 force false 
2021-03-11T08:30:14.327+05:30 [Info] DropIndex 1248698162460142879 - elapsed(77.810238ms), err()
2021/03/11 08:30:14 Dropped index arridx_friends
2021-03-11T08:30:14.446+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:30:14.587+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:14.587+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:14.627+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:14.627+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:14.627+05:30 [Info] switched currmeta from 1380 -> 1382 force true 
2021-03-11T08:30:14.635+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:14.847+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:14.847+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:14.872+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:14.903+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:14.903+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:14.916+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:15.018+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:15.018+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:15.023+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:30:15.047+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:15.060+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:15.060+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:15.086+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:15.200+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:15.200+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:15.203+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:15.214+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:15.214+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:15.219+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:19.164+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:30:19.242+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:19.242+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:19.245+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:19.245+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:19.251+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:19.251+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:19.251+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:30:19.343+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:19.343+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:19.343+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:19.343+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:19.346+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:19.346+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:30:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:30:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:30:19.426+05:30 [Info] average scan response {1111 ms}
2021-03-11T08:30:19.961+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:30:20.002+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:20.002+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:20.009+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:20.010+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:20.010+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:20.014+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021-03-11T08:30:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:30:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:30:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:30:20.068+05:30 [Info] average scan response {10 ms}
2021-03-11T08:30:21.381+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:30:22.392+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:30:23.964+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:30:24.030+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:24.030+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:24.042+05:30 [Info] switched currmeta from 1382 -> 1382 force true 
2021-03-11T08:30:24.051+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:24.051+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:24.056+05:30 [Info] switched currmeta from 1385 -> 1385 force true 
2021/03/11 08:30:53 Flushed the bucket default, Response body: 
2021-03-11T08:30:54.483+05:30 [Info] CreateIndex default _default _default arridx_friends ...
2021-03-11T08:30:54.483+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:30:54.489+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arridx_friends)
2021-03-11T08:30:54.489+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arridx_friends
2021-03-11T08:30:54.489+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:30:54.598+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.598+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.602+05:30 [Info] switched currmeta from 1382 -> 1384 force true 
2021-03-11T08:30:54.606+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.606+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.615+05:30 [Info] switched currmeta from 1385 -> 1387 force true 
2021-03-11T08:30:54.673+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.673+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.686+05:30 [Info] switched currmeta from 1387 -> 1390 force true 
2021-03-11T08:30:54.690+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.690+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.704+05:30 [Info] switched currmeta from 1384 -> 1387 force true 
2021-03-11T08:30:54.807+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.807+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.815+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:54.816+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:54.817+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:30:54.820+05:30 [Info] switched currmeta from 1387 -> 1387 force true 
2021-03-11T08:30:57.416+05:30 [Info] CreateIndex 1687288245245378845 default _default _default/arridx_friends using:plasma exprType:N1QL whereExpr:() secExprs:([(distinct (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(2.933546812s) err()
2021/03/11 08:30:57 Created the secondary index arridx_friends. Waiting for it to become active
2021-03-11T08:30:57.416+05:30 [Info] metadata provider version changed 1390 -> 1391
2021-03-11T08:30:57.416+05:30 [Info] switched currmeta from 1390 -> 1391 force false 
2021/03/11 08:30:57 Index is now active
2021/03/11 08:30:57 Expected and Actual scan responses are the same
2021-03-11T08:30:57.539+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:57.539+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:57.542+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:30:57.542+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:30:57.547+05:30 [Info] switched currmeta from 1387 -> 1388 force true 
2021-03-11T08:30:57.552+05:30 [Info] switched currmeta from 1391 -> 1391 force true 
2021-03-11T08:30:59.600+05:30 [Info] Rollback time has changed for index inst 4291200842892156397. New rollback time 1615431494324916697
2021-03-11T08:30:59.600+05:30 [Info] Rollback time has changed for index inst 4291200842892156397. New rollback time 1615431494324916697
2021-03-11T08:31:00.011+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:00.167+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.167+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.174+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.174+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.180+05:30 [Info] switched currmeta from 1388 -> 1388 force true 
2021-03-11T08:31:00.182+05:30 [Info] switched currmeta from 1391 -> 1391 force true 
2021/03/11 08:31:00 Expected and Actual scan responses are the same
2021/03/11 08:31:00 Expected and Actual scan responses are the same
--- PASS: TestUpdateArrayIndex_Distinct (46.09s)
=== RUN   TestRangeArrayIndex_Duplicate
2021/03/11 08:31:00 In TestRangeArrayIndex_Duplicate()
2021/03/11 08:31:00 In DropAllSecondaryIndexes()
2021/03/11 08:31:00 Index found:  arridx_friends
2021-03-11T08:31:00.335+05:30 [Info] DropIndex 1687288245245378845 ...
2021-03-11T08:31:00.461+05:30 [Info] metadata provider version changed 1391 -> 1393
2021-03-11T08:31:00.461+05:30 [Info] switched currmeta from 1391 -> 1393 force false 
2021-03-11T08:31:00.461+05:30 [Info] DropIndex 1687288245245378845 - elapsed(126.117296ms), err()
2021/03/11 08:31:00 Dropped index arridx_friends
2021-03-11T08:31:00.611+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:31:00.691+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.691+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.726+05:30 [Info] switched currmeta from 1388 -> 1390 force true 
2021-03-11T08:31:00.756+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.756+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.787+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:00.862+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.862+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.869+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:00.900+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.900+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.910+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:00.944+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.944+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.956+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:00.987+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:00.987+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:00.991+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:04.632+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:04.731+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:04.731+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:04.740+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:04.746+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:04.746+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:04.755+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:04.972+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:05.029+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:05.029+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:05.046+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:05.056+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:05.056+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:05.061+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:08.346+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:08.423+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:08.423+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:08.429+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:08.433+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:08.433+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:08.444+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:09.972+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:10.020+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:10.021+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:10.021+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:10.021+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:10.024+05:30 [Info] switched currmeta from 1393 -> 1393 force true 
2021-03-11T08:31:10.025+05:30 [Info] switched currmeta from 1390 -> 1390 force true 
2021-03-11T08:31:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:31:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:31:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:31:19.426+05:30 [Info] average scan response {400 ms}
2021-03-11T08:31:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:31:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:31:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:31:20.068+05:30 [Info] average scan response {10 ms}
2021-03-11T08:31:21.398+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:31:22.414+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021/03/11 08:31:38 Flushed the bucket default, Response body: 
2021-03-11T08:31:38.853+05:30 [Info] CreateIndex default _default _default arridx_friends ...
2021-03-11T08:31:38.854+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:31:38.855+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arridx_friends)
2021-03-11T08:31:38.855+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arridx_friends
2021-03-11T08:31:38.855+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:31:38.964+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:38.964+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:38.967+05:30 [Info] switched currmeta from 1390 -> 1393 force true 
2021-03-11T08:31:38.976+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:38.976+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:38.979+05:30 [Info] switched currmeta from 1393 -> 1396 force true 
2021-03-11T08:31:39.036+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:39.036+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:39.040+05:30 [Info] switched currmeta from 1393 -> 1395 force true 
2021-03-11T08:31:39.041+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:39.041+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:39.049+05:30 [Info] switched currmeta from 1396 -> 1398 force true 
2021-03-11T08:31:39.134+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:39.134+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:39.148+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:39.171+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:39.171+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:39.173+05:30 [Info] switched currmeta from 1395 -> 1395 force true 
2021-03-11T08:31:41.671+05:30 [Info] CreateIndex 2728124706480152633 default _default _default/arridx_friends using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(2.818086637s) err()
2021/03/11 08:31:41 Created the secondary index arridx_friends. Waiting for it to become active
2021-03-11T08:31:41.671+05:30 [Info] metadata provider version changed 1398 -> 1399
2021-03-11T08:31:41.671+05:30 [Info] switched currmeta from 1398 -> 1399 force false 
2021/03/11 08:31:41 Index is now active
2021/03/11 08:31:41 Expected and Actual scan responses are the same
2021-03-11T08:31:41.841+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:41.841+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:41.844+05:30 [Info] switched currmeta from 1395 -> 1396 force true 
2021-03-11T08:31:41.861+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:41.861+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:41.864+05:30 [Info] switched currmeta from 1399 -> 1399 force true 
2021/03/11 08:31:44 Expected and Actual scan responses are the same
--- PASS: TestRangeArrayIndex_Duplicate (43.69s)
=== RUN   TestUpdateArrayIndex_Duplicate
2021/03/11 08:31:44 In TestUpdateArrayIndex_Duplicate()
2021/03/11 08:31:44 In DropAllSecondaryIndexes()
2021/03/11 08:31:44 Index found:  arridx_friends
2021-03-11T08:31:44.021+05:30 [Info] DropIndex 2728124706480152633 ...
2021-03-11T08:31:44.108+05:30 [Info] metadata provider version changed 1399 -> 1401
2021-03-11T08:31:44.108+05:30 [Info] switched currmeta from 1399 -> 1401 force false 
2021-03-11T08:31:44.108+05:30 [Info] DropIndex 2728124706480152633 - elapsed(87.152171ms), err()
2021/03/11 08:31:44 Dropped index arridx_friends
2021-03-11T08:31:44.318+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:31:44.370+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.371+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.373+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.373+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.389+05:30 [Info] switched currmeta from 1396 -> 1398 force true 
2021-03-11T08:31:44.390+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:44.579+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.580+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.602+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:44.607+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.607+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.611+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:44.722+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.722+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.727+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:44.727+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:44.729+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:44.735+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:44.992+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:45.080+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:45.080+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:45.080+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:45.080+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:45.084+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:45.088+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:49.026+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:49.116+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:49.116+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:49.122+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:49.122+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:49.122+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:49.142+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:49.150+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:49.208+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:49.208+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:49.224+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:49.228+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:49.228+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:49.233+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:49.991+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:50.043+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:50.043+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:50.049+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:31:50.055+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:50.055+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:50.057+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:53.660+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:31:53.706+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:53.706+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:53.715+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:31:53.715+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:31:53.716+05:30 [Info] switched currmeta from 1401 -> 1401 force true 
2021-03-11T08:31:53.717+05:30 [Info] switched currmeta from 1398 -> 1398 force true 
2021-03-11T08:32:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:32:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:32:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:32:19.426+05:30 [Info] average scan response {1161 ms}
2021-03-11T08:32:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:32:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:32:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:32:20.068+05:30 [Info] average scan response {10 ms}
2021-03-11T08:32:21.417+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:32:22.437+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021/03/11 08:32:23 Flushed the bucket default, Response body: 
2021-03-11T08:32:24.133+05:30 [Info] CreateIndex default _default _default arridx_friends ...
2021-03-11T08:32:24.133+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:32:24.134+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arridx_friends)
2021-03-11T08:32:24.134+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arridx_friends
2021-03-11T08:32:24.134+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:32:24.224+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.224+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.226+05:30 [Info] switched currmeta from 1401 -> 1403 force true 
2021-03-11T08:32:24.228+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.228+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.234+05:30 [Info] switched currmeta from 1398 -> 1400 force true 
2021-03-11T08:32:24.303+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.303+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.309+05:30 [Info] switched currmeta from 1403 -> 1405 force true 
2021-03-11T08:32:24.313+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.313+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.325+05:30 [Info] switched currmeta from 1400 -> 1402 force true 
2021-03-11T08:32:24.414+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.414+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.422+05:30 [Info] switched currmeta from 1405 -> 1406 force true 
2021-03-11T08:32:24.431+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.431+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.437+05:30 [Info] switched currmeta from 1402 -> 1403 force true 
2021-03-11T08:32:24.487+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.487+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.493+05:30 [Info] switched currmeta from 1406 -> 1406 force true 
2021-03-11T08:32:24.505+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:24.505+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:24.513+05:30 [Info] switched currmeta from 1403 -> 1403 force true 
2021-03-11T08:32:27.092+05:30 [Info] CreateIndex 9651789107225683213 default _default _default/arridx_friends using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(2.959532265s) err()
2021/03/11 08:32:27 Created the secondary index arridx_friends. Waiting for it to become active
2021-03-11T08:32:27.092+05:30 [Info] metadata provider version changed 1406 -> 1407
2021-03-11T08:32:27.092+05:30 [Info] switched currmeta from 1406 -> 1407 force false 
2021/03/11 08:32:27 Index is now active
2021/03/11 08:32:27 Expected and Actual scan responses are the same
2021-03-11T08:32:27.257+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:27.257+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:27.261+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:27.261+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:27.262+05:30 [Info] switched currmeta from 1403 -> 1404 force true 
2021-03-11T08:32:27.268+05:30 [Info] switched currmeta from 1407 -> 1407 force true 
2021-03-11T08:32:29.600+05:30 [Info] Rollback time has changed for index inst 14291828829949255972. New rollback time 1615431494324916697
2021-03-11T08:32:29.605+05:30 [Info] Rollback time has changed for index inst 14291828829949255972. New rollback time 1615431494324916697
2021/03/11 08:32:29 Expected and Actual scan responses are the same
2021-03-11T08:32:30.010+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:30.107+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.107+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.108+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.108+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.111+05:30 [Info] switched currmeta from 1407 -> 1407 force true 
2021-03-11T08:32:30.111+05:30 [Info] switched currmeta from 1404 -> 1404 force true 
2021/03/11 08:32:30 Expected and Actual scan responses are the same
--- PASS: TestUpdateArrayIndex_Duplicate (46.10s)
=== RUN   TestArrayIndexCornerCases
2021/03/11 08:32:30 In TestArrayIndexCornerCases()
2021-03-11T08:32:30.118+05:30 [Info] CreateIndex default _default _default arr_single ...
2021-03-11T08:32:30.119+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:32:30.121+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr_single)
2021-03-11T08:32:30.121+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr_single
2021-03-11T08:32:30.121+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:32:30.218+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.218+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.223+05:30 [Info] switched currmeta from 1407 -> 1411 force true 
2021-03-11T08:32:30.237+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.237+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.243+05:30 [Info] switched currmeta from 1404 -> 1408 force true 
2021-03-11T08:32:30.295+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.295+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.308+05:30 [Info] switched currmeta from 1411 -> 1412 force true 
2021-03-11T08:32:30.331+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.331+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.342+05:30 [Info] switched currmeta from 1408 -> 1409 force true 
2021-03-11T08:32:30.406+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.406+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.407+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:30.407+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:30.411+05:30 [Info] switched currmeta from 1412 -> 1412 force true 
2021-03-11T08:32:30.411+05:30 [Info] switched currmeta from 1409 -> 1409 force true 
2021-03-11T08:32:32.992+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:32.992+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:32.993+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:32.994+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:32.999+05:30 [Info] switched currmeta from 1409 -> 1410 force true 
2021-03-11T08:32:33.000+05:30 [Info] switched currmeta from 1412 -> 1413 force true 
2021-03-11T08:32:33.906+05:30 [Info] CreateIndex 10659601813628290750 default _default _default/arr_single using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`arr_tags`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.788380696s) err()
2021/03/11 08:32:33 Created the secondary index arr_single. Waiting for it to become active
2021-03-11T08:32:33.906+05:30 [Info] metadata provider version changed 1413 -> 1414
2021-03-11T08:32:33.906+05:30 [Info] switched currmeta from 1413 -> 1414 force false 
2021/03/11 08:32:33 Index is now active
2021-03-11T08:32:33.907+05:30 [Info] CreateIndex default _default _default arr_leading ...
2021-03-11T08:32:33.908+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:32:33.910+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr_leading)
2021-03-11T08:32:33.910+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr_leading
2021-03-11T08:32:33.910+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:32:34.029+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:34.029+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:34.034+05:30 [Info] switched currmeta from 1414 -> 1418 force true 
2021-03-11T08:32:34.060+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:34.060+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:34.083+05:30 [Info] switched currmeta from 1410 -> 1416 force true 
2021-03-11T08:32:34.233+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:34.233+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:34.242+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:34.242+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:34.249+05:30 [Info] switched currmeta from 1416 -> 1416 force true 
2021-03-11T08:32:34.254+05:30 [Info] switched currmeta from 1418 -> 1419 force true 
2021-03-11T08:32:34.305+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:34.305+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:34.309+05:30 [Info] switched currmeta from 1419 -> 1419 force true 
2021-03-11T08:32:35.040+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:35.159+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:35.159+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:35.180+05:30 [Info] switched currmeta from 1416 -> 1416 force true 
2021-03-11T08:32:35.187+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:35.187+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:35.208+05:30 [Info] switched currmeta from 1419 -> 1419 force true 
2021-03-11T08:32:36.689+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:36.689+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:36.694+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:36.694+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:36.697+05:30 [Info] switched currmeta from 1419 -> 1420 force true 
2021-03-11T08:32:36.697+05:30 [Info] switched currmeta from 1416 -> 1417 force true 
2021-03-11T08:32:37.659+05:30 [Info] CreateIndex 9111288755875467698 default _default _default/arr_leading using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`arr_tags`)) `arr_name`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.752311276s) err()
2021/03/11 08:32:37 Created the secondary index arr_leading. Waiting for it to become active
2021-03-11T08:32:37.659+05:30 [Info] metadata provider version changed 1420 -> 1421
2021-03-11T08:32:37.659+05:30 [Info] switched currmeta from 1420 -> 1421 force false 
2021/03/11 08:32:37 Index is now active
2021-03-11T08:32:37.660+05:30 [Info] CreateIndex default _default _default arr_nonleading ...
2021-03-11T08:32:37.660+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:32:37.662+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr_nonleading)
2021-03-11T08:32:37.662+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr_nonleading
2021-03-11T08:32:37.662+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:32:37.767+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:37.767+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:37.784+05:30 [Info] switched currmeta from 1421 -> 1423 force true 
2021-03-11T08:32:37.786+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:37.786+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:37.789+05:30 [Info] switched currmeta from 1417 -> 1420 force true 
2021-03-11T08:32:37.893+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:37.893+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:37.901+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:37.901+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:37.902+05:30 [Info] switched currmeta from 1423 -> 1426 force true 
2021-03-11T08:32:37.909+05:30 [Info] switched currmeta from 1420 -> 1423 force true 
2021-03-11T08:32:38.049+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:38.051+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:38.049+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:38.051+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:38.053+05:30 [Info] switched currmeta from 1423 -> 1423 force true 
2021-03-11T08:32:38.055+05:30 [Info] switched currmeta from 1426 -> 1426 force true 
2021-03-11T08:32:39.599+05:30 [Info] Rollback time has changed for index inst 2623540855016290534. New rollback time 1615431494324916697
2021-03-11T08:32:39.599+05:30 [Info] Rollback time has changed for index inst 2623540855016290534. New rollback time 1615431494324916697
2021-03-11T08:32:40.020+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:40.174+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:40.174+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:40.184+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:40.184+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:40.189+05:30 [Info] switched currmeta from 1426 -> 1426 force true 
2021-03-11T08:32:40.198+05:30 [Info] switched currmeta from 1423 -> 1423 force true 
2021-03-11T08:32:40.514+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:40.514+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:40.515+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:40.515+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:40.522+05:30 [Info] switched currmeta from 1426 -> 1427 force true 
2021-03-11T08:32:40.523+05:30 [Info] switched currmeta from 1423 -> 1424 force true 
2021-03-11T08:32:41.464+05:30 [Info] CreateIndex 17549841499594479820 default _default _default/arr_nonleading using:plasma exprType:N1QL whereExpr:() secExprs:([`arr_name` (all (`arr_tags`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.804490138s) err()
2021/03/11 08:32:41 Created the secondary index arr_nonleading. Waiting for it to become active
2021-03-11T08:32:41.464+05:30 [Info] metadata provider version changed 1427 -> 1428
2021-03-11T08:32:41.464+05:30 [Info] switched currmeta from 1427 -> 1428 force false 
2021/03/11 08:32:41 Index is now active
2021/03/11 08:32:41 

--------ScanAll for EMPTY array--------
2021/03/11 08:32:41 Count of scanResults is 0
2021/03/11 08:32:41 Count of scanResults is 0
2021-03-11T08:32:41.571+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:41.571+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:41.571+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:41.571+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["VbNd1Y" Missing field or index.] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 

--------ScanAll for MISSING array--------
2021-03-11T08:32:41.577+05:30 [Info] switched currmeta from 1428 -> 1428 force true 
2021-03-11T08:32:41.577+05:30 [Info] switched currmeta from 1424 -> 1425 force true 
2021/03/11 08:32:41 Count of scanResults is 0
2021/03/11 08:32:41 Count of scanResults is 0
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["DBoQw" Missing field or index.] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 

--------ScanAll for NULL array--------
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values [null] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values [null "Mio6Q"] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["Mio6Q" null] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 

--------ScanAll for SCALARVALUE array--------
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["IamScalar"] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["IamScalar" "SSkiv#"] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["SSkiv#" "IamScalar"] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 

--------ScanAll for SCALAROBJECT array--------
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values [{"1":"abc","2":"def"}] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values [{"1":"abc","2":"def"} "aLkFb"] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
2021/03/11 08:32:41 Count of scanResults is 1
2021/03/11 08:32:41 Key: string 6581902736822742951  Value: value.Values ["aLkFb" {"1":"abc","2":"def"}] false
2021/03/11 08:32:41 Expected and Actual scan responses are the same
--- PASS: TestArrayIndexCornerCases (11.59s)
=== RUN   TestArraySizeIncreaseDecrease1
2021/03/11 08:32:41 In TestArraySizeIncreaseDecrease1()
2021/03/11 08:32:41 In DropAllSecondaryIndexes()
2021/03/11 08:32:41 Index found:  arr_single
2021-03-11T08:32:41.703+05:30 [Info] DropIndex 10659601813628290750 ...
2021-03-11T08:32:41.788+05:30 [Info] metadata provider version changed 1428 -> 1430
2021-03-11T08:32:41.788+05:30 [Info] switched currmeta from 1428 -> 1430 force false 
2021-03-11T08:32:41.788+05:30 [Info] DropIndex 10659601813628290750 - elapsed(84.658165ms), err()
2021/03/11 08:32:41 Dropped index arr_single
2021/03/11 08:32:41 Index found:  arr_leading
2021-03-11T08:32:41.788+05:30 [Info] DropIndex 9111288755875467698 ...
2021-03-11T08:32:41.869+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:41.869+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:41.874+05:30 [Info] switched currmeta from 1425 -> 1428 force true 
2021-03-11T08:32:41.881+05:30 [Info] metadata provider version changed 1430 -> 1432
2021-03-11T08:32:41.882+05:30 [Info] switched currmeta from 1430 -> 1432 force false 
2021-03-11T08:32:41.882+05:30 [Info] DropIndex 9111288755875467698 - elapsed(93.698395ms), err()
2021/03/11 08:32:41 Dropped index arr_leading
2021/03/11 08:32:41 Index found:  arr_nonleading
2021-03-11T08:32:41.882+05:30 [Info] DropIndex 17549841499594479820 ...
2021-03-11T08:32:41.931+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:41.931+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:41.949+05:30 [Info] switched currmeta from 1432 -> 1432 force true 
2021-03-11T08:32:42.004+05:30 [Info] metadata provider version changed 1432 -> 1434
2021-03-11T08:32:42.005+05:30 [Info] switched currmeta from 1432 -> 1434 force false 
2021-03-11T08:32:42.005+05:30 [Info] DropIndex 17549841499594479820 - elapsed(122.933852ms), err()
2021/03/11 08:32:42 Dropped index arr_nonleading
2021/03/11 08:32:42 Index found:  arridx_friends
2021-03-11T08:32:42.005+05:30 [Info] DropIndex 9651789107225683213 ...
2021-03-11T08:32:42.032+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.032+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.040+05:30 [Info] switched currmeta from 1428 -> 1431 force true 
2021-03-11T08:32:42.125+05:30 [Info] metadata provider version changed 1434 -> 1436
2021-03-11T08:32:42.125+05:30 [Info] switched currmeta from 1434 -> 1436 force false 
2021-03-11T08:32:42.125+05:30 [Info] DropIndex 9651789107225683213 - elapsed(120.107329ms), err()
2021/03/11 08:32:42 Dropped index arridx_friends
2021-03-11T08:32:42.150+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.150+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.197+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:42.287+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:32:42.345+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.345+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.359+05:30 [Info] switched currmeta from 1431 -> 1433 force true 
2021-03-11T08:32:42.472+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.472+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.479+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:42.489+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.489+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.515+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:42.564+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.564+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.572+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:42.587+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:42.587+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:42.591+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:44.504+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:44.581+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:44.581+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:44.581+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:44.587+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:44.606+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:44.606+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:44.610+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:44.657+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:44.658+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:44.663+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:44.679+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:44.679+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:44.680+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:45.003+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:45.054+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:45.054+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:45.055+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:45.055+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:45.058+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:45.058+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:50.022+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:50.116+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:50.116+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:50.117+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:50.117+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:50.126+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:32:50.126+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:51.253+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:32:51.373+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:51.373+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:51.380+05:30 [Info] switched currmeta from 1433 -> 1433 force true 
2021-03-11T08:32:51.384+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:32:51.384+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:32:51.389+05:30 [Info] switched currmeta from 1436 -> 1436 force true 
2021-03-11T08:33:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:33:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:33:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:33:19.426+05:30 [Info] average scan response {2 ms}
2021-03-11T08:33:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:33:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:33:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:33:20.068+05:30 [Info] average scan response {10 ms}
2021/03/11 08:33:20 Flushed the bucket default, Response body: 
2021-03-11T08:33:20.929+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:33:20 Changing config key indexer.settings.allow_large_keys to value false
2021-03-11T08:33:20.958+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:33:21.438+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:33:21.942+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:33:21 Changing config key indexer.settings.max_seckey_size to value 100
2021-03-11T08:33:21.951+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:33:21 Changing config key indexer.settings.max_array_seckey_size to value 2000
2021-03-11T08:33:21.966+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:33:22.005+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":2000,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:33:22.466+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021/03/11 08:33:22 Start of createArrayDocs()
2021/03/11 08:33:40 End of createArrayDocs()
2021/03/11 08:33:40 Start of createArrayDocs()
2021/03/11 08:33:40 End of createArrayDocs()
2021-03-11T08:33:40.681+05:30 [Info] CreateIndex default _default _default arr1 ...
2021-03-11T08:33:40.682+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:33:40.683+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr1)
2021-03-11T08:33:40.683+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr1
2021-03-11T08:33:40.683+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:33:40.776+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.776+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.777+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.777+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.779+05:30 [Info] switched currmeta from 1433 -> 1436 force true 
2021-03-11T08:33:40.784+05:30 [Info] switched currmeta from 1436 -> 1439 force true 
2021-03-11T08:33:40.857+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.857+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.872+05:30 [Info] switched currmeta from 1436 -> 1438 force true 
2021-03-11T08:33:40.877+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.877+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.890+05:30 [Info] switched currmeta from 1439 -> 1441 force true 
2021-03-11T08:33:40.965+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.965+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.969+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:40.969+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:40.976+05:30 [Info] switched currmeta from 1441 -> 1441 force true 
2021-03-11T08:33:40.978+05:30 [Info] switched currmeta from 1438 -> 1438 force true 
2021-03-11T08:33:43.726+05:30 [Info] CreateIndex 12968997275625704714 default _default _default/arr1 using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.045391461s) err()
2021/03/11 08:33:43 Created the secondary index arr1. Waiting for it to become active
2021-03-11T08:33:43.726+05:30 [Info] metadata provider version changed 1441 -> 1442
2021-03-11T08:33:43.727+05:30 [Info] switched currmeta from 1441 -> 1442 force false 
2021/03/11 08:33:43 Index is now active
2021-03-11T08:33:43.727+05:30 [Info] CreateIndex default _default _default arr2 ...
2021-03-11T08:33:43.727+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:33:43.738+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr2)
2021-03-11T08:33:43.738+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr2
2021-03-11T08:33:43.738+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:33:43.861+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:43.861+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:43.869+05:30 [Info] switched currmeta from 1442 -> 1446 force true 
2021-03-11T08:33:43.871+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:43.872+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:43.880+05:30 [Info] switched currmeta from 1438 -> 1443 force true 
2021-03-11T08:33:43.969+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:43.969+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:43.976+05:30 [Info] switched currmeta from 1446 -> 1447 force true 
2021-03-11T08:33:43.983+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:43.984+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:43.992+05:30 [Info] switched currmeta from 1443 -> 1444 force true 
2021-03-11T08:33:44.129+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:44.129+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:44.134+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:44.134+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:44.138+05:30 [Info] switched currmeta from 1447 -> 1447 force true 
2021-03-11T08:33:44.140+05:30 [Info] switched currmeta from 1444 -> 1444 force true 
2021-03-11T08:33:46.880+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:46.880+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:46.892+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:46.892+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:46.896+05:30 [Info] switched currmeta from 1447 -> 1448 force true 
2021-03-11T08:33:46.897+05:30 [Info] switched currmeta from 1444 -> 1445 force true 
2021-03-11T08:33:48.592+05:30 [Info] CreateIndex 836583527844035339 default _default _default/arr2 using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`name`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.865324916s) err()
2021/03/11 08:33:48 Created the secondary index arr2. Waiting for it to become active
2021-03-11T08:33:48.592+05:30 [Info] metadata provider version changed 1448 -> 1449
2021-03-11T08:33:48.592+05:30 [Info] switched currmeta from 1448 -> 1449 force false 
2021/03/11 08:33:48 Index is now active
2021-03-11T08:33:48.593+05:30 [Info] CreateIndex default _default _default idx3 ...
2021-03-11T08:33:48.593+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:33:48.597+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, idx3)
2021-03-11T08:33:48.597+05:30 [Info] Skipping planner for creation of the index default:_default:_default:idx3
2021-03-11T08:33:48.597+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:33:48.711+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.711+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.714+05:30 [Info] switched currmeta from 1449 -> 1452 force true 
2021-03-11T08:33:48.715+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.715+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.727+05:30 [Info] switched currmeta from 1445 -> 1449 force true 
2021-03-11T08:33:48.820+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.820+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.823+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.823+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.826+05:30 [Info] switched currmeta from 1452 -> 1454 force true 
2021-03-11T08:33:48.832+05:30 [Info] switched currmeta from 1449 -> 1451 force true 
2021-03-11T08:33:48.933+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.933+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.934+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:48.934+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:48.944+05:30 [Info] switched currmeta from 1454 -> 1454 force true 
2021-03-11T08:33:48.946+05:30 [Info] switched currmeta from 1451 -> 1451 force true 
2021-03-11T08:33:49.599+05:30 [Info] Rollback time has changed for index inst 16654490053271344296. New rollback time 1615431494324916697
2021-03-11T08:33:49.599+05:30 [Info] Rollback time has changed for index inst 10515533091024265430. New rollback time 1615431494324916697
2021-03-11T08:33:49.599+05:30 [Info] Rollback time has changed for index inst 10515533091024265430. New rollback time 1615431494324916697
2021-03-11T08:33:49.599+05:30 [Info] Rollback time has changed for index inst 16654490053271344296. New rollback time 1615431494324916697
2021-03-11T08:33:50.048+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:33:50.164+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:50.164+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:50.171+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:50.171+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:50.172+05:30 [Info] switched currmeta from 1454 -> 1454 force true 
2021-03-11T08:33:50.181+05:30 [Info] switched currmeta from 1451 -> 1451 force true 
2021-03-11T08:33:51.252+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:51.252+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:51.256+05:30 [Info] switched currmeta from 1454 -> 1455 force true 
2021-03-11T08:33:51.259+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:51.259+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:51.262+05:30 [Info] switched currmeta from 1451 -> 1452 force true 
2021-03-11T08:33:52.194+05:30 [Info] CreateIndex 17747047585099660178 default _default _default/idx3 using:plasma exprType:N1QL whereExpr:() secExprs:([`name`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.601775413s) err()
2021/03/11 08:33:52 Created the secondary index idx3. Waiting for it to become active
2021-03-11T08:33:52.195+05:30 [Info] metadata provider version changed 1455 -> 1456
2021-03-11T08:33:52.195+05:30 [Info] switched currmeta from 1455 -> 1456 force false 
2021/03/11 08:33:52 Index is now active
2021/03/11 08:33:52 Using n1ql client
2021-03-11T08:33:52.263+05:30 [Info] metadata provider version changed 1452 -> 1453
2021-03-11T08:33:52.263+05:30 [Info] switched currmeta from 1452 -> 1453 force false 
2021-03-11T08:33:52.264+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:33:52 Length of scanResults = 10
2021/03/11 08:33:52 Changing config key indexer.settings.max_seckey_size to value 4096
2021/03/11 08:33:52 Changing config key indexer.settings.max_array_seckey_size to value 51200
2021-03-11T08:33:52.392+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:52.392+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:52.400+05:30 [Info] switched currmeta from 1453 -> 1453 force true 
2021-03-11T08:33:52.416+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:52.416+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:52.424+05:30 [Info] switched currmeta from 1456 -> 1456 force true 
2021-03-11T08:33:52.428+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":51200,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4096,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:33:55.042+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:33:55.118+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:55.118+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:55.123+05:30 [Info] switched currmeta from 1456 -> 1456 force true 
2021-03-11T08:33:55.125+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:33:55.125+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:33:55.131+05:30 [Info] switched currmeta from 1453 -> 1453 force true 
2021/03/11 08:33:59 Expected and Actual scan responses are the same
2021/03/11 08:33:59 Using n1ql client
2021-03-11T08:33:59.007+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:33:59 Expected and Actual scan responses are the same
2021/03/11 08:33:59 Using n1ql client
2021/03/11 08:33:59 Expected and Actual scan responses are the same
2021/03/11 08:33:59 Changing config key indexer.settings.max_seckey_size to value 100
2021/03/11 08:33:59 Changing config key indexer.settings.max_array_seckey_size to value 2200
2021-03-11T08:33:59.077+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":2200,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:33:59.599+05:30 [Info] Rollback time has changed for index inst 10830170309824952603. New rollback time 1615431494324916697
2021-03-11T08:33:59.599+05:30 [Info] Rollback time has changed for index inst 10830170309824952603. New rollback time 1615431494324916697
2021-03-11T08:34:00.036+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:34:00.078+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:00.078+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:00.079+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:00.080+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:00.081+05:30 [Info] switched currmeta from 1456 -> 1456 force true 
2021-03-11T08:34:00.083+05:30 [Info] switched currmeta from 1453 -> 1453 force true 
2021/03/11 08:34:05 Using n1ql client
2021-03-11T08:34:05.078+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:34:05 Length of scanResults = 10
2021/03/11 08:34:05 Changing config key indexer.settings.max_seckey_size to value 4608
2021/03/11 08:34:05 Changing config key indexer.settings.max_array_seckey_size to value 10240
2021-03-11T08:34:05.147+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":false,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
--- PASS: TestArraySizeIncreaseDecrease1 (84.42s)
=== RUN   TestArraySizeIncreaseDecrease2
2021/03/11 08:34:06 In TestArraySizeIncreaseDecrease2()
2021/03/11 08:34:06 In DropAllSecondaryIndexes()
2021/03/11 08:34:06 Index found:  arr2
2021-03-11T08:34:06.123+05:30 [Info] DropIndex 836583527844035339 ...
2021-03-11T08:34:06.178+05:30 [Info] metadata provider version changed 1456 -> 1458
2021-03-11T08:34:06.178+05:30 [Info] switched currmeta from 1456 -> 1458 force false 
2021-03-11T08:34:06.178+05:30 [Info] DropIndex 836583527844035339 - elapsed(55.08087ms), err()
2021/03/11 08:34:06 Dropped index arr2
2021/03/11 08:34:06 Index found:  idx3
2021-03-11T08:34:06.178+05:30 [Info] DropIndex 17747047585099660178 ...
2021-03-11T08:34:06.264+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.264+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.275+05:30 [Info] switched currmeta from 1458 -> 1459 force true 
2021-03-11T08:34:06.278+05:30 [Info] metadata provider version changed 1459 -> 1460
2021-03-11T08:34:06.278+05:30 [Info] switched currmeta from 1459 -> 1460 force false 
2021-03-11T08:34:06.278+05:30 [Info] DropIndex 17747047585099660178 - elapsed(99.638868ms), err()
2021/03/11 08:34:06 Dropped index idx3
2021/03/11 08:34:06 Index found:  arr1
2021-03-11T08:34:06.278+05:30 [Info] DropIndex 12968997275625704714 ...
2021-03-11T08:34:06.302+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.302+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.321+05:30 [Info] switched currmeta from 1453 -> 1457 force true 
2021-03-11T08:34:06.394+05:30 [Info] metadata provider version changed 1460 -> 1462
2021-03-11T08:34:06.394+05:30 [Info] switched currmeta from 1460 -> 1462 force false 
2021-03-11T08:34:06.394+05:30 [Info] DropIndex 12968997275625704714 - elapsed(115.840468ms), err()
2021/03/11 08:34:06 Dropped index arr1
2021-03-11T08:34:06.468+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.468+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.528+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:06.562+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:34:06.589+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.589+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.603+05:30 [Info] switched currmeta from 1457 -> 1459 force true 
2021-03-11T08:34:06.746+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.746+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.754+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.754+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.770+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:06.772+05:30 [Info] switched currmeta from 1459 -> 1459 force true 
2021-03-11T08:34:06.868+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.868+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.873+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:06.873+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:06.878+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:06.879+05:30 [Info] switched currmeta from 1459 -> 1459 force true 
2021-03-11T08:34:07.779+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:34:07.864+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:07.864+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:07.867+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:07.876+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:07.876+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:07.878+05:30 [Info] switched currmeta from 1459 -> 1459 force true 
2021-03-11T08:34:15.055+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:34:15.127+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:15.127+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:15.138+05:30 [Info] switched currmeta from 1459 -> 1459 force true 
2021-03-11T08:34:15.152+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:15.152+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:15.154+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:15.563+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:34:15.636+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:15.636+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:15.636+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:34:15.636+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:34:15.641+05:30 [Info] switched currmeta from 1459 -> 1459 force true 
2021-03-11T08:34:15.646+05:30 [Info] switched currmeta from 1462 -> 1462 force true 
2021-03-11T08:34:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:34:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:34:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:34:19.426+05:30 [Info] average scan response {906 ms}
2021-03-11T08:34:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:34:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:34:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:34:20.068+05:30 [Info] average scan response {2 ms}
2021-03-11T08:34:21.455+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:34:22.489+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:34:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":16,"gsi_scan_duration":213272867,"gsi_throttle_duration":11685246,"gsi_prime_duration":86584826,"gsi_blocked_duration":2631874,"gsi_total_temp_files":0}
2021/03/11 08:34:45 Flushed the bucket default, Response body: 
2021-03-11T08:34:45.225+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:34:45 Changing config key indexer.settings.allow_large_keys to value true
2021-03-11T08:34:45.252+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:34:46.238+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:34:46 Changing config key indexer.settings.max_seckey_size to value 100
2021-03-11T08:34:46.247+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:34:46 Changing config key indexer.settings.max_array_seckey_size to value 2000
2021-03-11T08:34:46.264+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:34:46.301+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":2000,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021/03/11 08:34:47 Start of createArrayDocs()
2021/03/11 08:35:04 End of createArrayDocs()
2021/03/11 08:35:04 Start of createArrayDocs()
2021/03/11 08:35:04 End of createArrayDocs()
2021-03-11T08:35:04.916+05:30 [Info] CreateIndex default _default _default arr1 ...
2021-03-11T08:35:04.917+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:35:04.918+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr1)
2021-03-11T08:35:04.918+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr1
2021-03-11T08:35:04.918+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:35:04.976+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:04.976+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:04.981+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:04.981+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:04.981+05:30 [Info] switched currmeta from 1462 -> 1464 force true 
2021-03-11T08:35:04.985+05:30 [Info] switched currmeta from 1459 -> 1461 force true 
2021-03-11T08:35:05.082+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:05.082+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:05.086+05:30 [Info] switched currmeta from 1461 -> 1464 force true 
2021-03-11T08:35:05.093+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:05.093+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:05.102+05:30 [Info] switched currmeta from 1464 -> 1467 force true 
2021-03-11T08:35:05.204+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:05.204+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:05.209+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:05.209+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:05.211+05:30 [Info] switched currmeta from 1467 -> 1467 force true 
2021-03-11T08:35:05.212+05:30 [Info] switched currmeta from 1464 -> 1464 force true 
2021-03-11T08:35:10.063+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:10.141+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:10.141+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:10.146+05:30 [Info] switched currmeta from 1467 -> 1467 force true 
2021-03-11T08:35:10.150+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:10.150+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:10.152+05:30 [Info] switched currmeta from 1464 -> 1464 force true 
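The repeated `switched currmeta from X -> Y force Z` lines show the metadata client refreshing its cached metadata version: the version only moves forward, and a forced refresh may re-apply an unchanged version (hence `1467 -> 1467 force true`). A hedged sketch of that forward-only switch rule; the actual rule is internal to the client, and `switchCurrmeta` is an illustrative name:

```go
package main

import "fmt"

// switchCurrmeta returns the version the cache should hold after a
// refresh: newer versions always win, an equal version is re-applied
// only on a forced refresh, and older versions are ignored.
func switchCurrmeta(curr, next uint64, force bool) uint64 {
	if next > curr || (force && next == curr) {
		return next
	}
	return curr
}

func main() {
	fmt.Println(switchCurrmeta(1462, 1464, true))  // forward switch
	fmt.Println(switchCurrmeta(1467, 1467, true))  // forced re-apply
	fmt.Println(switchCurrmeta(1467, 1464, false)) // stale update ignored
}
```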
2021-03-11T08:35:13.226+05:30 [Info] CreateIndex 16599760907811791613 default _default _default/arr1 using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`friends`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(8.309827944s) err()
2021/03/11 08:35:13 Created the secondary index arr1. Waiting for it to become active
2021-03-11T08:35:13.226+05:30 [Info] metadata provider version changed 1467 -> 1468
2021-03-11T08:35:13.226+05:30 [Info] switched currmeta from 1467 -> 1468 force false 
2021/03/11 08:35:13 Index is now active
2021-03-11T08:35:13.226+05:30 [Info] CreateIndex default _default _default arr2 ...
2021-03-11T08:35:13.227+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:35:13.231+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, arr2)
2021-03-11T08:35:13.231+05:30 [Info] Skipping planner for creation of the index default:_default:_default:arr2
2021-03-11T08:35:13.231+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:35:13.382+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:13.382+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:13.401+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:13.401+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:13.403+05:30 [Info] switched currmeta from 1464 -> 1470 force true 
2021-03-11T08:35:13.442+05:30 [Info] switched currmeta from 1468 -> 1473 force true 
2021-03-11T08:35:13.580+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:13.580+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:13.592+05:30 [Info] switched currmeta from 1473 -> 1473 force true 
2021-03-11T08:35:13.598+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:13.598+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:13.605+05:30 [Info] switched currmeta from 1470 -> 1470 force true 
2021-03-11T08:35:17.413+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:17.413+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:17.419+05:30 [Info] switched currmeta from 1470 -> 1471 force true 
2021-03-11T08:35:17.429+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:17.429+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:17.436+05:30 [Info] switched currmeta from 1473 -> 1474 force true 
2021-03-11T08:35:18.350+05:30 [Info] CreateIndex 12719918476746171585 default _default _default/arr2 using:plasma exprType:N1QL whereExpr:() secExprs:([(all (`name`))]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(5.123335458s) err()
2021/03/11 08:35:18 Created the secondary index arr2. Waiting for it to become active
2021-03-11T08:35:18.350+05:30 [Info] metadata provider version changed 1474 -> 1475
2021-03-11T08:35:18.350+05:30 [Info] switched currmeta from 1474 -> 1475 force false 
2021/03/11 08:35:18 Index is now active
2021-03-11T08:35:18.350+05:30 [Info] CreateIndex default _default _default idx3 ...
2021-03-11T08:35:18.350+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:35:18.351+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, idx3)
2021-03-11T08:35:18.351+05:30 [Info] Skipping planner for creation of the index default:_default:_default:idx3
2021-03-11T08:35:18.351+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:35:18.458+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.458+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.472+05:30 [Info] switched currmeta from 1471 -> 1476 force true 
2021-03-11T08:35:18.513+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.513+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.523+05:30 [Info] switched currmeta from 1475 -> 1479 force true 
2021-03-11T08:35:18.556+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.556+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.574+05:30 [Info] switched currmeta from 1476 -> 1477 force true 
2021-03-11T08:35:18.595+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.595+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.603+05:30 [Info] switched currmeta from 1479 -> 1480 force true 
2021-03-11T08:35:18.695+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.695+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.699+05:30 [Info] switched currmeta from 1477 -> 1477 force true 
2021-03-11T08:35:18.703+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:18.703+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:18.712+05:30 [Info] switched currmeta from 1480 -> 1480 force true 
2021-03-11T08:35:19.423+05:30 [Info] connected with 1 indexers
2021-03-11T08:35:19.423+05:30 [Info] client stats current counts: current: 3, not current: 0
2021-03-11T08:35:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:35:19.426+05:30 [Info] average scan response {906 ms}
2021-03-11T08:35:19.599+05:30 [Info] Rollback time has changed for index inst 15611212335907788685. New rollback time 1615431494324916697
2021-03-11T08:35:19.599+05:30 [Info] Rollback time has changed for index inst 15611212335907788685. New rollback time 1615431494324916697
2021-03-11T08:35:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:35:20.065+05:30 [Info] client stats current counts: current: 3, not current: 0
2021-03-11T08:35:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:35:20.068+05:30 [Info] average scan response {2 ms}
2021-03-11T08:35:20.070+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:20.182+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:20.182+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:20.189+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:20.189+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:20.194+05:30 [Info] switched currmeta from 1477 -> 1477 force true 
2021-03-11T08:35:20.195+05:30 [Info] switched currmeta from 1480 -> 1480 force true 
2021-03-11T08:35:20.968+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:20.968+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:20.992+05:30 [Info] switched currmeta from 1480 -> 1481 force true 
2021-03-11T08:35:21.000+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:21.000+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:21.003+05:30 [Info] switched currmeta from 1477 -> 1478 force true 
2021-03-11T08:35:21.472+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:35:21.916+05:30 [Info] CreateIndex 3618205249451554546 default _default _default/idx3 using:plasma exprType:N1QL whereExpr:() secExprs:([`name`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.565344321s) err()
2021/03/11 08:35:21 Created the secondary index idx3. Waiting for it to become active
2021-03-11T08:35:21.916+05:30 [Info] metadata provider version changed 1481 -> 1482
2021-03-11T08:35:21.916+05:30 [Info] switched currmeta from 1481 -> 1482 force false 
2021/03/11 08:35:21 Index is now active
2021-03-11T08:35:22.025+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:22.025+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:22.028+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:22.028+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:22.031+05:30 [Info] switched currmeta from 1482 -> 1482 force true 
2021-03-11T08:35:22.032+05:30 [Info] switched currmeta from 1478 -> 1479 force true 
2021-03-11T08:35:22.508+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021/03/11 08:35:22 Expected and Actual scan responses are the same
2021/03/11 08:35:22 Using n1ql client
2021-03-11T08:35:22.729+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:35:22 Expected and Actual scan responses are the same
2021/03/11 08:35:22 Using n1ql client
2021/03/11 08:35:22 Expected and Actual scan responses are the same
2021/03/11 08:35:22 Changing config key indexer.settings.max_seckey_size to value 4096
2021/03/11 08:35:22 Changing config key indexer.settings.max_array_seckey_size to value 51200
2021-03-11T08:35:22.799+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":51200,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4096,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:35:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":18,"gsi_scan_duration":217672374,"gsi_throttle_duration":11765114,"gsi_prime_duration":89880600,"gsi_blocked_duration":2631874,"gsi_total_temp_files":0}
2021-03-11T08:35:24.609+05:30 [Info] Rollback time has changed for index inst 4003418333908701011. New rollback time 1615431494324916697
2021-03-11T08:35:24.609+05:30 [Info] Rollback time has changed for index inst 4003418333908701011. New rollback time 1615431494324916697
2021-03-11T08:35:25.073+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:25.145+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:25.145+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:25.147+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:25.147+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:25.151+05:30 [Info] switched currmeta from 1482 -> 1482 force true 
2021-03-11T08:35:25.153+05:30 [Info] switched currmeta from 1479 -> 1479 force true 
2021/03/11 08:35:27 Expected and Actual scan responses are the same
2021/03/11 08:35:27 Using n1ql client
2021-03-11T08:35:27.113+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:35:27 Expected and Actual scan responses are the same
2021/03/11 08:35:27 Using n1ql client
2021/03/11 08:35:27 Expected and Actual scan responses are the same
2021/03/11 08:35:27 Changing config key indexer.settings.max_seckey_size to value 100
2021/03/11 08:35:27 Changing config key indexer.settings.max_array_seckey_size to value 2200
2021-03-11T08:35:27.178+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":2200,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":100,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021-03-11T08:35:29.599+05:30 [Info] Rollback time has changed for index inst 17220458278181353151. New rollback time 1615431494324916697
2021-03-11T08:35:29.601+05:30 [Info] Rollback time has changed for index inst 17220458278181353151. New rollback time 1615431494324916697
2021-03-11T08:35:30.056+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:30.093+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:30.093+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:30.096+05:30 [Info] switched currmeta from 1482 -> 1482 force true 
2021-03-11T08:35:30.098+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:30.098+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:30.100+05:30 [Info] switched currmeta from 1479 -> 1479 force true 
2021/03/11 08:35:31 Expected and Actual scan responses are the same
2021/03/11 08:35:31 Using n1ql client
2021-03-11T08:35:31.267+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:35:31 Expected and Actual scan responses are the same
2021/03/11 08:35:31 Using n1ql client
2021/03/11 08:35:31 Expected and Actual scan responses are the same
2021/03/11 08:35:31 Changing config key indexer.settings.max_seckey_size to value 4608
2021/03/11 08:35:31 Changing config key indexer.settings.max_array_seckey_size to value 10240
2021-03-11T08:35:31.344+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
--- PASS: TestArraySizeIncreaseDecrease2 (86.19s)
=== RUN   TestBufferedScan_BackfillDisabled
2021/03/11 08:35:32 In TestBufferedScan_BackfillDisabled()
2021/03/11 08:35:32 In DropAllSecondaryIndexes()
2021/03/11 08:35:32 Index found:  arr2
2021-03-11T08:35:32.316+05:30 [Info] DropIndex 12719918476746171585 ...
2021-03-11T08:35:32.365+05:30 [Info] metadata provider version changed 1482 -> 1484
2021-03-11T08:35:32.365+05:30 [Info] switched currmeta from 1482 -> 1484 force false 
2021-03-11T08:35:32.365+05:30 [Info] DropIndex 12719918476746171585 - elapsed(48.740972ms), err()
2021/03/11 08:35:32 Dropped index arr2
2021/03/11 08:35:32 Index found:  arr1
2021-03-11T08:35:32.365+05:30 [Info] DropIndex 16599760907811791613 ...
2021-03-11T08:35:32.472+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.472+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.475+05:30 [Info] metadata provider version changed 1484 -> 1486
2021-03-11T08:35:32.475+05:30 [Info] switched currmeta from 1484 -> 1486 force false 
2021-03-11T08:35:32.475+05:30 [Info] DropIndex 16599760907811791613 - elapsed(110.095775ms), err()
2021/03/11 08:35:32 Dropped index arr1
2021/03/11 08:35:32 Index found:  idx3
2021-03-11T08:35:32.475+05:30 [Info] DropIndex 3618205249451554546 ...
2021-03-11T08:35:32.492+05:30 [Info] switched currmeta from 1486 -> 1486 force true 
2021-03-11T08:35:32.495+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.495+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.508+05:30 [Info] switched currmeta from 1479 -> 1483 force true 
2021-03-11T08:35:32.588+05:30 [Info] metadata provider version changed 1486 -> 1488
2021-03-11T08:35:32.588+05:30 [Info] switched currmeta from 1486 -> 1488 force false 
2021-03-11T08:35:32.588+05:30 [Info] DropIndex 3618205249451554546 - elapsed(112.698001ms), err()
2021/03/11 08:35:32 Dropped index idx3
2021-03-11T08:35:32.662+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.662+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.693+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.693+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.711+05:30 [Info] switched currmeta from 1483 -> 1485 force true 
2021-03-11T08:35:32.712+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:32.772+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:35:32.936+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.936+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.946+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:32.946+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:32.947+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:32.966+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021-03-11T08:35:33.094+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:33.094+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:33.098+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:33.120+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:33.120+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:33.128+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021-03-11T08:35:36.911+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:37.038+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:37.038+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:37.044+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:37.045+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:37.045+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:37.048+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021-03-11T08:35:40.074+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:40.129+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:40.129+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:40.134+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:40.150+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:40.150+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:40.153+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021-03-11T08:35:42.130+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:42.183+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:42.183+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:42.186+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:42.186+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:42.189+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021-03-11T08:35:42.191+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:42.372+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:35:42.430+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:42.430+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:42.436+05:30 [Info] switched currmeta from 1488 -> 1488 force true 
2021-03-11T08:35:42.456+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:35:42.456+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:35:42.468+05:30 [Info] switched currmeta from 1485 -> 1485 force true 
2021/03/11 08:36:11 Flushed the bucket default, Response body: 
2021-03-11T08:36:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:36:19.421+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:36:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:36:19.426+05:30 [Info] average scan response {294 ms}
2021-03-11T08:36:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:36:20.065+05:30 [Info] client stats current counts: current: 0, not current: 0
2021-03-11T08:36:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:36:20.068+05:30 [Info] average scan response {1 ms}
2021-03-11T08:36:21.498+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:36:22.532+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:36:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":22,"gsi_scan_duration":225344837,"gsi_throttle_duration":11916729,"gsi_prime_duration":96454915,"gsi_blocked_duration":2631874,"gsi_total_temp_files":0}
2021-03-11T08:36:51.468+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:36:51 Changing config key queryport.client.settings.backfillLimit to value 0
2021-03-11T08:36:51.480+05:30 [Info] CreateIndex default _default _default addressidx ...
2021-03-11T08:36:51.480+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:36:51.483+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, addressidx)
2021-03-11T08:36:51.483+05:30 [Info] Skipping planner for creation of the index default:_default:_default:addressidx
2021-03-11T08:36:51.483+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:36:51.623+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.623+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.633+05:30 [Info] switched currmeta from 1488 -> 1492 force true 
2021-03-11T08:36:51.634+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.634+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.643+05:30 [Info] switched currmeta from 1485 -> 1489 force true 
2021-03-11T08:36:51.698+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.698+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.708+05:30 [Info] switched currmeta from 1489 -> 1490 force true 
2021-03-11T08:36:51.710+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.710+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.718+05:30 [Info] switched currmeta from 1492 -> 1493 force true 
2021-03-11T08:36:51.785+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.785+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.793+05:30 [Info] switched currmeta from 1490 -> 1490 force true 
2021-03-11T08:36:51.807+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:51.807+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:51.813+05:30 [Info] switched currmeta from 1493 -> 1493 force true 
2021-03-11T08:36:57.794+05:30 [Info] CreateIndex 9878574712972772445 default _default _default/addressidx using:plasma exprType:N1QL whereExpr:() secExprs:([`address`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(6.314679616s) err()
2021/03/11 08:36:57 Created the secondary index addressidx. Waiting for it to become active
2021-03-11T08:36:57.795+05:30 [Info] metadata provider version changed 1493 -> 1494
2021-03-11T08:36:57.795+05:30 [Info] switched currmeta from 1493 -> 1494 force false 
2021/03/11 08:36:57 Index is now active
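The create above runs as a two-phase exchange with the watcher: a prepare request the indexer must accept ("accept prepare request"), then a commit ("send commit create request"). A minimal sketch of that handshake follows; the `watcher` type and method names are illustrative, not the real metadata-provider API.

```go
package main

import "fmt"

// watcher models one indexer admin endpoint (e.g. 127.0.0.1:9106).
type watcher struct {
	addr     string
	prepared bool
}

// prepare asks the indexer to reserve the create; a real indexer may
// reject it (for example, if another create is already in flight).
func (w *watcher) prepare(index string) bool {
	w.prepared = true
	return true
}

// commit finalizes the create; it is only valid after a prepare.
func (w *watcher) commit(index string) error {
	if !w.prepared {
		return fmt.Errorf("%s: commit for %s without prepare", w.addr, index)
	}
	return nil
}

// createIndex mirrors the prepare-then-commit sequence in the log.
func createIndex(w *watcher, index string) error {
	if !w.prepare(index) {
		return fmt.Errorf("%s rejected prepare for %s", w.addr, index)
	}
	return w.commit(index)
}

func main() {
	w := &watcher{addr: "127.0.0.1:9106"}
	fmt.Println(createIndex(w, "addressidx")) // prints: <nil>
}
```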
2021-03-11T08:36:57.818+05:30 [Info] metadata provider version changed 1490 -> 1491
2021-03-11T08:36:57.818+05:30 [Info] switched currmeta from 1490 -> 1491 force false 
2021-03-11T08:36:57.818+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:36:57.818+05:30 [Info] GSIC[default/default-_default-_default-1615432017795101705] started ...
2021/03/11 08:36:57 Non-backfill file found: /tmp/.ICE-unix
2021/03/11 08:36:57 Non-backfill file found: /tmp/.Test-unix
2021/03/11 08:36:57 Non-backfill file found: /tmp/.X11-unix
2021/03/11 08:36:57 Non-backfill file found: /tmp/.XIM-unix
2021/03/11 08:36:57 Non-backfill file found: /tmp/.font-unix
2021/03/11 08:36:57 Non-backfill file found: /tmp/go-build215587140
2021/03/11 08:36:57 Non-backfill file found: /tmp/go-build269151486
2021/03/11 08:36:57 Non-backfill file found: /tmp/go-build385564627
2021/03/11 08:36:57 Non-backfill file found: /tmp/go-build717237083
2021/03/11 08:36:57 Non-backfill file found: /tmp/mdbslice
2021/03/11 08:36:57 Non-backfill file found: /tmp/systemd-private-d7949a86a2284e429128c6d977aab68a-apache2.service-cj0o6e
2021/03/11 08:36:57 Non-backfill file found: /tmp/systemd-private-d7949a86a2284e429128c6d977aab68a-systemd-timesyncd.service-DRhvnT
2021/03/11 08:36:57 limit=1,chsize=256; received 1 items; took 2.677999ms
2021-03-11T08:36:57.902+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:57.902+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:57.905+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:36:57.918+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:36:57.918+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:36:57.921+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:00.103+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:00.185+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:00.185+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:00.189+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:00.190+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:00.190+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021/03/11 08:37:00 limit=1000,chsize=256; received 1000 items; took 1.367682846s
2021-03-11T08:37:00.193+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
--- PASS: TestBufferedScan_BackfillDisabled (88.87s)
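The "Non-backfill file found" lines above come from the client sweeping its temp directory and reporting every entry it does not recognize as one of its own backfill files. A minimal sketch of that filter, assuming the `scan-results` name prefix seen in the log (e.g. `/tmp/scan-results31142150092397`) is the recognition rule:

```go
package main

import (
	"fmt"
	"strings"
)

// isBackfillFile reports whether a temp-directory entry looks like a
// GSI client backfill file. The "scan-results" prefix matches the temp
// file names in the log; the exact convention is an assumption here.
func isBackfillFile(name string) bool {
	return strings.HasPrefix(name, "scan-results")
}

// listNonBackfill returns the entries that would be logged as
// "Non-backfill file found", mirroring the sweep in the log.
func listNonBackfill(entries []string) []string {
	var out []string
	for _, e := range entries {
		if !isBackfillFile(e) {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	entries := []string{".ICE-unix", "scan-results31142150092397", "mdbslice"}
	fmt.Println(listNonBackfill(entries)) // prints: [.ICE-unix mdbslice]
}
```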
=== RUN   TestBufferedScan_BackfillEnabled
2021/03/11 08:37:01 In TestBufferedScan_BackfillEnabled()
2021-03-11T08:37:01.191+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:37:01 Changing config key queryport.client.settings.backfillLimit to value 1
2021-03-11T08:37:01.218+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:37:01.218+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] started ...
2021/03/11 08:37:01 limit=1,chsize=256; received 1 items; took 11.422524ms
2021-03-11T08:37:01.237+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":1,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021/03/11 08:37:02 limit=1000,chsize=256; received 1000 items; took 10.277983ms
2021-03-11T08:37:03.247+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] bufferedscan new temp file ... /tmp/scan-results31142150092397
2021-03-11T08:37:04.599+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615431494324916697
2021-03-11T08:37:04.599+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615431494324916697
2021-03-11T08:37:05.083+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:05.121+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:05.121+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:05.125+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:05.128+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:05.128+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:05.130+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:37:09.238+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] request(bufferedscan) removing temp file /tmp/scan-results31142150092397 ...
2021/03/11 08:37:14 limit=1000,chsize=256; received 1000 items; took 10.253786856s
2021-03-11T08:37:15.504+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] bufferedscan new temp file ... /tmp/scan-results31142954188688
2021-03-11T08:37:15.506+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] open new connection ...
2021-03-11T08:37:15.517+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] bufferedscan new temp file ... /tmp/scan-results31142214166850
Scan error: bufferedscan temp file size exceeded limit 1, 13 - cause: bufferedscan temp file size exceeded limit 1, 13
Scan error:  bufferedscan temp file size exceeded limit 1, 13 - cause:  bufferedscan temp file size exceeded limit 1, 13
2021-03-11T08:37:16.936+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] request(bufferedscan) removing temp file /tmp/scan-results31142954188688 ...
Scan error: bufferedscan temp file size exceeded limit 1, 13 - cause: bufferedscan temp file size exceeded limit 1, 13
Scan error:  bufferedscan temp file size exceeded limit 1, 13 - cause:  bufferedscan temp file size exceeded limit 1, 13
2021-03-11T08:37:16.950+05:30 [Info] GSIC[default/default-_default-_default-1615432021212355766] request(bufferedscan) removing temp file /tmp/scan-results31142214166850 ...
2021-03-11T08:37:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:37:19.421+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T08:37:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:37:19.426+05:30 [Info] average scan response {294 ms}
2021-03-11T08:37:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:37:20.065+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T08:37:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:37:20.068+05:30 [Info] average scan response {3041 ms}
2021-03-11T08:37:21.517+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:37:22.567+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 2
2021/03/11 08:37:27 limit=1000,chsize=256; received 584 items; took 11.8310131s
2021/03/11 08:37:27 limit=1000,chsize=256; received 584 items; took 11.84536175s
2021-03-11T08:37:28.343+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 08:37:28 Changing config key queryport.client.settings.backfillLimit to value 0
--- PASS: TestBufferedScan_BackfillEnabled (27.16s)
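The "Scan error: bufferedscan temp file size exceeded limit 1, 13" lines above show the enforcement path this test exercises: with `queryport.client.settings.backfillLimit` set to 1, a buffered scan whose on-disk results reach 13 units is aborted. A sketch of that check, assuming the two numbers in the message are the configured limit and the current usage (the unit convention is an assumption):

```go
package main

import "fmt"

// checkBackfill aborts a buffered scan once its temp files outgrow the
// configured limit, reproducing the error shape in the log. A limit of
// 0 disables on-disk backfill in the real client; this sketch simply
// skips the check in that case.
func checkBackfill(limit, used int64) error {
	if limit > 0 && used > limit {
		return fmt.Errorf("bufferedscan temp file size exceeded limit %d, %d", limit, used)
	}
	return nil
}

func main() {
	// prints: bufferedscan temp file size exceeded limit 1, 13
	fmt.Println(checkBackfill(1, 13))
}
```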
=== RUN   TestMultiScanSetup
2021/03/11 08:37:28 In TestMultiScanSetup()
2021-03-11T08:37:28.386+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
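The "New settings received" dumps above are flat JSON objects whose keys are dotted paths. A client can pull a single value such as `queryport.client.settings.backfillLimit` out of the blob without modeling the whole document; a minimal sketch (the helper name is ours, not the real client API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// backfillLimit extracts queryport.client.settings.backfillLimit from a
// flat dotted-key settings blob like the ones logged above. JSON
// numbers decode as float64 when unmarshaled into interface{}.
func backfillLimit(settings []byte) (float64, error) {
	var m map[string]interface{}
	if err := json.Unmarshal(settings, &m); err != nil {
		return 0, err
	}
	v, ok := m["queryport.client.settings.backfillLimit"].(float64)
	if !ok {
		return 0, fmt.Errorf("backfillLimit missing or not a number")
	}
	return v, nil
}

func main() {
	blob := []byte(`{"queryport.client.settings.backfillLimit":0,"indexer.settings.log_level":"info"}`)
	limit, err := backfillLimit(blob)
	fmt.Println(limit, err) // prints: 0 <nil>
}
```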
2021/03/11 08:37:29 Emptying the default bucket
2021/03/11 08:37:32 Flush Enabled on bucket default, responseBody: 
2021-03-11T08:37:32.683+05:30 [Info] serviceChangeNotifier: received CollectionManifestChangeNotification for bucket: default
2021-03-11T08:37:32.844+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:32.844+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:32.851+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:32.851+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:32.857+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:32.860+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:37:33.924+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:34.009+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:34.009+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:34.017+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:34.017+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:34.021+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:34.022+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:37:34.218+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:34.327+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:34.327+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:34.333+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:34.333+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:34.333+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:34.337+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:37:41.340+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:41.387+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:41.388+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:41.392+05:30 [Info] switched currmeta from 1491 -> 1491 force true 
2021-03-11T08:37:41.396+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:41.396+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:41.399+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:49.592+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:49.592+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:49.594+05:30 [Info] switched currmeta from 1494 -> 1495 force true 
2021-03-11T08:37:49.596+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:49.596+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:49.599+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615432066427196009
2021-03-11T08:37:49.600+05:30 [Info] switched currmeta from 1491 -> 1492 force true 
2021-03-11T08:37:49.600+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615432066427196009
2021-03-11T08:37:49.742+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:49.742+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:49.753+05:30 [Info] switched currmeta from 1492 -> 1493 force true 
2021-03-11T08:37:49.759+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:49.760+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:49.765+05:30 [Info] switched currmeta from 1495 -> 1496 force true 
2021-03-11T08:37:51.998+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:51.998+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:51.999+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:51.999+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:52.008+05:30 [Info] switched currmeta from 1493 -> 1494 force true 
2021-03-11T08:37:52.015+05:30 [Info] switched currmeta from 1496 -> 1497 force true 
2021-03-11T08:37:55.092+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:37:55.131+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:55.131+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:55.131+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:37:55.131+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:37:55.134+05:30 [Info] switched currmeta from 1497 -> 1497 force true 
2021-03-11T08:37:55.134+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021-03-11T08:37:59.599+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615432069269962788
2021-03-11T08:37:59.599+05:30 [Info] Rollback time has changed for index inst 4275603283683971703. New rollback time 1615432069269962788
2021-03-11T08:38:00.095+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:38:00.134+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:00.134+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:00.137+05:30 [Info] switched currmeta from 1497 -> 1497 force true 
2021-03-11T08:38:00.137+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:00.137+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:00.140+05:30 [Info] switched currmeta from 1494 -> 1494 force true 
2021/03/11 08:38:11 Flushed the bucket default, Response body: 
2021/03/11 08:38:11 Populating the default bucket
2021-03-11T08:38:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T08:38:19.421+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T08:38:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T08:38:19.426+05:30 [Info] average scan response {294 ms}
2021-03-11T08:38:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T08:38:20.065+05:30 [Info] client stats current counts: current: 1, not current: 0
2021-03-11T08:38:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T08:38:20.068+05:30 [Info] average scan response {3041 ms}
2021-03-11T08:38:20.339+05:30 [Info] CreateIndex default _default _default index_companyname ...
2021-03-11T08:38:20.341+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:20.342+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_companyname)
2021-03-11T08:38:20.342+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_companyname
2021-03-11T08:38:20.342+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:20.463+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.463+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.472+05:30 [Info] switched currmeta from 1494 -> 1496 force true 
2021-03-11T08:38:20.475+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.475+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.477+05:30 [Info] switched currmeta from 1497 -> 1500 force true 
2021-03-11T08:38:20.558+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.558+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.568+05:30 [Info] switched currmeta from 1496 -> 1499 force true 
2021-03-11T08:38:20.574+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.574+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.582+05:30 [Info] switched currmeta from 1500 -> 1502 force true 
2021-03-11T08:38:20.724+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.724+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.732+05:30 [Info] switched currmeta from 1502 -> 1502 force true 
2021-03-11T08:38:20.758+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:20.758+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:20.761+05:30 [Info] switched currmeta from 1499 -> 1499 force true 
2021-03-11T08:38:21.538+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T08:38:22.584+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 2
2021-03-11T08:38:24.341+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:24.341+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:24.346+05:30 [Info] switched currmeta from 1502 -> 1503 force true 
2021-03-11T08:38:24.352+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:24.352+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:24.355+05:30 [Info] switched currmeta from 1499 -> 1500 force true 
2021-03-11T08:38:25.233+05:30 [Info] CreateIndex 3918525577933280283 default _default _default/index_companyname using:plasma exprType:N1QL whereExpr:() secExprs:([`company` `name`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.893702837s) err()
2021/03/11 08:38:25 Created the secondary index index_companyname. Waiting for it to become active
2021-03-11T08:38:25.233+05:30 [Info] metadata provider version changed 1503 -> 1504
2021-03-11T08:38:25.233+05:30 [Info] switched currmeta from 1503 -> 1504 force false 
2021/03/11 08:38:25 Index is now active
2021-03-11T08:38:25.233+05:30 [Info] CreateIndex default _default _default index_company ...
2021-03-11T08:38:25.234+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:25.241+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_company)
2021-03-11T08:38:25.241+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_company
2021-03-11T08:38:25.241+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:25.406+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:25.406+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:25.414+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:25.414+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:25.424+05:30 [Info] switched currmeta from 1500 -> 1506 force true 
2021-03-11T08:38:25.432+05:30 [Info] switched currmeta from 1504 -> 1509 force true 
2021-03-11T08:38:25.524+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:25.524+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:25.529+05:30 [Info] switched currmeta from 1509 -> 1509 force true 
2021-03-11T08:38:25.538+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:25.538+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:25.545+05:30 [Info] switched currmeta from 1506 -> 1506 force true 
2021-03-11T08:38:28.876+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:28.876+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:28.881+05:30 [Info] switched currmeta from 1506 -> 1507 force true 
2021-03-11T08:38:28.886+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:28.886+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:28.892+05:30 [Info] switched currmeta from 1509 -> 1510 force true 
2021-03-11T08:38:29.599+05:30 [Info] Rollback time has changed for index inst 6573543692389584408. New rollback time 1615432069269962788
2021-03-11T08:38:29.599+05:30 [Info] Rollback time has changed for index inst 6573543692389584408. New rollback time 1615432069269962788
2021-03-11T08:38:29.787+05:30 [Info] CreateIndex 12969178696672929373 default _default _default/index_company using:plasma exprType:N1QL whereExpr:() secExprs:([`company`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.553525089s) err()
2021/03/11 08:38:29 Created the secondary index index_company. Waiting for it to become active
2021-03-11T08:38:29.787+05:30 [Info] metadata provider version changed 1510 -> 1511
2021-03-11T08:38:29.787+05:30 [Info] switched currmeta from 1510 -> 1511 force false 
2021/03/11 08:38:29 Index is now active
2021-03-11T08:38:29.787+05:30 [Info] CreateIndex default _default _default index_company_name_age ...
2021-03-11T08:38:29.789+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:29.790+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_company_name_age)
2021-03-11T08:38:29.790+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_company_name_age
2021-03-11T08:38:29.791+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:29.886+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:29.886+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:29.892+05:30 [Info] switched currmeta from 1511 -> 1515 force true 
2021-03-11T08:38:29.956+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:29.956+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:29.968+05:30 [Info] switched currmeta from 1507 -> 1513 force true 
2021-03-11T08:38:30.002+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:30.002+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:30.028+05:30 [Info] switched currmeta from 1515 -> 1516 force true 
2021-03-11T08:38:30.085+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:30.085+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:30.095+05:30 [Info] switched currmeta from 1513 -> 1513 force true 
2021-03-11T08:38:30.118+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:30.118+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:30.135+05:30 [Info] switched currmeta from 1516 -> 1516 force true 
2021-03-11T08:38:30.170+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:38:30.222+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:30.222+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:30.224+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:30.224+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:30.226+05:30 [Info] switched currmeta from 1513 -> 1513 force true 
2021-03-11T08:38:30.228+05:30 [Info] switched currmeta from 1516 -> 1516 force true 
2021-03-11T08:38:33.426+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:33.426+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:33.427+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:33.427+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:33.430+05:30 [Info] switched currmeta from 1513 -> 1514 force true 
2021-03-11T08:38:33.431+05:30 [Info] switched currmeta from 1516 -> 1517 force true 
2021-03-11T08:38:34.339+05:30 [Info] CreateIndex 16837956035401292142 default _default _default/index_company_name_age using:plasma exprType:N1QL whereExpr:() secExprs:([`company` `name` `age`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.551403079s) err()
2021/03/11 08:38:34 Created the secondary index index_company_name_age. Waiting for it become active
2021-03-11T08:38:34.339+05:30 [Info] metadata provider version changed 1517 -> 1518
2021-03-11T08:38:34.339+05:30 [Info] switched currmeta from 1517 -> 1518 force false 
2021/03/11 08:38:34 Index is now active
2021-03-11T08:38:34.339+05:30 [Info] CreateIndex default _default _default index_primary ...
2021-03-11T08:38:34.339+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:34.340+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_primary)
2021-03-11T08:38:34.340+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_primary
2021-03-11T08:38:34.341+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:34.489+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:34.490+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:34.491+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:34.491+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:34.494+05:30 [Info] switched currmeta from 1518 -> 1523 force true 
2021-03-11T08:38:34.495+05:30 [Info] switched currmeta from 1514 -> 1520 force true 
2021-03-11T08:38:34.630+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:34.630+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:34.631+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:34.631+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:34.638+05:30 [Info] switched currmeta from 1520 -> 1520 force true 
2021-03-11T08:38:34.638+05:30 [Info] Rollback time has changed for index inst 5888977184673469405. New rollback time 1615432069269962788
2021-03-11T08:38:34.639+05:30 [Info] switched currmeta from 1523 -> 1523 force true 
2021-03-11T08:38:34.639+05:30 [Info] Rollback time has changed for index inst 5888977184673469405. New rollback time 1615432069269962788
2021-03-11T08:38:35.132+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:38:35.247+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:35.247+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:35.275+05:30 [Info] switched currmeta from 1520 -> 1520 force true 
2021-03-11T08:38:35.283+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:35.283+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:35.296+05:30 [Info] switched currmeta from 1523 -> 1523 force true 
2021-03-11T08:38:37.711+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:37.711+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:37.718+05:30 [Info] switched currmeta from 1520 -> 1521 force true 
2021-03-11T08:38:37.719+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:37.719+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:37.722+05:30 [Info] switched currmeta from 1523 -> 1524 force true 
2021-03-11T08:38:38.498+05:30 [Info] CreateIndex 15585846694160071989 default _default _default/index_primary using:plasma exprType:N1QL whereExpr:() secExprs:([]) desc:[] isPrimary:true scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.159065495s) err()
2021/03/11 08:38:38 Created the secondary index index_primary. Waiting for it become active
2021-03-11T08:38:38.498+05:30 [Info] metadata provider version changed 1524 -> 1525
2021-03-11T08:38:38.498+05:30 [Info] switched currmeta from 1524 -> 1525 force false 
2021/03/11 08:38:38 Index is now active
2021-03-11T08:38:38.499+05:30 [Info] CreateIndex default _default _default index_company_name_age_address ...
2021-03-11T08:38:38.500+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:38.522+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_company_name_age_address)
2021-03-11T08:38:38.522+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_company_name_age_address
2021-03-11T08:38:38.522+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:38.619+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.619+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.622+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.622+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.625+05:30 [Info] switched currmeta from 1521 -> 1524 force true 
2021-03-11T08:38:38.631+05:30 [Info] switched currmeta from 1525 -> 1527 force true 
2021-03-11T08:38:38.763+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.763+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.770+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.770+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.774+05:30 [Info] switched currmeta from 1527 -> 1530 force true 
2021-03-11T08:38:38.781+05:30 [Info] switched currmeta from 1524 -> 1527 force true 
2021-03-11T08:38:38.903+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.903+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:38.903+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.903+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:38.907+05:30 [Info] switched currmeta from 1530 -> 1530 force true 
2021-03-11T08:38:38.909+05:30 [Info] switched currmeta from 1527 -> 1527 force true 
2021-03-11T08:38:40.141+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:38:40.238+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:40.238+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:40.247+05:30 [Info] switched currmeta from 1530 -> 1530 force true 
2021-03-11T08:38:40.251+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:40.251+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:40.254+05:30 [Info] switched currmeta from 1527 -> 1527 force true 
2021-03-11T08:38:42.347+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:42.347+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:42.365+05:30 [Info] switched currmeta from 1527 -> 1528 force true 
2021-03-11T08:38:42.376+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:42.376+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:42.379+05:30 [Info] switched currmeta from 1530 -> 1531 force true 
2021-03-11T08:38:43.295+05:30 [Info] CreateIndex 13865274733142745105 default _default _default/index_company_name_age_address using:plasma exprType:N1QL whereExpr:() secExprs:([`company` `name` `age` `address`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.796241724s) err()
2021/03/11 08:38:43 Created the secondary index index_company_name_age_address. Waiting for it become active
2021-03-11T08:38:43.295+05:30 [Info] metadata provider version changed 1531 -> 1532
2021-03-11T08:38:43.295+05:30 [Info] switched currmeta from 1531 -> 1532 force false 
2021/03/11 08:38:43 Index is now active
2021-03-11T08:38:43.296+05:30 [Info] CreateIndex default _default _default index_company_name_age_address_friends ...
2021-03-11T08:38:43.297+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T08:38:43.299+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_company_name_age_address_friends)
2021-03-11T08:38:43.299+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_company_name_age_address_friends
2021-03-11T08:38:43.299+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T08:38:43.382+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.382+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.396+05:30 [Info] switched currmeta from 1532 -> 1534 force true 
2021-03-11T08:38:43.405+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.405+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.416+05:30 [Info] switched currmeta from 1528 -> 1531 force true 
2021-03-11T08:38:43.496+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.496+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.500+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.500+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.511+05:30 [Info] switched currmeta from 1534 -> 1537 force true 
2021-03-11T08:38:43.514+05:30 [Info] switched currmeta from 1531 -> 1534 force true 
2021-03-11T08:38:43.620+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.620+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.626+05:30 [Info] switched currmeta from 1537 -> 1537 force true 
2021-03-11T08:38:43.636+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:43.636+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:43.645+05:30 [Info] switched currmeta from 1534 -> 1534 force true 
2021-03-11T08:38:44.604+05:30 [Info] Rollback time has changed for index inst 9587217231489424638. New rollback time 1615432069269962788
2021-03-11T08:38:44.604+05:30 [Info] Rollback time has changed for index inst 10907931110185063748. New rollback time 1615432069269962788
2021-03-11T08:38:44.604+05:30 [Info] Rollback time has changed for index inst 10907931110185063748. New rollback time 1615432069269962788
2021-03-11T08:38:44.604+05:30 [Info] Rollback time has changed for index inst 9587217231489424638. New rollback time 1615432069269962788
2021-03-11T08:38:45.138+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T08:38:45.398+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:45.398+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:45.414+05:30 [Info] switched currmeta from 1534 -> 1534 force true 
2021-03-11T08:38:45.421+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:45.421+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:45.430+05:30 [Info] switched currmeta from 1537 -> 1537 force true 
2021-03-11T08:38:47.400+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:47.400+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:47.412+05:30 [Info] switched currmeta from 1534 -> 1535 force true 
2021-03-11T08:38:47.417+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:47.417+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:47.421+05:30 [Info] switched currmeta from 1537 -> 1538 force true 
2021-03-11T08:38:48.344+05:30 [Info] CreateIndex 10164258830486591914 default _default _default/index_company_name_age_address_friends using:plasma exprType:N1QL whereExpr:() secExprs:([`company` `name` `age` `address` `friends`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(5.048227289s) err()
2021/03/11 08:38:48 Created the secondary index index_company_name_age_address_friends. Waiting for it become active
2021-03-11T08:38:48.344+05:30 [Info] metadata provider version changed 1538 -> 1539
2021-03-11T08:38:48.344+05:30 [Info] switched currmeta from 1538 -> 1539 force false 
2021/03/11 08:38:48 Index is now active
--- PASS: TestMultiScanSetup (79.99s)
=== RUN   TestMultiScanCount
2021/03/11 08:38:48 In TestMultiScanCount()
2021/03/11 08:38:48 

--------- Composite Index with 2 fields ---------
2021/03/11 08:38:48 
--- ScanAllNoFilter ---
2021/03/11 08:38:48 distinct = false
2021-03-11T08:38:48.460+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:48.460+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:48.465+05:30 [Info] switched currmeta from 1539 -> 1539 force true 
2021-03-11T08:38:48.470+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T08:38:48.470+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T08:38:48.475+05:30 [Info] switched currmeta from 1535 -> 1536 force true 
2021/03/11 08:38:48 Using n1ql client
2021-03-11T08:38:48.888+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021/03/11 08:38:48 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:48 
--- ScanAllFilterNil ---
2021/03/11 08:38:48 distinct = false
2021/03/11 08:38:49 Using n1ql client
2021/03/11 08:38:49 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:49 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:38:49 distinct = false
2021/03/11 08:38:49 Using n1ql client
2021/03/11 08:38:49 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:49 
--- SingleSeek ---
2021/03/11 08:38:49 distinct = false
2021/03/11 08:38:50 Using n1ql client
2021/03/11 08:38:50 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:38:50 
--- MultipleSeek ---
2021/03/11 08:38:50 distinct = false
2021/03/11 08:38:50 Using n1ql client
2021/03/11 08:38:50 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:38:50 
--- SimpleRange ---
2021/03/11 08:38:50 distinct = false
2021/03/11 08:38:51 Using n1ql client
2021/03/11 08:38:51 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2021/03/11 08:38:51 
--- NonOverlappingRanges ---
2021/03/11 08:38:51 distinct = false
2021/03/11 08:38:51 Using n1ql client
2021/03/11 08:38:51 MultiScanCount = 4283 ExpectedMultiScanCount = 4283
2021/03/11 08:38:51 
--- OverlappingRanges ---
2021/03/11 08:38:51 distinct = false
2021/03/11 08:38:52 Using n1ql client
2021/03/11 08:38:52 MultiScanCount = 5756 ExpectedMultiScanCount = 5756
2021/03/11 08:38:52 
--- NonOverlappingFilters ---
2021/03/11 08:38:52 distinct = false
2021/03/11 08:38:52 Using n1ql client
2021/03/11 08:38:52 MultiScanCount = 337 ExpectedMultiScanCount = 337
2021/03/11 08:38:52 
--- OverlappingFilters ---
2021/03/11 08:38:52 distinct = false
2021/03/11 08:38:53 Using n1ql client
2021/03/11 08:38:53 MultiScanCount = 2559 ExpectedMultiScanCount = 2559
2021/03/11 08:38:53 
--- BoundaryFilters ---
2021/03/11 08:38:53 distinct = false
2021/03/11 08:38:53 Using n1ql client
2021/03/11 08:38:53 MultiScanCount = 499 ExpectedMultiScanCount = 499
2021/03/11 08:38:53 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:38:53 distinct = false
2021/03/11 08:38:54 Using n1ql client
2021/03/11 08:38:54 MultiScanCount = 256 ExpectedMultiScanCount = 256
2021/03/11 08:38:54 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:38:54 distinct = false
2021/03/11 08:38:54 Using n1ql client
2021/03/11 08:38:54 MultiScanCount = 255 ExpectedMultiScanCount = 255
2021/03/11 08:38:54 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:38:54 distinct = false
2021/03/11 08:38:55 Using n1ql client
2021/03/11 08:38:55 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2021/03/11 08:38:55 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:38:55 distinct = false
2021/03/11 08:38:55 Using n1ql client
2021/03/11 08:38:55 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2021/03/11 08:38:55 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:38:55 distinct = false
2021/03/11 08:38:56 Using n1ql client
2021/03/11 08:38:56 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:56 
--- FiltersWithUnbounded ---
2021/03/11 08:38:56 distinct = false
2021/03/11 08:38:56 Using n1ql client
2021/03/11 08:38:56 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2021/03/11 08:38:56 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:38:56 distinct = false
2021/03/11 08:38:57 Using n1ql client
2021/03/11 08:38:57 MultiScanCount = 418 ExpectedMultiScanCount = 418
2021/03/11 08:38:57 

--------- Simple Index with 1 field ---------
2021/03/11 08:38:57 
--- SingleIndexSimpleRange ---
2021/03/11 08:38:57 distinct = false
2021/03/11 08:38:57 Using n1ql client
2021/03/11 08:38:57 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2021/03/11 08:38:57 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:38:57 distinct = false
2021/03/11 08:38:58 Using n1ql client
2021/03/11 08:38:58 MultiScanCount = 7140 ExpectedMultiScanCount = 7140
2021/03/11 08:38:58 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:38:58 distinct = false
2021/03/11 08:38:58 Using n1ql client
2021/03/11 08:38:58 MultiScanCount = 8701 ExpectedMultiScanCount = 8701
2021/03/11 08:38:58 

--------- Composite Index with 3 fields ---------
2021/03/11 08:38:58 
--- ScanAllNoFilter ---
2021/03/11 08:38:58 distinct = false
2021/03/11 08:38:59 Using n1ql client
2021/03/11 08:38:59 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:59 
--- ScanAllFilterNil ---
2021/03/11 08:38:59 distinct = false
2021/03/11 08:38:59 Using n1ql client
2021/03/11 08:38:59 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:38:59 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:38:59 distinct = false
2021/03/11 08:39:00 Using n1ql client
2021/03/11 08:39:00 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:39:00 
--- 3FieldsSingleSeek ---
2021/03/11 08:39:00 distinct = false
2021/03/11 08:39:00 Using n1ql client
2021/03/11 08:39:00 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:39:00 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:39:00 distinct = false
2021/03/11 08:39:01 Using n1ql client
2021/03/11 08:39:01 MultiScanCount = 3 ExpectedMultiScanCount = 3
2021/03/11 08:39:01 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:39:01 distinct = false
2021/03/11 08:39:01 Using n1ql client
2021/03/11 08:39:01 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:39:01 

--------- New scenarios ---------
2021/03/11 08:39:01 
--- CompIndexHighUnbounded1 ---
2021/03/11 08:39:01 
--- Multi Scan 0 ---
2021/03/11 08:39:01 distinct = false
2021/03/11 08:39:02 Using n1ql client
2021/03/11 08:39:02 Using n1ql client
2021/03/11 08:39:02 len(scanResults) = 8 MultiScanCount = 8
2021/03/11 08:39:02 Expected and Actual scan responses are the same
2021/03/11 08:39:02 
--- Multi Scan 1 ---
2021/03/11 08:39:02 distinct = false
2021/03/11 08:39:02 Using n1ql client
2021/03/11 08:39:02 Using n1ql client
2021/03/11 08:39:02 len(scanResults) = 0 MultiScanCount = 0
2021/03/11 08:39:02 Expected and Actual scan responses are the same
2021/03/11 08:39:02 
--- Multi Scan 2 ---
2021/03/11 08:39:02 distinct = false
2021/03/11 08:39:03 Using n1ql client
2021/03/11 08:39:03 Using n1ql client
2021/03/11 08:39:03 len(scanResults) = 9 MultiScanCount = 9
2021/03/11 08:39:03 Expected and Actual scan responses are the same
2021/03/11 08:39:03 
--- CompIndexHighUnbounded2 ---
2021/03/11 08:39:03 
--- Multi Scan 0 ---
2021/03/11 08:39:03 distinct = false
2021/03/11 08:39:04 Using n1ql client
2021/03/11 08:39:04 Using n1ql client
2021/03/11 08:39:04 len(scanResults) = 4138 MultiScanCount = 4138
2021/03/11 08:39:04 Expected and Actual scan responses are the same
2021/03/11 08:39:04 
--- Multi Scan 1 ---
2021/03/11 08:39:04 distinct = false
2021/03/11 08:39:04 Using n1ql client
2021/03/11 08:39:04 Using n1ql client
2021/03/11 08:39:04 len(scanResults) = 2746 MultiScanCount = 2746
2021/03/11 08:39:04 Expected and Actual scan responses are the same
2021/03/11 08:39:04 
--- Multi Scan 2 ---
2021/03/11 08:39:04 distinct = false
2021/03/11 08:39:05 Using n1ql client
2021/03/11 08:39:05 Using n1ql client
2021/03/11 08:39:05 len(scanResults) = 4691 MultiScanCount = 4691
2021/03/11 08:39:05 Expected and Actual scan responses are the same
2021/03/11 08:39:05 
--- CompIndexHighUnbounded3 ---
2021/03/11 08:39:05 
--- Multi Scan 0 ---
2021/03/11 08:39:05 distinct = false
2021/03/11 08:39:05 Using n1ql client
2021/03/11 08:39:05 Using n1ql client
2021/03/11 08:39:05 len(scanResults) = 1329 MultiScanCount = 1329
2021/03/11 08:39:05 Expected and Actual scan responses are the same
2021/03/11 08:39:05 
--- CompIndexHighUnbounded4 ---
2021/03/11 08:39:05 
--- Multi Scan 0 ---
2021/03/11 08:39:05 distinct = false
2021/03/11 08:39:06 Using n1ql client
2021/03/11 08:39:06 Using n1ql client
2021/03/11 08:39:06 len(scanResults) = 5349 MultiScanCount = 5349
2021/03/11 08:39:06 Expected and Actual scan responses are the same
2021/03/11 08:39:06 
--- CompIndexHighUnbounded5 ---
2021/03/11 08:39:06 
--- Multi Scan 0 ---
2021/03/11 08:39:06 distinct = false
2021/03/11 08:39:06 Using n1ql client
2021/03/11 08:39:06 Using n1ql client
2021/03/11 08:39:06 len(scanResults) = 8210 MultiScanCount = 8210
2021/03/11 08:39:06 Expected and Actual scan responses are the same
2021/03/11 08:39:06 
--- SeekBoundaries ---
2021/03/11 08:39:06 
--- Multi Scan 0 ---
2021/03/11 08:39:06 distinct = false
2021/03/11 08:39:07 Using n1ql client
2021/03/11 08:39:07 Using n1ql client
2021/03/11 08:39:07 len(scanResults) = 175 MultiScanCount = 175
2021/03/11 08:39:07 Expected and Actual scan responses are the same
2021/03/11 08:39:07 
--- Multi Scan 1 ---
2021/03/11 08:39:07 distinct = false
2021/03/11 08:39:07 Using n1ql client
2021/03/11 08:39:07 Using n1ql client
2021/03/11 08:39:07 len(scanResults) = 1 MultiScanCount = 1
2021/03/11 08:39:07 Expected and Actual scan responses are the same
2021/03/11 08:39:07 
--- Multi Scan 2 ---
2021/03/11 08:39:07 distinct = false
2021/03/11 08:39:08 Using n1ql client
2021/03/11 08:39:08 Using n1ql client
2021/03/11 08:39:08 len(scanResults) = 555 MultiScanCount = 555
2021/03/11 08:39:08 Expected and Actual scan responses are the same
2021/03/11 08:39:08 
--- Multi Scan 3 ---
2021/03/11 08:39:08 distinct = false
2021/03/11 08:39:08 Using n1ql client
2021/03/11 08:39:08 Using n1ql client
2021/03/11 08:39:08 len(scanResults) = 872 MultiScanCount = 872
2021/03/11 08:39:08 Expected and Actual scan responses are the same
2021/03/11 08:39:08 
--- Multi Scan 4 ---
2021/03/11 08:39:08 distinct = false
2021/03/11 08:39:09 Using n1ql client
2021/03/11 08:39:09 Using n1ql client
2021/03/11 08:39:09 len(scanResults) = 287 MultiScanCount = 287
2021/03/11 08:39:09 Expected and Actual scan responses are the same
2021/03/11 08:39:09 
--- Multi Scan 5 ---
2021/03/11 08:39:09 distinct = false
2021/03/11 08:39:09 Using n1ql client
2021/03/11 08:39:09 Using n1ql client
2021/03/11 08:39:09 len(scanResults) = 5254 MultiScanCount = 5254
2021/03/11 08:39:09 Expected and Actual scan responses are the same
2021/03/11 08:39:09 
--- Multi Scan 6 ---
2021/03/11 08:39:09 distinct = false
2021/03/11 08:39:10 Using n1ql client
2021/03/11 08:39:10 Using n1ql client
2021/03/11 08:39:10 len(scanResults) = 5566 MultiScanCount = 5566
2021/03/11 08:39:10 Expected and Actual scan responses are the same
2021/03/11 08:39:10 
--- Multi Scan 7 ---
2021/03/11 08:39:10 distinct = false
2021/03/11 08:39:10 Using n1ql client
2021/03/11 08:39:10 Using n1ql client
2021/03/11 08:39:10 len(scanResults) = 8 MultiScanCount = 8
2021/03/11 08:39:10 Expected and Actual scan responses are the same
2021/03/11 08:39:10 

--------- With DISTINCT True ---------
2021/03/11 08:39:10 
--- ScanAllNoFilter ---
2021/03/11 08:39:10 distinct = true
2021/03/11 08:39:11 Using n1ql client
2021/03/11 08:39:11 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:11 
--- ScanAllFilterNil ---
2021/03/11 08:39:11 distinct = true
2021/03/11 08:39:11 Using n1ql client
2021/03/11 08:39:11 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:11 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:11 distinct = true
2021/03/11 08:39:12 Using n1ql client
2021/03/11 08:39:12 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:12 
--- SingleSeek ---
2021/03/11 08:39:12 distinct = true
2021/03/11 08:39:12 Using n1ql client
2021/03/11 08:39:12 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:39:12 
--- MultipleSeek ---
2021/03/11 08:39:12 distinct = true
2021/03/11 08:39:13 Using n1ql client
2021/03/11 08:39:13 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:39:13 
--- SimpleRange ---
2021/03/11 08:39:13 distinct = true
2021/03/11 08:39:13 Using n1ql client
2021/03/11 08:39:13 MultiScanCount = 227 ExpectedMultiScanCount = 227
2021/03/11 08:39:13 
--- NonOverlappingRanges ---
2021/03/11 08:39:13 distinct = true
2021/03/11 08:39:14 Using n1ql client
2021/03/11 08:39:14 MultiScanCount = 428 ExpectedMultiScanCount = 428
2021/03/11 08:39:14 
--- OverlappingRanges ---
2021/03/11 08:39:14 distinct = true
2021/03/11 08:39:14 Using n1ql client
2021/03/11 08:39:14 MultiScanCount = 575 ExpectedMultiScanCount = 575
2021/03/11 08:39:14 
--- NonOverlappingFilters ---
2021/03/11 08:39:14 distinct = true
2021/03/11 08:39:15 Using n1ql client
2021/03/11 08:39:15 MultiScanCount = 186 ExpectedMultiScanCount = 186
2021/03/11 08:39:15 
--- NonOverlappingFilters2 ---
2021/03/11 08:39:15 distinct = true
2021/03/11 08:39:15 Using n1ql client
2021/03/11 08:39:15 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:39:15 
--- OverlappingFilters ---
2021/03/11 08:39:15 distinct = true
2021/03/11 08:39:16 Using n1ql client
2021/03/11 08:39:16 MultiScanCount = 543 ExpectedMultiScanCount = 543
2021/03/11 08:39:16 
--- BoundaryFilters ---
2021/03/11 08:39:16 distinct = true
2021/03/11 08:39:16 Using n1ql client
2021/03/11 08:39:16 MultiScanCount = 172 ExpectedMultiScanCount = 172
2021/03/11 08:39:16 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:39:16 distinct = true
2021/03/11 08:39:17 Using n1ql client
2021/03/11 08:39:17 MultiScanCount = 135 ExpectedMultiScanCount = 135
2021/03/11 08:39:17 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:39:17 distinct = true
2021/03/11 08:39:17 Using n1ql client
2021/03/11 08:39:17 MultiScanCount = 134 ExpectedMultiScanCount = 134
2021/03/11 08:39:17 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:39:17 distinct = false
2021/03/11 08:39:17 Using n1ql client
2021/03/11 08:39:17 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2021/03/11 08:39:17 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:39:17 distinct = false
2021/03/11 08:39:18 Using n1ql client
2021/03/11 08:39:18 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2021/03/11 08:39:18 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:39:18 distinct = false
2021/03/11 08:39:18 Using n1ql client
2021/03/11 08:39:18 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:39:18 
--- FiltersWithUnbounded ---
2021/03/11 08:39:18 distinct = false
2021/03/11 08:39:19 Using n1ql client
2021/03/11 08:39:19 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2021/03/11 08:39:19 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:39:19 distinct = false
2021/03/11 08:39:19 Using n1ql client
2021/03/11 08:39:19 MultiScanCount = 418 ExpectedMultiScanCount = 418
2021/03/11 08:39:19 

--------- Simple Index with 1 field ---------
2021/03/11 08:39:19 
--- SingleIndexSimpleRange ---
2021/03/11 08:39:19 distinct = true
2021/03/11 08:39:20 Using n1ql client
2021/03/11 08:39:20 MultiScanCount = 227 ExpectedMultiScanCount = 227
2021/03/11 08:39:20 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:39:20 distinct = true
2021/03/11 08:39:20 Using n1ql client
2021/03/11 08:39:20 MultiScanCount = 713 ExpectedMultiScanCount = 713
2021/03/11 08:39:20 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:39:20 distinct = true
2021/03/11 08:39:20 Using n1ql client
2021/03/11 08:39:20 MultiScanCount = 869 ExpectedMultiScanCount = 869
2021/03/11 08:39:20 

--------- Composite Index with 3 fields ---------
2021/03/11 08:39:20 
--- ScanAllNoFilter ---
2021/03/11 08:39:20 distinct = true
2021/03/11 08:39:21 Using n1ql client
2021/03/11 08:39:21 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:21 
--- ScanAllFilterNil ---
2021/03/11 08:39:21 distinct = true
2021/03/11 08:39:22 Using n1ql client
2021/03/11 08:39:22 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:22 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:22 distinct = true
2021/03/11 08:39:22 Using n1ql client
2021/03/11 08:39:22 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:39:22 
--- 3FieldsSingleSeek ---
2021/03/11 08:39:22 distinct = true
2021/03/11 08:39:23 Using n1ql client
2021/03/11 08:39:23 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:39:23 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:39:23 distinct = true
2021/03/11 08:39:23 Using n1ql client
2021/03/11 08:39:23 MultiScanCount = 3 ExpectedMultiScanCount = 3
2021/03/11 08:39:23 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:39:23 distinct = true
2021/03/11 08:39:24 Using n1ql client
2021/03/11 08:39:24 MultiScanCount = 2 ExpectedMultiScanCount = 2
--- PASS: TestMultiScanCount (35.87s)
=== RUN   TestMultiScanScenarios
2021/03/11 08:39:24 In TestMultiScanScenarios()
2021/03/11 08:39:24 

--------- Composite Index with 2 fields ---------
2021/03/11 08:39:24 
--- ScanAllNoFilter ---
2021/03/11 08:39:24 distinct = false
2021/03/11 08:39:24 Using n1ql client
2021/03/11 08:39:24 Expected and Actual scan responses are the same
2021/03/11 08:39:24 
--- ScanAllFilterNil ---
2021/03/11 08:39:24 distinct = false
2021/03/11 08:39:25 Using n1ql client
2021/03/11 08:39:25 Expected and Actual scan responses are the same
2021/03/11 08:39:25 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:25 distinct = false
2021/03/11 08:39:25 Using n1ql client
2021/03/11 08:39:25 Expected and Actual scan responses are the same
2021/03/11 08:39:25 
--- SingleSeek ---
2021/03/11 08:39:25 distinct = false
2021/03/11 08:39:26 Using n1ql client
2021/03/11 08:39:26 Expected and Actual scan responses are the same
2021/03/11 08:39:26 
--- MultipleSeek ---
2021/03/11 08:39:26 distinct = false
2021/03/11 08:39:26 Using n1ql client
2021/03/11 08:39:26 Expected and Actual scan responses are the same
2021/03/11 08:39:26 
--- SimpleRange ---
2021/03/11 08:39:26 distinct = false
2021/03/11 08:39:27 Using n1ql client
2021/03/11 08:39:27 Expected and Actual scan responses are the same
2021/03/11 08:39:27 
--- NonOverlappingRanges ---
2021/03/11 08:39:27 distinct = false
2021/03/11 08:39:27 Using n1ql client
2021/03/11 08:39:27 Expected and Actual scan responses are the same
2021/03/11 08:39:27 
--- OverlappingRanges ---
2021/03/11 08:39:27 distinct = false
2021/03/11 08:39:27 Using n1ql client
2021/03/11 08:39:28 Expected and Actual scan responses are the same
2021/03/11 08:39:28 
--- NonOverlappingFilters ---
2021/03/11 08:39:28 distinct = false
2021/03/11 08:39:28 Using n1ql client
2021/03/11 08:39:28 Expected and Actual scan responses are the same
2021/03/11 08:39:28 
--- OverlappingFilters ---
2021/03/11 08:39:28 distinct = false
2021/03/11 08:39:29 Using n1ql client
2021/03/11 08:39:29 Expected and Actual scan responses are the same
2021/03/11 08:39:29 
--- BoundaryFilters ---
2021/03/11 08:39:29 distinct = false
2021/03/11 08:39:29 Using n1ql client
2021/03/11 08:39:29 Expected and Actual scan responses are the same
2021/03/11 08:39:29 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:39:29 distinct = false
2021/03/11 08:39:29 Using n1ql client
2021/03/11 08:39:29 Expected and Actual scan responses are the same
2021/03/11 08:39:29 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:39:29 distinct = false
2021/03/11 08:39:30 Using n1ql client
2021/03/11 08:39:30 Expected and Actual scan responses are the same
2021/03/11 08:39:30 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:39:30 distinct = false
2021/03/11 08:39:30 Using n1ql client
2021/03/11 08:39:30 Expected and Actual scan responses are the same
2021/03/11 08:39:30 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:39:30 distinct = false
2021/03/11 08:39:31 Using n1ql client
2021/03/11 08:39:31 Expected and Actual scan responses are the same
2021/03/11 08:39:31 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:39:31 distinct = false
2021/03/11 08:39:31 Using n1ql client
2021/03/11 08:39:31 Expected and Actual scan responses are the same
2021/03/11 08:39:31 
--- FiltersWithUnbounded ---
2021/03/11 08:39:31 distinct = false
2021/03/11 08:39:32 Using n1ql client
2021/03/11 08:39:32 Expected and Actual scan responses are the same
2021/03/11 08:39:32 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:39:32 distinct = false
2021/03/11 08:39:32 Using n1ql client
2021/03/11 08:39:32 Expected and Actual scan responses are the same
2021/03/11 08:39:32 

--------- Simple Index with 1 field ---------
2021/03/11 08:39:32 
--- SingleIndexSimpleRange ---
2021/03/11 08:39:32 distinct = false
2021/03/11 08:39:33 Using n1ql client
2021/03/11 08:39:33 Expected and Actual scan responses are the same
2021/03/11 08:39:33 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:39:33 distinct = false
2021/03/11 08:39:33 Using n1ql client
2021/03/11 08:39:33 Expected and Actual scan responses are the same
2021/03/11 08:39:33 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:39:33 distinct = false
2021/03/11 08:39:34 Using n1ql client
2021/03/11 08:39:34 Expected and Actual scan responses are the same
2021/03/11 08:39:34 

--------- Composite Index with 3 fields ---------
2021/03/11 08:39:34 
--- ScanAllNoFilter ---
2021/03/11 08:39:34 distinct = false
2021/03/11 08:39:34 Using n1ql client
2021/03/11 08:39:34 Expected and Actual scan responses are the same
2021/03/11 08:39:34 
--- ScanAllFilterNil ---
2021/03/11 08:39:34 distinct = false
2021/03/11 08:39:35 Using n1ql client
2021/03/11 08:39:35 Expected and Actual scan responses are the same
2021/03/11 08:39:35 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:35 distinct = false
2021/03/11 08:39:35 Using n1ql client
2021/03/11 08:39:35 Expected and Actual scan responses are the same
2021/03/11 08:39:35 
--- 3FieldsSingleSeek ---
2021/03/11 08:39:35 distinct = false
2021/03/11 08:39:36 Using n1ql client
2021/03/11 08:39:36 Expected and Actual scan responses are the same
2021/03/11 08:39:36 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:39:36 distinct = false
2021/03/11 08:39:36 Using n1ql client
2021/03/11 08:39:36 Expected and Actual scan responses are the same
2021/03/11 08:39:36 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:39:36 distinct = false
2021/03/11 08:39:37 Using n1ql client
2021/03/11 08:39:37 Expected and Actual scan responses are the same
2021/03/11 08:39:37 

--------- New scenarios ---------
2021/03/11 08:39:37 
--- CompIndexHighUnbounded1 ---
2021/03/11 08:39:37 
--- Multi Scan 0 ---
2021/03/11 08:39:37 distinct = false
2021/03/11 08:39:37 Using n1ql client
2021/03/11 08:39:37 Expected and Actual scan responses are the same
2021/03/11 08:39:37 
--- Multi Scan 1 ---
2021/03/11 08:39:37 distinct = false
2021/03/11 08:39:38 Using n1ql client
2021/03/11 08:39:38 Expected and Actual scan responses are the same
2021/03/11 08:39:38 
--- Multi Scan 2 ---
2021/03/11 08:39:38 distinct = false
2021/03/11 08:39:38 Using n1ql client
2021/03/11 08:39:38 Expected and Actual scan responses are the same
2021/03/11 08:39:38 
--- CompIndexHighUnbounded2 ---
2021/03/11 08:39:38 
--- Multi Scan 0 ---
2021/03/11 08:39:38 distinct = false
2021/03/11 08:39:39 Using n1ql client
2021/03/11 08:39:39 Expected and Actual scan responses are the same
2021/03/11 08:39:39 
--- Multi Scan 1 ---
2021/03/11 08:39:39 distinct = false
2021/03/11 08:39:39 Using n1ql client
2021/03/11 08:39:39 Expected and Actual scan responses are the same
2021/03/11 08:39:39 
--- Multi Scan 2 ---
2021/03/11 08:39:39 distinct = false
2021/03/11 08:39:40 Using n1ql client
2021/03/11 08:39:40 Expected and Actual scan responses are the same
2021/03/11 08:39:40 
--- CompIndexHighUnbounded3 ---
2021/03/11 08:39:40 
--- Multi Scan 0 ---
2021/03/11 08:39:40 distinct = false
2021/03/11 08:39:40 Using n1ql client
2021/03/11 08:39:40 Expected and Actual scan responses are the same
2021/03/11 08:39:40 
--- CompIndexHighUnbounded4 ---
2021/03/11 08:39:40 
--- Multi Scan 0 ---
2021/03/11 08:39:40 distinct = false
2021/03/11 08:39:41 Using n1ql client
2021/03/11 08:39:41 Expected and Actual scan responses are the same
2021/03/11 08:39:41 
--- CompIndexHighUnbounded5 ---
2021/03/11 08:39:41 
--- Multi Scan 0 ---
2021/03/11 08:39:41 distinct = false
2021/03/11 08:39:41 Using n1ql client
2021/03/11 08:39:41 Expected and Actual scan responses are the same
2021/03/11 08:39:41 
--- SeekBoundaries ---
2021/03/11 08:39:41 
--- Multi Scan 0 ---
2021/03/11 08:39:41 distinct = false
2021/03/11 08:39:42 Using n1ql client
2021/03/11 08:39:42 Expected and Actual scan responses are the same
2021/03/11 08:39:42 
--- Multi Scan 1 ---
2021/03/11 08:39:42 distinct = false
2021/03/11 08:39:42 Using n1ql client
2021/03/11 08:39:42 Expected and Actual scan responses are the same
2021/03/11 08:39:42 
--- Multi Scan 2 ---
2021/03/11 08:39:42 distinct = false
2021/03/11 08:39:43 Using n1ql client
2021/03/11 08:39:43 Expected and Actual scan responses are the same
2021/03/11 08:39:43 
--- Multi Scan 3 ---
2021/03/11 08:39:43 distinct = false
2021/03/11 08:39:43 Using n1ql client
2021/03/11 08:39:43 Expected and Actual scan responses are the same
2021/03/11 08:39:43 
--- Multi Scan 4 ---
2021/03/11 08:39:43 distinct = false
2021/03/11 08:39:43 Using n1ql client
2021/03/11 08:39:43 Expected and Actual scan responses are the same
2021/03/11 08:39:43 
--- Multi Scan 5 ---
2021/03/11 08:39:43 distinct = false
2021/03/11 08:39:44 Using n1ql client
2021/03/11 08:39:44 Expected and Actual scan responses are the same
2021/03/11 08:39:44 
--- Multi Scan 6 ---
2021/03/11 08:39:44 distinct = false
2021/03/11 08:39:44 Using n1ql client
2021/03/11 08:39:44 Expected and Actual scan responses are the same
2021/03/11 08:39:44 
--- Multi Scan 7 ---
2021/03/11 08:39:44 distinct = false
2021/03/11 08:39:45 Using n1ql client
2021/03/11 08:39:45 Expected and Actual scan responses are the same
2021/03/11 08:39:45 
--- PrefixSortVariations ---
2021/03/11 08:39:45 
--- Multi Scan 0 ---
2021/03/11 08:39:45 distinct = false
2021/03/11 08:39:45 Using n1ql client
2021/03/11 08:39:45 Expected and Actual scan responses are the same
2021/03/11 08:39:45 
--- Multi Scan 1 ---
2021/03/11 08:39:45 distinct = false
2021/03/11 08:39:46 Using n1ql client
2021/03/11 08:39:46 Expected and Actual scan responses are the same
--- PASS: TestMultiScanScenarios (22.18s)
=== RUN   TestMultiScanOffset
2021/03/11 08:39:46 In TestMultiScanOffset()
2021/03/11 08:39:46 

--------- Composite Index with 2 fields ---------
2021/03/11 08:39:46 
--- ScanAllNoFilter ---
2021/03/11 08:39:46 distinct = false
2021/03/11 08:39:46 Using n1ql client
2021/03/11 08:39:46 
--- ScanAllFilterNil ---
2021/03/11 08:39:46 distinct = false
2021/03/11 08:39:47 Using n1ql client
2021/03/11 08:39:47 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:47 distinct = false
2021/03/11 08:39:47 Using n1ql client
2021/03/11 08:39:47 
--- SingleSeek ---
2021/03/11 08:39:47 distinct = false
2021/03/11 08:39:48 Using n1ql client
2021/03/11 08:39:48 
--- MultipleSeek ---
2021/03/11 08:39:48 distinct = false
2021/03/11 08:39:48 Using n1ql client
2021/03/11 08:39:48 
--- SimpleRange ---
2021/03/11 08:39:48 distinct = false
2021/03/11 08:39:49 Using n1ql client
2021/03/11 08:39:49 
--- NonOverlappingRanges ---
2021/03/11 08:39:49 distinct = false
2021/03/11 08:39:49 Using n1ql client
2021/03/11 08:39:49 
--- OverlappingRanges ---
2021/03/11 08:39:49 distinct = false
2021/03/11 08:39:50 Using n1ql client
2021/03/11 08:39:50 
--- NonOverlappingFilters ---
2021/03/11 08:39:50 distinct = false
2021/03/11 08:39:50 Using n1ql client
2021/03/11 08:39:50 
--- OverlappingFilters ---
2021/03/11 08:39:50 distinct = false
2021/03/11 08:39:51 Using n1ql client
2021/03/11 08:39:51 
--- BoundaryFilters ---
2021/03/11 08:39:51 distinct = false
2021/03/11 08:39:51 Using n1ql client
2021/03/11 08:39:51 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:39:51 distinct = false
2021/03/11 08:39:52 Using n1ql client
2021/03/11 08:39:52 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:39:52 distinct = false
2021/03/11 08:39:52 Using n1ql client
2021/03/11 08:39:52 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:39:52 distinct = false
2021/03/11 08:39:52 Using n1ql client
2021/03/11 08:39:52 Expected and Actual scan responses are the same
2021/03/11 08:39:52 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:39:52 distinct = false
2021/03/11 08:39:53 Using n1ql client
2021/03/11 08:39:53 Expected and Actual scan responses are the same
2021/03/11 08:39:53 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:39:53 distinct = false
2021/03/11 08:39:53 Using n1ql client
2021/03/11 08:39:53 Expected and Actual scan responses are the same
2021/03/11 08:39:53 
--- FiltersWithUnbounded ---
2021/03/11 08:39:53 distinct = false
2021/03/11 08:39:54 Using n1ql client
2021/03/11 08:39:54 Expected and Actual scan responses are the same
2021/03/11 08:39:54 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:39:54 distinct = false
2021/03/11 08:39:54 Using n1ql client
2021/03/11 08:39:54 Expected and Actual scan responses are the same
2021/03/11 08:39:54 

--------- Simple Index with 1 field ---------
2021/03/11 08:39:54 
--- SingleIndexSimpleRange ---
2021/03/11 08:39:54 distinct = false
2021/03/11 08:39:55 Using n1ql client
2021/03/11 08:39:55 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:39:55 distinct = false
2021/03/11 08:39:55 Using n1ql client
2021/03/11 08:39:55 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:39:55 distinct = false
2021/03/11 08:39:56 Using n1ql client
2021/03/11 08:39:56 

--------- Composite Index with 3 fields ---------
2021/03/11 08:39:56 
--- ScanAllNoFilter ---
2021/03/11 08:39:56 distinct = false
2021/03/11 08:39:56 Using n1ql client
2021/03/11 08:39:56 
--- ScanAllFilterNil ---
2021/03/11 08:39:56 distinct = false
2021/03/11 08:39:57 Using n1ql client
2021/03/11 08:39:57 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:39:57 distinct = false
2021/03/11 08:39:57 Using n1ql client
2021/03/11 08:39:57 
--- 3FieldsSingleSeek ---
2021/03/11 08:39:57 distinct = false
2021/03/11 08:39:58 Using n1ql client
2021/03/11 08:39:58 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:39:58 distinct = false
2021/03/11 08:39:58 Using n1ql client
2021/03/11 08:39:58 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:39:58 distinct = false
2021/03/11 08:39:59 Using n1ql client
--- PASS: TestMultiScanOffset (12.94s)
=== RUN   TestMultiScanPrimaryIndex
2021/03/11 08:39:59 In TestMultiScanPrimaryIndex()
2021/03/11 08:39:59 
--- PrimaryRange ---
2021/03/11 08:39:59 Using n1ql client
2021/03/11 08:39:59 Expected and Actual scan responses are the same
2021/03/11 08:39:59 
--- PrimaryScanAllNoFilter ---
2021/03/11 08:39:59 Using n1ql client
2021/03/11 08:39:59 Expected and Actual scan responses are the same
--- PASS: TestMultiScanPrimaryIndex (0.05s)
=== RUN   TestMultiScanDistinct
2021/03/11 08:39:59 In TestScansDistinct()
2021/03/11 08:39:59 

--------- Composite Index with 2 fields ---------
2021/03/11 08:39:59 
--- ScanAllNoFilter ---
2021/03/11 08:39:59 distinct = true
2021/03/11 08:39:59 Using n1ql client
2021/03/11 08:39:59 Expected and Actual scan responses are the same
2021/03/11 08:39:59 
--- ScanAllFilterNil ---
2021/03/11 08:39:59 distinct = true
2021/03/11 08:40:00 Using n1ql client
2021/03/11 08:40:00 Expected and Actual scan responses are the same
2021/03/11 08:40:00 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:40:00 distinct = true
2021/03/11 08:40:00 Using n1ql client
2021/03/11 08:40:00 Expected and Actual scan responses are the same
2021/03/11 08:40:00 
--- SingleSeek ---
2021/03/11 08:40:00 distinct = true
2021/03/11 08:40:01 Using n1ql client
2021/03/11 08:40:01 Expected and Actual scan responses are the same
2021/03/11 08:40:01 
--- MultipleSeek ---
2021/03/11 08:40:01 distinct = true
2021/03/11 08:40:01 Using n1ql client
2021/03/11 08:40:01 Expected and Actual scan responses are the same
2021/03/11 08:40:01 
--- SimpleRange ---
2021/03/11 08:40:01 distinct = true
2021/03/11 08:40:02 Using n1ql client
2021/03/11 08:40:02 Expected and Actual scan responses are the same
2021/03/11 08:40:02 
--- NonOverlappingRanges ---
2021/03/11 08:40:02 distinct = true
2021/03/11 08:40:02 Using n1ql client
2021/03/11 08:40:02 Expected and Actual scan responses are the same
2021/03/11 08:40:02 
--- OverlappingRanges ---
2021/03/11 08:40:02 distinct = true
2021/03/11 08:40:03 Using n1ql client
2021/03/11 08:40:03 Expected and Actual scan responses are the same
2021/03/11 08:40:03 
--- NonOverlappingFilters ---
2021/03/11 08:40:03 distinct = true
2021/03/11 08:40:03 Using n1ql client
2021/03/11 08:40:03 Expected and Actual scan responses are the same
2021/03/11 08:40:03 
--- OverlappingFilters ---
2021/03/11 08:40:03 distinct = true
2021/03/11 08:40:04 Using n1ql client
2021/03/11 08:40:04 Expected and Actual scan responses are the same
2021/03/11 08:40:04 
--- BoundaryFilters ---
2021/03/11 08:40:04 distinct = true
2021/03/11 08:40:04 Using n1ql client
2021/03/11 08:40:04 Expected and Actual scan responses are the same
2021/03/11 08:40:04 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:40:04 distinct = true
2021/03/11 08:40:05 Using n1ql client
2021/03/11 08:40:05 Expected and Actual scan responses are the same
2021/03/11 08:40:05 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:40:05 distinct = true
2021/03/11 08:40:05 Using n1ql client
2021/03/11 08:40:05 Expected and Actual scan responses are the same
2021/03/11 08:40:05 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:40:05 distinct = false
2021/03/11 08:40:06 Using n1ql client
2021/03/11 08:40:06 Expected and Actual scan responses are the same
2021/03/11 08:40:06 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:40:06 distinct = false
2021/03/11 08:40:06 Using n1ql client
2021/03/11 08:40:06 Expected and Actual scan responses are the same
2021/03/11 08:40:06 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:40:06 distinct = false
2021/03/11 08:40:07 Using n1ql client
2021/03/11 08:40:07 Expected and Actual scan responses are the same
2021/03/11 08:40:07 
--- FiltersWithUnbounded ---
2021/03/11 08:40:07 distinct = false
2021/03/11 08:40:07 Using n1ql client
2021/03/11 08:40:07 Expected and Actual scan responses are the same
2021/03/11 08:40:07 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:40:07 distinct = false
2021/03/11 08:40:08 Using n1ql client
2021/03/11 08:40:08 Expected and Actual scan responses are the same
2021/03/11 08:40:08 

--------- Simple Index with 1 field ---------
2021/03/11 08:40:08 
--- SingleIndexSimpleRange ---
2021/03/11 08:40:08 distinct = true
2021/03/11 08:40:08 Using n1ql client
2021/03/11 08:40:08 Expected and Actual scan responses are the same
2021/03/11 08:40:08 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:40:08 distinct = true
2021/03/11 08:40:08 Using n1ql client
2021/03/11 08:40:08 Expected and Actual scan responses are the same
2021/03/11 08:40:08 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:40:08 distinct = true
2021/03/11 08:40:09 Using n1ql client
2021/03/11 08:40:09 Expected and Actual scan responses are the same
2021/03/11 08:40:09 

--------- Composite Index with 3 fields ---------
2021/03/11 08:40:09 
--- ScanAllNoFilter ---
2021/03/11 08:40:09 distinct = true
2021/03/11 08:40:09 Using n1ql client
2021/03/11 08:40:09 Expected and Actual scan responses are the same
2021/03/11 08:40:09 
--- ScanAllFilterNil ---
2021/03/11 08:40:09 distinct = true
2021/03/11 08:40:10 Using n1ql client
2021/03/11 08:40:10 Expected and Actual scan responses are the same
2021/03/11 08:40:10 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:40:10 distinct = true
2021/03/11 08:40:11 Using n1ql client
2021/03/11 08:40:11 Expected and Actual scan responses are the same
2021/03/11 08:40:11 
--- 3FieldsSingleSeek ---
2021/03/11 08:40:11 distinct = true
2021/03/11 08:40:11 Using n1ql client
2021/03/11 08:40:11 Expected and Actual scan responses are the same
2021/03/11 08:40:11 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:40:11 distinct = true
2021/03/11 08:40:12 Using n1ql client
2021/03/11 08:40:12 Expected and Actual scan responses are the same
2021/03/11 08:40:12 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:40:12 distinct = true
2021/03/11 08:40:12 Using n1ql client
2021/03/11 08:40:12 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDistinct (13.32s)
=== RUN   TestMultiScanProjection
2021/03/11 08:40:12 In TestMultiScanProjection()
2021/03/11 08:40:12 

--------- Composite Index with 2 fields ---------
2021/03/11 08:40:12 
--- ScanAllNoFilter ---
2021/03/11 08:40:12 distinct = true
2021/03/11 08:40:13 Using n1ql client
2021/03/11 08:40:13 Expected and Actual scan responses are the same
2021/03/11 08:40:13 
--- ScanAllFilterNil ---
2021/03/11 08:40:13 distinct = true
2021/03/11 08:40:13 Using n1ql client
2021/03/11 08:40:13 Expected and Actual scan responses are the same
2021/03/11 08:40:13 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:40:13 distinct = true
2021/03/11 08:40:14 Using n1ql client
2021/03/11 08:40:14 Expected and Actual scan responses are the same
2021/03/11 08:40:14 
--- SingleSeek ---
2021/03/11 08:40:14 distinct = true
2021/03/11 08:40:14 Using n1ql client
2021/03/11 08:40:14 Expected and Actual scan responses are the same
2021/03/11 08:40:14 
--- MultipleSeek ---
2021/03/11 08:40:14 distinct = true
2021/03/11 08:40:15 Using n1ql client
2021/03/11 08:40:15 Expected and Actual scan responses are the same
2021/03/11 08:40:15 
--- SimpleRange ---
2021/03/11 08:40:15 distinct = true
2021/03/11 08:40:15 Using n1ql client
2021/03/11 08:40:15 Expected and Actual scan responses are the same
2021/03/11 08:40:15 
--- NonOverlappingRanges ---
2021/03/11 08:40:15 distinct = true
2021/03/11 08:40:16 Using n1ql client
2021/03/11 08:40:16 Expected and Actual scan responses are the same
2021/03/11 08:40:16 
--- OverlappingRanges ---
2021/03/11 08:40:16 distinct = true
2021/03/11 08:40:16 Using n1ql client
2021/03/11 08:40:16 Expected and Actual scan responses are the same
2021/03/11 08:40:16 
--- NonOverlappingFilters ---
2021/03/11 08:40:16 distinct = true
2021/03/11 08:40:17 Using n1ql client
2021/03/11 08:40:17 Expected and Actual scan responses are the same
2021/03/11 08:40:17 
--- OverlappingFilters ---
2021/03/11 08:40:17 distinct = true
2021/03/11 08:40:17 Using n1ql client
2021/03/11 08:40:17 Expected and Actual scan responses are the same
2021/03/11 08:40:17 
--- BoundaryFilters ---
2021/03/11 08:40:17 distinct = true
2021/03/11 08:40:18 Using n1ql client
2021/03/11 08:40:18 Expected and Actual scan responses are the same
2021/03/11 08:40:18 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:40:18 distinct = true
2021/03/11 08:40:18 Using n1ql client
2021/03/11 08:40:18 Expected and Actual scan responses are the same
2021/03/11 08:40:18 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:40:18 distinct = true
2021/03/11 08:40:19 Using n1ql client
2021/03/11 08:40:19 Expected and Actual scan responses are the same
2021/03/11 08:40:19 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:40:19 distinct = false
2021/03/11 08:40:19 Using n1ql client
2021/03/11 08:40:19 Expected and Actual scan responses are the same
2021/03/11 08:40:19 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:40:19 distinct = false
2021/03/11 08:40:20 Using n1ql client
2021/03/11 08:40:20 Expected and Actual scan responses are the same
2021/03/11 08:40:20 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:40:20 distinct = false
2021/03/11 08:40:20 Using n1ql client
2021/03/11 08:40:20 Expected and Actual scan responses are the same
2021/03/11 08:40:20 
--- FiltersWithUnbounded ---
2021/03/11 08:40:20 distinct = false
2021/03/11 08:40:20 Using n1ql client
2021/03/11 08:40:20 Expected and Actual scan responses are the same
2021/03/11 08:40:20 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:40:20 distinct = false
2021/03/11 08:40:21 Using n1ql client
2021/03/11 08:40:21 Expected and Actual scan responses are the same
2021/03/11 08:40:21 

--------- Simple Index with 1 field ---------
2021/03/11 08:40:21 
--- SingleIndexSimpleRange ---
2021/03/11 08:40:21 distinct = true
2021/03/11 08:40:21 Using n1ql client
2021/03/11 08:40:21 Expected and Actual scan responses are the same
2021/03/11 08:40:21 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:40:21 distinct = true
2021/03/11 08:40:22 Using n1ql client
2021/03/11 08:40:22 Expected and Actual scan responses are the same
2021/03/11 08:40:22 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:40:22 distinct = true
2021/03/11 08:40:22 Using n1ql client
2021/03/11 08:40:22 Expected and Actual scan responses are the same
2021/03/11 08:40:22 

--------- Composite Index with 3 fields ---------
2021/03/11 08:40:22 
--- ScanAllNoFilter ---
2021/03/11 08:40:22 distinct = true
2021/03/11 08:40:23 Using n1ql client
2021/03/11 08:40:23 Expected and Actual scan responses are the same
2021/03/11 08:40:23 
--- ScanAllFilterNil ---
2021/03/11 08:40:23 distinct = true
2021/03/11 08:40:23 Using n1ql client
2021/03/11 08:40:23 Expected and Actual scan responses are the same
2021/03/11 08:40:23 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:40:23 distinct = true
2021/03/11 08:40:24 Using n1ql client
2021/03/11 08:40:24 Expected and Actual scan responses are the same
2021/03/11 08:40:24 
--- 3FieldsSingleSeek ---
2021/03/11 08:40:24 distinct = true
2021/03/11 08:40:25 Using n1ql client
2021/03/11 08:40:25 Expected and Actual scan responses are the same
2021/03/11 08:40:25 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:40:25 distinct = true
2021/03/11 08:40:25 Using n1ql client
2021/03/11 08:40:25 Expected and Actual scan responses are the same
2021/03/11 08:40:25 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:40:25 distinct = true
2021/03/11 08:40:26 Using n1ql client
2021/03/11 08:40:26 Expected and Actual scan responses are the same
2021/03/11 08:40:26 indexes are: index_company, index_companyname, index_company_name_age, index_company_name_age_address, index_company_name_age_address_friends
2021/03/11 08:40:26 fields are: [company], [company name], [company name age], [company name age address], [company name age address friends]
2021/03/11 08:40:26 
--- SingleIndexProjectFirst ---
2021/03/11 08:40:26 distinct = true
2021/03/11 08:40:26 Using n1ql client
2021/03/11 08:40:26 Expected and Actual scan responses are the same
2021/03/11 08:40:26 
--- 2FieldIndexProjectSecond ---
2021/03/11 08:40:26 distinct = true
2021/03/11 08:40:26 Using n1ql client
2021/03/11 08:40:26 Expected and Actual scan responses are the same
2021/03/11 08:40:26 
--- 3FieldIndexProjectThird ---
2021/03/11 08:40:26 distinct = true
2021/03/11 08:40:27 Using n1ql client
2021/03/11 08:40:27 Expected and Actual scan responses are the same
2021/03/11 08:40:27 
--- 4FieldIndexProjectFourth ---
2021/03/11 08:40:27 distinct = true
2021/03/11 08:40:29 Using n1ql client
2021/03/11 08:40:29 Expected and Actual scan responses are the same
2021/03/11 08:40:29 
--- 5FieldIndexProjectFifth ---
2021/03/11 08:40:29 distinct = true
2021/03/11 08:40:33 Using n1ql client
2021/03/11 08:40:33 Expected and Actual scan responses are the same
2021/03/11 08:40:33 
--- 2FieldIndexProjectTwo ---
2021/03/11 08:40:33 distinct = true
2021/03/11 08:40:34 Using n1ql client
2021/03/11 08:40:34 Expected and Actual scan responses are the same
2021/03/11 08:40:34 
--- 3FieldIndexProjectTwo ---
2021/03/11 08:40:34 distinct = true
2021/03/11 08:40:34 Using n1ql client
2021/03/11 08:40:34 Expected and Actual scan responses are the same
2021/03/11 08:40:34 
--- 3FieldIndexProjectTwo ---
2021/03/11 08:40:34 distinct = true
2021/03/11 08:40:35 Using n1ql client
2021/03/11 08:40:35 Expected and Actual scan responses are the same
2021/03/11 08:40:35 
--- 3FieldIndexProjectTwo ---
2021/03/11 08:40:35 distinct = true
2021/03/11 08:40:35 Using n1ql client
2021/03/11 08:40:35 Expected and Actual scan responses are the same
2021/03/11 08:40:35 
--- 4FieldIndexProjectTwo ---
2021/03/11 08:40:35 distinct = true
2021/03/11 08:40:37 Using n1ql client
2021/03/11 08:40:37 Expected and Actual scan responses are the same
2021/03/11 08:40:37 
--- 4FieldIndexProjectTwo ---
2021/03/11 08:40:37 distinct = true
2021/03/11 08:40:39 Using n1ql client
2021/03/11 08:40:39 Expected and Actual scan responses are the same
2021/03/11 08:40:39 
--- 4FieldIndexProjectTwo ---
2021/03/11 08:40:39 distinct = true
2021/03/11 08:40:41 Using n1ql client
2021/03/11 08:40:41 Expected and Actual scan responses are the same
2021/03/11 08:40:41 
--- 4FieldIndexProjectTwo ---
2021/03/11 08:40:41 distinct = true
2021/03/11 08:40:43 Using n1ql client
2021/03/11 08:40:43 Expected and Actual scan responses are the same
2021/03/11 08:40:43 
--- 4FieldIndexProjectTwo ---
2021/03/11 08:40:43 distinct = true
2021/03/11 08:40:45 Using n1ql client
2021/03/11 08:40:45 Expected and Actual scan responses are the same
2021/03/11 08:40:45 
--- 5FieldIndexProjectTwo ---
2021/03/11 08:40:45 distinct = true
2021/03/11 08:40:50 Using n1ql client
2021/03/11 08:40:50 Expected and Actual scan responses are the same
2021/03/11 08:40:50 
--- 5FieldIndexProjectTwo ---
2021/03/11 08:40:50 distinct = true
2021/03/11 08:40:54 Using n1ql client
2021/03/11 08:40:54 Expected and Actual scan responses are the same
2021/03/11 08:40:54 
--- 5FieldIndexProjectTwo ---
2021/03/11 08:40:54 distinct = true
2021/03/11 08:40:58 Using n1ql client
2021/03/11 08:40:58 Expected and Actual scan responses are the same
2021/03/11 08:40:58 
--- 5FieldIndexProjectTwo ---
2021/03/11 08:40:58 distinct = true
2021/03/11 08:41:03 Using n1ql client
2021/03/11 08:41:03 Expected and Actual scan responses are the same
2021/03/11 08:41:03 
--- 5FieldIndexProjectThree ---
2021/03/11 08:41:03 distinct = true
2021/03/11 08:41:07 Using n1ql client
2021/03/11 08:41:07 Expected and Actual scan responses are the same
2021/03/11 08:41:07 
--- 5FieldIndexProjectFour ---
2021/03/11 08:41:07 distinct = true
2021/03/11 08:41:11 Using n1ql client
2021/03/11 08:41:11 Expected and Actual scan responses are the same
2021/03/11 08:41:11 
--- 5FieldIndexProjectAll ---
2021/03/11 08:41:11 distinct = true
2021/03/11 08:41:16 Using n1ql client
2021/03/11 08:41:16 Expected and Actual scan responses are the same
2021/03/11 08:41:16 
--- 5FieldIndexProjectAlternate ---
2021/03/11 08:41:16 distinct = true
2021/03/11 08:41:20 Using n1ql client
2021/03/11 08:41:20 Expected and Actual scan responses are the same
2021/03/11 08:41:20 
--- 5FieldIndexProjectEmptyEntryKeys ---
2021/03/11 08:41:20 distinct = true
2021/03/11 08:41:24 Using n1ql client
2021/03/11 08:41:24 Expected and Actual scan responses are the same
--- PASS: TestMultiScanProjection (71.70s)
=== RUN   TestMultiScanRestAPI
2021/03/11 08:41:24 In TestMultiScanRestAPI()
2021/03/11 08:41:24 In DropAllSecondaryIndexes()
2021/03/11 08:41:24 Index found:  index_company_name_age_address_friends
2021/03/11 08:41:24 Dropped index index_company_name_age_address_friends
2021/03/11 08:41:24 Index found:  index_company
2021/03/11 08:41:24 Dropped index index_company
2021/03/11 08:41:24 Index found:  index_company_name_age
2021/03/11 08:41:24 Dropped index index_company_name_age
2021/03/11 08:41:24 Index found:  index_primary
2021/03/11 08:41:24 Dropped index index_primary
2021/03/11 08:41:24 Index found:  index_companyname
2021/03/11 08:41:24 Dropped index index_companyname
2021/03/11 08:41:24 Index found:  addressidx
2021/03/11 08:41:25 Dropped index addressidx
2021/03/11 08:41:25 Index found:  index_company_name_age_address
2021/03/11 08:41:25 Dropped index index_company_name_age_address
2021/03/11 08:41:28 Created the secondary index index_companyname. Waiting for it become active
2021/03/11 08:41:28 Index is now active
2021/03/11 08:41:28 GET all indexes
2021/03/11 08:41:28 200 OK
2021/03/11 08:41:28 getscans status : 200 OK
2021/03/11 08:41:28 number of entries 337
2021/03/11 08:41:28 Status : 200 OK
2021/03/11 08:41:28 Result from multiscancount API = 337
--- PASS: TestMultiScanRestAPI (4.40s)
=== RUN   TestMultiScanPrimaryIndexVariations
2021/03/11 08:41:28 In TestMultiScanPrimaryIndexVariations()
2021/03/11 08:41:34 Created the secondary index index_pi. Waiting for it become active
2021/03/11 08:41:34 Index is now active
2021/03/11 08:41:34 
--- No Overlap ---
2021/03/11 08:41:34 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Proper Overlap ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Low Boundary Overlap ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Complex Overlaps ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Multiple Equal Overlaps ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Boundary and Subset Overlaps ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Point Overlaps ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Boundary and Point Overlaps ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 
--- Primary index range null ---
2021/03/11 08:41:35 Using n1ql client
2021/03/11 08:41:35 Expected and Actual scan responses are the same
2021/03/11 08:41:35 Dropping the secondary index index_pi
2021/03/11 08:41:35 Index dropped
--- PASS: TestMultiScanPrimaryIndexVariations (7.13s)
=== RUN   TestMultiScanDescSetup
2021/03/11 08:41:35 In TestMultiScanDescSetup()
2021/03/11 08:41:35 In DropAllSecondaryIndexes()
2021/03/11 08:41:35 Index found:  index_companyname
2021/03/11 08:41:36 Dropped index index_companyname
2021/03/11 08:41:39 Created the secondary index index_companyname_desc. Waiting for it become active
2021/03/11 08:41:39 Index is now active
2021/03/11 08:41:44 Created the secondary index index_company_desc. Waiting for it become active
2021/03/11 08:41:44 Index is now active
2021/03/11 08:41:49 Created the secondary index index_company_name_age_desc. Waiting for it become active
2021/03/11 08:41:49 Index is now active
--- PASS: TestMultiScanDescSetup (13.18s)
=== RUN   TestMultiScanDescScenarios
2021/03/11 08:41:49 In TestMultiScanDescScenarios()
2021/03/11 08:41:49 

--------- Composite Index with 2 fields ---------
2021/03/11 08:41:49 
--- ScanAllNoFilter ---
2021/03/11 08:41:49 distinct = false
2021/03/11 08:41:49 Using n1ql client
2021/03/11 08:41:49 Expected and Actual scan responses are the same
2021/03/11 08:41:49 
--- ScanAllFilterNil ---
2021/03/11 08:41:49 distinct = false
2021/03/11 08:41:50 Using n1ql client
2021/03/11 08:41:50 Expected and Actual scan responses are the same
2021/03/11 08:41:50 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:41:50 distinct = false
2021/03/11 08:41:50 Using n1ql client
2021/03/11 08:41:50 Expected and Actual scan responses are the same
2021/03/11 08:41:50 
--- SingleSeek ---
2021/03/11 08:41:50 distinct = false
2021/03/11 08:41:51 Using n1ql client
2021/03/11 08:41:51 Expected and Actual scan responses are the same
2021/03/11 08:41:51 
--- MultipleSeek ---
2021/03/11 08:41:51 distinct = false
2021/03/11 08:41:51 Using n1ql client
2021/03/11 08:41:51 Expected and Actual scan responses are the same
2021/03/11 08:41:51 
--- SimpleRange ---
2021/03/11 08:41:51 distinct = false
2021/03/11 08:41:52 Using n1ql client
2021/03/11 08:41:52 Expected and Actual scan responses are the same
2021/03/11 08:41:52 
--- NonOverlappingRanges ---
2021/03/11 08:41:52 distinct = false
2021/03/11 08:41:52 Using n1ql client
2021/03/11 08:41:52 Expected and Actual scan responses are the same
2021/03/11 08:41:52 
--- OverlappingRanges ---
2021/03/11 08:41:52 distinct = false
2021/03/11 08:41:53 Using n1ql client
2021/03/11 08:41:53 Expected and Actual scan responses are the same
2021/03/11 08:41:53 
--- NonOverlappingFilters ---
2021/03/11 08:41:53 distinct = false
2021/03/11 08:41:53 Using n1ql client
2021/03/11 08:41:53 Expected and Actual scan responses are the same
2021/03/11 08:41:53 
--- OverlappingFilters ---
2021/03/11 08:41:53 distinct = false
2021/03/11 08:41:54 Using n1ql client
2021/03/11 08:41:54 Expected and Actual scan responses are the same
2021/03/11 08:41:54 
--- BoundaryFilters ---
2021/03/11 08:41:54 distinct = false
2021/03/11 08:41:54 Using n1ql client
2021/03/11 08:41:54 Expected and Actual scan responses are the same
2021/03/11 08:41:54 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:41:54 distinct = false
2021/03/11 08:41:55 Using n1ql client
2021/03/11 08:41:55 Expected and Actual scan responses are the same
2021/03/11 08:41:55 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:41:55 distinct = false
2021/03/11 08:41:56 Using n1ql client
2021/03/11 08:41:56 Expected and Actual scan responses are the same
2021/03/11 08:41:56 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:41:56 distinct = false
2021/03/11 08:41:56 Using n1ql client
2021/03/11 08:41:56 Expected and Actual scan responses are the same
2021/03/11 08:41:56 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:41:56 distinct = false
2021/03/11 08:41:57 Using n1ql client
2021/03/11 08:41:57 Expected and Actual scan responses are the same
2021/03/11 08:41:57 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:41:57 distinct = false
2021/03/11 08:41:57 Using n1ql client
2021/03/11 08:41:57 Expected and Actual scan responses are the same
2021/03/11 08:41:57 
--- FiltersWithUnbounded ---
2021/03/11 08:41:57 distinct = false
2021/03/11 08:41:58 Using n1ql client
2021/03/11 08:41:58 Expected and Actual scan responses are the same
2021/03/11 08:41:58 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:41:58 distinct = false
2021/03/11 08:41:58 Using n1ql client
2021/03/11 08:41:58 Expected and Actual scan responses are the same
2021/03/11 08:41:58 

--------- Simple Index with 1 field ---------
2021/03/11 08:41:58 
--- SingleIndexSimpleRange ---
2021/03/11 08:41:58 distinct = false
2021/03/11 08:41:59 Using n1ql client
2021/03/11 08:41:59 Expected and Actual scan responses are the same
2021/03/11 08:41:59 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:41:59 distinct = false
2021/03/11 08:41:59 Using n1ql client
2021/03/11 08:41:59 Expected and Actual scan responses are the same
2021/03/11 08:41:59 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:41:59 distinct = false
2021/03/11 08:42:00 Using n1ql client
2021/03/11 08:42:00 Expected and Actual scan responses are the same
2021/03/11 08:42:00 

--------- Composite Index with 3 fields ---------
2021/03/11 08:42:00 
--- ScanAllNoFilter ---
2021/03/11 08:42:00 distinct = false
2021/03/11 08:42:00 Using n1ql client
2021/03/11 08:42:00 Expected and Actual scan responses are the same
2021/03/11 08:42:00 
--- ScanAllFilterNil ---
2021/03/11 08:42:00 distinct = false
2021/03/11 08:42:01 Using n1ql client
2021/03/11 08:42:01 Expected and Actual scan responses are the same
2021/03/11 08:42:01 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:01 distinct = false
2021/03/11 08:42:01 Using n1ql client
2021/03/11 08:42:02 Expected and Actual scan responses are the same
2021/03/11 08:42:02 
--- 3FieldsSingleSeek ---
2021/03/11 08:42:02 distinct = false
2021/03/11 08:42:02 Using n1ql client
2021/03/11 08:42:02 Expected and Actual scan responses are the same
2021/03/11 08:42:02 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:42:02 distinct = false
2021/03/11 08:42:03 Using n1ql client
2021/03/11 08:42:03 Expected and Actual scan responses are the same
2021/03/11 08:42:03 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:42:03 distinct = false
2021/03/11 08:42:03 Using n1ql client
2021/03/11 08:42:03 Expected and Actual scan responses are the same
2021/03/11 08:42:03 

--------- New scenarios ---------
2021/03/11 08:42:03 
--- CompIndexHighUnbounded1 ---
2021/03/11 08:42:03 
--- Multi Scan 0 ---
2021/03/11 08:42:03 distinct = false
2021/03/11 08:42:04 Using n1ql client
2021/03/11 08:42:04 Expected and Actual scan responses are the same
2021/03/11 08:42:04 
--- Multi Scan 1 ---
2021/03/11 08:42:04 distinct = false
2021/03/11 08:42:04 Using n1ql client
2021/03/11 08:42:04 Expected and Actual scan responses are the same
2021/03/11 08:42:04 
--- Multi Scan 2 ---
2021/03/11 08:42:04 distinct = false
2021/03/11 08:42:05 Using n1ql client
2021/03/11 08:42:05 Expected and Actual scan responses are the same
2021/03/11 08:42:05 
--- CompIndexHighUnbounded2 ---
2021/03/11 08:42:05 
--- Multi Scan 0 ---
2021/03/11 08:42:05 distinct = false
2021/03/11 08:42:05 Using n1ql client
2021/03/11 08:42:05 Expected and Actual scan responses are the same
2021/03/11 08:42:05 
--- Multi Scan 1 ---
2021/03/11 08:42:05 distinct = false
2021/03/11 08:42:06 Using n1ql client
2021/03/11 08:42:06 Expected and Actual scan responses are the same
2021/03/11 08:42:06 
--- Multi Scan 2 ---
2021/03/11 08:42:06 distinct = false
2021/03/11 08:42:06 Using n1ql client
2021/03/11 08:42:06 Expected and Actual scan responses are the same
2021/03/11 08:42:06 
--- CompIndexHighUnbounded3 ---
2021/03/11 08:42:06 
--- Multi Scan 0 ---
2021/03/11 08:42:06 distinct = false
2021/03/11 08:42:07 Using n1ql client
2021/03/11 08:42:07 Expected and Actual scan responses are the same
2021/03/11 08:42:07 
--- CompIndexHighUnbounded4 ---
2021/03/11 08:42:07 
--- Multi Scan 0 ---
2021/03/11 08:42:07 distinct = false
2021/03/11 08:42:07 Using n1ql client
2021/03/11 08:42:07 Expected and Actual scan responses are the same
2021/03/11 08:42:07 
--- CompIndexHighUnbounded5 ---
2021/03/11 08:42:07 
--- Multi Scan 0 ---
2021/03/11 08:42:07 distinct = false
2021/03/11 08:42:08 Using n1ql client
2021/03/11 08:42:08 Expected and Actual scan responses are the same
2021/03/11 08:42:08 
--- SeekBoundaries ---
2021/03/11 08:42:08 
--- Multi Scan 0 ---
2021/03/11 08:42:08 distinct = false
2021/03/11 08:42:08 Using n1ql client
2021/03/11 08:42:08 Expected and Actual scan responses are the same
2021/03/11 08:42:08 
--- Multi Scan 1 ---
2021/03/11 08:42:08 distinct = false
2021/03/11 08:42:09 Using n1ql client
2021/03/11 08:42:09 Expected and Actual scan responses are the same
2021/03/11 08:42:09 
--- Multi Scan 2 ---
2021/03/11 08:42:09 distinct = false
2021/03/11 08:42:09 Using n1ql client
2021/03/11 08:42:09 Expected and Actual scan responses are the same
2021/03/11 08:42:09 
--- Multi Scan 3 ---
2021/03/11 08:42:09 distinct = false
2021/03/11 08:42:10 Using n1ql client
2021/03/11 08:42:10 Expected and Actual scan responses are the same
2021/03/11 08:42:10 
--- Multi Scan 4 ---
2021/03/11 08:42:10 distinct = false
2021/03/11 08:42:10 Using n1ql client
2021/03/11 08:42:10 Expected and Actual scan responses are the same
2021/03/11 08:42:10 
--- Multi Scan 5 ---
2021/03/11 08:42:10 distinct = false
2021/03/11 08:42:11 Using n1ql client
2021/03/11 08:42:11 Expected and Actual scan responses are the same
2021/03/11 08:42:11 
--- Multi Scan 6 ---
2021/03/11 08:42:11 distinct = false
2021/03/11 08:42:11 Using n1ql client
2021/03/11 08:42:11 Expected and Actual scan responses are the same
2021/03/11 08:42:11 
--- Multi Scan 7 ---
2021/03/11 08:42:11 distinct = false
2021/03/11 08:42:12 Using n1ql client
2021/03/11 08:42:12 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDescScenarios (23.17s)
=== RUN   TestMultiScanDescCount
2021/03/11 08:42:12 In TestMultiScanDescCount()
2021/03/11 08:42:12 

--------- Composite Index with 2 fields ---------
2021/03/11 08:42:12 
--- ScanAllNoFilter ---
2021/03/11 08:42:12 distinct = false
2021/03/11 08:42:12 Using n1ql client
2021/03/11 08:42:12 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:12 
--- ScanAllFilterNil ---
2021/03/11 08:42:12 distinct = false
2021/03/11 08:42:13 Using n1ql client
2021/03/11 08:42:13 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:13 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:13 distinct = false
2021/03/11 08:42:13 Using n1ql client
2021/03/11 08:42:13 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:13 
--- SingleSeek ---
2021/03/11 08:42:13 distinct = false
2021/03/11 08:42:14 Using n1ql client
2021/03/11 08:42:14 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:42:14 
--- MultipleSeek ---
2021/03/11 08:42:14 distinct = false
2021/03/11 08:42:14 Using n1ql client
2021/03/11 08:42:14 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:42:14 
--- SimpleRange ---
2021/03/11 08:42:14 distinct = false
2021/03/11 08:42:15 Using n1ql client
2021/03/11 08:42:15 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2021/03/11 08:42:15 
--- NonOverlappingRanges ---
2021/03/11 08:42:15 distinct = false
2021/03/11 08:42:15 Using n1ql client
2021/03/11 08:42:15 MultiScanCount = 4283 ExpectedMultiScanCount = 4283
2021/03/11 08:42:15 
--- OverlappingRanges ---
2021/03/11 08:42:15 distinct = false
2021/03/11 08:42:16 Using n1ql client
2021/03/11 08:42:16 MultiScanCount = 5756 ExpectedMultiScanCount = 5756
2021/03/11 08:42:16 
--- NonOverlappingFilters ---
2021/03/11 08:42:16 distinct = false
2021/03/11 08:42:16 Using n1ql client
2021/03/11 08:42:16 MultiScanCount = 337 ExpectedMultiScanCount = 337
2021/03/11 08:42:16 
--- OverlappingFilters ---
2021/03/11 08:42:16 distinct = false
2021/03/11 08:42:16 Using n1ql client
2021/03/11 08:42:16 MultiScanCount = 2559 ExpectedMultiScanCount = 2559
2021/03/11 08:42:16 
--- BoundaryFilters ---
2021/03/11 08:42:16 distinct = false
2021/03/11 08:42:17 Using n1ql client
2021/03/11 08:42:17 MultiScanCount = 499 ExpectedMultiScanCount = 499
2021/03/11 08:42:17 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:42:17 distinct = false
2021/03/11 08:42:17 Using n1ql client
2021/03/11 08:42:17 MultiScanCount = 256 ExpectedMultiScanCount = 256
2021/03/11 08:42:17 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:42:17 distinct = false
2021/03/11 08:42:18 Using n1ql client
2021/03/11 08:42:18 MultiScanCount = 255 ExpectedMultiScanCount = 255
2021/03/11 08:42:18 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:42:18 distinct = false
2021/03/11 08:42:18 Using n1ql client
2021/03/11 08:42:18 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2021/03/11 08:42:18 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:42:18 distinct = false
2021/03/11 08:42:19 Using n1ql client
2021/03/11 08:42:19 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2021/03/11 08:42:19 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:42:19 distinct = false
2021/03/11 08:42:19 Using n1ql client
2021/03/11 08:42:19 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:19 
--- FiltersWithUnbounded ---
2021/03/11 08:42:19 distinct = false
2021/03/11 08:42:20 Using n1ql client
2021/03/11 08:42:20 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2021/03/11 08:42:20 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:42:20 distinct = false
2021/03/11 08:42:20 Using n1ql client
2021/03/11 08:42:20 MultiScanCount = 418 ExpectedMultiScanCount = 418
2021/03/11 08:42:20 

--------- Simple Index with 1 field ---------
2021/03/11 08:42:20 
--- SingleIndexSimpleRange ---
2021/03/11 08:42:20 distinct = false
2021/03/11 08:42:21 Using n1ql client
2021/03/11 08:42:21 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2021/03/11 08:42:21 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:42:21 distinct = false
2021/03/11 08:42:21 Using n1ql client
2021/03/11 08:42:21 MultiScanCount = 7140 ExpectedMultiScanCount = 7140
2021/03/11 08:42:21 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:42:21 distinct = false
2021/03/11 08:42:21 Using n1ql client
2021/03/11 08:42:21 MultiScanCount = 8701 ExpectedMultiScanCount = 8701
2021/03/11 08:42:21 

--------- Composite Index with 3 fields ---------
2021/03/11 08:42:21 
--- ScanAllNoFilter ---
2021/03/11 08:42:21 distinct = false
2021/03/11 08:42:22 Using n1ql client
2021/03/11 08:42:22 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:22 
--- ScanAllFilterNil ---
2021/03/11 08:42:22 distinct = false
2021/03/11 08:42:23 Using n1ql client
2021/03/11 08:42:23 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:23 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:23 distinct = false
2021/03/11 08:42:23 Using n1ql client
2021/03/11 08:42:23 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:23 
--- 3FieldsSingleSeek ---
2021/03/11 08:42:23 distinct = false
2021/03/11 08:42:24 Using n1ql client
2021/03/11 08:42:24 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:42:24 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:42:24 distinct = false
2021/03/11 08:42:24 Using n1ql client
2021/03/11 08:42:24 MultiScanCount = 3 ExpectedMultiScanCount = 3
2021/03/11 08:42:24 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:42:24 distinct = false
2021/03/11 08:42:24 Using n1ql client
2021/03/11 08:42:24 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:42:24 

--------- New scenarios ---------
2021/03/11 08:42:24 
--- CompIndexHighUnbounded1 ---
2021/03/11 08:42:24 
--- Multi Scan 0 ---
2021/03/11 08:42:24 distinct = false
2021/03/11 08:42:25 Using n1ql client
2021/03/11 08:42:25 Using n1ql client
2021/03/11 08:42:25 len(scanResults) = 8 MultiScanCount = 8
2021/03/11 08:42:25 Expected and Actual scan responses are the same
2021/03/11 08:42:25 
--- Multi Scan 1 ---
2021/03/11 08:42:25 distinct = false
2021/03/11 08:42:25 Using n1ql client
2021/03/11 08:42:25 Using n1ql client
2021/03/11 08:42:25 len(scanResults) = 0 MultiScanCount = 0
2021/03/11 08:42:25 Expected and Actual scan responses are the same
2021/03/11 08:42:25 
--- Multi Scan 2 ---
2021/03/11 08:42:25 distinct = false
2021/03/11 08:42:26 Using n1ql client
2021/03/11 08:42:26 Using n1ql client
2021/03/11 08:42:26 len(scanResults) = 9 MultiScanCount = 9
2021/03/11 08:42:26 Expected and Actual scan responses are the same
2021/03/11 08:42:26 
--- CompIndexHighUnbounded2 ---
2021/03/11 08:42:26 
--- Multi Scan 0 ---
2021/03/11 08:42:26 distinct = false
2021/03/11 08:42:26 Using n1ql client
2021/03/11 08:42:26 Using n1ql client
2021/03/11 08:42:26 len(scanResults) = 4138 MultiScanCount = 4138
2021/03/11 08:42:26 Expected and Actual scan responses are the same
2021/03/11 08:42:26 
--- Multi Scan 1 ---
2021/03/11 08:42:26 distinct = false
2021/03/11 08:42:27 Using n1ql client
2021/03/11 08:42:27 Using n1ql client
2021/03/11 08:42:27 len(scanResults) = 2746 MultiScanCount = 2746
2021/03/11 08:42:27 Expected and Actual scan responses are the same
2021/03/11 08:42:27 
--- Multi Scan 2 ---
2021/03/11 08:42:27 distinct = false
2021/03/11 08:42:27 Using n1ql client
2021/03/11 08:42:27 Using n1ql client
2021/03/11 08:42:27 len(scanResults) = 4691 MultiScanCount = 4691
2021/03/11 08:42:27 Expected and Actual scan responses are the same
2021/03/11 08:42:27 
--- CompIndexHighUnbounded3 ---
2021/03/11 08:42:27 
--- Multi Scan 0 ---
2021/03/11 08:42:27 distinct = false
2021/03/11 08:42:28 Using n1ql client
2021/03/11 08:42:28 Using n1ql client
2021/03/11 08:42:28 len(scanResults) = 1329 MultiScanCount = 1329
2021/03/11 08:42:28 Expected and Actual scan responses are the same
2021/03/11 08:42:28 
--- CompIndexHighUnbounded4 ---
2021/03/11 08:42:28 
--- Multi Scan 0 ---
2021/03/11 08:42:28 distinct = false
2021/03/11 08:42:28 Using n1ql client
2021/03/11 08:42:28 Using n1ql client
2021/03/11 08:42:28 len(scanResults) = 5349 MultiScanCount = 5349
2021/03/11 08:42:28 Expected and Actual scan responses are the same
2021/03/11 08:42:28 
--- CompIndexHighUnbounded5 ---
2021/03/11 08:42:28 
--- Multi Scan 0 ---
2021/03/11 08:42:28 distinct = false
2021/03/11 08:42:29 Using n1ql client
2021/03/11 08:42:29 Using n1ql client
2021/03/11 08:42:29 len(scanResults) = 8210 MultiScanCount = 8210
2021/03/11 08:42:29 Expected and Actual scan responses are the same
2021/03/11 08:42:29 
--- SeekBoundaries ---
2021/03/11 08:42:29 
--- Multi Scan 0 ---
2021/03/11 08:42:29 distinct = false
2021/03/11 08:42:29 Using n1ql client
2021/03/11 08:42:29 Using n1ql client
2021/03/11 08:42:29 len(scanResults) = 175 MultiScanCount = 175
2021/03/11 08:42:29 Expected and Actual scan responses are the same
2021/03/11 08:42:29 
--- Multi Scan 1 ---
2021/03/11 08:42:29 distinct = false
2021/03/11 08:42:30 Using n1ql client
2021/03/11 08:42:30 Using n1ql client
2021/03/11 08:42:30 len(scanResults) = 1 MultiScanCount = 1
2021/03/11 08:42:30 Expected and Actual scan responses are the same
2021/03/11 08:42:30 
--- Multi Scan 2 ---
2021/03/11 08:42:30 distinct = false
2021/03/11 08:42:30 Using n1ql client
2021/03/11 08:42:30 Using n1ql client
2021/03/11 08:42:30 len(scanResults) = 555 MultiScanCount = 555
2021/03/11 08:42:30 Expected and Actual scan responses are the same
2021/03/11 08:42:30 
--- Multi Scan 3 ---
2021/03/11 08:42:30 distinct = false
2021/03/11 08:42:31 Using n1ql client
2021/03/11 08:42:31 Using n1ql client
2021/03/11 08:42:31 len(scanResults) = 872 MultiScanCount = 872
2021/03/11 08:42:31 Expected and Actual scan responses are the same
2021/03/11 08:42:31 
--- Multi Scan 4 ---
2021/03/11 08:42:31 distinct = false
2021/03/11 08:42:31 Using n1ql client
2021/03/11 08:42:31 Using n1ql client
2021/03/11 08:42:31 len(scanResults) = 287 MultiScanCount = 287
2021/03/11 08:42:31 Expected and Actual scan responses are the same
2021/03/11 08:42:31 
--- Multi Scan 5 ---
2021/03/11 08:42:31 distinct = false
2021/03/11 08:42:32 Using n1ql client
2021/03/11 08:42:32 Using n1ql client
2021/03/11 08:42:32 len(scanResults) = 5254 MultiScanCount = 5254
2021/03/11 08:42:32 Expected and Actual scan responses are the same
2021/03/11 08:42:32 
--- Multi Scan 6 ---
2021/03/11 08:42:32 distinct = false
2021/03/11 08:42:32 Using n1ql client
2021/03/11 08:42:32 Using n1ql client
2021/03/11 08:42:32 len(scanResults) = 5566 MultiScanCount = 5566
2021/03/11 08:42:32 Expected and Actual scan responses are the same
2021/03/11 08:42:32 
--- Multi Scan 7 ---
2021/03/11 08:42:32 distinct = false
2021/03/11 08:42:33 Using n1ql client
2021/03/11 08:42:33 Using n1ql client
2021/03/11 08:42:33 len(scanResults) = 8 MultiScanCount = 8
2021/03/11 08:42:33 Expected and Actual scan responses are the same
2021/03/11 08:42:33 

--------- With DISTINCT True ---------
2021/03/11 08:42:33 
--- ScanAllNoFilter ---
2021/03/11 08:42:33 distinct = true
2021/03/11 08:42:33 Using n1ql client
2021/03/11 08:42:33 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:33 
--- ScanAllFilterNil ---
2021/03/11 08:42:33 distinct = true
2021/03/11 08:42:34 Using n1ql client
2021/03/11 08:42:34 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:34 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:34 distinct = true
2021/03/11 08:42:34 Using n1ql client
2021/03/11 08:42:34 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:34 
--- SingleSeek ---
2021/03/11 08:42:34 distinct = true
2021/03/11 08:42:34 Using n1ql client
2021/03/11 08:42:34 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:42:34 
--- MultipleSeek ---
2021/03/11 08:42:34 distinct = true
2021/03/11 08:42:35 Using n1ql client
2021/03/11 08:42:35 MultiScanCount = 2 ExpectedMultiScanCount = 2
2021/03/11 08:42:35 
--- SimpleRange ---
2021/03/11 08:42:35 distinct = true
2021/03/11 08:42:35 Using n1ql client
2021/03/11 08:42:35 MultiScanCount = 227 ExpectedMultiScanCount = 227
2021/03/11 08:42:35 
--- NonOverlappingRanges ---
2021/03/11 08:42:35 distinct = true
2021/03/11 08:42:36 Using n1ql client
2021/03/11 08:42:36 MultiScanCount = 428 ExpectedMultiScanCount = 428
2021/03/11 08:42:36 
--- NonOverlappingFilters2 ---
2021/03/11 08:42:36 distinct = true
2021/03/11 08:42:36 Using n1ql client
2021/03/11 08:42:36 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:42:36 
--- OverlappingRanges ---
2021/03/11 08:42:36 distinct = true
2021/03/11 08:42:37 Using n1ql client
2021/03/11 08:42:37 MultiScanCount = 575 ExpectedMultiScanCount = 575
2021/03/11 08:42:37 
--- NonOverlappingFilters ---
2021/03/11 08:42:37 distinct = true
2021/03/11 08:42:37 Using n1ql client
2021/03/11 08:42:37 MultiScanCount = 186 ExpectedMultiScanCount = 186
2021/03/11 08:42:37 
--- OverlappingFilters ---
2021/03/11 08:42:37 distinct = true
2021/03/11 08:42:38 Using n1ql client
2021/03/11 08:42:38 MultiScanCount = 543 ExpectedMultiScanCount = 543
2021/03/11 08:42:38 
--- BoundaryFilters ---
2021/03/11 08:42:38 distinct = true
2021/03/11 08:42:38 Using n1ql client
2021/03/11 08:42:38 MultiScanCount = 172 ExpectedMultiScanCount = 172
2021/03/11 08:42:38 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:42:38 distinct = true
2021/03/11 08:42:39 Using n1ql client
2021/03/11 08:42:39 MultiScanCount = 135 ExpectedMultiScanCount = 135
2021/03/11 08:42:39 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:42:39 distinct = true
2021/03/11 08:42:39 Using n1ql client
2021/03/11 08:42:39 MultiScanCount = 134 ExpectedMultiScanCount = 134
2021/03/11 08:42:39 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:42:39 distinct = false
2021/03/11 08:42:40 Using n1ql client
2021/03/11 08:42:40 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2021/03/11 08:42:40 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:42:40 distinct = false
2021/03/11 08:42:40 Using n1ql client
2021/03/11 08:42:40 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2021/03/11 08:42:40 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:42:40 distinct = false
2021/03/11 08:42:41 Using n1ql client
2021/03/11 08:42:41 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2021/03/11 08:42:41 
--- FiltersWithUnbounded ---
2021/03/11 08:42:41 distinct = false
2021/03/11 08:42:41 Using n1ql client
2021/03/11 08:42:41 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2021/03/11 08:42:41 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:42:41 distinct = false
2021/03/11 08:42:41 Using n1ql client
2021/03/11 08:42:41 MultiScanCount = 418 ExpectedMultiScanCount = 418
2021/03/11 08:42:41 

--------- Simple Index with 1 field ---------
2021/03/11 08:42:41 
--- SingleIndexSimpleRange ---
2021/03/11 08:42:41 distinct = true
2021/03/11 08:42:42 Using n1ql client
2021/03/11 08:42:42 MultiScanCount = 227 ExpectedMultiScanCount = 227
2021/03/11 08:42:42 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:42:42 distinct = true
2021/03/11 08:42:42 Using n1ql client
2021/03/11 08:42:42 MultiScanCount = 713 ExpectedMultiScanCount = 713
2021/03/11 08:42:42 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:42:42 distinct = true
2021/03/11 08:42:43 Using n1ql client
2021/03/11 08:42:43 MultiScanCount = 869 ExpectedMultiScanCount = 869
2021/03/11 08:42:43 

--------- Composite Index with 3 fields ---------
2021/03/11 08:42:43 
--- ScanAllNoFilter ---
2021/03/11 08:42:43 distinct = true
2021/03/11 08:42:43 Using n1ql client
2021/03/11 08:42:43 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:43 
--- ScanAllFilterNil ---
2021/03/11 08:42:43 distinct = true
2021/03/11 08:42:44 Using n1ql client
2021/03/11 08:42:44 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:44 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:44 distinct = true
2021/03/11 08:42:44 Using n1ql client
2021/03/11 08:42:44 MultiScanCount = 999 ExpectedMultiScanCount = 999
2021/03/11 08:42:44 
--- 3FieldsSingleSeek ---
2021/03/11 08:42:44 distinct = true
2021/03/11 08:42:45 Using n1ql client
2021/03/11 08:42:45 MultiScanCount = 1 ExpectedMultiScanCount = 1
2021/03/11 08:42:45 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:42:45 distinct = true
2021/03/11 08:42:45 Using n1ql client
2021/03/11 08:42:45 MultiScanCount = 3 ExpectedMultiScanCount = 3
2021/03/11 08:42:45 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:42:45 distinct = true
2021/03/11 08:42:46 Using n1ql client
2021/03/11 08:42:46 MultiScanCount = 2 ExpectedMultiScanCount = 2
--- PASS: TestMultiScanDescCount (33.96s)
=== RUN   TestMultiScanDescOffset
2021/03/11 08:42:46 In SkipTestMultiScanDescOffset()
2021/03/11 08:42:46 

--------- Composite Index with 2 fields ---------
2021/03/11 08:42:46 
--- ScanAllNoFilter ---
2021/03/11 08:42:46 distinct = false
2021/03/11 08:42:46 Using n1ql client
2021/03/11 08:42:46 
--- ScanAllFilterNil ---
2021/03/11 08:42:46 distinct = false
2021/03/11 08:42:47 Using n1ql client
2021/03/11 08:42:47 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:47 distinct = false
2021/03/11 08:42:47 Using n1ql client
2021/03/11 08:42:47 
--- SingleSeek ---
2021/03/11 08:42:47 distinct = false
2021/03/11 08:42:48 Using n1ql client
2021/03/11 08:42:48 
--- MultipleSeek ---
2021/03/11 08:42:48 distinct = false
2021/03/11 08:42:48 Using n1ql client
2021/03/11 08:42:48 
--- SimpleRange ---
2021/03/11 08:42:48 distinct = false
2021/03/11 08:42:48 Using n1ql client
2021/03/11 08:42:48 
--- NonOverlappingRanges ---
2021/03/11 08:42:48 distinct = false
2021/03/11 08:42:49 Using n1ql client
2021/03/11 08:42:49 
--- OverlappingRanges ---
2021/03/11 08:42:49 distinct = false
2021/03/11 08:42:49 Using n1ql client
2021/03/11 08:42:49 
--- NonOverlappingFilters ---
2021/03/11 08:42:49 distinct = false
2021/03/11 08:42:50 Using n1ql client
2021/03/11 08:42:50 
--- OverlappingFilters ---
2021/03/11 08:42:50 distinct = false
2021/03/11 08:42:50 Using n1ql client
2021/03/11 08:42:50 
--- BoundaryFilters ---
2021/03/11 08:42:50 distinct = false
2021/03/11 08:42:51 Using n1ql client
2021/03/11 08:42:51 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:42:51 distinct = false
2021/03/11 08:42:51 Using n1ql client
2021/03/11 08:42:51 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:42:51 distinct = false
2021/03/11 08:42:52 Using n1ql client
2021/03/11 08:42:52 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:42:52 distinct = false
2021/03/11 08:42:52 Using n1ql client
2021/03/11 08:42:52 Expected and Actual scan responses are the same
2021/03/11 08:42:52 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:42:52 distinct = false
2021/03/11 08:42:53 Using n1ql client
2021/03/11 08:42:53 Expected and Actual scan responses are the same
2021/03/11 08:42:53 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:42:53 distinct = false
2021/03/11 08:42:53 Using n1ql client
2021/03/11 08:42:53 Expected and Actual scan responses are the same
2021/03/11 08:42:53 
--- FiltersWithUnbounded ---
2021/03/11 08:42:53 distinct = false
2021/03/11 08:42:54 Using n1ql client
2021/03/11 08:42:54 Expected and Actual scan responses are the same
2021/03/11 08:42:54 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:42:54 distinct = false
2021/03/11 08:42:54 Using n1ql client
2021/03/11 08:42:54 Expected and Actual scan responses are the same
2021/03/11 08:42:54 

--------- Simple Index with 1 field ---------
2021/03/11 08:42:54 
--- SingleIndexSimpleRange ---
2021/03/11 08:42:54 distinct = false
2021/03/11 08:42:55 Using n1ql client
2021/03/11 08:42:55 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:42:55 distinct = false
2021/03/11 08:42:55 Using n1ql client
2021/03/11 08:42:55 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:42:55 distinct = false
2021/03/11 08:42:55 Using n1ql client
2021/03/11 08:42:55 

--------- Composite Index with 3 fields ---------
2021/03/11 08:42:55 
--- ScanAllNoFilter ---
2021/03/11 08:42:55 distinct = false
2021/03/11 08:42:56 Using n1ql client
2021/03/11 08:42:56 
--- ScanAllFilterNil ---
2021/03/11 08:42:56 distinct = false
2021/03/11 08:42:56 Using n1ql client
2021/03/11 08:42:56 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:42:56 distinct = false
2021/03/11 08:42:57 Using n1ql client
2021/03/11 08:42:57 
--- 3FieldsSingleSeek ---
2021/03/11 08:42:57 distinct = false
2021/03/11 08:42:57 Using n1ql client
2021/03/11 08:42:57 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:42:57 distinct = false
2021/03/11 08:42:58 Using n1ql client
2021/03/11 08:42:58 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:42:58 distinct = false
2021/03/11 08:42:58 Using n1ql client
--- PASS: TestMultiScanDescOffset (12.66s)
=== RUN   TestMultiScanDescDistinct
2021/03/11 08:42:58 In SkipTestMultiScanDescDistinct()
2021/03/11 08:42:58 

--------- Composite Index with 2 fields ---------
2021/03/11 08:42:58 
--- ScanAllNoFilter ---
2021/03/11 08:42:58 distinct = true
2021/03/11 08:42:59 Using n1ql client
2021/03/11 08:42:59 Expected and Actual scan responses are the same
2021/03/11 08:42:59 
--- ScanAllFilterNil ---
2021/03/11 08:42:59 distinct = true
2021/03/11 08:42:59 Using n1ql client
2021/03/11 08:43:00 Expected and Actual scan responses are the same
2021/03/11 08:43:00 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:43:00 distinct = true
2021/03/11 08:43:00 Using n1ql client
2021/03/11 08:43:00 Expected and Actual scan responses are the same
2021/03/11 08:43:00 
--- SingleSeek ---
2021/03/11 08:43:00 distinct = true
2021/03/11 08:43:00 Using n1ql client
2021/03/11 08:43:00 Expected and Actual scan responses are the same
2021/03/11 08:43:00 
--- MultipleSeek ---
2021/03/11 08:43:00 distinct = true
2021/03/11 08:43:01 Using n1ql client
2021/03/11 08:43:01 Expected and Actual scan responses are the same
2021/03/11 08:43:01 
--- SimpleRange ---
2021/03/11 08:43:01 distinct = true
2021/03/11 08:43:01 Using n1ql client
2021/03/11 08:43:01 Expected and Actual scan responses are the same
2021/03/11 08:43:01 
--- NonOverlappingRanges ---
2021/03/11 08:43:01 distinct = true
2021/03/11 08:43:02 Using n1ql client
2021/03/11 08:43:02 Expected and Actual scan responses are the same
2021/03/11 08:43:02 
--- OverlappingRanges ---
2021/03/11 08:43:02 distinct = true
2021/03/11 08:43:02 Using n1ql client
2021/03/11 08:43:02 Expected and Actual scan responses are the same
2021/03/11 08:43:02 
--- NonOverlappingFilters ---
2021/03/11 08:43:02 distinct = true
2021/03/11 08:43:03 Using n1ql client
2021/03/11 08:43:03 Expected and Actual scan responses are the same
2021/03/11 08:43:03 
--- OverlappingFilters ---
2021/03/11 08:43:03 distinct = true
2021/03/11 08:43:03 Using n1ql client
2021/03/11 08:43:03 Expected and Actual scan responses are the same
2021/03/11 08:43:03 
--- BoundaryFilters ---
2021/03/11 08:43:03 distinct = true
2021/03/11 08:43:04 Using n1ql client
2021/03/11 08:43:04 Expected and Actual scan responses are the same
2021/03/11 08:43:04 
--- SeekAndFilters_NonOverlapping ---
2021/03/11 08:43:04 distinct = true
2021/03/11 08:43:04 Using n1ql client
2021/03/11 08:43:04 Expected and Actual scan responses are the same
2021/03/11 08:43:04 
--- SeekAndFilters_Overlapping ---
2021/03/11 08:43:04 distinct = true
2021/03/11 08:43:05 Using n1ql client
2021/03/11 08:43:05 Expected and Actual scan responses are the same
2021/03/11 08:43:05 
--- SimpleRangeLowUnbounded ---
2021/03/11 08:43:05 distinct = false
2021/03/11 08:43:05 Using n1ql client
2021/03/11 08:43:05 Expected and Actual scan responses are the same
2021/03/11 08:43:05 
--- SimpleRangeHighUnbounded ---
2021/03/11 08:43:05 distinct = false
2021/03/11 08:43:06 Using n1ql client
2021/03/11 08:43:06 Expected and Actual scan responses are the same
2021/03/11 08:43:06 
--- SimpleRangeMultipleUnbounded ---
2021/03/11 08:43:06 distinct = false
2021/03/11 08:43:06 Using n1ql client
2021/03/11 08:43:06 Expected and Actual scan responses are the same
2021/03/11 08:43:06 
--- FiltersWithUnbounded ---
2021/03/11 08:43:06 distinct = false
2021/03/11 08:43:07 Using n1ql client
2021/03/11 08:43:07 Expected and Actual scan responses are the same
2021/03/11 08:43:07 
--- FiltersLowGreaterThanHigh ---
2021/03/11 08:43:07 distinct = false
2021/03/11 08:43:07 Using n1ql client
2021/03/11 08:43:07 Expected and Actual scan responses are the same
2021/03/11 08:43:07 

--------- Simple Index with 1 field ---------
2021/03/11 08:43:07 
--- SingleIndexSimpleRange ---
2021/03/11 08:43:07 distinct = true
2021/03/11 08:43:07 Using n1ql client
2021/03/11 08:43:07 Expected and Actual scan responses are the same
2021/03/11 08:43:07 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2021/03/11 08:43:07 distinct = true
2021/03/11 08:43:08 Using n1ql client
2021/03/11 08:43:08 Expected and Actual scan responses are the same
2021/03/11 08:43:08 
--- SingleIndex_SimpleRanges_Overlapping ---
2021/03/11 08:43:08 distinct = true
2021/03/11 08:43:08 Using n1ql client
2021/03/11 08:43:08 Expected and Actual scan responses are the same
2021/03/11 08:43:08 

--------- Composite Index with 3 fields ---------
2021/03/11 08:43:08 
--- ScanAllNoFilter ---
2021/03/11 08:43:08 distinct = true
2021/03/11 08:43:09 Using n1ql client
2021/03/11 08:43:09 Expected and Actual scan responses are the same
2021/03/11 08:43:09 
--- ScanAllFilterNil ---
2021/03/11 08:43:09 distinct = true
2021/03/11 08:43:10 Using n1ql client
2021/03/11 08:43:10 Expected and Actual scan responses are the same
2021/03/11 08:43:10 
--- ScanAll_AllFiltersNil ---
2021/03/11 08:43:10 distinct = true
2021/03/11 08:43:10 Using n1ql client
2021/03/11 08:43:10 Expected and Actual scan responses are the same
2021/03/11 08:43:10 
--- 3FieldsSingleSeek ---
2021/03/11 08:43:10 distinct = true
2021/03/11 08:43:11 Using n1ql client
2021/03/11 08:43:11 Expected and Actual scan responses are the same
2021/03/11 08:43:11 
--- 3FieldsMultipleSeeks ---
2021/03/11 08:43:11 distinct = true
2021/03/11 08:43:11 Using n1ql client
2021/03/11 08:43:11 Expected and Actual scan responses are the same
2021/03/11 08:43:11 
--- 3FieldsMultipleSeeks_Identical ---
2021/03/11 08:43:11 distinct = true
2021/03/11 08:43:12 Using n1ql client
2021/03/11 08:43:12 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDescDistinct (13.43s)
=== RUN   TestGroupAggrSetup
2021/03/11 08:43:12 In TestGroupAggrSetup()
2021/03/11 08:43:12 Emptying the default bucket
2021/03/11 08:43:15 Flush Enabled on bucket default, responseBody: 
2021/03/11 08:43:54 Flushed the bucket default, Response body: 
2021/03/11 08:43:54 Dropping the secondary index index_agg
2021/03/11 08:43:54 Populating the default bucket
2021/03/11 08:43:57 Created the secondary index index_agg. Waiting for it to become active
2021/03/11 08:43:57 Index is now active
--- PASS: TestGroupAggrSetup (49.14s)
=== RUN   TestGroupAggrLeading
2021/03/11 08:44:01 In TestGroupAggrLeading()
2021/03/11 08:44:01 Total Scanresults = 7
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 3
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrLeading (0.02s)
=== RUN   TestGroupAggrNonLeading
2021/03/11 08:44:01 In TestGroupAggrNonLeading()
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNonLeading (0.01s)
=== RUN   TestGroupAggrNoGroup
2021/03/11 08:44:01 In TestGroupAggrNoGroup()
2021/03/11 08:44:01 Total Scanresults = 1
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNoGroup (0.01s)
=== RUN   TestGroupAggrMinMax
2021/03/11 08:44:01 In TestGroupAggrMinMax()
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMinMax (0.00s)
=== RUN   TestGroupAggrMinMax2
2021/03/11 08:44:01 In TestGroupAggrMinMax2()
2021/03/11 08:44:01 Total Scanresults = 1
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMinMax2 (0.00s)
=== RUN   TestGroupAggrLeading_N1QLExprs
2021/03/11 08:44:01 In TestGroupAggrLeading_N1QLExprs()
2021/03/11 08:44:01 Total Scanresults = 17
2021/03/11 08:44:01 basicGroupAggrN1QLExprs1: Scan validation passed
2021/03/11 08:44:01 Total Scanresults = 9
2021/03/11 08:44:01 basicGroupAggrN1QLExprs2: Scan validation passed
--- PASS: TestGroupAggrLeading_N1QLExprs (0.14s)
=== RUN   TestGroupAggrLimit
2021/03/11 08:44:01 In TestGroupAggrLimit()
2021/03/11 08:44:01 Total Scanresults = 3
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrLimit (0.00s)
=== RUN   TestGroupAggrOffset
2021/03/11 08:44:01 In TestGroupAggrOffset()
2021/03/11 08:44:01 Total Scanresults = 3
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrOffset (0.00s)
=== RUN   TestGroupAggrCountN
2021/03/11 08:44:01 In TestGroupAggrCountN()
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrCountN (0.00s)
=== RUN   TestGroupAggrNoGroupNoMatch
2021/03/11 08:44:01 In TestGroupAggrNoGroupNoMatch()
2021/03/11 08:44:01 Total Scanresults = 1
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNoGroupNoMatch (0.00s)
=== RUN   TestGroupAggrGroupNoMatch
2021/03/11 08:44:01 In TestGroupAggrGroupNoMatch()
2021/03/11 08:44:01 Total Scanresults = 0
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrGroupNoMatch (0.00s)
=== RUN   TestGroupAggrMultDataTypes
2021/03/11 08:44:01 In TestGroupAggrMultDataTypes()
2021/03/11 08:44:01 Total Scanresults = 8
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMultDataTypes (0.00s)
=== RUN   TestGroupAggrDistinct
2021/03/11 08:44:01 In TestGroupAggrDistinct()
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrDistinct (0.00s)
=== RUN   TestGroupAggrDistinct2
2021/03/11 08:44:01 In TestGroupAggrDistinct2()
2021/03/11 08:44:01 Total Scanresults = 1
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 4
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrDistinct2 (0.01s)
=== RUN   TestGroupAggrNull
2021/03/11 08:44:01 In TestGroupAggrNull()
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNull (0.00s)
=== RUN   TestGroupAggrInt64
2021/03/11 08:44:01 In TestGroupAggrInt64()
2021/03/11 08:44:01 Updating the default bucket
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
2021/03/11 08:44:01 Total Scanresults = 2
2021/03/11 08:44:01 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrInt64 (0.11s)
=== RUN   TestGroupAggr1
2021/03/11 08:44:01 In TestGroupAggr1()
2021/03/11 08:44:01 In DropAllSecondaryIndexes()
2021/03/11 08:44:01 Index found:  index_agg
2021/03/11 08:44:01 Dropped index index_agg
2021/03/11 08:44:01 Index found:  index_company_desc
2021/03/11 08:44:01 Dropped index index_company_desc
2021/03/11 08:44:01 Index found:  index_companyname_desc
2021/03/11 08:44:02 Dropped index index_companyname_desc
2021/03/11 08:44:02 Index found:  index_company_name_age_desc
2021/03/11 08:44:02 Dropped index index_company_name_age_desc
2021/03/11 08:44:02 Index found:  #primary
2021/03/11 08:44:02 Dropped index #primary
2021/03/11 08:44:41 Flushed the bucket default, Response body: 
2021/03/11 08:44:50 Created the secondary index idx_aggrs. Waiting for it to become active
2021/03/11 08:44:50 Index is now active
2021/03/11 08:44:50 Total Scanresults = 641
2021/03/11 08:44:51 Total Scanresults = 753
--- PASS: TestGroupAggr1 (49.68s)
=== RUN   TestGroupAggrArrayIndex
2021/03/11 08:44:51 In TestGroupAggrArrayIndex()
2021/03/11 08:44:55 Created the secondary index ga_arr1. Waiting for it to become active
2021/03/11 08:44:55 Index is now active
2021/03/11 08:44:59 Created the secondary index ga_arr2. Waiting for it to become active
2021/03/11 08:44:59 Index is now active
2021/03/11 08:44:59 Scenario 1
2021/03/11 08:44:59 Total Scanresults = 641
2021/03/11 08:44:59 Scenario 2
2021/03/11 08:44:59 Total Scanresults = 2835
2021/03/11 08:45:00 Scenario 3
2021/03/11 08:45:00 Total Scanresults = 1
2021/03/11 08:45:00 Scenario 4
2021/03/11 08:45:00 Total Scanresults = 991
2021/03/11 08:45:01 Scenario 5
2021/03/11 08:45:01 Total Scanresults = 2835
2021/03/11 08:45:02 Scenario 6
2021/03/11 08:45:02 Total Scanresults = 1
2021/03/11 08:45:02 Scenario 7
2021/03/11 08:45:02 Total Scanresults = 2931
2021/03/11 08:45:06 Scenario 8
2021/03/11 08:45:06 Total Scanresults = 1168
2021/03/11 08:45:07 Scenario 9
2021/03/11 08:45:07 Total Scanresults = 1
2021/03/11 08:45:07 Scenario 10
2021/03/11 08:45:07 Total Scanresults = 641
2021/03/11 08:45:07 Scenario 11
2021/03/11 08:45:07 Total Scanresults = 1168
2021/03/11 08:45:08 Scenario 12
2021/03/11 08:45:08 Total Scanresults = 1
2021/03/11 08:45:08 Scenario 13
2021/03/11 08:45:12 Total Scanresults = 1
2021/03/11 08:45:12 Count of scanResults is 1
2021/03/11 08:45:12 Value: [2 117]
--- PASS: TestGroupAggrArrayIndex (21.45s)
=== RUN   TestGroupAggr_FirstValidAggrOnly
2021/03/11 08:45:12 In TestGroupAggr_FirstValidAggrOnly()
2021/03/11 08:45:12 In DropAllSecondaryIndexes()
2021/03/11 08:45:12 Index found:  test_oneperprimarykey
2021/03/11 08:45:13 Dropped index test_oneperprimarykey
2021/03/11 08:45:13 Index found:  idx_aggrs
2021/03/11 08:45:13 Dropped index idx_aggrs
2021/03/11 08:45:13 Index found:  ga_arr1
2021/03/11 08:45:13 Dropped index ga_arr1
2021/03/11 08:45:13 Index found:  ga_arr2
2021/03/11 08:45:13 Dropped index ga_arr2
2021/03/11 08:45:13 Index found:  #primary
2021/03/11 08:45:13 Dropped index #primary
2021/03/11 08:45:25 Created the secondary index idx_asc_3field. Waiting for it to become active
2021/03/11 08:45:25 Index is now active
2021/03/11 08:45:29 Created the secondary index idx_desc_3field. Waiting for it to become active
2021/03/11 08:45:29 Index is now active
2021/03/11 08:45:29 === MIN no group by ===
2021/03/11 08:45:29 Total Scanresults = 1
2021/03/11 08:45:29 Count of scanResults is 1
2021/03/11 08:45:29 Value: ["ACCEL"]
2021/03/11 08:45:29 === MIN no group by, no row match ===
2021/03/11 08:45:29 Total Scanresults = 1
2021/03/11 08:45:29 Count of scanResults is 1
2021/03/11 08:45:29 Value: [null]
2021/03/11 08:45:29 === MIN with group by ===
2021/03/11 08:45:29 Total Scanresults = 641
2021/03/11 08:45:30 === MIN with group by, no row match ===
2021/03/11 08:45:30 Total Scanresults = 0
2021/03/11 08:45:30 === One Aggr, no group by ===
2021/03/11 08:45:30 Total Scanresults = 1
2021/03/11 08:45:30 Count of scanResults is 1
2021/03/11 08:45:30 Value: ["FANFARE"]
2021/03/11 08:45:30 === One Aggr, no group by, no row match ===
2021/03/11 08:45:30 Total Scanresults = 1
2021/03/11 08:45:30 Count of scanResults is 1
2021/03/11 08:45:30 Value: [null]
2021/03/11 08:45:30 === Multiple Aggr, no group by ===
2021/03/11 08:45:30 Total Scanresults = 1
2021/03/11 08:45:30 Count of scanResults is 1
2021/03/11 08:45:30 Value: ["FANFARE" 15]
2021/03/11 08:45:31 === Multiple Aggr, no group by, no row match ===
2021/03/11 08:45:31 Total Scanresults = 1
2021/03/11 08:45:31 Count of scanResults is 1
2021/03/11 08:45:31 Value: [null null]
2021/03/11 08:45:31 === No Aggr, 1 group by ===
2021/03/11 08:45:31 Total Scanresults = 211
2021/03/11 08:45:31 === Aggr on non-leading key, previous equality filter, no group ===
2021/03/11 08:45:31 Total Scanresults = 1
2021/03/11 08:45:31 Count of scanResults is 1
2021/03/11 08:45:31 Value: [59]
2021/03/11 08:45:31 === Aggr on non-leading key, previous equality filters, no group ===
2021/03/11 08:45:31 Total Scanresults = 1
2021/03/11 08:45:31 Count of scanResults is 1
2021/03/11 08:45:31 Value: [null]
2021/03/11 08:45:31 === Aggr on non-leading key, previous non-equality filters, no group ===
2021/03/11 08:45:31 Total Scanresults = 1
2021/03/11 08:45:31 Count of scanResults is 1
2021/03/11 08:45:31 Value: ["Adam"]
2021/03/11 08:45:32 === MIN on desc, no group ===
2021/03/11 08:45:32 Total Scanresults = 1
2021/03/11 08:45:32 Count of scanResults is 1
2021/03/11 08:45:32 Value: ["FANFARE"]
2021/03/11 08:45:32 === MAX on asc, no group ===
2021/03/11 08:45:32 Total Scanresults = 1
2021/03/11 08:45:32 Count of scanResults is 1
2021/03/11 08:45:32 Value: ["OZEAN"]
2021/03/11 08:45:32 === MAX on desc, no group ===
2021/03/11 08:45:32 Total Scanresults = 1
2021/03/11 08:45:32 Count of scanResults is 1
2021/03/11 08:45:32 Value: ["OZEAN"]
2021/03/11 08:45:32 === COUNT(DISTINCT const_expr), no group ===
2021/03/11 08:45:32 Total Scanresults = 1
2021/03/11 08:45:32 Count of scanResults is 1
2021/03/11 08:45:32 Value: [1]
2021/03/11 08:45:32 === COUNT(DISTINCT const_expr), no group, no row match ===
2021/03/11 08:45:32 Total Scanresults = 1
2021/03/11 08:45:32 Count of scanResults is 1
2021/03/11 08:45:32 Value: [0]
2021/03/11 08:45:33 === COUNT(const_expr), no group ===
2021/03/11 08:45:33 Total Scanresults = 1
2021/03/11 08:45:33 Count of scanResults is 1
2021/03/11 08:45:33 Value: [345]
2021/03/11 08:45:36 Created the secondary index indexMinAggr. Waiting for it to become active
2021/03/11 08:45:36 Index is now active
2021/03/11 08:45:36 === Equality filter check: Equality for a, nil filter for b - inclusion 0, no filter for c ===
2021/03/11 08:45:36 Total Scanresults = 1
2021/03/11 08:45:36 Count of scanResults is 1
2021/03/11 08:45:36 Value: [5]
2021/03/11 08:45:37 === Equality filter check: Equality for a, nil filter for b - inclusion 3, no filter for c ===
2021/03/11 08:45:37 Total Scanresults = 1
2021/03/11 08:45:37 Count of scanResults is 1
2021/03/11 08:45:37 Value: [5]
2021/03/11 08:45:37 === Equality filter check: Equality for a, nil filter for b - inclusion 3, nil filter for c ===
2021/03/11 08:45:37 Total Scanresults = 1
2021/03/11 08:45:37 Count of scanResults is 1
2021/03/11 08:45:37 Value: [5]
--- PASS: TestGroupAggr_FirstValidAggrOnly (24.65s)
=== RUN   TestGroupAggrPrimary
2021/03/11 08:45:37 In TestGroupAggrPrimary()
2021/03/11 08:45:37 Total Scanresults = 1
2021/03/11 08:45:37 Total Scanresults = 1
2021/03/11 08:45:37 Total Scanresults = 1
2021/03/11 08:45:37 Total Scanresults = 1002
2021/03/11 08:45:38 Total Scanresults = 1002
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 --- MB-28305 Scenario 1 ---
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Count of scanResults is 1
2021/03/11 08:45:38 Value: [0]
2021/03/11 08:45:38 --- MB-28305 Scenario 2 ---
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Count of scanResults is 1
2021/03/11 08:45:38 Value: [0]
2021/03/11 08:45:38 --- MB-28305 Scenario 3 ---
2021/03/11 08:45:38 Total Scanresults = 1
2021/03/11 08:45:38 Count of scanResults is 1
2021/03/11 08:45:38 Value: [0]
--- PASS: TestGroupAggrPrimary (1.16s)
=== RUN   TestIndexNodeRebalanceIn
2021/03/11 08:45:38 In TestIndexNodeRebalanceIn()
2021/03/11 08:45:38 This test will rebalance an indexer node into the cluster
2021/03/11 08:45:38 Adding node: http://127.0.0.1:9002 with role: index to the cluster
2021/03/11 08:45:45 addNode: Successfully added node with hostname: 127.0.0.1:9002, res: {"otpNode":"n_2@127.0.0.1"}
2021-03-11T08:45:47.732+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9112: connect: connection refused
2021-03-11T08:45:47.896+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9112: connect: connection refused
2021-03-11T08:45:48.798+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:45:48.953+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:45:50 Rebalance progress: 100
--- PASS: TestIndexNodeRebalanceIn (11.81s)
=== RUN   TestIndexNodeRebalanceOut
2021/03/11 08:45:50 In TestIndexNodeRebalanceOut()
2021/03/11 08:45:50 This test will rebalance an indexer node (127.0.0.1:9001) out of the cluster
2021/03/11 08:45:50 Removing node(s): [127.0.0.1:9001] from the cluster
2021/03/11 08:45:56 Rebalance progress: 0.390625
2021/03/11 08:46:00 Rebalance progress: 1.475694444444444
2021/03/11 08:46:05 Rebalance progress: 2.56076388888889
2021/03/11 08:46:10 Rebalance progress: 3.472222222222223
2021/03/11 08:46:15 Rebalance progress: 4.513888888888889
2021/03/11 08:46:20 Rebalance progress: 5.555555555555556
2021/03/11 08:46:25 Rebalance progress: 6.423611111111111
2021/03/11 08:46:30 Rebalance progress: 7.552083333333333
2021/03/11 08:46:36 Rebalance progress: 8.42013888888889
2021/03/11 08:46:40 Rebalance progress: 9.375
2021/03/11 08:46:45 Rebalance progress: 10.41666666666667
2021/03/11 08:46:50 Rebalance progress: 11.28472222222222
2021/03/11 08:46:56 Rebalance progress: 12.32638888888889
2021/03/11 08:47:00 Rebalance progress: 13.45486111111111
2021/03/11 08:47:06 Rebalance progress: 14.23611111111111
2021/03/11 08:47:10 Rebalance progress: 15.40798611111111
2021/03/11 08:47:15 Rebalance progress: 16.23263888888889
2021/03/11 08:47:20 Rebalance progress: 17.01388888888889
2021/03/11 08:47:25 Rebalance progress: 18.09895833333333
2021/03/11 08:47:30 Rebalance progress: 19.18402777777778
2021/03/11 08:47:35 Rebalance progress: 20.00868055555556
2021/03/11 08:47:40 Rebalance progress: 20.96354166666667
2021/03/11 08:47:45 Rebalance progress: 21.91840277777778
2021/03/11 08:47:50 Rebalance progress: 23.22048611111111
2021/03/11 08:47:55 Rebalance progress: 24.609375
2021/03/11 08:48:01 Rebalance progress: 25.84635416666667
2021/03/11 08:48:06 Rebalance progress: 27.38715277777778
2021/03/11 08:48:10 Rebalance progress: 28.53732638888889
2021/03/11 08:48:16 Rebalance progress: 29.57899305555556
2021/03/11 08:48:21 Rebalance progress: 30.92447916666667
2021/03/11 08:48:25 Rebalance progress: 31.85763888888889
2021/03/11 08:48:30 Rebalance progress: 33.33333333333334
2021/03/11 08:48:35 Rebalance progress: 33.33333333333334
2021/03/11 08:48:40 Rebalance progress: 33.33333333333334
2021/03/11 08:48:45 Rebalance progress: 38.33333333333333
2021/03/11 08:48:50 Rebalance progress: 70.83333333333333
2021/03/11 08:48:55 Rebalance progress: 83.33333333333333
2021-03-11T08:48:56.432+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:59608->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T08:48:56.479+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:48:56.479+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 08:49:00 Rebalance progress: 100
--- PASS: TestIndexNodeRebalanceOut (193.27s)
=== RUN   TestFailoverAndRebalance
2021/03/11 08:49:03 In TestFailoverAndRebalance()
2021/03/11 08:49:03 This test will fail over and rebalance an indexer node (127.0.0.1:9002) out of the cluster
2021/03/11 08:49:03 Failing over: [127.0.0.1:9002]
2021-03-11T08:49:04.334+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:49:04.335+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:49:04.335+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:49:04.335+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 08:49:09 Rebalance progress: 100
--- PASS: TestFailoverAndRebalance (8.42s)
=== RUN   TestResetCluster
2021/03/11 08:49:12 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2021/03/11 08:49:17 Rebalance progress: 100
2021/03/11 08:49:17 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:49:21 addNode: Successfully added node with hostname: 127.0.0.1:9001, res: {"otpNode":"n_1@127.0.0.1"}
2021-03-11T08:49:25.693+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:25.726+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:26.761+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:49:26.781+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:49:26 Rebalance progress: 100
--- PASS: TestResetCluster (14.58s)
=== RUN   TestAlterIndexIncrReplica
2021/03/11 08:49:26 In TestAlterIndexIncrReplica()
2021/03/11 08:49:26 This test creates an index with one replica and then increments replica count to 2
2021/03/11 08:49:26 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2021-03-11T08:49:27.930+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:49:27.930+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:49:27.930+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:49:27.931+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:49:29.931+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:29.931+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021/03/11 08:49:31 Rebalance progress: 100
2021/03/11 08:49:31 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:49:32 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021-03-11T08:49:33.932+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:33.932+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021/03/11 08:49:37 addNode: Successfully added node with hostname: 127.0.0.1:9001, res: {"otpNode":"n_1@127.0.0.1"}
2021-03-11T08:49:41.224+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:41.231+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:49:42.266+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:49:42.269+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:49:42 Rebalance progress: 100
2021/03/11 08:49:42 Adding node: http://127.0.0.1:9002 with role: index to the cluster
2021/03/11 08:49:47 addNode: Successfully added node with hostname: 127.0.0.1:9002, res: {"otpNode":"n_2@127.0.0.1"}
2021/03/11 08:49:52 Rebalance progress: 100
2021/03/11 08:49:52 Adding node: http://127.0.0.1:9003 with role: index to the cluster
2021/03/11 08:49:58 addNode: Successfully added node with hostname: 127.0.0.1:9003, res: {"otpNode":"n_3@127.0.0.1"}
2021-03-11T08:50:01.474+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9118: connect: connection refused
2021-03-11T08:50:01.481+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9118: connect: connection refused
2021-03-11T08:50:02.570+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:50:02.571+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:50:03 Rebalance progress: 100
2021/03/11 08:50:03 In DropAllSecondaryIndexes()
2021/03/11 08:50:49 Flushed the bucket default, Response body: 
2021/03/11 08:50:55 Created the secondary index idx_1. Waiting for it to become active
2021/03/11 08:50:55 Index is now active
2021/03/11 08:50:55 Executing alter index command: alter index `default`.idx_1 with {"action":"replica_count", "num_replica":2}
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
2021/03/11 08:51:17 Using n1ql client
--- PASS: TestAlterIndexIncrReplica (111.12s)
=== RUN   TestAlterIndexDecrReplica
2021/03/11 08:51:17 In TestAlterIndexDecrReplica()
2021/03/11 08:51:17 This test creates an index with two replicas and then decrements replica count to 1
2021/03/11 08:51:17 In DropAllSecondaryIndexes()
2021/03/11 08:51:17 Index found:  idx_1
2021/03/11 08:51:18 Dropped index idx_1
2021/03/11 08:51:32 Created the secondary index idx_2. Waiting for it to become active
2021/03/11 08:51:32 Index is now active
2021/03/11 08:51:32 Executing alter index command: alter index `default`.idx_2 with {"action":"replica_count", "num_replica":1}
2021/03/11 08:51:48 Using n1ql client [line repeated 100 times]
--- PASS: TestAlterIndexDecrReplica (30.83s)
=== RUN   TestAlterIndexDropReplica
2021/03/11 08:51:48 In TestAlterIndexDropReplica()
2021/03/11 08:51:48 This test creates an index with two replicas and then drops one replica from the cluster
2021/03/11 08:51:48 In DropAllSecondaryIndexes()
2021/03/11 08:51:48 Index found:  idx_2
2021/03/11 08:51:49 Dropped index idx_2
2021/03/11 08:52:04 Created the secondary index idx_3. Waiting for it to become active
2021/03/11 08:52:04 Index is now active
2021/03/11 08:52:04 Executing alter index command: alter index `default`.idx_3 with {"action":"drop_replica", "replicaId":0}
2021/03/11 08:52:19 Using n1ql client [line repeated 100 times through 08:52:20]
--- PASS: TestAlterIndexDropReplica (31.41s)
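The two ALTER INDEX variants exercised by the tests above can be sketched as plain N1QL statement builders. This is an illustrative sketch only: the statement shapes and names (`default`, `idx_2`, `idx_3`) mirror the log lines above, and the helper functions are hypothetical, not part of the test harness.

```python
# Sketch of the N1QL ALTER INDEX statements used in these tests.
# Statement shapes follow the log; the helper names are illustrative.
import json

def alter_replica_count(bucket: str, index: str, num_replica: int) -> str:
    """Change the number of replicas of an existing index."""
    with_clause = json.dumps({"action": "replica_count", "num_replica": num_replica})
    return f"alter index `{bucket}`.{index} with {with_clause}"

def alter_drop_replica(bucket: str, index: str, replica_id: int) -> str:
    """Drop a single replica of an index by its replica ID."""
    with_clause = json.dumps({"action": "drop_replica", "replicaId": replica_id})
    return f"alter index `{bucket}`.{index} with {with_clause}"

print(alter_replica_count("default", "idx_2", 1))
print(alter_drop_replica("default", "idx_3", 0))
```

Note that `replica_count` resizes the replica set as a whole, while `drop_replica` removes one specific replica by ID, matching the two test scenarios above.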
=== RUN   TestResetCluster_1
2021/03/11 08:52:20 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2021-03-11T08:52:23.323+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:52:23.323+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:23.323+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:52:23.323+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:23.986+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:52:23.987+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:23.987+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:52:23.987+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:24.010+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = EOF. Kill Pipe.
2021-03-11T08:52:24.010+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:24.010+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = EOF. Kill Pipe.
2021-03-11T08:52:24.011+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 08:52:25 Rebalance progress: 100
2021/03/11 08:52:25 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:26 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:27 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:28 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:29 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:30 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:31 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:36 addNode: Successfully added node with hostname: 127.0.0.1:9001, res: {"otpNode":"n_1@127.0.0.1"}
2021-03-11T08:52:39.978+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:39.992+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:41.077+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:52:41.081+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:52:41 Rebalance progress: 100
--- PASS: TestResetCluster_1 (21.14s)
=== RUN   TestPartitionDistributionWithReplica
2021/03/11 08:52:41 In TestPartitionDistributionWithReplica()
2021/03/11 08:52:41 This test creates a partitioned index with a replica and checks the partition distribution
2021/03/11 08:52:41 Partitions with the same ID belonging to both the replica and the source index should not be on the same node
2021/03/11 08:52:41 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2021-03-11T08:52:42.598+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:52:42.599+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:42.599+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:52:42.599+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:52:44.599+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:44.600+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021/03/11 08:52:46 Rebalance progress: 100
2021/03/11 08:52:46 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:52:47 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021-03-11T08:52:48.600+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:48.600+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021/03/11 08:52:52 addNode: Successfully added node with hostname: 127.0.0.1:9001, res: {"otpNode":"n_1@127.0.0.1"}
2021-03-11T08:52:55.733+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:55.737+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:52:56.772+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021-03-11T08:52:56.778+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:52:57 Rebalance progress: 100
2021/03/11 08:52:57 Adding node: http://127.0.0.1:9002 with role: index to the cluster
2021/03/11 08:53:02 addNode: Successfully added node with hostname: 127.0.0.1:9002, res: {"otpNode":"n_2@127.0.0.1"}
2021/03/11 08:53:07 Rebalance progress: 100
2021/03/11 08:53:07 Adding node: http://127.0.0.1:9003 with role: index to the cluster
2021/03/11 08:53:12 addNode: Successfully added node with hostname: 127.0.0.1:9003, res: {"otpNode":"n_3@127.0.0.1"}
2021-03-11T08:53:15.550+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9118: connect: connection refused
2021-03-11T08:53:15.563+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9118: connect: connection refused
2021-03-11T08:53:16.712+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:53:17 Rebalance progress: 100
2021/03/11 08:53:17 In DropAllSecondaryIndexes()
2021/03/11 08:54:03 Flushed the bucket default, Response body: 
2021/03/11 08:54:03 Executing create partition index command on: create index `idx_partn` on `default`(age) partition by hash(meta().id) with {"num_partition":8, "num_replica":1}
2021/03/11 08:54:32 Using n1ql client [line repeated 100 times through 08:54:33]
--- PASS: TestPartitionDistributionWithReplica (111.98s)
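The invariant this test verifies is that for any partition ID, the replica's partition and the source index's partition never land on the same node. A minimal sketch of that check, under an entirely hypothetical placement map (the real test derives placement from indexer metadata):

```python
# Sketch of the placement invariant checked by this test: for each
# partition ID, the node hosting the source index's partition must
# differ from the node hosting the replica's copy of that partition.
# The placement maps below are hypothetical.

def partitions_disjoint(source: dict, replica: dict) -> bool:
    """source/replica map partition ID -> node hosting that partition."""
    return all(source[pid] != replica.get(pid) for pid in source)

source_placement  = {1: "n_1", 2: "n_2", 3: "n_3", 4: "n_1"}
replica_placement = {1: "n_2", 2: "n_3", 3: "n_1", 4: "n_2"}
print(partitions_disjoint(source_placement, replica_placement))  # True
```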
=== RUN   TestPartitionedPartialIndex
2021/03/11 08:54:33 Executing create index command: CREATE INDEX `idx_regular` ON `default`(partn_name)
2021/03/11 08:54:39 Using n1ql client
2021/03/11 08:54:40 Dropping the secondary index idx_regular
2021/03/11 08:54:40 Index dropped
2021/03/11 08:54:40 Executing create index command: CREATE INDEX `idx_partial` ON `default`(partn_name) WHERE partn_age >= 0
2021/03/11 08:54:47 Using n1ql client
2021/03/11 08:54:47 Using n1ql client
2021/03/11 08:54:47 Using n1ql client
2021/03/11 08:54:47 Dropping the secondary index idx_partial
2021/03/11 08:54:47 Index dropped
2021/03/11 08:54:48 Executing create index command: CREATE INDEX `idx_partitioned` ON `default`(partn_name) PARTITION BY HASH(meta().id) 
2021/03/11 08:54:58 Using n1ql client
2021/03/11 08:54:59 Using n1ql client
2021/03/11 08:54:59 Dropping the secondary index idx_partitioned
2021/03/11 08:54:59 Index dropped
2021/03/11 08:54:59 Executing create index command: CREATE INDEX `idx_partitioned_partial` ON `default`(partn_name) PARTITION BY HASH(meta().id) WHERE partn_age >= 0
2021/03/11 08:55:10 Using n1ql client
2021/03/11 08:55:10 Using n1ql client
2021/03/11 08:55:10 Using n1ql client
2021/03/11 08:55:10 Using n1ql client
2021/03/11 08:55:10 Dropping the secondary index idx_partitioned_partial
2021/03/11 08:55:11 Index dropped
--- PASS: TestPartitionedPartialIndex (41.07s)
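The four CREATE INDEX shapes exercised above (regular, partial, partitioned, partitioned partial) differ only in the optional PARTITION BY and WHERE clauses. A sketch that assembles them as plain strings; the builder function is hypothetical, while the names and clause order mirror the statements in the log:

```python
# The four CREATE INDEX shapes from this test, built as plain strings.
# Clause order (PARTITION BY before WHERE) follows the log above.

def create_index(name: str, bucket: str, field: str,
                 partitioned: bool = False, where: str = "") -> str:
    stmt = f"CREATE INDEX `{name}` ON `{bucket}`({field})"
    if partitioned:
        stmt += " PARTITION BY HASH(meta().id)"
    if where:
        stmt += f" WHERE {where}"
    return stmt

print(create_index("idx_regular", "default", "partn_name"))
print(create_index("idx_partial", "default", "partn_name", where="partn_age >= 0"))
print(create_index("idx_partitioned", "default", "partn_name", partitioned=True))
print(create_index("idx_partitioned_partial", "default", "partn_name",
                   partitioned=True, where="partn_age >= 0"))
```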
=== RUN   TestResetCluster_2
2021/03/11 08:55:14 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2021-03-11T08:55:18.417+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:55:18.417+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:55:18.442+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T08:55:18.443+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:55:18.759+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:55:18.759+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:55:18.759+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = EOF. Kill Pipe.
2021-03-11T08:55:18.760+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T08:55:19.040+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:43356->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2021-03-11T08:55:19.114+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:43354->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2021/03/11 08:55:19 Rebalance progress: 100
2021/03/11 08:55:19 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:20 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:21 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:22 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:23 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:24 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:25 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:26 Adding node: http://127.0.0.1:9001 with role: index to the cluster
2021/03/11 08:55:31 addNode: Successfully added node with hostname: 127.0.0.1:9001, res: {"otpNode":"n_1@127.0.0.1"}
2021-03-11T08:55:35.495+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:55:35.496+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T08:55:36.721+05:30 [Error] updateIndexerList(): Cannot locate cluster node.
2021/03/11 08:55:36 Rebalance progress: 100
--- PASS: TestResetCluster_2 (22.45s)
=== RUN   TestCollectionSetup
2021/03/11 08:55:36 In TestCollectionSetup()
2021/03/11 08:55:36 In DropAllSecondaryIndexes()
2021/03/11 08:56:32 Flushed the bucket default, Response body: 
--- PASS: TestCollectionSetup (56.73s)
=== RUN   TestCollectionDefault
2021/03/11 08:56:33 In TestCollectionDefault()
2021/03/11 08:56:42 Created the secondary index _default__default_i1. Waiting for it to become active
2021/03/11 08:56:42 Index is now active
2021/03/11 08:56:42 Using n1ql client
2021/03/11 08:56:42 Expected and Actual scan responses are the same
2021/03/11 08:56:47 Created the secondary index _default__default_i2. Waiting for it to become active
2021/03/11 08:56:47 Index is now active
2021/03/11 08:56:47 Using n1ql client
2021/03/11 08:56:47 Expected and Actual scan responses are the same
2021/03/11 08:56:47 Dropping the secondary index _default__default_i1
2021/03/11 08:56:47 Index dropped
2021/03/11 08:56:47 Using n1ql client
2021/03/11 08:56:47 Expected and Actual scan responses are the same
2021/03/11 08:56:50 Created the secondary index _default__default_i1. Waiting for it to become active
2021/03/11 08:56:50 Index is now active
2021/03/11 08:56:50 Using n1ql client
2021/03/11 08:56:51 Expected and Actual scan responses are the same
2021/03/11 08:56:51 Dropping the secondary index _default__default_i1
2021/03/11 08:56:51 Index dropped
2021/03/11 08:56:51 Dropping the secondary index _default__default_i2
2021/03/11 08:56:51 Index dropped
2021/03/11 08:56:53 Created the secondary index _default__default_i1. Waiting for it to become active
2021/03/11 08:56:53 Index is now active
2021/03/11 08:56:53 Using n1ql client
2021/03/11 08:56:53 Expected and Actual scan responses are the same
2021/03/11 08:56:57 Created the secondary index _default__default_i2. Waiting for it to become active
2021/03/11 08:56:57 Index is now active
2021/03/11 08:56:57 Using n1ql client
2021/03/11 08:56:58 Expected and Actual scan responses are the same
2021/03/11 08:56:58 Dropping the secondary index _default__default_i1
2021/03/11 08:56:58 Index dropped
2021/03/11 08:56:58 Dropping the secondary index _default__default_i2
2021/03/11 08:56:58 Index dropped
2021/03/11 08:56:58 Build command issued for the deferred indexes [_default__default_i1 _default__default_i2]
2021/03/11 08:56:58 Waiting for the index _default__default_i1 to become active
2021/03/11 08:56:58 Waiting for index to go active ...
2021/03/11 08:56:59 Waiting for index to go active ...
2021/03/11 08:57:00 Waiting for index to go active ...
2021/03/11 08:57:01 Index is now active
2021/03/11 08:57:01 Waiting for the index _default__default_i2 to become active
2021/03/11 08:57:01 Index is now active
2021/03/11 08:57:08 Using n1ql client
2021/03/11 08:57:08 Expected and Actual scan responses are the same
2021/03/11 08:57:08 Using n1ql client
2021/03/11 08:57:08 Expected and Actual scan responses are the same
2021/03/11 08:57:08 Dropping the secondary index _default__default_i1
2021/03/11 08:57:08 Index dropped
2021/03/11 08:57:14 Using n1ql client
2021/03/11 08:57:14 Expected and Actual scan responses are the same
2021/03/11 08:57:14 Dropping the secondary index _default__default_i2
2021/03/11 08:57:14 Index dropped
--- PASS: TestCollectionDefault (41.00s)
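The deferred-build flow visible above (a single build command for several deferred indexes, then polling until each goes active) can be sketched as follows. The BUILD INDEX statement shape and the poll loop are illustrative; `poll_status` is a hypothetical stand-in for a real status query against the indexer, and real code would sleep between polls.

```python
# Sketch of the deferred-build flow seen in this test: indexes created
# with defer_build are built together, then polled until active.

def build_statement(bucket: str, indexes: list) -> str:
    """Build one BUILD INDEX statement covering all deferred indexes."""
    names = ", ".join(f"`{n}`" for n in indexes)
    return f"BUILD INDEX ON `{bucket}`({names})"

def wait_until_active(poll_status, attempts: int = 10) -> bool:
    """Poll until the index reports Active (a real loop would sleep)."""
    for _ in range(attempts):
        if poll_status() == "Active":
            return True
    return False

print(build_statement("default", ["_default__default_i1", "_default__default_i2"]))
# Simulated status sequence: two polls return "Created", then "Active".
statuses = iter(["Created", "Created", "Active"])
print(wait_until_active(lambda: next(statuses)))  # True
```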
=== RUN   TestCollectionNonDefault
2021/03/11 08:57:14 In TestCollectionNonDefault()
2021/03/11 08:57:14 Creating scope: s1 for bucket: default as it does not exist
2021/03/11 08:57:14 Create scope succeeded for bucket default, scopeName: s1, body: {"uid":"1"} 
2021/03/11 08:57:15 Create collection succeeded for bucket: default, scope: s1, collection: c1, body: {"uid":"2"}
2021/03/11 08:57:29 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:57:29 Index is now active
2021/03/11 08:57:29 Using n1ql client
2021-03-11T08:57:29.251+05:30 [Info] metadata provider version changed 1972 -> 1973
2021-03-11T08:57:29.251+05:30 [Info] switched currmeta from 1972 -> 1973 force false 
2021-03-11T08:57:29.251+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:57:29.252+05:30 [Info] GSIC[default/default-s1-c1-1615433249240471154] started ...
2021/03/11 08:57:29 Expected and Actual scan responses are the same
2021/03/11 08:57:34 Created the secondary index s1_c1_i2. Waiting for it to become active
2021/03/11 08:57:34 Index is now active
2021/03/11 08:57:34 Using n1ql client
2021/03/11 08:57:34 Expected and Actual scan responses are the same
2021/03/11 08:57:34 Dropping the secondary index s1_c1_i1
2021/03/11 08:57:34 Index dropped
2021/03/11 08:57:34 Using n1ql client
2021/03/11 08:57:34 Expected and Actual scan responses are the same
2021/03/11 08:57:38 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:57:38 Index is now active
2021/03/11 08:57:38 Using n1ql client
2021/03/11 08:57:38 Expected and Actual scan responses are the same
2021/03/11 08:57:38 Dropping the secondary index s1_c1_i1
2021/03/11 08:57:38 Index dropped
2021/03/11 08:57:38 Dropping the secondary index s1_c1_i2
2021/03/11 08:57:38 Index dropped
2021/03/11 08:57:41 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:57:41 Index is now active
2021/03/11 08:57:41 Using n1ql client
2021/03/11 08:57:42 Expected and Actual scan responses are the same
2021/03/11 08:57:46 Created the secondary index s1_c1_i2. Waiting for it to become active
2021/03/11 08:57:46 Index is now active
2021/03/11 08:57:46 Using n1ql client
2021/03/11 08:57:46 Expected and Actual scan responses are the same
2021/03/11 08:57:46 Dropping the secondary index s1_c1_i1
2021/03/11 08:57:46 Index dropped
2021/03/11 08:57:46 Dropping the secondary index s1_c1_i2
2021/03/11 08:57:46 Index dropped
2021/03/11 08:57:47 Build command issued for the deferred indexes [s1_c1_i1 s1_c1_i2]
2021/03/11 08:57:47 Waiting for the index s1_c1_i1 to become active
2021/03/11 08:57:47 Waiting for index to go active ...
2021/03/11 08:57:48 Waiting for index to go active ...
2021/03/11 08:57:49 Waiting for index to go active ...
2021/03/11 08:57:50 Index is now active
2021/03/11 08:57:50 Waiting for the index s1_c1_i2 to become active
2021/03/11 08:57:50 Index is now active
2021/03/11 08:57:51 Using n1ql client
2021/03/11 08:57:52 Expected and Actual scan responses are the same
2021/03/11 08:57:52 Using n1ql client
2021/03/11 08:57:52 Expected and Actual scan responses are the same
2021/03/11 08:57:52 Dropping the secondary index s1_c1_i1
2021/03/11 08:57:52 Index dropped
2021/03/11 08:57:53 Using n1ql client
2021/03/11 08:57:53 Expected and Actual scan responses are the same
2021/03/11 08:57:53 Dropping the secondary index s1_c1_i2
2021/03/11 08:57:53 Index dropped
--- PASS: TestCollectionNonDefault (39.31s)
=== RUN   TestCollectionMetaAtSnapEnd
2021/03/11 08:57:53 In TestCollectionMetaAtSnapEnd()
2021/03/11 08:57:53 Creating scope: s2 for bucket: default as it does not exist
2021/03/11 08:57:54 Create scope succeeded for bucket default, scopeName: s2, body: {"uid":"3"} 
2021/03/11 08:57:54 Create collection succeeded for bucket: default, scope: s2, collection: c2, body: {"uid":"4"}
2021/03/11 08:58:06 Created the secondary index s2_c2_i1. Waiting for it to become active
2021/03/11 08:58:06 Index is now active
2021/03/11 08:58:06 Using n1ql client
2021-03-11T08:58:06.812+05:30 [Info] metadata provider version changed 2031 -> 2032
2021-03-11T08:58:06.812+05:30 [Info] switched currmeta from 2031 -> 2032 force false 
2021-03-11T08:58:06.812+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T08:58:06.812+05:30 [Info] GSIC[default/default-s2-c2-1615433286805210800] started ...
2021/03/11 08:58:06 Expected and Actual scan responses are the same
2021/03/11 08:58:09 Using n1ql client
2021/03/11 08:58:09 Expected and Actual scan responses are the same
2021/03/11 08:58:10 Create collection succeeded for bucket: default, scope: s2, collection: c3, body: {"uid":"5"}
2021/03/11 08:58:20 Using n1ql client
2021/03/11 08:58:20 Expected and Actual scan responses are the same
2021/03/11 08:58:24 Created the secondary index s2_c2_i2. Waiting for it to become active
2021/03/11 08:58:24 Index is now active
2021/03/11 08:58:24 Using n1ql client
2021/03/11 08:58:24 Expected and Actual scan responses are the same
2021/03/11 08:58:24 Using n1ql client
2021/03/11 08:58:24 Expected and Actual scan responses are the same
2021/03/11 08:58:30 Using n1ql client
2021/03/11 08:58:30 Expected and Actual scan responses are the same
2021/03/11 08:58:30 Using n1ql client
2021/03/11 08:58:30 Expected and Actual scan responses are the same
--- PASS: TestCollectionMetaAtSnapEnd (36.42s)
=== RUN   TestCollectionUpdateSeq
2021/03/11 08:58:30 In TestCollectionUpdateSeq()
2021/03/11 08:58:31 Using n1ql client
2021/03/11 08:58:31 Expected and Actual scan responses are the same
2021/03/11 08:58:31 Using n1ql client
2021/03/11 08:58:31 Expected and Actual scan responses are the same
2021/03/11 08:58:37 Using n1ql client
2021/03/11 08:58:37 Expected and Actual scan responses are the same
2021/03/11 08:58:37 Using n1ql client
2021/03/11 08:58:37 Expected and Actual scan responses are the same
2021/03/11 08:58:37 Dropping the secondary index s2_c2_i1
2021/03/11 08:58:37 Index dropped
2021/03/11 08:58:37 Dropping the secondary index s2_c2_i2
2021/03/11 08:58:37 Index dropped
--- PASS: TestCollectionUpdateSeq (7.07s)
=== RUN   TestCollectionMultiple
2021/03/11 08:58:37 In TestCollectionMultiple()
2021/03/11 08:58:40 Created the secondary index _default__default_i3. Waiting for it to become active
2021/03/11 08:58:40 Index is now active
2021/03/11 08:58:40 Using n1ql client
2021/03/11 08:58:40 Expected and Actual scan responses are the same
2021/03/11 08:58:45 Created the secondary index s1_c1_i4. Waiting for it to become active
2021/03/11 08:58:45 Index is now active
2021/03/11 08:58:45 Using n1ql client
2021/03/11 08:58:45 Expected and Actual scan responses are the same
2021/03/11 08:58:45 Dropping the secondary index _default__default_i3
2021/03/11 08:58:45 Index dropped
2021/03/11 08:58:45 Dropping the secondary index s1_c1_i4
2021/03/11 08:58:45 Index dropped
--- PASS: TestCollectionMultiple (8.05s)
=== RUN   TestCollectionNoDocs
2021/03/11 08:58:45 In TestCollectionNoDocs()
2021/03/11 08:58:50 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:58:50 Index is now active
2021/03/11 08:58:50 Using n1ql client
2021/03/11 08:58:50 Expected and Actual scan responses are the same
2021/03/11 08:58:54 Created the secondary index s1_c1_i2. Waiting for it to become active
2021/03/11 08:58:54 Index is now active
2021/03/11 08:58:54 Using n1ql client
2021/03/11 08:58:55 Expected and Actual scan responses are the same
2021/03/11 08:58:55 Dropping the secondary index s1_c1_i1
2021/03/11 08:58:55 Index dropped
2021/03/11 08:58:55 Using n1ql client
2021/03/11 08:58:55 Expected and Actual scan responses are the same
2021/03/11 08:58:59 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:58:59 Index is now active
2021/03/11 08:58:59 Using n1ql client
2021/03/11 08:59:00 Expected and Actual scan responses are the same
2021/03/11 08:59:06 Using n1ql client
2021/03/11 08:59:06 Expected and Actual scan responses are the same
2021/03/11 08:59:06 Using n1ql client
2021/03/11 08:59:06 Expected and Actual scan responses are the same
2021/03/11 08:59:06 Dropping the secondary index s1_c1_i1
2021/03/11 08:59:06 Index dropped
2021/03/11 08:59:06 Dropping the secondary index s1_c1_i2
2021/03/11 08:59:06 Index dropped
--- PASS: TestCollectionNoDocs (20.98s)
=== RUN   TestCollectionPrimaryIndex
2021/03/11 08:59:06 In TestCollectionPrimaryIndex()
2021/03/11 08:59:11 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:59:11 Index is now active
2021/03/11 08:59:11 Using n1ql client
2021/03/11 08:59:16 Created the secondary index s1_c1_i2. Waiting for it to become active
2021/03/11 08:59:16 Index is now active
2021/03/11 08:59:16 Using n1ql client
2021/03/11 08:59:18 Using n1ql client
2021/03/11 08:59:18 Using n1ql client
2021/03/11 08:59:18 Dropping the secondary index s1_c1_i1
2021/03/11 08:59:18 Index dropped
2021/03/11 08:59:22 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 08:59:22 Index is now active
2021/03/11 08:59:22 Using n1ql client
2021/03/11 08:59:22 Expected and Actual scan responses are the same
2021/03/11 08:59:23 Using n1ql client
2021/03/11 08:59:23 Expected and Actual scan responses are the same
2021/03/11 08:59:23 Using n1ql client
2021/03/11 08:59:23 Dropping the secondary index s1_c1_i1
2021/03/11 08:59:23 Index dropped
2021/03/11 08:59:23 Dropping the secondary index s1_c1_i2
2021/03/11 08:59:23 Index dropped
--- PASS: TestCollectionPrimaryIndex (17.25s)
=== RUN   TestCollectionMultipleBuilds
2021/03/11 08:59:24 Build command issued for the deferred indexes [9599448112743519604 11848628745306328968]
2021/03/11 08:59:24 Build command issued for the deferred indexes [12659921147979159781 10561135318059431185]
2021/03/11 08:59:24 Waiting for index to go active ...
2021/03/11 08:59:25 Waiting for index to go active ...
2021/03/11 08:59:26 Waiting for index to go active ...
2021/03/11 08:59:27 Waiting for index to go active ...
2021/03/11 08:59:28 Index is now active
2021/03/11 08:59:28 Index is now active
2021/03/11 08:59:28 Waiting for index to go active ...
2021/03/11 08:59:29 Waiting for index to go active ...
2021/03/11 08:59:30 Waiting for index to go active ...
2021/03/11 08:59:31 Waiting for index to go active ...
2021/03/11 08:59:32 Waiting for index to go active ...
2021/03/11 08:59:33 Waiting for index to go active ...
2021/03/11 08:59:34 Waiting for index to go active ...
2021/03/11 08:59:35 Waiting for index to go active ...
2021/03/11 08:59:36 Waiting for index to go active ...
2021/03/11 08:59:37 Index is now active
2021/03/11 08:59:37 Index is now active
2021/03/11 08:59:37 Using n1ql client
2021/03/11 08:59:37 Expected and Actual scan responses are the same
2021/03/11 08:59:37 Using n1ql client
2021/03/11 08:59:37 Expected and Actual scan responses are the same
2021/03/11 08:59:37 Using n1ql client
2021-03-11T09:04:37.349+05:30 [Error] receiving packet: read tcp 127.0.0.1:32946->127.0.0.1:9107: i/o timeout
2021-03-11T09:04:37.349+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1291884638076091091) connection "127.0.0.1:32946" response transport failed `read tcp 127.0.0.1:32946->127.0.0.1:9107: i/o timeout`
2021-03-11T09:04:37.349+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] Range(1291884638076091091) response failed `read tcp 127.0.0.1:32946->127.0.0.1:9107: i/o timeout`
2021-03-11T09:04:37.349+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021-03-11T09:09:37.363+05:30 [Error] receiving packet: read tcp 127.0.0.1:43062->127.0.0.1:9107: i/o timeout
2021-03-11T09:09:37.363+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1291884638076091091) connection "127.0.0.1:43062" response transport failed `read tcp 127.0.0.1:43062->127.0.0.1:9107: i/o timeout`
2021-03-11T09:14:37.364+05:30 [Error] receiving packet: read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout
2021-03-11T09:14:37.364+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1291884638076091091) connection "127.0.0.1:47880" response transport failed `read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout`
2021-03-11T09:14:37.364+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] Range(1291884638076091091) response failed `read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout`
2021-03-11T09:14:37.365+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
Scan error:  read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout from [127.0.0.1:9107] - cause:  read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout from [127.0.0.1:9107]
--- FAIL: TestCollectionMultipleBuilds (913.73s)
    common_test.go:125: Error in scan :  read tcp 127.0.0.1:47880->127.0.0.1:9107: i/o timeout from [127.0.0.1:9107]
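The failure above is a client-side scan timeout rather than a wrong result: every five minutes the GsiScanClient gives up on the socket read (`i/o timeout`), reconnects, and finally surfaces the error to the test. A minimal sketch of the retry-on-timeout shape involved, using a hypothetical `scanWithRetry` helper and a fake timeout error for demonstration (neither is part of the GSI client API):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// fakeTimeout stands in for the transport timeouts seen above so the
// sketch runs without a live indexer; it satisfies net.Error.
type fakeTimeout struct{}

func (fakeTimeout) Error() string   { return "i/o timeout" }
func (fakeTimeout) Timeout() bool   { return true }
func (fakeTimeout) Temporary() bool { return true }

// scanWithRetry re-issues scanFn when it fails with a network timeout
// and fails fast on any other error.
func scanWithRetry(attempts int, backoff time.Duration, scanFn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = scanFn(); err == nil {
			return nil
		}
		var nerr net.Error
		if !errors.As(err, &nerr) || !nerr.Timeout() {
			return err // permanent failure: do not retry
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("scan failed after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := scanWithRetry(3, 0, func() error {
		calls++
		if calls < 3 {
			return fakeTimeout{}
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```

Note this only illustrates the failure mode; the real fix for a 900-second test hang is usually on the indexer side, not in client retries.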
=== RUN   TestCollectionMultipleBuilds2
2021/03/11 09:14:37 Index found:  s1_c1_i1
2021/03/11 09:14:37 Index found:  s1_c1_i2
2021/03/11 09:14:39 Build command issued for the deferred indexes [9599448112743519604 11848628745306328968 11539359952741412031 11725671895930171208 3644921654402746189 3373446745115704468 9869855051017796829 1354136233002773888]
2021/03/11 09:14:39 Index is now active
2021/03/11 09:14:39 Index is now active
2021/03/11 09:14:39 Waiting for index to go active ...
2021/03/11 09:14:40 Waiting for index to go active ...
2021/03/11 09:14:41 Waiting for index to go active ...
2021/03/11 09:14:42 Waiting for index to go active ...
2021/03/11 09:14:43 Waiting for index to go active ...
2021/03/11 09:14:44 Index is now active
2021/03/11 09:14:44 Index is now active
2021/03/11 09:14:44 Waiting for index to go active ...
2021/03/11 09:14:45 Waiting for index to go active ...
2021/03/11 09:14:46 Waiting for index to go active ...
2021/03/11 09:14:47 Index is now active
2021/03/11 09:14:47 Index is now active
2021/03/11 09:14:47 Waiting for index to go active ...
2021/03/11 09:14:48 Waiting for index to go active ...
2021/03/11 09:14:49 Waiting for index to go active ...
2021/03/11 09:14:50 Index is now active
2021/03/11 09:14:50 Index is now active
2021/03/11 09:14:50 Using n1ql client
2021/03/11 09:14:50 Expected and Actual scan responses are the same
2021/03/11 09:14:50 Using n1ql client
2021/03/11 09:14:50 Expected and Actual scan responses are the same
2021/03/11 09:14:50 Using n1ql client
2021/03/11 09:14:50 Expected and Actual scan responses are the same
2021/03/11 09:14:50 Using n1ql client
2021-03-11T09:19:50.169+05:30 [Error] receiving packet: read tcp 127.0.0.1:52980->127.0.0.1:9107: i/o timeout
2021-03-11T09:19:50.169+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7395319025095376107) connection "127.0.0.1:52980" response transport failed `read tcp 127.0.0.1:52980->127.0.0.1:9107: i/o timeout`
2021-03-11T09:19:50.169+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] Range(7395319025095376107) response failed `read tcp 127.0.0.1:52980->127.0.0.1:9107: i/o timeout`
2021-03-11T09:24:50.172+05:30 [Error] receiving packet: read tcp 127.0.0.1:57794->127.0.0.1:9107: i/o timeout
2021-03-11T09:24:50.172+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7395319025095376107) connection "127.0.0.1:57794" response transport failed `read tcp 127.0.0.1:57794->127.0.0.1:9107: i/o timeout`
2021-03-11T09:24:50.172+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] Range(7395319025095376107) response failed `read tcp 127.0.0.1:57794->127.0.0.1:9107: i/o timeout`
2021-03-11T09:24:50.172+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:24:50 Expected and Actual scan responses are the same
2021/03/11 09:24:50 Using n1ql client
2021-03-11T09:24:50.207+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:24:50.208+05:30 [Info] GSIC[default/default-s2-c3-1615434890204291363] started ...
2021/03/11 09:24:50 Expected and Actual scan responses are the same
2021/03/11 09:24:50 Using n1ql client
2021/03/11 09:24:50 Expected and Actual scan responses are the same
2021/03/11 09:24:50 Using n1ql client
2021/03/11 09:24:50 Expected and Actual scan responses are the same
2021/03/11 09:24:50 Using n1ql client
2021/03/11 09:24:50 Expected and Actual scan responses are the same
2021/03/11 09:24:51 Using n1ql client
2021/03/11 09:24:51 Expected and Actual scan responses are the same
2021/03/11 09:24:51 Using n1ql client
2021/03/11 09:24:51 Expected and Actual scan responses are the same
2021/03/11 09:24:51 Dropping the secondary index s1_c1_i1
2021/03/11 09:24:51 Index dropped
2021/03/11 09:24:51 Dropping the secondary index s1_c1_i2
2021/03/11 09:24:51 Index dropped
2021/03/11 09:24:51 Dropping the secondary index s2_c2_i1
2021/03/11 09:24:51 Index dropped
2021/03/11 09:24:51 Dropping the secondary index s2_c2_i2
2021/03/11 09:24:51 Index dropped
2021/03/11 09:24:51 Dropping the secondary index s2_c3_i1
2021/03/11 09:24:51 Index dropped
2021/03/11 09:24:51 Dropping the secondary index s2_c3_i2
2021/03/11 09:24:52 Index dropped
2021/03/11 09:24:52 Dropping the secondary index _default__default_i1
2021/03/11 09:24:52 Index dropped
2021/03/11 09:24:52 Dropping the secondary index _default__default_i2
2021/03/11 09:24:52 Index dropped
--- PASS: TestCollectionMultipleBuilds2 (614.95s)
=== RUN   TestCollectionIndexDropConcurrentBuild
2021/03/11 09:24:52 In TestCollectionIndexDropConcurrentBuild()
2021/03/11 09:24:52 Build command issued for the deferred indexes [16918183472731973259 13858014852305364550]
2021/03/11 09:24:53 Dropping the secondary index s1_c1_i1
2021/03/11 09:24:53 Index dropped
2021/03/11 09:24:53 Waiting for index to go active ...
2021/03/11 09:24:54 Waiting for index to go active ...
2021/03/11 09:24:55 Waiting for index to go active ...
2021/03/11 09:24:56 Waiting for index to go active ...
2021/03/11 09:24:57 Index is now active
2021/03/11 09:24:57 Using n1ql client
2021/03/11 09:24:57 Expected and Actual scan responses are the same
2021/03/11 09:24:57 Dropping the secondary index s1_c1_i2
2021/03/11 09:24:57 Index dropped
--- PASS: TestCollectionIndexDropConcurrentBuild (5.51s)
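The test above issues a deferred build for two indexes and then drops one of them while the build is still in flight, expecting the survivor to come active. A minimal sketch of that concurrent shape, with stand-in `build`/`drop` callbacks (the real suite drives the indexer's administrative interfaces instead):

```go
package main

import (
	"fmt"
	"sync"
)

// raceDropAgainstBuild runs an index build and an index drop
// concurrently and returns the first error either reports.
func raceDropAgainstBuild(build func([]string) error, drop func(string) error,
	deferredIDs []string, victim string) error {
	var wg sync.WaitGroup
	errs := make(chan error, 2)
	wg.Add(2)
	go func() { defer wg.Done(); errs <- build(deferredIDs) }()
	go func() { defer wg.Done(); errs <- drop(victim) }()
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := raceDropAgainstBuild(
		func(ids []string) error { fmt.Println("build issued for", ids); return nil },
		func(name string) error { fmt.Println("drop issued for", name); return nil },
		[]string{"s1_c1_i1", "s1_c1_i2"}, "s1_c1_i1",
	)
	fmt.Println("err:", err)
}
```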
=== RUN   TestCollectionIndexDropConcurrentBuild2
2021/03/11 09:24:57 In TestCollectionIndexDropConcurrentBuild2()
2021/03/11 09:25:01 Created the secondary index s1_c1_i3. Waiting for it to become active
2021/03/11 09:25:01 Index is now active
2021/03/11 09:25:01 Using n1ql client
2021/03/11 09:25:02 Expected and Actual scan responses are the same
2021/03/11 09:25:02 Build command issued for the deferred indexes [10823093021868901665 6935149862219767206]
2021/03/11 09:25:03 Dropping the secondary index s1_c1_i3
2021/03/11 09:25:03 Index dropped
2021/03/11 09:25:03 Waiting for index to go active ...
2021/03/11 09:25:04 Waiting for index to go active ...
2021/03/11 09:25:05 Waiting for index to go active ...
2021/03/11 09:25:06 Index is now active
2021/03/11 09:25:06 Index is now active
2021/03/11 09:25:06 Using n1ql client
2021/03/11 09:25:06 Expected and Actual scan responses are the same
2021/03/11 09:25:06 Using n1ql client
2021/03/11 09:25:06 Expected and Actual scan responses are the same
2021/03/11 09:25:06 Dropping the secondary index s1_c1_i1
2021/03/11 09:25:06 Index dropped
2021/03/11 09:25:06 Dropping the secondary index s1_c1_i2
2021/03/11 09:25:06 Index dropped
--- PASS: TestCollectionIndexDropConcurrentBuild2 (8.84s)
=== RUN   TestCollectionDrop
2021/03/11 09:25:06 In TestCollectionDrop()
2021/03/11 09:25:10 Created the secondary index s1_c1_i1. Waiting for it to become active
2021/03/11 09:25:10 Index is now active
2021/03/11 09:25:14 Created the secondary index s1_c1_i2. Waiting for it to become active
2021/03/11 09:25:14 Index is now active
2021/03/11 09:25:18 Created the secondary index s2_c2_i1. Waiting for it to become active
2021/03/11 09:25:18 Index is now active
2021/03/11 09:25:22 Created the secondary index s2_c2_i2. Waiting for it to become active
2021/03/11 09:25:22 Index is now active
2021/03/11 09:25:26 Created the secondary index s2_c3_i1. Waiting for it to become active
2021/03/11 09:25:26 Index is now active
2021/03/11 09:25:29 Created the secondary index s2_c3_i2. Waiting for it to become active
2021/03/11 09:25:29 Index is now active
2021/03/11 09:25:33 Created the secondary index _default__default_i1. Waiting for it to become active
2021/03/11 09:25:33 Index is now active
2021/03/11 09:25:37 Created the secondary index _default__default_i2. Waiting for it to become active
2021/03/11 09:25:37 Index is now active
2021/03/11 09:25:37 Dropped collection c1 for bucket: default, scope: s1, body: {"uid":"6"}
2021/03/11 09:25:42 Using n1ql client
2021/03/11 09:25:42 Scan failed as expected with error: Index Not Found - cause: GSI index s1_c1_i1 not found.
2021/03/11 09:25:43 Dropped scope s2 for bucket: default, body: {"uid":"7"}
2021/03/11 09:25:48 Using n1ql client
2021-03-11T09:25:48.016+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:25:48.016+05:30 [Info] GSIC[default/default-s2-c1-1615434948013144640] started ...
2021/03/11 09:25:48 Scan failed as expected with error: Index Not Found - cause: GSI index s2_c1_i1 not found.
--- PASS: TestCollectionDrop (41.35s)
=== RUN   TestCollectionDDLWithConcurrentSystemEvents
2021/03/11 09:25:48 Creating scope: sc for bucket: default as it does not exist
2021/03/11 09:25:48 Create scope succeeded for bucket default, scopeName: sc, body: {"uid":"8"} 
2021/03/11 09:25:48 Created collection succeeded for bucket: default, scope: sc, collection: cc, body: {"uid":"9"}
2021/03/11 09:26:02 Created the secondary index sc_cc_i1. Waiting for it to become active
2021/03/11 09:26:02 Index is now active
2021/03/11 09:26:04 Build command issued for the deferred indexes [sc_cc_i2]
2021/03/11 09:26:04 Waiting for the index sc_cc_i2 to become active
2021/03/11 09:26:04 Waiting for index to go active ...
2021/03/11 09:26:04 Created collection succeeded for bucket: default, scope: sc, collection: cc_0, body: {"uid":"a"}
2021/03/11 09:26:05 Created collection succeeded for bucket: default, scope: sc, collection: cc_1, body: {"uid":"b"}
2021/03/11 09:26:05 Waiting for index to go active ...
2021/03/11 09:26:05 Created collection succeeded for bucket: default, scope: sc, collection: cc_2, body: {"uid":"c"}
2021/03/11 09:26:06 Created collection succeeded for bucket: default, scope: sc, collection: cc_3, body: {"uid":"d"}
2021/03/11 09:26:06 Waiting for index to go active ...
2021/03/11 09:26:06 Created collection succeeded for bucket: default, scope: sc, collection: cc_4, body: {"uid":"e"}
2021/03/11 09:26:07 Waiting for index to go active ...
2021/03/11 09:26:07 Created collection succeeded for bucket: default, scope: sc, collection: cc_5, body: {"uid":"f"}
2021/03/11 09:26:08 Waiting for index to go active ...
2021/03/11 09:26:08 Created collection succeeded for bucket: default, scope: sc, collection: cc_6, body: {"uid":"10"}
2021/03/11 09:26:09 Waiting for index to go active ...
2021/03/11 09:26:09 Created collection succeeded for bucket: default, scope: sc, collection: cc_7, body: {"uid":"11"}
2021/03/11 09:26:10 Waiting for index to go active ...
2021/03/11 09:26:10 Created collection succeeded for bucket: default, scope: sc, collection: cc_8, body: {"uid":"12"}
2021/03/11 09:26:11 Created collection succeeded for bucket: default, scope: sc, collection: cc_9, body: {"uid":"13"}
2021/03/11 09:26:11 Waiting for index to go active ...
2021/03/11 09:26:12 Waiting for index to go active ...
2021/03/11 09:26:13 Index is now active
2021/03/11 09:26:13 Using n1ql client
2021-03-11T09:26:13.345+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:26:13.345+05:30 [Info] GSIC[default/default-sc-cc-1615434973340970423] started ...
--- PASS: TestCollectionDDLWithConcurrentSystemEvents (62.53s)
=== RUN   TestCollectionDropWithMultipleBuckets
2021/03/11 09:26:50 In TestCollectionWithDropMultipleBuckets()
2021/03/11 09:26:50 This test creates a same-named collection across multiple buckets and
2021/03/11 09:26:50 then drops the collection on one bucket. The indexer should not drop indexes
2021/03/11 09:26:50 that share a CollectionID but belong to different buckets
2021/03/11 09:26:50 Creating test_bucket_1
2021/03/11 09:26:50 Created bucket test_bucket_1, responseBody: 
2021/03/11 09:27:00 Creating test_bucket_2
2021/03/11 09:27:00 Created bucket test_bucket_2, responseBody: 
2021/03/11 09:27:10 Creating collection: test for bucket: test_bucket_1
2021/03/11 09:27:10 Created collection succeeded for bucket: test_bucket_1, scope: _default, collection: test, body: {"uid":"1"}
2021/03/11 09:27:10 Creating collection: test for bucket: test_bucket_2
2021/03/11 09:27:11 Created collection succeeded for bucket: test_bucket_2, scope: _default, collection: test, body: {"uid":"1"}
2021/03/11 09:27:21 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_1
2021/03/11 09:27:25 Created the secondary index idx_1. Waiting for it to become active
2021/03/11 09:27:25 Index is now active
2021/03/11 09:27:30 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_2
2021/03/11 09:27:33 Created the secondary index idx_1. Waiting for it to become active
2021/03/11 09:27:33 Index is now active
2021/03/11 09:27:38 Dropping collection: test for bucket: test_bucket_1
2021/03/11 09:27:38 Dropped collection test for bucket: test_bucket_1, scope: _default, body: {"uid":"2"}
2021/03/11 09:27:40 Scanning index: idx_1, bucket: test_bucket_2
2021/03/11 09:27:40 Using n1ql client
2021-03-11T09:27:40.770+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:27:40.770+05:30 [Info] GSIC[default/test_bucket_2-_default-test-1615435060767077622] started ...
2021/03/11 09:27:40 Deleting bucket: test_bucket_2
2021/03/11 09:27:43 Deleted bucket test_bucket_2, responseBody: 
2021/03/11 09:27:48 Creating test_bucket_2
2021/03/11 09:27:48 Created bucket test_bucket_2, responseBody: 
2021/03/11 09:27:58 Creating collection: test for bucket: test_bucket_2
2021/03/11 09:27:58 Created collection succeeded for bucket: test_bucket_2, scope: _default, collection: test, body: {"uid":"1"}
2021/03/11 09:28:08 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_2
2021/03/11 09:28:13 Build command issued for the deferred indexes [idx_1]
2021/03/11 09:28:13 Waiting for the index idx_1 to become active
2021/03/11 09:28:13 Waiting for index to go active ...
2021/03/11 09:28:14 Waiting for index to go active ...
2021/03/11 09:28:15 Waiting for index to go active ...
2021/03/11 09:28:16 Index is now active
2021/03/11 09:28:16 Scanning index: idx_1, bucket: test_bucket_2
2021/03/11 09:28:16 Using n1ql client
2021/03/11 09:28:18 Deleted bucket test_bucket_1, responseBody: 
2021/03/11 09:28:20 Deleted bucket test_bucket_2, responseBody: 
--- PASS: TestCollectionDropWithMultipleBuckets (95.04s)
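The "Created collection succeeded ... body: {"uid":"..."}" lines above come from helpers that call the cluster manager's collections REST endpoint. A sketch of building that request; the host, port, credentials, and endpoint shape here are assumptions based on Couchbase's documented collections management API, not taken from this log:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// newCreateCollectionReq builds a POST to the ns_server collections
// endpoint: /pools/default/buckets/<bucket>/scopes/<scope>/collections
// with a form-encoded "name" parameter for the new collection.
func newCreateCollectionReq(host, bucket, scope, collection, user, pass string) (*http.Request, error) {
	u := fmt.Sprintf("http://%s/pools/default/buckets/%s/scopes/%s/collections",
		host, url.PathEscape(bucket), url.PathEscape(scope))
	body := url.Values{"name": {collection}}.Encode()
	req, err := http.NewRequest(http.MethodPost, u, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(user, pass)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	return req, nil
}

func main() {
	// Hypothetical host/credentials for illustration only.
	req, err := newCreateCollectionReq("127.0.0.1:9000", "test_bucket_1", "_default", "test", "Administrator", "password")
	fmt.Println(req.Method, req.URL.Path, err)
}
```

The success body in the log ({"uid":"N"}) is the manifest UID after the change, which is why it increments with every scope/collection DDL in the run above.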
=== RUN   TestStringToByteSlice
--- PASS: TestStringToByteSlice (0.00s)
=== RUN   TestStringToByteSlice_Stack
--- PASS: TestStringToByteSlice_Stack (1.29s)
=== RUN   TestByteSliceToString
--- PASS: TestByteSliceToString (0.00s)
=== RUN   TestBytesToString_WithUnusedBytes
--- PASS: TestBytesToString_WithUnusedBytes (0.00s)
=== RUN   TestStringHeadersCompatible
--- PASS: TestStringHeadersCompatible (0.00s)
=== RUN   TestSliceHeadersCompatible
--- PASS: TestSliceHeadersCompatible (0.00s)
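The TestStringToByteSlice/TestByteSliceToString/TestStringHeadersCompatible cases above exercise zero-copy conversions between strings and byte slices. An illustrative sketch of such conversions (the suite's own helpers may differ); requires Go 1.20+ for `unsafe.String`/`unsafe.StringData`:

```go
package main

import (
	"fmt"
	"unsafe"
)

// BytesToString returns a string that aliases b's backing array with
// no copy. b must not be mutated afterwards, since strings are
// expected to be immutable.
func BytesToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// StringToBytes returns a byte slice aliasing s's data with no copy.
// The returned slice must never be written to.
func StringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	fmt.Println(BytesToString([]byte("indexer")))
	fmt.Println(len(StringToBytes("scan")))
}
```

These tricks matter in an indexer because key encoding/decoding is on the hot path, and avoiding the copy avoids allocation per key.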
=== RUN   TestIdxCorruptBasicSanityMultipleIndices
2021/03/11 09:28:26 In DropAllSecondaryIndexes()
2021/03/11 09:28:26 Index found:  sc_cc_i2
2021/03/11 09:28:26 Dropped index sc_cc_i2
2021/03/11 09:28:26 Index found:  _default__default_i1
2021/03/11 09:28:27 Dropped index _default__default_i1
2021/03/11 09:28:27 Index found:  sc_cc_i1
2021/03/11 09:28:27 Dropped index sc_cc_i1
2021/03/11 09:28:27 Index found:  _default__default_i2
2021/03/11 09:28:27 Dropped index _default__default_i2
Creating two indices ...
2021/03/11 09:28:40 Created the secondary index corrupt_idx1_age. Waiting for it to become active
2021/03/11 09:28:40 Index is now active
2021/03/11 09:28:45 Created the secondary index corrupt_idx2_company. Waiting for it to become active
2021/03/11 09:28:45 Index is now active
hosts = [127.0.0.1:9108]
2021/03/11 09:28:45 Corrupting index corrupt_idx1_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx1_age_4928186725197819369_0.index
2021/03/11 09:28:45 Corrupting index corrupt_idx1_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx1_age_4928186725197819369_0.index/mainIndex/error
Restarting indexer process ...
2021/03/11 09:28:45 []
2021-03-11T09:28:45.832+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:28:45.832+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:28:45.832+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:28:45.832+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 09:29:06 Using n1ql client
2021-03-11T09:29:06.790+05:30 [Error] transport error between 127.0.0.1:34368->127.0.0.1:9107: write tcp 127.0.0.1:34368->127.0.0.1:9107: write: broken pipe
2021-03-11T09:29:06.790+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 2195215023536148321 request transport failed `write tcp 127.0.0.1:34368->127.0.0.1:9107: write: broken pipe`
2021-03-11T09:29:06.790+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:29:06 Using n1ql client
--- PASS: TestIdxCorruptBasicSanityMultipleIndices (39.93s)
=== RUN   TestIdxCorruptPartitionedIndex
Creating partitioned index ...
2021/03/11 09:29:10 Created the secondary index corrupt_idx3_age. Waiting for it to become active
2021/03/11 09:29:10 Index is now active
hosts = [127.0.0.1:9108]
indexer.numPartitions = 8
Corrupting partn id 1
Getting slicepath for  1
slicePath for partn 1 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_1.index
Getting slicepath for  2
slicePath for partn 2 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_2.index
Getting slicepath for  3
slicePath for partn 3 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_3.index
Getting slicepath for  4
slicePath for partn 4 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_4.index
Getting slicepath for  5
slicePath for partn 5 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_5.index
Getting slicepath for  6
slicePath for partn 6 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_6.index
Getting slicepath for  7
slicePath for partn 7 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_7.index
Getting slicepath for  8
slicePath for partn 8 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_8.index
2021/03/11 09:29:10 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_1.index
2021/03/11 09:29:10 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_1.index/mainIndex/error
Restarting indexer process ...
2021/03/11 09:29:11 []
2021-03-11T09:29:11.050+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:11.050+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:29:11.050+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:11.050+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 09:29:31 Using n1ql client
2021-03-11T09:29:31.019+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2021-03-11T09:29:31.029+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
Scan error: All indexer replicas are down, unavailable, or unable to process the request - cause: queryport.client.noHost
Verified single partition corruption
Restarting indexer process ...
2021/03/11 09:29:31 []
2021-03-11T09:29:31.072+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:31.073+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:31.073+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:29:31.078+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 09:29:51 Using n1ql client
2021-03-11T09:29:51.050+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2021-03-11T09:29:51.060+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
Scan error: All indexer replicas are down, unavailable, or unable to process the request - cause: queryport.client.noHost
Skip corrupting partition 1
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_2.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_2.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_3.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_3.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_4.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_4.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_5.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_5.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_6.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_6.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_7.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_7.index/mainIndex/error
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_8.index
2021/03/11 09:29:51 Corrupting index corrupt_idx3_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_11087981632504060444_8.index/mainIndex/error
Restarting indexer process ...
2021/03/11 09:29:51 []
2021-03-11T09:29:51.117+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:51.118+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:29:51.118+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:29:51.119+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021/03/11 09:30:11 Using n1ql client
--- PASS: TestIdxCorruptPartitionedIndex (64.29s)
=== RUN   TestIdxCorruptMOITwoSnapsOneCorrupt
Not running TestMOITwoSnapsOneCorrupt for plasma
--- PASS: TestIdxCorruptMOITwoSnapsOneCorrupt (0.00s)
=== RUN   TestIdxCorruptMOITwoSnapsBothCorrupt
Not running TestMOITwoSnapsBothCorrupt for plasma
--- PASS: TestIdxCorruptMOITwoSnapsBothCorrupt (0.00s)
=== RUN   TestIdxCorruptBackup
2021/03/11 09:30:11 Changing config key indexer.settings.enable_corrupt_index_backup to value true
Creating index ...
2021-03-11T09:30:11.241+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:11.241+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:11.251+05:30 [Info] switched currmeta from 2408 -> 2413 force true 
2021-03-11T09:30:11.264+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:11.264+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:11.278+05:30 [Info] switched currmeta from 2405 -> 2410 force true 
2021-03-11T09:30:11.367+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:11.367+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:11.398+05:30 [Info] switched currmeta from 2413 -> 2413 force true 
2021-03-11T09:30:11.413+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:11.413+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:11.427+05:30 [Info] switched currmeta from 2410 -> 2410 force true 
2021-03-11T09:30:14.177+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:14.177+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:14.182+05:30 [Info] switched currmeta from 2410 -> 2411 force true 
2021-03-11T09:30:14.185+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:14.185+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:14.190+05:30 [Info] switched currmeta from 2413 -> 2414 force true 
2021-03-11T09:30:15.145+05:30 [Info] CreateIndex 17007251625392695295 default _default _default/corrupt_idx6_age using:plasma exprType:N1QL whereExpr:() secExprs:([`age`]) desc:[false] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.029732353s) err()
2021/03/11 09:30:15 Created the secondary index corrupt_idx6_age. Waiting for it to become active
2021-03-11T09:30:15.145+05:30 [Info] metadata provider version changed 2414 -> 2415
2021-03-11T09:30:15.145+05:30 [Info] switched currmeta from 2414 -> 2415 force false 
2021/03/11 09:30:15 Index is now active
2021-03-11T09:30:15.231+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:15.231+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:15.237+05:30 [Info] switched currmeta from 2415 -> 2415 force true 
hosts = [127.0.0.1:9108]
2021-03-11T09:30:15.239+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:15.239+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:15.246+05:30 [Info] switched currmeta from 2411 -> 2412 force true 
2021/03/11 09:30:15 Corrupting index corrupt_idx6_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx6_age_1107214026804307180_0.index
2021/03/11 09:30:15 Corrupting index corrupt_idx6_age mainIndexErrFilePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx6_age_1107214026804307180_0.index/mainIndex/error
Restarting indexer process ...
2021/03/11 09:30:15 []
2021-03-11T09:30:15.305+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:30:15.306+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:30:15.306+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:30:15.306+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:30:18.049+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:30:18.155+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:18.155+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:18.171+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:18.171+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:18.173+05:30 [Info] switched currmeta from 2415 -> 2415 force true 
2021-03-11T09:30:18.175+05:30 [Info] switched currmeta from 2412 -> 2412 force true 
2021-03-11T09:30:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T09:30:19.421+05:30 [Info] client stats current counts: current: 2, not current: 0
2021-03-11T09:30:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T09:30:19.426+05:30 [Info] average scan response {1 ms}
2021-03-11T09:30:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T09:30:20.065+05:30 [Info] client stats current counts: current: 2, not current: 0
2021-03-11T09:30:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T09:30:20.068+05:30 [Info] average scan response {2331 ms}
2021-03-11T09:30:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":740,"gsi_scan_duration":6734240374,"gsi_throttle_duration":564512398,"gsi_prime_duration":2568708935,"gsi_blocked_duration":212531860,"gsi_total_temp_files":0}
2021-03-11T09:30:23.005+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:30:23.061+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:23.062+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:23.063+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:23.063+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:23.065+05:30 [Info] switched currmeta from 2415 -> 2415 force true 
2021-03-11T09:30:23.066+05:30 [Info] switched currmeta from 2412 -> 2412 force true 
--- PASS: TestIdxCorruptBackup (24.18s)
=== RUN   TestStatsPersistence
2021/03/11 09:30:35 In TestStatsPersistence()
2021/03/11 09:30:35 In DropAllSecondaryIndexes()
2021/03/11 09:30:35 Index found:  corrupt_idx6_age
2021-03-11T09:30:35.276+05:30 [Info] DropIndex 17007251625392695295 ...
2021-03-11T09:30:37.334+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T09:30:37.362+05:30 [Info] [Queryport-connpool:127.0.0.1:9107] active conns 0, free conns 1
2021-03-11T09:30:45.311+05:30 [Info] metadata provider version changed 2415 -> 2422
2021-03-11T09:30:45.311+05:30 [Info] switched currmeta from 2415 -> 2422 force false 
2021-03-11T09:30:45.311+05:30 [Info] DropIndex 17007251625392695295 - elapsed(10.034376956s), err()
2021/03/11 09:30:45 Dropped index corrupt_idx6_age
2021/03/11 09:30:45 Index found:  corrupt_idx2_company
2021-03-11T09:30:45.311+05:30 [Info] DropIndex 8375875742660852254 ...
2021-03-11T09:30:45.378+05:30 [Info] metadata provider version changed 2422 -> 2424
2021-03-11T09:30:45.378+05:30 [Info] switched currmeta from 2422 -> 2424 force false 
2021-03-11T09:30:45.378+05:30 [Info] DropIndex 8375875742660852254 - elapsed(67.527476ms), err()
2021/03/11 09:30:45 Dropped index corrupt_idx2_company
2021-03-11T09:30:45.541+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:45.541+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:45.552+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:45.552+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:45.554+05:30 [Info] switched currmeta from 2424 -> 2424 force true 
2021-03-11T09:30:45.563+05:30 [Info] switched currmeta from 2412 -> 2421 force true 
2021-03-11T09:30:45.632+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:45.632+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:45.639+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:45.639+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:45.640+05:30 [Info] switched currmeta from 2424 -> 2424 force true 
2021-03-11T09:30:45.646+05:30 [Info] switched currmeta from 2421 -> 2421 force true 
2021-03-11T09:30:46.295+05:30 [Info] CreateIndex default _default _default index_age ...
2021-03-11T09:30:46.295+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T09:30:46.296+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_age)
2021-03-11T09:30:46.296+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_age
2021-03-11T09:30:46.296+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T09:30:46.389+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.389+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.390+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.391+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.394+05:30 [Info] switched currmeta from 2421 -> 2425 force true 
2021-03-11T09:30:46.401+05:30 [Info] switched currmeta from 2424 -> 2428 force true 
2021-03-11T09:30:46.509+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.509+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.524+05:30 [Info] switched currmeta from 2425 -> 2426 force true 
2021-03-11T09:30:46.534+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.534+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.572+05:30 [Info] switched currmeta from 2428 -> 2429 force true 
2021-03-11T09:30:46.656+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.656+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.665+05:30 [Info] switched currmeta from 2426 -> 2426 force true 
2021-03-11T09:30:46.670+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:46.670+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:46.675+05:30 [Info] switched currmeta from 2429 -> 2429 force true 
2021-03-11T09:30:49.195+05:30 [Info] CreateIndex 1292180444572655459 default _default _default/index_age using:plasma exprType:N1QL whereExpr:() secExprs:([`age`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(2.900549876s) err()
2021/03/11 09:30:49 Created the secondary index index_age. Waiting for it to become active
2021-03-11T09:30:49.195+05:30 [Info] metadata provider version changed 2429 -> 2430
2021-03-11T09:30:49.196+05:30 [Info] switched currmeta from 2429 -> 2430 force false 
2021/03/11 09:30:49 Index is now active
2021-03-11T09:30:49.196+05:30 [Info] CreateIndex default _default _default index_gender ...
2021-03-11T09:30:49.196+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T09:30:49.201+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_gender)
2021-03-11T09:30:49.202+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_gender
2021-03-11T09:30:49.202+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T09:30:49.319+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.319+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.336+05:30 [Info] switched currmeta from 2426 -> 2431 force true 
2021-03-11T09:30:49.359+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.359+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.378+05:30 [Info] switched currmeta from 2430 -> 2434 force true 
2021-03-11T09:30:49.522+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.522+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.525+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.525+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.541+05:30 [Info] switched currmeta from 2431 -> 2432 force true 
2021-03-11T09:30:49.541+05:30 [Info] switched currmeta from 2434 -> 2435 force true 
2021-03-11T09:30:49.703+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.703+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.705+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:49.705+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:49.712+05:30 [Info] switched currmeta from 2435 -> 2435 force true 
2021-03-11T09:30:49.718+05:30 [Info] switched currmeta from 2432 -> 2432 force true 
2021-03-11T09:30:53.027+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:30:53.118+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:53.118+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:53.130+05:30 [Info] switched currmeta from 2435 -> 2435 force true 
2021-03-11T09:30:53.145+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:53.145+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:53.164+05:30 [Info] switched currmeta from 2432 -> 2433 force true 
2021-03-11T09:30:53.224+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:53.224+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:53.224+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:53.224+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:53.227+05:30 [Info] switched currmeta from 2433 -> 2433 force true 
2021-03-11T09:30:53.227+05:30 [Info] switched currmeta from 2435 -> 2436 force true 
2021-03-11T09:30:54.016+05:30 [Info] CreateIndex 7162194332155260017 default _default _default/index_gender using:plasma exprType:N1QL whereExpr:() secExprs:([`gender`]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(4.819752879s) err()
2021/03/11 09:30:54 Created the secondary index index_gender. Waiting for it to become active
2021-03-11T09:30:54.016+05:30 [Info] metadata provider version changed 2436 -> 2437
2021-03-11T09:30:54.016+05:30 [Info] switched currmeta from 2436 -> 2437 force false 
2021/03/11 09:30:54 Index is now active
2021-03-11T09:30:54.016+05:30 [Info] CreateIndex default _default _default index_city ...
2021-03-11T09:30:54.017+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T09:30:54.017+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, index_city)
2021-03-11T09:30:54.017+05:30 [Info] Skipping planner for creation of the index default:_default:_default:index_city
2021-03-11T09:30:54.017+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T09:30:54.087+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.087+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.094+05:30 [Info] switched currmeta from 2437 -> 2439 force true 
2021-03-11T09:30:54.120+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.120+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.129+05:30 [Info] switched currmeta from 2433 -> 2438 force true 
2021-03-11T09:30:54.179+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.179+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.190+05:30 [Info] switched currmeta from 2439 -> 2442 force true 
2021-03-11T09:30:54.236+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.236+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.249+05:30 [Info] switched currmeta from 2438 -> 2439 force true 
2021-03-11T09:30:54.293+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.293+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.306+05:30 [Info] switched currmeta from 2442 -> 2442 force true 
2021-03-11T09:30:54.347+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:54.347+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:54.358+05:30 [Info] switched currmeta from 2439 -> 2439 force true 
2021-03-11T09:30:56.828+05:30 [Info] Rollback time has changed for index inst 7551396359686226707. New rollback time 1615435216052358722
2021-03-11T09:30:56.828+05:30 [Info] Rollback time has changed for index inst 7551396359686226707. New rollback time 1615435216052358722
2021-03-11T09:30:57.015+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:57.015+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:57.020+05:30 [Info] switched currmeta from 2442 -> 2443 force true 
2021-03-11T09:30:57.027+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:57.027+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:57.032+05:30 [Info] switched currmeta from 2439 -> 2440 force true 
2021-03-11T09:30:57.766+05:30 [Info] CreateIndex 12846094285784404326 default _default _default/index_city using:plasma exprType:N1QL whereExpr:() secExprs:([(`address`.`city`)]) desc:[] isPrimary:false scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.750232694s) err()
2021/03/11 09:30:57 Created the secondary index index_city. Waiting for it to become active
2021-03-11T09:30:57.766+05:30 [Info] metadata provider version changed 2443 -> 2444
2021-03-11T09:30:57.767+05:30 [Info] switched currmeta from 2443 -> 2444 force false 
2021/03/11 09:30:57 Index is now active
2021-03-11T09:30:57.767+05:30 [Info] CreateIndex default _default _default p1 ...
2021-03-11T09:30:57.767+05:30 [Info] send prepare create request to watcher 127.0.0.1:9106
2021-03-11T09:30:57.774+05:30 [Info] Indexer 127.0.0.1:9106 accept prepare request. Index (default, _default, _default, p1)
2021-03-11T09:30:57.774+05:30 [Info] Skipping planner for creation of the index default:_default:_default:p1
2021-03-11T09:30:57.774+05:30 [Info] send commit create request to watcher 127.0.0.1:9106
2021-03-11T09:30:57.883+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:57.883+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:57.890+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:57.890+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:57.897+05:30 [Info] switched currmeta from 2440 -> 2446 force true 
2021-03-11T09:30:57.907+05:30 [Info] switched currmeta from 2444 -> 2449 force true 
2021-03-11T09:30:58.062+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:58.062+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:58.066+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:58.066+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:58.072+05:30 [Info] switched currmeta from 2449 -> 2449 force true 
2021-03-11T09:30:58.093+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:30:58.094+05:30 [Info] switched currmeta from 2446 -> 2446 force true 
2021-03-11T09:30:58.179+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:58.179+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:58.185+05:30 [Info] switched currmeta from 2449 -> 2449 force true 
2021-03-11T09:30:58.190+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:30:58.190+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:30:58.193+05:30 [Info] switched currmeta from 2446 -> 2446 force true 
2021-03-11T09:31:00.832+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:00.833+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:00.834+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:00.834+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:00.838+05:30 [Info] switched currmeta from 2446 -> 2447 force true 
2021-03-11T09:31:00.839+05:30 [Info] switched currmeta from 2449 -> 2450 force true 
2021-03-11T09:31:01.442+05:30 [Info] CreateIndex 11537249399573524760 default _default _default/p1 using:plasma exprType:N1QL whereExpr:() secExprs:([]) desc:[] isPrimary:true scheme:SINGLE  partitionKeys:([]) with: - elapsed(3.675310714s) err()
2021/03/11 09:31:01 Created the secondary index p1. Waiting for it to become active
2021-03-11T09:31:01.442+05:30 [Info] metadata provider version changed 2450 -> 2451
2021-03-11T09:31:01.442+05:30 [Info] switched currmeta from 2450 -> 2451 force false 
2021/03/11 09:31:01 Index is now active
2021/03/11 09:31:01 === Testing for persistence enabled = true, with interval = 5  ===
2021-03-11T09:31:01.442+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2021/03/11 09:31:01 Changing config key indexer.statsPersistenceInterval to value 5
2021-03-11T09:31:01.532+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:01.532+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:01.537+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:01.537+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:01.544+05:30 [Info] switched currmeta from 2451 -> 2451 force true 
2021-03-11T09:31:01.548+05:30 [Info] switched currmeta from 2447 -> 2448 force true 
2021-03-11T09:31:01.584+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":true,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_rollbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_
snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"plasma","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.wal_size":4096,"indexer.statsPersistenceInterval":5,"projector.settings.log_level":"info","queryport.client.settings.backfillLimit":0,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2021/03/11 09:31:01 []
2021-03-11T09:31:01.661+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:31:01.661+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:31:01.661+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:31:01.662+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:31:03.026+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:31:03.101+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:03.101+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:03.104+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:03.104+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:03.108+05:30 [Info] switched currmeta from 2451 -> 2451 force true 
2021-03-11T09:31:03.108+05:30 [Info] switched currmeta from 2448 -> 2448 force true 
2021/03/11 09:31:06 Using n1ql client
2021-03-11T09:31:06.625+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:31:06.626+05:30 [Error] transport error between 127.0.0.1:40486->127.0.0.1:9107: write tcp 127.0.0.1:40486->127.0.0.1:9107: write: broken pipe
2021-03-11T09:31:06.626+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 8442477804973859223 request transport failed `write tcp 127.0.0.1:40486->127.0.0.1:9107: write: broken pipe`
2021-03-11T09:31:06.626+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:31:06 Using n1ql client
2021/03/11 09:31:06 Using n1ql client
2021/03/11 09:31:13 []
2021/03/11 09:31:18 === Testing for persistence enabled = false, with interval = 0  ===
2021/03/11 09:31:18 Changing config key indexer.statsPersistenceInterval to value 0
2021/03/11 09:31:18 []
2021-03-11T09:31:19.421+05:30 [Info] connected with 1 indexers
2021-03-11T09:31:19.421+05:30 [Info] client stats current counts: current: 4, not current: 0
2021-03-11T09:31:19.426+05:30 [Info] num concurrent scans {0}
2021-03-11T09:31:19.426+05:30 [Info] average scan response {1 ms}
2021-03-11T09:31:20.065+05:30 [Info] connected with 1 indexers
2021-03-11T09:31:20.065+05:30 [Info] client stats current counts: current: 4, not current: 0
2021-03-11T09:31:20.068+05:30 [Info] num concurrent scans {0}
2021-03-11T09:31:20.068+05:30 [Info] average scan response {315 ms}
2021-03-11T09:31:22.933+05:30 [Info] GSIC[default/default-_default-_default-1615431262913458279] logstats "default" {"gsi_scan_count":743,"gsi_scan_duration":6828250438,"gsi_throttle_duration":569338351,"gsi_prime_duration":2582006626,"gsi_blocked_duration":215949819,"gsi_total_temp_files":0}
2021/03/11 09:31:23 Using n1ql client
2021-03-11T09:31:23.960+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:31:23.961+05:30 [Error] transport error between 127.0.0.1:43320->127.0.0.1:9107: write tcp 127.0.0.1:43320->127.0.0.1:9107: write: broken pipe
2021-03-11T09:31:23.961+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 3000773359535818474 request transport failed `write tcp 127.0.0.1:43320->127.0.0.1:9107: write: broken pipe`
2021-03-11T09:31:23.961+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:31:23 Using n1ql client
2021/03/11 09:31:24 Using n1ql client
2021/03/11 09:31:26 []
2021/03/11 09:31:31 === Testing for persistence enabled = true, with interval = 10  ===
2021/03/11 09:31:31 Changing config key indexer.statsPersistenceInterval to value 10
2021/03/11 09:31:31 []
2021-03-11T09:31:31.662+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021-03-11T09:31:31.662+05:30 [Error] WatcherServer.runOnce() error : dial tcp 127.0.0.1:9106: connect: connection refused
2021/03/11 09:31:36 Using n1ql client
2021-03-11T09:31:36.268+05:30 [Error] transport error between 127.0.0.1:43890->127.0.0.1:9107: write tcp 127.0.0.1:43890->127.0.0.1:9107: write: broken pipe
2021-03-11T09:31:36.268+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 8748058388406457504 request transport failed `write tcp 127.0.0.1:43890->127.0.0.1:9107: write: broken pipe`
2021-03-11T09:31:36.268+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:31:36 Using n1ql client
2021/03/11 09:31:36 Using n1ql client
2021/03/11 09:31:48 []
2021/03/11 09:31:53 === Testing for persistence enabled = true, with interval = 5  ===
2021/03/11 09:31:53 Changing config key indexer.statsPersistenceInterval to value 5
2021/03/11 09:31:53 []
2021-03-11T09:31:58.039+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2021-03-11T09:31:58.081+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:58.081+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:58.082+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2021-03-11T09:31:58.082+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2021-03-11T09:31:58.083+05:30 [Info] switched currmeta from 2451 -> 2451 force true 
2021-03-11T09:31:58.085+05:30 [Info] switched currmeta from 2448 -> 2448 force true 
2021/03/11 09:31:58 Using n1ql client
2021-03-11T09:31:58.708+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2021-03-11T09:31:58.708+05:30 [Error] transport error between 127.0.0.1:44306->127.0.0.1:9107: write tcp 127.0.0.1:44306->127.0.0.1:9107: write: broken pipe
2021-03-11T09:31:58.708+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -2285089781385770032 request transport failed `write tcp 127.0.0.1:44306->127.0.0.1:9107: write: broken pipe`
2021-03-11T09:31:58.709+05:30 [Error] PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2021/03/11 09:31:58 Using n1ql client
2021/03/11 09:31:58 Using n1ql client
2021/03/11 09:32:05 []
2021-03-11T09:32:05.845+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:32:05.846+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2021-03-11T09:32:05.846+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2021-03-11T09:32:05.846+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
--- PASS: TestStatsPersistence (95.58s)
=== RUN   TestStats_StorageStatistics
2021/03/11 09:32:10 In TestStats_StorageStatistics()
2021/03/11 09:32:10 Index found:  index_age
2021/03/11 09:32:10 Stats from Index4 StorageStatistics for index index_age are [map[AVG_ITEM_SIZE:0 AVG_PAGE_SIZE:0 NUM_DELETE:0 NUM_INSERT:0 NUM_ITEMS:5000 NUM_PAGES:15 PARTITION_ID:0 RESIDENT_RATIO:0]]
2021/03/11 09:32:10 Index found:  p1
2021/03/11 09:32:11 Stats from Index4 StorageStatistics for index p1 are [map[AVG_ITEM_SIZE:0 AVG_PAGE_SIZE:0 NUM_DELETE:0 NUM_INSERT:0 NUM_ITEMS:5000 NUM_PAGES:15 PARTITION_ID:0 RESIDENT_RATIO:0]]
--- PASS: TestStats_StorageStatistics (0.20s)
FAIL
exit status 1
FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	7911.874s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_functests_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
2021/03/11 09:32:14 In TestMain()
2021/03/11 09:32:14 Changing config key indexer.api.enableTestServer to value true
2021/03/11 09:32:14 Using plasma for creating indexes
2021/03/11 09:32:14 Changing config key indexer.settings.storage_mode to value plasma
=== RUN   TestRangeWithConcurrentAddMuts
2021/03/11 09:32:19 In TestRangeWithConcurrentAddMuts()
2021/03/11 09:32:19 In DropAllSecondaryIndexes()
2021/03/11 09:32:19 Index found:  p1
2021/03/11 09:32:20 Dropped index p1
2021/03/11 09:32:20 Index found:  index_city
2021/03/11 09:32:20 Dropped index index_city
2021/03/11 09:32:20 Index found:  index_age
2021/03/11 09:32:20 Dropped index index_age
2021/03/11 09:32:20 Index found:  index_gender
2021/03/11 09:32:20 Dropped index index_gender
2021/03/11 09:32:20 Generating JSON docs
2021/03/11 09:32:20 Setting initial JSON docs in KV
2021/03/11 09:32:20 All indexers are active
2021/03/11 09:32:20 Creating a 2i
2021/03/11 09:32:23 Created the secondary index index_company. Waiting for it to become active
2021/03/11 09:32:23 Index is now active
2021/03/11 09:32:23 In Range Scan for Thread 1: 
2021/03/11 09:32:23 CreateDocs:: Creating mutations
2021/03/11 09:32:23 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
--- PASS: TestRangeWithConcurrentAddMuts (124.13s)
=== RUN   TestRangeWithConcurrentDelMuts
2021/03/11 09:34:24 In TestRangeWithConcurrentDelMuts()
2021-03-11T09:34:24.128+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:38406->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 09:34:24 Generating JSON docs
2021/03/11 09:34:26 Setting initial JSON docs in KV
2021/03/11 09:34:43 All indexers are active
2021/03/11 09:34:43 Creating a 2i
2021/03/11 09:34:43 Index found:  index_company
2021/03/11 09:34:43 In Range Scan for Thread 1: 
2021/03/11 09:34:43 CreateDocs:: Delete mutations
2021/03/11 09:34:44 ListAllSecondaryIndexes() for Thread 1: Index index_company Bucket default
--- PASS: TestRangeWithConcurrentDelMuts (140.09s)
2021-03-11T09:36:44.213+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:40414->127.0.0.1:9106: use of closed network connection. Kill Pipe.
=== RUN   TestScanWithConcurrentIndexOps
2021/03/11 09:36:44 In TestScanWithConcurrentIndexOps()
2021/03/11 09:36:44 Generating JSON docs
2021/03/11 09:36:51 Setting initial JSON docs in KV
2021/03/11 09:37:47 All indexers are active
2021/03/11 09:37:47 Creating a 2i
2021/03/11 09:37:47 Index found:  index_company
2021/03/11 09:37:47 In Range Scan for Thread 1: 
2021/03/11 09:37:47 Create and Drop index operations
2021/03/11 09:37:47 ListAllSecondaryIndexes() for Thread 1: Index index_company Bucket default
2021/03/11 09:37:47 ListAllSecondaryIndexes() for Thread 1: Index index_age Bucket default
2021/03/11 09:38:06 Created the secondary index index_age. Waiting for it to become active
2021/03/11 09:38:06 Index is now active
2021/03/11 09:38:25 Created the secondary index index_firstname. Waiting for it to become active
2021/03/11 09:38:25 Index is now active
2021/03/11 09:38:27 Dropping the secondary index index_age
2021/03/11 09:38:27 Index dropped
2021/03/11 09:38:28 Dropping the secondary index index_firstname
2021/03/11 09:38:28 Index dropped
2021/03/11 09:38:48 Created the secondary index index_age. Waiting for it to become active
2021/03/11 09:38:48 Index is now active
2021/03/11 09:39:07 Created the secondary index index_firstname. Waiting for it to become active
2021/03/11 09:39:07 Index is now active
2021/03/11 09:39:08 Dropping the secondary index index_age
2021/03/11 09:39:08 Index dropped
2021/03/11 09:39:09 Dropping the secondary index index_firstname
2021/03/11 09:39:09 Index dropped
2021/03/11 09:39:29 Created the secondary index index_age. Waiting for it to become active
2021/03/11 09:39:29 Index is now active
2021-03-11T09:39:48.046+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:46948->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 09:39:48 Created the secondary index index_firstname. Waiting for it to become active
2021/03/11 09:39:48 Index is now active
2021/03/11 09:39:49 Dropping the secondary index index_age
2021/03/11 09:39:49 Index dropped
2021/03/11 09:39:50 Dropping the secondary index index_firstname
2021/03/11 09:39:50 Index dropped
--- PASS: TestScanWithConcurrentIndexOps (187.62s)
=== RUN   TestConcurrentScans_SameIndex
2021/03/11 09:39:51 In TestConcurrentScans_SameIndex()
2021/03/11 09:39:51 Generating JSON docs
2021-03-11T09:39:51.836+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:46946->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 09:39:58 Setting initial JSON docs in KV
2021/03/11 09:40:55 All indexers are active
2021/03/11 09:40:55 Creating a 2i
2021/03/11 09:40:55 Index found:  index_company
2021/03/11 09:40:55 In Range Scan for Thread 6: 
2021/03/11 09:40:55 In Range Scan for Thread 5: 
2021/03/11 09:40:55 In Range Scan for Thread 1: 
2021/03/11 09:40:55 In Range Scan for Thread 2: 
2021/03/11 09:40:55 In Range Scan for Thread 3: 
2021/03/11 09:40:55 In Range Scan for Thread 4: 
2021/03/11 09:40:55 ListAllSecondaryIndexes() for Thread 6: Index index_company Bucket default
2021/03/11 09:40:55 ListAllSecondaryIndexes() for Thread 4: Index index_company Bucket default
2021/03/11 09:40:56 ListAllSecondaryIndexes() for Thread 3: Index index_company Bucket default
2021/03/11 09:40:56 ListAllSecondaryIndexes() for Thread 5: Index index_company Bucket default
2021/03/11 09:40:56 ListAllSecondaryIndexes() for Thread 2: Index index_company Bucket default
2021/03/11 09:40:56 ListAllSecondaryIndexes() for Thread 1: Index index_company Bucket default
2021-03-11T09:42:56.191+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47620->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T09:42:56.521+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47626->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T09:42:56.522+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47624->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T09:42:56.945+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47616->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T09:42:56.966+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47622->127.0.0.1:9106: use of closed network connection. Kill Pipe.
--- PASS: TestConcurrentScans_SameIndex (185.55s)
=== RUN   TestConcurrentScans_MultipleIndexes
2021/03/11 09:42:57 In TestConcurrentScans_MultipleIndexes()
2021/03/11 09:42:57 Generating JSON docs
2021-03-11T09:42:57.383+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:47614->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 09:43:03 Setting initial JSON docs in KV
2021/03/11 09:44:00 All indexers are active
2021/03/11 09:44:00 Creating multiple indexes
2021/03/11 09:44:00 Index found:  index_company
2021/03/11 09:44:26 Created the secondary index index_age. Waiting for it to become active
2021/03/11 09:44:26 Index is now active
2021/03/11 09:44:50 Created the secondary index index_firstname. Waiting for it to become active
2021/03/11 09:44:50 Index is now active
2021/03/11 09:44:50 In Range Scan for Thread 3: 
2021/03/11 09:44:50 In Range Scan for Thread 1: 
2021/03/11 09:44:50 In Range Scan
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 1: Index index_company Bucket default
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 1: Index index_age Bucket default
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 1: Index index_firstname Bucket default
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 3: Index index_firstname Bucket default
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 3: Index index_company Bucket default
2021/03/11 09:44:51 ListAllSecondaryIndexes() for Thread 3: Index index_age Bucket default
2021-03-11T09:46:51.306+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:48136->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021-03-11T09:46:51.648+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:48138->127.0.0.1:9106: use of closed network connection. Kill Pipe.
--- PASS: TestConcurrentScans_MultipleIndexes (234.47s)
=== RUN   TestMutationsWithMultipleIndexBuilds
2021/03/11 09:46:51 In TestMutationsWithMultipleIndexBuilds()
2021/03/11 09:46:51 In DropAllSecondaryIndexes()
2021/03/11 09:46:51 Index found:  index_company
2021-03-11T09:46:51.847+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:48140->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2021/03/11 09:46:51 Dropped index index_company
2021/03/11 09:46:51 Index found:  index_age
2021/03/11 09:46:52 Dropped index index_age
2021/03/11 09:46:52 Index found:  index_firstname
2021/03/11 09:46:52 Dropped index index_firstname
2021/03/11 09:46:52 Generating JSON docs
2021/03/11 09:46:58 Setting initial JSON docs in KV
2021/03/11 09:48:02 Created the secondary index index_primary. Waiting for it to become active
2021/03/11 09:48:02 Index is now active
2021/03/11 09:48:02 Creating multiple indexes in deferred mode
2021/03/11 09:48:03 Build Indexes and wait for indexes to become active: [index_company index_age index_firstname index_lastname]
2021/03/11 09:48:03 Build command issued for the deferred indexes [index_company index_age index_firstname index_lastname]
2021/03/11 09:48:03 Waiting for the index index_company to become active
2021/03/11 09:48:03 Waiting for index to go active ...
[... "Waiting for index to go active ..." repeated once per second through 09:49:01 ...]
2021/03/11 09:49:02 Index is now active
2021/03/11 09:49:02 Waiting for the index index_age to become active
2021/03/11 09:49:02 Index is now active
2021/03/11 09:49:02 Waiting for the index index_firstname to become active
2021/03/11 09:49:02 Index is now active
2021/03/11 09:49:02 Waiting for the index index_lastname to become active
2021/03/11 09:49:02 Index is now active
--- PASS: TestMutationsWithMultipleIndexBuilds (130.55s)
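The deferred-build sequence above reduces to: issue one build command for all deferred indexes, then poll each index's status once per second until it reports active. A sketch with a stubbed status source; the real harness queries the indexer's REST status endpoint, and all names here are illustrative:

```python
import itertools

def wait_for_index_active(name, get_status, max_polls=120):
    """Poll get_status(name) until it reports 'active' or max_polls is exhausted."""
    for _ in range(max_polls):
        if get_status(name) == "active":
            return True
        # the real harness sleeps ~1 s between polls, matching the cadence in the log
    return False

# Stub status source: the index becomes active on the third poll.
states = itertools.chain(["pending", "building"], itertools.repeat("active"))
became_active = wait_for_index_active("index_company", lambda _: next(states))
```

Note how in the log only index_company is actually polled; by the time it goes active, the other three indexes built in the same batch are already active too.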
PASS
ok  	github.com/couchbase/indexing/secondary/tests/largedatatests	1007.669s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_largedata_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)

Integration tests

echo "Running gsi integration tests with 4 node cluster"
Running gsi integration tests with 4 node cluster
scripts/start_cluster_and_run_tests.sh b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini conf/simple_gsi_n1ql.conf 1 1 gsi_type=plasma
/opt/build/testrunner /opt/build/testrunner
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory ns_dataclean
Built target ns_dataclean
make[1]: Leaving directory '/opt/build/ns_server'
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory all
WARN:  Missing plugins: [pc]
==> chronicle (compile)
Built target chronicle
Built target ns_cfg
Built target kv_mappings
==> ale (compile)
Built target ale
==> ns_server (compile)
Built target ns_server
==> gen_smtp (compile)
Built target gen_smtp
==> ns_babysitter (compile)
Built target ns_babysitter
==> ns_couchdb (compile)
Built target ns_couchdb
Building Go target vbmap using Go 1.8.5
Built target vbmap
Building Go target ns_gozip using Go 1.8.5
Built target ns_gozip
Building Go target ns_goport using Go 1.8.5
Built target ns_goport
Building Go target ns_generate_cert using Go 1.11.6
Built target ns_generate_cert
Building Go target ns_godu using Go 1.8.5
Built target ns_godu
Building Go target ns_minify using Go 1.8.5
Built target ns_minify
Building Go target ns_gosecrets using Go 1.8.5
Built target ns_gosecrets
make[1]: Leaving directory '/opt/build/ns_server'
/opt/build/testrunner
INFO:root:__main__
INFO:__main__:TestRunner: parsing args...
INFO:__main__:Checking arguments...
INFO:__main__:Conf filename: conf/simple_gsi_n1ql.conf
INFO:__main__:Test prefix: gsi.indexscans_gsi.SecondaryIndexingScanTests
INFO:__main__:Test prefix: gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests
INFO:__main__:Test prefix: gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests
INFO:__main__:TestRunner: start...
INFO:__main__:Global Test input params:
INFO:__main__:
Number of tests initially selected before GROUP filters: 11
INFO:__main__:--> Running test: gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi
INFO:__main__:Logs folder: /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_1
*** TestRunner ***
{'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi',
 'conf_file': 'conf/simple_gsi_n1ql.conf',
 'gsi_type': 'plasma',
 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini',
 'makefile': 'True',
 'num_nodes': 4,
 'spec': 'simple_gsi_n1ql'}
Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_1

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 1, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_1'}
Run before suite setup for gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index
suite_setUp (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... -->before_suite_name:gsi.indexscans_gsi.SecondaryIndexingScanTests.suite_setUp,suite: ]>
2021-03-11 09:49:41 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:49:41 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:49:41 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:49:41 | ERROR | MainProcess | MainThread | [rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/nodes/self error [Errno 111] Connection refused 
2021-03-11 09:49:44 | ERROR | MainProcess | MainThread | [rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/nodes/self error [Errno 111] Connection refused 
2021-03-11 09:49:50 | ERROR | MainProcess | MainThread | [rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/nodes/self error [Errno 111] Connection refused 
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.get_nodes_version] Node version in cluster 7.0.0-4170-rel-enterprise
2021-03-11 09:50:02 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
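The repeated 404s here all carry the same Basic auth header; decoding it with the standard library confirms which credentials the REST client is sending (the expected 404 "unknown pool" simply means the node is not yet initialized):

```python
import base64

# Token copied from the Authorization header logged above.
token = "QWRtaW5pc3RyYXRvcjphc2Rhc2Q="
user, password = base64.b64decode(token).decode().split(":", 1)
```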
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.get_nodes_versions] Node versions in cluster []
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 suite_setUp==============
2021-03-11 09:50:02 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default/ body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:50:02 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:02 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] cannot find service node index in cluster 
2021-03-11 09:50:02 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2021-03-11 09:50:05 | ERROR | MainProcess | MainThread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default/ body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2021-03-11 09:50:05 | INFO | MainProcess | MainThread | [basetestcase.setUp] initializing cluster
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '23', 'memoryTotal': 0, 'memoryFree': 0, 'mcdMemoryReserved': 14750, 'mcdMemoryAllocated': 0, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:50:06 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=9626
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
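The setupServices body above is ordinary application/x-www-form-urlencoded data; rebuilding it with the standard library reproduces the exact string in the log (':' is escaped as %3A and ',' as %2C):

```python
from urllib.parse import urlencode

# Parameter names and values as recorded in the setupServices log line above.
params = {
    "hostname": "127.0.0.1:9000",
    "user": "Administrator",
    "password": "asdasd",
    "services": "kv,index,n1ql",
}
body = urlencode(params)
```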
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:06 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '22', 'memoryTotal': 0, 'memoryFree': 0, 'mcdMemoryReserved': 14750, 'mcdMemoryAllocated': 0, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
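The `clusterCompatibility: 458752` field in the nodes/self payload above is ns_server's packed compat version. A minimal sketch of the decoding (the `major * 0x10000 + minor` packing is inferred from the `[7,0]` result the diag/eval `cluster_compat_mode:get_compat_version()` calls return; the helper name is illustrative):

```python
def decode_compat(compat: int) -> tuple:
    """Split ns_server's packed compat version into (major, minor)."""
    return compat >> 16, compat & 0xFFFF

# 458752 == 7 * 0x10000, i.e. cluster compat 7.0
print(decode_compat(458752))  # (7, 0)
```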
2021-03-11 09:50:07 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=9882
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '23', 'memoryTotal': 0, 'memoryFree': 0, 'mcdMemoryReserved': 14750, 'mcdMemoryAllocated': 0, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:50:07 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=9882
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:07 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '23', 'memoryTotal': 0, 'memoryFree': 0, 'mcdMemoryReserved': 14750, 'mcdMemoryAllocated': 0, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:50:08 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=9882
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:50:08 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
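Each of the four node-init blocks above runs the same curl call to enable non-local `diag/eval`. A sketch of how that command string could be assembled (the helper name and exact quoting are illustrative, not testrunner's actual code; only the payload and URL shape are taken from the log):

```python
def build_enable_nonlocal_eval_cmd(user: str, password: str, port: int) -> str:
    """Build the curl call that flips allow_nonlocal_eval on a local node."""
    payload = "ns_config:set(allow_nonlocal_eval, true)."
    return (
        "curl --silent --show-error "
        f"http://{user}:{password}@localhost:{port}/diag/eval "
        f"-X POST -d '{payload}'"
    )

cmd = build_enable_nonlocal_eval_cmd("Administrator", "asdasd", 9003)
print(cmd)
```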
2021-03-11 09:50:08 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2021-03-11 09:50:08 | ERROR | MainProcess | MainThread | [rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2021-03-11 09:50:08 | INFO | MainProcess | MainThread | [internal_user.delete_user] Exception while deleting user. Exception is -b'"User was not found."'
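The `Authorization` header logged in the 404 responses above is plain RFC 7617 basic auth over the test credentials; the base64 token can be reproduced directly (the helper name is illustrative):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Encode user:password as an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("Administrator", "asdasd"))
# Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=
```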
2021-03-11 09:50:08 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [basetestcase.setUp] done initializing cluster
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:50:13 | INFO | MainProcess | MainThread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
2021-03-11 09:50:14 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=9626&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2021-03-11 09:50:14 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.01 seconds to create bucket default
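The create_bucket POST body logged above is a standard form-encoded parameter set; it can be reconstructed with `urllib.parse.urlencode` (the param names and values are taken verbatim from the log line):

```python
from urllib.parse import urlencode

# Parameters for POST /pools/default/buckets, in the order they were logged.
params = {
    "name": "default",
    "authType": "sasl",
    "saslPassword": "None",
    "ramQuotaMB": 9626,
    "replicaNumber": 1,
    "bucketType": "membase",
    "replicaIndex": 1,
    "threadsNumber": 3,
    "flushEnabled": 1,
    "evictionPolicy": "valueOnly",
    "compressionMode": "passive",
    "storageBackend": "couchstore",
}
body = urlencode(params)
print(body)
```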
2021-03-11 09:50:14 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:15 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:15 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:15 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 0
2021-03-11 09:50:15 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:15 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:15 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 1
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 2
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 3
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:16 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 4
2021-03-11 09:50:16 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:50:17 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:17 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:17 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 5
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 suite_setUp ==============
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 0, 'mem_free': 0, 'mem_total': 0, 'swap_mem_used': 0, 'swap_mem_total': 0}
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2021-03-11 09:50:20 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2021-03-11 09:50:25 | INFO | MainProcess | MainThread | [rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2021-03-11 09:50:25 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:25 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:50:25 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:50:26 | INFO | MainProcess | MainThread | [basetestcase.load] create 2016.0 to default documents...
2021-03-11 09:50:26 | INFO | MainProcess | MainThread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:50:29 | INFO | MainProcess | MainThread | [basetestcase.load] LOAD IS FINISHED
2021-03-11 09:50:29 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:50:29 | INFO | MainProcess | MainThread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2021-03-11 09:50:29 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 30 secs.  ...
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 150.276806ms
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 502.084045ms
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 21.286576ms
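The `query params : statement=...` lines above show the N1QL statement after form-encoding for the query REST endpoint; `urllib.parse.quote_plus` reproduces the exact encoded value from the log:

```python
from urllib.parse import quote_plus

# The primary-index lookup statement as run by tuq_helper above.
stmt = "SELECT * FROM system:indexes where name = '#primary'"
encoded = "statement=" + quote_plus(stmt)
print(encoded)
# statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
```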
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:50:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.793358ms
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 59.510451ms
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 3.342761635381846, 'mem_free': 14153875456, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:05 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2021-03-11 09:51:05 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2021-03-11 09:51:06 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 85.040s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Cluster instance shutdown with force
-->result: 
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.get_nodes_version] Node version in cluster 7.0.0-4170-rel-enterprise
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.get_nodes_versions] Node versions in cluster ['7.0.0-4170-rel-enterprise']
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 test_multi_create_query_explain_drop_index==============
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:51:06 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 8.21318228630278, 'mem_free': 14115721216, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:08 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2021-03-11 09:51:11 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '89', 'memoryTotal': 15466930176, 'memoryFree': 14110871552, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 9626, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7650
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
2021-03-11 09:51:12 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] This node is already provisioned with services, we do not consider this as failure for test case
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '88', 'memoryTotal': 15466930176, 'memoryFree': 14078988288, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 9882, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:51:12 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '88', 'memoryTotal': 15466930176, 'memoryFree': 14107156480, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 9882, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '89', 'memoryTotal': 15466930176, 'memoryFree': 14075113472, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 9882, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:13 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
2021-03-11 09:51:14 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2021-03-11 09:51:14 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2021-03-11 09:51:14 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2021-03-11 09:51:19 | INFO | MainProcess | test_thread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.01 seconds to create bucket default
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 0
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 1
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 2
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:20 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 3
2021-03-11 09:51:20 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:21 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:21 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:21 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 4
2021-03-11 09:51:21 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2021-03-11 09:51:21 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:21 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:21 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 5
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.57046979865772, 'mem_free': 14147010560, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2021-03-11 09:51:24 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2021-03-11 09:51:29 | INFO | MainProcess | test_thread | [rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2021-03-11 09:51:29 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:51:29 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:51:30 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:51:31 | INFO | MainProcess | test_thread | [basetestcase.load] create 2016.0 to default documents...
2021-03-11 09:51:31 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2021-03-11 09:51:33 | INFO | MainProcess | test_thread | [basetestcase.load] LOAD IS FINISHED
2021-03-11 09:51:33 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:51:33 | INFO | MainProcess | test_thread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2021-03-11 09:51:33 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs.  ...
2021-03-11 09:52:03 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:52:03 | INFO | MainProcess | test_thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:52:03 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 84.919248ms
2021-03-11 09:52:03 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
2021-03-11 09:52:03 | INFO | MainProcess | test_thread | [rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 566.554293ms
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 19.531988ms
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:04 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:52:05 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE INDEX employee49ea525937f248c1849061a1ef582918job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
2021-03-11 09:52:05 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=CREATE+INDEX+employee49ea525937f248c1849061a1ef582918job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
2021-03-11 09:52:05 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 81.2179ms
2021-03-11 09:52:05 | INFO | MainProcess | test_thread | [base_gsi.async_build_index] BUILD INDEX on default(employee49ea525937f248c1849061a1ef582918job_title) USING GSI
2021-03-11 09:52:06 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY BUILD INDEX on default(employee49ea525937f248c1849061a1ef582918job_title) USING GSI
2021-03-11 09:52:06 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=BUILD+INDEX+on+default%28employee49ea525937f248c1849061a1ef582918job_title%29+USING+GSI
2021-03-11 09:52:06 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 23.613275ms
2021-03-11 09:52:07 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee49ea525937f248c1849061a1ef582918job_title'
2021-03-11 09:52:07 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee49ea525937f248c1849061a1ef582918job_title%27
2021-03-11 09:52:07 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 12.613192ms
2021-03-11 09:52:08 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee49ea525937f248c1849061a1ef582918job_title'
2021-03-11 09:52:08 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee49ea525937f248c1849061a1ef582918job_title%27
2021-03-11 09:52:08 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 9.926327ms
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 3.322132ms
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [task.execute] {'requestID': '3815124f-7d93-4c29-8ba0-abbfe85e3e65', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee49ea525937f248c1849061a1ef582918job_title', 'index_id': '891f67c63d9813bf', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '3.322132ms', 'executionTime': '3.199545ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:09 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [base_gsi.async_query_using_index] Query : SELECT * FROM default WHERE  job_title = "Sales" 
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] FROM clause ===== is default
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] WHERE clause ===== is   doc["job_title"] == "Sales" 
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] UNNEST clause ===== is None
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] SELECT clause ===== is {"*" : doc,}
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
2021-03-11 09:52:09 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->where_clause=  doc["job_title"] == "Sales" 
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 113.641076ms
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Actual Result
2021-03-11 09:52:10 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Expected Result
2021-03-11 09:52:11 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:11 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee49ea525937f248c1849061a1ef582918job_title'
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee49ea525937f248c1849061a1ef582918job_title%27
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 9.381604ms
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY DROP INDEX employee49ea525937f248c1849061a1ef582918job_title ON default USING GSI
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=DROP+INDEX+employee49ea525937f248c1849061a1ef582918job_title+ON+default+USING+GSI
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 35.23322ms
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee49ea525937f248c1849061a1ef582918job_title'
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee49ea525937f248c1849061a1ef582918job_title%27
2021-03-11 09:52:12 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 1.764444ms
2021-03-11 09:52:12 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:52:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 7.052626ms
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 40.979522ms
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 17.90964982624967, 'mem_free': 13890019328, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2021-03-11 09:52:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2021-03-11 09:52:15 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running?
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2021-03-11 09:52:18 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_1
ok

----------------------------------------------------------------------
Ran 1 test in 72.372s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_2

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,delete_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'delete_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 2, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_2'}
[2021-03-11 09:52:18,838] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:18,838] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:19,113] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:19,120] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,133] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:52:19,144] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:52:19,144] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #2 test_multi_create_query_explain_drop_index==============
[2021-03-11 09:52:19,158] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:52:19,163] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:52:19,170] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:52:19,176] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:52:19,180] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,210] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:52:19,211] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,224] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:52:19,229] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:52:19,229] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,235] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:52:19,240] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:52:19,240] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,245] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:52:19,249] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:52:19,250] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:52:19,254] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:52:19,255] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:52:19,841] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:52:19,845] - [task:157] INFO -  {'uptime': '156', 'memoryTotal': 15466930176, 'memoryFree': 13890019328, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 09:52:19,848] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:52:19,848] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:52:19,848] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:52:19,858] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:52:19,868] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:52:19,868] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:52:19,871] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:52:19,872] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 09:52:19,874] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:52:19,875] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:52:19,938] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:52:19,945] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:19,946] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:20,214] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:20,226] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:52:20,282] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:52:20,283] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:52:20,288] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:20,290] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:20,297] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:52:20,328] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:52:20,333] - [task:157] INFO -  {'uptime': '154', 'memoryTotal': 15466930176, 'memoryFree': 13937811456, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:52:20,338] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:52:20,342] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:52:20,343] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:52:20,412] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:52:20,418] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:20,419] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:20,698] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:20,709] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:52:20,766] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:52:20,766] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:52:20,772] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:20,775] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:20,781] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:52:20,801] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:52:20,806] - [task:157] INFO -  {'uptime': '154', 'memoryTotal': 15466930176, 'memoryFree': 13885018112, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:52:20,810] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:52:20,815] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:52:20,815] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:52:20,884] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:52:20,891] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:20,891] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:21,168] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:21,179] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:52:21,233] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:52:21,234] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:52:21,239] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:21,241] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:21,248] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:52:21,269] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:52:21,273] - [task:157] INFO -  {'uptime': '154', 'memoryTotal': 15466930176, 'memoryFree': 13937557504, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:52:21,278] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:52:21,284] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:52:21,284] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:52:21,354] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:52:21,359] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:21,360] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:21,621] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:21,633] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:52:21,685] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:52:21,686] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:52:21,691] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:21,695] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:52:21,701] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:52:21,733] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:52:21,859] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:52:26,864] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:52:26,878] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:52:26,884] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:26,884] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:27,126] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:27,137] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:52:27,208] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:52:27,209] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:52:27,210] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:52:27,748] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:52:27,758] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 09:52:27,759] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:27,991] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,057] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,083] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:52:28,083] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:28,173] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,223] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,248] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:52:28,248] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:28,311] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,357] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,383] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:52:28,383] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:28,458] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,505] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,530] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:52:28,530] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:28,597] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,644] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,676] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:52:28,677] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:52:28,739] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,788] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:28,820] - [task:386] WARNING - vbucket map not ready after try 5
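The repeated "vbucket map not ready after try N" warnings above come from a poll-and-retry loop while the freshly created bucket warms up. A minimal sketch of that pattern (hypothetical helper names, not the actual testrunner `task` code):

```python
import time

def wait_for_vbucket_map(get_map, retries=6, delay=0.1):
    """Poll get_map() until it returns a non-empty vbucket map.

    get_map: callable returning the current vbucket map (a list), which is
    empty until the bucket finishes warming up.
    Returns the map once ready, or None if the retry budget is exhausted.
    """
    for attempt in range(retries):
        vb_map = get_map()
        if vb_map:
            return vb_map
        # Matches the log's "vbucket map not ready after try N" warnings
        print(f"WARNING - vbucket map not ready after try {attempt}")
        time.sleep(delay)
    return None
```

In the run above the map never reports ready within the retry budget, and setup proceeds anyway once the bucket starts accepting set ops.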
[2021-03-11 09:52:28,824] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:52:28,837] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:28,838] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:29,107] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:29,112] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:29,112] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:29,377] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:29,382] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:29,382] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:29,614] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:29,618] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:29,618] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:29,871] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:33,048] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:52:33,048] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 31.75381263616558, 'mem_free': 13956554752, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:52:33,048] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:52:33,055] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:52:33,055] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:52:33,064] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:52:33,064] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:52:33,072] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:52:33,073] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:52:33,080] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:52:33,080] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:52:33,087] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:52:33,103] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:52:33,104] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:52:33,104] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:52:38,113] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:52:38,117] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:52:38,117] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:52:38,375] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:52:39,349] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:52:39,380] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:52:41,543] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:52:41,565] - [newtuq:82] INFO - {'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2021-03-11 09:52:42,166] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:52:42,167] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:52:42,167] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:53:12,181] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:53:12,185] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:53:12,270] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 83.202864ms
[2021-03-11 09:53:12,274] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:53:12,278] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:53:12,877] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 596.895709ms
[2021-03-11 09:53:12,885] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:53:12,892] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:53:12,920] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 24.811898ms
[2021-03-11 09:53:12,951] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:12,952] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:13,303] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:13,324] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:14,323] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employeef517fc34867a4d87825763c43e3eaf96job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:53:14,327] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employeef517fc34867a4d87825763c43e3eaf96job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:53:14,392] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 63.210378ms
[2021-03-11 09:53:14,392] - [base_gsi:216] INFO - BUILD INDEX on default(employeef517fc34867a4d87825763c43e3eaf96job_title) USING GSI
[2021-03-11 09:53:15,398] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employeef517fc34867a4d87825763c43e3eaf96job_title) USING GSI
[2021-03-11 09:53:15,403] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employeef517fc34867a4d87825763c43e3eaf96job_title%29+USING+GSI
[2021-03-11 09:53:15,455] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 49.424544ms
[2021-03-11 09:53:16,460] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeef517fc34867a4d87825763c43e3eaf96job_title'
[2021-03-11 09:53:16,463] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeef517fc34867a4d87825763c43e3eaf96job_title%27
[2021-03-11 09:53:16,475] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.925383ms
[2021-03-11 09:53:17,480] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeef517fc34867a4d87825763c43e3eaf96job_title'
[2021-03-11 09:53:17,484] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeef517fc34867a4d87825763c43e3eaf96job_title%27
[2021-03-11 09:53:17,496] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.761454ms
[2021-03-11 09:53:18,024] - [basetestcase:2746] INFO - delete 0.0 to default documents...
[2021-03-11 09:53:18,057] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:19,190] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:53:19,498] - [task:3110] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:19,505] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:53:19,510] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2021-03-11 09:53:19,514] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 2.21993ms
[2021-03-11 09:53:19,514] - [task:3120] INFO - {'requestID': '1e90c455-c548-4f74-910c-d709a751aa07', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeef517fc34867a4d87825763c43e3eaf96job_title', 'index_id': '7963b27a08d3e656', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.21993ms', 'executionTime': '2.130338ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
[2021-03-11 09:53:19,514] - [task:3121] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:19,515] - [task:3151] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:19,515] - [base_gsi:489] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:53:19,515] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 09:53:19,516] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2021-03-11 09:53:19,516] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 09:53:19,516] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 09:53:19,516] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:53:19,517] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
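The `tuq_generators` lines above show the verifier translating the N1QL query into Python clauses (`WHERE` becomes `doc["job_title"] == "Sales"`, `SELECT *` becomes `{"*" : doc}`) and applying them to the generated documents to compute the expected result set. A minimal sketch of that verification step, under the assumption that documents are plain dicts (hypothetical function, not the actual `tuq_generators` code):

```python
def expected_result(docs):
    """Apply the translated WHERE clause (doc["job_title"] == "Sales")
    and the translated SELECT clause ({"*": doc}) to the source docs,
    mirroring how the test builds its expected result set."""
    return [{"*": doc} for doc in docs if doc.get("job_title") == "Sales"]
```

The actual query result from the server is then compared against this locally computed list ("Analyzing Actual Result" / "Analyzing Expected Result" in the log).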
[2021-03-11 09:53:20,516] - [task:3110] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:20,521] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:53:20,524] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2021-03-11 09:53:20,661] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 126.532393ms
[2021-03-11 09:53:20,661] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 09:53:20,662] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 09:53:21,462] - [task:3121] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:21,462] - [task:3151] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:53:22,468] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeef517fc34867a4d87825763c43e3eaf96job_title'
[2021-03-11 09:53:22,472] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeef517fc34867a4d87825763c43e3eaf96job_title%27
[2021-03-11 09:53:22,483] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.628829ms
[2021-03-11 09:53:22,487] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employeef517fc34867a4d87825763c43e3eaf96job_title ON default USING GSI
[2021-03-11 09:53:22,491] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employeef517fc34867a4d87825763c43e3eaf96job_title+ON+default+USING+GSI
[2021-03-11 09:53:22,530] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 36.812774ms
[2021-03-11 09:53:22,537] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeef517fc34867a4d87825763c43e3eaf96job_title'
[2021-03-11 09:53:22,543] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeef517fc34867a4d87825763c43e3eaf96job_title%27
[2021-03-11 09:53:22,548] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 2.410404ms
[2021-03-11 09:53:22,571] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:22,575] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:22,576] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:22,943] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:22,953] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:53:22,953] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 09:53:23,015] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:23,025] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:23,029] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:23,029] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:23,375] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:23,384] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:53:23,384] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 09:53:23,443] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:23,454] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:23,455] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 09:53:23,455] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:53:23,459] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:53:23,463] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:53:23,473] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.815424ms
[2021-03-11 09:53:23,478] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:53:23,482] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 09:53:23,539] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 54.688042ms
[2021-03-11 09:53:23,562] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:53:23,562] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 3.491655969191271, 'mem_free': 13972381696, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:53:23,562] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:53:23,568] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:23,568] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:23,909] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:23,914] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:23,914] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:24,280] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:24,285] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:24,346] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:24,940] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:24,949] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:24,950] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:25,552] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:29,234] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #2 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:53:29,267] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 09:53:30,230] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 09:53:30,232] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 09:53:30,233] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 09:53:30,238] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,243] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,250] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,255] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,269] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:53:30,269] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,273] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:53:30,277] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:53:30,277] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,282] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:53:30,286] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:53:30,286] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,290] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:53:30,294] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:53:30,294] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,297] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:53:30,297] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:53:30,298] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 09:53:30,299] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_2
ok

----------------------------------------------------------------------
Ran 1 test in 71.471s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_3

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,update_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'update_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 3, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_3'}
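The "Test Input params" dict above is built by splitting the comma-separated `key=value` pairs from the `-t` argument of the `./testrunner` invocation and merging them with ini-file and runtime settings. A rough sketch of the pair parsing (hypothetical helper, not the actual testrunner parser; note values like `simple:equals:no_orderby_groupby:range` contain colons, so only the first `=` splits each pair):

```python
def parse_test_params(spec):
    """Split 'module.Class.test,k1=v1,k2=v2' into (test path, params dict)."""
    parts = spec.split(",")
    test_path, pairs = parts[0], parts[1:]
    # split on the first '=' only, so values may themselves contain ':' or '='
    params = dict(p.split("=", 1) for p in pairs)
    return test_path, params
```

Everything stays a string at this stage (e.g. `'doc-per-day': '1'`, `'use_gsi_for_primary': 'True'`), which matches the dict printed in the log.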
[2021-03-11 09:53:30,369] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:30,370] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:30,694] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:30,702] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,713] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:53:30,722] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:53:30,722] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #3 test_multi_create_query_explain_drop_index==============
[2021-03-11 09:53:30,737] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,742] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,748] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,753] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:53:30,758] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,773] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:53:30,774] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,777] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:53:30,781] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:53:30,781] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,785] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:53:30,789] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:53:30,789] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,792] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:53:30,796] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:53:30,796] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:53:30,799] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:53:30,800] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:53:31,369] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:53:31,373] - [task:157] INFO -  {'uptime': '227', 'memoryTotal': 15466930176, 'memoryFree': 13872959488, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 09:53:31,376] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:53:31,376] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:53:31,376] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:53:31,385] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:53:31,398] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:53:31,399] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:53:31,403] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:53:31,404] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 09:53:31,405] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:53:31,405] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:53:31,469] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:53:31,474] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:31,474] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:31,792] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:31,804] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:53:31,862] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:31,864] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:53:31,869] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:31,871] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:31,878] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:53:31,904] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:53:31,907] - [task:157] INFO -  {'uptime': '225', 'memoryTotal': 15466930176, 'memoryFree': 13858365440, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:53:31,912] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:53:31,917] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:53:31,917] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:53:31,986] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:53:31,991] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:31,992] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:32,334] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:32,346] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:53:32,407] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:32,408] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:53:32,413] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:32,415] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:32,421] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:53:32,442] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:53:32,446] - [task:157] INFO -  {'uptime': '225', 'memoryTotal': 15466930176, 'memoryFree': 13752471552, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:53:32,451] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:53:32,458] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:53:32,458] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:53:32,526] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:53:32,533] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:32,534] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:32,858] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:32,873] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:53:32,946] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:32,947] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:53:32,952] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:32,954] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:32,960] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:53:32,982] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:53:32,986] - [task:157] INFO -  {'uptime': '225', 'memoryTotal': 15466930176, 'memoryFree': 13854969856, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:53:32,991] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:53:32,996] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:53:32,996] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:53:33,075] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:53:33,080] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:33,081] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:33,386] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:33,397] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:53:33,455] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:33,456] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:53:33,462] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:33,465] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:53:33,470] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
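Each node-init block above POSTs form-encoded bodies such as `hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql` to `/node/controller/setupServices`. A sketch (assumed helper, not the rest_client code) of how such a body is built with standard URL encoding:

```python
from urllib.parse import urlencode

# Illustrative: build the setupServices form body seen in the log.
# ':' becomes %3A and ',' becomes %2C under standard form encoding.
def setup_services_body(host, port, user, password, services):
    return urlencode({
        "hostname": f"{host}:{port}",
        "user": user,
        "password": password,
        "services": ",".join(services),
    })

body = setup_services_body("127.0.0.1", 9000, "Administrator", "asdasd",
                           ["kv", "index", "n1ql"])
print(body)
```

The 400 response ("cannot change node services after cluster is provisioned") at 09:53:31,403 is expected on re-runs against an already-provisioned node, which is why the harness logs it as non-fatal.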
[2021-03-11 09:53:33,500] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:53:33,670] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:53:38,674] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:53:38,703] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:53:38,708] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:38,708] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:39,014] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:39,024] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:53:39,087] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:53:39,088] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:53:39,089] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:53:39,516] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:53:39,524] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 09:53:39,524] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:39,755] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:39,833] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:39,867] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:53:39,868] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:39,963] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,011] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,037] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:53:40,038] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:40,115] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,163] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,188] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:53:40,188] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:40,261] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,307] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,332] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:53:40,333] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:40,404] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,452] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,478] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:53:40,478] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:53:40,547] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,609] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:40,646] - [task:386] WARNING - vbucket map not ready after try 5
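The repeating "vbucket map not ready after try N" warnings above come from a poll-and-retry loop that waits for the freshly created bucket's vbucket map to populate before accepting the bucket as ready. A stand-in sketch of that pattern (the readiness check here is a fake; testrunner's real check reads the vbucket map via memcached):

```python
import time

# Illustrative polling loop: retry until a readiness predicate passes,
# logging a warning per failed attempt, as in the log above.
def wait_for_vbucket_map(is_ready, retries=6, delay=0.0):
    for attempt in range(retries):
        if is_ready():
            return attempt          # number of failed tries before success
        print(f"vbucket map not ready after try {attempt}")
        time.sleep(delay)
    return -1                       # gave up

ready_after = iter([False, False, True])
print(wait_for_vbucket_map(lambda: next(ready_after)))
```

In this run the map never reported ready within the retry budget, but setup proceeded anyway at 09:53:40,650; on a single-node dev cluster the map typically settles moments later.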
[2021-03-11 09:53:40,650] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:53:40,661] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:40,661] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:40,999] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:41,006] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:41,006] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:41,340] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:41,345] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:41,345] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:41,651] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:41,656] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:41,691] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:42,234] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:45,872] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:53:45,873] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 30.12076172782164, 'mem_free': 13919092736, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:53:45,873] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:53:45,882] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:53:45,882] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:53:45,891] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:53:45,891] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:53:45,900] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:53:45,901] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:53:45,910] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:53:45,910] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:53:45,917] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:45,942] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:53:45,942] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:53:45,942] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:53:50,953] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:53:50,957] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:53:50,957] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:53:51,283] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:53:52,252] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:53:52,283] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:53:54,414] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:53:54,430] - [newtuq:82] INFO - {'update': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2021-03-11 09:53:55,208] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:53:55,208] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:53:55,208] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:54:25,243] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:54:25,247] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:54:25,334] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 85.656637ms
[2021-03-11 09:54:25,339] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:54:25,343] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:54:25,984] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 639.706956ms
[2021-03-11 09:54:25,989] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:54:26,002] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:54:26,027] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.684711ms
[2021-03-11 09:54:26,047] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:26,048] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:26,477] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:26,497] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:54:27,496] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employeeb165e05c1bf74c0081a294fde71ccf9ajob_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:54:27,501] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employeeb165e05c1bf74c0081a294fde71ccf9ajob_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:54:27,568] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 64.601561ms
[2021-03-11 09:54:27,569] - [base_gsi:216] INFO - BUILD INDEX on default(employeeb165e05c1bf74c0081a294fde71ccf9ajob_title) USING GSI
[2021-03-11 09:54:28,579] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employeeb165e05c1bf74c0081a294fde71ccf9ajob_title) USING GSI
[2021-03-11 09:54:28,582] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employeeb165e05c1bf74c0081a294fde71ccf9ajob_title%29+USING+GSI
[2021-03-11 09:54:28,607] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 22.587869ms
[2021-03-11 09:54:29,613] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb165e05c1bf74c0081a294fde71ccf9ajob_title'
[2021-03-11 09:54:29,618] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb165e05c1bf74c0081a294fde71ccf9ajob_title%27
[2021-03-11 09:54:29,629] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.937688ms
[2021-03-11 09:54:30,634] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb165e05c1bf74c0081a294fde71ccf9ajob_title'
[2021-03-11 09:54:30,638] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb165e05c1bf74c0081a294fde71ccf9ajob_title%27
[2021-03-11 09:54:30,648] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 8.341674ms
[2021-03-11 09:54:31,203] - [basetestcase:2746] INFO - update 0.0 to default documents...
[2021-03-11 09:54:31,235] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:32,062] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:54:32,650] - [task:3110] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:54:32,657] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:54:32,662] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2021-03-11 09:54:32,667] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.928304ms
[2021-03-11 09:54:32,667] - [task:3120] INFO - {'requestID': '0cc21135-628b-4a91-9a94-818ae238ef23', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeeb165e05c1bf74c0081a294fde71ccf9ajob_title', 'index_id': 'c92386e0db5aadec', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '3.928304ms', 'executionTime': '3.847983ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
[2021-03-11 09:54:32,668] - [task:3121] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:54:32,668] - [task:3151] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
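The verification step above inspects the EXPLAIN plan JSON and confirms the query is served by an IndexScan over the freshly built secondary index rather than a primary scan. A hedged sketch of that check against the plan structure logged at 09:54:32,667 (helper name is illustrative):

```python
# Illustrative plan check: does the top-level Sequence contain an
# IndexScan operator over the expected index name?
def plan_uses_index(plan, index_name):
    ops = plan.get("~children", [])
    return any(op.get("#operator", "").startswith("IndexScan")
               and op.get("index") == index_name
               for op in ops)

# Trimmed-down version of the plan from the log above.
plan = {"#operator": "Sequence", "~children": [
    {"#operator": "IndexScan3",
     "index": "employeeb165e05c1bf74c0081a294fde71ccf9ajob_title",
     "keyspace": "default", "using": "gsi"},
    {"#operator": "Fetch", "keyspace": "default"},
]}
print(plan_uses_index(plan, "employeeb165e05c1bf74c0081a294fde71ccf9ajob_title"))
```

Matching on the `IndexScan` prefix tolerates operator version suffixes such as the `IndexScan3` seen in this build.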
[2021-03-11 09:54:32,669] - [base_gsi:489] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:54:32,670] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 09:54:32,670] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2021-03-11 09:54:32,670] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 09:54:32,671] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 09:54:32,671] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:54:32,671] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:54:33,670] - [task:3110] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:54:33,678] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:54:33,682] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2021-03-11 09:54:33,811] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 118.903929ms
[2021-03-11 09:54:33,811] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 09:54:33,812] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 09:54:34,634] - [task:3121] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:54:34,634] - [task:3151] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
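Note that the actual data query at 09:54:33,682 carries `&scan_consistency=request_plus`, matching the test's `scan_consistency=request_plus` input, so the scan waits for all mutations up to the request time before returning. A sketch (assumed, not the rest_client code) of how the statement and consistency level become the logged query-service params:

```python
from urllib.parse import urlencode

# Illustrative: form-encode a N1QL statement plus optional scan consistency
# into the query-service params seen in the log.
def query_params(statement, scan_consistency=None):
    params = {"statement": statement}
    if scan_consistency:
        params["scan_consistency"] = scan_consistency
    return urlencode(params)

p = query_params('SELECT * FROM default WHERE  job_title = "Sales" ',
                 "request_plus")
print(p)
```

The EXPLAIN request earlier omits the consistency parameter, since plan inspection does not read data.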
[2021-03-11 09:54:35,640] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb165e05c1bf74c0081a294fde71ccf9ajob_title'
[2021-03-11 09:54:35,644] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb165e05c1bf74c0081a294fde71ccf9ajob_title%27
[2021-03-11 09:54:35,656] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.277692ms
[2021-03-11 09:54:35,661] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employeeb165e05c1bf74c0081a294fde71ccf9ajob_title ON default USING GSI
[2021-03-11 09:54:35,665] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employeeb165e05c1bf74c0081a294fde71ccf9ajob_title+ON+default+USING+GSI
[2021-03-11 09:54:35,720] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 53.883254ms
[2021-03-11 09:54:35,730] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb165e05c1bf74c0081a294fde71ccf9ajob_title'
[2021-03-11 09:54:35,736] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb165e05c1bf74c0081a294fde71ccf9ajob_title%27
[2021-03-11 09:54:35,739] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.589511ms
[2021-03-11 09:54:35,758] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:54:35,764] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:35,764] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:36,161] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:36,171] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:54:36,171] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 09:54:36,239] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:36,249] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:54:36,253] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:36,253] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:36,651] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:36,660] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:54:36,661] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 09:54:36,730] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:36,740] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:54:36,741] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 09:54:36,741] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:54:36,745] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:54:36,749] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:54:36,759] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.107773ms
[2021-03-11 09:54:36,763] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:54:36,768] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 09:54:36,813] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 43.407966ms
[2021-03-11 09:54:36,832] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:54:36,832] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 16.13073273589879, 'mem_free': 13708996608, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:54:36,832] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:54:36,844] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:36,844] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:37,222] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:37,227] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:37,227] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:37,587] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:37,593] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:37,593] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:37,956] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:37,962] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:37,962] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:38,530] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:43,579] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #3 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:54:43,616] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 09:54:45,015] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 09:54:45,018] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 09:54:45,019] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 09:54:45,025] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,031] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,037] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,042] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,056] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:54:45,056] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,060] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:54:45,064] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:54:45,065] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,068] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:54:45,073] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:54:45,074] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,079] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:54:45,084] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:54:45,084] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,089] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:54:45,089] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:54:45,089] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 09:54:45,092] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_3
ok

----------------------------------------------------------------------
Ran 1 test in 74.747s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_4

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,expiry_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'expiry_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 4, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_4'}
[2021-03-11 09:54:45,165] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:45,165] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:45,583] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:45,591] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,603] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:54:45,612] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:54:45,612] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #4 test_multi_create_query_explain_drop_index==============
[2021-03-11 09:54:45,629] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,636] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,642] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,648] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:54:45,652] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,666] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:54:45,666] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,670] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:54:45,674] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:54:45,674] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,678] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:54:45,683] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:54:45,683] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,687] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:54:45,691] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:54:45,692] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:54:45,695] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:54:45,696] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:54:46,169] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:54:46,173] - [task:157] INFO -  {'uptime': '302', 'memoryTotal': 15466930176, 'memoryFree': 13800456192, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:54:46,176] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:54:46,176] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:54:46,176] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:54:46,189] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:54:46,199] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:54:46,200] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:54:46,204] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:54:46,204] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 09:54:46,205] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:54:46,205] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:54:46,270] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:54:46,276] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:46,276] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:46,681] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:46,696] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:54:46,773] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:46,774] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:54:46,779] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:46,782] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:46,788] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:54:46,820] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:54:46,824] - [task:157] INFO -  {'uptime': '300', 'memoryTotal': 15466930176, 'memoryFree': 13716774912, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:54:46,829] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:54:46,834] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:54:46,835] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:54:46,903] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:54:46,908] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:46,909] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:47,312] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:47,323] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:54:47,395] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:47,397] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:54:47,402] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:47,404] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:47,411] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:54:47,431] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:54:47,435] - [task:157] INFO -  {'uptime': '301', 'memoryTotal': 15466930176, 'memoryFree': 13796139008, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:54:47,440] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:54:47,444] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:54:47,445] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:54:47,514] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:54:47,521] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:47,521] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:47,919] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:47,930] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:54:48,003] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:48,004] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:54:48,009] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:48,013] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:48,019] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:54:48,037] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:54:48,041] - [task:157] INFO -  {'uptime': '301', 'memoryTotal': 15466930176, 'memoryFree': 13717065728, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:54:48,046] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:54:48,051] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:54:48,052] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:54:48,123] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:54:48,128] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:48,128] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:48,522] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:48,534] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:54:48,599] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:48,600] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:54:48,605] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:48,608] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:54:48,614] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:54:48,642] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:54:48,809] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:54:53,813] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:54:53,827] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:54:53,832] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:53,832] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:54,232] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:54,242] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:54:54,319] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:54:54,320] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:54:54,320] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:54:54,660] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:54:54,673] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 09:54:54,673] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:54,871] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,009] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,038] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:54:55,039] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:55,121] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,208] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,234] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:54:55,235] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:55,285] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,372] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,405] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:54:55,405] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:55,457] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,542] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,570] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:54:55,570] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:55,618] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,703] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,751] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:54:55,751] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:54:55,821] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,916] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:54:55,941] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 09:54:55,944] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #4 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:54:55,952] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:55,953] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:56,401] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:56,405] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:56,406] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:56,797] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:56,803] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:56,803] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:57,363] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:54:57,372] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:54:57,373] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:54:58,024] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:02,263] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:55:02,264] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 25.2950643776824, 'mem_free': 13892255744, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:55:02,264] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:55:02,271] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:55:02,271] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:55:02,280] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:55:02,280] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:55:02,288] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:55:02,289] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:55:02,297] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:55:02,297] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:55:02,304] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:02,320] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:55:02,320] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:55:02,321] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:55:07,331] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:55:07,337] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:07,337] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:07,709] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:08,619] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:55:08,652] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:55:10,755] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:55:10,772] - [newtuq:82] INFO - {'expiry': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2021-03-11 09:55:11,391] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:11,391] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:55:11,391] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:55:41,420] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:55:41,424] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:55:41,507] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 81.504926ms
[2021-03-11 09:55:41,512] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:55:41,516] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:55:42,125] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 607.550812ms
[2021-03-11 09:55:42,134] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:55:42,146] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:55:42,174] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 25.063954ms
[2021-03-11 09:55:42,194] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:42,194] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:42,698] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:42,720] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:43,718] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee5279ebdb06c24b20b34f4963f588d225job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:55:43,723] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee5279ebdb06c24b20b34f4963f588d225job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:55:43,791] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 66.910495ms
[2021-03-11 09:55:43,792] - [base_gsi:216] INFO - BUILD INDEX on default(employee5279ebdb06c24b20b34f4963f588d225job_title) USING GSI
[2021-03-11 09:55:44,797] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee5279ebdb06c24b20b34f4963f588d225job_title) USING GSI
[2021-03-11 09:55:44,802] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee5279ebdb06c24b20b34f4963f588d225job_title%29+USING+GSI
[2021-03-11 09:55:44,826] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.969214ms
[2021-03-11 09:55:45,833] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee5279ebdb06c24b20b34f4963f588d225job_title'
[2021-03-11 09:55:45,838] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee5279ebdb06c24b20b34f4963f588d225job_title%27
[2021-03-11 09:55:45,846] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.018418ms
[2021-03-11 09:55:46,852] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee5279ebdb06c24b20b34f4963f588d225job_title'
[2021-03-11 09:55:46,856] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee5279ebdb06c24b20b34f4963f588d225job_title%27
[2021-03-11 09:55:46,868] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.554691ms
[2021-03-11 09:55:47,300] - [basetestcase:2746] INFO - update 0.0 to default documents...
[2021-03-11 09:55:47,469] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:55:48,507] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:55:48,556] - [data_helper:311] INFO - dict:{'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}
[2021-03-11 09:55:48,556] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:55:48,593] - [cluster_helper:371] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10 on default
[2021-03-11 09:55:48,594] - [mc_bin_client:680] INFO - setting param: exp_pager_stime 10
[2021-03-11 09:55:48,595] - [cluster_helper:385] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10, result: (2529242610, 0, b'')
[2021-03-11 09:55:48,870] - [task:3110] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:55:48,875] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:55:48,879] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2021-03-11 09:55:48,883] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 2.325658ms
[2021-03-11 09:55:48,883] - [task:3120] INFO - {'requestID': '98fff894-60e0-4d63-b3e4-c4f5d56cda30', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee5279ebdb06c24b20b34f4963f588d225job_title', 'index_id': 'ae6343768baf1b28', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.325658ms', 'executionTime': '2.219128ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
[2021-03-11 09:55:48,883] - [task:3121] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:55:48,883] - [task:3151] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
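The EXPLAIN verification above passes when the plan scans the freshly built secondary index rather than a primary scan. A minimal sketch of checking that against the plan JSON printed at task:3120 (the `find_index_scan` helper is illustrative, not part of testrunner):

```python
def find_index_scan(plan):
    """Depth-first search of a N1QL EXPLAIN plan for an IndexScan* operator;
    returns the index name it scans, or None if no index scan is present."""
    if isinstance(plan, dict):
        if str(plan.get("#operator", "")).startswith("IndexScan"):
            return plan.get("index")
        for value in plan.values():
            found = find_index_scan(value)
            if found:
                return found
    elif isinstance(plan, list):
        for item in plan:
            found = find_index_scan(item)
            if found:
                return found
    return None

# Trimmed-down copy of the plan from the log record above.
explain_result = {
    "plan": {
        "#operator": "Sequence",
        "~children": [
            {"#operator": "IndexScan3",
             "index": "employee5279ebdb06c24b20b34f4963f588d225job_title",
             "using": "gsi"},
            {"#operator": "Fetch", "keyspace": "default"},
        ],
    }
}

print(find_index_scan(explain_result["plan"]))
```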
[2021-03-11 09:55:48,884] - [base_gsi:489] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:55:48,884] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 09:55:48,884] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2021-03-11 09:55:48,885] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 09:55:48,885] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 09:55:48,885] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:55:48,885] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
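The tuq_generators lines show the N1QL WHERE clause being rewritten into a Python predicate (`doc["job_title"] == "Sales"`) that is applied to locally generated documents to compute the expected result set. A sketch of that idea, with an illustrative document sample rather than the actual dataset:

```python
# Illustrative documents; the real test generates "employee" docs per day.
docs = [
    {"name": "emp01", "job_title": "Sales"},
    {"name": "emp02", "job_title": "Engineer"},
    {"name": "emp03", "job_title": "Sales"},
]

# where_clause from the log: doc["job_title"] == "Sales"
expected = [doc for doc in docs if doc["job_title"] == "Sales"]

# The index scan's actual result is later compared against this list.
print(len(expected))
```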
[2021-03-11 09:55:49,884] - [task:3110] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:55:49,889] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:55:49,893] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2021-03-11 09:55:50,011] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 110.152712ms
[2021-03-11 09:55:50,011] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 09:55:50,012] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 09:55:50,878] - [task:3121] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:55:50,878] - [task:3151] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
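The rest_client records show the N1QL statement sent as a form-encoded `statement` parameter, with `scan_consistency=request_plus` appended so the scan waits for all mutations up to request time. The exact encoding seen in the log can be reproduced with the standard library:

```python
from urllib.parse import urlencode, quote_plus

# Same statement and consistency level as the query at task:3110 above.
params = {
    "statement": 'SELECT * FROM default WHERE  job_title = "Sales" ',
    "scan_consistency": "request_plus",
}

# quote_plus encodes spaces as '+', '*' as %2A, '"' as %22 -- matching
# the "query params" line emitted by rest_client.
encoded = urlencode(params, quote_via=quote_plus)
print(encoded)
```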
[2021-03-11 09:55:51,884] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee5279ebdb06c24b20b34f4963f588d225job_title'
[2021-03-11 09:55:51,888] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee5279ebdb06c24b20b34f4963f588d225job_title%27
[2021-03-11 09:55:51,899] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.510475ms
[2021-03-11 09:55:51,903] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee5279ebdb06c24b20b34f4963f588d225job_title ON default USING GSI
[2021-03-11 09:55:51,908] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee5279ebdb06c24b20b34f4963f588d225job_title+ON+default+USING+GSI
[2021-03-11 09:55:51,945] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 35.205713ms
[2021-03-11 09:55:51,950] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee5279ebdb06c24b20b34f4963f588d225job_title'
[2021-03-11 09:55:51,958] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee5279ebdb06c24b20b34f4963f588d225job_title%27
[2021-03-11 09:55:51,961] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.571295ms
[2021-03-11 09:55:51,991] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:51,996] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:51,996] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:52,497] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:52,506] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:55:52,507] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 09:55:52,585] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:55:52,595] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:52,598] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:52,598] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:53,084] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:53,093] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:55:53,093] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 09:55:53,179] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:55:53,192] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:55:53,192] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 09:55:53,192] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:55:53,196] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:55:53,201] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:55:53,211] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 8.404181ms
[2021-03-11 09:55:53,215] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:55:53,219] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 09:55:53,263] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 40.680582ms
[2021-03-11 09:55:53,278] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:55:53,278] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 21.0483432916893, 'mem_free': 13668724736, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:55:53,278] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:55:53,283] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:53,284] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:53,749] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:53,756] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:53,757] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:54,213] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:54,219] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:54,220] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:54,788] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:55:54,796] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:55:54,796] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:55:55,620] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:00,867] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #4 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:56:00,897] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 09:56:02,131] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 09:56:02,134] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 09:56:02,134] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 09:56:02,140] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,146] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,152] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,158] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,176] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:56:02,176] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,180] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:56:02,184] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:56:02,186] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,195] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:56:02,200] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:56:02,200] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,208] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:56:02,213] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:56:02,213] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,219] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:56:02,219] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #4 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:56:02,220] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 09:56:02,223] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 4 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_4
ok

----------------------------------------------------------------------
Ran 1 test in 77.072s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_5

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 5, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_5'}
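Everything passed via `-t key=value` arrives in the Test Input params dict as a string ('True', '.5', '1'), so tests coerce values themselves. A best-effort illustration of that coercion (the `coerce` helper is hypothetical, not testrunner code):

```python
def coerce(value):
    """Convert a testrunner string param to bool, int, or float when it
    parses as one; otherwise return the string unchanged."""
    if value in ("True", "False"):
        return value == "True"
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        return value

print(coerce("True"), coerce(".5"), coerce("1"), coerce("plasma"))
```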
[2021-03-11 09:56:02,287] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:02,288] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:02,733] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:02,740] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,752] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:56:02,761] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:56:02,761] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #5 test_multi_create_query_explain_drop_index==============
[2021-03-11 09:56:02,775] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,782] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,788] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,793] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:56:02,798] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,811] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:56:02,812] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,815] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:56:02,819] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:56:02,819] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,823] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:56:02,826] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:56:02,827] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,830] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:56:02,834] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:56:02,834] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:56:02,837] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:56:02,838] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:56:03,291] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:56:03,295] - [task:157] INFO -  {'uptime': '379', 'memoryTotal': 15466930176, 'memoryFree': 13792165888, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 09:56:03,298] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:56:03,298] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:56:03,299] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:56:03,306] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:56:03,318] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:56:03,319] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:56:03,322] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:56:03,322] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
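The 400 from /node/controller/setupServices is expected on re-initialization: a provisioned node rejects service changes, and rest_client downgrades that specific error to an informational message rather than failing the test. A sketch of that decision (the function name is illustrative):

```python
def setup_services_ok(status_code, body):
    """Treat setupServices as successful if it returned 200, or if it
    failed only because the node already has its services assigned."""
    if status_code == 200:
        return True
    if status_code == 400 and "cannot change node services" in body:
        return True
    return False

print(setup_services_ok(
    400, '["cannot change node services after cluster is provisioned"]'))
```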
[2021-03-11 09:56:03,323] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:56:03,323] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:56:03,388] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:56:03,393] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:03,393] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:03,856] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:03,869] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:56:03,945] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:56:03,946] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:56:03,951] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:03,954] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:03,960] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:56:03,986] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:56:03,990] - [task:157] INFO -  {'uptime': '376', 'memoryTotal': 15466930176, 'memoryFree': 13773385728, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:56:03,994] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:56:03,999] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:56:03,999] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:56:04,068] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:56:04,075] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:04,075] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:04,532] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:04,545] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:56:04,630] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:56:04,631] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:56:04,635] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:04,638] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:04,644] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:56:04,672] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:56:04,676] - [task:157] INFO -  {'uptime': '377', 'memoryTotal': 15466930176, 'memoryFree': 13787791360, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:56:04,680] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:56:04,686] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:56:04,687] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:56:04,756] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:56:04,763] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:04,763] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:05,218] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:05,230] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:56:05,328] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:56:05,329] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:56:05,334] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:05,336] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:05,342] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:56:05,364] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:56:05,368] - [task:157] INFO -  {'uptime': '377', 'memoryTotal': 15466930176, 'memoryFree': 13770104832, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:56:05,374] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:56:05,379] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:56:05,379] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:56:05,447] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:56:05,452] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:05,453] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:05,914] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:05,926] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:56:06,013] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:56:06,014] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:56:06,018] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:06,021] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:56:06,027] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:56:06,055] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:56:06,198] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:56:11,202] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:56:11,219] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:56:11,224] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:11,224] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:11,659] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:11,670] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:56:11,748] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:56:11,749] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:56:11,750] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:56:12,071] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:56:12,079] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
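The bucket is created with a single form-encoded POST to /pools/default/buckets; decoding the parameter string from the log back into a dict makes the settings easier to inspect:

```python
from urllib.parse import parse_qs

# Parameter string copied from the rest_client:2800 record above.
raw = ("name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650"
       "&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3"
       "&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive"
       "&storageBackend=couchstore")

# parse_qs returns lists of values; flatten since every key appears once.
params = {k: v[0] for k, v in parse_qs(raw).items()}
print(params["ramQuotaMB"], params["storageBackend"])
```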
[2021-03-11 09:56:12,080] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:12,312] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,365] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,419] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:56:12,419] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:12,467] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,517] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,559] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:56:12,559] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:12,684] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,737] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,765] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:56:12,765] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:12,868] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,922] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:12,952] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:56:12,952] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:13,046] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:13,095] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:13,124] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:56:13,124] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:56:13,260] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:13,311] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:13,339] - [task:386] WARNING - vbucket map not ready after try 5
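The repeated "vbucket map not ready after try N" warnings come from polling the freshly created bucket until its vbucket map is populated; setup proceeds once the retries are exhausted or the map appears. A generic sketch of that poll loop (not the actual task:386 implementation):

```python
import time

def wait_until(check, retries=6, delay=0.0):
    """Poll `check` until it returns a truthy value or retries run out,
    returning the value on success and None on timeout."""
    for attempt in range(retries):
        result = check()
        if result:
            return result
        time.sleep(delay)
    return None

# Simulate a vbucket map that becomes available on the fourth poll.
polls = iter([None, None, None, {"vBucketMap": [[0, 1]]}])
print(wait_until(lambda: next(polls)) is not None)
```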
[2021-03-11 09:56:13,342] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:56:13,351] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:13,351] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:13,861] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:13,866] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:13,866] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:14,315] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:14,320] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:14,320] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:15,073] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:15,080] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:15,080] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:15,853] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:20,539] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:56:20,540] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 27.65208647561589, 'mem_free': 13845745664, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:56:20,540] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:56:20,547] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:56:20,547] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:56:20,556] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:56:20,556] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:56:20,565] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:56:20,565] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:56:20,574] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:56:20,574] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:56:20,581] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:56:20,596] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:56:20,596] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:56:20,596] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:56:25,605] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:56:25,608] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:56:25,608] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:56:26,023] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:56:27,020] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:56:27,053] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:56:29,004] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:56:29,019] - [newtuq:82] INFO - {'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2021-03-11 09:56:30,020] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:56:30,020] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:56:30,020] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:57:00,054] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:57:00,058] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:57:00,141] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 82.099445ms
[2021-03-11 09:57:00,146] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:57:00,149] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:57:00,737] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 585.722399ms
[2021-03-11 09:57:00,744] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:57:00,752] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:57:00,773] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 19.304686ms
[2021-03-11 09:57:00,799] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:00,799] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:01,433] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:01,454] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:02,453] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee416c7048fcaf40c2ac764755828b5933job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:57:02,457] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee416c7048fcaf40c2ac764755828b5933job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:57:02,508] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 48.883101ms
[2021-03-11 09:57:02,508] - [base_gsi:216] INFO - BUILD INDEX on default(employee416c7048fcaf40c2ac764755828b5933job_title) USING GSI
[2021-03-11 09:57:03,514] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee416c7048fcaf40c2ac764755828b5933job_title) USING GSI
[2021-03-11 09:57:03,518] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee416c7048fcaf40c2ac764755828b5933job_title%29+USING+GSI
[2021-03-11 09:57:03,550] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 29.38185ms
[2021-03-11 09:57:04,556] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee416c7048fcaf40c2ac764755828b5933job_title'
[2021-03-11 09:57:04,560] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee416c7048fcaf40c2ac764755828b5933job_title%27
[2021-03-11 09:57:04,573] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.995538ms
[2021-03-11 09:57:05,582] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee416c7048fcaf40c2ac764755828b5933job_title'
[2021-03-11 09:57:05,589] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee416c7048fcaf40c2ac764755828b5933job_title%27
[2021-03-11 09:57:05,602] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 11.06512ms
[2021-03-11 09:57:06,186] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:57:06,220] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:08,968] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:57:09,606] - [task:3110] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:09,614] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:57:09,618] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2021-03-11 09:57:09,621] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.96971ms
[2021-03-11 09:57:09,621] - [task:3120] INFO - {'requestID': '7c196ffb-a762-46b7-adbf-71817361b7dd', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee416c7048fcaf40c2ac764755828b5933job_title', 'index_id': 'e8a3e73e6de2e2ca', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '1.96971ms', 'executionTime': '1.895283ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
[2021-03-11 09:57:09,622] - [task:3121] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:09,622] - [task:3151] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:09,622] - [base_gsi:489] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:57:09,623] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 09:57:09,623] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2021-03-11 09:57:09,623] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 09:57:09,624] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 09:57:09,624] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:57:09,624] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:57:10,623] - [task:3110] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:10,627] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:57:10,631] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2021-03-11 09:57:10,852] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 201.04945ms
[2021-03-11 09:57:10,852] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 09:57:10,853] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 09:57:12,512] - [task:3121] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:12,512] - [task:3151] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:57:13,518] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee416c7048fcaf40c2ac764755828b5933job_title'
[2021-03-11 09:57:13,522] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee416c7048fcaf40c2ac764755828b5933job_title%27
[2021-03-11 09:57:13,533] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.068008ms
[2021-03-11 09:57:13,536] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee416c7048fcaf40c2ac764755828b5933job_title ON default USING GSI
[2021-03-11 09:57:13,540] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee416c7048fcaf40c2ac764755828b5933job_title+ON+default+USING+GSI
[2021-03-11 09:57:13,589] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 46.567171ms
[2021-03-11 09:57:13,595] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee416c7048fcaf40c2ac764755828b5933job_title'
[2021-03-11 09:57:13,602] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee416c7048fcaf40c2ac764755828b5933job_title%27
[2021-03-11 09:57:13,609] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.5777ms
[2021-03-11 09:57:13,632] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:13,636] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:13,636] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:14,229] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:14,239] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:57:14,239] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 09:57:14,333] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:14,346] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:14,350] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:14,350] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:14,923] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:14,933] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:57:14,933] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 09:57:15,028] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:15,038] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:15,038] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 09:57:15,038] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:57:15,042] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:57:15,046] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:57:15,057] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.005583ms
[2021-03-11 09:57:15,061] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:57:15,065] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 09:57:15,109] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 41.470266ms
[2021-03-11 09:57:15,128] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:57:15,129] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 37.07234997195737, 'mem_free': 13609123840, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:57:15,129] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:57:15,136] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:15,137] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:15,724] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:15,729] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:15,729] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:16,632] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:16,640] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:16,640] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:17,519] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:17,527] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:17,527] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:18,385] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:23,169] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #5 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:57:23,201] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 09:57:23,590] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 09:57:23,593] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 09:57:23,593] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 09:57:23,598] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:23,604] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:23,610] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:23,615] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:23,628] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:57:23,628] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:23,632] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:57:23,636] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:57:23,636] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:23,640] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:57:23,644] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:57:23,645] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:23,648] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:57:23,652] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:57:23,652] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:23,655] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:57:23,656] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:57:23,656] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 09:57:23,658] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 5 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_5
ok

----------------------------------------------------------------------
Ran 1 test in 81.380s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_6

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,delete_ops_per=.2,update_ops_per=.2,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'delete_ops_per': '.2', 'update_ops_per': '.2', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 6, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_6'}
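The "Test Input params" dict above is derived from the comma-separated `-t` argument in the invocation line. A minimal illustrative sketch of that mapping (hypothetical helper, not testrunner's actual parser — the real one also merges `.ini` and `-p` values):

```python
# Hypothetical sketch: split a testrunner "-t" spec
# (module.Class.method,key=value,key=value,...) into the test path
# and a params dict like the "Test Input params" block above.
def parse_test_spec(spec: str):
    parts = spec.split(",")
    test_path, kv_pairs = parts[0], parts[1:]
    params = {}
    for pair in kv_pairs:
        # partition keeps values containing ':' (e.g. groups=a:b:c) intact
        key, _, value = pair.partition("=")
        params[key] = value
    return test_path, params

path, params = parse_test_spec(
    "gsi.indexscans_gsi.SecondaryIndexingScanTests."
    "test_multi_create_query_explain_drop_index,"
    "groups=simple:equals:no_orderby_groupby:range,"
    "dataset=default,doc-per-day=1,scan_consistency=request_plus"
)
```

Note that all values stay strings (e.g. `'doc-per-day': '1'`), matching how they appear in the logged params dict.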
[2021-03-11 09:57:23,720] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:23,720] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:24,228] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:24,235] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,247] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:57:24,256] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:57:24,257] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #6 test_multi_create_query_explain_drop_index==============
[2021-03-11 09:57:24,301] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:24,308] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:24,316] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:24,324] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:57:24,329] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,344] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:57:24,345] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,348] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:57:24,352] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:57:24,353] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,356] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:57:24,360] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:57:24,360] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,364] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:57:24,367] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:57:24,368] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:57:24,371] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:57:24,372] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:57:24,723] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:57:24,728] - [task:157] INFO -  {'uptime': '461', 'memoryTotal': 15466930176, 'memoryFree': 13743685632, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 4032}
[2021-03-11 09:57:24,731] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:57:24,731] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:57:24,731] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:57:24,746] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:57:24,754] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:57:24,755] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:57:24,758] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:57:24,758] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 09:57:24,759] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:57:24,759] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:57:24,826] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:57:24,834] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:24,834] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:25,347] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:25,360] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:57:25,460] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:25,461] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:57:25,466] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:25,468] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:25,476] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:57:25,504] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:57:25,508] - [task:157] INFO -  {'uptime': '462', 'memoryTotal': 15466930176, 'memoryFree': 13729648640, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:57:25,513] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:57:25,519] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:57:25,519] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:57:25,588] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:57:25,594] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:25,594] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:26,111] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:26,123] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:57:26,219] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:26,221] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:57:26,226] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:26,228] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:26,235] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:57:26,259] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:57:26,264] - [task:157] INFO -  {'uptime': '463', 'memoryTotal': 15466930176, 'memoryFree': 13738594304, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:57:26,268] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:57:26,273] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:57:26,273] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:57:26,344] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:57:26,353] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:26,353] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:26,858] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:26,872] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:57:26,959] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:26,960] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:57:26,965] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:26,967] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:26,974] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:57:26,993] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:57:26,996] - [task:157] INFO -  {'uptime': '463', 'memoryTotal': 15466930176, 'memoryFree': 13772677120, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:57:27,002] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:57:27,007] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:57:27,007] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:57:27,075] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:57:27,081] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:27,082] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:27,582] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:27,595] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:57:27,680] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:27,681] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:57:27,686] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:27,688] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:57:27,694] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:57:27,722] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:57:27,871] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:57:32,871] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:57:32,886] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:57:32,892] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:32,892] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:33,394] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:33,404] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:57:33,497] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:57:33,498] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:57:33,498] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:57:33,738] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:57:33,745] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 09:57:33,745] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:33,945] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,000] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,142] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:57:34,142] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:34,195] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,244] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,272] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:57:34,272] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:34,374] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,421] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,446] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:57:34,446] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:34,492] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,540] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,624] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:57:34,625] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:34,671] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,718] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,744] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:57:34,745] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:57:34,859] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,928] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:34,957] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 09:57:34,960] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:57:34,969] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:34,969] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:35,536] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:35,543] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:35,544] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:36,137] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:36,148] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:36,148] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:37,000] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:37,008] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:37,008] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:37,853] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:42,931] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:57:42,932] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 29.03140670361573, 'mem_free': 13823733760, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:57:42,932] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:57:42,938] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:57:42,939] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:57:42,946] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:57:42,947] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:57:42,955] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:57:42,955] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:57:42,964] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:57:42,964] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:57:42,971] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:42,992] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:57:42,992] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:57:42,993] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:57:48,005] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:57:48,008] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:57:48,009] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:57:48,467] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:57:49,467] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:57:49,500] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:57:51,396] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:57:51,417] - [newtuq:82] INFO - {'update': {'start': 0, 'end': 0}, 'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2021-03-11 09:57:52,813] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:57:52,813] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:57:52,813] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:58:22,844] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:58:22,848] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:58:22,932] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 81.728723ms
[2021-03-11 09:58:22,936] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:58:22,940] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:58:23,460] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 518.212818ms
[2021-03-11 09:58:23,472] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:58:23,480] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:58:23,498] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 17.031791ms
[2021-03-11 09:58:23,526] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:23,526] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:24,214] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:24,233] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:58:25,233] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee1565b7e8ecf54f4fbdddce2924f87612job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:58:25,237] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee1565b7e8ecf54f4fbdddce2924f87612job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:58:25,296] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 57.340925ms
[2021-03-11 09:58:25,298] - [base_gsi:216] INFO - BUILD INDEX on default(employee1565b7e8ecf54f4fbdddce2924f87612job_title) USING GSI
[2021-03-11 09:58:26,303] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee1565b7e8ecf54f4fbdddce2924f87612job_title) USING GSI
[2021-03-11 09:58:26,307] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee1565b7e8ecf54f4fbdddce2924f87612job_title%29+USING+GSI
[2021-03-11 09:58:26,336] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 28.006304ms
[2021-03-11 09:58:27,344] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1565b7e8ecf54f4fbdddce2924f87612job_title'
[2021-03-11 09:58:27,348] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1565b7e8ecf54f4fbdddce2924f87612job_title%27
[2021-03-11 09:58:27,359] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.195479ms
[2021-03-11 09:58:28,365] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1565b7e8ecf54f4fbdddce2924f87612job_title'
[2021-03-11 09:58:28,369] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1565b7e8ecf54f4fbdddce2924f87612job_title%27
[2021-03-11 09:58:28,381] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.899793ms
[2021-03-11 09:58:29,000] - [basetestcase:2746] INFO - update 0.0 to default documents...
[2021-03-11 09:58:29,036] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:58:30,103] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:58:30,544] - [basetestcase:2746] INFO - delete 0.0 to default documents...
[2021-03-11 09:58:30,583] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:58:31,157] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:58:31,766] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:58:31,804] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:58:35,477] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:58:36,498] - [task:3110] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:36,505] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:58:36,509] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2021-03-11 09:58:36,516] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 4.949656ms
[2021-03-11 09:58:36,516] - [task:3120] INFO - {'requestID': '5121dfff-ed81-4c93-8079-4e97a82245e4', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee1565b7e8ecf54f4fbdddce2924f87612job_title', 'index_id': 'eb49fe49a29e7f46', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '4.949656ms', 'executionTime': '4.867858ms', 'resultCount': 1, 'resultSize': 698, 'serviceLoad': 6}}
[2021-03-11 09:58:36,516] - [task:3121] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:36,517] - [task:3151] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:36,517] - [base_gsi:489] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:58:36,517] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 09:58:36,518] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2021-03-11 09:58:36,518] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 09:58:36,518] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 09:58:36,519] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:58:36,519] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2021-03-11 09:58:37,517] - [task:3110] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:37,522] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2021-03-11 09:58:37,526] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2021-03-11 09:58:37,738] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 197.650114ms
[2021-03-11 09:58:37,738] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 09:58:37,739] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 09:58:39,251] - [task:3121] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:39,251] - [task:3151] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2021-03-11 09:58:40,257] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1565b7e8ecf54f4fbdddce2924f87612job_title'
[2021-03-11 09:58:40,262] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1565b7e8ecf54f4fbdddce2924f87612job_title%27
[2021-03-11 09:58:40,272] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.59433ms
[2021-03-11 09:58:40,276] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee1565b7e8ecf54f4fbdddce2924f87612job_title ON default USING GSI
[2021-03-11 09:58:40,280] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee1565b7e8ecf54f4fbdddce2924f87612job_title+ON+default+USING+GSI
[2021-03-11 09:58:40,318] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 36.612705ms
[2021-03-11 09:58:40,324] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1565b7e8ecf54f4fbdddce2924f87612job_title'
[2021-03-11 09:58:40,328] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1565b7e8ecf54f4fbdddce2924f87612job_title%27
[2021-03-11 09:58:40,332] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.506627ms
[2021-03-11 09:58:40,351] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:58:40,356] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:40,356] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:41,078] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:41,089] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:58:41,089] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 09:58:41,213] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:41,223] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:58:41,226] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:41,227] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:41,897] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:41,907] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 09:58:41,907] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 09:58:42,036] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:42,048] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:58:42,049] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 09:58:42,049] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:58:42,053] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:58:42,057] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:58:42,069] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.586109ms
[2021-03-11 09:58:42,073] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 09:58:42,077] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 09:58:42,129] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 50.823389ms
[2021-03-11 09:58:42,155] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:58:42,156] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 31.58041179744018, 'mem_free': 13532446720, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:58:42,156] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:58:42,165] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:42,166] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:42,852] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:42,857] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:42,858] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:43,645] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:43,656] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:43,657] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:44,774] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:44,785] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:44,785] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:45,918] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:52,592] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #6 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:58:52,622] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 09:58:53,233] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 09:58:53,236] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 09:58:53,236] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 09:58:53,242] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:53,248] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:53,254] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:53,259] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:53,272] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:58:53,272] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:53,276] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:58:53,280] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:58:53,280] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:53,284] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:58:53,288] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:58:53,288] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:53,291] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:58:53,295] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:58:53,295] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:53,299] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:58:53,299] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2021-03-11 09:58:53,300] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 09:58:53,302] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_6
ok

----------------------------------------------------------------------
Ran 1 test in 89.593s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_7

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=simple,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 7, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_7'}
[2021-03-11 09:58:53,399] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:53,400] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:54,037] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:54,045] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,056] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 09:58:54,066] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 09:58:54,066] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #7 test_multi_create_drop_index==============
[2021-03-11 09:58:54,079] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:54,085] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:54,091] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:54,098] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 09:58:54,102] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,118] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 09:58:54,118] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,122] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 09:58:54,126] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 09:58:54,126] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,129] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 09:58:54,134] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 09:58:54,134] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,138] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 09:58:54,141] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 09:58:54,141] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 09:58:54,145] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 09:58:54,145] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 09:58:54,403] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 09:58:54,407] - [task:157] INFO -  {'uptime': '550', 'memoryTotal': 15466930176, 'memoryFree': 13697359872, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 4032}
[2021-03-11 09:58:54,410] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 09:58:54,410] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 09:58:54,410] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 09:58:54,420] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 09:58:54,430] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 09:58:54,431] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 09:58:54,437] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 09:58:54,438] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 09:58:54,438] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 09:58:54,439] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 09:58:54,515] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:58:54,520] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:54,520] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:55,124] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:55,139] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:58:55,242] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:55,243] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:58:55,249] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:55,253] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:55,260] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:58:55,288] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 09:58:55,292] - [task:157] INFO -  {'uptime': '548', 'memoryTotal': 15466930176, 'memoryFree': 13681561600, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:58:55,297] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:58:55,302] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 09:58:55,302] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 09:58:55,368] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:58:55,374] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:55,374] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:56,001] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:56,014] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:58:56,117] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:56,118] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:58:56,128] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:56,130] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:56,137] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:58:56,158] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 09:58:56,162] - [task:157] INFO -  {'uptime': '548', 'memoryTotal': 15466930176, 'memoryFree': 13691797504, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:58:56,167] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:58:56,172] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 09:58:56,172] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 09:58:56,238] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:58:56,248] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:56,248] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:56,897] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:56,909] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:58:57,013] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:57,014] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:58:57,019] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:57,021] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:57,027] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:58:57,045] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 09:58:57,049] - [task:157] INFO -  {'uptime': '549', 'memoryTotal': 15466930176, 'memoryFree': 13676417024, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 09:58:57,055] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 09:58:57,061] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 09:58:57,061] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 09:58:57,129] - [rest_client:1046] INFO - --> status:True
[2021-03-11 09:58:57,135] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:58:57,135] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:58:57,772] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:58:57,784] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:58:57,886] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:58:57,887] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:58:57,892] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:57,894] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 09:58:57,900] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 09:58:57,928] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 09:58:58,074] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:59:03,077] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 09:59:03,091] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 09:59:03,096] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:03,097] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:03,732] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:03,742] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 09:59:03,849] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 09:59:03,850] - [remote_util:5201] INFO - b'ok'
[2021-03-11 09:59:03,850] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 09:59:03,945] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 09:59:03,959] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 09:59:03,959] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:04,180] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,249] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,411] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 09:59:04,412] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:04,458] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,507] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,533] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 09:59:04,534] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:04,593] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,710] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,736] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 09:59:04,736] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:04,783] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,829] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:04,856] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 09:59:04,856] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:04,966] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:05,029] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:05,064] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 09:59:05,064] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 09:59:05,127] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:05,179] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:05,272] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 09:59:05,276] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #7 test_multi_create_drop_index ==============
[2021-03-11 09:59:05,285] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:05,285] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:05,938] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:05,943] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:05,943] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:06,789] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:06,797] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:06,797] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:07,817] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:07,825] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:07,825] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:08,830] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:14,466] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 09:59:14,466] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 28.13091695969705, 'mem_free': 13799284736, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 09:59:14,467] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 09:59:14,473] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:59:14,474] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:59:14,483] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:59:14,483] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:59:14,491] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:59:14,492] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:59:14,500] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 09:59:14,500] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 09:59:14,506] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:59:14,522] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 09:59:14,522] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 09:59:14,523] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 09:59:19,530] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 09:59:19,534] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:19,535] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:20,122] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:21,103] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 09:59:21,136] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 09:59:23,897] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 09:59:23,934] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 09:59:23,934] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 09:59:23,934] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 09:59:53,956] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:59:53,960] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:59:54,047] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 85.119164ms
[2021-03-11 09:59:54,052] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 09:59:54,056] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 09:59:54,639] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 581.878126ms
[2021-03-11 09:59:54,651] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 09:59:54,656] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 09:59:54,671] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 13.621174ms
[2021-03-11 09:59:54,691] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 09:59:54,692] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 09:59:55,418] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 09:59:55,433] - [base_gsi:260] INFO - []
[2021-03-11 09:59:56,439] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee12d94fa3f7d145848be041035b674625job_title ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:59:56,444] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee12d94fa3f7d145848be041035b674625job_title+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:59:56,513] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 66.975731ms
[2021-03-11 09:59:56,520] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee12d94fa3f7d145848be041035b674625join_yr ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2021-03-11 09:59:56,527] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee12d94fa3f7d145848be041035b674625join_yr+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 09:59:56,584] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 54.181786ms
[2021-03-11 09:59:56,584] - [base_gsi:216] INFO - BUILD INDEX on default(employee12d94fa3f7d145848be041035b674625job_title,employee12d94fa3f7d145848be041035b674625join_yr) USING GSI
[2021-03-11 09:59:57,590] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee12d94fa3f7d145848be041035b674625job_title,employee12d94fa3f7d145848be041035b674625join_yr) USING GSI
[2021-03-11 09:59:57,594] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee12d94fa3f7d145848be041035b674625job_title%2Cemployee12d94fa3f7d145848be041035b674625join_yr%29+USING+GSI
[2021-03-11 09:59:57,631] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 35.697022ms
[2021-03-11 09:59:58,637] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625job_title'
[2021-03-11 09:59:58,641] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625job_title%27
[2021-03-11 09:59:58,653] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.371694ms
[2021-03-11 09:59:59,658] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625job_title'
[2021-03-11 09:59:59,662] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625job_title%27
[2021-03-11 09:59:59,673] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.741379ms
[2021-03-11 09:59:59,678] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625join_yr'
[2021-03-11 09:59:59,682] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625join_yr%27
[2021-03-11 09:59:59,684] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.13923ms
[2021-03-11 10:00:00,691] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625job_title'
[2021-03-11 10:00:00,695] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625job_title%27
[2021-03-11 10:00:00,704] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.833486ms
[2021-03-11 10:00:00,708] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee12d94fa3f7d145848be041035b674625job_title ON default USING GSI
[2021-03-11 10:00:00,712] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee12d94fa3f7d145848be041035b674625job_title+ON+default+USING+GSI
[2021-03-11 10:00:00,771] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 56.469539ms
[2021-03-11 10:00:00,781] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625join_yr'
[2021-03-11 10:00:00,787] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625join_yr%27
[2021-03-11 10:00:00,791] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 2.513414ms
[2021-03-11 10:00:00,795] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee12d94fa3f7d145848be041035b674625join_yr ON default USING GSI
[2021-03-11 10:00:00,800] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee12d94fa3f7d145848be041035b674625join_yr+ON+default+USING+GSI
[2021-03-11 10:00:00,840] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 38.793807ms
[2021-03-11 10:00:00,846] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625job_title'
[2021-03-11 10:00:00,852] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625job_title%27
[2021-03-11 10:00:00,863] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.179816ms
[2021-03-11 10:00:00,871] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee12d94fa3f7d145848be041035b674625join_yr'
[2021-03-11 10:00:00,876] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee12d94fa3f7d145848be041035b674625join_yr%27
[2021-03-11 10:00:00,879] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.500105ms
[2021-03-11 10:00:00,900] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:00:00,904] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:00,904] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:01,544] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:01,554] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:00:01,555] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:00:01,656] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:01,667] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:00:01,672] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:01,672] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:02,365] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:02,375] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:00:02,376] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:00:02,486] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:02,500] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:00:02,500] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:00:02,501] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:00:02,505] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:00:02,510] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:00:02,521] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.694394ms
[2021-03-11 10:00:02,526] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:00:02,530] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 10:00:02,576] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 44.623946ms
[2021-03-11 10:00:02,594] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:00:02,595] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.16311300639659, 'mem_free': 13516353536, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:00:02,595] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:00:02,601] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:02,601] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:03,280] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:03,287] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:03,287] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:04,227] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:04,238] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:04,239] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:05,276] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:05,285] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:05,285] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:06,339] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:12,036] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #7 test_multi_create_drop_index ==============
[2021-03-11 10:00:12,204] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 10:00:13,427] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 10:00:13,430] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 10:00:13,430] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 10:00:13,436] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:13,442] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:13,448] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:13,453] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:13,467] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:00:13,467] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:13,471] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:00:13,474] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:00:13,474] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:13,479] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:00:13,487] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:00:13,487] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:13,492] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:00:13,497] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:00:13,499] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:13,504] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:00:13,505] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #7 test_multi_create_drop_index ==============
[2021-03-11 10:00:13,506] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:00:13,511] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_7
ok

----------------------------------------------------------------------
Ran 1 test in 80.124s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_8

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=composite,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'composite', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 8, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_8'}
[2021-03-11 10:00:13,545] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:13,545] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:14,302] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:14,310] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,322] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 10:00:14,331] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 10:00:14,331] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #8 test_multi_create_drop_index==============
[2021-03-11 10:00:14,346] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:14,353] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:14,358] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:14,364] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:00:14,368] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,383] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:00:14,383] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,387] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:00:14,391] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:00:14,391] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,394] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:00:14,398] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:00:14,398] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,402] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:00:14,406] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:00:14,406] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:00:14,410] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:00:14,410] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 10:00:14,548] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 10:00:14,553] - [task:157] INFO -  {'uptime': '631', 'memoryTotal': 15466930176, 'memoryFree': 13671505920, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 10:00:14,557] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 10:00:14,557] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 10:00:14,558] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 10:00:14,567] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 10:00:14,578] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 10:00:14,578] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 10:00:14,583] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 10:00:14,584] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 10:00:14,584] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 10:00:14,584] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 10:00:14,652] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:00:14,657] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:14,658] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:15,341] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:15,354] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:00:15,460] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:15,461] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:00:15,467] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:15,469] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:15,476] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:00:15,509] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 10:00:15,514] - [task:157] INFO -  {'uptime': '629', 'memoryTotal': 15466930176, 'memoryFree': 13658157056, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:00:15,519] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:00:15,524] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 10:00:15,524] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 10:00:15,591] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:00:15,599] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:15,599] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:16,272] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:16,285] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:00:16,392] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:16,393] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:00:16,398] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:16,401] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:16,408] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:00:16,430] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 10:00:16,435] - [task:157] INFO -  {'uptime': '629', 'memoryTotal': 15466930176, 'memoryFree': 13666865152, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:00:16,440] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:00:16,445] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 10:00:16,445] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 10:00:16,516] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:00:16,521] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:16,521] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:17,116] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:17,129] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:00:17,231] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:17,232] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:00:17,238] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:17,240] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:17,247] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:00:17,275] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 10:00:17,283] - [task:157] INFO -  {'uptime': '630', 'memoryTotal': 15466930176, 'memoryFree': 13656608768, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:00:17,290] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:00:17,298] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 10:00:17,298] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 10:00:17,367] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:00:17,370] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:17,371] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:17,966] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:17,978] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:00:18,101] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:18,103] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:00:18,107] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:18,109] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:00:18,115] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:00:18,140] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 10:00:18,275] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:00:23,281] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 10:00:23,295] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 10:00:23,300] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:23,300] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:23,914] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:23,925] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:00:24,027] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:00:24,028] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:00:24,028] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 10:00:24,157] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:00:24,172] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 10:00:24,172] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:24,373] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,448] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,491] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:00:24,492] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:24,541] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,742] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,768] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:00:24,769] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:24,816] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,864] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:24,888] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:00:24,889] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:24,937] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,105] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,130] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:00:25,131] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:25,177] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,226] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,254] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:00:25,254] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:00:25,376] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,424] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:25,476] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:00:25,480] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #8 test_multi_create_drop_index ==============
[2021-03-11 10:00:25,488] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:25,489] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:26,105] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:26,110] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:26,110] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:26,883] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:26,894] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:26,894] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:27,931] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:27,939] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:27,939] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:29,000] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:34,991] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:00:34,992] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 27.46838848533764, 'mem_free': 13749092352, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:00:34,992] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:00:34,999] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:00:34,999] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:00:35,009] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:00:35,009] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:00:35,017] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:00:35,017] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:00:35,026] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:00:35,026] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:00:35,033] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:00:35,052] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 10:00:35,052] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 10:00:35,053] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:00:40,067] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 10:00:40,073] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:00:40,073] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:00:40,680] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:00:41,726] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 10:00:41,756] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:00:44,496] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 10:00:44,531] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:00:44,531] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 10:00:44,532] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 10:01:14,562] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:01:14,567] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:01:14,648] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 79.518566ms
[2021-03-11 10:01:14,652] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 10:01:14,656] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 10:01:15,211] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 553.459542ms
[2021-03-11 10:01:15,217] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:01:15,224] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:01:15,250] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.533849ms
[2021-03-11 10:01:15,279] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:15,280] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:16,090] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:16,111] - [base_gsi:260] INFO - []
[2021-03-11 10:01:17,116] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr ON default(join_yr,job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2021-03-11 10:01:17,120] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr+ON+default%28join_yr%2Cjob_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 10:01:17,167] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 45.436055ms
[2021-03-11 10:01:17,168] - [base_gsi:216] INFO - BUILD INDEX on default(employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr) USING GSI
[2021-03-11 10:01:18,174] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr) USING GSI
[2021-03-11 10:01:18,178] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr%29+USING+GSI
[2021-03-11 10:01:18,199] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 19.295149ms
[2021-03-11 10:01:19,205] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr'
[2021-03-11 10:01:19,209] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr%27
[2021-03-11 10:01:19,221] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.369291ms
[2021-03-11 10:01:20,227] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr'
[2021-03-11 10:01:20,230] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr%27
[2021-03-11 10:01:20,242] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.31525ms
[2021-03-11 10:01:21,249] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr'
[2021-03-11 10:01:21,253] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr%27
[2021-03-11 10:01:21,262] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.568014ms
[2021-03-11 10:01:21,266] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr ON default USING GSI
[2021-03-11 10:01:21,270] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr+ON+default+USING+GSI
[2021-03-11 10:01:21,308] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 36.035743ms
[2021-03-11 10:01:21,316] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr'
[2021-03-11 10:01:21,321] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee810f3bbb6cfe4d8b9b12897f65ebfd3ejob_title_join_yr%27
[2021-03-11 10:01:21,325] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.511401ms
[2021-03-11 10:01:21,348] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:01:21,352] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:21,352] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:21,992] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:22,001] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:01:22,002] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:01:22,107] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:22,120] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:01:22,124] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:22,124] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:22,758] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:22,768] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:01:22,768] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:01:22,876] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:22,889] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:01:22,889] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:01:22,890] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:01:22,894] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:01:22,900] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:01:22,912] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.337444ms
[2021-03-11 10:01:22,916] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:01:22,921] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 10:01:22,980] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 56.909028ms
[2021-03-11 10:01:23,002] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:01:23,002] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 17.78718898888301, 'mem_free': 13493006336, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:01:23,002] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:01:23,009] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:23,009] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:23,674] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:23,679] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:23,679] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:24,518] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:24,527] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:24,527] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:25,615] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:25,627] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:25,628] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:26,736] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:32,922] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #8 test_multi_create_drop_index ==============
[2021-03-11 10:01:32,954] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 10:01:33,714] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 10:01:33,717] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 10:01:33,718] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 10:01:33,724] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:33,730] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:33,736] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:33,742] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:33,758] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:01:33,758] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:33,762] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:01:33,765] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:01:33,766] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:33,769] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:01:33,773] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:01:33,773] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:33,777] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:01:33,781] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:01:33,781] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:33,784] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:01:33,785] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #8 test_multi_create_drop_index ==============
[2021-03-11 10:01:33,785] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:01:33,788] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_8
ok

----------------------------------------------------------------------
Ran 1 test in 80.253s

OK
test_remove_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_9

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 9, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_9'}
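The params dict above is derived from the comma-separated `key=value` pairs that follow the test path in the `-t` argument of the `./testrunner` invocation. A minimal sketch of that parsing (hypothetical helper, not the actual testrunner code):

```python
def parse_test_params(test_arg):
    # Split a "-t" argument like
    #   "gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query,groups=...,dataset=default"
    # into the dotted test path and a dict of key=value params,
    # mirroring the "Test Input params" dict logged above.
    parts = test_arg.split(",")
    test_path, pairs = parts[0], parts[1:]
    params = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        params[key] = value
    return test_path, params

path, params = parse_test_params(
    "gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query,"
    "groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1"
)
# path  -> the module.Class.test name
# params -> {'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', ...}
```

Note that values themselves may contain colons (e.g. the `groups` spec), which is why the split is on `,` first and only then on the first `=` of each pair.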
[2021-03-11 10:01:33,835] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:33,835] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:34,535] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:34,544] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,557] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 10:01:34,567] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 10:01:34,567] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #9 test_remove_bucket_and_query==============
[2021-03-11 10:01:34,582] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:34,588] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:34,593] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:34,599] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:01:34,603] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,618] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:01:34,618] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,622] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:01:34,626] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:01:34,626] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,630] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:01:34,633] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:01:34,633] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,637] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:01:34,641] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:01:34,641] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:01:34,645] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:01:34,645] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 10:01:34,839] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 10:01:34,843] - [task:157] INFO -  {'uptime': '711', 'memoryTotal': 15466930176, 'memoryFree': 13658984448, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 10:01:34,846] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 10:01:34,846] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 10:01:34,847] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 10:01:34,858] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 10:01:34,870] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 10:01:34,871] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 10:01:34,874] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 10:01:34,875] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 10:01:34,875] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 10:01:34,875] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 10:01:34,942] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:01:34,952] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:34,952] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:35,587] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:35,600] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:01:35,711] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:35,712] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:01:35,717] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:35,719] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:35,726] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:01:35,754] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 10:01:35,759] - [task:157] INFO -  {'uptime': '710', 'memoryTotal': 15466930176, 'memoryFree': 13640601600, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:01:35,765] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:01:35,769] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 10:01:35,770] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 10:01:35,841] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:01:35,846] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:35,846] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:36,516] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:36,529] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:01:36,644] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:36,645] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:01:36,650] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:36,653] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:36,659] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:01:36,683] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 10:01:36,687] - [task:157] INFO -  {'uptime': '710', 'memoryTotal': 15466930176, 'memoryFree': 13655797760, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:01:36,692] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:01:36,696] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 10:01:36,697] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 10:01:36,768] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:01:36,779] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:36,779] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:37,473] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:37,485] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:01:37,591] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:37,592] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:01:37,597] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:37,599] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:37,605] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:01:37,628] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 10:01:37,632] - [task:157] INFO -  {'uptime': '710', 'memoryTotal': 15466930176, 'memoryFree': 13638959104, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:01:37,637] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:01:37,642] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 10:01:37,642] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 10:01:37,713] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:01:37,717] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:37,717] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:38,374] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:38,387] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:01:38,498] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:38,499] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:01:38,504] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:38,507] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:01:38,512] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:01:38,539] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 10:01:38,677] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:01:43,679] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 10:01:43,694] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 10:01:43,701] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:43,701] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:44,389] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:44,401] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:01:44,510] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:01:44,511] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:01:44,511] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 10:01:44,556] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:01:44,566] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 10:01:44,566] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:44,760] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:44,811] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:44,845] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:01:44,846] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:44,902] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:44,988] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,139] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:01:45,140] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:45,188] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,236] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,267] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:01:45,267] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:45,315] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,450] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,513] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:01:45,513] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:45,564] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,619] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,654] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:01:45,654] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:01:45,718] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,782] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:01:45,887] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:01:45,890] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #9 test_remove_bucket_and_query ==============
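The repeated "vbucket map not ready after try N" warnings above reflect a poll-and-retry pattern: the helper keeps re-fetching the bucket's vbucket map until it is populated, logging a warning on each miss. A minimal sketch of that loop (illustrative only, not the testrunner implementation):

```python
import time

def wait_for_vbucket_map(get_map, retries=6, delay=0.2):
    # Poll until get_map() returns a non-empty vbucket map, warning on
    # each failed attempt as in the log above. get_map is any callable
    # returning the current (possibly empty) map.
    for attempt in range(retries):
        vbmap = get_map()
        if vbmap:
            return vbmap
        print("WARNING - vbucket map not ready after try %d" % attempt)
        time.sleep(delay)
    raise TimeoutError("vbucket map still empty after %d tries" % retries)
```

In the run above the map never became ready within the retry budget, but setup continued anyway, so the warnings are non-fatal here.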
[2021-03-11 10:01:45,900] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:45,900] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:46,588] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:46,592] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:46,593] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:47,458] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:47,470] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:47,470] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:48,505] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:48,513] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:01:48,513] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:01:49,613] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:01:55,624] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:01:55,625] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 26.91790040376851, 'mem_free': 13734436864, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:01:55,625] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:01:55,632] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:01:55,632] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:01:55,642] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:01:55,642] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:01:55,651] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:01:55,652] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:01:55,661] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:01:55,661] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:01:55,668] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:01:55,683] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 10:01:55,683] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 10:01:55,684] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:02:00,694] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 10:02:00,698] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:00,698] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:01,359] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:02,361] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 10:02:02,393] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:02:05,138] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 10:02:05,170] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:05,171] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 10:02:05,171] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 10:02:35,202] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:02:35,207] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:02:35,290] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 82.307098ms
[2021-03-11 10:02:35,294] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 10:02:35,298] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 10:02:35,890] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 590.115763ms
[2021-03-11 10:02:35,897] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:02:35,912] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:02:35,932] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 18.043434ms
[2021-03-11 10:02:35,963] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:35,965] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:36,687] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:36,708] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:37,708] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee6fe6c6220ad84339ba9d46765abd0620join_yr ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2021-03-11 10:02:37,712] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee6fe6c6220ad84339ba9d46765abd0620join_yr+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 10:02:37,773] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 59.005265ms
[2021-03-11 10:02:37,774] - [base_gsi:216] INFO - BUILD INDEX on default(employee6fe6c6220ad84339ba9d46765abd0620join_yr) USING GSI
[2021-03-11 10:02:38,779] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee6fe6c6220ad84339ba9d46765abd0620join_yr) USING GSI
[2021-03-11 10:02:38,783] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee6fe6c6220ad84339ba9d46765abd0620join_yr%29+USING+GSI
[2021-03-11 10:02:38,807] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.766257ms
[2021-03-11 10:02:38,818] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6fe6c6220ad84339ba9d46765abd0620join_yr'
[2021-03-11 10:02:38,829] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6fe6c6220ad84339ba9d46765abd0620join_yr%27
[2021-03-11 10:02:38,848] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 16.667763ms
[2021-03-11 10:02:39,854] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6fe6c6220ad84339ba9d46765abd0620join_yr'
[2021-03-11 10:02:39,858] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6fe6c6220ad84339ba9d46765abd0620join_yr%27
[2021-03-11 10:02:39,869] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.350552ms
[2021-03-11 10:02:40,875] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6fe6c6220ad84339ba9d46765abd0620join_yr'
[2021-03-11 10:02:40,879] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6fe6c6220ad84339ba9d46765abd0620join_yr%27
[2021-03-11 10:02:40,888] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.246194ms
[2021-03-11 10:02:41,893] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:02:41,897] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2021-03-11 10:02:41,902] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.58533ms
[2021-03-11 10:02:41,903] - [base_gsi:433] INFO - {'requestID': 'f230feb9-e357-4d9f-a551-488434a7acba', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee6fe6c6220ad84339ba9d46765abd0620join_yr', 'index_id': 'd404cb945481dec0', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '3.58533ms', 'executionTime': '3.515839ms', 'resultCount': 1, 'resultSize': 866, 'serviceLoad': 6}}
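A typical check against an EXPLAIN result like the one above is that the expected secondary index (here `employee6fe6c6220ad84339ba9d46765abd0620join_yr`) actually appears in an `IndexScan` operator of the plan. A small recursive walk over the plan JSON can collect those names (hypothetical helper, not part of testrunner):

```python
def index_names_in_plan(plan):
    # Recursively collect the 'index' field of every IndexScan* operator
    # in an EXPLAIN plan dict like the one logged above.
    names = []
    if isinstance(plan, dict):
        if plan.get("#operator", "").startswith("IndexScan"):
            names.append(plan.get("index"))
        for value in plan.values():
            names.extend(index_names_in_plan(value))
    elif isinstance(plan, list):
        for item in plan:
            names.extend(index_names_in_plan(item))
    return names
```

Matching on the `IndexScan` prefix covers versioned operator names such as the `IndexScan3` seen in this plan.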
[2021-03-11 10:02:41,904] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 10:02:41,904] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:02:41,904] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 10:02:41,904] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 10:02:41,906] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:02:41,906] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:02:41,907] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:02:41,907] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:02:41,973] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:02:41,995] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:02:42,000] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2021-03-11 10:02:42,186] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 170.369799ms
[2021-03-11 10:02:42,187] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 10:02:42,188] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 10:02:44,065] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:44,069] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6fe6c6220ad84339ba9d46765abd0620join_yr'
[2021-03-11 10:02:44,073] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6fe6c6220ad84339ba9d46765abd0620join_yr%27
[2021-03-11 10:02:44,079] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.958753ms
[2021-03-11 10:02:44,091] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:44,094] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:44,094] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:44,886] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:44,896] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:02:44,896] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:02:45,022] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:02:45,032] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:45,035] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:45,036] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:45,682] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:45,692] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:02:45,692] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:02:45,830] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:02:45,843] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:02:45,843] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:02:45,843] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:02:45,847] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:02:45,853] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:02:45,860] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 5.137997ms
[2021-03-11 10:02:45,867] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:02:45,867] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 16.18771421038756, 'mem_free': 13478408192, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:02:45,868] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:02:45,872] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:45,873] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:46,541] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:46,547] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:46,547] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:47,463] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:47,471] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:47,471] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:48,596] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:48,603] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:48,604] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:49,794] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:56,165] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #9 test_remove_bucket_and_query ==============
[2021-03-11 10:02:56,189] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:56,195] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:56,202] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:56,209] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:56,213] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,227] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:02:56,228] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,232] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:02:56,236] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:02:56,236] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,240] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:02:56,243] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:02:56,244] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,247] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:02:56,251] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:02:56,251] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,255] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:02:56,255] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #9 test_remove_bucket_and_query ==============
[2021-03-11 10:02:56,255] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:02:56,258] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_9
ok

----------------------------------------------------------------------
Ran 1 test in 82.433s

OK
test_change_bucket_properties (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_10

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 10, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_10'}
[2021-03-11 10:02:56,303] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:56,303] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:56,961] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:56,968] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:56,980] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 10:02:56,987] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 10:02:56,987] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #10 test_change_bucket_properties==============
[2021-03-11 10:02:57,001] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:57,007] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:57,014] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:57,020] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:02:57,024] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:57,038] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:02:57,038] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:57,042] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:02:57,046] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:02:57,046] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:57,050] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:02:57,054] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:02:57,054] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:57,058] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:02:57,061] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:02:57,062] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:02:57,065] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:02:57,066] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 10:02:57,307] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 10:02:57,311] - [task:157] INFO -  {'uptime': '791', 'memoryTotal': 15466930176, 'memoryFree': 13658390528, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:02:57,316] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 10:02:57,316] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 10:02:57,317] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 10:02:57,325] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 10:02:57,337] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 10:02:57,338] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 10:02:57,347] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 10:02:57,348] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 10:02:57,348] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 10:02:57,348] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 10:02:57,422] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:02:57,427] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:57,427] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:58,117] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:58,130] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:02:58,245] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:02:58,246] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:02:58,251] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:02:58,254] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:02:58,261] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:02:58,286] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 10:02:58,290] - [task:157] INFO -  {'uptime': '791', 'memoryTotal': 15466930176, 'memoryFree': 13502234624, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:02:58,295] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:02:58,300] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 10:02:58,300] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 10:02:58,367] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:02:58,374] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:58,374] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:02:59,082] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:02:59,095] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:02:59,212] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:02:59,214] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:02:59,219] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:02:59,222] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:02:59,228] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:02:59,254] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 10:02:59,258] - [task:157] INFO -  {'uptime': '791', 'memoryTotal': 15466930176, 'memoryFree': 13654351872, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:02:59,266] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:02:59,273] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 10:02:59,274] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 10:02:59,346] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:02:59,351] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:02:59,352] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:00,047] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:00,058] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:03:00,173] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:03:00,174] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:03:00,179] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:03:00,183] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:03:00,189] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:03:00,228] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 10:03:00,232] - [task:157] INFO -  {'uptime': '796', 'memoryTotal': 15466930176, 'memoryFree': 13680816128, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:03:00,238] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:03:00,242] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 10:03:00,242] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 10:03:00,311] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:03:00,317] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:00,317] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:00,997] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:01,009] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:03:01,123] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:03:01,124] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:03:01,129] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:03:01,131] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:03:01,137] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:03:01,165] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 10:03:01,321] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:03:06,327] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 10:03:06,341] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 10:03:06,346] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:06,346] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:07,038] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:07,049] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:03:07,163] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:03:07,164] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:03:07,165] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 10:03:08,181] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:03:08,188] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 10:03:08,189] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:08,401] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,454] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,492] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:03:08,493] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:08,546] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,630] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,665] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:03:08,666] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:08,713] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,765] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:08,928] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:03:08,928] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:08,979] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,029] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,056] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:03:09,056] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:09,117] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,259] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,286] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:03:09,286] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:03:09,340] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,421] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:09,461] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:03:09,465] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #10 test_change_bucket_properties ==============
[2021-03-11 10:03:09,475] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:09,475] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:10,237] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:10,242] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:10,242] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:11,272] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:11,281] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:11,281] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:12,414] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:12,426] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:12,427] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:13,535] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:19,710] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:03:19,710] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.00915455035003, 'mem_free': 13718671360, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:03:19,711] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:03:19,719] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:03:19,720] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:03:19,729] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:03:19,729] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:03:19,737] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:03:19,738] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:03:19,746] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:03:19,747] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:03:19,755] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:03:19,777] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 10:03:19,777] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 10:03:19,779] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:03:24,787] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 10:03:24,790] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:24,791] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:03:25,443] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:03:26,534] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 10:03:26,566] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:03:29,159] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 10:03:29,191] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:03:29,191] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 10:03:29,191] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 10:03:59,215] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:03:59,219] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:03:59,308] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 87.113969ms
[2021-03-11 10:03:59,312] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 10:03:59,316] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 10:03:59,877] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 559.174326ms
[2021-03-11 10:03:59,885] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:03:59,894] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:03:59,935] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 39.067911ms
[2021-03-11 10:03:59,960] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:03:59,961] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:00,813] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:00,869] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:01,858] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employeebb9ad82cdff6409bb65598e982af01a3join_yr ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2021-03-11 10:04:01,863] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employeebb9ad82cdff6409bb65598e982af01a3join_yr+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 10:04:01,936] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 70.976253ms
[2021-03-11 10:04:01,937] - [base_gsi:216] INFO - BUILD INDEX on default(employeebb9ad82cdff6409bb65598e982af01a3join_yr) USING GSI
[2021-03-11 10:04:02,942] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employeebb9ad82cdff6409bb65598e982af01a3join_yr) USING GSI
[2021-03-11 10:04:02,946] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employeebb9ad82cdff6409bb65598e982af01a3join_yr%29+USING+GSI
[2021-03-11 10:04:02,971] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.092719ms
[2021-03-11 10:04:02,983] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeebb9ad82cdff6409bb65598e982af01a3join_yr'
[2021-03-11 10:04:02,991] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeebb9ad82cdff6409bb65598e982af01a3join_yr%27
[2021-03-11 10:04:03,007] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 14.072263ms
[2021-03-11 10:04:04,011] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeebb9ad82cdff6409bb65598e982af01a3join_yr'
[2021-03-11 10:04:04,015] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeebb9ad82cdff6409bb65598e982af01a3join_yr%27
[2021-03-11 10:04:04,027] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.163172ms
[2021-03-11 10:04:05,033] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeebb9ad82cdff6409bb65598e982af01a3join_yr'
[2021-03-11 10:04:05,036] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeebb9ad82cdff6409bb65598e982af01a3join_yr%27
[2021-03-11 10:04:05,049] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.415901ms
[2021-03-11 10:04:06,054] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:04:06,059] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2021-03-11 10:04:06,064] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.927099ms
[2021-03-11 10:04:06,065] - [base_gsi:433] INFO - {'requestID': 'b8566e73-c80f-4f7e-b978-2742890fb0f9', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeebb9ad82cdff6409bb65598e982af01a3join_yr', 'index_id': 'c94c8d051b3b940b', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '3.927099ms', 'executionTime': '3.828707ms', 'resultCount': 1, 'resultSize': 866, 'serviceLoad': 6}}
[2021-03-11 10:04:06,065] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 10:04:06,066] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:06,066] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 10:04:06,066] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 10:04:06,067] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:06,067] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:06,067] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:06,067] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:06,133] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:06,157] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:04:06,161] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2021-03-11 10:04:06,352] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 174.488785ms
[2021-03-11 10:04:06,352] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 10:04:06,353] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 10:04:07,556] - [rest_client:2869] INFO - http://127.0.0.1:9000/pools/default/buckets/default with param: 
[2021-03-11 10:04:07,567] - [rest_client:2877] INFO - bucket default updated
[2021-03-11 10:04:07,577] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:04:07,583] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2021-03-11 10:04:07,589] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.744787ms
[2021-03-11 10:04:07,589] - [base_gsi:433] INFO - {'requestID': 'ad36b8bf-087b-4878-91e5-3fd5240e544d', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeebb9ad82cdff6409bb65598e982af01a3join_yr', 'index_id': 'c94c8d051b3b940b', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '3.744787ms', 'executionTime': '3.638133ms', 'resultCount': 1, 'resultSize': 866, 'serviceLoad': 6}}
[2021-03-11 10:04:07,590] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 10:04:07,590] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:07,590] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 10:04:07,590] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 10:04:07,590] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:07,591] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:07,591] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:07,591] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:04:07,660] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:04:07,678] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:04:07,682] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2021-03-11 10:04:07,872] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 180.412958ms
[2021-03-11 10:04:07,872] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 10:04:07,873] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 10:04:09,083] - [tuq_helper:287] INFO - RUN QUERY DROP INDEX employeebb9ad82cdff6409bb65598e982af01a3join_yr ON default USING GSI
[2021-03-11 10:04:09,087] - [rest_client:3905] INFO - query params : statement=DROP+INDEX+employeebb9ad82cdff6409bb65598e982af01a3join_yr+ON+default+USING+GSI
[2021-03-11 10:04:09,132] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 42.23737ms
[2021-03-11 10:04:09,140] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeebb9ad82cdff6409bb65598e982af01a3join_yr'
[2021-03-11 10:04:09,146] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeebb9ad82cdff6409bb65598e982af01a3join_yr%27
[2021-03-11 10:04:09,157] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 8.96805ms
[2021-03-11 10:04:09,177] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:09,180] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:09,180] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:09,909] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:09,919] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:04:09,919] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:04:10,028] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:10,038] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:10,041] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:10,041] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:10,777] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:10,787] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:04:10,787] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:04:10,922] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:10,936] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:10,936] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:04:10,936] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:04:10,941] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:04:10,946] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:04:10,957] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 8.423841ms
[2021-03-11 10:04:10,962] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:04:10,970] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 10:04:11,030] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 52.446453ms
[2021-03-11 10:04:11,047] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:04:11,047] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 28.30607152758525, 'mem_free': 13477789696, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:04:11,048] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:04:11,055] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:11,055] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:11,732] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:11,737] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:11,738] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:12,786] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:12,793] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:12,794] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:13,953] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:13,966] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:13,966] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:15,159] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:21,631] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #10 test_change_bucket_properties ==============
[2021-03-11 10:04:21,664] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 10:04:22,775] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 10:04:22,780] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 10:04:22,780] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 10:04:22,787] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:22,794] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:22,801] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:22,807] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:22,821] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:04:22,821] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:22,826] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:04:22,833] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:04:22,834] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:22,839] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:04:22,850] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:04:22,851] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:22,855] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:04:22,860] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:04:22,860] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:22,865] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:04:22,865] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #10 test_change_bucket_properties ==============
[2021-03-11 10:04:22,865] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:04:22,870] - [basetestcase:609] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_10
ok

----------------------------------------------------------------------
Ran 1 test in 86.579s

OK
test_delete_create_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_11

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'gsi_type': 'plasma', 'makefile': 'True', 'num_nodes': 4, 'case_number': 11, 'logs_folder': '/opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_11'}
[2021-03-11 10:04:22,918] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:22,918] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:23,577] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:23,585] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,596] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 10:04:23,606] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 10:04:23,606] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #11 test_delete_create_bucket_and_query==============
[2021-03-11 10:04:23,620] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:23,626] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:23,632] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:23,638] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:04:23,642] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,655] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:04:23,656] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,659] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:04:23,663] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:04:23,663] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,666] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:04:23,670] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:04:23,670] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,674] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:04:23,677] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:04:23,677] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:04:23,681] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:04:23,681] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 10:04:23,921] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 10:04:23,925] - [task:157] INFO -  {'uptime': '880', 'memoryTotal': 15466930176, 'memoryFree': 13665607680, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 2016}
[2021-03-11 10:04:23,928] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 10:04:23,928] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 10:04:23,928] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 10:04:23,938] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 10:04:23,951] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 10:04:23,952] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 10:04:23,955] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 10:04:23,956] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 10:04:23,956] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 10:04:23,956] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 10:04:24,026] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:04:24,031] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:24,031] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:24,717] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:24,731] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:04:24,844] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:24,845] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:04:24,850] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:24,853] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:24,859] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:04:24,891] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 10:04:24,896] - [task:157] INFO -  {'uptime': '877', 'memoryTotal': 15466930176, 'memoryFree': 13650501632, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:04:24,901] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:04:24,905] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 10:04:24,905] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 10:04:24,996] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:04:25,002] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:25,002] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:25,678] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:25,690] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:04:25,820] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:25,821] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:04:25,826] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:25,829] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:25,835] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:04:25,862] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 10:04:25,866] - [task:157] INFO -  {'uptime': '882', 'memoryTotal': 15466930176, 'memoryFree': 13660745728, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:04:25,872] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:04:25,882] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 10:04:25,882] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 10:04:25,951] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:04:25,959] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:25,959] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:26,675] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:26,687] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:04:26,802] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:26,803] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:04:26,809] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:26,811] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:26,818] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:04:26,842] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 10:04:26,846] - [task:157] INFO -  {'uptime': '882', 'memoryTotal': 15466930176, 'memoryFree': 13688889344, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:04:26,852] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:04:26,857] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 10:04:26,857] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 10:04:26,926] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:04:26,931] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:26,931] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:27,646] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:27,659] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:04:27,774] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:27,775] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:04:27,783] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:27,786] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:04:27,794] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:04:27,834] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 10:04:27,967] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:04:32,969] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 10:04:32,982] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 10:04:32,987] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:32,987] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:33,645] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:33,656] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:04:33,770] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:04:33,771] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:04:33,771] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 10:04:33,846] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:04:33,856] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 10:04:33,856] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:34,079] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,138] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,174] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:04:34,174] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:34,380] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,432] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,460] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:04:34,461] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:34,507] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,556] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,583] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:04:34,583] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:34,715] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,763] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,790] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:04:34,790] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:34,834] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,881] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:34,907] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:04:34,907] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:04:35,064] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:35,133] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:35,161] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:04:35,164] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #11 test_delete_create_bucket_and_query ==============
[2021-03-11 10:04:35,174] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:35,174] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:35,909] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:35,914] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:35,914] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:36,920] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:36,929] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:36,930] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:38,054] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:38,065] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:38,066] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:39,199] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:45,380] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:04:45,380] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 28.39439655172414, 'mem_free': 13747544064, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:04:45,380] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:04:45,387] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:04:45,388] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:04:45,396] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:04:45,397] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:04:45,411] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:04:45,411] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:04:45,421] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:04:45,421] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:04:45,428] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:45,442] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 10:04:45,442] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 10:04:45,443] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:04:50,453] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 10:04:50,457] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:04:50,457] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:04:51,180] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:04:52,306] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 10:04:52,342] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:04:54,959] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 10:04:54,992] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:04:54,992] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 10:04:54,993] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 10:05:25,027] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:05:25,031] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:05:25,115] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 82.496432ms
[2021-03-11 10:05:25,119] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 10:05:25,123] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 10:05:25,702] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 577.387798ms
[2021-03-11 10:05:25,715] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:05:25,722] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:05:25,738] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 14.906439ms
[2021-03-11 10:05:25,758] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:25,758] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:26,675] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:26,696] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:27,696] - [tuq_helper:287] INFO - RUN QUERY CREATE INDEX employee363196f120e544709f6544ff72aae336join_yr ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2021-03-11 10:05:27,700] - [rest_client:3905] INFO - query params : statement=CREATE+INDEX+employee363196f120e544709f6544ff72aae336join_yr+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2021-03-11 10:05:27,760] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 57.172172ms
[2021-03-11 10:05:27,760] - [base_gsi:216] INFO - BUILD INDEX on default(employee363196f120e544709f6544ff72aae336join_yr) USING GSI
[2021-03-11 10:05:28,767] - [tuq_helper:287] INFO - RUN QUERY BUILD INDEX on default(employee363196f120e544709f6544ff72aae336join_yr) USING GSI
[2021-03-11 10:05:28,771] - [rest_client:3905] INFO - query params : statement=BUILD+INDEX+on+default%28employee363196f120e544709f6544ff72aae336join_yr%29+USING+GSI
[2021-03-11 10:05:28,802] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 27.099901ms
[2021-03-11 10:05:28,815] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee363196f120e544709f6544ff72aae336join_yr'
[2021-03-11 10:05:28,825] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee363196f120e544709f6544ff72aae336join_yr%27
[2021-03-11 10:05:28,856] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 28.687503ms
[2021-03-11 10:05:29,861] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee363196f120e544709f6544ff72aae336join_yr'
[2021-03-11 10:05:29,865] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee363196f120e544709f6544ff72aae336join_yr%27
[2021-03-11 10:05:29,877] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 10.391963ms
[2021-03-11 10:05:30,888] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee363196f120e544709f6544ff72aae336join_yr'
[2021-03-11 10:05:30,895] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee363196f120e544709f6544ff72aae336join_yr%27
[2021-03-11 10:05:30,904] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 7.33663ms
[2021-03-11 10:05:31,909] - [tuq_helper:287] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:05:31,913] - [rest_client:3905] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2021-03-11 10:05:31,918] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 3.278585ms
[2021-03-11 10:05:31,919] - [base_gsi:433] INFO - {'requestID': '4d65d443-6bcf-45cd-b032-719fc9eb428d', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee363196f120e544709f6544ff72aae336join_yr', 'index_id': '33a0aa2d9b0e98c4', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '3.278585ms', 'executionTime': '3.158343ms', 'resultCount': 1, 'resultSize': 866, 'serviceLoad': 6}}
[2021-03-11 10:05:31,919] - [tuq_generators:70] INFO - FROM clause ===== is default
[2021-03-11 10:05:31,920] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:05:31,920] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2021-03-11 10:05:31,920] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2021-03-11 10:05:31,921] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:05:31,921] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:05:31,921] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:05:31,921] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2021-03-11 10:05:31,989] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2021-03-11 10:05:32,010] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2021-03-11 10:05:32,014] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2021-03-11 10:05:32,196] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 166.434634ms
[2021-03-11 10:05:32,196] - [tuq_helper:369] INFO -  Analyzing Actual Result
[2021-03-11 10:05:32,197] - [tuq_helper:371] INFO -  Analyzing Expected Result
[2021-03-11 10:05:34,291] - [basetestcase:663] INFO - sleep for 2 secs.  ...
[2021-03-11 10:05:36,642] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:05:36,659] - [rest_client:2825] INFO - 0.02 seconds to create bucket default
[2021-03-11 10:05:36,661] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:36,892] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:36,962] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:36,990] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:05:36,991] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:37,052] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,111] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,291] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:05:37,291] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:37,340] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,391] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,417] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:05:37,417] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:37,466] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,514] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,540] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:05:37,540] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:37,586] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,737] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,763] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:05:37,764] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:05:37,824] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,888] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:05:37,916] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:05:37,919] - [basetestcase:663] INFO - sleep for 2 secs.  ...
[2021-03-11 10:05:39,927] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:05:39,927] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:39,931] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:05:39,935] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:05:39,935] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:39,939] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:05:39,943] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:05:39,943] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:39,947] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:05:39,950] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:05:39,951] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:39,954] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:05:39,965] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:39,969] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee363196f120e544709f6544ff72aae336join_yr'
[2021-03-11 10:05:39,973] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee363196f120e544709f6544ff72aae336join_yr%27
[2021-03-11 10:05:40,057] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 81.988219ms
[2021-03-11 10:05:40,060] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee363196f120e544709f6544ff72aae336join_yr'
[2021-03-11 10:05:40,064] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee363196f120e544709f6544ff72aae336join_yr%27
[2021-03-11 10:05:40,066] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 1.045545ms
[2021-03-11 10:05:40,072] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:40,090] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:40,092] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:40,093] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:40,789] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:40,799] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:05:40,800] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:05:40,924] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:40,934] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:40,938] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:40,938] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:41,683] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:41,692] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:05:41,692] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:05:41,812] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:41,825] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:05:41,825] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:05:41,826] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:05:41,830] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:05:41,834] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:05:41,844] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 8.78558ms
[2021-03-11 10:05:41,845] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:05:41,848] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:05:41,852] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:05:41,855] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 913.988µs
[2021-03-11 10:05:41,862] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:05:41,862] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 17.18339464004204, 'mem_free': 13422067712, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:05:41,862] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:05:41,865] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:41,866] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:42,590] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:42,595] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:42,595] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:43,612] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:43,620] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:43,621] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:44,826] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:44,837] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:44,837] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:46,069] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:52,677] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #11 test_delete_create_bucket_and_query ==============
[2021-03-11 10:05:52,704] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 10:05:53,075] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 10:05:53,078] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 10:05:53,078] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 10:05:53,085] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,090] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,096] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,101] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,116] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:05:53,116] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,120] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:05:53,123] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:05:53,123] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,127] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:05:53,132] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:05:53,132] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,135] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:05:53,139] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:05:53,139] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,143] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:05:53,143] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #11 test_delete_create_bucket_and_query ==============
[2021-03-11 10:05:53,143] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:05:53,147] - [basetestcase:609] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 90.239s

OK
suite_tearDown (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-21-Mar-11_09-49-40/test_11

*** Tests executed count: 11

Run after suite setup for gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query
[2021-03-11 10:05:53,161] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:53,161] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:53,888] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:53,896] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,908] - [rest_client:2434] INFO - Node version in cluster 7.0.0-4170-rel-enterprise
[2021-03-11 10:05:53,917] - [rest_client:2444] INFO - Node versions in cluster ['7.0.0-4170-rel-enterprise']
[2021-03-11 10:05:53,917] - [basetestcase:233] INFO - ==============  basetestcase setup was started for test #11 suite_tearDown==============
[2021-03-11 10:05:53,934] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,940] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,946] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,952] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:05:53,956] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,972] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:05:53,974] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,978] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:05:53,981] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:05:53,981] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,985] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:05:53,988] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:05:53,989] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,991] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:05:53,995] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:05:53,995] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:05:53,999] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:05:54,000] - [basetestcase:295] INFO - initializing cluster
[2021-03-11 10:05:54,163] - [task:152] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2021-03-11 10:05:54,167] - [task:157] INFO -  {'uptime': '970', 'memoryTotal': 15466930176, 'memoryFree': 13668708352, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'moxi': 11211, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:05:54,171] - [task:196] INFO - quota for index service will be 256 MB
[2021-03-11 10:05:54,171] - [task:198] INFO - set index quota to node 127.0.0.1 
[2021-03-11 10:05:54,171] - [rest_client:1153] INFO - pools/default params : indexMemoryQuota=256
[2021-03-11 10:05:54,183] - [rest_client:1141] INFO - pools/default params : memoryQuota=7650
[2021-03-11 10:05:54,193] - [rest_client:1102] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2021-03-11 10:05:54,194] - [rest_client:1118] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2021-03-11 10:05:54,197] - [rest_client:1018] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'["cannot change node services after cluster is provisioned"]' auth: Administrator:asdasd
[2021-03-11 10:05:54,198] - [rest_client:1124] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2021-03-11 10:05:54,198] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9000
[2021-03-11 10:05:54,198] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2021-03-11 10:05:54,265] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:05:54,269] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:54,270] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:54,977] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:54,990] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:05:55,113] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:55,114] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:05:55,119] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:55,121] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:55,128] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:05:55,153] - [task:152] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2021-03-11 10:05:55,157] - [task:157] INFO -  {'uptime': '968', 'memoryTotal': 15466930176, 'memoryFree': 13646839808, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:05:55,162] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:05:55,168] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9001
[2021-03-11 10:05:55,168] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2021-03-11 10:05:55,234] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:05:55,240] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:55,240] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:55,953] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:55,964] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:05:56,109] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:56,110] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:05:56,115] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:56,118] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:56,125] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:05:56,148] - [task:152] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2021-03-11 10:05:56,152] - [task:157] INFO -  {'uptime': '968', 'memoryTotal': 15466930176, 'memoryFree': 13663535104, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:05:56,157] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:05:56,162] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9002
[2021-03-11 10:05:56,162] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2021-03-11 10:05:56,230] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:05:56,235] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:56,235] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:56,957] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:56,969] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:05:57,087] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:57,088] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:05:57,093] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:57,095] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:57,104] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:05:57,141] - [task:152] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2021-03-11 10:05:57,145] - [task:157] INFO -  {'uptime': '968', 'memoryTotal': 15466930176, 'memoryFree': 13642977280, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-4170-rel-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'moxi': 11211, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2021-03-11 10:05:57,150] - [rest_client:1141] INFO - pools/default params : memoryQuota=7906
[2021-03-11 10:05:57,155] - [rest_client:1039] INFO - --> in init_cluster...Administrator,asdasd,9003
[2021-03-11 10:05:57,156] - [rest_client:1044] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2021-03-11 10:05:57,226] - [rest_client:1046] INFO - --> status:True
[2021-03-11 10:05:57,234] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:05:57,234] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:05:57,937] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:05:57,948] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:05:58,063] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:05:58,064] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:05:58,068] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:58,071] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,0] command: cluster_compat_mode:get_compat_version().
[2021-03-11 10:05:58,078] - [rest_client:1176] INFO - settings/indexes params : storageMode=plasma
[2021-03-11 10:05:58,122] - [basetestcase:2387] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2021-03-11 10:05:58,262] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:06:03,268] - [basetestcase:2392] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2021-03-11 10:06:03,284] - [basetestcase:332] INFO - done initializing cluster
[2021-03-11 10:06:03,289] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:03,290] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:04,059] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:04,070] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2021-03-11 10:06:04,195] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:06:04,196] - [remote_util:5201] INFO - b'ok'
[2021-03-11 10:06:04,196] - [basetestcase:2975] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2021-03-11 10:06:05,178] - [rest_client:2800] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&authType=sasl&saslPassword=None&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2021-03-11 10:06:05,188] - [rest_client:2825] INFO - 0.01 seconds to create bucket default
[2021-03-11 10:06:05,188] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:05,396] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,458] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,679] - [task:386] WARNING - vbucket map not ready after try 0
[2021-03-11 10:06:05,680] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:05,728] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,773] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,797] - [task:386] WARNING - vbucket map not ready after try 1
[2021-03-11 10:06:05,798] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:05,850] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,899] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:05,923] - [task:386] WARNING - vbucket map not ready after try 2
[2021-03-11 10:06:05,923] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:05,972] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,122] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,155] - [task:386] WARNING - vbucket map not ready after try 3
[2021-03-11 10:06:06,155] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:06,208] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,257] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,284] - [task:386] WARNING - vbucket map not ready after try 4
[2021-03-11 10:06:06,284] - [bucket_helper:344] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2021-03-11 10:06:06,336] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,399] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:06,545] - [task:386] WARNING - vbucket map not ready after try 5
[2021-03-11 10:06:06,548] - [basetestcase:434] INFO - ==============  basetestcase setup was finished for test #11 suite_tearDown ==============
[2021-03-11 10:06:06,556] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:06,556] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:07,341] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:07,346] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:07,346] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:08,366] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:08,376] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:08,376] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:09,538] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:09,547] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:09,547] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:10,699] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:17,091] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:06:17,092] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 24.5338305807139, 'mem_free': 13732478976, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:06:17,092] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:06:17,098] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:06:17,098] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:06:17,108] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:06:17,108] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:06:17,116] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:06:17,117] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:06:17,124] - [newtuq:21] INFO - Initial status of 127.0.0.1 cluster is healthy
[2021-03-11 10:06:17,125] - [newtuq:26] INFO - current status of 127.0.0.1  is healthy
[2021-03-11 10:06:17,132] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:06:17,145] - [rest_client:1922] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2021-03-11 10:06:17,145] - [newtuq:36] INFO - Allowing the indexer to complete restart after setting the internal settings
[2021-03-11 10:06:17,146] - [basetestcase:663] INFO - sleep for 5 secs.  ...
[2021-03-11 10:06:22,159] - [rest_client:1922] INFO - {'indexer.api.enableTestServer': True} set
[2021-03-11 10:06:22,163] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:22,163] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:22,849] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:23,824] - [basetestcase:2746] INFO - create 2016.0 to default documents...
[2021-03-11 10:06:23,857] - [data_helper:314] INFO - creating direct client 127.0.0.1:12000 default
[2021-03-11 10:06:26,582] - [basetestcase:2759] INFO - LOAD IS FINISHED
[2021-03-11 10:06:26,614] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:06:26,614] - [newtuq:94] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2021-03-11 10:06:26,615] - [basetestcase:663] INFO - sleep for 30 secs.  ...
[2021-03-11 10:06:56,650] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:06:56,654] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:06:56,747] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 90.772752ms
[2021-03-11 10:06:56,750] - [tuq_helper:287] INFO - RUN QUERY CREATE PRIMARY INDEX ON default  USING GSI
[2021-03-11 10:06:56,754] - [rest_client:3905] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default++USING+GSI
[2021-03-11 10:06:57,295] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 538.626408ms
[2021-03-11 10:06:57,305] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:06:57,312] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:06:57,335] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 21.25994ms
[2021-03-11 10:06:57,386] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:57,386] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:58,256] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:58,282] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:06:58,298] - [basetestcase:2637] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:06:58,301] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:58,301] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:59,072] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:59,082] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:06:59,082] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2021-03-11 10:06:59,206] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:06:59,216] - [basetestcase:2637] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:06:59,220] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:06:59,221] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:06:59,978] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:06:59,987] - [rest_client:1737] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2021-03-11 10:06:59,988] - [remote_util:3406] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2021-03-11 10:07:00,115] - [remote_util:3454] INFO - command executed successfully with Administrator
[2021-03-11 10:07:00,128] - [basetestcase:2637] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2021-03-11 10:07:00,129] - [tuq_helper:680] INFO - CHECK FOR PRIMARY INDEXES
[2021-03-11 10:07:00,129] - [tuq_helper:687] INFO - DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:07:00,133] - [tuq_helper:287] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2021-03-11 10:07:00,137] - [rest_client:3905] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2021-03-11 10:07:00,148] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 9.454068ms
[2021-03-11 10:07:00,153] - [tuq_helper:287] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2021-03-11 10:07:00,157] - [rest_client:3905] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2021-03-11 10:07:00,205] - [tuq_helper:310] INFO - TOTAL ELAPSED TIME: 46.31678ms
[2021-03-11 10:07:00,227] - [basetestcase:467] INFO - ------- Cluster statistics -------
[2021-03-11 10:07:00,227] - [basetestcase:469] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 3.346203346203346, 'mem_free': 13559062528, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2021-03-11 10:07:00,228] - [basetestcase:470] INFO - --- End of cluster statistics ---
[2021-03-11 10:07:00,232] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:07:00,232] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:07:01,019] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:07:01,024] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:07:01,025] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:07:02,248] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:07:02,261] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:07:02,261] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:07:03,585] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:07:03,595] - [remote_util:298] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2021-03-11 10:07:03,595] - [remote_util:336] INFO - SSH Connected to 127.0.0.1 as Administrator
[2021-03-11 10:07:04,882] - [remote_util:3722] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2021-03-11 10:07:11,997] - [basetestcase:572] INFO - ==============  basetestcase cleanup was started for test #11 suite_tearDown ==============
[2021-03-11 10:07:12,029] - [bucket_helper:142] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2021-03-11 10:07:12,478] - [bucket_helper:233] INFO - waiting for bucket deletion to complete....
[2021-03-11 10:07:12,483] - [rest_client:137] INFO - node 127.0.0.1 existing buckets : []
[2021-03-11 10:07:12,483] - [bucket_helper:165] INFO - deleted bucket : default from 127.0.0.1
[2021-03-11 10:07:12,489] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:07:12,495] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:07:12,501] - [bucket_helper:167] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2021-03-11 10:07:12,505] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:07:12,520] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9000
[2021-03-11 10:07:12,520] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:07:12,524] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9000 is running
[2021-03-11 10:07:12,528] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9001
[2021-03-11 10:07:12,528] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:07:12,533] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9001 is running
[2021-03-11 10:07:12,537] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9002
[2021-03-11 10:07:12,537] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:07:12,541] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9002 is running
[2021-03-11 10:07:12,545] - [cluster_helper:82] INFO - waiting for ns_server @ 127.0.0.1:9003
[2021-03-11 10:07:12,546] - [rest_client:41] INFO - -->is_ns_server_running?
[2021-03-11 10:07:12,550] - [cluster_helper:86] INFO - ns_server @ 127.0.0.1:9003 is running
[2021-03-11 10:07:12,550] - [basetestcase:596] INFO - ==============  basetestcase cleanup was finished for test #11 suite_tearDown ==============
[2021-03-11 10:07:12,550] - [basetestcase:604] INFO - closing all ssh connections
[2021-03-11 10:07:12,554] - [basetestcase:609] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 79.432s

OK
Cluster instance shutdown with force
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query', ' pass')
During the test, Remote Connections: 0, Disconnections: 1909
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', <Thread(...)>, 'was not properly terminated, will be terminated now.')
Shutting down the thread...
*** TestRunner ***
scripts/start_cluster_and_run_tests.sh: line 85:  5684 Terminated              COUCHBASE_NUM_VBUCKETS=64 python ./cluster_run --nodes=$servers_count &> $wd/cluster_run.log  (wd: /opt/build/ns_server)

Testing Failed: Required test failed

FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	7911.874s
--- FAIL: TestCollectionMultipleBuilds (913.73s)
Version: versions-11.03.2021-05.30.cfg
Build Log: make-11.03.2021-05.30.log
Server Log: logs-11.03.2021-05.30.tar.gz

Finished