Details
Type: Task
Status: Closed
Priority: Major
Resolution: Fixed
Description
Hey Michael,
Can you take a look at the performance matrix I compiled for durability? Here is the sheet: https://docs.google.com/spreadsheets/d/1B8v4OZneOeGxJwUj226zA3YDr0Y0gjRSVLwy0IAP9qw/edit?usp=sharing . The SDK2 and SDK3 columns use the old durability params (replicateTo and persistTo), and the SDK3 New column uses the new durability levels. There are two issues that I am confused by, and I need some input to make sure I did the testing correctly.
First: for SDK3 New, all durability levels except durabilityLevel=None have the same performance. It does not make sense to me why majority and persistMajority would perform the same. Also, the performance impact is severe, dropping from 387k to 1k ops/sec going from None to majority, a >99% drop.
Second: SDK3 with replicateTo=1 persistTo=0 performs significantly slower than replicateTo=1 persistTo=1 and replicateTo=1 persistTo=2, which implies that adding persistTo increases performance, and this doesn't really make sense.
Here is the YCSB code I am using for the tests; I created a branch called couchbase3-new-durability based on the couchbase3 branch: https://github.com/couchbaselabs/YCSB/blob/couchbase3-new-durability/couchbase3/src/main/java/com/yahoo/ycsb/db/couchbase3/Couchbase3Client.java
Here is the set of test files I am using: https://github.com/couchbase/perfrunner/tree/master/tests/durability
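For context on what the spreadsheet columns compare, here is a minimal sketch (not the actual Couchbase3Client code) of how the legacy observe-based options (replicateTo/persistTo) and the new server-side durability levels are applied with the Couchbase Java SDK 3 API. The connection details, bucket name, and document content are placeholders, and package names may differ slightly in the alpha builds referenced here.

```java
import com.couchbase.client.core.msg.kv.DurabilityLevel;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.Collection;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.kv.PersistTo;
import com.couchbase.client.java.kv.ReplicateTo;

import static com.couchbase.client.java.kv.UpsertOptions.upsertOptions;

public class DurabilityComparison {
  public static void main(String[] args) {
    // Placeholder connection details and bucket.
    Cluster cluster = Cluster.connect("127.0.0.1", "Administrator", "password");
    Collection collection = cluster.bucket("bucket-1").defaultCollection();
    JsonObject doc = JsonObject.create().put("field", "value");

    // Legacy "observe"-based durability: the client polls after the mutation
    // until the requested replication/persistence is observed.
    collection.upsert("key-legacy", doc,
        upsertOptions().durability(PersistTo.NONE, ReplicateTo.ONE));

    // New server-side durability: the server only acknowledges the mutation
    // once the requested durability level has been satisfied.
    collection.upsert("key-majority", doc,
        upsertOptions().durability(DurabilityLevel.MAJORITY));

    collection.upsert("key-persist-majority", doc,
        upsertOptions().durability(DurabilityLevel.PERSIST_TO_MAJORITY));

    cluster.disconnect();
  }
}
```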
Attachments
Issue Links
Activity
Field | Original Value | New Value
---|---|---
Assignee | Michael Nitschinger [ daschl ] | Korrigan Clark [ korrigan.clark ]
Labels | | 6.5mustpass
Status | New [ 10003 ] | Open [ 1 ]
Fix Version/s | | 2.0.0-alpha.5 [ 16203 ]
Comment:
[~daschl], I ran some tests... Looks like kvendpoints doesn't have the effect we thought it might. The results show that increasing kv endpoints does not increase throughput by a similar amount:
kvendpoints=1
  d=0 [OVERALL], Throughput(ops/sec), 107.38831615120274
  d=1 [OVERALL], Throughput(ops/sec), 87.04659604286175
  d=2 [OVERALL], Throughput(ops/sec), 91.9303535641398
  d=3 [OVERALL], Throughput(ops/sec), 96.49901571003976
kvendpoints=2
  d=0 [OVERALL], Throughput(ops/sec), 118.2941976696043
  d=1 [OVERALL], Throughput(ops/sec), 100.47423840527289
  d=2 [OVERALL], Throughput(ops/sec), 91.69096477233134
  d=3 [OVERALL], Throughput(ops/sec), 87.89199831247363
kvendpoints=16
  d=0 [OVERALL], Throughput(ops/sec), 119.98752129778504
  d=1 [OVERALL], Throughput(ops/sec), 96.25380202518
  d=2 [OVERALL], Throughput(ops/sec), 81.82971236856102
  d=3 [OVERALL], Throughput(ops/sec), 95.53835865099838
Field | Original Value | New Value
---|---|---
Labels | 6.5mustpass | 6.5mustpass durability
Fix Version/s | | 2.0.0-alpha.6 [ 16233 ]
Fix Version/s | 2.0.0-alpha.5 [ 16203 ] |
Resolution | | Fixed [ 1 ]
Status | Open [ 1 ] | Resolved [ 5 ]
Actual End | | 2019-07-08 11:24 (issue has been resolved)
Status | Resolved [ 5 ] | Closed [ 6 ]
Hi,
Some observations:
One thing I'd ask you to try is kvEndpoints 2, 4, 8, and 16 and see how the numbers change. If they go up significantly (or linearly with the number of sockets), this is very likely the head-of-line blocking issue, since we do not have async ops on the KV layer yet.
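For reference, here is a minimal sketch of how the kvEndpoints setting (the number of KV sockets per node) could be swept as suggested above, assuming the SDK 3 ClusterEnvironment/IoConfig API. The connection string, credentials, and workload hook are placeholders, not part of the actual test harness.

```java
import com.couchbase.client.core.env.IoConfig;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.ClusterOptions;
import com.couchbase.client.java.env.ClusterEnvironment;

public class KvEndpointsSweep {
  public static void main(String[] args) {
    // Sweep the number of KV connections per node, as suggested above.
    for (int kvEndpoints : new int[] {2, 4, 8, 16}) {
      ClusterEnvironment env = ClusterEnvironment.builder()
          .ioConfig(IoConfig.numKvConnections(kvEndpoints))
          .build();

      Cluster cluster = Cluster.connect("127.0.0.1",
          ClusterOptions.clusterOptions("Administrator", "password").environment(env));

      // ... run the YCSB-style workload here and record Throughput(ops/sec) ...

      cluster.disconnect();
      env.shutdown();
    }
  }
}
```

If throughput scales roughly with the number of sockets, that would support the head-of-line blocking explanation; if it stays flat (as in the comment above), the bottleneck is likely elsewhere.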