Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version: 2.5.1
- Security Level: Public
- Environment: Physical machines running CentOS; Couchbase Inc machines 172.23.96.15 - 172.23.96.18; running 2.5.1; test using uploaded program.
- Yes
Description
I performed 2 experiments. In the first experiment I ran my test program durability-test-simple (which I have uploaded) on 172.23.96.15 against a 2-node cluster (172.23.96.16 and 172.23.96.17). I ran the program 5 times, as follows: ./durability-test 1 1 200 2 0. The arguments mean document size = 1 byte, number of documents = 1, repeat = 200x, and mode = 2 (replicate to one other node). The trailing zero means don't print debug output.
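For readers without the uploaded binary, the shape of the measurement it performs (time the plain SET, then poll observe until the replica acknowledges) can be sketched as below. The StubClient and its method names are placeholders for illustration only, not the actual SDK API, and the simulated replication delay is arbitrary:

```python
import time

class StubClient:
    """Hypothetical stand-in for a Couchbase client.
    The real test uses the actual SDK; names here are assumptions."""
    def __init__(self, replication_delay_s=0.001):
        self._replicated_at = {}
        self._delay = replication_delay_s

    def set(self, key, value):
        # A plain SET returns as soon as the active node acknowledges;
        # the replica catches up some time later (simulated here).
        self._replicated_at[key] = time.monotonic() + self._delay

    def observe(self, key):
        # Reports whether the replica has seen this mutation yet.
        return time.monotonic() >= self._replicated_at[key]

def timed_set_with_observe(client, key, value, poll_interval_s=0.0001):
    """Time the unguaranteed SET, then poll observe until replicated."""
    t0 = time.monotonic()
    client.set(key, value)
    set_us = (time.monotonic() - t0) * 1e6

    t1 = time.monotonic()
    while not client.observe(key):
        time.sleep(poll_interval_s)
    observe_us = (time.monotonic() - t1) * 1e6
    return set_us, observe_us

set_us, observe_us = timed_set_with_observe(StubClient(), "doc-0", b"x")
print(f"SET: {set_us:.0f} us, observe-until-replicated: {observe_us:.0f} us")
```

The key point the sketch illustrates is that the two numbers come from separate timers: the SET time only covers the active node's acknowledgement, while the observe time covers the polling loop until replication completes.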
I have uploaded the results in experiment1.txt. The summary of the 5 runs (each looping 200x) is provided below.
(Note: the average is not very meaningful because we have lots of fast operations and a few very slow ones, i.e. roughly 1000x slower.)
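The note above about the average can be made concrete: with mostly fast operations and a few ~1000x outliers, the mean is dominated by the outliers while the median stays at the typical value. The latency figures below are illustrative only, not taken from the experiment:

```python
import statistics

# Illustrative distribution: 195 fast operations plus 5 that are
# 1000x slower, as described in the note above.
latencies_us = [200] * 195 + [200_000] * 5

mean = statistics.mean(latencies_us)
median = statistics.median(latencies_us)
print(f"mean={mean:.0f} us, median={median:.0f} us")
# prints "mean=5195 us, median=200 us"
```

Five outliers out of 200 shift the mean by a factor of ~26, which is why the per-run averages below should be read with care.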
All times in microseconds, averaged within each run:

Run | SETs (no guarantee) | ADDs (no guarantee) | Observe SETs until complete | Observe ADDs until complete
1   | 177                 | 166                 | 2742                        | 15998
2   | 191                 | 186                 | 22835                       | 34900
3   | 193                 | 189                 | 2927                        | 2829
4   | 190                 | 171                 | 17349                       | 63352
5   | 163                 | 160                 | 5687                        | 11300
As you can see, the standard SET/ADD times are approximately the same across runs; however, the observe-until-replicated times vary significantly. See experiment1.txt for more details.
The second experiment was the same as the first, except I used 172.23.96.16 as the client, i.e. one of the nodes in the cluster.
The detailed results are provided in experiment2.txt; the summary is below.
All times in microseconds, averaged within each run:

Run | SETs (no guarantee) | ADDs (no guarantee) | Observe SETs until complete | Observe ADDs until complete
1   | 137                 | 131                 | 2857                        | 25564
2   | 143                 | 141                 | 5050                        | 30525
3   | 146                 | 138                 | 2822                        | 2713
4   | 145                 | 142                 | 16368                       | 19275
5   | 153                 | 146                 | 22488                       | 39718
Again, as you can see, the standard SET/ADD times are approximately the same; however, the observe-until-replicated times vary significantly. See experiment2.txt for more details.
I appreciate there is a concern about network latency etc., therefore I collected the ping times:
15 -> 16: ~200 microseconds
15 -> 17: ~200 microseconds
16 -> 16 (loopback): ~24 microseconds
16 -> 17: ~200 microseconds
The detailed results are provided in pingdata.pdf.
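As a quick sanity check, the experiment-1 observe-SET averages can be compared against the measured ~200-microsecond ping round trip. Even allowing several round trips per observe, the network accounts for only a small fraction of the observed time:

```python
ping_rtt_us = 200  # measured round trip, 15 -> 16 and 15 -> 17
observe_set_us = [2742, 22835, 2927, 17349, 5687]  # experiment 1 per-run averages

# Express each observed time as a multiple of the ping round trip.
for t in observe_set_us:
    print(f"{t} us = {t / ping_rtt_us:.0f}x the ping round trip")
```

Every run spends at least ~14x, and up to ~114x, the network round-trip time waiting for replication, so raw latency alone does not explain the variance.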