Details
Type: Bug
Resolution: Fixed
Priority: Major
Affects Version/s: 5.0.0, 5.5.0
Environment: 64GB RAM, Samsung SM863 960GB
Triage: Untriaged
Is this a Regression?: No
Description
While looking at the logs from MB-25993, I noticed that we save more than 16M documents in a single batch:
rw_1:bulkSize (1733 total)
    1 - 2         : (  0.17%)    3
    2 - 4         : (  0.17%)    0
    4 - 8         : (  0.17%)    0
    8 - 16        : ( 20.89%)  359 #########
    16 - 32       : ( 63.94%)  746 ##################
    32 - 64       : ( 78.42%)  251 ######
    64 - 128      : ( 88.17%)  169 ####
    128 - 256     : ( 91.98%)   66 #
    256 - 512     : ( 95.50%)   61 #
    512 - 1KB     : ( 97.35%)   32
    1KB - 2KB     : ( 98.27%)   16
    2KB - 4KB     : ( 98.62%)    6
    4KB - 8KB     : ( 98.67%)    1
    8KB - 16KB    : ( 98.67%)    0
    16KB - 32KB   : ( 98.73%)    1
    32KB - 64KB   : ( 98.79%)    1
    64KB - 128KB  : ( 98.85%)    1
    128KB - 256KB : ( 98.96%)    2
    256KB - 512KB : ( 99.13%)    3
    512KB - 1MB   : ( 99.25%)    2
    1MB - 2MB     : ( 99.37%)    2
    2MB - 4MB     : ( 99.48%)    2
    4MB - 8MB     : ( 99.54%)    1
    8MB - 16MB    : ( 99.60%)    1
    16MB - 32MB   : (100.00%)    7
    Avg           : ( 78KB)
If we want to store 200M+ keys per vbucket, then we should probably have an upper limit on the batch size. Just look at these timings:
disk_commit (1733 total)
    0 - 1s             : ( 98.79%) 1712 ####################################
    1s - 2s            : ( 98.90%)    2
    2s - 4s            : ( 99.02%)    2
    4s - 7s            : ( 99.13%)    2
    7s - 10s           : ( 99.19%)    1
    10s - 16s          : ( 99.25%)    1
    16s - 23s          : ( 99.31%)    1
    23s - 34s          : ( 99.37%)    1
    34s - 49s          : ( 99.42%)    1
    49s - 1m:09s       : ( 99.48%)    1
    1m:09s - 1m:38s    : ( 99.54%)    1
    1m:38s - 2m:19s    : ( 99.60%)    1
    2m:19s - 3m:15s    : ( 99.60%)    0
    3m:15s - 4m:35s    : ( 99.65%)    1
    4m:35s - 6m:26s    : ( 99.65%)    0
    6m:26s - 9m:01s    : ( 99.65%)    0
    9m:01s - 12m:39s   : ( 99.77%)    2
    12m:39s - 17m:44s  : ( 99.77%)    0
    17m:44s - 24m:51s  : ( 99.88%)    2
    24m:51s - 34m:49s  : ( 99.88%)    0
    34m:49s - 48m:45s  : ( 99.88%)    0
    48m:45s - 68m:17s  : ( 99.94%)    1
    68m:17s - 95m:37s  : (100.00%)    1
    Avg                : ( 6s)
I don't think there is much benefit in keeping a single commit in flight for over an hour.
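For illustration only, here is a minimal sketch of the kind of cap I mean (flushWithCap, flushBatch, Item, and maxBatchSize are made-up names, not the actual ep-engine flusher code): split whatever is queued for a vbucket into sub-batches of at most maxBatchSize items and commit each sub-batch separately, so the worst-case disk_commit latency is bounded by the cap rather than by the queue depth.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Item {};  // stand-in for the real queued item type

    // Hypothetical helper: persist one bounded batch and commit it.
    // In the real flusher this would be the on-disk save + commit.
    static void flushBatch(const std::vector<const Item*>& batch) {
        std::printf("committing batch of %zu items\n", batch.size());
    }

    // Split everything queued for a vbucket into sub-batches of at most
    // maxBatchSize items, committing each one, so no single disk_commit
    // has to cover tens of millions of documents.
    static void flushWithCap(const std::vector<Item>& toFlush,
                             std::size_t maxBatchSize) {
        std::vector<const Item*> batch;
        batch.reserve(std::min(toFlush.size(), maxBatchSize));
        for (const Item& item : toFlush) {
            batch.push_back(&item);
            if (batch.size() >= maxBatchSize) {
                flushBatch(batch);  // commit this chunk before accumulating more
                batch.clear();
            }
        }
        if (!batch.empty()) {
            flushBatch(batch);  // commit the remainder
        }
    }

    int main() {
        std::vector<Item> queued(25000);  // e.g. 25K dirty items for one vbucket
        flushWithCap(queued, 10000);      // cap each commit at 10K items
    }

The trade-off is more commits (and hence more syncs) per flush, but each individual commit stays short.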
Issue Links
relates to: MB-25993 vbucket move doesn't seem to work when vbuckets are large (Closed)