Details
Type: Task
Resolution: Won't Fix
Priority: Critical
Affects Version: 4.6
Fix Version: None
Description
Add two notes, one for Watson and one for Spock.
For Watson, the following holds true:
- completed_requests lives entirely in memory, and memory usage is about 1KB per request, so even at 100k requests or thereabouts, memory consumption is significantly lower than what n1ql uses to operate
- adding each request to completed_requests is likely to add only a few microseconds to the request duration, the time needed to assemble the entry
- the completed_requests cache is fragmented across multiple buckets, so contention is not an issue
- garbage collection is not involved in adding completed_requests entries, but it is involved when deleting them
For Spock, there is an extra feature that affects completed_requests: request profiling.
- if the feature is turned on, the execution plan with timings is stored in completed_requests
- profiling information is likely to use 100KB+ per entry
- as such, we recommend not leaving both profiling and logging turned on at all times; any other combination is fine
- profiling itself does not carry any cost beyond the completed_requests memory, so it is fine to have it on as a matter of course