Details
- Type: Task
- Resolution: Unresolved
- Priority: Major
- Affects Version: 6.0.0
Description
In 6.0, FTS started rejecting queries based on the sizing estimate derived here: https://github.com/blevesearch/bleve/blob/f9afd92d0dc4463d5fa49729182fa6968ca7108a/search.go#L526

However, this sizing estimate is naive: it can admit more query processing than is actually permissible from a memory standpoint, leading to memory bloat. In particular, the current sizing logic does not account for the overhead of preparing the search results returned in the response, and that result computation/preparation accounts for a large share of memory use. For example, the vellum readers and posting-list readers, especially for fan-out queries such as prefix, wildcard, fuzzy, and multi-term searches.

More investigation is needed to improve this logic.
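To illustrate the shape of the problem, here is a minimal sketch (not bleve's actual code) of an admission check driven by a pre-query size estimate. The constants `bytesPerHit` and `baseResultBytes`, and the functions `estimateSearchMemory` and `admitQuery`, are all hypothetical; the point is that an estimate scaling only with the requested result window (`size + from`) never sees the reader-side fan-out the ticket describes, so a wildcard query expanding to thousands of terms passes the same check as a simple term query.

```go
package main

import "fmt"

// Hypothetical per-hit cost constants for illustration only; bleve derives
// its figures from sizing of its internal result structs.
const (
	bytesPerHit     = 512  // assumed average footprint of one result hit
	baseResultBytes = 2048 // assumed fixed overhead of a search result
)

// estimateSearchMemory mirrors the naive pre-query estimate described in
// the ticket: it scales only with the requested result window (size+from)
// and ignores reader-side overheads (vellum readers, posting-list readers)
// that fan-out queries incur per expanded term.
func estimateSearchMemory(size, from int) uint64 {
	return baseResultBytes + uint64(size+from)*bytesPerHit
}

// admitQuery applies the admission check: reject the query up front if the
// estimate exceeds the configured memory quota.
func admitQuery(size, from int, quotaBytes uint64) bool {
	return estimateSearchMemory(size, from) <= quotaBytes
}

func main() {
	// A small result window is admitted under a 1 MiB quota regardless of
	// how many terms the query actually expands to, which is exactly the
	// under-estimation this ticket describes.
	fmt.Println(admitQuery(10, 0, 1<<20))    // small window: admitted
	fmt.Println(admitQuery(10000, 0, 1<<20)) // huge window: rejected
}
```

The gap is that memory use is driven by the query's term fan-out and reader state, not just the result window, so a better estimate would need per-term reader costs as inputs.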
Gerrit Reviews
For Gerrit Dashboard: MB-31358

| # | Subject | Branch | Project | Status | CR | V |
|---|---|---|---|---|---|---|
| 145419,2 | Upgrade version of bleve | master | cbft | MERGED | +2 | +1 |