Hi Dave, I do think this is still an issue. Sizing does account for how much RAM is needed, and it uses these same percentages. However, my point with this issue is more about how much RAM we may be "wasting" by relying only on percentages.
For instance, when running Couchbase on a node with 128GB of RAM, we cap the server quota at 80%, cutting off the remaining 20% (over 25GB). A newly created bucket then has only ~100GB to work with, and the high water mark is set at 75% of that, withholding another 25% (~25GB). I realize there are various inefficiencies from memory fragmentation, and that larger amounts of data in RAM may require larger buffers... but nearly 50GB of unusable RAM is too much, and it only gets worse on nodes with 256GB or more.
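To make the arithmetic concrete, here's a rough back-of-the-envelope sketch of where the RAM goes under the current defaults (the percentages are the ones above; all names are mine):

```python
# Back-of-the-envelope: RAM stranded by the current percentage-only defaults.
# Percentages are the ones discussed above; variable names are illustrative.

NODE_RAM_GB = 128
SERVER_QUOTA_PCT = 0.80   # share of node RAM allowed for the server quota
HIGH_WATER_PCT = 0.75     # share of the bucket quota below the high water mark

server_quota = NODE_RAM_GB * SERVER_QUOTA_PCT        # ~102 GB available to buckets
node_headroom = NODE_RAM_GB - server_quota           # ~26 GB cut off at the node level
high_water_mark = server_quota * HIGH_WATER_PCT      # ~77 GB before ejection starts
watermark_headroom = server_quota - high_water_mark  # ~26 GB held above the mark

print(f"unusable RAM: {node_headroom + watermark_headroom:.0f} GB")  # ~51 GB
```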
We do have reasonable support for reducing that first 80%, but the product doesn't really have a good way of managing the high/low watermarks, so I was proposing that we enhance our calculation to take this into account and use fixed values (5GB/10GB?) when the percentages would reserve too much.
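As a sketch of what that enhanced calculation might look like: keep the percentage-based watermarks, but cap the absolute headroom with fixed values once the percentages get too large. The 5GB/10GB caps are the values floated above; which cap applies to which watermark, and the 60% low water mark default, are my assumptions:

```python
def watermarks_gb(bucket_quota_gb: float,
                  high_pct: float = 0.75, low_pct: float = 0.60,
                  max_high_headroom_gb: float = 5.0,
                  max_low_headroom_gb: float = 10.0) -> tuple[float, float]:
    """Return (high, low) watermarks in GB: percentage-based on small quotas,
    but never leaving more than the fixed caps of headroom below the quota."""
    high = max(bucket_quota_gb * high_pct, bucket_quota_gb - max_high_headroom_gb)
    low = max(bucket_quota_gb * low_pct, bucket_quota_gb - max_low_headroom_gb)
    return high, low

# 100GB quota: the fixed caps win -> high=95GB, low=90GB instead of 75GB/60GB.
print(watermarks_gb(100))
# 4GB quota: the percentages win -> high=3.0GB, low=2.4GB, unchanged behavior.
print(watermarks_gb(4))
```

The nice property of this shape is that small nodes see no behavior change at all; the fixed caps only kick in once the percentage-based headroom would strand more RAM than the cap allows.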