Positional parameters with long integer values return wrong results
Activity
Isha Kandaswamy May 25, 2021 at 7:02 PM
Do not see this issue anymore
Isha Kandaswamy May 25, 2021 at 7:02 PM
cbq> \set -args [1515026765605058000];
cbq> SELECT longint, t from default WHERE longint < $1 AND t = "bad" ORDER BY longint DESC limit 10;
{
"requestID": "e148b749-4188-4f32-8d8a-5447dec5a75c",
"signature": {
"longint": "json",
"t": "json"
},
"results": [
{
"longint": 1515026765604796000,
"t": "bad"
},
{
"longint": 1515026765603621000,
"t": "bad"
}
],
"status": "success",
"metrics": {
"elapsedTime": "6.788002ms",
"executionTime": "6.682954ms",
"resultCount": 2,
"resultSize": 132,
"serviceLoad": 3,
"sortCount": 2
}
}
Johan Larson April 17, 2018 at 4:42 PM
Our code currently handles numbers in three ways:
some numbers are passed through verbatim: we do not parse documents deeper than a query requires, so deeply nested numbers may go unprocessed and are copied out exactly as stored
numbers that are parsed and fit into int64 values are stored as int64
all remaining numbers are parsed into float64 values, as usual
All of this means that when we output a number it may or may not be what was originally stored.
I propose we change our system to keep two values for each number: the text form exactly as originally stored, and the parsed float64 numerical value. Whenever we need to use the value as a number, whether for comparison, arithmetic, sorting, or any other purpose, we use the float64. But when we need to copy the value to output, we emit the originally stored text. Users will then see the actual numbers they stored, and we will have consistent, defensible semantics for very large numbers.
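A minimal sketch of the dual-value proposal above (the `dualNumber` type and its method names are hypothetical, not the actual query/values API):

```go
package main

import (
	"fmt"
	"strconv"
)

// dualNumber keeps the exact bytes as stored, plus a parsed float64
// for use in comparisons, arithmetic, and sorting.
type dualNumber struct {
	text string  // originally stored form, echoed back on output
	num  float64 // parsed value, used wherever a number is needed
}

func newDualNumber(text string) (dualNumber, error) {
	f, err := strconv.ParseFloat(text, 64)
	return dualNumber{text: text, num: f}, err
}

// Less compares using the numeric value.
func (d dualNumber) Less(o dualNumber) bool { return d.num < o.num }

// MarshalJSON writes back exactly what was stored.
func (d dualNumber) MarshalJSON() ([]byte, error) { return []byte(d.text), nil }

func main() {
	a, _ := newDualNumber("1515026765605058000")
	b, _ := newDualNumber("1515026765605059000")
	fmt.Println(a.Less(b)) // comparison uses the float64 value
	out, _ := a.MarshalJSON()
	fmt.Println(string(out)) // output is the original text, unchanged
}
```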
Doing this presents some challenges: we will need to change our parsing and our query/values code, indexing will need to become conscious of the dual nature of numbers, and we should make sure our semantics are compatible with whatever FTS and Analytics are doing with numbers.
Given these challenges, this change should be treated as an actual feature, rather than a bug fix. I recommend it be done in the Mad-Hatter scope.
Johan Larson April 11, 2018 at 2:09 PM
We have our own special JSON library that deserializes large integers internally to int64 values rather than float64. I suspect we are just not being consistent about using it. Somewhere we are using the regular json library instead.
Eben Haber January 4, 2018 at 2:15 PM
I bet this issue is related to the fact that JavaScript numbers are not integers; they are floating-point values, and they lose precision for integers greater than 2^53 - 1, or 9007199254740991. Integers larger than that must be represented as strings in JSON to avoid loss of precision.
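The 2^53 boundary is easy to demonstrate in Go, whose float64 is the same IEEE 754 double that JavaScript uses:

```go
package main

import "fmt"

func main() {
	// float64 represents every integer up to 2^53 exactly; beyond that,
	// consecutive integers start mapping to the same double.
	const maxExact = int64(1) << 53

	fmt.Println(int64(float64(maxExact-1)) == maxExact-1) // true: still exact
	fmt.Println(int64(float64(maxExact+1)) == maxExact+1) // false: 2^53+1 rounds to 2^53

	// The value from this bug is far past the limit and silently rounds.
	const longint = int64(1515026765605058000)
	fmt.Println(int64(float64(longint)) == longint) // false
}
```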
INSERT INTO default VALUES("t1", {"t":"good","longint": 1515026631803155000});
INSERT INTO default VALUES("t2",{"t":"good","longint": 1515026631804568000});
INSERT INTO default VALUES("t3",{"t":"good","longint": 1515026631804859000});
INSERT INTO default VALUES("t4",{"t":"good","longint": 1515026631804990000});
INSERT INTO default VALUES("t5",{"t":"bad","longint": 1515026765603621000});
INSERT INTO default VALUES("t6",{"t":"bad","longint": 1515026765604796000});
INSERT INTO default VALUES("t7",{"t":"bad","longint": 1515026765605058000});
INSERT INTO default VALUES("t8",{"t":"bad","longint": 1515026765605259000});
SELECT longint, t from default WHERE longint < 1515026631804859000 AND t = "good" ORDER BY longint DESC limit 10;
SELECT longint, t from default WHERE longint < 1515026765605058000 AND t = "bad" ORDER BY longint DESC limit 10;
\set -$l1 1515026631804859000;
SELECT longint, t from default WHERE longint < $l1 AND t = "good" ORDER BY longint DESC limit 10;
\set -args [1515026631804859000];
SELECT longint, t from default WHERE longint < $1 AND t = "good" ORDER BY longint DESC limit 10;
\set -$l1 1515026765605058000;
SELECT longint, t from default WHERE longint < $l1 AND t = "bad" ORDER BY longint DESC limit 10;
All of the above SELECTs return 2 rows,
but the following SELECT returns 3 rows, which is wrong. The same value passed as a constant or as a named parameter (in the statements above) returns the right results.
This may be because positional parameters are decoded from the request with the standard Go JSON decoder, which converts all numbers to float64 and loses precision for certain numbers greater than 2^53.
\set -args [1515026765605058000]; SELECT longint, t from default WHERE longint < $1 AND t = "bad" ORDER BY longint DESC limit 10;
https://forums.couchbase.com/t/bug-certain-int64-values-fail-to-evaluate-correctly-when-passed-as-positional-parameters-to-query-service-endpoint/15328