Description
An end user running a set of queries through SparkSQL saw them fail with "err: backfill exceeded limit 5120". The concern is that the error did not propagate well: they believe a subset of the data was returned instead of the query failing outright.
The suspicion is that the streaming parser works correctly up to a point, after which the error is returned by the server, but the error handling around the stream may be inadequate, allowing partial results to be surfaced silently.
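The hazard described above can be sketched generically. This is a hypothetical illustration, not the connector's actual code: a streamed query response yields rows and then raises a mid-stream error; a consumer that swallows the exception hands back a silently truncated subset, while one that lets it propagate fails the query loudly.

```python
def stream_rows():
    """Simulates a streamed query response: some rows, then a server error."""
    for i in range(3):
        yield {"id": i}
    # The error arrives only after partial data has already been emitted.
    raise RuntimeError("err: backfill exceeded limit 5120")

def collect_swallowing_errors(rows):
    """Buggy pattern: catches the error and returns the partial subset."""
    out = []
    try:
        for row in rows:
            out.append(row)
    except RuntimeError:
        pass  # error dropped; the caller cannot tell results are incomplete
    return out

def collect_propagating_errors(rows):
    """Correct pattern: let the error surface so the whole query fails."""
    return list(rows)  # any mid-stream error propagates to the caller

print(len(collect_swallowing_errors(stream_rows())))   # 3 rows, error invisible

try:
    collect_propagating_errors(stream_rows())
except RuntimeError as e:
    print(e)   # err: backfill exceeded limit 5120
```

The fix tracked by this ticket is of the second kind: ensuring the streaming-parse error reaches the SparkSQL caller rather than being absorbed after partial rows have been delivered.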
Attachments
For Gerrit Dashboard: SPARKC-85

# | Subject | Branch | Project | Status | CR | V
---|---|---|---|---|---|---
109823,2 | SPARKC-85: ensure reasonable error handling in SparkSQL query failure cases | master | couchbase-spark-connector | MERGED | +2 | +1
110129,1 | SPARKC-85: ensure reasonable error handling in SparkSQL query failure cases | release/2.2 | couchbase-spark-connector | ABANDONED | 0 | 0