Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version: 4.0.0
- Labels: None
Description
This issue was encountered with a document field containing a very large numeric value, 17626319910530664276, which caused an error when attempting to replicate to Elasticsearch.
Seen on version 4.0; I'm unsure whether previous versions are also affected.
The value appears to have been read as a BigInteger, which caused an error in the Elasticsearch client:
10:01:55.285 [es-worker-0] WARN c.c.c.e.ElasticsearchWorker - Error in Elasticsearch worker thread
java.lang.IllegalArgumentException: cannot write xcontent for unknown value of type class java.math.BigInteger
	at org.elasticsearch.common.xcontent.XContentBuilder.unknownValue(XContentBuilder.java:755) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.common.xcontent.XContentBuilder.map(XContentBuilder.java:810) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.common.xcontent.XContentBuilder.map(XContentBuilder.java:792) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.action.index.IndexRequest.source(IndexRequest.java:313) ~[elasticsearch-6.3.2.jar:6.3.2]
	at com.couchbase.connector.elasticsearch.io.DefaultDocumentTransformer.setSourceFromEventContent(DefaultDocumentTransformer.java:121) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.RequestFactory.newIndexRequest(RequestFactory.java:87) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.RequestFactory.newDocWriteRequest(RequestFactory.java:72) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.ElasticsearchWriter.write(ElasticsearchWriter.java:151) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.ElasticsearchWorker.lambda$doRun$0(ElasticsearchWorker.java:76) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]

10:01:55.289 [es-worker-0] INFO c.c.c.e.ElasticsearchWorker - Thread[es-worker-0,5,main] stopped.
10:01:55.289 [main] ERROR c.c.c.e.ElasticsearchConnector - Terminating due to fatal error.
java.lang.IllegalArgumentException: cannot write xcontent for unknown value of type class java.math.BigInteger
	at org.elasticsearch.common.xcontent.XContentBuilder.unknownValue(XContentBuilder.java:755) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.common.xcontent.XContentBuilder.map(XContentBuilder.java:810) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.common.xcontent.XContentBuilder.map(XContentBuilder.java:792) ~[elasticsearch-x-content-6.3.2.jar:6.3.2]
	at org.elasticsearch.action.index.IndexRequest.source(IndexRequest.java:313) ~[elasticsearch-6.3.2.jar:6.3.2]
	at com.couchbase.connector.elasticsearch.io.DefaultDocumentTransformer.setSourceFromEventContent(DefaultDocumentTransformer.java:121) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.RequestFactory.newIndexRequest(RequestFactory.java:87) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.RequestFactory.newDocWriteRequest(RequestFactory.java:72) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.io.ElasticsearchWriter.write(ElasticsearchWriter.java:151) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at com.couchbase.connector.elasticsearch.ElasticsearchWorker.lambda$doRun$0(ElasticsearchWorker.java:76) ~[couchbase-elasticsearch-connector-4.0.0.jar:?]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_201]
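The magnitude of the value explains the parse result: 17626319910530664276 is larger than Long.MAX_VALUE (9223372036854775807), so a JSON parser cannot represent it as a Java long and must promote it to java.math.BigInteger, a type XContentBuilder evidently does not know how to serialize. A small JDK-only check (illustrative, not connector code) shows the range issue:

```java
import java.math.BigInteger;

public class BigIntegerRangeCheck {
    public static void main(String[] args) {
        // The problematic field value from the failing document.
        BigInteger value = new BigInteger("17626319910530664276");

        // Long.MAX_VALUE is 9223372036854775807; anything larger cannot be
        // stored in a Java long, so a JSON parser must promote it.
        BigInteger longMax = BigInteger.valueOf(Long.MAX_VALUE);

        System.out.println("fits in long: " + (value.compareTo(longMax) <= 0));
        // prints "fits in long: false"
    }
}
```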
There are two issues here:
1. In this specific case, very large integer values are read as BigInteger by the connector, which causes this serialization error.
2. More generally, and more importantly, the connector process exits when it hits an error like this, which can be quite disruptive.
Is exiting the process the intended way of handling such errors? In the general case, should documents that fail be logged and skipped, so that replication can continue?
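One possible workaround for issue 1 (a sketch only, not the connector's actual fix, and the class and method names here are hypothetical) would be a transform step that coerces BigInteger values to their decimal string form before the document map reaches XContentBuilder:

```java
import java.math.BigInteger;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BigIntegerSanitizer {
    /**
     * Recursively replaces BigInteger values with their decimal string form,
     * so the resulting structure contains only types XContentBuilder accepts.
     * Hypothetical helper; not part of the connector's API.
     */
    @SuppressWarnings("unchecked")
    public static Object sanitize(Object value) {
        if (value instanceof BigInteger) {
            // Stringify: preserves the exact digits, avoids the xcontent error.
            return value.toString();
        }
        if (value instanceof Map) {
            Map<String, Object> out = new LinkedHashMap<>();
            ((Map<String, Object>) value).forEach((k, v) -> out.put(k, sanitize(v)));
            return out;
        }
        if (value instanceof List) {
            return ((List<Object>) value).stream()
                    .map(BigIntegerSanitizer::sanitize)
                    .collect(Collectors.toList());
        }
        return value;
    }
}
```

Stringifying changes the field's type in the index, so it trades the crash for a mapping consideration; the alternative raised above, logging and skipping the failing document, keeps the original types for every document that does fit.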