Couchbase Spark Connector / SPARKC-104

Stopping and restarting a Spark-Couchbase streaming job throws ClassCastException


Details

    • Type: Bug
    • Resolution: Won't Fix
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 2.3.0
    • Component/s: Core, Spark Streaming
    • Labels: None
    • 1
    • Medium

    Description

      Using the following jars with Spark spark-2.4.3-bin-hadoop2.7 and Scala 2.11:

      spark-connector_2.11-2.3.0.jar
      opentracing-api-0.31.0.jar
      rxjava-1.3.8.jar
      rxscala_2.11-0.27.0.jar
      dcp-client-0.23.0.jar
      java-client-2.7.6.jar
      core-io-1.7.6.jar
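      For context, the failing job has roughly the shape sketched below: a Structured Streaming read from the Couchbase source that is run once, stopped, and then restarted against the same checkpoint directory. This is a minimal sketch, not the actual job; the source format name, connection settings, bucket name, and checkpoint path are assumptions/placeholders.

      // Minimal sketch of the scenario (not the actual job). Format name,
      // connection settings, bucket name, and checkpoint path are placeholders.
      import org.apache.spark.sql.SparkSession

      object CouchbaseStreamRestartSketch {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("couchbase-stream-restart")
            .config("spark.couchbase.nodes", "127.0.0.1")        // placeholder node
            .config("spark.couchbase.bucket.travel-sample", "")  // placeholder bucket
            .getOrCreate()

          // Structured Streaming read from the Couchbase source.
          val stream = spark.readStream
            .format("com.couchbase.spark.sql") // assumed source name for connector 2.x
            .load()

          // The first run populates the checkpoint; running the same job again
          // with the same checkpointLocation is where the exception is observed.
          val query = stream.writeStream
            .format("console")
            .option("checkpointLocation", "/tmp/couchbase-stream-checkpoint") // placeholder
            .start()

          query.awaitTermination()
        }
      }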

      Exception thrown after the job is restarted:

      ERROR MicroBatchExecution: Query [id = 5206cb7e-4260-4465-816f-82bf6b1429f6, runId = df2d54e7-c3f9-4075-810b-1ea7ebe0b9b6] terminated with error
      java.lang.ClassCastException: org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to com.couchbase.spark.sql.streaming.CouchbaseSourceOffset
        at com.couchbase.spark.sql.streaming.CouchbaseSource$$anonfun$7.apply(CouchbaseSource.scala:172)
        at com.couchbase.spark.sql.streaming.CouchbaseSource$$anonfun$7.apply(CouchbaseSource.scala:172)
        at scala.Option.map(Option.scala:146)
        at com.couchbase.spark.sql.streaming.CouchbaseSource.getBatch(CouchbaseSource.scala:172)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$10.apply(MicroBatchExecution.scala:438)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$10.apply(MicroBatchExecution.scala:434)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
        at org.apache.spark.sql.execution.streaming.StreamProgress.flatMap(StreamProgress.scala:25)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:434)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:434)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:433)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
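      The cast at CouchbaseSource.scala:172 indicates that on restart getBatch receives the start offset as a SerializedOffset, recovered from the checkpointed offset log, rather than as a CouchbaseSourceOffset. Conceptually, a custom Source has to accept both forms and re-parse the serialized JSON instead of casting. The hypothetical sketch below illustrates that pattern only; it is not the connector's actual code, and the offset class and helper names are placeholders.

      import org.apache.spark.sql.execution.streaming.{Offset, SerializedOffset}

      // Hypothetical offset type standing in for the connector's CouchbaseSourceOffset.
      case class ExampleSourceOffset(json: String) extends Offset

      object ExampleOffsets {
        // Accept either a live offset instance or a SerializedOffset restored from
        // the checkpoint log, re-parsing the latter instead of casting it directly.
        def toExampleOffset(offset: Offset): ExampleSourceOffset = offset match {
          case o: ExampleSourceOffset => o
          case SerializedOffset(json) => ExampleSourceOffset(json)
          case other => throw new IllegalArgumentException(s"Unexpected offset type: $other")
        }
      }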


          People

            Assignee: Graham Pople (graham.pople)
            Reporter: Amar Laddha (amarsladdha)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
