I am using the Couchbase Spark connector (dp2) from Scala. My Spark Streaming application reads a file stream from an HDFS directory, joins it with an RDD created via textFile, and runs Spark SQL over the stream. The application works fine and successfully loads data into Couchbase when it runs on standalone Spark in local mode.

However, my requirement is to run the application on a standalone Spark cluster, which currently has two nodes. When I run it on the cluster, all operations such as the aggregation and the join complete successfully (I verified this by printing the data after each step), but nothing is loaded into Couchbase and the application hangs immediately afterwards. It does not even read data from the next batch.

Please have a look at this issue and let me know whether this explains my problem, or whether any other information is needed from my side.
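To make the setup concrete, here is a minimal sketch of a pipeline like the one described. This is not my actual application: the bucket, node names, HDFS paths, record format, and the `toDocKey` helper are all placeholders, and the exact connector config keys and save API may differ slightly in dp2.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject
import com.couchbase.spark._

object StreamToCouchbase {

  // Hypothetical helper: derive a document key from a CSV line
  // (first field assumed to be the key).
  def toDocKey(line: String): String = line.split(",")(0)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("stream-to-couchbase")
      // Connector configuration: node list and bucket are placeholders,
      // and the exact key names may differ in the dp2 release.
      .set("com.couchbase.nodes", "cb-node1")
      .set("com.couchbase.bucket.default", "")

    val ssc = new StreamingContext(conf, Seconds(10))

    // Static lookup data loaded once via textFile, keyed for the join.
    val lookup = ssc.sparkContext
      .textFile("hdfs:///lookup/data.txt")
      .map(line => (toDocKey(line), line))

    // File stream from an HDFS directory, keyed the same way.
    val stream = ssc
      .textFileStream("hdfs:///incoming/")
      .map(line => (toDocKey(line), line))

    stream.foreachRDD { rdd =>
      // Join each batch with the static RDD, then write to Couchbase.
      rdd.join(lookup)
        .map { case (key, (left, right)) =>
          JsonDocument.create(
            key,
            JsonObject.create().put("left", left).put("right", right))
        }
        .saveToCouchbase() // implicit from com.couchbase.spark._
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

In local mode this sketch behaves as described; on the two-node cluster it is the `saveToCouchbase()` step that never completes.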