Couchbase Go SDK / GOCBC-1208

[GOCB] Cluster level bootstrapping appears to fall back to HTTP polling with an empty bucket

Details

    • Bug
    • Resolution: Fixed
    • Major
    • core-10.0.6
    • 2.3.4
    • library
    • None
    • 1

    Description

      What's the issue?
      We've had a recent issue with sample importing where we failed to bootstrap against the target cluster.

      The bootstrap failed, logging errors such as the following:

      Bad Bucket

      2021-12-03T01:36:25.477-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      

      In the logs, we can see the message saying that we're creating the connection, but not one saying it was successfully created, which indicates that the SDK never passed control back to us.

      Connection Logging

      2021-12-03T01:36:25.381-08:00 (Couchbase) Creating Query connection to cluster 'couchbases://172.23.107.90:11207,172.23.105.215:11207'
      ...
      2021-12-03T01:37:25.383-08:00 JSON import failed: failed to execute cluster operations: failed to execute bucket operation for bucket 'travel-sample': failed to execute queries for bucket 'travel-sample': failed to execute queries against sink bucket: failed to ensure gocb is connected to the cluster: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":60000095986,"RetryReasons":["NOT_READY"],"RetryAttempts":65,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
      JSON import failed: operation has timed out
      
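      The 'WaitUntilReady' timeout above is the cluster-level readiness check that gocb performs before any bucket has been opened. For reference, below is a minimal sketch of the equivalent gocb v2 call flow; the connection string is taken from the log, but the credentials and timeout are illustrative assumptions rather than the importer's actual values.

      gocb Cluster-Level Bootstrap (illustrative sketch)

      package main

      import (
          "log"
          "time"

          "github.com/couchbase/gocb/v2"
      )

      func main() {
          // Illustrative credentials; the importer's real credentials are not known here.
          cluster, err := gocb.Connect("couchbases://172.23.107.90:11207,172.23.105.215:11207", gocb.ClusterOptions{
              Authenticator: gocb.PasswordAuthenticator{
                  Username: "Administrator",
                  Password: "password",
              },
          })
          if err != nil {
              log.Fatalf("failed to create cluster object: %v", err)
          }

          // Cluster-level WaitUntilReady: no bucket has been opened at this point.
          // This is the call that times out in the log above
          // ("OperationID":"WaitUntilReady", RetryReasons ["NOT_READY"]).
          if err := cluster.WaitUntilReady(1*time.Minute, nil); err != nil {
              log.Fatalf("cluster never became ready: %v", err)
          }

          log.Println("cluster bootstrap complete")
      }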

      I was able to reproduce this issue on the cluster provided by the initial reporter (that cluster is no longer available). Inspecting the logs, I saw the following:

      HTTP Bootstrapping With an Empty Bucket

      2021-12-03T10:20:33.208+00:00 (Gocbcore) Requesting config from: https://172.23.107.90:18091//pools/default/bs/.
       
      2021-12-03T10:20:33.208+00:00 (Gocbcore) Writing HTTP request to https://172.23.107.90:18091/pools/default/bs/ ID=b791971b-a2de-4e64-984c-df92156d4f9a
      
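      The request above targets a path ending in a bare '/bs/', which suggests the HTTP config poller is building a bucket-specific terse-config path with an empty bucket name. The sketch below only illustrates that failure mode; it is an assumption about how such a path might be assembled, not the actual gocbcore code.

      Empty Bucket Name in the Config Path (illustrative sketch)

      package main

      import "fmt"

      // terseStreamingConfigPath mimics building the bucket-specific streaming
      // config path. This is a hypothetical helper for illustration only.
      func terseStreamingConfigPath(bucketName string) string {
          return fmt.Sprintf("/pools/default/bs/%s", bucketName)
      }

      func main() {
          // A cluster-level (bucketless) bootstrap has no bucket name, so the
          // poller would request "/pools/default/bs/" - the trailing-slash URL
          // captured in the log excerpt above.
          fmt.Println(terseStreamingConfigPath(""))
      }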

      Further to this, I don't have any more information on steps to reproduce, other than that I was simply performing a sample import using the CLI (Sumedh Basarkod might be able to provide more detail about the setup).

      Logs
      The supportal snapshot can be found here.

      cbimport logs

      2021-12-03T01:36:25.267-08:00 (Plan) (Query) Executing queries for bucket 'travel-sample'
      2021-12-03T01:36:25.381-08:00 (Couchbase) Creating Query connection to cluster 'couchbases://172.23.107.90:11207,172.23.105.215:11207'
      2021-12-03T01:36:25.477-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:25.528-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:35.587-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:35.669-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:45.730-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:45.801-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:55.871-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:36:55.930-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:37:05.996-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:37:06.058-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:37:16.117-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:37:16.174-08:00 (Gocbcore) Failed to connect to host, bad bucket.
      2021-12-03T01:37:25.383-08:00 JSON import failed: failed to execute cluster operations: failed to execute bucket operation for bucket 'travel-sample': failed to execute queries for bucket 'travel-sample': failed to execute queries against sink bucket: failed to ensure gocb is connected to the cluster: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":60000095986,"RetryReasons":["NOT_READY"],"RetryAttempts":65,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
      JSON import failed: operation has timed out
      

      For the complete importer output, search for "Loader's" in 'ns_server.debug.log' on node 172.23.107.90.


            People

              james.lee James Lee
              james.lee James Lee
              Votes:
              0 Vote for this issue
              Watchers:
              2 Start watching this issue
