Details
Type: Bug
Resolution: Done
Priority: Major
Affects Version: 6.0.0
Environment: centos longevity
Triage: Untriaged
Is this a Regression?: Unknown
Description
Build: 6.0.0-1529
Test: -test tests/integration/test_allFeatures_alice_timers.yml -scope tests/integration/scope_Xattrs_Alice.yml
Scale: 3
Iteration: 1st

Observed the following panic in the query logs on one of the query nodes (172.23.104.67):
_time=2018-08-18T03:19:37.944-07:00 _level=ERROR _msg=Failed to perform <ud>insert</ud> on key <ud>3.10.0.7</ud> for Keyspace ORDER_LINE.
_time=2018-08-18T03:19:37.954-07:00 _level=ERROR _msg=Failed to perform <ud>insert</ud> on key <ud>3.4.0.13</ud> for Keyspace ORDER_LINE.
_time=2018-08-18T03:19:38.007-07:00 _level=ERROR _msg=Failed to perform <ud>insert</ud> on key <ud>2.10.0</ud> for Keyspace ORDERS.
_time=2018-08-18T03:19:38.008-07:00 _level=SEVERE _msg=panic: runtime error: slice bounds out of range
request text:
<ud>SELECT C_DISCOUNT, C_LAST, C_CREDIT FROM CUSTOMER USE KEYS [(to_string($1) || '.' || to_string($2) || '.' || to_string($3)) ]</ud>
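The document key in the query above is built by concatenating three positional parameters with `.` separators. A minimal Go sketch of the equivalent key construction, assuming the parameters are the usual TPCC warehouse/district/customer IDs (the query itself only shows `$1`..`$3`, so the parameter names here are an assumption):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildCustomerKey mirrors the N1QL expression
// to_string($1) || '.' || to_string($2) || '.' || to_string($3).
// The parameter names (warehouse, district, customer) are assumed
// from the TPCC schema, not confirmed by the query text.
func buildCustomerKey(warehouse, district, customer int) string {
	parts := []string{
		strconv.Itoa(warehouse),
		strconv.Itoa(district),
		strconv.Itoa(customer),
	}
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(buildCustomerKey(3, 10, 7))
}
```

Keys of this shape match the ones in the ERROR lines above (e.g. `2.10.0` for ORDERS; the ORDER_LINE keys carry a fourth component).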
stack:
goroutine 53368879 [running]:
github.com/couchbase/query/execution.(*Context).Recover(0xc42b5c6dc0)
	goproj/src/github.com/couchbase/query/execution/context.go:498 +0xbc
panic(0xe39640, 0x1870170)
	/home/couchbase/.cbdepscache/exploded/x86_64/go-1.8.5/go/src/runtime/panic.go:489 +0x2cf
github.com/couchbase/query/datastore/couchbase.doFetch(0xc429d44620, 0x8, 0xc42b449500, 0x0, 0xc400000000)
	goproj/src/github.com/couchbase/query/datastore/couchbase/couchbase.go:1063 +0x554
github.com/couchbase/query/datastore/couchbase.(*keyspace).Fetch(0xc4282bda90, 0xc443f9fc00, 0x1, 0x40, 0xc4340891a0, 0x189d4a0, 0xc42b5c6dc0, 0x0, 0x0, 0x0, ...)
	goproj/src/github.com/couchbase/query/datastore/couchbase/couchbase.go:1037 +0x779
github.com/couchbase/query/execution.(*Fetch).flushBatch(0xc4285210e0, 0xc42b5c6dc0, 0x100000000aef300)
	goproj/src/github.com/couchbase/query/execution/fetch.go:108 +0x533
github.com/couchbase/query/execution.(*Fetch).afterItems(0xc4285210e0, 0xc42b5c6dc0)
	goproj/src/github.com/couchbase/query/execution/fetch.go:65 +0x35
github.com/couchbase/query/execution.(*base).runConsumer.func1()
	goproj/src/github.com/couchbase/query/execution/base.go:551 +0x296
github.com/couchbase/query/util.(*Once).Do(0xc4285211d8, 0xc42b63af38)
	goproj/src/github.com/couchbase/query/util/sync.go:51 +0x68
github.com/couchbase/query/execution.(*base).runConsumer(0xc4285210e0, 0x189ac40, 0xc4285210e0, 0xc42b5c6dc0, 0x0, 0x0)
	goproj/src/github.com/couchbase/query/execution/base.go:552 +0xaf
github.com/couchbase/query/execution.(*Fetch).RunOnce(0xc4285210e0, 0xc42b5c6dc0, 0x0, 0x0)
	goproj/src/github.com/couchbase/query/execution/fetch.go:49 +0x5c
created by github.com/couchbase/query/execution.(*base).runConsumer.func1
	goproj/src/github.com/couchbase/query/execution/base.go:537 +0x2f6
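The trace shows the runtime panic originating in `doFetch` (couchbase.go:1063) and being caught by the deferred `(*Context).Recover` frame, which converts it to the SEVERE log line rather than crashing the process. A minimal sketch of that failure mode and recovery pattern, assuming nothing about the actual doFetch internals (the slicing bug here is purely illustrative):

```go
package main

import "fmt"

// fetchBatch illustrates the reported failure class: slicing past the
// bounds of a batch. keys[:n] panics at runtime when n > len(keys).
// This is NOT the real doFetch logic, only the same panic category.
func fetchBatch(keys []string, n int) []string {
	return keys[:n]
}

// runWithRecover mirrors the pattern visible in the stack trace, where
// a deferred recover (Context.Recover in the query engine) catches the
// runtime panic and turns it into an error instead of killing the node.
func runWithRecover() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	keys := []string{"3.10.0.7", "3.4.0.13"}
	fetchBatch(keys, 5) // 5 > len(keys) == 2: slice bounds out of range
	return nil
}

func main() {
	fmt.Println(runWithRecover())
}
```

Note that recover only runs because it is installed in a deferred closure above the panicking frame, which matches `Recover` appearing at the top of the goroutine's stack.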
This could be related to the TPCC workload that the test started a few hours before hitting this panic:

[2018-08-18T00:35:30-07:00, sequoiatools/tpcc:d5d01f] ./run.sh 172.23.104.67:8093 util/cbcrindex.sql
[2018-08-18T00:46:57-07:00, sequoiatools/tpcc:8a5873] python tpcc.py --duration 259200 --client 3 --warehouses 5 --no-execute n1ql --query-url 172.23.104.88:8093 --userid Administrator --password password
[2018-08-18T00:47:03-07:00, sequoiatools/tpcc:a7818a] python tpcc.py --duration 2259200 --client 3 --warehouses 5 --no-load n1ql --query-url 172.23.104.67:8093