Details
- Type: Bug
- Resolution: Fixed
- Priority: Critical
- Affects Version/s: 7.1.0
- Triage: Untriaged
- Story Points: 1
- Is this a Regression?: Yes
- Sprint: Tools 2022-Feb
Description
What's happening?
When using --purge to back up to S3 after a previous failed backup, not all of the multipart uploads from the failed backup are cleaned up, leaving them hanging around.
The new backup itself succeeds, with no indication that the cleanup failed.
What's expected?
I expect all of the multipart uploads to be aborted and cleaned up.
Steps to reproduce
- Set up a cluster (defaults are fine)
- Perform a backup to S3
- Abort this backup (in my case, by killing the Erlang process)
- Perform another backup, this time with the --purge flag, i.e.
/opt/couchbase/bin//cbbackupmgr backup --archive s3://testrunner-gcp-jmj-test/archive-10.112.212.101 --repo backup --cluster http://10.112.212.101:8091 --username Administrator --password password --obj-access-key-id asdf --obj-endpoint http://10.112.212.101:4572 --obj-region us --obj-secret-access-key asdf --obj-staging-dir /tmp/cbbackupmgr-staging --s3-force-path-style --purge
Afterwards, the in-progress multipart uploads for the bucket still include entries such as:
s3.MultipartUpload(bucket_name='testrunner-gcp-jmj-test', object_key='archive-10.112.212.101/backup/2022-03-03T14_41_15.198773897Z/default-1a369fb4c4b52efc6156b41c968764fc/data/data_1023.rift.0', id='plgXMjtD5NlNuIRFCmwWkDtHHrbPgHtcAAcSskK8BxGkzOtfJ1zQGsmw'), s3.MultipartUpload(bucket_name='testrunner-gcp-jmj-test', object_key='archive-10.112.212.101/backup/2022-03-03T14_41_15.198773897Z/default-1a369fb4c4b52efc6156b41c968764fc/data/data_1022.rift.0', id='DEjmWnYo2RxWwaDoqlOYHp9JksCNTcn1Jsny8FS0Fr0mW79i5Iyhwfiiog')
etc.
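For reference, the listing above can be reproduced with a short boto3 script along these lines (a minimal sketch; the endpoint, credentials, region and bucket name are simply the ones from the reproduction command and are specific to my test setup):

import boto3

# List any in-progress multipart uploads left behind in the bucket.
s3 = boto3.resource(
    "s3",
    endpoint_url="http://10.112.212.101:4572",
    aws_access_key_id="asdf",
    aws_secret_access_key="asdf",
    region_name="us",
)

for upload in s3.Bucket("testrunner-gcp-jmj-test").multipart_uploads.all():
    # Each of these should have been aborted by --purge, but they are still here.
    print(upload)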
It's also worth noting here that the backup name seen in the multipart uploads (2022-03-03T14_41_15.198773897Z) is the name of the backup that was aborted.
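As a stop-gap, the stragglers can be aborted by hand; a minimal sketch, again assuming boto3 and the same endpoint/credentials as above, that filters on the aborted backup's name so nothing else is touched:

import boto3

s3 = boto3.resource(
    "s3",
    endpoint_url="http://10.112.212.101:4572",
    aws_access_key_id="asdf",
    aws_secret_access_key="asdf",
    region_name="us",
)

# Key prefix of the backup that was aborted (taken from the object keys above).
aborted_prefix = "archive-10.112.212.101/backup/2022-03-03T14_41_15.198773897Z/"

for upload in s3.Bucket("testrunner-gcp-jmj-test").multipart_uploads.all():
    if upload.object_key.startswith(aborted_prefix):
        # Abort the leftover upload so S3 stops holding on to its parts.
        upload.abort()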