Details
Description
Testing on the GA of 3.0 (build 1209).
I took a few incremental backups and am now trying to restore them.
Under one backup directory I have one 'full' and two 'diff' directories, all created by repeatedly running the command below after adding some data each time:
/opt/couchbase/bin/cbbackup -u Administrator -p password http://localhost:8091 -b test -m diff -v ./
It's not clear from the documentation what I should pass to cbrestore in order to restore the entire dataset.
If I supply the top-level directory as the source, cbrestore fails with an error:
[root@cb1 backup]# /opt/couchbase/bin/cbrestore -u Administrator -p password ./2014-10-28T061810Z -b test -B test2 http://localhost:8091 -v
2014-10-28 06:23:50,873: mt cbrestore...
2014-10-28 06:23:50,873: mt source : ./2014-10-28T061810Z
2014-10-28 06:23:50,873: mt sink : http://localhost:8091
2014-10-28 06:23:50,873: mt opts : {'username': '<xxx>', 'verbose': 1, 'dry_run': False, 'extra':
, 'from_date': None, 'bucket_destination': 'test2', 'add': False, 'vbucket_list': None, 'threads': 4, 'to_date': None, 'key': None, 'password': '<xxx>', 'id': None, 'bucket_source': 'test'}
Traceback (most recent call last):
File "/opt/couchbase/lib/python/cbrestore", line 12, in <module>
pump_transfer.exit_handler(pump_transfer.Restore().main(sys.argv))
File "/opt/couchbase/lib/python/pump_transfer.py", line 94, in main
rv = pumpStation.run()
File "/opt/couchbase/lib/python/pump.py", line 108, in run
rv, source_map, sink_map = self.check_endpoints()
File "/opt/couchbase/lib/python/pump.py", line 156, in check_endpoints
rv, source_map = self.source_class.check(self.opts, self.source_spec)
File "/opt/couchbase/lib/python/pump_bfd.py", line 318, in check
bucket_dirs = glob.glob(latest_dir + "/bucket-*")
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
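The crash reduces to concatenating None with a string: judging from the traceback, pump_bfd.py fails to locate a "latest" sub-backup directory when handed the top-level directory, leaving latest_dir as None before the glob call. A minimal sketch of that failure mode (latest_dir = None is an assumption inferred from the traceback, not taken from pump_bfd.py itself):

```python
import glob

# Assumption inferred from the traceback: when cbrestore is pointed at the
# top-level backup directory, the lookup for the latest sub-backup comes
# back empty, so latest_dir is None rather than a path string.
latest_dir = None

try:
    glob.glob(latest_dir + "/bucket-*")
except TypeError as e:
    # TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
    print("cbrestore crashes here:", e)
```

A friendlier behavior would be to detect the None and report "no backup found under <dir>" instead of crashing.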
If I specify the 'full' directory or either of the 'diff' directories, only that directory's data is restored.
Is the problem that all of my backups are under a single top-level directory? In addition to fixing the error above, can we also update the documentation with examples of the "standard" way to back up incrementally and restore the whole dataset?
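In the meantime, a possible workaround sketch: restore each sub-backup oldest-first (the full backup, then each diff in chronological order), invoking cbrestore once per directory. This assumes cbbackup names the sub-directories so that a lexicographic sort matches chronological order; the paths below come from this report and should be adjusted to the actual layout.

```shell
#!/bin/sh
# Hypothetical workaround: run cbrestore once per sub-backup, oldest first.
# BACKUP_ROOT is the top-level directory from this report (an assumption
# about the on-disk layout; adjust to what cbbackup actually created).
BACKUP_ROOT=./2014-10-28T061810Z

for dir in "$BACKUP_ROOT"/*/; do
    [ -d "$dir" ] || continue   # skip if the pattern matched nothing
    echo "restoring $dir"
    /opt/couchbase/bin/cbrestore -u Administrator -p password \
        "$dir" -b test -B test2 http://localhost:8091 -v
done
```

This relies on the shell expanding the wildcard in sorted order, which puts the 'full' directory before the later 'diff' directories as long as the names begin with their timestamps.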