Description
What's the issue?
Arun noticed that collecting logs for a GCP archive was taking an unreasonable amount of time, to the point where the process had to be killed.
This should not be the case; in theory, log collection should take a fairly consistent amount of time, since we perform the following basic steps:
1) Populate the staging directory (in this case it should have already been populated)
2) Collect some system information (by running commands which in total take ~30s)
3) Collect some files e.g. the plan
4) Write those into a ZIP file
5) Optionally redact some information
Arun noted that this behavior worked for S3 a few moments later.
Issue Links
For Gerrit Dashboard: MB-50558

| # | Subject | Branch | Project | Status | CR | V |
|---|---------|--------|---------|--------|----|----|
| 169369,7 | MB-50558 Improve performance of GCP object iteration | master | tools-common | MERGED | +2 | +1 |
| 169448,3 | MB-50558 Add support for specifying a 'delimiter' to 'IterateObjects' | master | tools-common | MERGED | +2 | +1 |
| 169481,5 | MB-50558 Limit the scope when listing items in the remote archive | master | backup | MERGED | +2 | +1 |
| 169672,3 | MB-50558 Ignore empty repository names in 'listAllRepos' | master | backup | MERGED | +2 | +1 |
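The 'delimiter' support mentioned above follows the usual cloud object-listing convention: keys sharing a delimiter-separated prefix are rolled up into a single pseudo-directory instead of being enumerated individually, which lets a listing skip whole subtrees. A self-contained sketch of that roll-up, assuming a hypothetical `listWithDelimiter` helper rather than the actual `IterateObjects` API:

```go
package main

import (
	"fmt"
	"strings"
)

// listWithDelimiter emulates delimiter-aware object listing: keys under
// prefix that contain delimiter in their remainder are collapsed into a
// single "common prefix" (a pseudo-directory); the rest are returned as
// plain objects. Without a delimiter, every key in the subtree would be
// enumerated, which is what made iterating a large GCP archive so slow.
func listWithDelimiter(keys []string, prefix, delimiter string) (objects, commonPrefixes []string) {
	seen := map[string]bool{}
	for _, k := range keys {
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		rest := k[len(prefix):]
		if i := strings.Index(rest, delimiter); i >= 0 {
			p := prefix + rest[:i+len(delimiter)]
			if !seen[p] {
				seen[p] = true
				commonPrefixes = append(commonPrefixes, p)
			}
		} else {
			objects = append(objects, k)
		}
	}
	return objects, commonPrefixes
}

func main() {
	keys := []string{
		"archive/repo1/backup1/plan.json",
		"archive/repo1/backup2/plan.json",
		"archive/repo2/backup1/plan.json",
		"archive/logs.zip",
	}
	objs, prefixes := listWithDelimiter(keys, "archive/", "/")
	fmt.Println(objs)     // [archive/logs.zip]
	fmt.Println(prefixes) // [archive/repo1/ archive/repo2/]
}
```

Listing with `prefix="archive/"` and `delimiter="/"` touches two pseudo-directories rather than every backup object beneath them, which is the scope reduction the backup patches rely on.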