* fixed a number of cases where misaligned data was causing panics on armv7 (but not armv8); see the alignment sketch after this list
* travis: enable arm64
* test: reduce compressed data sizes when running on arm
* arm: wait longer for snapshots
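The changelog does not spell out the root cause of the armv7 panics, but a common source of this class of panic in Go is a 64-bit sync/atomic operation on a field that is not 64-bit aligned, which panics on 32-bit platforms such as armv7 while working fine on armv8. A minimal illustration of that pitfall (not the actual kopia fix):
```go
// Illustrative only: per the sync/atomic documentation, on 32-bit platforms
// (including armv7) 64-bit atomic operations require 64-bit alignment, and
// the first word of an allocated struct is the only placement guaranteed to
// have it. 64-bit platforms (armv8) are unaffected.
package main

import "sync/atomic"

type counters struct {
	flag  int32
	total int64 // may land at a 4-byte-aligned offset on 32-bit platforms
}

type countersFixed struct {
	total int64 // placed first so it is 64-bit aligned on 32-bit platforms
	flag  int32
}

func main() {
	c := counters{}
	_ = c // atomic.AddInt64(&c.total, 1) may panic on armv7

	f := countersFixed{}
	atomic.AddInt64(&f.total, 1) // safe on both armv7 and armv8
}
```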
KOPIA_VERSION will now always be v-prefixed and we will strip the
prefix before embedding it in the KopiaUI manifest.
Also upgraded Node and app NPM dependencies to latest versions.
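A minimal sketch of the prefix handling described above (the version string is an example value; the actual build scripts are not shown here):
```go
// Illustrative sketch: strip the leading "v" from a v-prefixed version
// string (e.g. "v0.6.0" -> "0.6.0") before embedding it in a manifest.
package main

import (
	"fmt"
	"strings"
)

func main() {
	kopiaVersion := "v0.6.0" // example value; KOPIA_VERSION is always v-prefixed
	manifestVersion := strings.TrimPrefix(kopiaVersion, "v")
	fmt.Println(manifestVersion) // "0.6.0"
}
```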
* restore: support for zip, tar and tar.gz restore outputs
Moved restore functionality to its own package.
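A hedged usage sketch, assuming the output format is selected from the target file extension (the object ID and paths are placeholders; check `kopia restore --help` for the exact form):
```
kopia restore <object-id> /tmp/restored.zip
kopia restore <object-id> /tmp/restored.tar.gz
```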
* Fix enum values in the 'mode' flag
Co-authored-by: Julio López <julio+gh@kasten.io>
Add two tests:
- TestManySmallFiles: writes 100k files of size 4k each to a directory, snapshots the data tree, then restores and validates the data.
- TestModifyWorkload: Loops over a simple randomized workload. Performs a series of random file writes to some random sub-directories, then takes a snapshot of the data tree. All snapshots taken during this test are restore-verified at the end.
A global test engine is instantiated in main_test.go, to be used in the robustness test suite across tests (this saves time by loading and saving metadata once per run instead of once per test).
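A rough sketch of that shared-engine pattern, with hypothetical package, type, and function names standing in for the real test code:
```go
// Illustrative sketch of sharing one engine across a test package via TestMain.
package robustness_test

import (
	"log"
	"os"
	"testing"
)

// engine stands in for the robustness test engine; the real type lives elsewhere.
type engine struct{}

func newEngine() (*engine, error) { return &engine{}, nil }
func (e *engine) Cleanup() error  { return nil }

var eng *engine // shared by all tests in the package

func TestMain(m *testing.M) {
	var err error
	eng, err = newEngine() // load metadata once per run, not once per test
	if err != nil {
		log.Fatalf("engine init: %v", err)
	}
	code := m.Run()
	if err := eng.Cleanup(); err != nil { // flush/save metadata once at the end
		log.Printf("cleanup: %v", err)
	}
	os.Exit(code)
}
```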
* Add test engine to manage snapshot verification testing
The test engine manages the test and metadata repositories, the snapshot
checker, metadata storage persistence, and the file writer. It is
the high-level helper that will be invoked by the snapshot
verification testing suite. It can:
- modify data directory file structure
- issue snapshot/restore/delete to the data directory
- accumulate metadata over the course of the test suite
- flush accumulated metadata to the metadata repository
- load historical metadata from the repository on initialization
- perform automatic data integrity verification on snap restore
This change corresponds to the robustness execution engine component from the design documentation.
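Sketched as a hypothetical interface whose methods mirror the responsibilities listed above (illustrative only, not the engine's actual API):
```go
// Hypothetical shape of the engine's responsibilities; not the real API.
package robustness

import "context"

// Engine drives randomized workloads against a data directory and
// verifies snapshots of it against stored metadata.
type Engine interface {
	// ModifyData applies random file writes to the data directory.
	ModifyData(ctx context.Context) error
	// TakeSnapshot snapshots the data directory and records its metadata.
	TakeSnapshot(ctx context.Context) (snapID string, err error)
	// RestoreAndVerify restores a snapshot and checks it against the
	// metadata captured when the snapshot was taken.
	RestoreAndVerify(ctx context.Context, snapID string) error
	// FlushMetadata persists accumulated metadata to the metadata repository.
	FlushMetadata(ctx context.Context) error
}
```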
* content: ensure that cleanup blobs have unique contents to prevent a situation where they keep getting rewritten and thus never deleted
* cli: added '--decrypt' option to 'kopia blob show'
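Example (the blob ID is a placeholder):
```
kopia blob show --decrypt <blob-id>
```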
Added a test that verifies that when the client performs Flush (which happens
at the end of each snapshot and when the repository is closed), the
server writes new blobs to storage.
Fixes #464
- run maintenance even if the command is about to return an error
(otherwise, users with a persistent error causing snapshots to fail
would never run maintenance)
- disable progress output after snapshotting so that
'kopia snapshot --all' output is clean
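A minimal sketch of the first point above, i.e. running maintenance regardless of whether the snapshot command is about to fail (function names are illustrative, not the actual CLI code):
```go
// Sketch of the "run maintenance even when the snapshot command fails" pattern.
package main

import (
	"errors"
	"fmt"
)

func snapshotAll() error { return errors.New("some source failed") } // simulated failure

func runMaintenance() error {
	fmt.Println("running maintenance")
	return nil
}

func snapshotCommand() error {
	snapErr := snapshotAll()

	// Run maintenance regardless of whether snapshotting succeeded, so a
	// persistent snapshot error does not starve maintenance forever.
	if mErr := runMaintenance(); mErr != nil && snapErr == nil {
		snapErr = mErr
	}
	return snapErr
}

func main() {
	if err := snapshotCommand(); err != nil {
		fmt.Println("error:", err)
	}
}
```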
Add sftp and webdav as repositories to the "Getting Started" documentation page, "Setting Up Repository" chapter.
Add a list of supported repositories and usage examples to the doc.
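As a rough illustration of the kind of commands documented there (hostnames and paths are placeholders, and the exact flags should be verified against `kopia repository create sftp --help` and `kopia repository create webdav --help`):
```
kopia repository create sftp --path /remote/backups --host sftp.example.com \
    --username user --keyfile ~/.ssh/id_rsa --known-hosts ~/.ssh/known_hosts
kopia repository create webdav --url https://webdav.example.com/backups
```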
Instead, this now runs as part of maintenance ('kopia maintenance run').
Added a 'kopia maintenance run --force' flag, which runs maintenance even
if not owned.
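For example:
```
kopia maintenance run --force
```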
Connect the snapshotter, the storer, and the comparer. Invoke
the snapshotter to take/restore/delete snapshots on the repo,
the comparer to gather metadata before the snapshot and
after the restore, and the storer to save metadata for later
lookup when verifying restores.
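A hypothetical sketch of this wiring (interface and method names are illustrative, not the actual robustness-test API):
```go
// Hypothetical sketch of the checker wiring described above.
package checker

import "context"

type Snapshotter interface {
	TakeSnapshot(ctx context.Context, dir string) (snapID string, err error)
	RestoreSnapshot(ctx context.Context, snapID, restoreDir string) error
	DeleteSnapshot(ctx context.Context, snapID string) error
}

type Comparer interface {
	// Gather captures metadata (file tree, sizes, hashes) for a directory.
	Gather(ctx context.Context, dir string) ([]byte, error)
	// Compare checks a restored directory against previously gathered metadata.
	Compare(ctx context.Context, dir string, metadata []byte) error
}

type Storer interface {
	Store(ctx context.Context, key string, metadata []byte) error
	Load(ctx context.Context, key string) ([]byte, error)
}

type Checker struct {
	Snap  Snapshotter
	Cmp   Comparer
	Store Storer
}

// TakeSnapshot gathers metadata before snapshotting and saves it for later verification.
func (c *Checker) TakeSnapshot(ctx context.Context, dir string) (string, error) {
	meta, err := c.Cmp.Gather(ctx, dir)
	if err != nil {
		return "", err
	}
	snapID, err := c.Snap.TakeSnapshot(ctx, dir)
	if err != nil {
		return "", err
	}
	return snapID, c.Store.Store(ctx, snapID, meta)
}

// VerifyRestore restores the snapshot and compares it with the stored metadata.
func (c *Checker) VerifyRestore(ctx context.Context, snapID, restoreDir string) error {
	if err := c.Snap.RestoreSnapshot(ctx, snapID, restoreDir); err != nil {
		return err
	}
	meta, err := c.Store.Load(ctx, snapID)
	if err != nil {
		return err
	}
	return c.Cmp.Compare(ctx, restoreDir, meta)
}
```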
* content: added support for cache of own writes
This keeps track of which blobs (`n` and `m`) have been written by the
local repository client, so that even if the storage listing
is eventually consistent (as in S3), we get somewhat sane behavior.
Note that this still assumes read-after-create semantics, which
S3 also guarantees; otherwise it's very hard to do anything useful.
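An illustrative sketch of such an own-writes cache, with hypothetical names (kopia's actual implementation may differ):
```go
// Illustrative: remember blobs written by this client and merge them into
// (possibly stale) storage listings.
package ownwrites

import "sync"

type blobMetadata struct {
	BlobID string
	Length int64
}

type cache struct {
	mu     sync.Mutex
	recent map[string]blobMetadata // blobs this client wrote recently
}

func newCache() *cache {
	return &cache{recent: map[string]blobMetadata{}}
}

// recordWrite is called after every successful blob write by this client.
func (c *cache) recordWrite(bm blobMetadata) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.recent[bm.BlobID] = bm
}

// mergeListing adds own writes that an eventually consistent listing may
// have missed, so the client always sees its own `n`/`m` blobs.
func (c *cache) mergeListing(listed []blobMetadata) []blobMetadata {
	seen := map[string]bool{}
	for _, bm := range listed {
		seen[bm.BlobID] = true
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	for id, bm := range c.recent {
		if !seen[id] {
			listed = append(listed, bm)
		}
	}
	return listed
}
```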
* compaction: support for compaction logs
Instead of compaction immediately deleting source index blobs, we now
write log entries (with `m` prefix) which are merged on reads
and applied only if the blob list includes all inputs and outputs, in
which case the inputs are discarded since they are known to have been
superseded by the outputs.
This addresses eventual consistency issues in stores such as S3,
which don't guarantee list-after-put or list-after-delete. With such
stores the repository itself remains only eventually consistent and there's
not much that can be done about it, unless we use a second, strongly
consistent store (such as GCS) for the index only.
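An illustrative sketch of the read-side rule, with hypothetical types: a compaction log entry is applied only when all of its inputs and outputs are visible in the listing, and only then are the inputs dropped as superseded.
```go
// Illustrative sketch of applying compaction log entries; not kopia's actual code.
package compaction

type logEntry struct {
	Inputs  []string // index blobs that were merged
	Outputs []string // index blobs produced by the compaction
}

// activeIndexes returns the set of index blobs to read, given the blobs
// currently visible in an (eventually consistent) listing and the known
// compaction log entries.
func activeIndexes(listed []string, entries []logEntry) []string {
	present := map[string]bool{}
	for _, b := range listed {
		present[b] = true
	}

	superseded := map[string]bool{}
	for _, e := range entries {
		if allPresent(present, e.Inputs) && allPresent(present, e.Outputs) {
			// Safe to drop the inputs: their contents are covered by the outputs.
			for _, in := range e.Inputs {
				superseded[in] = true
			}
		}
	}

	var result []string
	for _, b := range listed {
		if !superseded[b] {
			result = append(result, b)
		}
	}
	return result
}

func allPresent(present map[string]bool, blobs []string) bool {
	for _, b := range blobs {
		if !present[b] {
			return false
		}
	}
	return true
}
```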
* content: updated list cache to cache both `n` and `m`
* repo: fixed cache clear on windows
Clearing the cache requires closing the repository first, as Windows holds
the files locked.
This requires the ability to close the repository twice.
* content: refactored index blob management into indexBlobManager
* testing: fixed blobtesting.Map storage to allow overwrites
* blob: added debug output String() to blob.Metadata
* testing: added indexBlobManager stress test
This works by using N parallel "actors", each repeatedly performing
operations on indexBlobManagers that all share a single eventually
consistent storage.
Each actor runs in a loop and randomly selects between:
- *reading* all contents in indexes and verifying that it includes
all contents written by the actor so far and that contents are
correctly marked as deleted
- *creating* new contents
- *deleting* one of previously-created contents (by the same actor)
- *compacting* all index files into one
The test runs on accelerated time (each read of the clock advances it by 0.1
seconds) and simulates several hours of running.
In case of a failure, the log should provide enough debugging
information to trace the exact sequence of events leading up to the
failure - each log line is prefixed with actorID and all storage
access is logged.
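The actor loop has roughly this shape (helper names are hypothetical and the operation bodies are elided):
```go
// Rough shape of the stress test's actor loop; not the actual test code.
package stress

import "math/rand"

type actor struct {
	id  int
	rnd *rand.Rand
}

// step performs one randomly chosen operation, mirroring the list above.
// The surrounding test runs many actors concurrently on accelerated time.
func (a *actor) step() {
	switch a.rnd.Intn(4) {
	case 0:
		a.readAllAndVerify() // own writes visible, deletions correctly marked
	case 1:
		a.createContent()
	case 2:
		a.deleteOwnContent()
	case 3:
		a.compactIndexes()
	}
}

func (a *actor) readAllAndVerify() {}
func (a *actor) createContent()    {}
func (a *actor) deleteOwnContent() {}
func (a *actor) compactIndexes()   {}
```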
* makefile: increase test timeout
* content: fixed index blob manager race
The race occurs when the compaction log is deleted too early, which may lead to
previously deleted contents becoming temporarily live again to an
outside observer.
Added a test case that reproduces the issue; verified that it fails
without the fix and passes with it.
* testing: improvements to TestIndexBlobManagerStress test
- better logging to be able to trace the root cause in case of a failure
- prevented concurrent compaction which is unsafe:
The sequence:
1. A creates contentA1 in INDEX-1
2. B creates contentB1 in INDEX-2
3. A deletes contentA1 in INDEX-3
4. B does compaction, but does not see INDEX-3 (due to eventual consistency (EC) or simply
because B started its read before #3 completed), so it writes
INDEX-4==merge(INDEX-1,INDEX-2)
* INDEX-4 has contentA1 as active
5. A does compaction but it's not seeing INDEX-4 yet (due to EC
or because read started before #4), so it drops contentA1, writes
INDEX-5=merge(INDEX-1,INDEX-2,INDEX-3)
* INDEX-5 does not have contentA1
6. C sees INDEX-4 and INDEX-5, and merge(INDEX-4,INDEX-5)
contains contentA1, which is wrong, because contentA1 has been deleted
(and there is no record of the deletion anywhere in the system)
* content: when building pack index ensure index bytes are different each time by adding 32 random bytes
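A small sketch of the idea, assuming the random bytes are simply appended to the index payload (where exactly kopia places them is not specified here):
```go
// Sketch: add 32 random bytes so that two index blobs built from identical
// content still have different bytes (and thus different blob hashes).
package main

import (
	"crypto/rand"
	"fmt"
)

func appendRandomSuffix(indexBytes []byte) ([]byte, error) {
	var suffix [32]byte
	if _, err := rand.Read(suffix[:]); err != nil {
		return nil, err
	}
	return append(indexBytes, suffix[:]...), nil
}

func main() {
	b, err := appendRandomSuffix([]byte("example index payload"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(b)) // original length + 32
}
```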
Change log:
https://github.com/aws/aws-sdk-go/releases
Notably:
SDK Bugs
service/s3/s3crypto: Add missing return in encryption client (#3258)
Fixes a missing return in the encryption client that was causing a nil
dereference panic.
New usage:
```
kopia snapshot delete manifestID... [--delete]
kopia snapshot delete rootObjectID... [--delete]
```
Fixes #435
cli: added --unsafe-ignore-source as alias for `--delete`
This is a hidden flag for backwards compatibility. It will be removed.