* fixed a number of cases where misaligned data was causing panics on armv7 (but not armv8)
* travis: enable arm64
* test: reduce compressed data sizes when running on arm
* arm: wait longer for snapshots
* restore: support for zip, tar and tar.gz restore outputs
Moved restore functionality to its own package.
* Fix enum values in the 'mode' flag
Co-authored-by: Julio López <julio+gh@kasten.io>
Add two tests:
- TestManySmallFiles: writes 100k files of size 4k to a directory, snapshots the data tree, restores it, and validates the restored data (a sketch of the setup follows below).
- TestModifyWorkload: Loops over a simple randomized workload. Performs a series of random file writes to some random sub-directories, then takes a snapshot of the data tree. All snapshots taken during this test are restore-verified at the end.
A global test engine is instantiated in main_test.go, to be used across tests in the robustness test suite (this saves time by loading/saving metadata once per run instead of once per test).
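A minimal sketch of the many-small-files setup, assuming hypothetical names (writeManySmallFiles, fileCount, fileSize) that are not part of the actual test code:
```
package robustness

import (
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"
	"testing"
)

// writeManySmallFiles fills dir with fileCount files of fileSize random
// bytes each, roughly matching the TestManySmallFiles workload (100k x 4k).
func writeManySmallFiles(t *testing.T, dir string, fileCount, fileSize int) {
	buf := make([]byte, fileSize)

	for i := 0; i < fileCount; i++ {
		if _, err := rand.Read(buf); err != nil {
			t.Fatal(err)
		}

		name := filepath.Join(dir, fmt.Sprintf("file-%06d", i))
		if err := os.WriteFile(name, buf, 0o644); err != nil {
			t.Fatal(err)
		}
	}
}
```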
* Add test engine to manage snapshot verification testing
The test engine manages the test and metadata repositories, the snapshot
checker, metadata storage persistence, and the file writer. It is
the high-level helper that will be invoked in the snapshot
verification testing suite. It can:
- modify data directory file structure
- issue snapshot/restore/delete to the data directory
- accumulate metadata over the course of the test suite
- flush accumulated metadata to the metadata repository
- load historical metadata from the repository on initialization
- perform automatic data integrity verification on snap restore
This change corresponds to the robustness execution engine component from the design documentation.
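A hypothetical distillation of that surface in Go (the names below are illustrative, not the engine's actual exported API):
```
package engine

import "context"

// Engine sketches the responsibilities listed above; the real robustness
// engine's API may differ.
type Engine interface {
	// ModifyData mutates the data directory's file structure.
	ModifyData(ctx context.Context) error

	// Snapshot, Restore and Delete are issued against the data directory.
	Snapshot(ctx context.Context, dir string) (snapID string, err error)
	Restore(ctx context.Context, snapID, target string) error
	Delete(ctx context.Context, snapID string) error

	// FlushMetadata persists accumulated metadata to the metadata repository.
	FlushMetadata(ctx context.Context) error

	// Init loads historical metadata from the repository.
	Init(ctx context.Context) error
}
```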
Added a test that verifies that when the client performs Flush (which happens
at the end of each snapshot and when the repository is closed), the
server writes new blobs to storage.
Fixes #464
Connect the snapshotter, the storer, and the comparer. Invoke
the snapshotter to take/restore/delete snapshots on the repo,
the comparer to gather metadata before the snapshot and
after the restore, and the storer to save metadata for later
lookup when verifying restores.
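A rough sketch of that flow; the Snapshotter/Storer/Comparer shapes below are hypothetical stand-ins for the real interfaces:
```
package robustness

import "context"

// Hypothetical stand-ins for the interfaces described in this change.
type (
	Snapshotter interface {
		CreateSnapshot(ctx context.Context, sourceDir string) (snapID string, err error)
		RestoreSnapshot(ctx context.Context, snapID, restoreDir string) error
	}

	Storer interface {
		Store(ctx context.Context, key string, val []byte) error
	}

	Comparer interface {
		Gather(ctx context.Context, path string) ([]byte, error)
		Compare(ctx context.Context, path string, data []byte) error
	}
)

// verifySnapshot wires the three together as described above: gather
// metadata, snapshot, store metadata, restore, and compare.
func verifySnapshot(ctx context.Context, snap Snapshotter, store Storer, cmp Comparer, srcDir, restoreDir string) error {
	before, err := cmp.Gather(ctx, srcDir)
	if err != nil {
		return err
	}

	snapID, err := snap.CreateSnapshot(ctx, srcDir)
	if err != nil {
		return err
	}

	// Save metadata for later lookup when verifying restores.
	if err := store.Store(ctx, snapID, before); err != nil {
		return err
	}

	if err := snap.RestoreSnapshot(ctx, snapID, restoreDir); err != nil {
		return err
	}

	// Fail if the restored tree differs from the pre-snapshot metadata.
	return cmp.Compare(ctx, restoreDir, before)
}
```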
* content: added support for cache of own writes
This keeps track of which blobs (`n` and `m`) have been written by the
local repository client, so that even if the storage listing
is eventually consistent (as in S3), we get reasonably sane behavior.
Note that this still assumes read-after-create semantics, which
S3 also guarantees; otherwise it's very hard to do anything useful.
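A minimal sketch of the idea, using a hypothetical ownWritesCache type: union blobs we know we wrote with the (possibly stale) storage listing:
```
package content

import "sync"

// ownWritesCache is an illustrative sketch: remember blob IDs written by
// this client and merge them into listings from eventually consistent
// storage.
type ownWritesCache struct {
	mu      sync.Mutex
	written map[string]bool
}

func (c *ownWritesCache) recordWrite(blobID string) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if c.written == nil {
		c.written = map[string]bool{}
	}

	c.written[blobID] = true
}

// mergeListing adds any locally written blobs that the (possibly stale)
// storage listing has not surfaced yet.
func (c *ownWritesCache) mergeListing(listed []string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()

	seen := map[string]bool{}
	for _, id := range listed {
		seen[id] = true
	}

	out := listed
	for id := range c.written {
		if !seen[id] {
			out = append(out, id)
		}
	}

	return out
}
```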
* compaction: support for compaction logs
Instead of compaction immediately deleting source index blobs, we now
write log entries (with `m` prefix) which are merged on reads
and applied only if the blob list includes all inputs and outputs, in
which case the inputs are discarded since they are known to have been
superseded by the outputs.
This addresses eventual consistency issues in stores such as S3,
which don't guarantee list-after-put or list-after-delete. With such
stores the repository is ultimately eventually consistent and there's
not much that can be done about it, short of using a second, strongly
consistent store (such as GCS) for the index alone.
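A sketch of the apply rule just described (types and names are illustrative, not the actual implementation):
```
package content

// compactionLogEntry is a stand-in for the `m`-prefixed log entries.
type compactionLogEntry struct {
	InputBlobs  []string // index blobs consumed by the compaction
	OutputBlobs []string // index blobs produced by the compaction
}

// applyCompactionLog discards input blobs only when the current blob
// listing contains every input and every output of an entry, i.e. when
// the inputs are provably superseded by the outputs.
func applyCompactionLog(listed map[string]bool, entries []compactionLogEntry) map[string]bool {
	result := make(map[string]bool, len(listed))
	for id := range listed {
		result[id] = true
	}

	for _, e := range entries {
		allPresent := true

		for _, id := range append(append([]string{}, e.InputBlobs...), e.OutputBlobs...) {
			if !listed[id] {
				allPresent = false
				break
			}
		}

		if !allPresent {
			continue // can't prove the compaction fully landed; keep inputs
		}

		for _, id := range e.InputBlobs {
			delete(result, id)
		}
	}

	return result
}
```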
* content: updated list cache to cache both `n` and `m`
* repo: fixed cache clear on windows
Clearing the cache requires closing the repository first, as Windows keeps
the files locked.
This requires the ability to close the repository twice.
* content: refactored index blob management into indexBlobManager
* testing: fixed blobtesting.Map storage to allow overwrites
* blob: added debug output String() to blob.Metadata
* testing: added indexBlobManager stress test
This works by using N parallel "actors", each repeatedly performing
operations via indexBlobManagers that all share a single eventually
consistent storage.
Each actor runs in a loop and randomly selects between:
- *reading* all contents in the indexes, verifying that they include
all contents written by the actor so far and that contents are
correctly marked as deleted
- *creating* new contents
- *deleting* one of the contents previously created by the same actor
- *compacting* all index files into one
The test runs on accelerated time (every read of time moves it by 0.1
seconds) and simulates several hours of running.
In case of a failure, the log should provide enough debugging
information to trace the exact sequence of events leading up to the
failure - each log line is prefixed with actorID and all storage
access is logged.
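The actor loop might look roughly like this (the indexManager interface here is a hypothetical stand-in for indexBlobManager):
```
package content

import (
	"fmt"
	"math/rand"
)

// indexManager is an illustrative stand-in for the operations the stress
// test exercises on indexBlobManager.
type indexManager interface {
	VerifyAllContents(actorID int) error
	CreateContent(actorID int)
	DeleteRandomOwnContent(actorID int, rnd *rand.Rand)
	CompactAllIndexes()
}

// actorLoop sketches one actor: loop and randomly select among reading/
// verifying, creating, deleting, and compacting.
func actorLoop(actorID int, rnd *rand.Rand, mgr indexManager, iterations int) error {
	for i := 0; i < iterations; i++ {
		switch rnd.Intn(4) {
		case 0:
			if err := mgr.VerifyAllContents(actorID); err != nil {
				return fmt.Errorf("actor %v: %w", actorID, err)
			}
		case 1:
			mgr.CreateContent(actorID)
		case 2:
			mgr.DeleteRandomOwnContent(actorID, rnd)
		case 3:
			mgr.CompactAllIndexes()
		}
	}

	return nil
}
```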
* makefile: increase test timeout
* content: fixed index blob manager race
The race: if we delete a compaction log too early, previously deleted
contents may temporarily become live again to an outside observer.
Added a test case that reproduces the issue; verified that it fails
without the fix and passes with it.
* testing: improvements to TestIndexBlobManagerStress test
- better logging to be able to trace the root cause in case of a failure
- prevented concurrent compaction which is unsafe:
The sequence:
1. A creates contentA1 in INDEX-1
2. B creates contentB1 in INDEX-2
3. A deletes contentA1 in INDEX-3
4. B runs compaction but does not see INDEX-3 (due to EC or simply
because B started its read before #3 completed), so it writes
INDEX-4==merge(INDEX-1,INDEX-2)
* INDEX-4 has contentA1 as active
5. A runs compaction but does not see INDEX-4 yet (due to EC
or because its read started before #4), so it drops contentA1 and writes
INDEX-5==merge(INDEX-1,INDEX-2,INDEX-3)
* INDEX-5 does not have contentA1
6. C sees INDEX-4 and INDEX-5, and merge(INDEX-4,INDEX-5)
contains contentA1, which is wrong, because contentA1 has been deleted
(and there's no record of the deletion anywhere in the system)
* content: when building pack index ensure index bytes are different each time by adding 32 random bytes
New usage:
```
kopia snapshot delete manifestID... [--delete]
kopia snapshot delete rootObjectID... [--delete]
```
Fixes #435
cli: added --unsafe-ignore-source as alias for `--delete`
This is a hidden flag for backwards compatibility. It will be removed.
This is done by introducing N unsynchronized caches, which simulate
what the frontend of a cloud storage system might do and thereby produce
eventually consistent behavior.
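A sketch of that simulation (everything here is illustrative): a wrapper routes each operation through one of N per-frontend caches, so a write may remain invisible to readers hitting other frontends:
```
package blobtesting

import (
	"math/rand"
	"sync"
)

// ecStorage sketches eventually consistent storage: reads are answered by
// a randomly chosen frontend cache that may not have seen recent writes.
type ecStorage struct {
	mu      sync.Mutex
	backend map[string][]byte   // authoritative data
	caches  []map[string][]byte // N stale frontend views
}

func newECStorage(n int) *ecStorage {
	s := &ecStorage{backend: map[string][]byte{}}
	for i := 0; i < n; i++ {
		s.caches = append(s.caches, map[string][]byte{})
	}

	return s
}

func (s *ecStorage) Put(id string, data []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s.backend[id] = data

	// Only one randomly chosen frontend sees the write immediately.
	s.caches[rand.Intn(len(s.caches))][id] = data
}

func (s *ecStorage) Get(id string) ([]byte, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()

	// A random frontend answers; it may not have seen the write yet.
	v, ok := s.caches[rand.Intn(len(s.caches))][id]
	return v, ok
}
```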
Support for a remote content repository where all contents and
manifests are fetched over HTTP(S) instead of by locally
manipulating blob storage (a sketch of the client side follows the list below).
* server: implement content and manifest access APIs
* apiclient: moved Kopia API client to separate package
* content: exposed content.ValidatePrefix()
* manifest: added JSON serialization attributes to EntryMetadata
* repo: changed repo.Open() to return Repository instead of *DirectRepository
* repo: added apiServerRepository
* cli: added 'kopia repository connect server'
This sets up repository connection via the API server instead of
directly-manipulated storage.
* server: add support for specifying a list of usernames/passwords via --htpasswd-file
* tests: added API server repository E2E test
* server: only return manifests (policies and snapshots) belonging to authenticated user
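On the client side, fetching a content over HTTP(S) might look roughly like this; the URL layout shown is hypothetical, not Kopia's actual API:
```
package apiclient

import (
	"fmt"
	"io"
	"net/http"
)

// getContent sketches how a remote-repository client could fetch a single
// content from the API server. Endpoint path and error handling are
// illustrative only.
func getContent(baseURL, contentID string, client *http.Client) ([]byte, error) {
	resp, err := client.Get(fmt.Sprintf("%v/api/v1/contents/%v", baseURL, contentID))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %v", resp.Status)
	}

	return io.ReadAll(resp.Body)
}
```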
Maintenance: support for automatic GC
Moved maintenance algorithms from 'cli' to 'repo/maintenance' package
Added support for CLI commands:
kopia gc        - performs quick maintenance
kopia gc --full - performs full maintenance
Full maintenance performs snapshot GC, but it's not safe to do this automatically, possibly in parallel with snapshots being taken. This will be addressed in the ~0.7 timeframe.
* server: when serving the HTML UI, prefix the title with the string from the KOPIA_UI_TITLE_PREFIX environment variable
* kopia-ui: support for multiple repositories + portability
This is a major rewrite of the app/ codebase which changes
how configuration for repositories is maintained and how it flows
through the component hierarchy.
Portable mode is enabled by creating 'repositories' subdirectory before
launching the app.
On macOS:
<parent>/KopiaUI.app
<parent>/repositories/
On Windows, option #1 - nested directory
<parent>\KopiaUI.exe
<parent>\repositories\
On Windows, option #2 - parallel directory
<parent>\some-dir\KopiaUI.exe
<parent>\repositories\
In portable mode, the 'cache' and 'logs' directories are nested
inside 'repositories'.
* This is 99% mechanical:
Extracted a repo.Repository interface that only exposes high-level object and manifest management methods, but not blob or content management.
Renamed the old *repo.Repository to *repo.DirectRepository.
Reviewed the codebase to depend on repo.Repository as much as possible, but added a way for low-level CLI commands to use DirectRepository.
* PR fixes
* performance: added buf.Pool, which can be used to manage ephemeral buffers for encryption and compression (see the sketch after this list)
* repo: switched object writer to buf.Pool
* content: switched encryption to use buf.Pool
* object: switched compression to use buf.Pool
* testing: added missing content manager Close()
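A minimal sketch of an ephemeral buffer pool in the spirit of buf.Pool, built here on sync.Pool (the real buf.Pool API may differ):
```
package buf

import "sync"

// Pool sketches a reusable-buffer pool for ephemeral encryption and
// compression buffers; an illustration, not the actual buf.Pool.
type Pool struct {
	p sync.Pool
}

func NewPool(bufSize int) *Pool {
	return &Pool{
		p: sync.Pool{
			New: func() interface{} { return make([]byte, 0, bufSize) },
		},
	}
}

// Allocate returns a zero-length buffer with preallocated capacity.
func (p *Pool) Allocate() []byte {
	return p.p.Get().([]byte)[:0]
}

// Release returns the buffer to the pool for reuse. (Putting a slice
// rather than a pointer into sync.Pool allocates on Put; fine for a
// sketch.)
func (p *Pool) Release(b []byte) {
	p.p.Put(b)
}
```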
Add an implementation of the metadata store that uses kopia snapshots and restores to manage persistence of walk metadata. Metadata are stored to and retrieved from the store, and a mechanism for persisting the store itself is implemented using kopia snapshots.
* performance: plumbed through output buffer to encryption and hashing, so that the caller can pre-allocate/reuse it
* testing: fixed how we do comparison of byte slices to account for possible nils, which can be returned from encryption
Snapshotter interface describes an entity that can create,
restore, and delete snapshots, as well as manage a repository.
Add a kopia implementation of the snapshotter interface.
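A sketch of a kopia-CLI-backed implementation (flags and output parsing are simplified; the real implementation may differ):
```
package snapmeta

import (
	"os/exec"
	"strings"
)

// kopiaSnapshotter sketches an implementation that shells out to the
// kopia CLI.
type kopiaSnapshotter struct {
	configFile string
}

func (s *kopiaSnapshotter) run(args ...string) (string, error) {
	args = append([]string{"--config-file", s.configFile}, args...)
	out, err := exec.Command("kopia", args...).CombinedOutput()

	return strings.TrimSpace(string(out)), err
}

func (s *kopiaSnapshotter) CreateSnapshot(sourceDir string) (string, error) {
	// Extracting the snapshot ID from the command output is elided here.
	return s.run("snapshot", "create", sourceDir)
}

func (s *kopiaSnapshotter) RestoreSnapshot(snapID, restoreDir string) (string, error) {
	return s.run("snapshot", "restore", snapID, restoreDir)
}

func (s *kopiaSnapshotter) DeleteSnapshot(snapID string) (string, error) {
	return s.run("snapshot", "delete", snapID, "--delete")
}
```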
Motivation: Allow time injection for (unit) tests, to more easily test and
verify time-dependent invariants.
Add time injection support for:
* repo.Manager
* manifest.Manager
* snapshot.Uploader
Then, wire these components up. The content.Manager already had support for
time injection, but it was not wired up to the time function passed at repository creation.
Add an internal/faketime package for testing. This is mainly code movement
from testing code in the repo/content package. Motivation: make it available
to other packages outside content. Also, add simple tests for the faketime functions.
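A minimal sketch of the kind of helper such a package provides (compare the accelerated-time behavior in the stress test above; the actual function names may differ):
```
package faketime

import (
	"sync"
	"time"
)

// AutoAdvance returns a time source that starts at t and moves forward by
// dt on every call, making time-dependent behavior deterministic in tests.
func AutoAdvance(t time.Time, dt time.Duration) func() time.Time {
	var mu sync.Mutex

	return func() time.Time {
		mu.Lock()
		defer mu.Unlock()

		ret := t
		t = t.Add(dt)

		return ret
	}
}
```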
- cleaned up migration progress output
- fixed migration idempotency
- added migration of policies
- renamed --parallelism to --parallel
- improved e2e test
- do not prompt for password to source repository if persisted
Update the fio runner to use a Docker image if the appropriate environment variable is set. The Docker image is built via a makefile target and used in the robustness tool tests.
Give a `*bytes.Buffer` to the command and let `package exec` read from the pipe into the buffer for us.
The current use of the `StderrPipe()` method was problematic; the documentation for os/exec states:
> Wait will close the pipe after seeing the command exit, so most callers need not close the pipe themselves. It is thus incorrect to call Wait before all reads from the pipe have completed. For the same reason, it is incorrect to use Run when using StderrPipe.
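A minimal sketch of the buffer-based approach (the command shown is a placeholder, not the actual invocation):
```
package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	// Let package exec copy stderr into the buffer for us; with no
	// StderrPipe in play, calling Run (which calls Wait) is safe.
	var errOut bytes.Buffer

	cmd := exec.Command("fio", "--version") // placeholder command
	cmd.Stderr = &errOut

	if err := cmd.Run(); err != nil {
		log.Fatalf("command failed: %v; stderr: %s", err, errOut.String())
	}
}
```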
The error variable was being shared between c.Output and
ioutil.ReadAll. The two could race to set err, and if
Output() finished first with an error, ReadAll could overwrite
it with nil. The fix is to use a dedicated error variable for
the pipe read, and fail the test if it is non-nil.
The hostname and username are now persisted in a local config file
when connecting to a repository.
This prevents weird behavior changes when hostname is suddenly changed,
such as when moving between networks.
repo.Repository now exposes Hostname/Username properties which
are always guaranteed to be set, and these are used throughout.
Removed the --hostname/--username overrides when taking snapshots et al.
This is mostly mechanical and changes how loggers are instantiated.
A logger is now associated with a context and passed around to all methods
(most methods already had ctx, but it had to be added in a few missing places).
By default Kopia does not produce any logs, but this can be overridden:
locally, for a nested context, by calling
ctx = logging.WithLogger(ctx, newLoggerFunc)
or globally, by calling logging.SetDefaultLogger(newLoggerFunc).
A generic sketch of this context-logger pattern follows below.
This refactoring allowed removing the Kopia repo's dependency on the
go-logging library (the CLI still uses it, though).
It is now also possible to have all test methods emit logs using
t.Logf() so that they show up in failure reports, which should make
debugging of test failures suck less.
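A generic, self-contained sketch of the context-scoped logger pattern (it mirrors the shape of logging.WithLogger described above, but is not Kopia's actual logging package):
```
package main

import (
	"context"
	"fmt"
)

// LoggerFunc returns a log function for a given module; a stand-in for
// the newLoggerFunc mentioned above.
type LoggerFunc func(module string) func(msg string, args ...interface{})

type loggerKey struct{}

// WithLogger associates a logger factory with the context.
func WithLogger(ctx context.Context, f LoggerFunc) context.Context {
	return context.WithValue(ctx, loggerKey{}, f)
}

// loggerFrom retrieves the factory; by default no logs are produced.
func loggerFrom(ctx context.Context, module string) func(string, ...interface{}) {
	if f, ok := ctx.Value(loggerKey{}).(LoggerFunc); ok {
		return f(module)
	}

	return func(string, ...interface{}) {} // no-op default
}

func main() {
	ctx := WithLogger(context.Background(), func(module string) func(string, ...interface{}) {
		return func(msg string, args ...interface{}) {
			fmt.Printf("[%s] "+msg+"\n", append([]interface{}{module}, args...)...)
		}
	})

	loggerFrom(ctx, "content")("opened repository %v", "example")
}
```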
CreateSnapshotSource API for ensuring source exists
Upload - starts upload on a given source or matching sources
Cancel - cancels upload on a given source or matching sources
Add the WalkCrossDevice policy flag to the fswalker Walk calls. This avoids potential issues when running in a Docker container where the FS tree is made of many overlays. Without the flag set, the Walk operation skips files on devices that do not match the device at the base path.
/api/v1/repo/create
/api/v1/repo/connect
/api/v1/repo/disconnect
Refactored server code and fixed a number of outstanding robustness
issues. Tweaked the API responses a bit to make more sense when consumed
by the UI.
Add a comparer interface which gathers data on a path and
compares that data against a new path, returning an error if the path
differs in any way from the input data. The details of what
constitutes a difference are left to the implementation.
The FSWalker implementation uses Walk and Report to do the data
gathering and comparison. Filters are applied to sort out any
differences that might be expected (e.g. ctime, atime, mtime,
rename of the root directory after restore).
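A sketch of that filtering idea (field matching here is illustrative; the real filters operate on structured fswalker reports):
```
package fswalker

import "strings"

// filterExpectedDiffs drops diff lines for fields expected to change
// across a snapshot/restore cycle.
func filterExpectedDiffs(diffs []string) []string {
	var out []string

	for _, d := range diffs {
		if strings.Contains(d, "atime") || strings.Contains(d, "mtime") || strings.Contains(d, "ctime") {
			continue // timestamps legitimately change during restore
		}

		out = append(out, d)
	}

	return out
}
```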