Almost all were easy to replace, except for the ones exposed via JSON,
which have been left as-is.
The linter has a nice behavior where it flags attempts to pass
`atomic.Int32` by value (for example as an argument to `fmt.Sprintf()`),
which is always a mistake.
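For illustration, a minimal sketch of the kind of mistake this catches,
assuming the standard library's `sync/atomic.Int32`; the counter name is
made up:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	// Hypothetical counter, using the standard library's atomic.Int32.
	var uploadedFiles atomic.Int32
	uploadedFiles.Add(3)

	// Flagged: passing the counter by value copies its internal state,
	// so this would format a detached copy rather than the live counter.
	//   fmt.Sprintf("uploaded %v files", uploadedFiles)

	// Correct: read the current value explicitly.
	msg := fmt.Sprintf("uploaded %d files", uploadedFiles.Load())
	fmt.Println(msg)
}
```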
This removes tons of boilerplate code around:
- retry loops
- connection management
- storage registration
* used generics in runInParallel
* introduced generics in freepool
* introduced strong typing for workshare.Pool and workshare.AsyncGroup
* fixed linter error on openbsd
* fix(snapshots): fixed random deadlock when Uploader results in a failure
The deadlock was caused by not properly waiting for all asynchronous
work to complete before closing the worker pool.
Introduced `workshare.AsyncGroup.Close()` and some assertions.
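A minimal, self-contained sketch of the wait-before-close pattern; the
type and method names here are illustrative, not the actual `workshare`
API:

```go
package main

import "sync"

// asyncGroup is an illustrative stand-in for an async work group backed
// by a fixed pool of workers.
type asyncGroup struct {
	wg   sync.WaitGroup
	work chan func()
}

func newAsyncGroup(workers int) *asyncGroup {
	ag := &asyncGroup{work: make(chan func(), workers)}
	for i := 0; i < workers; i++ {
		go func() {
			for f := range ag.work {
				f()
				ag.wg.Done()
			}
		}()
	}
	return ag
}

// runAsync schedules f on the pool.
func (ag *asyncGroup) runAsync(f func()) {
	ag.wg.Add(1)
	ag.work <- f
}

// Close waits for all outstanding asynchronous work to complete before
// shutting down the pool (the fix described above: wait, then close).
func (ag *asyncGroup) Close() {
	ag.wg.Wait()
	close(ag.work)
}

func main() {
	ag := newAsyncGroup(2)
	for i := 0; i < 10; i++ {
		ag.runAsync(func() {})
	}
	ag.Close()
}
```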
* fixed select race
* linter fix
* pr feedback
From https://github.com/google/gvisor/tree/master/tools/checklocks
This will perform static verification that we're using
`sync.Mutex`, `sync.RWMutex` and `atomic` correctly to guard access
to certain fields.
This was mostly just a matter of adding annotations to indicate which
fields are guarded by which mutex.
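For illustration, the annotations are roughly of this form (the package,
type, and field names below are made up; see the checklocks README for
the full annotation set):

```go
package example // illustrative, not actual repository code

import "sync"

type uploadState struct {
	mu sync.Mutex

	// filesProcessed may only be accessed while mu is held;
	// checklocks verifies this statically.
	// +checklocks:mu
	filesProcessed int64
}

// addProcessed must be called with s.mu already held.
// +checklocks:s.mu
func (s *uploadState) addProcessed(n int64) {
	s.filesProcessed += n
}
```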
In a handful of places the code had to be refactored to let the static
analyzer do its job better or to avoid confusing it with certain
constructs.
In one place this actually uncovered a bug where a function was not
releasing a lock properly in an error case.
The check is part of `make lint` but can also be invoked by
`make check-locks`.
* feat(general): added internal/workshare package
This introduces a work-sharing utility useful when walking trees of
things (such as filesystems), which allows N threads/goroutines to be
used.
Whenever a routine is visiting its children, it can share some of that
work with another idle goroutine in the pool (when available). If no
other goroutine is idle, we are already at capacity and the caller
simply does the work in its own goroutine.
The API introduced here is not the most beautiful, but allows us to
avoid allocations in most cases, which is critical for high-performance
data processing.
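Conceptually the pattern looks like the sketch below, a self-contained
illustration of the idea rather than the actual `workshare` API; the
pool here is just a channel of idle worker slots:

```go
package main

import (
	"fmt"
	"sync"
)

// pool tracks how many goroutines may pick up shared work.
type pool struct {
	slots chan struct{}
}

func newPool(n int) *pool { return &pool{slots: make(chan struct{}, n)} }

type node struct {
	name     string
	children []*node
}

// visit processes a node; child visits are handed to an idle goroutine
// when one is available, otherwise they run in the caller's own goroutine.
func visit(p *pool, wg *sync.WaitGroup, n *node) {
	fmt.Println("processing", n.name)

	for _, c := range n.children {
		select {
		case p.slots <- struct{}{}: // an idle slot exists: share the work
			wg.Add(1)
			go func(c *node) {
				defer wg.Done()
				defer func() { <-p.slots }()
				visit(p, wg, c)
			}(c)
		default: // already at capacity: do the work inline
			visit(p, wg, c)
		}
	}
}

func main() {
	root := &node{name: "root", children: []*node{
		{name: "a"},
		{name: "b", children: []*node{{name: "b/x"}}},
	}}

	var wg sync.WaitGroup
	visit(newPool(4), &wg, root)
	wg.Wait()
}
```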
* feat(snapshots): speed up uploads by parallelizing directory traversal
Previously, directories were walked strictly sequentially, which meant
we could never upload data from multiple directories in parallel, even
if each of them had just a few files.
This change switches to using the new `workshare` utility which improves
parallelism. It also reduces memory allocations, goroutine creations
and overall memory usage when taking large snapshots, while increasing
CPU utilization.
Tests on realistic directory structures show huge speed-ups during cold
snapshots (without any metadata caching):
Photo library - 160 GB, files: 41717, dirs: 1350
  Before: 3m11s
  After:  1m50s
  Total time reduction: 43%

Working code directory - 30.7 GB, files: 194560, dirs: 42455
  Before: 55s
  After:  25s
  Total time reduction: 55%
* do not report multiple cancelation errors during parallel uploads
* pr feedback, clarified usage, added comments
* fixed flaky test