* Show proper subcommand prefix in generated config CLI.
* Remove useless author info and copy the command group description.
* Really accept (implicit) -h and --help flags.
These were disabled by HideHelp, leading to an error message in every
usage output. This way, the flags get documented as well.
* Override AppHelpTemplate to better match Kong's style.
* Override (Sub)commandHelpTemplate to better match Kong's style (see the sketch after this list).
* Use <command> and [flags] like Kong.
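
For reference, these help templates are package-level string variables in urfave/cli, so overriding them is a plain assignment. A minimal sketch, assuming the v1 import path (v2 is analogous); the template body here is invented, not the Kong-styled one from this change:

```go
package main

import "github.com/urfave/cli"

func init() {
	// The template text below is purely illustrative.
	cli.AppHelpTemplate = `Usage: {{.HelpName}} <command> [flags]

{{.Usage}}
`
	// HideHelp must stay off so the implicit -h/--help flags are
	// both accepted and documented.
}
```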
Signed-off-by: André Colomb <src@andre.colomb.de>
Based on user requests from Weblate:
* `@miryusifrahimov` for Azerbaijani
* `@halbast` for Kurdish (Central)
Both seem to be legit and have previously contributed translations on
Weblate.
Signed-off-by: André Colomb <src@andre.colomb.de>
In #9701 there was a change that put the mutex used for `getExpireAdd` directly in `defaultRealCaser`, which is erroneous because multiple filesystems can share the same `caseCache`.
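
As a rough sketch of the shape of the problem (the field layout here is illustrative; only `defaultRealCaser` and `caseCache` are names from the actual code): the lock has to live on the shared cache, not on each per-filesystem caser.

```go
package fs

import "sync"

// Because several filesystems can share one caseCache, the mutex
// guarding its expiry bookkeeping must live on the cache itself.
type caseCache struct {
	mut sync.Mutex // guards the cached entries and their expiry
	// ... cached name-case entries ...
}

type defaultRealCaser struct {
	cache *caseCache // possibly shared with other filesystems
	// No per-caser mutex: locking here would not serialise access to
	// the shared cache from another filesystem's caser.
}
```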
### Purpose
Fixes #9836 and [Slow sync sending files from Android](https://forum.syncthing.net/t/slow-sync-sending-files-from-android/24208?u=marbens). There may be other issues caused by `getExpireAdd` conflicting with itself, though.
### Testing
Unit tests pass and the case cache and conflict detection _seem_ to behave correctly.
Signed-off-by: Marcus B Spencer <marcus@marcusspencer.us>
This replaces `allow_contributor` with `allow_non_author_contributor`,
because the former allows authors to approve their own pull requests.
From https://github.com/syncthing/syncthing/pull/9818#issue-2651431707:
> This adds `allow_contributor: true` which allows approvals by contributors to the PR (*but still not the author themself*, which is a different thing).
This statement conflicts with [the policybot README](c013552248/README.md), which says:
> If true, the approvals of someone who has committed to the pull request are
> considered when calculating the status.
> *The pull request author is considered a contributor.*
Signed-off-by: Marcus B Spencer <marcus@marcusspencer.us>
Due to a thinko, this optimisation was wildly incorrect and would lead
to lack of block reuse when syncing files.
(We do not insert a blocklist per device, but only a single one. We
can't use the fact of whether the insert happened as a criterion for
inserting blocks.)
Signed-off-by: Jakob Borg <jakob@kastelo.net>
IMHO the logic here was inverted. The only use for the report data is to
show a preview when we ask the user whether they want to participate in
usage reporting. However, the GUI would first load the report data and
then consider whether we wanted to show that dialog or not. Instead,
only load it if we're going to show the dialog.
Signed-off-by: Jakob Borg <jakob@kastelo.net>
This adds a new field to the file information we keep, the "previous
blocks hash". This is the hash of the file contents as it was in its
previous incarnation. That is, every scan that updates the blocks hash
will move the current hash to the "previous" field.
This enables an addition to the conflict detection algorithm: if the
file to be synced is in conflict with the current file on disk
(version-counter wise), but it indicates that it was based on the
precise contents we have (new.prevBlocksHash == current.blocksHash),
then it's not really a conflict.
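A minimal sketch of the extra check, with illustrative type and function names (only the idea of comparing the incoming file's previous blocks hash against our current blocks hash comes from the change itself):

```go
package model

import "bytes"

// FileInfo is a stand-in with just the fields relevant to the sketch.
type FileInfo struct {
	BlocksHash     []byte // hash of the current contents
	PrevBlocksHash []byte // hash of the contents before the last change
}

// isRealConflict reports whether pulling `incoming` over `current` is a
// genuine conflict. Even when the version vectors are concurrent, an
// incoming file that was edited starting from exactly the content we
// already have on disk can be applied as a plain update.
func isRealConflict(current, incoming FileInfo, versionsConcurrent bool) bool {
	if !versionsConcurrent {
		return false
	}
	if bytes.Equal(incoming.PrevBlocksHash, current.BlocksHash) {
		return false // based on our exact current content; not a conflict
	}
	return true
}
```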
Signed-off-by: Jakob Borg <jakob@kastelo.net>
This adds a rule where a maintainer can merge a PR on their own even in
non-trivial cases, which we (I) already do as-is today, but by breaking
the rules instead of following them. This just codifies that behavior.
If we get to a point where it's no longer necessary, that'd be cool.
Signed-off-by: Jakob Borg <jakob@kastelo.net>
This changes the files table to use normalisation for the names and
versions. The idea is that these are often common between all remote
devices, and repeating an integer is more efficient than repeating a
long string. A new benchmark bears this out; for a database with 100k
files shared between 31 devices, with some worst-case assumptions about
version vector size, the database is reduced in size by 50% and the test
finishes quicker:
Current:
db_bench_test.go:322: Total size: 6263.70 MiB
--- PASS: TestBenchmarkSizeManyFilesRemotes (1084.89s)
New:
db_bench_test.go:326: Total size: 3049.95 MiB
--- PASS: TestBenchmarkSizeManyFilesRemotes (776.97s)
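
To illustrate the normalisation idea itself (a hypothetical in-memory analogue, not the actual schema or code): each distinct name or version string is stored once and referenced by a small integer ID, so 31 devices announcing the same file repeat a few bytes instead of a long string.

```go
package main

import "fmt"

// intern maps each distinct string to a stable integer ID, mimicking a
// normalised lookup table referenced by rowid.
type intern struct {
	ids  map[string]int
	strs []string
}

func newIntern() *intern { return &intern{ids: make(map[string]int)} }

func (in *intern) id(s string) int {
	if id, ok := in.ids[s]; ok {
		return id
	}
	id := len(in.strs)
	in.strs = append(in.strs, s)
	in.ids[s] = id
	return id
}

func main() {
	names := newIntern()
	// 31 devices all announce the same file; the long name is stored
	// once, and each per-device row holds only the integer ID.
	for dev := 0; dev < 31; dev++ {
		fmt.Println("device", dev, "-> name id", names.id("some/long/path/to/file.txt"))
	}
}
```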
The other benchmarks end up about the same within the margin of
variability, with one possible exception being that RemoteNeed seems to
be a little slower on average:
                                     old files/s   new files/s
Update/n=RemoteNeed/size=1000-8           5.051k        4.654k
Update/n=RemoteNeed/size=2000-8           5.201k        4.384k
Update/n=RemoteNeed/size=4000-8           4.943k        4.242k
Update/n=RemoteNeed/size=8000-8           5.099k        3.527k
Update/n=RemoteNeed/size=16000-8          3.686k        3.847k
Update/n=RemoteNeed/size=30000-8          4.456k        3.482k
I'm not sure why; possibly that query can be optimised anyway.
Signed-off-by: Jakob Borg <jakob@kastelo.net>
We have a slightly naive io.ReadAll on the authentication handler, which
can result in unlimited memory consumption from an unauthenticated API
endpoint. Add a reasonable limit there.
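
As an illustration of the general technique (the actual limit value and handler wiring may differ), `http.MaxBytesReader` caps what `io.ReadAll` can consume:

```go
package api

import (
	"io"
	"net/http"
)

// maxAuthBodySize is an illustrative limit, not the value from the change.
const maxAuthBodySize = 1 << 20 // 1 MiB

func authHandler(w http.ResponseWriter, r *http.Request) {
	// MaxBytesReader fails the read once the limit is exceeded, so an
	// unauthenticated client can no longer force unbounded allocation.
	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxAuthBodySize))
	if err != nil {
		http.Error(w, "bad or oversized request body", http.StatusBadRequest)
		return
	}
	_ = body // ... decode credentials and authenticate ...
}
```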
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Remove the migrated v0.14.0 database format after two weeks. Remove a
few old patterns that are no longer relevant. Ensure the cleanup runs in
both the config and database directories.
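
A rough sketch of the age-based part (the helper and the example path are hypothetical; the real cleanup also matches a list of legacy patterns in both directories):

```go
package cleanup

import (
	"errors"
	"os"
	"time"
)

// removeIfOlderThan deletes path if it hasn't been modified within
// maxAge. Hypothetical helper for illustration only.
func removeIfOlderThan(path string, maxAge time.Duration) error {
	info, err := os.Stat(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil // already gone; nothing to do
	}
	if err != nil {
		return err
	}
	if time.Since(info.ModTime()) > maxAge {
		return os.RemoveAll(path)
	}
	return nil
}

// e.g., run in both the config and database directories:
//   removeIfOlderThan(filepath.Join(dir, "index-v0.14.0.db"), 14*24*time.Hour)
```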
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Store the sequence number of the last GC sweep in a KV. Next time, if it
matches we can just skip GC because nothing has been added or removed.
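
Schematically (the key name and the KV interface here are illustrative):

```go
package db

// kv is a minimal stand-in for the database's key-value store.
type kv interface {
	GetInt64(key string) (val int64, ok bool)
	PutInt64(key string, val int64) error
}

const gcSeqKey = "lastGCSweepSeq" // illustrative key name

// maybeGC runs a GC sweep only if the sequence number has moved since
// the previous sweep, i.e. something was actually added or removed.
func maybeGC(store kv, currentSeq int64, sweep func() error) error {
	if last, ok := store.GetInt64(gcSeqKey); ok && last == currentSeq {
		return nil // nothing changed since the last sweep; skip GC
	}
	if err := sweep(); err != nil {
		return err
	}
	return store.PutInt64(gcSeqKey, currentSeq)
}
```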
Signed-off-by: Jakob Borg <jakob@kastelo.net>