Compare commits


30 Commits

Author SHA1 Message Date
Jakob Borg
e41d6b9c1e fix(db): apply all migrations and schema in one transaction 2025-08-31 12:43:41 +02:00
Jakob Borg
21ad99c80a Revert "chore(db): update schema version in the same transaction as migration (#10321)"
This reverts commit 4459438245.
2025-08-31 12:43:41 +02:00
Jakob Borg
4ad3f07691 chore(db): migration for previous commits (#10319)
Recreate the blocks and block lists tables.

---------

Co-authored-by: bt90 <btom1990@googlemail.com>
2025-08-31 09:27:33 +02:00
Simon Frei
4459438245 chore(db): update schema version in the same transaction as migration (#10321)
Just to be entirely sure that if the migration succeeds, the schema
version is also always updated. Currently, if a migration succeeds but a
later migration doesn't, the changes of the first migration apply but the
version stays unchanged - if that migration is breaking/non-idempotent,
it will fail when it is rerun next time (otherwise the rerun is just
pointless).

Unfortunately with the current `db.runScripts` it wasn't that easy to
do, so I had to do quite a bit of refactoring. I am also ensuring the
right order of transactions now, though I assume that was already the
case lexicographically - can't hurt to be safe.
2025-08-30 13:18:31 +02:00
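
A minimal sketch of the pattern described above, assuming a plain `database/sql` handle and the `schemamigrations` table from the diff further down; helper names and the column set are simplified:

```
package dbmigrate

import (
	"database/sql"
	"time"
)

// applyMigrations runs all pending migration scripts and records the new
// schema version inside a single transaction: either the migrations and
// the version update all apply, or none of them do.
func applyMigrations(db *sql.DB, scripts []string, version int) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	for _, script := range scripts {
		if _, err := tx.Exec(script); err != nil {
			return err // rollback also undoes earlier scripts
		}
	}
	if _, err := tx.Exec(
		`INSERT OR IGNORE INTO schemamigrations (schema_version, applied_at) VALUES (?, ?)`,
		version, time.Now().UnixNano(),
	); err != nil {
		return err
	}
	return tx.Commit()
}
```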
Jakob Borg
2306c6d989 chore(db): benchmark output, migration blocks/s output (#10320)
Just minor tweaks
2025-08-29 14:58:38 +00:00
Tomasz Wilczyński
0de55ef262 chore(gui): use step of 3600 for versions cleanup interval (#10317)
Currently, the input field has no step defined, meaning that it can only
be changed with the arrow keys in the default increment of "1".
Considering that the default value is "3600" (seconds, i.e. one hour), it
is unlikely that the user wants to change it in such minimal steps.

For this reason, change the step to "3600" (one hour). If the
user needs more granular control, they can still input the value
in seconds manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:57:27 +02:00
Tomasz Wilczyński
d083682418 chore(gui): use steps of 1024 KiB for bandwidth rate limits (#10316)
Currently, the bandwidth limit input fields have no step defined, and as
such they use the default value of "1". Given that these fields are
measured in KiB, it makes more sense to use larger steps, such as "1024"
(1 MiB), as in most cases it is very unlikely that the user needs
byte-level control over the limits.

Note that these steps only apply when changing the values with the
arrow keys; the user can still input any value manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:56:55 +02:00
Jakob Borg
c918299eab refactor(db): slightly improve insert performance (#10318)
This just removes an unnecessary foreign key constraint; we already do
the garbage collection manually in the database service. However, as
part of getting here I tried a couple of other variants along the way:

- Changing the order of the primary key from `(hash, blocklist_hash,
idx)` to `(blocklist_hash, idx, hash)` so that inserts would be
naturally ordered. However, this requires a new index `on blocks (hash)`
so that we can still look up blocks by hash, and it turns out to be
strictly worse than what we already have.
- Removing the primary key entirely, and the `WITHOUT ROWID` clause, to
make it a rowid table without any required order, with an index as
above. This is faster when the table is small, but becomes slower when
it's large (due to the dual indexes, I guess).

These are the benchmark results from current `main`, the second
alternative below ("Index(hash)") and this proposal that retains the
combined primary key ("combined"). Overall it ends up being about 65%
faster.

[Screenshot: benchmark results for the three variants]

Ref #10264
2025-08-29 15:26:23 +02:00
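
Sketched as SQLite DDL (in Go string constants, matching how the project embeds its schema), these are the variants discussed above; column names follow the commit message, while the types and the full column set are assumptions:

```
package blocksschema

// Retained: combined primary key with hash leading, so lookups by hash
// use the primary key directly, and no foreign key back to blocklists
// (garbage collection is done manually in the database service).
const blocksRetained = `
CREATE TABLE blocks (
    hash           BLOB NOT NULL,
    blocklist_hash BLOB NOT NULL,
    idx            INTEGER NOT NULL,
    PRIMARY KEY (hash, blocklist_hash, idx)
) WITHOUT ROWID;
`

// Rejected: reordering the key makes inserts naturally ordered, but then
// a second index on hash is needed for lookups, which proved strictly
// worse overall.
const blocksReordered = `
CREATE TABLE blocks (
    hash           BLOB NOT NULL,
    blocklist_hash BLOB NOT NULL,
    idx            INTEGER NOT NULL,
    PRIMARY KEY (blocklist_hash, idx, hash)
) WITHOUT ROWID;
CREATE INDEX blocks_hash ON blocks (hash);
`
```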
bt90
b59443f136 chore(db): avoid rowid for blocks and blocklists (#10315)
### Purpose

Noticed "large" autogenerated indices on blocks and blocklists in
https://forum.syncthing.net/t/database-or-disk-is-full-might-be-syncthing-might-be-qnap/24930/7

They both have a primary key and don't need rowids.

2025-08-29 11:12:39 +02:00
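
For context, a sketch of what the change above does for a table like blocklists (column names assumed for illustration): with a non-integer primary key, a regular table stores rows under a hidden rowid and SQLite creates an automatic unique index to enforce the key, which is the "large" autogenerated index noted above; `WITHOUT ROWID` clusters the rows on the primary key itself and drops that extra index.

```
package blocksschema

// Rowid table: rows keyed by a hidden rowid, plus an automatic
// sqlite_autoindex_* unique index on blocklist_hash.
const blocklistsRowid = `
CREATE TABLE blocklists (blocklist_hash BLOB NOT NULL PRIMARY KEY, blocks BLOB);
`

// WITHOUT ROWID table: rows stored clustered on blocklist_hash, no
// separate autoindex needed.
const blocklistsWithoutRowid = `
CREATE TABLE blocklists (blocklist_hash BLOB NOT NULL PRIMARY KEY, blocks BLOB) WITHOUT ROWID;
`
```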
Tomasz Wilczyński
7189a3ebff fix(model): consider number of CPU cores when calculating hashers on interactive OS (#10284) (#10286)
Currently, the number of hashers is always set to 1 on interactive
operating systems, which are defined as Windows, macOS, iOS, and
Android. However, with modern multicore CPUs, it does not make much
sense to limit performance so much.

For example, without this fix, a CPU with 16 cores / 32 threads is
still limited to using just a single thread to hash files per folder,
which may severely affect its performance.

For this reason, instead of using a fixed value, calculate the number
dynamically, so that it equals one-fourth of the total number of CPU
cores. This way, the number of hashers still ends up being just 1 on
a slower 4-thread CPU, but it is allowed to take larger values when
the number of threads is higher, increasing hashing performance in the
process.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 10:04:08 +00:00
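
A minimal sketch of the calculation described above (the function name is hypothetical; the real logic lives in the model package):

```
package hashers

import "runtime"

// numHashersInteractive returns the hasher count for interactive OSes:
// one-fourth of the CPU threads, but never less than 1. A 4-thread CPU
// still gets 1 hasher; a 32-thread CPU now gets 8 instead of 1.
func numHashersInteractive() int {
	if n := runtime.NumCPU() / 4; n > 1 {
		return n
	}
	return 1
}
```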
Tomasz Wilczyński
6ed4cca691 fix(model): consider MaxFolderConcurrency when calculating number of hashers (#10285)
Currently, the number of hashers, with the exception of some specific
operating systems or when defined manually, equals the number of CPU
cores divided by the overall number of folders, and it does not take
into account the value of MaxFolderConcurrency at all. This leads to
artificial performance limits even when MaxFolderConcurrency is set to
values lower than the number of cores.

For example, let's say that the number of folders is 50 and
MaxFolderConcurrency is set to a value of 4 on a 16-core CPU. With the old
calculation, the number of hashers would still end up being just 1 due
to the large number of folders. However, with the new calculation, the
number of hashers in this case will be 4, leading to better hashing
performance per folder.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 11:33:58 +02:00
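
A sketch of the adjusted calculation (names hypothetical): divide the CPU count by the number of folders that can actually run concurrently, rather than by the total folder count.

```
package hashers

import "runtime"

// numHashersPerFolder: with 50 folders, MaxFolderConcurrency 4 and 16
// cores, this yields 16/4 = 4 hashers instead of the old 16/50 -> 1.
func numHashersPerFolder(numFolders, maxFolderConcurrency int) int {
	concurrent := numFolders
	if maxFolderConcurrency > 0 && maxFolderConcurrency < concurrent {
		concurrent = maxFolderConcurrency
	}
	if n := runtime.NumCPU() / concurrent; n > 1 {
		return n
	}
	return 1
}
```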
Tommy van der Vorst
958f51ace6 fix(cmd): only start temporary API server during migration if it's enabled (#10284) 2025-08-25 05:46:23 +00:00
Syncthing Release Automation
07f1320e00 chore(gui, man, authors): update docs, translations, and contributors 2025-08-25 03:57:29 +00:00
Jakob Borg
3da449cfa3 chore(ursrv): count database engines 2025-08-24 22:35:00 +02:00
Jakob Borg
655ef63c74 chore(ursrv): separate calculation from serving metrics 2025-08-24 22:34:58 +02:00
Jakob Borg
01257e838b build: use Go 1.24 tools pattern (#10281) 2025-08-24 12:17:20 +00:00
Simon Frei
e54f51c9c5 chore(db): cleanup DB in tests and remove OpenTemp (#10282)
Filled up my tmpfs with test DBs when running benchmarks :)
2025-08-24 09:58:56 +00:00
Simon Frei
a259a009c8 chore(db): adjust db bench name to improve benchstat grouping (#10283)
The benchstat tool allows custom grouping when comparing with what it
calls "sub-name configuration keys":

https://pkg.go.dev/golang.org/x/perf@v0.0.0-20250813145418-2f7363a06fe1/cmd/benchstat#hdr-Configuring_comparisons

That's quite useful for these benchmarks, as we basically have two
independent configs: the type of benchmark and the size. A real example
usage for the prepared named statements PR (results are rubbish for
unrelated reasons):

```
$ benchstat -row ".name /n" bench-main.out bench-prepared.out
goos: linux
goarch: amd64
pkg: github.com/syncthing/syncthing/internal/db/sqlite
cpu: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
                            │ bench-main-20250823_014059.out │   bench-prepared-20250823_022849.out   │
                            │             sec/op             │     sec/op      vs base                │
Update Insert100Loc                           248.5m ±  8% ¹   157.7m ±  7% ¹  -36.54% (p=0.000 n=50)
Update RepBlocks100                           253.7m ±  4% ¹   163.6m ±  7% ¹  -35.49% (p=0.000 n=50)
Update RepSame100                            130.42m ±  3% ¹   60.26m ±  2% ¹  -53.80% (p=0.000 n=50)
Update Insert100Rem                           38.54m ±  5% ¹   21.94m ±  1% ¹  -43.07% (p=0.000 n=50)
Update GetGlobal100                          10.897m ±  4% ¹   4.231m ±  1% ¹  -61.17% (p=0.000 n=50)
Update LocalSequenced                         7.560m ±  5% ¹   3.124m ±  2% ¹  -58.68% (p=0.000 n=50)
Update GetDeviceSequenceLoc                  17.554µ ±  6% ¹   8.400µ ±  1% ¹  -52.15% (n=50)
Update GetDeviceSequenceRem                  17.727µ ±  4% ¹   8.237µ ±  2% ¹  -53.54% (p=0.000 n=50)
Update RemoteNeed                              4.147 ± 77% ¹    1.903 ± 78% ¹  -54.11% (p=0.000 n=50)
Update LocalNeed100Largest                   21.516m ± 22% ¹   9.312m ± 47% ¹  -56.72% (p=0.000 n=50)
geomean                                       15.35m           7.486m          -51.22%
¹ benchmarks vary in .fullname
```
2025-08-23 16:12:55 +02:00
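
Condensed from the benchmark diff further down, this is the naming scheme that makes the grouping work: sub-names of the form `key=value` become benchstat configuration keys, so `-row ".name /n"` can pivot on benchmark type and size independently.

```
package sqlite

import (
	"fmt"
	"testing"
)

func BenchmarkUpdate(b *testing.B) {
	for _, size := range []int{1000, 2000, 4000} {
		// "n" and "size" are sub-name configuration keys for benchstat.
		b.Run(fmt.Sprintf("n=Insert100Loc/size=%d", size), func(b *testing.B) {
			for range b.N {
				// ... insert 100 local files against a DB of `size` files
			}
		})
	}
}
```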
Jakob Borg
8151bcddff fix(db): clean files for dropped folders at startup (#10280)
This adds a startup cleanup stage that removes database files for folders
that no longer exist. Folder database files were already removed when
dropping a folder, assuming that the folder database had been opened at
that point. This won't be the case though when a folder is removed from
the config when Syncthing isn't running, or when a folder is dropped and
re-migrated in a restarted migration.
2025-08-22 09:00:05 +02:00
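
A hypothetical sketch of such a cleanup pass; the per-folder database file naming and the ID lookup are assumptions for illustration only:

```
package dbcleanup

import (
	"os"
	"path/filepath"
)

// cleanOrphanedFolderDBs removes per-folder database files whose folder
// ID is no longer present in the configuration.
func cleanOrphanedFolderDBs(dbDir string, configured map[string]bool) error {
	matches, err := filepath.Glob(filepath.Join(dbDir, "folder.*.db"))
	if err != nil {
		return err
	}
	for _, path := range matches {
		base := filepath.Base(path) // folder.<id>.db
		id := base[len("folder.") : len(base)-len(".db")]
		if !configured[id] {
			if err := os.Remove(path); err != nil {
				return err
			}
		}
	}
	return nil
}
```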
Jakob Borg
d776657b52 fix(cmd): provide temporary GUI/API server during database migration (#10279)
This adds a temporary GUI/API server during the database migration. It
responds with 200 OK and some log output for every request. This serves
two purposes:
- Primarily, for deployments that use the API as a health check, it
gives them something positive to accept during the migration, reducing
the risk of the migration getting killed halfway through and restarted,
thus never completing.
- Secondarily, it gives humans who happen to try to load the GUI some
sort of indication of what's going on.

Obviously, anything that expects a well-formed API response at this
stage is still going to fail. They were already failing though, as we
didn't even listen at this point before.
2025-08-22 08:35:42 +02:00
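
A minimal sketch of such a placeholder handler (the response text is an assumption; per the commit, the real server returns 200 OK and logs each request):

```
package tempapi

import (
	"log/slog"
	"net/http"
)

// handler answers every request with 200 OK and a log line, keeping
// API-based health checks happy while the migration runs.
func handler(w http.ResponseWriter, r *http.Request) {
	slog.Info("Request during database migration", "method", r.Method, "path", r.URL.Path)
	w.WriteHeader(http.StatusOK)
	_, _ = w.Write([]byte("Database migration in progress\n"))
}
```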
Jakob Borg
0416103f26 fix(cmd): make database migration more robust to write errors (#10278)
Two things:
- We could run into a write error, which would block the progress
forever without an error. This because the writer routine exited, while
the reader was just blocked on sending to it.
- After a failed migration, inserts could fail with unique index
constraint errors because we are reusing the sequence numbers from the
original database. Add a drop folder to the start of migration to handle
this.

Additionally, the drop folder will clear out broken database files due
to killed migrations.
2025-08-22 08:08:06 +02:00
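
The first point describes a classic pipeline deadlock; a sketch of the shape of the fix, assuming a channel-based reader/writer pair (names hypothetical):

```
package migrate

// send forwards an item to the writer, but gives up if the writer has
// exited (e.g. on a write error) and closed its done channel. A bare
// `ch <- item` would block forever once the writer is gone.
func send(ch chan<- []byte, done <-chan struct{}, item []byte) bool {
	select {
	case ch <- item:
		return true
	case <-done:
		return false
	}
}
```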
Jakob Borg
7bfcdfb577 build: downgrade gopsutil (fixes #10276) (#10277) 2025-08-21 20:09:31 +00:00
Jakob Borg
e6a9b09527 fix: permissions in moving deb files? 2025-08-20 23:32:32 +02:00
Jakob Borg
c8f52ba1bc build: use new apt publisher 2025-08-20 23:05:52 +02:00
Ross Smith II
3058aa6315 chore(slog): re-enable LOGGER_DISCARD (fixes #10262) (#10267)
### Purpose

Re-enables LOGGER_DISCARD. See #10262.

### Documentation

No changes needed, as the docs already mention this variable.
2025-08-19 22:36:10 +02:00
André Colomb
60160db23a fix(cmd): restore --version flag for compatibility (#10269)
### Purpose

This was lost / replaced when introducing the "version" command.
However, the documentation still lists the flag - actually under the
serve command, but that can be omitted. It is common convention for CLI
programs to accept it as a flag.

### Testing

```
$ bin/syncthing --help
Usage: syncthing <command> [flags]

Flags:
  -h, --help           Show context-sensitive help.
  -C, --config=PATH    Set configuration directory (config and keys) ($STCONFDIR)
  -D, --data=PATH      Set data directory (database and logs) ($STDATADIR)
  -H, --home=PATH      Set configuration and data directory ($STHOMEDIR)
      --version        Show current version, then exit

Commands:
  serve                  Run Syncthing (default)
  cli                    Command line interface for Syncthing
  browser                Open GUI in browser, then exit
  decrypt                Decrypt or verify an encrypted folder
  device-id              Show device ID, then exit
  generate               Generate key and config, then exit
  paths                  Show configuration paths, then exit
  upgrade                Perform or check for upgrade, then exit
  version                Show current version, then exit
  debug                  Various debugging commands
  install-completions    Print commands to install shell completions

Run "syncthing <command> --help" for more information on a command.
```

```
$ bin/syncthing --version
syncthing v2.0.3-dev.2.g0f47e944-restore-version-flag "Hafnium Hornet" (go1.24.0 linux-amd64) acolomb@riddo 2025-08-18 19:25:31 UTC
```

### Documentation

Already / *still* listed in the docs under Command Line Operation.
2025-08-18 22:00:03 +02:00
Syncthing Release Automation
66b28e9aed chore(gui, man, authors): update docs, translations, and contributors 2025-08-18 04:05:25 +00:00
Jakob Borg
755daaa7b7 build: set netgo & osusergo tags for Linux build (#10261)
Avoid:

/_/GOROOT/src/os/user/cgo_lookup_cgo.go:45:(.text+0x54): warning: Using
'getgrgid_r' in statically linked applications requires at runtime the
shared libraries from the glibc version used for linking

and

/tmp/go-build/cgo-gcc-prolog:60:(.text+0x40): warning: Using
'getaddrinfo' in statically linked applications requires at runtime the
shared libraries from the glibc version used for linking
2025-08-16 06:33:01 +02:00
Jakob Borg
33b5c3c62e build: bump required language level to 1.24, compiler to 1.25 (#10248)
(After 2.0.1)
2025-08-16 06:02:58 +02:00
Jakob Borg
ffb30392e8 build: remove netgo and osusergo build tags (fixes #10251) (#10256)
I added these tags as part of the big database PR, but I forget why. I
think it came from an attempt at a static binary using the Go-based
SQLite packages, but that's not the primary build anymore anyway. We can
remove these and go back to the standard resolvers, which give better
support for the special name resolution methods on Windows and macOS in
particular.
2025-08-14 21:32:06 +02:00
84 changed files with 562 additions and 523 deletions


@@ -7,7 +7,7 @@ on:
- infra-*
env:
GO_VERSION: "~1.24.0"
GO_VERSION: "~1.25.0"
CGO_ENABLED: "0"
BUILD_USER: docker
BUILD_HOST: github.syncthing.net


@@ -13,7 +13,7 @@ env:
# The go version to use for builds. We set check-latest to true when
# installing, so we get the latest patch version that matches the
# expression.
GO_VERSION: "~1.24.0"
GO_VERSION: "~1.25.0"
# Optimize compatibility on the slow architectures.
GOMIPS: softfloat
@@ -26,7 +26,8 @@ env:
BUILD_USER: builder
BUILD_HOST: github.syncthing.net
TAGS: "netgo osusergo sqlite_omit_load_extension sqlite_dbstat"
TAGS: "sqlite_omit_load_extension sqlite_dbstat"
TAGS_LINUX: "sqlite_omit_load_extension sqlite_dbstat netgo osusergo"
# A note on actions and third party code... The actions under actions/ (like
# `uses: actions/checkout`) are maintained by GitHub, and we need to trust
@@ -102,7 +103,7 @@ jobs:
runner: ["windows-latest", "ubuntu-latest", "macos-latest"]
# The oldest version in this list should match what we have in our go.mod.
# Variables don't seem to be supported here, or we could have done something nice.
go: ["~1.23.0", "~1.24.0"]
go: ["~1.24.0", "~1.25.0"]
runs-on: ${{ matrix.runner }}
steps:
- name: Set git to use LF
@@ -310,19 +311,19 @@ jobs:
run: |
sudo apt-get install -y gcc-mips64-linux-gnuabi64 gcc-mips64el-linux-gnuabi64
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch 386 -cc "zig cc -target x86-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch arm -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips -cc "zig cc -target mips-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mipsle -cc "zig cc -target mipsel-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips64 -cc mips64-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch mips64le -cc mips64el-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch riscv64 -cc "zig cc -target riscv64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch s390x -cc "zig cc -target s390x-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch loong64 -cc "zig cc -target loongarch64-linux-musl" tar "$tgt"
# go run build.go -tags "${{env.TAGS}}" -goos linux -goarch ppc64 -cc "zig cc -target powerpc64-linux-musl" tar "$tgt" # fails with linkmode not supported
go run build.go -tags "${{env.TAGS}}" -goos linux -goarch ppc64le -cc "zig cc -target powerpc64le-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch 386 -cc "zig cc -target x86-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch arm -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch mips -cc "zig cc -target mips-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch mipsle -cc "zig cc -target mipsel-linux-musleabi" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch mips64 -cc mips64-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch mips64le -cc mips64el-linux-gnuabi64-gcc tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch riscv64 -cc "zig cc -target riscv64-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch s390x -cc "zig cc -target s390x-linux-musl" tar "$tgt"
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch loong64 -cc "zig cc -target loongarch64-linux-musl" tar "$tgt"
# go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch ppc64 -cc "zig cc -target powerpc64-linux-musl" tar "$tgt" # fails with linkmode not supported
go run build.go -tags "${{env.TAGS_LINUX}}" -goos linux -goarch ppc64le -cc "zig cc -target powerpc64le-linux-musl" tar "$tgt"
done
env:
CGO_ENABLED: "1"
@@ -704,10 +705,10 @@ jobs:
- name: Package for Debian (CGO)
run: |
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch armel -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch armhf -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS_LINUX}}" -goos linux -goarch amd64 -cc "zig cc -target x86_64-linux-musl" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS_LINUX}}" -goos linux -goarch armel -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS_LINUX}}" -goos linux -goarch armhf -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" deb "$tgt"
go run build.go -no-upgrade -installsuffix=no-upgrade -tags "${{env.TAGS_LINUX}}" -goos linux -goarch arm64 -cc "zig cc -target aarch64-linux-musl" deb "$tgt"
done
env:
BUILD_USER: debian
@@ -889,8 +890,6 @@ jobs:
RELEASE_GENERATION: ${{ needs.facts.outputs.release-generation }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download packages
uses: actions/download-artifact@v4
with:
@@ -898,14 +897,6 @@ jobs:
path: packages
# Decide whether packages should go to stable, candidate or nightly
- name: Prepare packages
run: |
if [[ $RELEASE_KIND == stable && $RELEASE_GENERATION == v2 ]] ; then
RELEASE_KIND=stable-v2
fi
mkdir -p packages/syncthing/$RELEASE_KIND
mv packages/*.deb packages/syncthing/$RELEASE_KIND
- name: Pull archive
uses: docker://docker.io/rclone/rclone:latest
env:
@@ -917,15 +908,18 @@ jobs:
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync objstore:apt/dists dists
args: sync objstore:apt apt
- name: Prepare packages
run: |
sudo chown -R $(id -u) apt/pool
mv packages/*.deb apt/pool
- name: Update archive
uses: docker://ghcr.io/kastelo/ezapt:latest
with:
args:
publish
--add packages
--dists dists
publish --root apt
env:
EZAPT_KEYRING_BASE64: ${{ secrets.APT_GPG_KEYRING_BASE64 }}
@@ -940,7 +934,7 @@ jobs:
RCLONE_CONFIG_OBJSTORE_REGION: ${{ secrets.S3_REGION }}
RCLONE_CONFIG_OBJSTORE_ACL: public-read
with:
args: sync -v --no-update-modtime dists objstore:apt/dists
args: sync -v --no-update-modtime apt objstore:apt
#
# Build and push (except for PRs) to GHCR.
@@ -993,15 +987,15 @@ jobs:
- name: Build binaries (CGO)
run: |
# amd64
go run build.go -goos linux -goarch amd64 -tags "${{env.TAGS}}" -cc "zig cc -target x86_64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
go run build.go -goos linux -goarch amd64 -tags "${{env.TAGS_LINUX}}" -cc "zig cc -target x86_64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-amd64
# arm64
go run build.go -goos linux -goarch arm64 -tags "${{env.TAGS}}" -cc "zig cc -target aarch64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
go run build.go -goos linux -goarch arm64 -tags "${{env.TAGS_LINUX}}" -cc "zig cc -target aarch64-linux-musl" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-arm64
# arm
go run build.go -goos linux -goarch arm -tags "${{env.TAGS}}" -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" -no-upgrade build ${{ matrix.pkg }}
go run build.go -goos linux -goarch arm -tags "${{env.TAGS_LINUX}}" -cc "zig cc -target arm-linux-musleabi -mcpu=arm1136j_s" -no-upgrade build ${{ matrix.pkg }}
mv ${{ matrix.pkg }} ${{ matrix.pkg }}-linux-arm
env:
CGO_ENABLED: "1"


@@ -64,6 +64,12 @@ linters:
# relax the slog rules for debug lines, for now
- linters: [sloglint]
source: Debug
# contexts are irrelevant for SQLite
- linters: [noctx]
text: database/sql
# Rollback errors can be ignored
- linters: [errcheck]
source: Rollback
settings:
sloglint:
context: "scope"


@@ -32,6 +32,21 @@ var (
Subsystem: "ursrv_v2",
Name: "collect_seconds_last",
})
metricsRecalcsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalcs_total",
})
metricsRecalcSecondsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_total",
})
metricsRecalcSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_last",
})
metricsWriteSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",


@@ -7,6 +7,8 @@
package serve
import (
"context"
"log/slog"
"reflect"
"slices"
"strconv"
@@ -28,7 +30,7 @@ type metricsSet struct {
gaugeVecLabels map[string][]string
summaries map[string]*metricSummary
collectMut sync.Mutex
collectMut sync.RWMutex
collectCutoff time.Duration
}
@@ -108,6 +110,60 @@ func nameConstLabels(name string) (string, prometheus.Labels) {
return name, m
}
func (s *metricsSet) Serve(ctx context.Context) error {
s.recalc()
const recalcInterval = 5 * time.Minute
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer := time.NewTimer(next)
defer recalcTimer.Stop()
for {
select {
case <-recalcTimer.C:
s.recalc()
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer.Reset(next)
case <-ctx.Done():
return ctx.Err()
}
}
}
func (s *metricsSet) recalc() {
s.collectMut.Lock()
defer s.collectMut.Unlock()
t0 := time.Now()
defer func() {
dur := time.Since(t0)
slog.Info("Metrics recalculated", "d", dur.String())
metricsRecalcSecondsLast.Set(dur.Seconds())
metricsRecalcSecondsTotal.Add(dur.Seconds())
metricsRecalcsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
}
func (s *metricsSet) addReport(r *contract.Report) {
gaugeVecs := make(map[string][]string)
s.addReportStruct(reflect.ValueOf(r).Elem(), gaugeVecs)
@@ -198,8 +254,8 @@ func (s *metricsSet) Describe(c chan<- *prometheus.Desc) {
}
func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
s.collectMut.Lock()
defer s.collectMut.Unlock()
s.collectMut.RLock()
defer s.collectMut.RUnlock()
t0 := time.Now()
defer func() {
@@ -209,26 +265,6 @@ func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
metricsCollectsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
for _, g := range s.gauges {
c <- g
}


@@ -33,6 +33,7 @@ import (
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/ur/contract"
"github.com/thejerf/suture/v4"
)
type CLI struct {
@@ -187,7 +188,12 @@ func (cli *CLI) Run() error {
// New external metrics endpoint accepts reports from clients and serves
// aggregated usage reporting metrics.
main := suture.NewSimple("main")
main.ServeBackground(context.Background())
ms := newMetricsSet(srv)
main.Add(ms)
reg := prometheus.NewRegistry()
reg.MustRegister(ms)
@@ -198,7 +204,7 @@ func (cli *CLI) Run() error {
metricsSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
WriteTimeout: 60 * time.Second,
Handler: mux,
}
@@ -227,6 +233,11 @@ func (cli *CLI) downloadDumpFile(blobs blob.Store) error {
}
func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
t0 := time.Now()
defer func() {
metricsWriteSecondsLast.Set(float64(time.Since(t0)))
}()
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
@@ -245,9 +256,10 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
if err := os.Rename(cli.DumpFile+".tmp", cli.DumpFile); err != nil {
return fmt.Errorf("renaming dump file: %w", err)
}
slog.Info("Dump file saved")
slog.Info("Dump file saved", "d", time.Since(t0).String())
if blobs != nil {
t1 := time.Now()
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
@@ -257,7 +269,7 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
slog.Info("Dump file uploaded")
slog.Info("Dump file uploaded", "d", time.Since(t1).String())
}
return nil
@@ -369,6 +381,13 @@ func (s *server) addReport(rep *contract.Report) bool {
rep.DistOS = rep.OS
rep.DistArch = rep.Arch
if strings.HasPrefix(rep.Version, "v2.") {
rep.Database.ModernCSQLite = strings.Contains(rep.LongVersion, "modernc-sqlite")
rep.Database.MattnSQLite = !rep.Database.ModernCSQLite
} else {
rep.Database.LevelDB = true
}
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
}
@@ -388,6 +407,7 @@ func (s *server) save(w io.Writer) error {
}
func (s *server) load(r io.Reader) {
t0 := time.Now()
dec := json.NewDecoder(r)
s.reports.Clear()
for {
@@ -400,7 +420,7 @@ func (s *server) load(r io.Reader) {
}
s.addReport(&rep)
}
slog.Info("Loaded reports", "count", s.reports.Size())
slog.Info("Loaded reports", "count", s.reports.Size(), "d", time.Since(t0).String())
}
var (


@@ -122,9 +122,10 @@ type CLI struct {
// subcommands. Their settings take effect on the `locations` package by
// way of the command line parser, so anything using `locations.Get` etc
// will be doing the right thing.
ConfDir string `name:"config" short:"C" placeholder:"PATH" env:"STCONFDIR" help:"Set configuration directory (config and keys)"`
DataDir string `name:"data" short:"D" placeholder:"PATH" env:"STDATADIR" help:"Set data directory (database and logs)"`
HomeDir string `name:"home" short:"H" placeholder:"PATH" env:"STHOMEDIR" help:"Set configuration and data directory"`
ConfDir string `name:"config" short:"C" placeholder:"PATH" env:"STCONFDIR" help:"Set configuration directory (config and keys)"`
DataDir string `name:"data" short:"D" placeholder:"PATH" env:"STDATADIR" help:"Set data directory (database and logs)"`
HomeDir string `name:"home" short:"H" placeholder:"PATH" env:"STHOMEDIR" help:"Set configuration and data directory"`
VersionFlag bool `name:"version" help:"Show current version, then exit"`
Serve serveCmd `cmd:"" help:"Run Syncthing (default)" default:"withargs"`
CLI cli.CLI `cmd:"" help:"Command line interface for Syncthing"`
@@ -224,6 +225,12 @@ func main() {
kongplete.Complete(parser)
ctx, err := parser.Parse(os.Args[1:])
parser.FatalIfErrorf(err)
if entrypoint.VersionFlag {
_ = versionCmd{}.Run()
return
}
err = ctx.Run()
parser.FatalIfErrorf(err)
}
@@ -472,7 +479,11 @@ func (c *serveCmd) syncthingMain() {
})
}
if err := syncthing.TryMigrateDatabase(c.DBDeleteRetentionInterval); err != nil {
var tempApiAddress string
if cfgWrapper.GUI().Enabled {
tempApiAddress = cfgWrapper.GUI().Address()
}
if err := syncthing.TryMigrateDatabase(ctx, c.DBDeleteRetentionInterval, tempApiAddress); err != nil {
slog.Error("Failed to migrate old-style database", slogutil.Error(err))
os.Exit(1)
}

go.mod

@@ -1,6 +1,6 @@
module github.com/syncthing/syncthing
go 1.23.0
go 1.24.0
require (
github.com/AudriusButkevicius/recli v0.0.7
@@ -24,7 +24,6 @@ require (
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/maruel/panicparse/v2 v2.5.0
github.com/mattn/go-sqlite3 v1.14.31
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3
github.com/maxmind/geoipupdate/v6 v6.1.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75
github.com/oschwald/geoip2-golang v1.13.0
@@ -34,7 +33,7 @@ require (
github.com/quic-go/quic-go v0.52.0
github.com/rabbitmq/amqp091-go v1.10.0
github.com/rcrowley/go-metrics v0.0.0-20250401214520-65e299d6c5c9
github.com/shirou/gopsutil/v4 v4.25.7
github.com/shirou/gopsutil/v4 v4.25.6 // https://github.com/shirou/gopsutil/issues/1898
github.com/syncthing/notify v0.0.0-20250528144937-c7027d4f7465
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d
github.com/thejerf/suture/v4 v4.0.6
@@ -49,7 +48,6 @@ require (
golang.org/x/sys v0.35.0
golang.org/x/text v0.28.0
golang.org/x/time v0.12.0
golang.org/x/tools v0.36.0
google.golang.org/protobuf v1.36.7
modernc.org/sqlite v1.38.2
sigs.k8s.io/yaml v1.6.0
@@ -79,6 +77,7 @@ require (
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/nxadm/tail v1.4.11 // indirect
@@ -103,6 +102,7 @@ require (
go.yaml.in/yaml/v2 v2.4.2 // indirect
golang.org/x/mod v0.27.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/tools v0.36.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/mathutil v1.7.1 // indirect
@@ -114,3 +114,9 @@ replace github.com/gobwas/glob v0.2.3 => github.com/calmh/glob v0.0.0-2022061508
// https://github.com/mattn/go-sqlite3/pull/1338
replace github.com/mattn/go-sqlite3 v1.14.31 => github.com/calmh/go-sqlite3 v1.14.32-0.20250812195006-80712c77b76a
tool (
github.com/calmh/xdr/cmd/genxdr
github.com/maxbrunsfeld/counterfeiter/v6
golang.org/x/tools/cmd/goimports
)

go.sum

@@ -240,8 +240,8 @@ github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sclevine/spec v1.4.0 h1:z/Q9idDcay5m5irkZ28M7PtQM4aOISzOpj4bUPkDee8=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
github.com/shirou/gopsutil/v4 v4.25.7 h1:bNb2JuqKuAu3tRlPv5piSmBZyMfecwQ+t/ILq+1JqVM=
github.com/shirou/gopsutil/v4 v4.25.7/go.mod h1:XV/egmwJtd3ZQjBpJVY5kndsiOO4IRqy9TQnmm6VP7U=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=


@@ -61,7 +61,7 @@
"Click to see full identification string and QR code.": "Klicken, um die vollständige Kennung und den QR-Code anzuzeigen.",
"Close": "Schließen",
"Command": "Befehl",
"Comment, when used at the start of a line": "Kommentar, wenn am Anfang der Zeile verwendet.",
"Comment, when used at the start of a line": "Kommentar, wenn am Anfang der Zeile verwendet",
"Compression": "Komprimierung",
"Configuration Directory": "Konfigurationsverzeichnis",
"Configuration File": "Konfigurationsdatei",
@@ -82,6 +82,7 @@
"Custom Range": "Eigener Zeitraum",
"Danger!": "Achtung!",
"Database Location": "Datenbank-Speicherort",
"Debug": "Debug",
"Debugging Facilities": "Debugging-Möglichkeiten",
"Default": "Vorgabe",
"Default Configuration": "Vorgabekonfiguration",
@@ -104,7 +105,7 @@
"Device Status": "Gerätestatus",
"Device is untrusted, enter encryption password": "Gerät wird nicht vertraut, Verschlüsselungspasswort eingeben",
"Device rate limits": "Datenratenbegrenzungen fürs Gerät",
"Device that last modified the item": "Gerät, das das Element zuletzt geändert hat",
"Device that last modified the item": "Gerät, welches das Element zuletzt geändert hat",
"Devices": "Geräte",
"Disable Crash Reporting": "Absturzmeldung deaktivieren",
"Disabled": "Deaktiviert",
@@ -183,7 +184,7 @@
"GUI / API HTTPS Certificate": "GUI / API HTTPS-Zertifikat",
"GUI Authentication Password": "Passwort für Zugang zur Benutzeroberfläche",
"GUI Authentication User": "Benutzername für Zugang zur Benutzeroberfläche",
"GUI Authentication: Set User and Password": "Authentifizierung für die Benutzeroberfläche: Geben Sie Benutzer und Passwort ein.",
"GUI Authentication: Set User and Password": "Authentifizierung für die Benutzeroberfläche: Geben Sie Benutzer und Passwort ein",
"GUI Listen Address": "Adresse der Benutzeroberfläche",
"GUI Override Directory": "GUI-Ersatz-Verzeichnis",
"GUI Theme": "GUI-Design",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Eingehende Datenratenbegrenzung (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Eine falsche Konfiguration kann den Ordnerinhalt beschädigen und Syncthing in einen unausführbaren Zustand versetzen.",
"Incorrect user name or password.": "Falscher Benutzername oder Passwort.",
"Info": "Info",
"Internally used paths:": "Intern verwendete Pfade:",
"Introduced By": "Verteilt von",
"Introducer": "Verteilergerät",


@@ -82,6 +82,7 @@
"Custom Range": "Custom na Saklaw",
"Danger!": "Panganib!",
"Database Location": "Lokasyon ng Database",
"Debug": "Debug",
"Debugging Facilities": "Mga Facility ng Pag-debug",
"Default": "Default",
"Default Configuration": "Default na Configuration",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Rate Limit ng Papasok (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Maaring sirain ng maling pagsasaayos ang nilalaman ng iyong mga folder at gawing inoperable ang Syncthing.",
"Incorrect user name or password.": "Maling user name o password.",
"Info": "Impormasyon",
"Internally used paths:": "Mga internal na ginamit na path:",
"Introduced By": "Ipinakilala Ni/Ng",
"Introducer": "Tagapagpakilala",
@@ -227,6 +229,7 @@
"Learn more": "Matuto pa",
"Learn more at {%url%}": "Matuto pa sa {{url}}",
"Limit": "Limitasyon",
"Limit Bandwidth in LAN": "Limitahan ang Bandwidth sa LAN",
"Listener Failures": "Mga Pagbibigo ng Listener",
"Listener Status": "Status ng Listener",
"Listeners": "Mga Listener",


@@ -82,6 +82,7 @@
"Custom Range": "Plage personnalisée",
"Danger!": "Attention !",
"Database Location": "Emplacement de la base de données",
"Debug": "Débogage",
"Debugging Facilities": "Outils de débogage",
"Default": "Par défaut",
"Default Configuration": "Préférences pour les créations (non rétroactif)",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Incorrect user name or password.": "Nom d'utilisateur ou mot de passe incorrect.",
"Info": "Informations",
"Internally used paths:": "Chemins utilisés en interne :",
"Introduced By": "Introduit par",
"Introducer": "Appareil introducteur",
@@ -322,7 +324,7 @@
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Les versions préliminaires contiennent les dernières fonctionnalités et derniers correctifs. Elles sont identiques aux traditionnelles mises à jour bimensuelles.",
"Remote Devices": "Autres appareils",
"Remote GUI": "IHM distant",
"Remove": "Supprimer",
"Remove": "Enlever",
"Remove Device": "Supprimer l'appareil",
"Remove Folder": "Supprimer le partage",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant du partage. Doit être le même sur tous les appareils concernés (généré aléatoirement, mais modifiable à la création, par exemple pour faire entrer un appareil dans un partage pré-existant actuellement non connecté mais dont on connaît déjà l'ID, ou s'il n'y a personne à l'autre bout pour vous inviter à y participer).",


@@ -82,6 +82,7 @@
"Custom Range": "Raon Saincheaptha",
"Danger!": "Contúirt!",
"Database Location": "Suíomh an Bhunachair Sonraí",
"Debug": "Dífhabhtú",
"Debugging Facilities": "Áiseanna Dífhabhtaithe",
"Default": "Réamhshocrú",
"Default Configuration": "Cumraíocht Réamhshocraithe",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Teorainn Ráta Isteach (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "D'fhéadfadh cumraíocht mhícheart dochar a dhéanamh d'inneachar d'fhillteáin agus sioncronú a dhéanamh do-oibrithe.",
"Incorrect user name or password.": "Ainm úsáideora nó pasfhocal mícheart.",
"Info": "Eolas",
"Internally used paths:": "Cosáin a úsáidtear go hinmheánach:",
"Introduced By": "Tugtha isteach ag",
"Introducer": "Réamhrá",


@@ -82,6 +82,7 @@
"Custom Range": "사용자 설정 기간",
"Danger!": "위험!",
"Database Location": "데이터베이스 위치",
"Debug": "디버그",
"Debugging Facilities": "디버그 기능",
"Default": "기본값",
"Default Configuration": "기본 설정",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "수신 속도 제한(KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "잘못된 설정은 폴더의 내용을 훼손하거나 Syncthing을 작동하지 못하게 할 수 있습니다.",
"Incorrect user name or password.": "사용자 또는 비밀번호가 올바르지 않습니다.",
"Info": "정보",
"Internally used paths:": "내부적으로 사용되는 경로:",
"Introduced By": "소개한 기기",
"Introducer": "소개자",


@@ -82,6 +82,7 @@
"Custom Range": "Aangepast bereik",
"Danger!": "Let op!",
"Database Location": "Locatie van database",
"Debug": "Debuggen",
"Debugging Facilities": "Debugmogelijkheden",
"Default": "Standaard",
"Default Configuration": "Standaardconfiguratie",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Begrenzing downloadsnelheid (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Verkeerde configuratie kan de inhoud van je map beschadigen en Syncthing onbruikbaar maken.",
"Incorrect user name or password.": "Onjuiste gebruikersnaam of wachtwoord.",
"Info": "Info",
"Internally used paths:": "Intern gebruikte paden:",
"Introduced By": "Geïntroduceerd door",
"Introducer": "Introductie-apparaat",


@@ -82,6 +82,7 @@
"Custom Range": "Niestandardowy okres",
"Danger!": "Niebezpieczeństwo!",
"Database Location": "Miejsce przechowywania bazy danych",
"Debug": "Diagnozowanie błędów",
"Debugging Facilities": "Narzędzia do debugowania",
"Default": "Domyślnie",
"Default Configuration": "Domyślne ustawienia",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Ograniczenie prędkości pobierania (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Niepoprawne ustawienia mogą uszkodzić zawartość folderów oraz sprawić, że Syncthing przestanie działać.",
"Incorrect user name or password.": "Nieprawidłowa nazwa użytkownika lub hasło.",
"Info": "Informacje",
"Internally used paths:": "Ścieżki używane wewnętrznie:",
"Introduced By": "Wprowadzony przez",
"Introducer": "Wprowadzający",


@@ -82,6 +82,7 @@
"Custom Range": "Intervalo de tempo",
"Danger!": "Perigo!",
"Database Location": "Localização do banco de dados",
"Debug": "Depuração",
"Debugging Facilities": "Facilidades de depuração",
"Default": "Padrão",
"Default Configuration": "Configuração Padrão",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Limite de velocidade de recepção (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "A configuração incorreta poderá causar danos aos seus dados e tornar o Syncthing inoperante.",
"Incorrect user name or password.": "Nome de usuário ou senha incorretos.",
"Info": "Informações",
"Internally used paths:": "Caminhos usados internamente:",
"Introduced By": "Introduzido por",
"Introducer": "Apresentador",
@@ -227,6 +229,7 @@
"Learn more": "Saiba mais",
"Learn more at {%url%}": "Saiba mais em {{url}}",
"Limit": "Limite",
"Limit Bandwidth in LAN": "Limitar largura de banda na LAN",
"Listener Failures": "Falhas de Escuta",
"Listener Status": "Status da Escuta",
"Listeners": "Escutadores",


@@ -82,6 +82,7 @@
"Custom Range": "Anpassat intervall",
"Danger!": "Fara!",
"Database Location": "Databasplats",
"Debug": "Felsökning",
"Debugging Facilities": "Felsökningsfunktioner",
"Default": "Standard",
"Default Configuration": "Standardkonfiguration",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Ingående hastighetsgräns (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Inkorrekt konfiguration kan skada innehållet i mappen och få Syncthing att sluta fungera.",
"Incorrect user name or password.": "Felaktigt användarnamn eller lösenord.",
"Info": "Info",
"Internally used paths:": "Internt använda sökvägar:",
"Introduced By": "Introducerad av",
"Introducer": "Introduktör",


@@ -82,6 +82,7 @@
"Custom Range": "Özel Aralık",
"Danger!": "Tehlike!",
"Database Location": "Veritabanı Konumu",
"Debug": "Hata Ayıklama",
"Debugging Facilities": "Hata Ayıklama Olanakları",
"Default": "Varsayılan",
"Default Configuration": "Varsayılan Yapılandırma",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Gelen Hız Sınırı (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Yanlış yapılandırma klasör içeriklerinize zarar verebilir ve Syncthing'i çalışamaz hale getirebilir.",
"Incorrect user name or password.": "Yanlış kullanıcı adı ya da parola.",
"Info": "Bilgi",
"Internally used paths:": "Dahili olarak kullanılan yollar:",
"Introduced By": "Tanıtan",
"Introducer": "Tanıtıcı",


@@ -82,6 +82,7 @@
"Custom Range": "自定义范围",
"Danger!": "危险!",
"Database Location": "数据库位置",
"Debug": "调试",
"Debugging Facilities": "调试功能",
"Default": "默认",
"Default Configuration": "默认配置",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "传入速率限制KiB/s",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "不正确的配置可能会损坏您的文件夹内容,并导致 Syncthing 无法运行。",
"Incorrect user name or password.": "用户名或密码不正确。",
"Info": "信息",
"Internally used paths:": "内部使用的路径:",
"Introduced By": "介绍自",
"Introducer": "作为中介",


@@ -159,7 +159,7 @@
<div class="row">
<span class="col-md-8" translate>Incoming Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" />
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxRecvKbps.$valid && deviceEditor.maxRecvKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>
@@ -168,7 +168,7 @@
<div class="row">
<span class="col-md-8" translate>Outgoing Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" />
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxSendKbps.$valid && deviceEditor.maxSendKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>


@@ -146,7 +146,7 @@
<div class="form-group" ng-if="internalVersioningEnabled()" ng-class="{'has-error': folderEditor.cleanupIntervalS.$invalid && folderEditor.cleanupIntervalS.$dirty}">
<label translate for="cleanupIntervalS">Cleanup Interval</label>
<div class="input-group">
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" aria-required="true" />
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" step="3600" aria-required="true" />
<div class="input-group-addon" translate>seconds</div>
</div>
<p class="help-block">


@@ -194,7 +194,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxRecvKbps.$invalid && settingsEditor.MaxRecvKbps.$dirty}">
<label translate for="MaxRecvKbps">Incoming Rate Limit (KiB/s)</label>
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" />
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxRecvKbps.$error.min && settingsEditor.MaxRecvKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>
@@ -203,7 +203,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxSendKbps.$invalid && settingsEditor.MaxSendKbps.$dirty}">
<label translate for="MaxSendKbps">Outgoing Rate Limit (KiB/s)</label>
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" />
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxSendKbps.$error.min && settingsEditor.MaxSendKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>


@@ -10,6 +10,7 @@ import (
"database/sql"
"embed"
"io/fs"
"log/slog"
"net/url"
"path/filepath"
"strconv"
@@ -19,11 +20,12 @@ import (
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 3
const currentSchemaVersion = 4
//go:embed sql/**
var embedded embed.FS
@@ -81,13 +83,20 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
},
}
tx, err := db.sql.Beginx()
if err != nil {
return nil, wrap(err)
}
defer tx.Rollback()
for _, script := range schemaScripts {
if err := db.runScripts(script); err != nil {
if err := db.runScripts(tx, script); err != nil {
return nil, wrap(err)
}
}
ver, _ := db.getAppliedSchemaVersion()
ver, _ := db.getAppliedSchemaVersion(tx)
shouldVacuum := false
if ver.SchemaVersion > 0 {
filter := func(scr string) bool {
scr = filepath.Base(scr)
@@ -99,20 +108,37 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
if err != nil {
return false
}
return int(n) > ver.SchemaVersion
if int(n) > ver.SchemaVersion {
slog.Info("Applying database migration", slogutil.FilePath(db.baseName), slog.String("script", scr))
return true
}
return false
}
for _, script := range migrationScripts {
if err := db.runScripts(script, filter); err != nil {
if err := db.runScripts(tx, script, filter); err != nil {
return nil, wrap(err)
}
shouldVacuum = true
}
}
// Set the current schema version, if not already set
if err := db.setAppliedSchemaVersion(currentSchemaVersion); err != nil {
if err := db.setAppliedSchemaVersion(tx, currentSchemaVersion); err != nil {
return nil, wrap(err)
}
if err := tx.Commit(); err != nil {
return nil, wrap(err)
}
if shouldVacuum {
// We applied migrations and should take the opportunity to vacuum
// the database.
if err := db.vacuumAndOptimize(); err != nil {
return nil, wrap(err)
}
}
return db, nil
}
@@ -188,6 +214,20 @@ func (s *baseDB) expandTemplateVars(tpl string) string {
return sb.String()
}
func (s *baseDB) vacuumAndOptimize() error {
stmts := []string{
"VACUUM;",
"PRAGMA optimize;",
"PRAGMA wal_checkpoint(truncate);",
}
for _, stmt := range stmts {
if _, err := s.sql.Exec(stmt); err != nil {
return wrap(err, stmt)
}
}
return nil
}
type stmt interface {
Exec(args ...any) (sql.Result, error)
Get(dest any, args ...any) error
@@ -204,18 +244,12 @@ func (f failedStmt) Get(_ any, _ ...any) error { return f.err }
func (f failedStmt) Queryx(_ ...any) (*sqlx.Rows, error) { return nil, f.err }
func (f failedStmt) Select(_ any, _ ...any) error { return f.err }
func (s *baseDB) runScripts(glob string, filter ...func(s string) bool) error {
func (s *baseDB) runScripts(tx *sqlx.Tx, glob string, filter ...func(s string) bool) error {
scripts, err := fs.Glob(embedded, glob)
if err != nil {
return wrap(err)
}
tx, err := s.sql.Begin()
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
nextScript:
for _, scr := range scripts {
for _, fn := range filter {
@@ -238,7 +272,7 @@ nextScript:
}
}
return wrap(tx.Commit())
return nil
}
type schemaVersion struct {
@@ -251,20 +285,20 @@ func (s *schemaVersion) AppliedTime() time.Time {
return time.Unix(0, s.AppliedAt)
}
func (s *baseDB) setAppliedSchemaVersion(ver int) error {
_, err := s.stmt(`
func (s *baseDB) setAppliedSchemaVersion(tx *sqlx.Tx, ver int) error {
_, err := tx.Exec(`
INSERT OR IGNORE INTO schemamigrations (schema_version, applied_at, syncthing_version)
VALUES (?, ?, ?)
`).Exec(ver, time.Now().UnixNano(), build.LongVersion)
`, ver, time.Now().UnixNano(), build.LongVersion)
return wrap(err)
}
func (s *baseDB) getAppliedSchemaVersion() (schemaVersion, error) {
func (s *baseDB) getAppliedSchemaVersion(tx *sqlx.Tx) (schemaVersion, error) {
var v schemaVersion
err := s.stmt(`
err := tx.Get(&v, `
SELECT schema_version as schemaversion, applied_at as appliedat, syncthing_version as syncthingversion FROM schemamigrations
ORDER BY schema_version DESC
LIMIT 1
`).Get(&v)
`)
return v, wrap(err)
}


@@ -7,7 +7,6 @@
package sqlite
import (
"context"
"fmt"
"testing"
"time"
@@ -21,7 +20,7 @@ import (
var globalFi protocol.FileInfo
func BenchmarkUpdate(b *testing.B) {
db, err := OpenTemp()
db, err := Open(b.TempDir())
if err != nil {
b.Fatal(err)
}
@@ -30,19 +29,20 @@ func BenchmarkUpdate(b *testing.B) {
b.Fatal(err)
}
})
svc := db.Service(time.Hour).(*Service)
fs := make([]protocol.FileInfo, 100)
t0 := time.Now()
seed := 0
size := 1000
const numBlocks = 500
fdb, err := db.getFolderDB(folderID, true)
if err != nil {
b.Fatal(err)
}
size := 10000
for size < 200_000 {
t0 := time.Now()
if err := svc.periodic(context.Background()); err != nil {
b.Fatal(err)
}
b.Log("garbage collect in", time.Since(t0))
for {
local, err := db.CountLocal(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -53,17 +53,28 @@ func BenchmarkUpdate(b *testing.B) {
}
fs := make([]protocol.FileInfo, 1000)
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.Run(fmt.Sprintf("Insert100Loc@%d", size), func(b *testing.B) {
var files, blocks int
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM files`).Scan(&files); err != nil {
b.Fatal(err)
}
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM blocks`).Scan(&blocks); err != nil {
b.Fatal(err)
}
d := time.Since(t0)
b.Logf("t=%s, files=%d, blocks=%d, files/s=%.01f, blocks/s=%.01f", d, files, blocks, float64(files)/d.Seconds(), float64(blocks)/d.Seconds())
b.Run(fmt.Sprintf("n=Insert100Loc/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
@@ -72,7 +83,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepBlocks100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepBlocks100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -86,7 +97,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepSame100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepSame100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Version = fs[i].Version.Update(42)
@@ -98,7 +109,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("Insert100Rem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=Insert100Rem/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -112,7 +123,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetGlobal100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetGlobal100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
_, ok, err := db.GetGlobalFile(folderID, fs[i].Name)
@@ -127,7 +138,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalSequenced@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalSequenced/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
cur, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
@@ -146,7 +157,21 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetDeviceSequenceLoc@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=AllLocalBlocksWithHash/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllLocalBlocksWithHash(folderID, globalFi.Blocks[0].Hash)
for range it {
count++
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "blocks/s")
})
b.Run(fmt.Sprintf("n=GetDeviceSequenceLoc/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -154,7 +179,7 @@ func BenchmarkUpdate(b *testing.B) {
}
}
})
b.Run(fmt.Sprintf("GetDeviceSequenceRem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetDeviceSequenceRem/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.DeviceID{42})
if err != nil {
@@ -163,7 +188,7 @@ func BenchmarkUpdate(b *testing.B) {
}
})
b.Run(fmt.Sprintf("RemoteNeed@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RemoteNeed/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0)
@@ -178,7 +203,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalNeed100Largest@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalNeed100Largest/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderLargestFirst, 100, 0)
@@ -193,7 +218,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
size <<= 1
size += 1000
}
}
@@ -202,7 +227,7 @@ func TestBenchmarkDropAllRemote(t *testing.T) {
t.Skip("slow test")
}
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -17,7 +17,7 @@ import (
func TestNeed(t *testing.T) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -115,7 +115,7 @@ func TestDropRecalcsGlobal(t *testing.T) {
func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -181,7 +181,7 @@ func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
func TestNeedDeleted(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -224,7 +224,7 @@ func TestNeedDeleted(t *testing.T) {
func TestDontNeedIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -271,7 +271,7 @@ func TestDontNeedIgnored(t *testing.T) {
func TestDontNeedRemoteInvalid(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -322,7 +322,7 @@ func TestDontNeedRemoteInvalid(t *testing.T) {
func TestRemoteDontNeedLocalIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -364,7 +364,7 @@ func TestRemoteDontNeedLocalIgnored(t *testing.T) {
func TestLocalDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -406,7 +406,7 @@ func TestLocalDontNeedDeletedMissing(t *testing.T) {
func TestRemoteDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -474,7 +474,7 @@ func TestRemoteDontNeedDeletedMissing(t *testing.T) {
func TestNeedRemoteSymlinkAndDir(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -517,7 +517,7 @@ func TestNeedRemoteSymlinkAndDir(t *testing.T) {
func TestNeedPagination(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -583,7 +583,7 @@ func TestDeletedAfterConflict(t *testing.T) {
// 23NHXGS FILE TreeSizeFreeSetup.exe 445 --- 2025-06-23T03:16:10.2804841Z 13832808 -nG---- HZJYWFM:1751507473 7B4kLitF
// JKX6ZDN FILE TreeSizeFreeSetup.exe 320 --- 2025-06-23T03:16:10.2804841Z 13832808 ------- JKX6ZDN:1750992570 7B4kLitF
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -15,7 +15,7 @@ import (
func TestIndexIDs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -16,7 +16,7 @@ import (
func TestBlocks(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -89,7 +89,7 @@ func TestBlocks(t *testing.T) {
func TestBlocksDeleted(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -141,7 +141,7 @@ func TestBlocksDeleted(t *testing.T) {
func TestRemoteSequence(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -14,7 +14,7 @@ import (
func TestMtimePairs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -82,6 +82,10 @@ func Open(path string, opts ...Option) (*DB, error) {
opt(db)
}
if err := db.cleanDroppedFolders(); err != nil {
slog.Warn("Failed to clean dropped folders", slogutil.Error(err))
}
return db, nil
}
@@ -120,27 +124,13 @@ func OpenForMigration(path string) (*DB, error) {
folderDBOpener: openFolderDBForMigration,
}
// // Touch device IDs that should always exist and have a low index
// // numbers, and will never change
// db.localDeviceIdx, _ = db.deviceIdxLocked(protocol.LocalDeviceID)
// db.tplInput["LocalDeviceIdx"] = db.localDeviceIdx
if err := db.cleanDroppedFolders(); err != nil {
slog.Warn("Failed to clean dropped folders", slogutil.Error(err))
}
return db, nil
}
func OpenTemp() (*DB, error) {
// SQLite has a memory mode, but it works differently with concurrency
// compared to what we need with the WAL mode. So, no memory databases
// for now.
dir, err := os.MkdirTemp("", "syncthing-db")
if err != nil {
return nil, wrap(err)
}
path := filepath.Join(dir, "db")
slog.Debug("Test DB", slogutil.FilePath(path))
return Open(path)
}
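With OpenTemp gone, tests create their database under t.TempDir(), which the testing framework deletes on cleanup. A minimal sketch of the replacement pattern (test name hypothetical):

	func TestExample(t *testing.T) {
		t.Parallel()
		db, err := Open(t.TempDir()) // one throwaway database per test
		if err != nil {
			t.Fatal(err)
		}
		defer db.Close()
		// ... exercise db
	}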
func (s *DB) Close() error {
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()

View File

@@ -36,7 +36,7 @@ const (
func TestBasics(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -81,7 +81,12 @@ func TestBasics(t *testing.T) {
)
t.Run("SchemaVersion", func(t *testing.T) {
ver, err := sdb.getAppliedSchemaVersion()
tx, err := sdb.sql.Beginx()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
ver, err := sdb.getAppliedSchemaVersion(tx)
if err != nil {
t.Fatal(err)
}
@@ -427,7 +432,7 @@ func TestBasics(t *testing.T) {
func TestPrefixGlobbing(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -496,7 +501,7 @@ func TestPrefixGlobbing(t *testing.T) {
func TestPrefixGlobbingStar(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -529,7 +534,7 @@ func TestPrefixGlobbingStar(t *testing.T) {
}
func TestAvailability(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -596,7 +601,7 @@ func TestAvailability(t *testing.T) {
}
func TestDropFilesNamed(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -640,7 +645,7 @@ func TestDropFilesNamed(t *testing.T) {
}
func TestDropFolder(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -700,7 +705,7 @@ func TestDropFolder(t *testing.T) {
}
func TestDropDevice(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -764,7 +769,7 @@ func TestDropDevice(t *testing.T) {
}
func TestDropAllFiles(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -926,7 +931,7 @@ func TestConcurrentUpdateSelect(t *testing.T) {
func TestAllForBlocksHash(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -988,7 +993,7 @@ func TestAllForBlocksHash(t *testing.T) {
func TestBlocklistGarbageCollection(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1067,7 +1072,7 @@ func TestBlocklistGarbageCollection(t *testing.T) {
func TestInsertLargeFile(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1119,7 +1124,7 @@ func TestStrangeDeletedGlobalBug(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -8,9 +8,14 @@ package sqlite
import (
"fmt"
"log/slog"
"os"
"path/filepath"
"runtime"
"slices"
"strings"
"github.com/syncthing/syncthing/internal/slogutil"
)
func (s *DB) DropFolder(folder string) error {
@@ -41,6 +46,37 @@ func (s *DB) ListFolders() ([]string, error) {
return res, wrap(err)
}
// cleanDroppedFolders removes old database files for folders that no longer
// exist in the main database.
func (s *DB) cleanDroppedFolders() error {
// All expected folder databases.
var names []string
err := s.stmt(`SELECT database_name FROM folders`).Select(&names)
if err != nil {
return wrap(err)
}
// All folder database files on disk.
files, err := filepath.Glob(filepath.Join(s.pathBase, "folder.*"))
if err != nil {
return wrap(err)
}
// Any files that don't match a name in the database are removed.
for _, file := range files {
base := filepath.Base(file)
inDB := slices.ContainsFunc(names, func(name string) bool { return strings.HasPrefix(base, name) })
if !inDB {
if err := os.Remove(file); err != nil {
slog.Warn("Failed to remove database file for old, dropped folder", slogutil.FilePath(base))
} else {
slog.Info("Cleaned out database file for old, dropped folder", slogutil.FilePath(base))
}
}
}
return nil
}
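Note that matching on prefix rather than exact equality presumably also covers SQLite sidecar files: a live folder database keeps its -wal and -shm companions, while the sidecars of a dropped folder share its prefix and are removed along with the main file.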
// wrap returns the error wrapped with the calling function name and
// optional extra context strings as prefix. A nil error wraps to nil.
func wrap(err error, context ...string) error {

View File

@@ -114,11 +114,9 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
if err != nil {
return wrap(err, "marshal blocklist")
}
if _, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
if res, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
return wrap(err, "insert blocklist")
}
if device == protocol.LocalDeviceID {
} else if aff, _ := res.RowsAffected(); aff != 0 && device == protocol.LocalDeviceID {
// Insert all blocks
if err := s.insertBlocksLocked(txp, f.BlocksHash, f.Blocks); err != nil {
return wrap(err, "insert blocks")

View File

@@ -2,7 +2,7 @@ These SQL scripts are embedded in the binary.
Scripts in `schema/` are run at every startup, in alphanumerical order.
Scripts in `migrations/` are run when a migration is needed; the must begin
Scripts in `migrations/` are run when a migration is needed; they must begin
with a number that equals the schema version that results from that
migration. Migrations are not run on initial database creation, so the
scripts in `schema/` should create the latest version.
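As an illustration (file names hypothetical), a layout consistent with these rules:

	schema/
	  01-tables.sql            run at every startup, creates the latest schema
	migrations/
	  04-recreate-blocks.sql   runs only when migrating up to schema version 4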

View File

@@ -0,0 +1,51 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Copy blocks to new table with fewer indexes
DROP TABLE IF EXISTS blocks_v4
;
CREATE TABLE blocks_v4 (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;
INSERT INTO blocks_v4 (hash, blocklist_hash, idx, offset, size)
SELECT hash, blocklist_hash, idx, offset, size FROM blocks ORDER BY hash, blocklist_hash, idx
;
DROP TABLE blocks
;
ALTER TABLE blocks_v4 RENAME TO blocks
;
-- Copy blocklists to new table with fewer indexes
DROP TABLE IF EXISTS blocklists_v4
;
CREATE TABLE blocklists_v4 (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT, WITHOUT ROWID
;
INSERT INTO blocklists_v4 (blocklist_hash, blprotobuf)
SELECT blocklist_hash, blprotobuf FROM blocklists ORDER BY blocklist_hash
;
DROP TABLE blocklists
;
ALTER TABLE blocklists_v4 RENAME TO blocklists
;

View File

@@ -14,21 +14,21 @@
CREATE TABLE IF NOT EXISTS blocklists (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT
) STRICT, WITHOUT ROWID
;
-- Blocks
--
-- For all local files we store the blocks individually for quick lookup. A
-- given block can exist in multiple blocklists and at multiple offsets in a
-- blocklist.
-- blocklist. We eschew most indexes here as inserting millions of blocks is
-- common and performance is critical.
CREATE TABLE IF NOT EXISTS blocks (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx),
FOREIGN KEY(blocklist_hash) REFERENCES blocklists(blocklist_hash) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
) STRICT
PRIMARY KEY(hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;
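WITHOUT ROWID makes the primary key the table's physical order, so the composite key doubles as the only index; a lookup by block hash is still served by the leftmost key column. A hypothetical query against this schema (fdb and blockHash stand in for a folder database handle and a block hash as used elsewhere in this diff):

	rows, err := fdb.sql.Queryx(`SELECT blocklist_hash, idx, offset, size FROM blocks WHERE hash = ?`, blockHash)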

View File

@@ -17,7 +17,7 @@ import (
func TestNamespacedInt(t *testing.T) {
t.Parallel()
ldb, err := sqlite.OpenTemp()
ldb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -7,6 +7,7 @@
package slogutil
import (
"io"
"log/slog"
"os"
"strings"
@@ -21,10 +22,20 @@ var (
}
slogDef = slog.New(&formattingHandler{
recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
out: os.Stdout,
out: logWriter(),
})
)
func logWriter() io.Writer {
if os.Getenv("LOGGER_DISCARD") != "" {
// Hack to completely disable logging, for example when running
// benchmarks.
return io.Discard
}
return os.Stdout
}
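Since slogDef is chosen at package initialization, the variable must be set in the process environment before start; for example, a benchmark run could be invoked as LOGGER_DISCARD=1 go test -bench=. to keep timing output free of log noise.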
func init() {
slog.SetDefault(slogDef)

View File

@@ -130,7 +130,7 @@ func (c *mockClock) wind(t time.Duration) {
func TestTokenManager(t *testing.T) {
t.Parallel()
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -82,7 +82,7 @@ func TestStopAfterBrokenConfig(t *testing.T) {
}
w := config.Wrap("/dev/null", cfg, protocol.LocalDeviceID, events.NoopLogger)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -888,7 +888,7 @@ func TestHtmlFormLogin(t *testing.T) {
t.Errorf("Unexpected non-200 return code %d at %s", resp.StatusCode, noAuthPath)
}
if hasSessionCookie(resp.Cookies()) {
t.Errorf("Unexpected session cookie at " + noAuthPath)
t.Errorf("Unexpected session cookie at %s", noAuthPath)
}
})
}
@@ -1049,7 +1049,7 @@ func startHTTPWithShutdownTimeout(t *testing.T, cfg config.Wrapper, shutdownTime
// Instantiate the API service
urService := ur.New(cfg, m, connections, false)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1567,7 +1567,7 @@ func TestEventMasks(t *testing.T) {
cfg := newMockedConfig()
defSub := new(eventmocks.BufferedSubscription)
diskSub := new(eventmocks.BufferedSubscription)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -49,8 +49,8 @@ var (
replaceTags = map[string]string{
"sqlite_omit_load_extension": "",
"sqlite_dbstat": "",
"osusergo": "",
"netgo": "",
"osusergo": "",
}
)
@@ -134,6 +134,7 @@ func TagsList() []string {
tags = tags[1:]
}
tags = slices.Compact(tags)
return tags
}

View File

@@ -1809,60 +1809,6 @@ func (fake *Wrapper) UnsubscribeArgsForCall(i int) config.Committer {
func (fake *Wrapper) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.configPathMutex.RLock()
defer fake.configPathMutex.RUnlock()
fake.defaultDeviceMutex.RLock()
defer fake.defaultDeviceMutex.RUnlock()
fake.defaultFolderMutex.RLock()
defer fake.defaultFolderMutex.RUnlock()
fake.defaultIgnoresMutex.RLock()
defer fake.defaultIgnoresMutex.RUnlock()
fake.deviceMutex.RLock()
defer fake.deviceMutex.RUnlock()
fake.deviceListMutex.RLock()
defer fake.deviceListMutex.RUnlock()
fake.devicesMutex.RLock()
defer fake.devicesMutex.RUnlock()
fake.folderMutex.RLock()
defer fake.folderMutex.RUnlock()
fake.folderListMutex.RLock()
defer fake.folderListMutex.RUnlock()
fake.folderPasswordsMutex.RLock()
defer fake.folderPasswordsMutex.RUnlock()
fake.foldersMutex.RLock()
defer fake.foldersMutex.RUnlock()
fake.gUIMutex.RLock()
defer fake.gUIMutex.RUnlock()
fake.ignoredDeviceMutex.RLock()
defer fake.ignoredDeviceMutex.RUnlock()
fake.ignoredDevicesMutex.RLock()
defer fake.ignoredDevicesMutex.RUnlock()
fake.ignoredFolderMutex.RLock()
defer fake.ignoredFolderMutex.RUnlock()
fake.lDAPMutex.RLock()
defer fake.lDAPMutex.RUnlock()
fake.modifyMutex.RLock()
defer fake.modifyMutex.RUnlock()
fake.myIDMutex.RLock()
defer fake.myIDMutex.RUnlock()
fake.optionsMutex.RLock()
defer fake.optionsMutex.RUnlock()
fake.rawCopyMutex.RLock()
defer fake.rawCopyMutex.RUnlock()
fake.removeDeviceMutex.RLock()
defer fake.removeDeviceMutex.RUnlock()
fake.removeFolderMutex.RLock()
defer fake.removeFolderMutex.RUnlock()
fake.requiresRestartMutex.RLock()
defer fake.requiresRestartMutex.RUnlock()
fake.saveMutex.RLock()
defer fake.saveMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.subscribeMutex.RLock()
defer fake.subscribeMutex.RUnlock()
fake.unsubscribeMutex.RLock()
defer fake.unsubscribeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
//go:generate go tool counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
package config

View File

@@ -403,18 +403,6 @@ func (fake *Service) ServeReturnsOnCall(i int, result1 error) {
func (fake *Service) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.allAddressesMutex.RLock()
defer fake.allAddressesMutex.RUnlock()
fake.connectionStatusMutex.RLock()
defer fake.connectionStatusMutex.RUnlock()
fake.externalAddressesMutex.RLock()
defer fake.externalAddressesMutex.RUnlock()
fake.listenerStatusMutex.RLock()
defer fake.listenerStatusMutex.RUnlock()
fake.nATTypeMutex.RLock()
defer fake.nATTypeMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/service.go --fake-name Service . Service
//go:generate go tool counterfeiter -o mocks/service.go --fake-name Service . Service
package connections

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/manager.go --fake-name Manager . Manager
//go:generate go tool counterfeiter -o mocks/manager.go --fake-name Manager . Manager
package discover

View File

@@ -420,18 +420,6 @@ func (fake *Manager) StringReturnsOnCall(i int, result1 string) {
func (fake *Manager) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.cacheMutex.RLock()
defer fake.cacheMutex.RUnlock()
fake.childErrorsMutex.RLock()
defer fake.childErrorsMutex.RUnlock()
fake.errorMutex.RLock()
defer fake.errorMutex.RUnlock()
fake.lookupMutex.RLock()
defer fake.lookupMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
//go:generate go tool counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
// Package events provides event subscription and polling functionality.
package events

View File

@@ -160,10 +160,6 @@ func (fake *BufferedSubscription) SinceReturnsOnCall(i int, result1 []events.Eve
func (fake *BufferedSubscription) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.maskMutex.RLock()
defer fake.maskMutex.RUnlock()
fake.sinceMutex.RLock()
defer fake.sinceMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -1080,7 +1080,7 @@ func TestIssue4901(t *testing.T) {
fd, err := pats.fs.Create(".stignore")
if err != nil {
t.Fatalf(err.Error())
t.Fatal(err)
}
if _, err := fd.Write([]byte(stignore)); err != nil {
t.Fatal(err)
@@ -1102,7 +1102,7 @@ func TestIssue4901(t *testing.T) {
fd, err = pats.fs.Create("unicorn-lazor-death")
if err != nil {
t.Fatalf(err.Error())
t.Fatal(err)
}
if _, err := fd.Write([]byte(" ")); err != nil {
t.Fatal(err)

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
//go:generate go tool counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
package model

View File

@@ -165,10 +165,6 @@ func (fake *FolderSummaryService) SummaryReturnsOnCall(i int, result1 *model.Fol
func (fake *FolderSummaryService) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.summaryMutex.RLock()
defer fake.summaryMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -3837,112 +3837,6 @@ func (fake *Model) WatchErrorReturnsOnCall(i int, result1 error) {
func (fake *Model) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.addConnectionMutex.RLock()
defer fake.addConnectionMutex.RUnlock()
fake.allGlobalFilesMutex.RLock()
defer fake.allGlobalFilesMutex.RUnlock()
fake.availabilityMutex.RLock()
defer fake.availabilityMutex.RUnlock()
fake.bringToFrontMutex.RLock()
defer fake.bringToFrontMutex.RUnlock()
fake.closedMutex.RLock()
defer fake.closedMutex.RUnlock()
fake.clusterConfigMutex.RLock()
defer fake.clusterConfigMutex.RUnlock()
fake.completionMutex.RLock()
defer fake.completionMutex.RUnlock()
fake.connectedToMutex.RLock()
defer fake.connectedToMutex.RUnlock()
fake.connectionStatsMutex.RLock()
defer fake.connectionStatsMutex.RUnlock()
fake.currentFolderFileMutex.RLock()
defer fake.currentFolderFileMutex.RUnlock()
fake.currentGlobalFileMutex.RLock()
defer fake.currentGlobalFileMutex.RUnlock()
fake.currentIgnoresMutex.RLock()
defer fake.currentIgnoresMutex.RUnlock()
fake.delayScanMutex.RLock()
defer fake.delayScanMutex.RUnlock()
fake.deviceStatisticsMutex.RLock()
defer fake.deviceStatisticsMutex.RUnlock()
fake.dismissPendingDeviceMutex.RLock()
defer fake.dismissPendingDeviceMutex.RUnlock()
fake.dismissPendingFolderMutex.RLock()
defer fake.dismissPendingFolderMutex.RUnlock()
fake.downloadProgressMutex.RLock()
defer fake.downloadProgressMutex.RUnlock()
fake.folderErrorsMutex.RLock()
defer fake.folderErrorsMutex.RUnlock()
fake.folderProgressBytesCompletedMutex.RLock()
defer fake.folderProgressBytesCompletedMutex.RUnlock()
fake.folderStatisticsMutex.RLock()
defer fake.folderStatisticsMutex.RUnlock()
fake.getFolderVersionsMutex.RLock()
defer fake.getFolderVersionsMutex.RUnlock()
fake.globalDirectoryTreeMutex.RLock()
defer fake.globalDirectoryTreeMutex.RUnlock()
fake.globalSizeMutex.RLock()
defer fake.globalSizeMutex.RUnlock()
fake.indexMutex.RLock()
defer fake.indexMutex.RUnlock()
fake.indexUpdateMutex.RLock()
defer fake.indexUpdateMutex.RUnlock()
fake.loadIgnoresMutex.RLock()
defer fake.loadIgnoresMutex.RUnlock()
fake.localChangedFolderFilesMutex.RLock()
defer fake.localChangedFolderFilesMutex.RUnlock()
fake.localFilesMutex.RLock()
defer fake.localFilesMutex.RUnlock()
fake.localFilesSequencedMutex.RLock()
defer fake.localFilesSequencedMutex.RUnlock()
fake.localSizeMutex.RLock()
defer fake.localSizeMutex.RUnlock()
fake.needFolderFilesMutex.RLock()
defer fake.needFolderFilesMutex.RUnlock()
fake.needSizeMutex.RLock()
defer fake.needSizeMutex.RUnlock()
fake.onHelloMutex.RLock()
defer fake.onHelloMutex.RUnlock()
fake.overrideMutex.RLock()
defer fake.overrideMutex.RUnlock()
fake.pendingDevicesMutex.RLock()
defer fake.pendingDevicesMutex.RUnlock()
fake.pendingFoldersMutex.RLock()
defer fake.pendingFoldersMutex.RUnlock()
fake.receiveOnlySizeMutex.RLock()
defer fake.receiveOnlySizeMutex.RUnlock()
fake.remoteNeedFolderFilesMutex.RLock()
defer fake.remoteNeedFolderFilesMutex.RUnlock()
fake.remoteSequencesMutex.RLock()
defer fake.remoteSequencesMutex.RUnlock()
fake.requestMutex.RLock()
defer fake.requestMutex.RUnlock()
fake.requestGlobalMutex.RLock()
defer fake.requestGlobalMutex.RUnlock()
fake.resetFolderMutex.RLock()
defer fake.resetFolderMutex.RUnlock()
fake.restoreFolderVersionsMutex.RLock()
defer fake.restoreFolderVersionsMutex.RUnlock()
fake.revertMutex.RLock()
defer fake.revertMutex.RUnlock()
fake.scanFolderMutex.RLock()
defer fake.scanFolderMutex.RUnlock()
fake.scanFolderSubdirsMutex.RLock()
defer fake.scanFolderSubdirsMutex.RUnlock()
fake.scanFoldersMutex.RLock()
defer fake.scanFoldersMutex.RUnlock()
fake.sequenceMutex.RLock()
defer fake.sequenceMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.setIgnoresMutex.RLock()
defer fake.setIgnoresMutex.RUnlock()
fake.stateMutex.RLock()
defer fake.stateMutex.RUnlock()
fake.usageReportingStatsMutex.RLock()
defer fake.usageReportingStatsMutex.RUnlock()
fake.watchErrorMutex.RLock()
defer fake.watchErrorMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/model.go --fake-name Model . Model
//go:generate go tool counterfeiter -o mocks/model.go --fake-name Model . Model
package model
@@ -2549,6 +2548,12 @@ func (m *model) numHashers(folder string) int {
m.mut.RLock()
folderCfg := m.folderCfgs[folder]
numFolders := max(1, len(m.folderCfgs))
// MaxFolderConcurrency already limits the number of scanned folders, so
// prefer it over the overall number of folders to avoid limiting performance
// further for no reason.
if concurrency := m.cfg.Options().MaxFolderConcurrency(); concurrency > 0 {
numFolders = min(numFolders, concurrency)
}
m.mut.RUnlock()
if folderCfg.Hashers > 0 {
@@ -2556,16 +2561,17 @@ func (m *model) numHashers(folder string) int {
return folderCfg.Hashers
}
numCpus := runtime.GOMAXPROCS(-1)
if build.IsWindows || build.IsDarwin || build.IsIOS || build.IsAndroid {
// Interactive operating systems; don't load the system too heavily by
// default.
return 1
numCpus = max(1, numCpus/4)
}
// For other operating systems and architectures, lets try to get some
// work done... Divide the available CPU cores among the configured
// folders.
if perFolder := runtime.GOMAXPROCS(-1) / numFolders; perFolder > 0 {
if perFolder := numCpus / numFolders; perFolder > 0 {
return perFolder
}
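Condensed, the new calculation budgets CPUs per concurrently scanned folder rather than per configured folder, and quarters the budget on interactive systems instead of hard-coding one hasher. A standalone sketch of that logic (function name hypothetical; the final fallback is assumed from the surrounding code):

	func hashersPerFolder(numCPU, numFolders, maxFolderConcurrency int, interactive bool) int {
		numFolders = max(1, numFolders)
		if maxFolderConcurrency > 0 {
			numFolders = min(numFolders, maxFolderConcurrency)
		}
		if interactive {
			// Interactive operating systems; don't load the system too heavily.
			numCPU = max(1, numCPU/4)
		}
		if perFolder := numCPU / numFolders; perFolder > 0 {
			return perFolder
		}
		return 1 // assumed fallback when folders outnumber CPUs
	}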

View File

@@ -149,7 +149,7 @@ type testModel struct {
func newModel(t testing.TB, cfg config.Wrapper, id protocol.DeviceID, protectedFiles []string) *testModel {
t.Helper()
evLogger := events.NewLogger()
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -582,24 +582,6 @@ func (fake *mockedConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *mockedConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -1144,44 +1144,6 @@ func (fake *Connection) TypeReturnsOnCall(i int, result1 string) {
func (fake *Connection) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.closeMutex.RLock()
defer fake.closeMutex.RUnlock()
fake.closedMutex.RLock()
defer fake.closedMutex.RUnlock()
fake.clusterConfigMutex.RLock()
defer fake.clusterConfigMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.deviceIDMutex.RLock()
defer fake.deviceIDMutex.RUnlock()
fake.downloadProgressMutex.RLock()
defer fake.downloadProgressMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.indexMutex.RLock()
defer fake.indexMutex.RUnlock()
fake.indexUpdateMutex.RLock()
defer fake.indexUpdateMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.requestMutex.RLock()
defer fake.requestMutex.RUnlock()
fake.startMutex.RLock()
defer fake.startMutex.RUnlock()
fake.statisticsMutex.RLock()
defer fake.statisticsMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -584,24 +584,6 @@ func (fake *ConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *ConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,14 +4,12 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
// Prevents import loop, for internal testing
//go:generate counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
//go:generate go tool counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
//go:generate go run ../../script/prune_mocks.go -t mocked_connection_info_test.go
//go:generate counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
//go:generate counterfeiter -o mocks/connection.go --fake-name Connection . Connection
//go:generate go tool counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
//go:generate go tool counterfeiter -o mocks/connection.go --fake-name Connection . Connection
package protocol

View File

@@ -1,7 +1,6 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
//go:generate -command genxdr go run github.com/calmh/xdr/cmd/genxdr
//go:generate genxdr -o packets_xdr.go packets.go
//go:generate go tool genxdr -o packets_xdr.go packets.go
package protocol

View File

@@ -18,7 +18,7 @@ import (
)
func TestDeviceStat(t *testing.T) {
sdb, err := sqlite.OpenTemp()
sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -191,6 +191,10 @@ func (a *App) startup() error {
if _, ok := cfgFolders[folder]; !ok {
slog.Info("Cleaning metadata for dropped folder", "folder", folder)
a.sdb.DropFolder(folder)
} else {
// Open the folder database, causing it to apply migrations
// early when appropriate.
_, _ = a.sdb.GetDeviceSequence(folder, protocol.LocalDeviceID)
}
}

View File

@@ -72,7 +72,7 @@ func TestStartupFail(t *testing.T) {
}, protocol.LocalDeviceID, events.NoopLogger)
defer os.Remove(cfg.ConfigPath())
sdb, err := sqlite.OpenTemp()
sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -7,11 +7,13 @@
package syncthing
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"log/slog"
"net/http"
"os"
"sync"
"time"
@@ -156,7 +158,8 @@ func OpenDatabase(path string, deleteRetention time.Duration) (db.DB, error) {
}
// Attempts migration of the old (LevelDB-based) database type to the new (SQLite-based) type
func TryMigrateDatabase(deleteRetention time.Duration) error {
// This will attempt to provide a temporary API server during the migration, if `apiAddr` is not empty.
func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiAddr string) error {
oldDBDir := locations.Get(locations.LegacyDatabase)
if _, err := os.Lstat(oldDBDir); err != nil {
// No old database
@@ -170,6 +173,14 @@ func TryMigrateDatabase(deleteRetention time.Duration) error {
}
defer be.Close()
// Start a temporary API server during the migration
if apiAddr != "" {
api := migratingAPI{addr: apiAddr}
apiCtx, cancel := context.WithCancel(ctx)
defer cancel()
go api.Serve(apiCtx)
}
sdb, err := sqlite.OpenForMigration(locations.Get(locations.Database))
if err != nil {
return err
@@ -197,12 +208,20 @@ func TryMigrateDatabase(deleteRetention time.Duration) error {
var writeErr error
var wg sync.WaitGroup
wg.Add(1)
writerDone := make(chan struct{})
go func() {
defer wg.Done()
defer close(writerDone)
var batch []protocol.FileInfo
files, blocks := 0, 0
t0 := time.Now()
t1 := time.Now()
if writeErr = sdb.DropFolder(folder); writeErr != nil {
slog.Error("Failed database drop", slogutil.Error(writeErr))
return
}
for fi := range fis {
batch = append(batch, fi)
files++
@@ -210,13 +229,14 @@ func TryMigrateDatabase(deleteRetention time.Duration) error {
if len(batch) == 1000 {
writeErr = sdb.Update(folder, protocol.LocalDeviceID, batch)
if writeErr != nil {
slog.Error("Failed database write", slogutil.Error(writeErr))
return
}
batch = batch[:0]
if time.Since(t1) > 10*time.Second {
d := time.Since(t0) + 1
t1 = time.Now()
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "filesrate", float64(files)/d.Seconds())
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "blocksrate", float64(blocks)/d.Seconds(), "filesrate", float64(files)/d.Seconds())
}
}
}
@@ -244,8 +264,12 @@ func TryMigrateDatabase(deleteRetention time.Duration) error {
// criteria in the database
return true
}
fis <- fi
return true
select {
case fis <- fi:
return true
case <-writerDone:
return false
}
})
close(fis)
snap.Release()
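The select above removes a potential deadlock: if the writer goroutine exits early on a database error, nothing would ever drain fis and the producer would block forever on the send; closing writerDone lets the iterator observe the failure and stop.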
@@ -271,3 +295,27 @@ func TryMigrateDatabase(deleteRetention time.Duration) error {
slog.Info("Migration complete", "files", totFiles, "blocks", totBlocks/1000, "duration", time.Since(t0).Truncate(time.Second))
return nil
}
type migratingAPI struct {
addr string
}
func (m migratingAPI) Serve(ctx context.Context) error {
srv := &http.Server{
Addr: m.addr,
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("*** Database migration in progress ***\n\n"))
for _, line := range slogutil.GlobalRecorder.Since(time.Time{}) {
line.WriteTo(w)
}
}),
}
go func() {
slog.InfoContext(ctx, "Starting temporary GUI/API during migration", slogutil.Address(m.addr))
err := srv.ListenAndServe()
slog.InfoContext(ctx, "Temporary GUI/API closed", slogutil.Address(m.addr), slogutil.Error(err))
}()
<-ctx.Done()
return srv.Close()
}

View File

@@ -188,6 +188,13 @@ type Report struct {
DistDist string `json:"distDist" metric:"distribution,gaugeVec:distribution"`
DistOS string `json:"distOS" metric:"distribution,gaugeVec:os"`
DistArch string `json:"distArch" metric:"distribution,gaugeVec:arch"`
// Database counts
Database struct {
ModernCSQLite bool `json:"modernCSQLite" metric:"database_engine{engine=modernc-sqlite},gauge"`
MattnSQLite bool `json:"mattnSQLite" metric:"database_engine{engine=mattn-sqlite},gauge"`
LevelDB bool `json:"levelDB" metric:"database_engine{engine=leveldb},gauge"`
} `json:"database"`
}
func New() *Report {

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "STDISCOSRV" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "STRELAYSRV" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING" "1" "Aug 11, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing \- Syncthing
.SH SYNOPSIS

View File

@@ -4,8 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:build ignore
// +build ignore
//go:build tools
// +build tools
package main

View File

@@ -35,7 +35,7 @@ func main() {
if err != nil {
log.Fatal(err)
}
err = exec.Command("go", "run", "golang.org/x/tools/cmd/goimports", "-w", path).Run()
err = exec.Command("go", "tool", "goimports", "-w", path).Run()
if err != nil {
log.Fatal(err)
}

View File

@@ -1,15 +0,0 @@
// This file is never built. It serves to establish dependencies on tools
// used by go generate and build.go. See
// https://github.com/golang/go/wiki/Modules#how-can-i-track-tool-dependencies-for-a-module
//go:build tools
// +build tools
package tools
import (
_ "github.com/calmh/xdr"
_ "github.com/coreos/go-semver/semver"
_ "github.com/maxbrunsfeld/counterfeiter/v6"
_ "golang.org/x/tools/cmd/goimports"
)