Commit Graph

7941 Commits

Author SHA1 Message Date
Jakob Borg
25ae01b0d7 chore(sqlite): skip database GC entirely when it's provably unnecessary (#10379)
Store the sequence number of the last GC sweep in a KV. Next time, if it
matches we can just skip GC because nothing has been added or removed.
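
A minimal sketch of the idea, with a placeholder schema and hypothetical `getKV`/`putKV` helpers rather than Syncthing's actual API: remember the highest sequence seen at the end of a sweep and skip the next sweep if it hasn't moved.

```go
package gcsketch

import "database/sql"

// maybeGC skips the sweep entirely when the highest sequence number matches
// the one recorded after the previous sweep, i.e. nothing was added or removed.
// Table and column names are placeholders, not Syncthing's real schema.
func maybeGC(db *sql.DB, getKV func(key string) (int64, bool), putKV func(key string, val int64) error) error {
	var cur int64
	if err := db.QueryRow(`SELECT COALESCE(MAX(sequence), 0) FROM files`).Scan(&cur); err != nil {
		return err
	}
	if last, ok := getKV("lastGCSeq"); ok && last == cur {
		return nil // provably nothing to collect; skip the sweep
	}
	if _, err := db.Exec(`DELETE FROM blocklists WHERE hash NOT IN (SELECT blocklist_hash FROM files)`); err != nil {
		return err
	}
	return putKV("lastGCSeq", cur)
}
```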

Signed-off-by: Jakob Borg <jakob@kastelo.net>
v2.0.8
2025-09-08 08:55:04 +02:00
Syncthing Release Automation
66583927f8 chore(gui, man, authors): update docs, translations, and contributors 2025-09-08 03:52:21 +00:00
Simon Frei
f0328abeaa chore(scanner): always return values to the pools when hashing blocks (#10377)
There are some return statements in between, but making sure the values are
put back isn't really my motivation (that case hardly ever happens); I just
find this more readable. Same with moving `hashLength`: placed next to the
pool, the connection with `sha256.New()` is clearer.

Followup to:
chore(scanner): reduce memory pressure by using pools inside hasher #10222
6e26fab3a0
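
For illustration, the pattern the commit describes looks roughly like this; the pool setup and `hashBlock` are a sketch, not the actual scanner code:

```go
package hashsketch

import (
	"crypto/sha256"
	"hash"
	"sync"
)

// hashLength sits next to the pool so its connection with sha256.New is obvious.
const hashLength = sha256.Size

var hasherPool = sync.Pool{New: func() any { return sha256.New() }}

// hashBlock returns the pooled hasher via a single deferred Put, which covers
// every return path, so intermediate returns can never leak a pooled value.
func hashBlock(data []byte) [hashLength]byte {
	h := hasherPool.Get().(hash.Hash)
	defer func() {
		h.Reset()
		hasherPool.Put(h)
	}()
	h.Write(data)
	var out [hashLength]byte
	copy(out[:], h.Sum(nil))
	return out
}
```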

Signed-off-by: Simon Frei <freisim93@gmail.com>
2025-09-07 17:00:19 +02:00
Jakob Borg
4b8d07d91c fix(sqlite): explicitly set temporary directory location (fixes #10368) (#10376)
On Unixes, avoid /tmp, which is otherwise likely to become the chosen default.
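
The commit doesn't spell out the mechanism, but one way to steer SQLite away from /tmp on Unix is the `SQLITE_TMPDIR` environment variable; the sketch below assumes that approach and an illustrative directory layout:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Keep SQLite's temporary files next to the data directory rather than in
	// /tmp. On Unix, SQLite consults SQLITE_TMPDIR before TMPDIR and /tmp; it
	// must be set before the first database connection is opened.
	tmp := filepath.Join(os.Getenv("HOME"), ".local/state/syncthing", "sqlite-tmp")
	if err := os.MkdirAll(tmp, 0o700); err != nil {
		log.Fatal(err)
	}
	if err := os.Setenv("SQLITE_TMPDIR", tmp); err != nil {
		log.Fatal(err)
	}
	// ... open the database here ...
}
```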

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:47 +02:00
Jakob Borg
c33daca3b4 fix(sqlite): less impactful periodic garbage collection (#10374)
Periodic garbage collection can take a long time on large folders. The worst
step is the one for blocks, which are typically orders of magnitude more
numerous than files or block lists.

This improves the situation by running blocks GC in a number of smaller
range chunks, in random order, and stopping after a time limit. At most ten
minutes per run will be spent garbage collecting blocklists and blocks.

With this, we're not guaranteed to complete a full GC on every run, but
we'll make some progress and get there eventually.
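
A rough sketch of the time-boxed, chunked approach; the chunking by leading hash byte and the schema are assumptions for illustration, not the exact implementation:

```go
package gcsketch

import (
	"database/sql"
	"math/rand/v2"
	"time"
)

// chunkedBlocksGC visits the hash space in 256 chunks, in random order, and
// stops once the time budget is spent; whatever remains is picked up by a
// later run.
func chunkedBlocksGC(db *sql.DB, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for _, c := range rand.Perm(256) {
		if time.Now().After(deadline) {
			return nil // out of time; we still made some progress
		}
		_, err := db.Exec(`
			DELETE FROM blocks
			WHERE substr(hash, 1, 1) = ?
			  AND blocklist_hash NOT IN (SELECT hash FROM blocklists)`,
			[]byte{byte(c)})
		if err != nil {
			return err
		}
	}
	return nil
}
```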

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:29 +02:00
Amin Vakil
a533f453f8 build: trigger nightly build only on syncthing repo (#10375)
Signed-off-by: Amin Vakil <info@aminvakil.com>
2025-09-07 14:03:33 +02:00
Jakob Borg
3c9e87d994 build: exclude illumos from cross building
Now that we have a native build for it.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
v2.0.7
2025-09-05 11:51:15 +02:00
Jakob Borg
f0180cb014 fix(sqlite): avoid rowid on kv table (#10367)
No migration on this as it has no practical impact, just a slight
cleanup for new installations.

Also a refactor of how we declare single column primary keys, for
consistency.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 09:31:07 +00:00
Jakob Borg
a99a730c0c fix(tlsutil): support HTTP/2 on GUI/API connections (#10366)
By not setting ALPN we were implicitly rejecting HTTP/2, completely
unnecessarily.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:57:39 +02:00
Jakob Borg
36254473a3 chore(slogutil): add configurable logging format (fixes #10352) (#10354)
This adds several options for configuring the log format of timestamps
and severity levels, making it more suitable for integration with log
systems like systemd.

      --log-format-timestamp="2006-01-02 15:04:05"
         Format for timestamp, set to empty to disable timestamps ($STLOGFORMATTIMESTAMP)

      --[no-]log-format-level-string
         Whether to include level string in log line ($STLOGFORMATLEVELSTRING)

      --[no-]log-format-level-syslog
         Whether to include level as syslog prefix in log line ($STLOGFORMATLEVELSYSLOG)

So, to get output suitable for systemd (syslog prefix, no level string, no
timestamp) we can pass `--log-format-timestamp=""
--no-log-format-level-string --log-format-level-syslog` or, equivalently, set
`STLOGFORMATTIMESTAMP="" STLOGFORMATLEVELSTRING=false
STLOGFORMATLEVELSYSLOG=true`.
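
As a rough illustration of what the syslog-prefix style amounts to, here is a hand-rolled `log/slog` handler producing `<prio>message` lines with no timestamp and no level string; the priority mapping is an assumption, and this is not the actual slogutil code:

```go
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
)

// prioHandler emits no timestamp and no level string, only a systemd/syslog
// priority prefix, roughly what the flag combination above produces.
type prioHandler struct{ out *os.File }

func (h *prioHandler) Enabled(context.Context, slog.Level) bool { return true }

func (h *prioHandler) Handle(_ context.Context, r slog.Record) error {
	prio := 6 // info
	switch {
	case r.Level >= slog.LevelError:
		prio = 3
	case r.Level >= slog.LevelWarn:
		prio = 4
	case r.Level < slog.LevelInfo:
		prio = 7 // debug
	}
	_, err := fmt.Fprintf(h.out, "<%d>%s\n", prio, r.Message)
	return err
}

func (h *prioHandler) WithAttrs([]slog.Attr) slog.Handler { return h }
func (h *prioHandler) WithGroup(string) slog.Handler      { return h }

func main() {
	slog.New(&prioHandler{out: os.Stderr}).Warn("low disk space") // <4>low disk space
}
```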

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:52:49 +02:00
Jakob Borg
800596139e chore(sqlite): stamp files with application_id
No practical effect, just a tiny bit of fun to stamp the database files
with an application ID that identifies them.
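
For reference, this is the kind of stamp `PRAGMA application_id` applies; the driver choice and the ID value below are placeholders, not necessarily what Syncthing uses:

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // any SQLite driver works; this one is an assumption
)

func main() {
	db, err := sql.Open("sqlite", "index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// application_id is a 32-bit value stored in the database file header, so
	// tools like file(1) can identify which application owns the file.
	if _, err := db.Exec(`PRAGMA application_id = 0x53594E43`); err != nil { // placeholder value
		log.Fatal(err)
	}
}
```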

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:15:38 +02:00
Jakob Borg
f48782e4df fix(sqlite): revert to default page cache size (#10362)
While we're figuring out optimal defaults, reduce the page cache size to
the compiled-in default. On my computer this makes no difference in
benchmarks. In forum threads, it solved the problem of massive memory
usage during initial scan.
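
For context, SQLite's compiled-in default is a negative `cache_size`, interpreted as a size in KiB rather than a page count; reverting simply means not overriding it. A small sketch to inspect the effective value (driver choice is an assumption):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "modernc.org/sqlite" // driver choice is an assumption for this sketch
)

func main() {
	db, err := sql.Open("sqlite", "index.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// With no explicit pragma, this reports the compiled-in default, typically
	// -2000: negative values are KiB (≈2 MiB per connection), positive values
	// are a number of pages.
	var cacheSize int64
	if err := db.QueryRow(`PRAGMA cache_size`).Scan(&cacheSize); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cache_size:", cacheSize)
}
```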

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:07:51 +02:00
Jakob Borg
922cc7544e docs: we now do binaries for illumos again
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 21:38:30 +02:00
Tommy van der Vorst
9e262d84de fix(api): redact device encryption passwords in support bundle config (#10359)
* fix(api): redact device encryption passwords in support bundle config

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>

* Update lib/api/support_bundle.go

Signed-off-by: Jakob Borg <jakob@kastelo.net>

---------

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 18:22:59 +00:00
Jakob Borg
42db6280e6 fix(model): earlier free-space check (fixes #10347) (#10348)
Since #10332 we'd create the temp file when closing out the puller state
for a file, but this is inappropriate if the reason we're bailing out is
that there isn't space for it to begin with. Instead, do the
free space check before we even start copying/pulling.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 16:53:30 +00:00
Albert Lee
8d8adae310 build: package for illumos using vmactions/omnios-vm (#10328)
Use GitHub Actions to build illumos/amd64 package.

Signed-off-by: Albert Lee <trisk@forkgnu.org>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 08:51:44 +00:00
Jakob Borg
12ba4b6aea chore(model): adjust folder state logging (fixes #10350) (#10353)
Removes the chitter-chatter of folder state changes from the info level,
while adding the error state at warning level and a corresponding
clearing of the error state at info level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 07:38:06 +00:00
Jakob Borg
372e3c26b0 fix(db): remove temp_store = MEMORY pragmas (#10343)
This reduces database migration memory usage in my test scenario from
3.8 GB to 440 MB. In principle I don't think we're causing many temp
tables to be generated anyway in normal usage, but if we do and someone
can benchmark a performance difference, we can add a tunable. I ran the
database benchmark before and after and didn't see a difference above
the noise level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
v2.0.6
2025-09-03 09:27:53 +02:00
Jakob Borg
01e2426a56 fix(syncthing): properly report kibibytes RSS in Linux perfstats
The value from getrusage is already in KiB, while on macOS it's in
bytes.
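
The unit difference in question, sketched with `syscall.Getrusage` (Unix-only; not the actual perfstats code):

```go
package main

import (
	"fmt"
	"runtime"
	"syscall"
)

// maxRSSKiB returns the peak RSS in KiB: getrusage's ru_maxrss is already in
// kilobytes on Linux, but in bytes on macOS.
func maxRSSKiB() (int64, error) {
	var ru syscall.Rusage
	if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
		return 0, err
	}
	rss := int64(ru.Maxrss)
	if runtime.GOOS == "darwin" {
		rss /= 1024 // bytes -> KiB
	}
	return rss, nil
}

func main() {
	if rss, err := maxRSSKiB(); err == nil {
		fmt.Println("max RSS (KiB):", rss)
	}
}
```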

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 07:52:19 +02:00
Tommy van der Vorst
6e9ccf7211 fix(db): only vacuum database on startup when a migration script was actually run (#10339) v2.0.5 2025-09-02 12:03:22 -07:00
Jakob Borg
4986fc1676 docs: minor formatting fixup of previous 2025-09-02 09:19:43 +02:00
Jakob Borg
5ff050e665 docs: update contribution guidelines from the docs site (#10336)
This copies the relevant parts of the contribution guidelines in the
docs, for the purpose of keeping them in a single place. The in-docs
contribution guidelines can become a link to this document.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 09:16:36 +02:00
Jakob Borg
fc40dc8af2 docs: add DCO requirement to contribution guidelines (#10333)
This adds the requirement to have a DCO sign-off on commits.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 08:24:03 +02:00
Jakob Borg
541678ad9e fix(syncthing): apply folder migrations with temporary API/GUI server (#10330)
Prevent the feeling that nothing is happening / it's not starting.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
v2.0.4 v2.0.4-rc.2
2025-09-01 22:10:48 +02:00
Jakob Borg
fafc3ba45e fix(model): correctly handle block-aligned empty sparse files (fixes #10331) (#10332)
When handling files that consist only of power-of-two-sized blocks of
zero we'd know we have nothing to write, and when using sparse files
we'd never even create the temp file. Hence the sync would fail.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-01 22:01:29 +02:00
Syncthing Release Automation
da7a75a823 chore(gui, man, authors): update docs, translations, and contributors 2025-09-01 03:59:45 +00:00
Jakob Borg
e41d6b9c1e fix(db): apply all migrations and schema in one transaction v2.0.4-rc.1 2025-08-31 12:43:41 +02:00
Jakob Borg
21ad99c80a Revert "chore(db): update schema version in the same transaction as migration (#10321)"
This reverts commit 4459438245.
2025-08-31 12:43:41 +02:00
Jakob Borg
4ad3f07691 chore(db): migration for previous commits (#10319)
Recreate the blocks and block lists tables.

---------

Co-authored-by: bt90 <btom1990@googlemail.com>
2025-08-31 09:27:33 +02:00
Simon Frei
4459438245 chore(db): update schema version in the same transaction as migration (#10321)
Just to be entirely sure that if the migration succeeds the schema
version is always also updated. Currently, if one migration succeeds but a
later one doesn't, the changes of the successful migration apply but the
recorded schema version stays behind - if that migration is
breaking/non-idempotent, it will fail when it is rerun next time (otherwise
it's just a pointless re-execution).

Unfortunately with the current `db.runScripts` it wasn't that easy to
do, so I had to do quite a bit of refactoring. I am also ensuring the
right order of transactions now, though I assume that was already the
case lexicographically - can't hurt to be safe.
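
A minimal sketch of the resulting shape: each script and its version bump share one transaction, so a successful migration can never be left behind without its recorded version (the kv table name is an assumption):

```go
package migratesketch

import "database/sql"

// applyMigration runs one migration script and records the new schema version
// inside the same transaction.
func applyMigration(db *sql.DB, script string, version int) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	if _, err := tx.Exec(script); err != nil {
		return err
	}
	if _, err := tx.Exec(`INSERT OR REPLACE INTO kv (key, value) VALUES ('schemaVersion', ?)`, version); err != nil {
		return err
	}
	return tx.Commit()
}
```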
2025-08-30 13:18:31 +02:00
Jakob Borg
2306c6d989 chore(db): benchmark output, migration blocks/s output (#10320)
Just minor tweaks
2025-08-29 14:58:38 +00:00
Tomasz Wilczyński
0de55ef262 chore(gui): use step of 3600 for versions cleanup interval (#10317)
Currently, the input field has no step defined, meaning that it can be
increased with the arrow keys by the default value of "1". Considering
the fact that the default value is "3600" (seconds or one hour), it is
unlikely that the user wants to change it with such minimal steps.

For this reason, change the default step to "3600" (one hour). If the
user needs more granular control, they can still input the value
in seconds manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:57:27 +02:00
Tomasz Wilczyński
d083682418 chore(gui): use steps of 1024 KiB for bandwidth rate limits (#10316)
Currently, the bandwidth limit input fields have no step defined, and as
such they use the default value of "1". Taking into account the fact
that these fields use KiB as their measurements, it makes more sense to
use larger steps, such as "1024" (1 MiB), as in most cases, it is very
unlikely that the user needs to have byte-level control over the limits.

Note that these steps only apply to increasing the values by using the
arrow keys, and the user is still allowed to input any value they want
manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:56:55 +02:00
Jakob Borg
c918299eab refactor(db): slightly improve insert performance (#10318)
This just removes an unnecessary foreign key constraint, since we
already do the garbage collection manually in the database service.
However, as part of getting here I tried a couple of other variants
along the way:

- Changing the order of the primary key from `(hash, blocklist_hash,
idx)` to `(blocklist_hash, idx, hash)` so that inserts would be
naturally ordered. However this requires a new index `on blocks (hash)`
so that we can still look up blocks by hash, and turns out to be
strictly worse than what we already have.
- Removing the primary key entirely and the `WITHOUT ROWID` to make it a
rowid table without any required order, and an index as above. This is
faster when the table is small, but becomes slower when it's large (due
to dual indexes I guess).

These are the benchmark results from current `main`, the second
alternative below ("Index(hash)") and this proposal that retains the
combined primary key ("combined"). Overall it ends up being about 65%
faster.

[Benchmark screenshot: https://github.com/user-attachments/assets/bff3f9d1-916a-485f-91b7-b54b477f5aac]

Ref #10264
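
The retained layout, sketched from the column names mentioned above; the extra column and the types are illustrative, not the exact schema:

```go
package schemasketch

import "database/sql"

// createBlocks keeps the combined primary key on (hash, blocklist_hash, idx)
// as a WITHOUT ROWID table and drops the foreign key constraint, since garbage
// collection is done manually by the database service.
func createBlocks(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS blocks (
			hash           BLOB    NOT NULL,
			blocklist_hash BLOB    NOT NULL,
			idx            INTEGER NOT NULL,
			size           INTEGER NOT NULL,
			PRIMARY KEY (hash, blocklist_hash, idx)
		) WITHOUT ROWID`)
	return err
}
```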
2025-08-29 15:26:23 +02:00
bt90
b59443f136 chore(db): avoid rowid for blocks and blocklists (#10315)
### Purpose

Noticed "large" autogenerated indices on blocks and blocklists in
https://forum.syncthing.net/t/database-or-disk-is-full-might-be-syncthing-might-be-qnap/24930/7

They both have a primary key and don't need rowids.

2025-08-29 11:12:39 +02:00
Tomasz Wilczyński
7189a3ebff fix(model): consider number of CPU cores when calculating hashers on interactive OS (#10284) (#10286)
Currently, the number of hashers is always set to 1 on interactive
operating systems, which are defined as Windows, macOS, iOS, and
Android. However, with modern multicore CPUs, it does not make much
sense to limit performance so much.

For example, without this fix, a CPU with 16 cores / 32 threads is
still limited to using just a single thread to hash files per folder,
which may severely affect its performance.

For this reason, instead of using a fixed value, calculate the number
dynamically, so that it equals one-fourth of the total number of CPU
cores. This way, the number of hashers will still end up being just 1 on
a slower 4-thread CPU, but it will be allowed to take larger values when
the number of threads is higher, increasing hashing performance in the
process.
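
A sketch of the new default for the interactive case; the exact clamping and rounding in Syncthing may differ:

```go
package hashersketch

import "runtime"

// interactiveHashers replaces the hard-coded 1 on interactive operating
// systems (Windows, macOS, iOS, Android) with roughly a quarter of the CPU
// threads, with a floor of one: 4 threads -> 1 hasher, 32 threads -> 8.
func interactiveHashers() int {
	if n := runtime.NumCPU() / 4; n >= 1 {
		return n
	}
	return 1
}
```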

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 10:04:08 +00:00
Tomasz Wilczyński
6ed4cca691 fix(model): consider MaxFolderConcurrency when calculating number of hashers (#10285)
Currently, the number of hashers, with the exception of some specific
operating systems or when defined manually, equals the number of CPU
cores divided by the overall number of folders, and it does not take
into account the value of MaxFolderConcurrency at all. This leads to
artificial performance limits even when MaxFolderConcurrency is set to
values lower than the number of cores.

For example, let's say that the number of folders is 50 and
MaxFolderConcurrency is set to a value of 4 on a 16-core CPU. With the old
calculation, the number of hashers would still end up being just 1 due
to the large number of folders. However, with the new calculation, the
number of hashers in this case will be 4, leading to better hashing
performance per folder.
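
Sketched as a calculation, using the example above (50 folders, MaxFolderConcurrency of 4 and 16 cores gives 4 hashers); the clamping details are assumptions:

```go
package hashersketch

import "runtime"

// hashersPerFolder divides the CPU count by the number of folders that can
// actually run concurrently, rather than by the total number of folders.
func hashersPerFolder(numFolders, maxFolderConcurrency int) int {
	concurrent := numFolders
	if maxFolderConcurrency > 0 && maxFolderConcurrency < concurrent {
		concurrent = maxFolderConcurrency
	}
	if concurrent < 1 {
		concurrent = 1
	}
	if n := runtime.NumCPU() / concurrent; n >= 1 {
		return n
	}
	return 1
}
```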

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 11:33:58 +02:00
Tommy van der Vorst
958f51ace6 fix(cmd): only start temporary API server during migration if it's enabled (#10284) 2025-08-25 05:46:23 +00:00
Syncthing Release Automation
07f1320e00 chore(gui, man, authors): update docs, translations, and contributors 2025-08-25 03:57:29 +00:00
Jakob Borg
3da449cfa3 chore(ursrv): count database engines 2025-08-24 22:35:00 +02:00
Jakob Borg
655ef63c74 chore(ursrv): separate calculation from serving metrics 2025-08-24 22:34:58 +02:00
Jakob Borg
01257e838b build: use Go 1.24 tools pattern (#10281) 2025-08-24 12:17:20 +00:00
Simon Frei
e54f51c9c5 chore(db): cleanup DB in tests and remove OpenTemp (#10282)
Filled up my tmpfs with test DBs when running benchmarks :)
2025-08-24 09:58:56 +00:00
Simon Frei
a259a009c8 chore(db): adjust db bench name to improve benchstat grouping (#10283)
The benchstat tool allows custom grouping when comparing with what it
calls "sub-name configuration keys":

https://pkg.go.dev/golang.org/x/perf@v0.0.0-20250813145418-2f7363a06fe1/cmd/benchstat#hdr-Configuring_comparisons

That's quite useful for these benchmarks, as we basically have two
independent configs: The type of benchmark and the size. Real example
usage for the prepared named statements PR (results are rubbish for
unrelated reasons):

```
$ benchstat -row ".name /n" bench-main.out bench-prepared.out
goos: linux
goarch: amd64
pkg: github.com/syncthing/syncthing/internal/db/sqlite
cpu: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
                            │ bench-main-20250823_014059.out │   bench-prepared-20250823_022849.out   │
                            │             sec/op             │     sec/op      vs base                │
Update Insert100Loc                           248.5m ±  8% ¹   157.7m ±  7% ¹  -36.54% (p=0.000 n=50)
Update RepBlocks100                           253.7m ±  4% ¹   163.6m ±  7% ¹  -35.49% (p=0.000 n=50)
Update RepSame100                            130.42m ±  3% ¹   60.26m ±  2% ¹  -53.80% (p=0.000 n=50)
Update Insert100Rem                           38.54m ±  5% ¹   21.94m ±  1% ¹  -43.07% (p=0.000 n=50)
Update GetGlobal100                          10.897m ±  4% ¹   4.231m ±  1% ¹  -61.17% (p=0.000 n=50)
Update LocalSequenced                         7.560m ±  5% ¹   3.124m ±  2% ¹  -58.68% (p=0.000 n=50)
Update GetDeviceSequenceLoc                  17.554µ ±  6% ¹   8.400µ ±  1% ¹  -52.15% (n=50)
Update GetDeviceSequenceRem                  17.727µ ±  4% ¹   8.237µ ±  2% ¹  -53.54% (p=0.000 n=50)
Update RemoteNeed                              4.147 ± 77% ¹    1.903 ± 78% ¹  -54.11% (p=0.000 n=50)
Update LocalNeed100Largest                   21.516m ± 22% ¹   9.312m ± 47% ¹  -56.72% (p=0.000 n=50)
geomean                                       15.35m           7.486m          -51.22%
¹ benchmarks vary in .fullname
```
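
The sub-name keys come straight from the Go benchmark names; a hedged sketch of how such names can be produced (stub benchmark bodies, not the actual sqlite benchmarks):

```go
package sqlite_test

import (
	"fmt"
	"testing"
)

// BenchmarkUpdate yields names like "Update/Insert100Loc/n=10000", letting
// benchstat group rows on ".name" and the "/n" sub-name key as shown above.
func BenchmarkUpdate(b *testing.B) {
	for _, n := range []int{1_000, 10_000, 100_000} {
		b.Run(fmt.Sprintf("Insert100Loc/n=%d", n), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				_ = n // insert 100 local files into a database seeded with n files
			}
		})
	}
}
```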
2025-08-23 16:12:55 +02:00
Jakob Borg
8151bcddff fix(db): clean files for dropped folders at startup (#10280)
This adds a cleanup stage at startup to remove database files for folders
that no longer exist. Folder database files were already removed when
dropping a folder, assuming that the folder database had been opened at
that point. This won't be the case though when a folder is removed from
the config when Syncthing isn't running, or when a folder is dropped and
re-migrated in a restarted migration.
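
A heavily simplified sketch of such a cleanup pass; the per-folder file naming and the glob are assumptions for illustration, not Syncthing's actual layout:

```go
package cleanupsketch

import (
	"os"
	"path/filepath"
	"strings"
)

// cleanOrphanedFolderDBs removes per-folder database files whose folder is no
// longer in the configuration.
func cleanOrphanedFolderDBs(dbDir string, configuredFolderIDs map[string]bool) error {
	matches, err := filepath.Glob(filepath.Join(dbDir, "folder.*.db"))
	if err != nil {
		return err
	}
	for _, path := range matches {
		base := filepath.Base(path)
		id := strings.TrimSuffix(strings.TrimPrefix(base, "folder."), ".db")
		if configuredFolderIDs[id] {
			continue
		}
		if err := os.Remove(path); err != nil {
			return err
		}
	}
	return nil
}
```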
v2.0.3
2025-08-22 09:00:05 +02:00
Jakob Borg
d776657b52 fix(cmd): provide temporary GUI/API server during database migration (#10279)
This adds a temporary GUI/API server during the database migration. It
responds with 200 OK and some log output for every request. This serves
two purposes:
- Primarily, for deployments that use the API as a health check, it
gives them something positive to accept during the migration, reducing
the risk of the migration getting killed halfway through and restarted,
thus never completing.
- Secondarily, it gives humans who happen to try to load the GUI some
sort of indication of what's going on.

Obviously, anything that expects a well-formed API response at this
stage is still going to fail. Such requests were already failing though, as
we didn't even listen at this point before.
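
Stripped down to its essence, the temporary server is little more than this; the listen address and response text are placeholders:

```go
package main

import (
	"log"
	"net/http"
)

// While the migration runs, answer every request with 200 OK and a short note,
// so health checks keep passing and humans see that something is happening.
func main() {
	srv := &http.Server{
		Addr: "127.0.0.1:8384",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			log.Printf("migration in progress, answering %s %s with 200 OK", r.Method, r.URL.Path)
			w.WriteHeader(http.StatusOK)
			_, _ = w.Write([]byte("Database migration in progress; the API will be available shortly.\n"))
		}),
	}
	log.Fatal(srv.ListenAndServe()) // replaced by the real API server once migration completes
}
```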
2025-08-22 08:35:42 +02:00
Jakob Borg
0416103f26 fix(cmd): make database migration more robust to write errors (#10278)
Two things:
- We could run into a write error, which would block progress forever
without surfacing an error. This is because the writer routine exited, while
the reader was just blocked on sending to it.
- After a failed migration, inserts could fail with unique index
constraint errors because we are reusing the sequence numbers from the
original database. Add a folder drop at the start of the migration to handle
this.

Additionally, the folder drop will clear out broken database files left
behind by killed migrations.
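
A sketch of the fix for the first point: the sending side also watches a done channel that the writer closes when it exits on a write error, so the send can no longer block forever (names are illustrative):

```go
package migratesketch

import "errors"

var errWriterStopped = errors.New("writer stopped due to a write error")

// send delivers a batch to the writer, but gives up cleanly if the writer has
// already exited, instead of blocking on the channel send forever.
func send(out chan<- []byte, writerDone <-chan struct{}, batch []byte) error {
	select {
	case out <- batch:
		return nil
	case <-writerDone:
		return errWriterStopped
	}
}
```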
2025-08-22 08:08:06 +02:00
Jakob Borg
7bfcdfb577 build: downgrade gopsutil (fixes #10276) (#10277) 2025-08-21 20:09:31 +00:00
Jakob Borg
e6a9b09527 fix: permissions in moving deb files? 2025-08-20 23:32:32 +02:00
Jakob Borg
c8f52ba1bc build: use new apt publisher 2025-08-20 23:05:52 +02:00