Compare commits

...

51 Commits

Author SHA1 Message Date
Jakob Borg
3382ccc3f1 chore(model): slightly deflake TestRecvOnlyRevertOwnID (#10390)
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-12 09:41:47 +00:00
Jakob Borg
9ee208b441 chore(sqlite): use normalised tables for file names and versions (#10383)
This changes the files table to use normalisation for the names and
versions. The idea is that these are often common between all remote
devices, and repeating an integer is more efficient than repeating a
long string. A new benchmark bears this out; for a database with 100k
files shared between 31 devices, with some worst case assumption on
version vector size, the database is reduced in size by 50% and the test
finishes quicker:

    Current:
        db_bench_test.go:322: Total size: 6263.70 MiB
    --- PASS: TestBenchmarkSizeManyFilesRemotes (1084.89s)

    New:
        db_bench_test.go:326: Total size: 3049.95 MiB
    --- PASS: TestBenchmarkSizeManyFilesRemotes (776.97s)

The other benchmarks end up about the same within the margin of
variability, with one possible exception being that RemoteNeed seems to
be a little slower on average:

                                          old files/s   new files/s
    Update/n=RemoteNeed/size=1000-8            5.051k        4.654k
    Update/n=RemoteNeed/size=2000-8            5.201k        4.384k
    Update/n=RemoteNeed/size=4000-8            4.943k        4.242k
    Update/n=RemoteNeed/size=8000-8            5.099k        3.527k
    Update/n=RemoteNeed/size=16000-8           3.686k        3.847k
    Update/n=RemoteNeed/size=30000-8           4.456k        3.482k

I'm not sure why; possibly that query can be optimised somehow.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-12 09:27:41 +00:00
Jakob Borg
dd90e8ec7a fix(api): limit size of allowed authentication request (#10386)
We have a slightly naive io.ReadAll on the authentication handler, which
can result in unlimited memory consumption from an unauthenticated API
endpoint. Add a reasonable limit there.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-11 10:11:29 +00:00
Jakob Borg
aa6ae0f3b0 fix(sqlite): add _txlock=immediate to modernc implementation (#10384)
For symmetry with the CGO variant.

https://pkg.go.dev/modernc.org/sqlite#Driver.Open

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-11 06:16:31 +00:00
Jakob Borg
e8b256793a chore: clean up migrated database (#10381)
Remove the migrated v0.14.0 database format after two weeks. Remove a
few old patterns that are no longer relevant. Ensure the cleanup runs in
both the config and database directories.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-10 12:23:35 +02:00
Catfriend1
8233279a65 chore(ursrv): update regex patterns for Syncthing-Fork entries (#10380)

Signed-off-by: Catfriend1 <16361913+Catfriend1@users.noreply.github.com>
2025-09-09 14:34:12 +02:00
Jakob Borg
8e5d5802cc chore(ursrv): calculate more fine-grained percentiles
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-09 07:37:39 +02:00
Jakob Borg
25ae01b0d7 chore(sqlite): skip database GC entirely when it's provably unnecessary (#10379)
Store the sequence number of the last GC sweep in a KV. Next time, if it
matches we can just skip GC because nothing has been added or removed.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-08 08:55:04 +02:00
Syncthing Release Automation
66583927f8 chore(gui, man, authors): update docs, translations, and contributors 2025-09-08 03:52:21 +00:00
Simon Frei
f0328abeaa chore(scanner): always return values to the pools when hashing blocks (#10377)
There are some return statements in between, but putting the values back
on those paths isn't my main motivation (they hardly ever happen); I just
find this more readable. Same with moving `hashLength`: placed next to the
pool, its connection with `sha256.New()` is clearer.

Followup to:
chore(scanner): reduce memory pressure by using pools inside hasher #10222
6e26fab3a0

Signed-off-by: Simon Frei <freisim93@gmail.com>
2025-09-07 17:00:19 +02:00
Jakob Borg
4b8d07d91c fix(sqlite): explicitly set temporary directory location (fixes #10368) (#10376)
On Unixes, avoid /tmp, which is otherwise likely to be the chosen default.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:47 +02:00
Jakob Borg
c33daca3b4 fix(sqlite): less impactful periodic garbage collection (#10374)
Periodic garbage collection can take a long time on large folders. The worst
step is the one for blocks, which are typically orders of magnitude more
numerous than files or block lists.

This improves the situation by running blocks GC in a number of smaller
range chunks, in random order, and stopping after a time limit. At most ten
minutes per run will be spent garbage collecting blocklists and blocks.

With this, we're not guaranteed to complete a full GC on every run, but
we'll make some progress and get there eventually.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:29 +02:00
Amin Vakil
a533f453f8 build: trigger nightly build only on syncthing repo (#10375)
Signed-off-by: Amin Vakil <info@aminvakil.com>
2025-09-07 14:03:33 +02:00
Jakob Borg
3c9e87d994 build: exclude illumos from cross building
Now that we have a native build for it.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 11:51:15 +02:00
Jakob Borg
f0180cb014 fix(sqlite): avoid rowid on kv table (#10367)
No migration on this as it has no practical impact, just a slight
cleanup for new installations.

Also a refactor of how we declare single column primary keys, for
consistency.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 09:31:07 +00:00
Jakob Borg
a99a730c0c fix(tlsutil): support HTTP/2 on GUI/API connections (#10366)
By not setting ALPN we were implicitly rejecting HTTP/2, completely
unnecessarily.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:57:39 +02:00
Jakob Borg
36254473a3 chore(slogutil): add configurable logging format (fixes #10352) (#10354)
This adds several options for configuring the log format of timestamps
and severity levels, making it more suitable for integration with log
systems like systemd.

      --log-format-timestamp="2006-01-02 15:04:05"
         Format for timestamp, set to empty to disable timestamps ($STLOGFORMATTIMESTAMP)

      --[no-]log-format-level-string
         Whether to include level string in log line ($STLOGFORMATLEVELSTRING)

      --[no-]log-format-level-syslog
         Whether to include level as syslog prefix in log line ($STLOGFORMATLEVELSYSLOG)

So, to get log output suitable for systemd (syslog prefix, no level
string, no timestamp) we can pass `--log-format-timestamp=""
--no-log-format-level-string --log-format-level-syslog` or,
equivalently, set `STLOGFORMATTIMESTAMP="" STLOGFORMATLEVELSTRING=false
STLOGFORMATLEVELSYSLOG=true`.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:52:49 +02:00
Jakob Borg
800596139e chore(sqlite): stamp files with application_id
No practical effect, just a tiny bit of fun to stamp the database files
with an application ID that identifies them.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:15:38 +02:00
Jakob Borg
f48782e4df fix(sqlite): revert to default page cache size (#10362)
While we're figuring out optimal defaults, reduce the page cache size to
the compiled-in default. On my computer this makes no difference in
benchmarks. In forum threads, it solved the problem of massive memory
usage during initial scan.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:07:51 +02:00
Jakob Borg
922cc7544e docs: we now do binaries for illumos again
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 21:38:30 +02:00
Tommy van der Vorst
9e262d84de fix(api): redact device encryption passwords in support bundle config (#10359)
* fix(api): redact device encryption passwords in support bundle config

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>

* Update lib/api/support_bundle.go

Signed-off-by: Jakob Borg <jakob@kastelo.net>

---------

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 18:22:59 +00:00
Jakob Borg
42db6280e6 fix(model): earlier free-space check (fixes #10347) (#10348)
Since #10332 we'd create the temp file when closing out the puller state
for a file, but this is inappropriate if the reason we're bailing out is
that there isn't space for it to begin with. Instead, do the
free space check before we even start copying/pulling.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 16:53:30 +00:00
Albert Lee
8d8adae310 build: package for illumos using vmactions/omnios-vm (#10328)
Use GitHub Actions to build illumos/amd64 package.

Signed-off-by: Albert Lee <trisk@forkgnu.org>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 08:51:44 +00:00
Jakob Borg
12ba4b6aea chore(model): adjust folder state logging (fixes #10350) (#10353)
Removes the chitter-chatter of folder state changes from the info level,
while adding the error state at warning level and a corresponding
clearing of the error state at info level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 07:38:06 +00:00
Jakob Borg
372e3c26b0 fix(db): remove temp_store = MEMORY pragmas (#10343)
This reduces database migration memory usage in my test scenario from
3.8 GB to 440 MB. In principle I don't think we're causing many temp
tables to be generated anyway in normal usage, but if we do and someone
can benchmark a performance difference, we can add a tunable. I ran the
database benchmark before and after and didn't see a difference above
the noise level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 09:27:53 +02:00
Jakob Borg
01e2426a56 fix(syncthing): properly report kibibytes RSS in Linux perfstats
The value from getrusage is already in KiB, while on macOS it's in
bytes.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 07:52:19 +02:00
Tommy van der Vorst
6e9ccf7211 fix(db): only vacuum database on startup when a migration script was actually run (#10339) 2025-09-02 12:03:22 -07:00
Jakob Borg
4986fc1676 docs: minor formatting fixup of previous 2025-09-02 09:19:43 +02:00
Jakob Borg
5ff050e665 docs: update contribution guidelines from the docs site (#10336)
This copies the relevant parts of the contribution guidelines in the
docs, for the purpose of keeping them in a single place. The in-docs
contribution guidelines can become a link to this document.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 09:16:36 +02:00
Jakob Borg
fc40dc8af2 docs: add DCO requirement to contribution guidelines (#10333)
This adds the requirement to have a DCO sign-off on commits.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 08:24:03 +02:00
Jakob Borg
541678ad9e fix(syncthing): apply folder migrations with temporary API/GUI server (#10330)
Prevents the impression that nothing is happening or that it's not starting.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-01 22:10:48 +02:00
Jakob Borg
fafc3ba45e fix(model): correctly handle block-aligned empty sparse files (fixes #10331) (#10332)
When handling files that consist only of power-of-two-sized blocks of
zero we'd know we have nothing to write, and when using sparse files
we'd never even create the temp file. Hence the sync would fail.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-01 22:01:29 +02:00
Syncthing Release Automation
da7a75a823 chore(gui, man, authors): update docs, translations, and contributors 2025-09-01 03:59:45 +00:00
Jakob Borg
e41d6b9c1e fix(db): apply all migrations and schema in one transaction 2025-08-31 12:43:41 +02:00
Jakob Borg
21ad99c80a Revert "chore(db): update schema version in the same transaction as migration (#10321)"
This reverts commit 4459438245.
2025-08-31 12:43:41 +02:00
Jakob Borg
4ad3f07691 chore(db): migration for previous commits (#10319)
Recreate the blocks and block lists tables.

---------

Co-authored-by: bt90 <btom1990@googlemail.com>
2025-08-31 09:27:33 +02:00
Simon Frei
4459438245 chore(db): update schema version in the same transaction as migration (#10321)
Just to be entirely sure that if a migration succeeds its schema version
is always updated with it. Currently, if one migration succeeds but a
later one doesn't, the changes of the successful migration apply but the
version stays behind; if that migration is breaking or non-idempotent, it
will fail when it is rerun next time (otherwise the rerun is merely
pointless).

Unfortunately with the current `db.runScripts` it wasn't that easy to
do, so I had to do quite a bit of refactoring. I am also ensuring the
right order of transactions now, though I assume that was already the
case lexicographically - can't hurt to be safe.
2025-08-30 13:18:31 +02:00
Jakob Borg
2306c6d989 chore(db): benchmark output, migration blocks/s output (#10320)
Just minor tweaks
2025-08-29 14:58:38 +00:00
Tomasz Wilczyński
0de55ef262 chore(gui): use step of 3600 for versions cleanup interval (#10317)
Currently, the input field has no step defined, meaning that it can only
be changed with the arrow keys in the default increments of "1".
Considering that the field's default value is "3600" (seconds, i.e. one
hour), it is unlikely that the user wants to change it in such minimal steps.

For this reason, change the step to "3600" (one hour). If the
user needs more granular control, they can still input the value
in seconds manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:57:27 +02:00
Tomasz Wilczyński
d083682418 chore(gui): use steps of 1024 KiB for bandwidth rate limits (#10316)
Currently, the bandwidth limit input fields have no step defined, and as
such they use the default value of "1". Since these fields are measured
in KiB, it makes more sense to use larger steps, such as "1024" (1 MiB);
in most cases, it is very unlikely that the user needs byte-level control
over the limits.

Note that these steps only apply to increasing the values by using the
arrow keys, and the user is still allowed to input any value they want
manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:56:55 +02:00
Jakob Borg
c918299eab refactor(db): slightly improve insert performance (#10318)
This just removes an unnecessary foreign key constraint, where we
already do the garbage collection manually in the database service.
However, as part of getting here I tried a couple of other variants
along the way:

- Changing the order of the primary key from `(hash, blocklist_hash,
idx)` to `(blocklist_hash, idx, hash)` so that inserts would be
naturally ordered. However this requires a new index `on blocks (hash)`
so that we can still look up blocks by hash, and turns out to be
strictly worse than what we already have.
- Removing the primary key and the `WITHOUT ROWID` clause entirely, making
it a rowid table with no required insert order, plus an index as above. This
is faster while the table is small, but becomes slower once it's large (due
to the dual indexes, I guess).

These are the benchmark results from current `main`, the second
alternative below ("Index(hash)") and this proposal that retains the
combined primary key ("combined"). Overall it ends up being about 65%
faster.

![Benchmark comparison screenshot, 2025-08-29](https://github.com/user-attachments/assets/bff3f9d1-916a-485f-91b7-b54b477f5aac)

Ref #10264
2025-08-29 15:26:23 +02:00
bt90
b59443f136 chore(db): avoid rowid for blocks and blocklists (#10315)
### Purpose

Noticed "large" autogenerated indices on blocks and blocklists in
https://forum.syncthing.net/t/database-or-disk-is-full-might-be-syncthing-might-be-qnap/24930/7

They both have a primary key and don't need rowids.

2025-08-29 11:12:39 +02:00
Tomasz Wilczyński
7189a3ebff fix(model): consider number of CPU cores when calculating hashers on interactive OS (#10284) (#10286)
Currently, the number of hashers is always set to 1 on interactive
operating systems, which are defined as Windows, macOS, iOS, and
Android. However, with modern multicore CPUs, it does not make much
sense to limit performance so much.

For example, without this fix, a CPU with 16 cores / 32 threads is
still limited to using just a single thread to hash files per folder,
which may severely affect its performance.

For this reason, instead of using a fixed value, calculate the number
dynamically, so that it equals one-fourth of the total number of CPU
cores. This way, the number of hashers will still end up being just 1 on
a slower 4-thread CPU, but it will be allowed to take larger values when
the number of threads is higher, increasing hashing performance in the
process.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 10:04:08 +00:00
Tomasz Wilczyński
6ed4cca691 fix(model): consider MaxFolderConcurrency when calculating number of hashers (#10285)
Currently, the number of hashers, with the exception of some specific
operating systems or when defined manually, equals the number of CPU
cores divided by the overall number of folders, and it does not take
into account the value of MaxFolderConcurrency at all. This leads to
artificial performance limits even when MaxFolderConcurrency is set to
values lower than the number of cores.

For example, let's say that the number of folders is 50 and
MaxFolderConcurrency is set to 4 on a 16-core CPU. With the old
calculation, the number of hashers would still end up being just 1 due
to the large number of folders. However, with the new calculation, the
number of hashers in this case will be 4, leading to better hashing
performance per folder.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 11:33:58 +02:00
Tommy van der Vorst
958f51ace6 fix(cmd): only start temporary API server during migration if it's enabled (#10284) 2025-08-25 05:46:23 +00:00
Syncthing Release Automation
07f1320e00 chore(gui, man, authors): update docs, translations, and contributors 2025-08-25 03:57:29 +00:00
Jakob Borg
3da449cfa3 chore(ursrv): count database engines 2025-08-24 22:35:00 +02:00
Jakob Borg
655ef63c74 chore(ursrv): separate calculation from serving metrics 2025-08-24 22:34:58 +02:00
Jakob Borg
01257e838b build: use Go 1.24 tools pattern (#10281) 2025-08-24 12:17:20 +00:00
Simon Frei
e54f51c9c5 chore(db): cleanup DB in tests and remove OpenTemp (#10282)
Filled up my tmpfs with test DBs when running benchmarks :)
2025-08-24 09:58:56 +00:00
Simon Frei
a259a009c8 chore(db): adjust db bench name to improve benchstat grouping (#10283)
The benchstat tool allows custom grouping when comparing with what it
calls "sub-name configuration keys":

https://pkg.go.dev/golang.org/x/perf@v0.0.0-20250813145418-2f7363a06fe1/cmd/benchstat#hdr-Configuring_comparisons

That's quite useful for these benchmarks, as we basically have two
independent configs: The type of benchmark and the size. Real example
usage for the prepared named statements PR (results are rubbish for
unrelated reasons):

```
$ benchstat -row ".name /n" bench-main.out bench-prepared.out
goos: linux
goarch: amd64
pkg: github.com/syncthing/syncthing/internal/db/sqlite
cpu: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
                            │ bench-main-20250823_014059.out │   bench-prepared-20250823_022849.out   │
                            │             sec/op             │     sec/op      vs base                │
Update Insert100Loc                           248.5m ±  8% ¹   157.7m ±  7% ¹  -36.54% (p=0.000 n=50)
Update RepBlocks100                           253.7m ±  4% ¹   163.6m ±  7% ¹  -35.49% (p=0.000 n=50)
Update RepSame100                            130.42m ±  3% ¹   60.26m ±  2% ¹  -53.80% (p=0.000 n=50)
Update Insert100Rem                           38.54m ±  5% ¹   21.94m ±  1% ¹  -43.07% (p=0.000 n=50)
Update GetGlobal100                          10.897m ±  4% ¹   4.231m ±  1% ¹  -61.17% (p=0.000 n=50)
Update LocalSequenced                         7.560m ±  5% ¹   3.124m ±  2% ¹  -58.68% (p=0.000 n=50)
Update GetDeviceSequenceLoc                  17.554µ ±  6% ¹   8.400µ ±  1% ¹  -52.15% (n=50)
Update GetDeviceSequenceRem                  17.727µ ±  4% ¹   8.237µ ±  2% ¹  -53.54% (p=0.000 n=50)
Update RemoteNeed                              4.147 ± 77% ¹    1.903 ± 78% ¹  -54.11% (p=0.000 n=50)
Update LocalNeed100Largest                   21.516m ± 22% ¹   9.312m ± 47% ¹  -56.72% (p=0.000 n=50)
geomean                                       15.35m           7.486m          -51.22%
¹ benchmarks vary in .fullname
```
2025-08-23 16:12:55 +02:00
112 changed files with 1987 additions and 786 deletions

@@ -159,6 +159,7 @@ jobs:
needs:
- build-test
- package-linux
- package-illumos
- package-cross
- package-source
- package-debian
@@ -337,6 +338,39 @@ jobs:
*.tar.gz
compat.json
package-illumos:
runs-on: ubuntu-latest
name: Package for illumos
needs:
- facts
env:
VERSION: ${{ needs.facts.outputs.version }}
GO_VERSION: ${{ needs.facts.outputs.go-version }}
steps:
- uses: actions/checkout@v4
- name: Build syncthing in OmniOS VM
uses: vmactions/omnios-vm@v1
with:
envs: "VERSION GO_VERSION CGO_ENABLED"
usesh: true
prepare: |
pkg install developer/gcc14 web/curl archiver/gnu-tar
run: |
curl -L "https://go.dev/dl/go$GO_VERSION.illumos-amd64.tar.gz" | gtar xzf -
export PATH="$GITHUB_WORKSPACE/go/bin:$PATH"
go version
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" tar "$tgt"
done
env:
CGO_ENABLED: "1"
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: packages-illumos
path: "*.tar.gz"
#
# macOS. The entire build runs in the release environment because code
# signing is part of the build process, so it is limited to release
@@ -503,6 +537,7 @@ jobs:
| grep -v aix/ppc64 \
| grep -v android/ \
| grep -v darwin/ \
| grep -v illumos/ \
| grep -v ios/ \
| grep -v js/ \
| grep -v linux/ \
@@ -588,6 +623,7 @@ jobs:
needs:
- codesign-windows
- package-linux
- package-illumos
- package-macos
- package-cross
- package-source

@@ -8,6 +8,7 @@ on:
jobs:
trigger-nightly:
if: github.repository_owner == 'syncthing'
runs-on: ubuntu-latest
name: Push to release-nightly to trigger build
steps:

@@ -64,6 +64,12 @@ linters:
# relax the slog rules for debug lines, for now
- linters: [sloglint]
source: Debug
# contexts are irrelevant for SQLite
- linters: [noctx]
text: database/sql
# Rollback errors can be ignored
- linters: [errcheck]
source: Rollback
settings:
sloglint:
context: "scope"

@@ -35,6 +35,7 @@ Lode Hoste (Zillode) <zillode@zillode.be>
Michael Ploujnikov (plouj) <ploujj@gmail.com>
Ross Smith II (rasa) <ross@smithii.com>
Stefan Tatschner (rumpelsepp) <stefan@sevenbyte.org> <rumpelsepp@sevenbyte.org> <stefan@rumpelsepp.org>
Tommy van der Vorst <tommy-github@pixelspark.nl> <tommy@pixelspark.nl>
Wulf Weich (wweich) <wweich@users.noreply.github.com> <wweich@gmx.de> <wulf@weich-kr.de>
Adam Piggott (ProactiveServices) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com> <ProactiveServices@users.noreply.github.com> <adam@proactiveservices.co.uk>
Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>
@@ -296,7 +297,6 @@ Tobias Klauser <tobias.klauser@gmail.com>
Tobias Nygren (tnn2) <tnn@nygren.pp.se>
Tobias Tom (tobiastom) <t.tom@succont.de>
Tom Jakubowski <tom@crystae.net>
Tommy van der Vorst <tommy-github@pixelspark.nl> <tommy@pixelspark.nl>
Tully Robinson (tojrobinson) <tully@tojr.org>
Tyler Brazier (tylerbrazier) <tyler@tylerbrazier.com>
Tyler Kropp <kropptyler@gmail.com>

@@ -34,19 +34,163 @@ Note that the previously used service at
retired and we kindly ask you to sign up on Weblate for continued
involvement.
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! See the [Contribution
Guidelines](https://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
## Contributing Documentation
Updates to the [documentation site](https://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Here's a short rundown of
what you need to keep in mind:
- Don't worry. You are not expected to get everything right on the first
attempt, we'll guide you through it.
- Make sure there is an
[issue](https://github.com/syncthing/syncthing/issues) that describes the
change you want to do. If the thing you want to do does not have an issue
yet, please file one before starting work on it.
- Fork the repository and make your changes in a new branch. Once it's ready
for review, create a pull request.
### Authorship
All code authors are listed in the AUTHORS file. When your first pull
request is accepted your details are added to the AUTHORS file and the list
of authors in the GUI. Commits must be made with the same name and email as
listed in the AUTHORS file. To accomplish this, ensure that your git
configuration is set correctly prior to making your first commit:
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to use
your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### The Developer Certificate of Origin (DCO)
The Syncthing project requires the Developer Certificate of Origin (DCO)
sign-off on pull requests (PRs). This means that all commit messages must
contain a signature line to indicate that the developer accepts the DCO.
The DCO is a lightweight way for contributors to certify that they wrote (or
otherwise have the right to submit) the code and changes they are
contributing to the project. Here is the full [text of the
DCO](https://developercertificate.org):
---
By making a contribution to this project, I certify that:
1. The contribution was created in whole or in part by me and I have the
right to submit it under the open source license indicated in the file;
or
2. The contribution is based upon previous work that, to the best of my
knowledge, is covered under an appropriate open source license and I have
the right under that license to submit that work with modifications,
whether created in whole or in part by me, under the same open source
license (unless I am permitted to submit under a different license), as
indicated in the file; or
3. The contribution was provided directly to me by some other person who
certified (1), (2) or (3) and I have not modified it.
4. I understand and agree that this project and the contribution are public
and that a record of the contribution (including all personal information
I submit with it, including my sign-off) is maintained indefinitely and
may be redistributed consistent with this project or the open source
license(s) involved.
---
Contributors indicate that they adhere to these requirements by adding
a `Signed-off-by` line to their commit messages. For example:
This is my commit message
Signed-off-by: Random J Developer <random@developer.example.org>
The name and email address in this line must match those of the committing
author, and be the same as what you want in the AUTHORS file as per above.
### Coding Style
#### General
- All text files use Unix line endings. The git settings already present in
the repository attempt to enforce this.
- When making changes, follow the brace and parenthesis style of the
surrounding code.
#### Go Specific
- Follow the conventions laid out in [Effective
Go](https://go.dev/doc/effective_go) as much as makes sense. The review
guidelines in [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments) should
generally be followed.
- Each commit should be `go fmt` clean.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
### Commits
- Commit messages (and pull request titles) should follow the [conventional
commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and
be in lower case.
- We use a scope description in the commit message subject. This is the
component of Syncthing that the commit affects. For example, `gui`,
`protocol`, `scanner`, `upnp`, etc -- typically, the part after
`internal/`, `lib/` or `cmd/` in the package path. If the commit doesn't
affect a specific component, such as for changes to the build system or
documentation, the scope should be omitted. The same goes for changes that
affect many components which would be cumbersome to list.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject. A correctly
formatted commit message subject looks like this:
feat(dialer): add env var to disable proxy fallback (fixes #3006)
- If the commit message subject doesn't say it all, one or more paragraphs of
describing text should be added to the commit message. This should explain
why the change is made and what it accomplishes.
- When drafting a pull request, please feel free to add commits with
corrections and merge from `main` when necessary. This provides a clear time
line with changes and simplifies review. Do not, in general, rebase your
commits, as this makes review harder.
- Pull requests are merged to `main` using squash merge. The "stream of
consciousness" set of commits described in the previous point will be reduced
to a single commit at merge time. The pull request title and description will
be used as the commit message.
### Tests
Yes please, do add tests when adding features or fixing bugs. Also, when a
pull request is filed a number of automatic tests are run on the code. This
includes:
- That the code actually builds and the test suite passes.
- That the code is correctly formatted (`go fmt`).
- That the commits are based on a reasonably recent `main`.
- That the output from `go lint` and `go vet` is clean. (This checks for a
number of potential problems the compiler doesn't catch.)
## Licensing
All contributions are made available under the same license as the already
@@ -59,10 +203,6 @@ otherwise stated this means MPLv2, but there are exceptions:
- The documentation (man/...) is licensed under the Creative Commons
Attribution 4.0 International License.
- Projects under vendor/... are copyright by and licensed from their
respective original authors. Contributions should be made to the original
project, not here.
Regardless of the license in effect, you retain the copyright to your
contribution.

@@ -32,6 +32,21 @@ var (
Subsystem: "ursrv_v2",
Name: "collect_seconds_last",
})
metricsRecalcsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalcs_total",
})
metricsRecalcSecondsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_total",
})
metricsRecalcSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_last",
})
metricsWriteSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",

@@ -7,6 +7,9 @@
package serve
import (
"context"
"fmt"
"log/slog"
"reflect"
"slices"
"strconv"
@@ -28,7 +31,7 @@ type metricsSet struct {
gaugeVecLabels map[string][]string
summaries map[string]*metricSummary
collectMut sync.Mutex
collectMut sync.RWMutex
collectCutoff time.Duration
}
@@ -108,6 +111,60 @@ func nameConstLabels(name string) (string, prometheus.Labels) {
return name, m
}
func (s *metricsSet) Serve(ctx context.Context) error {
s.recalc()
const recalcInterval = 5 * time.Minute
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer := time.NewTimer(next)
defer recalcTimer.Stop()
for {
select {
case <-recalcTimer.C:
s.recalc()
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer.Reset(next)
case <-ctx.Done():
return ctx.Err()
}
}
}
func (s *metricsSet) recalc() {
s.collectMut.Lock()
defer s.collectMut.Unlock()
t0 := time.Now()
defer func() {
dur := time.Since(t0)
slog.Info("Metrics recalculated", "d", dur.String())
metricsRecalcSecondsLast.Set(dur.Seconds())
metricsRecalcSecondsTotal.Add(dur.Seconds())
metricsRecalcsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
}
func (s *metricsSet) addReport(r *contract.Report) {
gaugeVecs := make(map[string][]string)
s.addReportStruct(reflect.ValueOf(r).Elem(), gaugeVecs)
@@ -198,8 +255,8 @@ func (s *metricsSet) Describe(c chan<- *prometheus.Desc) {
}
func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
s.collectMut.Lock()
defer s.collectMut.Unlock()
s.collectMut.RLock()
defer s.collectMut.RUnlock()
t0 := time.Now()
defer func() {
@@ -209,26 +266,6 @@ func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
metricsCollectsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
for _, g := range s.gauges {
c <- g
}
@@ -299,12 +336,12 @@ func (q *metricSummary) Collect(c chan<- prometheus.Metric) {
}
slices.Sort(vs)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[0], append(labelVals, "0")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*5/100], append(labelVals, "0.05")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)/2], append(labelVals, "0.5")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*9/10], append(labelVals, "0.9")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*95/100], append(labelVals, "0.95")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)-1], append(labelVals, "1")...)
pctiles := []float64{0, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.975, 0.99, 1}
for _, pct := range pctiles {
idx := int(float64(len(vs)-1) * pct)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[idx], append(labelVals, fmt.Sprint(pct))...)
}
}
}


@@ -33,6 +33,7 @@ import (
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/ur/contract"
"github.com/thejerf/suture/v4"
)
type CLI struct {
@@ -78,7 +79,8 @@ var (
{regexp.MustCompile(`\svagrant@bullseye`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bookworm`), "F-Droid"},
{regexp.MustCompile(`Anwender@NET2017`), "Syncthing-Fork (3rd party)"},
{regexp.MustCompile(`\sreproducible-build@Catfriend1-syncthing-android`), "Syncthing-Fork Catfriend1 (3rd party)"},
{regexp.MustCompile(`\sreproducible-build@nel0x-syncthing-android-gplay`), "Syncthing-Fork nel0x (3rd party)"},
{regexp.MustCompile(`\sbuilduser@(archlinux|svetlemodry)`), "Arch (3rd party)"},
{regexp.MustCompile(`\ssyncthing@archlinux`), "Arch (3rd party)"},
@@ -187,7 +189,12 @@ func (cli *CLI) Run() error {
// New external metrics endpoint accepts reports from clients and serves
// aggregated usage reporting metrics.
main := suture.NewSimple("main")
main.ServeBackground(context.Background())
ms := newMetricsSet(srv)
main.Add(ms)
reg := prometheus.NewRegistry()
reg.MustRegister(ms)
@@ -198,7 +205,7 @@ func (cli *CLI) Run() error {
metricsSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
WriteTimeout: 60 * time.Second,
Handler: mux,
}
@@ -227,6 +234,11 @@ func (cli *CLI) downloadDumpFile(blobs blob.Store) error {
}
func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
t0 := time.Now()
defer func() {
metricsWriteSecondsLast.Set(time.Since(t0).Seconds())
}()
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
@@ -245,9 +257,10 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
if err := os.Rename(cli.DumpFile+".tmp", cli.DumpFile); err != nil {
return fmt.Errorf("renaming dump file: %w", err)
}
slog.Info("Dump file saved")
slog.Info("Dump file saved", "d", time.Since(t0).String())
if blobs != nil {
t1 := time.Now()
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
@@ -257,7 +270,7 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
slog.Info("Dump file uploaded")
slog.Info("Dump file uploaded", "d", time.Since(t1).String())
}
return nil
@@ -369,6 +382,13 @@ func (s *server) addReport(rep *contract.Report) bool {
rep.DistOS = rep.OS
rep.DistArch = rep.Arch
if strings.HasPrefix(rep.Version, "v2.") {
rep.Database.ModernCSQLite = strings.Contains(rep.LongVersion, "modernc-sqlite")
rep.Database.MattnSQLite = !rep.Database.ModernCSQLite
} else {
rep.Database.LevelDB = true
}
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
}
@@ -388,6 +408,7 @@ func (s *server) save(w io.Writer) error {
}
func (s *server) load(r io.Reader) {
t0 := time.Now()
dec := json.NewDecoder(r)
s.reports.Clear()
for {
@@ -400,7 +421,7 @@ func (s *server) load(r io.Reader) {
}
s.addReport(&rep)
}
slog.Info("Loaded reports", "count", s.reports.Size())
slog.Info("Loaded reports", "count", s.reports.Size(), "d", time.Since(t0).String())
}
var (


@@ -164,6 +164,9 @@ type serveCmd struct {
LogLevel slog.Level `help:"Log level for all packages (DEBUG,INFO,WARN,ERROR)" env:"STLOGLEVEL" default:"INFO"`
LogMaxFiles int `name:"log-max-old-files" help:"Number of old files to keep (zero to keep only current)" default:"${logMaxFiles}" placeholder:"N" env:"STLOGMAXOLDFILES"`
LogMaxSize int `help:"Maximum size of any file (zero to disable log rotation)" default:"${logMaxSize}" placeholder:"BYTES" env:"STLOGMAXSIZE"`
LogFormatTimestamp string `name:"log-format-timestamp" help:"Format for timestamp, set to empty to disable timestamps" env:"STLOGFORMATTIMESTAMP" default:"${timestampFormat}"`
LogFormatLevelString bool `name:"log-format-level-string" help:"Whether to include level string in log line" env:"STLOGFORMATLEVELSTRING" default:"${levelString}" negatable:""`
LogFormatLevelSyslog bool `name:"log-format-level-syslog" help:"Whether to include level as syslog prefix in log line" env:"STLOGFORMATLEVELSYSLOG" default:"${levelSyslog}" negatable:""`
NoBrowser bool `help:"Do not start browser" env:"STNOBROWSER"`
NoPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup" env:"STNOPORTPROBING"`
NoRestart bool `help:"Do not restart Syncthing when exiting due to API/GUI command, upgrade, or crash" env:"STNORESTART"`
@@ -186,10 +189,13 @@ type serveCmd struct {
}
func defaultVars() kong.Vars {
vars := kong.Vars{}
vars["logMaxSize"] = strconv.Itoa(10 << 20) // 10 MiB
vars["logMaxFiles"] = "3" // plus the current one
vars := kong.Vars{
"logMaxSize": strconv.Itoa(10 << 20), // 10 MiB
"logMaxFiles": "3", // plus the current one
"levelString": strconv.FormatBool(slogutil.DefaultLineFormat.LevelString),
"levelSyslog": strconv.FormatBool(slogutil.DefaultLineFormat.LevelSyslog),
"timestampFormat": slogutil.DefaultLineFormat.TimestampFormat,
}
// On non-Windows, we explicitly default to "-" which means stdout. On
// Windows, the "default" options.logFile will later be replaced with the
@@ -262,8 +268,14 @@ func (c *serveCmd) Run() error {
osutil.HideConsole()
}
// The default log level for all packages
// Customize the logging early
slogutil.SetLineFormat(slogutil.LineFormat{
TimestampFormat: c.LogFormatTimestamp,
LevelString: c.LogFormatLevelString,
LevelSyslog: c.LogFormatLevelSyslog,
})
slogutil.SetDefaultLevel(c.LogLevel)
slogutil.SetLevelOverrides(os.Getenv("STTRACE"))
// Treat an explicitly empty log file name as no log file
if c.LogFile == "" {
@@ -479,7 +491,20 @@ func (c *serveCmd) syncthingMain() {
})
}
if err := syncthing.TryMigrateDatabase(ctx, c.DBDeleteRetentionInterval, cfgWrapper.GUI().Address()); err != nil {
migratingAPICtx, migratingAPICancel := context.WithCancel(ctx)
if cfgWrapper.GUI().Enabled {
// Start a temporary API server during the migration. It will wait
// startDelay until actually starting, so that if we quickly pass
// through the migration steps (e.g., there was nothing to migrate)
// and cancel the context, it will never even start.
api := migratingAPI{
addr: cfgWrapper.GUI().Address(),
startDelay: 5 * time.Second,
}
go api.Serve(migratingAPICtx)
}
if err := syncthing.TryMigrateDatabase(ctx, c.DBDeleteRetentionInterval); err != nil {
slog.Error("Failed to migrate old-style database", slogutil.Error(err))
os.Exit(1)
}
@@ -490,6 +515,8 @@ func (c *serveCmd) syncthingMain() {
os.Exit(1)
}
migratingAPICancel() // we're done with the temporary API server
// Check if auto-upgrades is possible, and if yes, and it's enabled do an initial
// upgrade immediately. The auto-upgrade routine can only be started
// later after App is initialised.
@@ -754,40 +781,39 @@ func initialAutoUpgradeCheck(misc *db.Typed) (upgrade.Release, error) {
// suitable time after they have gone out of fashion.
func cleanConfigDirectory() {
patterns := map[string]time.Duration{
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.11.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.13.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"tmp-index-sorter.*": time.Minute, // these should never exist on startup
"support-bundle-*": 30 * 24 * time.Hour, // keep old support bundle zip or folder for a month
"csrftokens.txt": 0, // deprecated, remove immediately
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index-v0.14.0.db-migrated": 14 * 24 * time.Hour, // keep old index format for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"support-bundle-*": 30 * 24 * time.Hour, // keep old support bundle zip or folder for a month
}
for pat, dur := range patterns {
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, locations.GetBaseDir(locations.ConfigBaseDir))
files, err := fs.Glob(pat)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
continue
}
for _, file := range files {
info, err := fs.Lstat(file)
locations := slices.Compact([]string{
locations.GetBaseDir(locations.ConfigBaseDir),
locations.GetBaseDir(locations.DataBaseDir),
})
for _, loc := range locations {
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, loc)
for pat, dur := range patterns {
entries, err := fs.Glob(pat)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
continue
}
if time.Since(info.ModTime()) > dur {
if err = fs.RemoveAll(file); err != nil {
for _, entry := range entries {
info, err := fs.Lstat(entry)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
} else {
slog.Warn("Cleaned away old file", slogutil.FilePath(filepath.Base(file)))
continue
}
if time.Since(info.ModTime()) > dur {
if err = fs.RemoveAll(entry); err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
} else {
slog.Warn("Cleaned away old file", slogutil.FilePath(filepath.Base(entry)))
}
}
}
}
@@ -1011,3 +1037,32 @@ func setConfigDataLocationsFromFlags(homeDir, confDir, dataDir string) error {
}
return nil
}
type migratingAPI struct {
addr string
startDelay time.Duration
}
func (m migratingAPI) Serve(ctx context.Context) error {
srv := &http.Server{
Addr: m.addr,
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("*** Database migration in progress ***\n\n"))
for _, line := range slogutil.GlobalRecorder.Since(time.Time{}) {
_, _ = line.WriteTo(w, slogutil.DefaultLineFormat)
}
}),
}
go func() {
select {
case <-time.After(m.startDelay):
slog.InfoContext(ctx, "Starting temporary GUI/API during migration", slogutil.Address(m.addr))
err := srv.ListenAndServe()
slog.InfoContext(ctx, "Temporary GUI/API closed", slogutil.Address(m.addr), slogutil.Error(err))
case <-ctx.Done():
}
}()
<-ctx.Done()
return srv.Close()
}


@@ -16,7 +16,9 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/exp/constraints"
)
@@ -48,14 +50,19 @@ func savePerfStats(file string) {
in, out := protocol.TotalInOut()
timeDiff := t.Sub(prevTime)
rss := curRus.Maxrss
if build.IsDarwin {
rss /= 1024
}
fmt.Fprintf(fd, "%.03f\t%f\t%d\t%d\t%.0f\t%.0f\t%d\n",
t.Sub(t0).Seconds(),
rate(cpusec(&prevRus), cpusec(&curRus), timeDiff, 1),
(curMem.Sys-curMem.HeapReleased)/1024,
curRus.Maxrss/1024,
rss,
rate(prevIn, in, timeDiff, 1e3),
rate(prevOut, out, timeDiff, 1e3),
dirsize(locations.Get(locations.Database))/1024,
osutil.DirSize(locations.Get(locations.Database))/1024,
)
prevTime = t
@@ -78,21 +85,3 @@ func rate[T number](prev, cur T, d time.Duration, div float64) float64 {
rate := float64(diff) / d.Seconds() / div
return rate
}
func dirsize(location string) int64 {
entries, err := os.ReadDir(location)
if err != nil {
return 0
}
var size int64
for _, entry := range entries {
fi, err := entry.Info()
if err != nil {
continue
}
size += fi.Size()
}
return size
}


@@ -7,6 +7,9 @@ StartLimitBurst=4
[Service]
User=%i
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1


@@ -5,7 +5,10 @@ StartLimitIntervalSec=60
StartLimitBurst=4
[Service]
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart --logflags=0
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4

go.mod

@@ -24,7 +24,6 @@ require (
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/maruel/panicparse/v2 v2.5.0
github.com/mattn/go-sqlite3 v1.14.31
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3
github.com/maxmind/geoipupdate/v6 v6.1.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75
github.com/oschwald/geoip2-golang v1.13.0
@@ -49,7 +48,6 @@ require (
golang.org/x/sys v0.35.0
golang.org/x/text v0.28.0
golang.org/x/time v0.12.0
golang.org/x/tools v0.36.0
google.golang.org/protobuf v1.36.7
modernc.org/sqlite v1.38.2
sigs.k8s.io/yaml v1.6.0
@@ -79,6 +77,7 @@ require (
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/nxadm/tail v1.4.11 // indirect
@@ -103,6 +102,7 @@ require (
go.yaml.in/yaml/v2 v2.4.2 // indirect
golang.org/x/mod v0.27.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/tools v0.36.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/mathutil v1.7.1 // indirect
@@ -114,3 +114,9 @@ replace github.com/gobwas/glob v0.2.3 => github.com/calmh/glob v0.0.0-2022061508
// https://github.com/mattn/go-sqlite3/pull/1338
replace github.com/mattn/go-sqlite3 v1.14.31 => github.com/calmh/go-sqlite3 v1.14.32-0.20250812195006-80712c77b76a
tool (
github.com/calmh/xdr/cmd/genxdr
github.com/maxbrunsfeld/counterfeiter/v6
golang.org/x/tools/cmd/goimports
)


@@ -82,6 +82,7 @@
"Custom Range": "В периода",
"Danger!": "Опасност!",
"Database Location": "Местоположение на хранилището",
"Debug": "Отстраняване на дефекти",
"Debugging Facilities": "Отстраняване на дефекти",
"Default": "По подразбиране",
"Default Configuration": "Настройки по подразбиране",
@@ -176,7 +177,7 @@
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Вида „{{receiveEncrypted}}“ може да бъде избран само при добавяне на папка.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Видът папката „{{receiveEncrypted}}“ не може да бъде променян след нейното създаване. Трябва да я премахнете, изтриете или разшифровате съдържанието и да добавите папката отново.",
"Folders": "Папки",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете нейната първопричина или потърсете помощ ако не съумявате.",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете първопричината или ако не съумявате потърсете помощ.",
"Forever": "Завинаги",
"Full Rescan Interval (s)": "Интервал на пълно обхождане (секунди)",
"GUI": "Интерфейс",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Ограничение при изтегляне (КиБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Неправилни настройки могат да повредят файлове и да попречат на синхронизирането.",
"Incorrect user name or password.": "Грешно потребителско име или парола.",
"Info": "Сведения",
"Internally used paths:": "Вътрешно използвани пътища:",
"Introduced By": "Предложено от",
"Introducer": "Поръчител",
@@ -529,7 +531,7 @@
"You have unsaved changes. Do you really want to discard them?": "Има незапазени промени. Желаете ли да се откажете от тях?",
"You must keep at least one version.": "Необходимо е да запазите поне една версия.",
"You should never add or change anything locally in a \"{%receiveEncrypted%}\" folder.": "Никога не трябва да променяте нищо в папка от вида „{{receiveEncrypted}}“.",
"Your SMS app should open to let you choose the recipient and send it from your own number.": "Приложението за SMS би трябвало да се отвори, да ви даде възможност да изберете получател, за да изпратите съобщението от вашия телефонен номер.",
"Your SMS app should open to let you choose the recipient and send it from your own number.": "Приложението за SMS би трябвало да се отвори, да ви даде възможност да изберете получател, на когото да изпратите съобщението от вашия телефонен номер.",
"Your email app should open to let you choose the recipient and send it from your own address.": "Пощенският клиент би трябвало да се отвори, да ви даде възможност да изберете получател, за да изпратите съобщението от вашия адрес за електронна поща.",
"days": "дни",
"deleted": "премахнато",


@@ -82,6 +82,7 @@
"Custom Range": "Custom na Saklaw",
"Danger!": "Panganib!",
"Database Location": "Lokasyon ng Database",
"Debug": "Debug",
"Debugging Facilities": "Mga Facility ng Pag-debug",
"Default": "Default",
"Default Configuration": "Default na Configuration",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Rate Limit ng Papasok (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Maaring sirain ng maling pagsasaayos ang nilalaman ng iyong mga folder at gawing inoperable ang Syncthing.",
"Incorrect user name or password.": "Maling user name o password.",
"Info": "Impormasyon",
"Internally used paths:": "Mga internal na ginamit na path:",
"Introduced By": "Ipinakilala Ni/Ng",
"Introducer": "Tagapagpakilala",
@@ -227,6 +229,7 @@
"Learn more": "Matuto pa",
"Learn more at {%url%}": "Matuto pa sa {{url}}",
"Limit": "Limitasyon",
"Limit Bandwidth in LAN": "Limitahan ang Bandwidth sa LAN",
"Listener Failures": "Mga Pagbibigo ng Listener",
"Listener Status": "Status ng Listener",
"Listeners": "Mga Listener",


@@ -82,6 +82,7 @@
"Custom Range": "Plage personnalisée",
"Danger!": "Attention !",
"Database Location": "Emplacement de la base de données",
"Debug": "Débogage",
"Debugging Facilities": "Outils de débogage",
"Default": "Par défaut",
"Default Configuration": "Préférences pour les créations (non rétroactif)",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Incorrect user name or password.": "Nom d'utilisateur ou mot de passe incorrect.",
"Info": "Informations",
"Internally used paths:": "Chemins utilisés en interne :",
"Introduced By": "Introduit par",
"Introducer": "Appareil introducteur",
@@ -322,7 +324,7 @@
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Les versions préliminaires contiennent les dernières fonctionnalités et derniers correctifs. Elles sont identiques aux traditionnelles mises à jour bimensuelles.",
"Remote Devices": "Autres appareils",
"Remote GUI": "IHM distant",
"Remove": "Supprimer",
"Remove": "Enlever",
"Remove Device": "Supprimer l'appareil",
"Remove Folder": "Supprimer le partage",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant du partage. Doit être le même sur tous les appareils concernés (généré aléatoirement, mais modifiable à la création, par exemple pour faire entrer un appareil dans un partage pré-existant actuellement non connecté mais dont on connaît déjà l'ID, ou s'il n'y a personne à l'autre bout pour vous inviter à y participer).",


@@ -27,6 +27,7 @@
"Allowed Networks": "Redes permitidas",
"Alphabetic": "Alfabética",
"Altered by ignoring deletes.": "Cambiado por ignorar o borrado.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Sempre acendido cando o cartafol é de tipo \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Un comando externo xestiona as versións. Ten que eliminar o ficheiro do cartafol compartido. Si a ruta ao aplicativo contén espazos, deberían ir acotados.",
"Anonymous Usage Reporting": "Informe anónimo de uso",
"Anonymous usage report format has changed. Would you like to move to the new format?": "O formato do informe de uso anónimo cambiou. Quere usar o novo formato?",
@@ -52,6 +53,7 @@
"Body:": "Corpo:",
"Bugs": "Erros",
"Cancel": "Cancelar",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Non se pode activar cando o cartafol é de tipo \"{{foldertype}}\".",
"Changelog": "Rexistro de cambios",
"Clean out after": "Limpar despois",
"Cleaning Versions": "Limpando Versións",
@@ -80,6 +82,7 @@
"Custom Range": "Rango personalizado",
"Danger!": "Perigo!",
"Database Location": "Localización da Base de Datos",
"Debug": "Depurar",
"Debugging Facilities": "Ferramentas de depuración",
"Default": "Predeterminado",
"Default Configuration": "Configuración Predeterminada",
@@ -140,6 +143,7 @@
"Enables sending extended attributes to other devices, and applying incoming extended attributes. May require running with elevated privileges.": "Activa o envío de atributos extendidos a outros dispositivos, e aplicar os atributos extendidos recibidos. Podería requerir a execución con privilexios elevados.",
"Enables sending extended attributes to other devices, but not applying incoming extended attributes. This can have a significant performance impact. Always enabled when \"Sync Extended Attributes\" is enabled.": "Activa o envío de atributos extendidos a outros dispositivos, pero non aplica atributos extendidos que se reciben. Isto podería afectar significativamente ao rendemento. Sempre está activado cando «Sincr Atributos Extendidos\" está activado.",
"Enables sending ownership information to other devices, and applying incoming ownership information. Typically requires running with elevated privileges.": "Activa o envío de información sobre a propiedade a outros dispositivos, e aplica a información sobre a propiedade cando se recibe. Normalmente require a execución con privilexios elevados.",
"Enables sending ownership information to other devices, but not applying incoming ownership information. This can have a significant performance impact. Always enabled when \"Sync Ownership\" is enabled.": "Activa o envío a outros dispositivos de información sobre a propiedade, pero non aplica información entrante sobre a propiedade. Isto pode afectar en gran medida ao rendemento. Está sempre activado cando \"Sincronización da propiedade\" está activada.",
"Enter a non-negative number (e.g., \"2.35\") and select a unit. Percentages are as part of the total disk size.": "Introduza un número non negativo (por exemplo, \"2.35\") e seleccione unha unidade. As porcentaxes son como partes totais do tamaño do disco.",
"Enter a non-privileged port number (1024 - 65535).": "Introduza un número de porto non privilexiado (1024-65535).",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduza direccións separadas por comas (\"tcp://ip:porto\", \"tcp://host:porto\") ou \"dynamic\" para realizar o descubrimento automático da dirección.",


@@ -1,4 +1,7 @@
{
"A device with that ID is already added.": "Uređaj s tim ID-om je već dodan.",
"A negative number of days doesn't make sense.": "Negativan broj dana nema smisla.",
"A new major version may not be compatible with previous versions.": "Nova glavna verzija možda neće biti kompatibilna s prethodnim verzijama.",
"API Key": "API ključ",
"About": "Informacije",
"Action": "Radnja",
@@ -8,15 +11,545 @@
"Add Device": "Dodaj uređaj",
"Add Folder": "Dodaj mapu",
"Add Remote Device": "Dodaj udaljeni uređaj",
"Add filter entry": "Dodaj unos filtra",
"Add ignore patterns": "Dodaj uzorke zanemarivanja",
"Add new folder?": "Dodati novu mapu?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Osim toga, interval ponovnog skeniranja će se povećati (60 puta, tj. nova zadana vrijednost je 1 sat). Također ga možete ručno konfigurirati za svaku mapu kasnije nakon što odaberete „Ne”.",
"Address": "Adresa",
"Addresses": "Adrese",
"Advanced": "Napredno",
"Advanced Configuration": "Napredna konfiguracija",
"All Data": "Svi podaci",
"All Time": "Svo vrijeme",
"All folders shared with this device must be protected by a password, such that all sent data is unreadable without the given password.": "Sve mape koje se dijele s ovim uređajem moraju biti zaštićene lozinkom, tako da su svi poslani podaci nečitljivi bez navedene lozinke.",
"Allow Anonymous Usage Reporting?": "Dozvoliti anonimno izvještavanje o korištenju?",
"Allowed Networks": "Dozvoljene mreže",
"Alphabetic": "Abecednim redom",
"Altered by ignoring deletes.": "Promijenjeno ignoriranjem brisanja.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Uvijek je uključeno kada je vrsta mape „{{foldertype}}“.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Eksterna naredba barata upravljanjem verzijama. Mora ukloniti datoteku iz dijeljene mape. Ako staza do programa sadrži razmake, treba je staviti u navodnike.",
"Anonymous Usage Reporting": "Anonimno izvještavanje o korištenju",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Promijenjen je format za anonimno izvještavanje o korištenju. Želiš li prijeći na novi format?",
"Applied to LAN": "Primijenjeno na LAN",
"Apply": "Primijeni",
"LDAP": "LDAP",
"Are you sure you want to override all remote changes?": "Stvarno želite prepisati sve udaljene promjene?",
"Are you sure you want to permanently delete all these files?": "Stvarno želite trajno izbrisati sve ove datoteke?",
"Are you sure you want to remove device {%name%}?": "Stvarno želite ukloniti uređaj {{name}}?",
"Are you sure you want to remove folder {%label%}?": "Stvarno želite ukloniti mapu {{label}}?",
"Are you sure you want to restore {%count%} files?": "Stvarno želite obnoviti {{count}} datoteke(a)?",
"Are you sure you want to revert all local changes?": "Stvarno želite poništiti sve lokalne promjene?",
"Are you sure you want to upgrade?": "Stvarno želite nadograditi?",
"Authentication Required": "Potrebna je autentifikacija",
"Authors": "Autori",
"Auto Accept": "Automatski prihvati",
"Automatic Crash Reporting": "Automatsko izvještavanje o prekidu rada programa",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatska nadogradnja sada nudi izbor između stabilnih izdanja i kandidata za izdanje.",
"Automatic upgrades": "Automatske nadogradnje",
"Automatic upgrades are always enabled for candidate releases.": "Automatske nadogradnje su uvijek uključene za izdanja kandidata.",
"Automatically create or share folders that this device advertises at the default path.": "Automatski stvorite ili dijelite mape koje ovaj uređaj oglašava na zadanoj stazi.",
"Available debug logging facilities:": "Dostupni alati za bilježenje grešaka:",
"Be careful!": "Pazi!",
"Body:": "Sadržaj:",
"Bugs": "Greške",
"Cancel": "Odustani",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Ne može se uključiti kada je vrsta mape „{{foldertype}}“.",
"Changelog": "Dnevnik promjena",
"Clean out after": "Očisti nakon",
"Cleaning Versions": "Čišćenje verzija",
"Cleanup Interval": "Interval čišćenja",
"Click to see full identification string and QR code.": "Kliknite za prikaz potpunog identifikacijskog niza i QR koda.",
"Close": "Zatvori",
"Command": "Naredba",
"Comment, when used at the start of a line": "Komentar, kada se koristi na početku retka",
"Compression": "Kompresija",
"Configuration Directory": "Direktorij konfiguracije",
"Configuration File": "Datoteka konfiguracije",
"Configured": "Konfigurirano",
"Connected (Unused)": "Povezano (nekorišteno)",
"Connection Error": "Greška u vezi",
"Connection Management": "Upravljanje vezama",
"Connection Type": "Vrsta veze",
"Connections": "Veze",
"Connections via relays might be rate limited by the relay": "Veze putem releja mogu biti ograničene brzinom releja",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Kontinuirano praćenje promjena sada je dostupno unutar Syncthinga. Ovo će otkriti promjene na disku i pokrenuti skeniranje samo promijenjenih staza. Prednosti su da se promjene brže šire i da je potreban manji broj potpunih skeniranja.",
"Copied from elsewhere": "Kopirano s drugog mjesta",
"Copied from original": "Kopirano iz originala",
"Copied!": "Kopirano!",
"Copy": "Kopiraj",
"Copy failed! Try to select and copy manually.": "Kopiranje nije uspjelo! Pokušajte ručno odabrati i kopirati.",
"Currently Shared With Devices": "Trenutačno se dijeli s uređajima",
"Custom Range": "Prilagođeni raspon",
"Danger!": "Opasnost!",
"Database Location": "Lokacija baze podataka",
"Debug": "Otklanjanje grešaka",
"Debugging Facilities": "Alati za otklanjanje grešaka",
"Default": "Zadano",
"Default Configuration": "Zadana konfiguracija",
"Default Device": "Zadani uređaj",
"Default Folder": "Zadana mapa",
"Default Ignore Patterns": "Zadani uzorci zanemarivanja",
"Defaults": "Zadane vrijednosti",
"Delete": "Izbriši",
"Delete Unexpected Items": "Izbriši neočekivane stavke",
"Deleted {%file%}": "Izbrisano {{file}}",
"Deselect All": "Odznači sve",
"Deselect devices to stop sharing this folder with.": "Odznačite uređaje da biste prestali dijeliti ovu mapu s njima.",
"Deselect folders to stop sharing with this device.": "Odznačite mape da biste ih prestali dijeliti s ovim uređajem.",
"Device": "Uređaj",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Uređaj „{{name}}“ ({{device}} na {{address}}) se želi povezati. Dodati novi uređaj?",
"Device Certificate": "Certifikat uređaja",
"Device ID": "ID uređaja",
"Device Identification": "Identifikacija uređaja",
"Device Name": "Ime uređaja",
"Device Status": "Stanje uređaja",
"Device is untrusted, enter encryption password": "Uređaj je nepovjerljiv, unesite lozinku za šifriranje",
"Device rate limits": "Ograničenja brzine uređaja",
"Device that last modified the item": "Uređaj koji je zadnji put promijenio stavku",
"Devices": "Uređaji",
"Disable Crash Reporting": "Isključi izvještavanje o prekidu rada programa",
"Disabled": "Isključeno",
"Disabled periodic scanning and disabled watching for changes": "Isključeno periodično skeniranje i isključeno praćenje promjena",
"Disabled periodic scanning and enabled watching for changes": "Isključeno periodično skeniranje i uključeno praćenje promjena",
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Isključeno periodično skeniranje i neuspjelo postavljanje praćenja promjena, ponovni pokušaj svake 1 min:",
"Disables comparing and syncing file permissions. Useful on systems with nonexistent or custom permissions (e.g. FAT, exFAT, Synology, Android).": "Isključuje uspoređivanje i sinkroniziranje dozvola za datoteke. Korisno na sustavima s nepostojećim ili prilagođenim dozvolama (npr. FAT, exFAT, Synology, Android).",
"Discard": "Odbaci",
"Disconnected": "Odspojeno",
"Disconnected (Inactive)": "Odspojeno (neaktivno)",
"Disconnected (Unused)": "Odspojeno (neiskorišteno)",
"Discovered": "Otkriveno",
"Discovery": "Otkrivanje",
"Discovery Failures": "Neuspjela otkrivanja",
"Discovery Status": "Stanje otkrivanja",
"Dismiss": "Odbaci",
"Do not add it to the ignore list, so this notification may recur.": "Nemojte je dodati na popis za zanemarivanje, kako bi se ova obavijest mogla ponovo pojaviti.",
"Do not restore": "Nemoj obnoviti",
"Do not restore all": "Nemoj sve obnoviti",
"Do you want to enable watching for changes for all your folders?": "Želite li uključiti praćenje promjena za sve svoje mape?",
"Documentation": "Dokumentacija",
"Download Rate": "Stopa preuzimanja",
"Downloaded": "Preuzeto",
"Downloading": "Preuzimanje",
"Edit": "Uredi",
"Edit Device": "Uredi uređaj",
"Edit Device Defaults": "Uredi zadane postavke uređaja",
"Edit Folder": "Uredi mapu",
"Edit Folder Defaults": "Uredi zadane postavke mape",
"Editing {%path%}.": "Uređivanje {{path}}.",
"Enable Crash Reporting": "Uključi izvještavanje o prekidu rada programa",
"Enable NAT traversal": "Uključi NAT povezivanje",
"Enable Relaying": "Uključi komunikaciju putem releja",
"Enabled": "Uključeno",
"Enables sending extended attributes to other devices, and applying incoming extended attributes. May require running with elevated privileges.": "Uključuje slanje proširenih svojstava na druge uređaje i primjenu dolaznih proširenih svojstava. Može zahtijevati pokretanje s povlaštenim privilegijama.",
"Enables sending extended attributes to other devices, but not applying incoming extended attributes. This can have a significant performance impact. Always enabled when \"Sync Extended Attributes\" is enabled.": "Uključuje slanje proširenih svojstava drugim uređajima, ali ne primjenjuje dolazna proširena svojstva. To može imati značajan utjecaj na performanse. Uvijek je uključeno kada je uključena opcija „Sinkroniziraj proširena svojstva”.",
"Enables sending ownership information to other devices, and applying incoming ownership information. Typically requires running with elevated privileges.": "Uključuje slanje podataka o vlasništvu na druge uređaje i primjenu dolaznih podataka o vlasništvu. Obično zahtijeva pokretanje s povlaštenim privilegijama.",
"Enables sending ownership information to other devices, but not applying incoming ownership information. This can have a significant performance impact. Always enabled when \"Sync Ownership\" is enabled.": "Uključuje slanje podataka o vlasništvu drugim uređajima, ali ne primjenjuje dolazne podatke o vlasništvu. To može imati značajan utjecaj na performanse. Uvijek je uključeno kada je uključena opcija „Sinkroniziraj vlasništvo”.",
"Enter a non-negative number (e.g., \"2.35\") and select a unit. Percentages are as part of the total disk size.": "Unesite nenegativan broj (npr. „2,35“) i odaberite jedinicu. Postoci su dio ukupne veličine diska.",
"Enter a non-privileged port number (1024 - 65535).": "Unesite broj neprivilegiranog priključka (1024 do 65535).",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Unesite adrese odvojene zarezom („tcp://ip:port“, „tcp://host:port“) ili „dynamic“ za izvršavanje automatskog otkrivanja adrese.",
"Enter ignore patterns, one per line.": "Unesite uzorke zanemarivanja, jedan po retku.",
"Enter up to three octal digits.": "Unesite do tri oktalne znamenke.",
"Error": "Greška",
"Extended Attributes": "Proširena svojstva",
"Extended Attributes Filter": "Filtar proširenih svojstava",
"External": "Eksterno",
"External File Versioning": "Eksterno upravljanje verzijama datoteka",
"Failed Items": "Neuspjele stavke",
"Failed to load file versions.": "Učitavanje verzija datoteke nije uspjelo.",
"Failed to load ignore patterns.": "Učitavanje uzoraka za zanemarivanje nije uspjelo.",
"Failed to set up, retrying": "Postavljanje nije uspjelo, ponovni pokušaj",
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "Neuspjelo povezivanje s IPv6 serverima je normalno ako ne postoji IPv6 veza.",
"File Pull Order": "Redoslijed preuzimanja datoteka",
"File Versioning": "Upravljanje verzijama datoteka",
"Files are moved to .stversions directory when replaced or deleted by Syncthing.": "Datoteke se premještaju u .stversions direktorij kada ih Syncthing zamijeni ili izbriše.",
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Datoteke se premještaju u verzije s datumom u .stversions direktorij kada ih Syncthing zamijeni ili izbriše.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Datoteke su zaštićene od promjena na drugim uređajima, ali promjene koje su napravljene na ovom uređaju će se poslati ostatku klastera.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Datoteke se sinkroniziraju iz klastera, ali sve lokalno napravljene promjene se neće slati na druge uređaje.",
"Filesystem Watcher Errors": "Greške u praćenju datotečnog sustava",
"Filter by date": "Filtriraj po datumu",
"Filter by name": "Filtriraj po imenu",
"Folder": "Mapa",
"Folder ID": "ID mape",
"Folder Label": "Oznaka mape",
"Folder Path": "Staza mape",
"Folder Status": "Stanje mape",
"Folder Type": "Vrsta mape",
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Vrsta mape „{{receiveEncrypted}}“ se može postaviti samo prilikom dodavanja nove mape.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Vrsta mape „{{receiveEncrypted}}“ se ne može promijeniti nakon dodavanja mape. Morate ukloniti mapu, izbrisati ili dešifrirati podatke na disku i mapu ponovo dodati.",
"Folders": "Mape",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Dogodila se greška u sljedećim mapama prilikom početka praćenja promjena. Pokušat će se ponavljati svake minute, tako da bi greške uskoro mogle nestati. Ako ne nestanu, pokušajte riješiti temeljni problem i zatražite pomoć ako ga ne možete riješiti.",
"Forever": "Zauvijek",
"Full Rescan Interval (s)": "Interval potpunog ponovnog skeniranja (s)",
"GUI": "Grafičko korisničko sučelje",
"GUI / API HTTPS Certificate": "HTTPS certifikat za GUI / API",
"GUI Authentication Password": "Lozinka za GUI autentifikaciju",
"GUI Authentication User": "Korisnik za GUI autentifikaciju",
"GUI Authentication: Set User and Password": "GUI autentifikacija: Postavite korisnika i lozinku",
"GUI Override Directory": "Direktorij za nadjačavanje GUI-ja",
"GUI Theme": "Tema GUI-ja",
"General": "Općenito",
"Generate": "Generiraj",
"Global Discovery": "Globalno otkrivanje",
"Global Discovery Servers": "Serveri za globalno otkrivanje",
"Global State": "Globalno stanje",
"Help": "Pomoć",
"Hint: only deny-rules detected while the default is deny. Consider adding \"permit any\" as last rule.": "Savjet: otkrivena su samo pravila odbijanja, dok je zadana radnja odbijanje. Razmislite o dodavanju „dozvoli sve“ kao posljednjeg pravila.",
"Home page": "Početna stranica",
"However, your current settings indicate you might not want it enabled. We have disabled automatic crash reporting for you.": "Međutim, vaše trenutačne postavke ukazuju da možda ne želite da je uključeno. Za vas smo isključili automatsko izvještavanje o prekidu rada programa.",
"Identification": "Identifikacija",
"If untrusted, enter encryption password": "Ako je nepovjerljiv, unesite lozinku za šifriranje",
"If you want to prevent other users on this computer from accessing Syncthing and through it your files, consider setting up authentication.": "Ako želite spriječiti druge korisnike na ovom računalu da pristupe Syncthingu i putem njega vašim datotekama, razmislite o postavljanju autentifikacije.",
"Ignore": "Zanemari",
"Ignore Patterns": "Uzorci zanemarivanja",
"Ignore Permissions": "Zanemari dozvole",
"Ignore patterns can only be added after the folder is created. If checked, an input field to enter ignore patterns will be presented after saving.": "Uzorci zanemarivanja mogu se dodati tek nakon što se mapa stvori. Ako je označeno, nakon spremanja će se prikazati polje za unos uzoraka zanemarivanja.",
"Ignored Devices": "Zanemareni uređaji",
"Ignored Folders": "Zanemarene mape",
"Ignored at": "Zanemaren na",
"Included Software": "Uključeni softver",
"Incoming Rate Limit (KiB/s)": "Dolazno ograničenje brzine (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Neispravna konfiguracija može oštetiti sadržaj vaše mape i onemogućiti rad Syncthinga.",
"Incorrect user name or password.": "Neispravno korisničko ime ili lozinka.",
"Info": "Informacije",
"Internally used paths:": "Interno korištene staze:",
"Introduction": "Uvod",
"Inversion of the given condition (i.e. do not exclude)": "Inverzija zadanog uvjeta (tj. ne isključuj)",
"Keep Versions": "Zadrži verzije",
"LDAP": "LDAP",
"Largest First": "Najprije najveće",
"Last 30 Days": "Zadnjih 30 dana",
"Last 7 Days": "Zadnjih 7 dana",
"Last Month": "Prošli mjesec",
"Last Scan": "Zadnje skeniranje",
"Last seen": "Zadnje viđeno",
"Latest Change": "Zadnja promjena",
"Learn more": "Saznaj više",
"Learn more at {%url%}": "Saznaj više na {{url}}",
"Limit": "Ograničenje",
"Limit Bandwidth in LAN": "Ograniči propusnost u LAN-u",
"Listener Failures": "Neuspjesi slušača",
"Listener Status": "Stanje slušača",
"Listeners": "Slušači",
"Loading data...": "Učitavanje podataka …",
"Loading...": "Učitavanje …",
"Local Additions": "Lokalni dodaci",
"Local Discovery": "Lokalno otkrivanje",
"Local State": "Lokalno stanje",
"Local State (Total)": "Lokalno stanje (ukupno)",
"Locally Changed Items": "Lokalno promijenjene stavke",
"Log": "Zapis",
"Log File": "Datoteka zapisa",
"Log In": "Prijavi se",
"Log Out": "Odjavi se",
"Log in to see paths information.": "Prijavite se da biste vidjeli informacije o stazama.",
"Log in to see version information.": "Prijavite se da biste vidjeli informacije o verziji.",
"Log tailing paused. Scroll to the bottom to continue.": "Praćenje zapisa je pauzirano. Pomaknite se na dno kako biste nastavili.",
"Login failed, see Syncthing logs for details.": "Prijava nije uspjela. Pogledajte detalje u Syncthing zapisima.",
"Logs": "Zapisi",
"Major Upgrade": "Velika nadogradnja",
"Mass actions": "Masovne radnje",
"Maximum Age": "Maksimalna starost",
"Maximum single entry size": "Maksimalna veličina pojedinačnog unosa",
"Maximum total size": "Maksimalna ukupna veličina",
"Metadata Only": "Samo metapodaci",
"Minimum Free Disk Space": "Minimalna količina slobodnog prostora na disku",
"Mod. Device": "Uređaj promjene",
"Mod. Time": "Vrijeme promjene",
"More than a month ago": "Prije više od mjesec dana",
"More than a week ago": "Prije više od tjedan dana",
"More than a year ago": "Prije više od godinu dana",
"Move to top of queue": "Premjesti na vrh popisa",
"Multi level wildcard (matches multiple directory levels)": "Višerazinski zamjenski znak (podudara se s više razina direktorija)",
"Never": "Nikada",
"New Device": "Novi uređaj",
"New Folder": "Nova mapa",
"Newest First": "Najprije najnovije",
"No": "Ne",
"No File Versioning": "Bez upravljanja verzijama datoteka",
"No files will be deleted as a result of this operation.": "Nijedna datoteka se neće izbrisati kao rezultat ove operacije.",
"No rules set": "Nema postavljenih pravila",
"No upgrades": "Nema nadogradnji",
"Not shared": "Ne dijeli se",
"Notice": "Napomena",
"Number of Connections": "Broj veza",
"OK": "U redu",
"Off": "Isključeno",
"Oldest First": "Najprije najstarije",
"Optional descriptive label for the folder. Can be different on each device.": "Neobavezna opisna etiketa za mapu. Može biti različita na svakom uređaju.",
"Options": "Opcije",
"Out of Sync": "Nesinkronizirano",
"Out of Sync Items": "Nesinkronizirane stavke",
"Outgoing Rate Limit (KiB/s)": "Odlazno ograničenje brzine (KiB/s)",
"Override": "Nadjačavanje",
"Override Changes": "Nadjačaj promjene",
"Ownership": "Vlasništvo",
"Password": "Lozinka",
"Path": "Staza",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Staza do mape na lokalnom računalu. Stvorit će se ako ne postoji. Znak tilde (~) može se koristiti kao prečac za",
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "Staza gdje bi se verzije trebale spremati (ostavite prazno za zadani direktorij .stversions u dijeljenoj mapi).",
"Paths": "Staze",
"Pause": "Pauziraj",
"Pause All": "Pauziraj sve",
"Paused": "Pauzirano",
"Paused (Unused)": "Pauzirano (neiskorišteno)",
"Pending changes": "Promjene na čekanju",
"Periodic scanning at given interval and disabled watching for changes": "Periodično skeniranje u zadanom intervalu i isključeno praćenje promjena",
"Periodic scanning at given interval and enabled watching for changes": "Periodično skeniranje u zadanom intervalu i uključeno praćenje promjena",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Periodično skeniranje u zadanom intervalu i neuspjelo postavljanje praćenja promjena, ponavlja se svake 1 min:",
"Permanently add it to the ignore list, suppressing further notifications.": "Trajno je dodajte na popis zanemarivanja, čime se potiskuju daljnje obavijesti.",
"Please consult the release notes before performing a major upgrade.": "Pročitajte bilješke o izdanju prije izvršavanja velike nadogradnje.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Postavite korisnika i lozinku za GUI autentifikaciju u dijalogu Postavke.",
"Please wait": "Pričekajte",
"Prefix indicating that the file can be deleted if preventing directory removal": "Prefiks koji označava da se datoteka može izbrisati ako sprečava uklanjanje direktorija",
"Prefix indicating that the pattern should be matched without case sensitivity": "Prefiks koji označava da se uzorak treba usporediti bez razlikovanja velikih i malih slova",
"Preparing to Sync": "Priprema za sinkronizaciju",
"Preview": "Pregled",
"Preview Usage Report": "Pregled izvještaja o korištenju",
"QR code": "QR kod",
"QUIC LAN": "QUIC LAN",
"QUIC WAN": "QUIC WAN",
"Quick guide to supported patterns": "Kratki vodič za podržane uzorke",
"Random": "Slučajno",
"Receive Encrypted": "Primi šifrirano",
"Receive Only": "Samo primaj",
"Received data is already encrypted": "Primljeni podaci su već šifrirani",
"Recent Changes": "Nedavne promjene",
"Reduced by ignore patterns": "Reducirano pomoću uzoraka zanemarivanja",
"Relay LAN": "LAN relej",
"Relay WAN": "WAN relej",
"Release Notes": "Bilješke o izdanju",
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Kandidati za izdanje sadrže najnovije funkcije i ispravke. Slični su tradicionalnim dvotjednim izdanjima Syncthinga.",
"Remote Devices": "Udaljeni uređaji",
"Remote GUI": "Udaljeni GUI",
"Remove": "Ukloni",
"Remove Device": "Ukloni uređaj",
"Remove Folder": "Ukloni mapu",
"Required identifier for the folder. Must be the same on all cluster devices.": "Obavezni identifikator za mapu. Mora biti isti na svim uređajima u klasteru.",
"Rescan": "Ponovo skeniraj",
"Rescan All": "Ponovo skeniraj sve",
"Rescans": "Ponovna skeniranja",
"Restart": "Pokreni ponovo",
"Restart Needed": "Potrebno je ponovno pokretanje",
"Restarting": "Ponovno pokretanje",
"Restore": "Obnovi",
"Restore Versions": "Obnovi verzije",
"Resume": "Nastavi",
"Resume All": "Nastavi sve",
"Reused": "Ponovo korišteno",
"Revert": "Poništi",
"Revert Local Changes": "Poništi lokalne promjene",
"Save": "Spremi",
"Saving changes": "Spremanje promjena",
"Scan Time Remaining": "Preostalo vrijeme skeniranja",
"Scanning": "Skeniranje",
"See external versioning help for supported templated command line parameters.": "Pogledajte pomoć za eksterno upravljanje verzijama datoteka za podržane parametre naredbenog retka u stilu predložaka.",
"Select All": "Odaberi sve",
"Select a version": "Odaberi verziju",
"Select additional devices to share this folder with.": "Odaberite dodatne uređaje s kojima ćete dijeliti ovu mapu.",
"Select additional folders to share with this device.": "Odaberite dodatne mape za dijeljenje s ovim uređajem.",
"Select latest version": "Odaberi najnoviju verziju",
"Select oldest version": "Odaberi najstariju verziju",
"Send & Receive": "Šalji i primaj",
"Send Extended Attributes": "Šalji proširena svojstva",
"Send Only": "Samo šalji",
"Send Ownership": "Šalji vlasništvo",
"Set Ignores on Added Folder": "Postavi zanemarivanja na dodanu mapu",
"Settings": "Postavke",
"Share": "Dijeli",
"Share Folder": "Dijeli mapu",
"Share by Email": "Dijeli putem e-pošte",
"Share by SMS": "Dijeli putem SMS-a",
"Share this folder?": "Dijeliti ovu mapu?",
"Shared Folders": "Dijeljene mape",
"Shared With": "Dijeli se sa",
"Sharing": "Dijeljenje",
"Show ID": "Prikaži ID",
"Show QR": "Prikaži QR",
"Show detailed discovery status": "Prikaži detaljno stanje otkrivanja",
"Show detailed listener status": "Prikaži detaljno stanje slušača",
"Show diff with previous version": "Prikaži razliku s prethodnom verzijom",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Prikazuje se umjesto ID-ja uređaja u stanju klastera. Bit će predstavljeno drugim uređajima kao opcionalno zadano ime.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Prikazuje se umjesto ID-ja uređaja u stanju klastera. Bit će aktualizirano na ime koje uređaj predstavlja ako se ostavi prazno.",
"Shut Down": "Isključi",
"Shutdown Complete": "Isključivanje dovršeno",
"Simple": "Jednostavno",
"Simple File Versioning": "Jednostavno upravljanje verzijama datoteka",
"Single level wildcard (matches within a directory only)": "Jednorazinski zamjenski znak (podudara se samo unutar direktorija)",
"Size": "Veličina",
"Smallest First": "Najprije najmanje",
"Some discovery methods could not be established for finding other devices or announcing this device:": "Neke metode otkrivanja se nisu mogle uspostaviti za pronalaženje drugih uređaja ili najavljivanje ovog uređaja:",
"Some items could not be restored:": "Neke se stavke nisu mogle obnoviti:",
"Some listening addresses could not be enabled to accept connections:": "Neke adrese za osluškivanje se nisu mogle uključiti za prihvaćanje veza:",
"Source Code": "Izvorni kod",
"Stable releases and release candidates": "Stabilna izdanja i kandidati za izdanje",
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "Stabilna izdanja kasne oko dva tjedna. Tijekom tog vremena se testiraju kao kandidati za izdanje.",
"Stable releases only": "Samo stabilna izdanja",
"Staggered": "U fazama",
"Staggered File Versioning": "Upravljanje verzijama datoteka u fazama",
"Start Browser": "Pokreni preglednik",
"Statistics": "Statistike",
"Stay logged in": "Ostanite prijavljeni",
"Stopped": "Zaustavljeno",
"Stores and syncs only encrypted data. Folders on all connected devices need to be set up with the same password or be of type \"{%receiveEncrypted%}\" too.": "Sprema i sinkronizira samo šifrirane podatke. Mape na svim povezanim uređajima moraju biti postavljene s istom lozinkom ili također biti vrste „{{receiveEncrypted}}“.",
"Subject:": "Predmet:",
"Support": "Podrška",
"Support Bundle": "Paket podrške",
"Sync Extended Attributes": "Sinkroniziraj proširena svojstva",
"Sync Ownership": "Sinkroniziraj vlasništvo",
"Sync Status": "Stanje sinkronizacije",
"Syncing": "Sinkroniziranje",
"Syncthing device ID for \"{%devicename%}\"": "Syncthing ID uređaja za „{{devicename}}“",
"Syncthing has been shut down.": "Syncthing je isključen.",
"Syncthing includes the following software or portions thereof:": "Syncthing uključuje sljedeći softver ili njegove dijelove:",
"Syncthing is Free and Open Source Software licensed as MPL v2.0.": "Syncthing je besplatan softver otvorenog koda s licencom MPL v2.0.",
"Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time, safely protected from prying eyes. Your data is your data alone and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the internet.": "Syncthing je program za kontinuiranu sinkronizaciju datoteka. Sinkronizira datoteke između dva ili više računala u stvarnom vremenu, sigurno zaštićene od znatiželjnih očiju. Vaši podaci su isključivo vaši i možete odlučiti gdje će se spremati, hoće li se dijeliti s nekim trećim stranama i kako će se prenositi preko interneta.",
"Syncthing is listening on the following network addresses for connection attempts from other devices:": "Syncthing osluškuje na sljedećim mrežnim adresama pokušaje povezivanja s drugih uređaja:",
"Syncthing is not listening for connection attempts from other devices on any address. Only outgoing connections from this device may work.": "Syncthing ne prima pokušaje povezivanja s drugih uređaja na bilo kojoj adresi. Moguće je da rade samo odlazne veze s ovog uređaja.",
"Syncthing is restarting.": "Syncthing se ponovo pokreće.",
"Syncthing is saving changes.": "Syncthing sprema promjene.",
"Syncthing is upgrading.": "Syncthing se nadograđuje.",
"Syncthing now supports automatically reporting crashes to the developers. This feature is enabled by default.": "Syncthing sada podržava automatsko prijavljivanje prekida rada programa programerima. Ova je funkcija standardno uključena.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Čini se da Syncthing ne radi ili postoji problem s internetskom vezom. Pokušaj se ponavlja …",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Čini se da Syncthing ima problem u obradi vašeg zahtjeva. Aktualizirajte stranicu ili ponovo pokrenite Syncthing ako problem ne nestane.",
"TCP LAN": "TCP LAN",
"TCP WAN": "TCP WAN",
"Take me back": "Vrati me natrag",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "Opcije pokretanja nadjačavaju GUI adresu. Ovdje učinjene promjene neće stupiti na snagu dok je nadjačavanje postavljeno.",
"The Syncthing Authors": "Syncthing autori",
"The Syncthing admin interface is configured to allow remote access without a password.": "Administratorsko sučelje Syncthinga konfigurirano je tako da dopušta udaljeni pristup bez lozinke.",
"The aggregated statistics are publicly available at the URL below.": "Prikupljeni statistički podaci javno su dostupni na URL-u u nastavku.",
"The cleanup interval cannot be blank.": "Interval čišćenja ne može biti prazan.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfiguracija je spremljena, ali nije aktivirana. Syncthing se mora ponovo pokrenuti kako bi se aktivirala nova konfiguracija.",
"The device ID cannot be blank.": "ID uređaja ne može biti prazan.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID uređaja koji ovdje treba unijeti se može pronaći u dijalogu „Radnje > Prikaži ID“ na drugom uređaju. Razmaci i crtice su opcionalni (zanemaruju se).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes, and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Šifrirani izvještaj o korištenju šalje se svakodnevno. Koristi se za praćenje uobičajenih platformi, veličina mapa i verzija programa. Ako se prijavljeni skup podataka promijeni, ovaj će se dijalog ponovo prikazati.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Uneseni ID uređaja ne izgleda valjano. Trebao bi biti niz od 52 ili 56 znakova koji se sastoji od slova i brojeva, pri čemu su razmaci i crtice opcionalni.",
"The folder ID cannot be blank.": "ID mape ne može biti prazan.",
"The folder ID must be unique.": "ID mape mora biti jedinstven.",
"The folder content on other devices will be overwritten to become identical with this device. Files not present here will be deleted on other devices.": "Sadržaj mape na drugim uređajima će se prepisati kako bi postao identičan s ovim uređajem. Datoteke koje nisu ovdje će se izbrisati na drugim uređajima.",
"The folder content on this device will be overwritten to become identical with other devices. Files newly added here will be deleted.": "Sadržaj mape na ovom uređaju će se prepisati kako bi postao identičan s drugim uređajima. Ovdje novo dodane datoteke će se izbrisati.",
"The folder path cannot be blank.": "Staza mape ne može biti prazna.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Koriste se sljedeći intervali: u prvom satu se verzija čuva svakih 30 sekundi, prvi dan se verzija čuva svaki sat, prvih 30 dana se verzija čuva svaki dan, a do maksimalne starosti verzija se čuva svaki tjedan.",
"The following items could not be synchronized.": "Sljedeće stavke se nisu mogle sinkronizirati.",
"The following items were changed locally.": "Sljedeće stavke su promijenjene lokalno.",
"The following methods are used to discover other devices on the network and announce this device to be found by others:": "Sljedeće metode se koriste za otkrivanje drugih uređaja na mreži i objavljivanje ovog uređaja drugima:",
"The following text will automatically be inserted into a new message.": "Sljedeći tekst će se automatski umetnuti u novu poruku.",
"The following unexpected items were found.": "Pronađene su sljedeće neočekivane stavke.",
"The interval must be a positive number of seconds.": "Interval mora biti pozitivan broj sekundi.",
"The interval, in seconds, for running cleanup in the versions directory. Zero to disable periodic cleaning.": "Interval, u sekundama, za pokretanje čišćenja u direktoriju verzija. Nula za isključivanje periodičnog čišćenja.",
"The maximum age must be a number and cannot be blank.": "Maksimalna starost mora biti broj i ne može biti prazna.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maksimalno vrijeme za zadržavanje verzije (u danima, postavite na 0 da bi se verzije zauvijek zadržale).",
"The number of connections must be a non-negative number.": "Broj veza mora biti nenegativan broj.",
"The number of days must be a number and cannot be blank.": "Broj dana mora biti broj i ne može biti prazan.",
"The number of days to keep files in the trash can. Zero means forever.": "Broj dana za čuvanje datoteka u kanti za smeće. Nula znači zauvijek.",
"The number of old versions to keep, per file.": "Broj starih verzija koje se trebaju zadržati, po datoteci.",
"The number of versions must be a number and cannot be blank.": "Broj verzija mora biti broj i ne može biti prazan.",
"The path cannot be blank.": "Staza ne može biti prazna.",
"The rate limit is applied to the accumulated traffic of all connections to this device.": "Ograničenje brzine se primjenjuje na ukupni promet svih veza s ovim uređajem.",
"The rate limit must be a non-negative number (0: no limit)": "Ograničenje brzine mora biti nenegativan broj (0: bez ograničenja)",
"The remote device has not accepted sharing this folder.": "Udaljeni uređaj nije prihvatio dijeljenje ove mape.",
"The remote device has paused this folder.": "Udaljeni uređaj je pauzirao ovu mapu.",
"The rescan interval must be a non-negative number of seconds.": "Interval ponovnog skeniranja mora biti nenegativan broj sekundi.",
"There are no devices to share this folder with.": "Nema uređaja za dijeljenje ove mape.",
"There are no file versions to restore.": "Nema verzija datoteke za obnavljanje.",
"There are no folders to share with this device.": "Nema mapa za dijeljenje s ovim uređajem.",
"They are retried automatically and will be synced when the error is resolved.": "Ponovni pokušaji će se izvršiti automatski i sinkronizirat će se kada se greška riješi.",
"This Device": "Ovaj uređaj",
"This Month": "Ovaj mjesec",
"This can easily give hackers access to read and change any files on your computer.": "Ovo hakerima omogućuje jednostavan pristup za čitanje i mijenjanje bilo kojih datoteka na vašem računalu.",
"This device cannot automatically discover other devices or announce its own address to be found by others. Only devices with statically configured addresses can connect.": "Ovaj uređaj ne može automatski otkriti druge uređaje niti objaviti vlastitu adresu kako bi je drugi pronašli. Povezati se mogu samo uređaji sa statički konfiguriranim adresama.",
"This is a major version upgrade.": "Ovo je nadogradnja glavne verzije.",
"This setting controls the free space required on the home (i.e., index database) disk.": "Ova postavka upravlja potrebnom slobodnom memorijom na glavnom disku (tj. disku baze podataka indeksa).",
"Time": "Vrijeme",
"Time the item was last modified": "Vrijeme zadnje promjene stavke",
"To connect with the Syncthing device named \"{%devicename%}\", add a new remote device on your end with this ID:": "Za povezivanje sa Syncthing uređajem nazvanim „{{devicename}}“, dodajte novi udaljeni uređaj s ovim ID-om na svojoj strani:",
"To permit a rule, have the checkbox checked. To deny a rule, leave it unchecked.": "Za dopuštanje pravila, označite polje. Za odbijanje pravila, ostavite polje neoznačeno.",
"Today": "Danas",
"Trash Can": "Kanta za smeće",
"Trash Can File Versioning": "Upravljanje verzijama datoteka u kanti za smeće",
"Type": "Vrsta",
"UNIX Permissions": "UNIX dozvole",
"Unavailable": "Nedostupno",
"Unavailable/Disabled by administrator or maintainer": "Nedostupno/Onemogućeno od administratora ili održavatelja",
"Undecided (will prompt)": "Neodlučeno (odluka će se zatražiti)",
"Unexpected Items": "Neočekivane stavke",
"Unexpected items have been found in this folder.": "U ovoj su mapi pronađene neočekivane stavke.",
"Unignore": "Poništi zanemarivanje",
"Unknown": "Nepoznato",
"Unshared": "Nedijeljeno",
"Unshared Devices": "Nedijeljeni uređaji",
"Unshared Folders": "Nedijeljene mape",
"Untrusted": "Nepouzdano",
"Up to Date": "Aktualno",
"Updated {%file%}": "Aktualizirano {{file}}",
"Upgrade": "Nadogradi",
"Upgrade To {%version%}": "Nadogradi na {{version}}",
"Upgrading": "Nadograđivanje",
"Upload Rate": "Stopa prijenosa",
"Uptime": "Vrijeme rada",
"Usage reporting is always enabled for candidate releases.": "Izvještavanje o korištenju je uvijek uključeno za kandidatska izdanja.",
"Use HTTPS for GUI": "Koristi HTTPS za GUI",
"Use notifications from the filesystem to detect changed items.": "Koristite obavijesti iz datotečnog sustava za otkrivanje promijenjenih stavki.",
"User": "Korisnik",
"User Home": "Korisnički direktorij",
"Username/Password has not been set for the GUI authentication. Please consider setting it up.": "Korisničko ime/lozinka nisu postavljeni za GUI autentifikaciju. Razmislite o njihovom postavljanju.",
"Using a QUIC connection over LAN": "Korištenje QUIC veze preko LAN-a",
"Using a QUIC connection over WAN": "Korištenje QUIC veze preko WAN-a",
"Using a direct TCP connection over LAN": "Korištenje izravne TCP veze preko LAN-a",
"Using a direct TCP connection over WAN": "Korištenje izravne TCP veze preko WAN-a",
"Version": "Verzija",
"Versions": "Verzije",
"Versions Path": "Staza verzija",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Verzije se automatski brišu ako su starije od maksimalne starosti ili premaše broj dopuštenih datoteka u intervalu.",
"Waiting to Clean": "Čekanje na čišćenje",
"Waiting to Scan": "Čekanje na skeniranje",
"Waiting to Sync": "Čekanje na sinkronizaciju",
"Warning": "Upozorenje",
"Warning, this path is a parent directory of an existing folder \"{%otherFolder%}\".": "Upozorenje, ova je staza nadređeni direktorij postojeće mape „{{otherFolder}}“.",
"Warning, this path is a parent directory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Upozorenje, ova je staza nadređeni direktorij postojeće mape „{{otherFolderLabel}}“ ({{otherFolder}}).",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Upozorenje, ova je staza poddirektorij postojeće mape „{{otherFolder}}“.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Upozorenje, ova je staza poddirektorij postojeće mape „{{otherFolderLabel}}“ ({{otherFolder}}).",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Upozorenje: Ako koristite eksterni alat za praćenje kao što je {{syncthingInotify}}, osigurajte da je deaktiviran.",
"Watch for Changes": "Prati promjene",
"Watching for Changes": "Praćenje promjena",
"Watching for changes discovers most changes without periodic scanning.": "Praćenje promjena otkriva većinu promjena bez periodičnog skeniranja.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Prilikom dodavanja novog uređaja imajte na umu da se ovaj uređaj mora dodati i na drugoj strani.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Prilikom dodavanja nove mape imajte na umu da se ID mape koristi za povezivanje mapa između uređaja. Razlikuju velika i mala slova i moraju se točno podudarati između svih uređaja.",
"When set to more than one on both devices, Syncthing will attempt to establish multiple concurrent connections. If the values differ, the highest will be used. Set to zero to let Syncthing decide.": "Kada je postavljen na više od jednog na oba uređaja, Syncthing će pokušati uspostaviti više istodobnih veza. Ako se vrijednosti razlikuju, koristit će se najviša. Postavite na nulu kako bi Syncthing odlučio.",
"Yes": "Da",
"Yesterday": "Jučer",
"You can also copy and paste the text into a new message manually.": "Tekst možete i ručno kopirati i zalijepiti u novu poruku.",
"You can also select one of these nearby devices:": "Možete odabrati i jedan od sljedećih uređaja u blizini:",
"You can change your choice at any time in the Settings dialog.": "Izbor možete promijeniti u bilo kojem trenutku u dijalogu „Postavke“.",
"You can read more about the two release channels at the link below.": "Saznajte više o dvama izdavačkim kanalima putem sljedeće poveznice.",
"You have no ignored devices.": "Nemate zanemarene uređaje.",
"You have no ignored folders.": "Nemate zanemarene mape.",
"You have unsaved changes. Do you really want to discard them?": "Imate nespremljene promjene. Želite li ih doista odbaciti?",
"You must keep at least one version.": "Morate zadržati barem jednu verziju.",
"You should never add or change anything locally in a \"{%receiveEncrypted%}\" folder.": "Nemojte nikada nešto lokalno dodavati ili mijenjati u mapi „{{receiveEncrypted}}“.",
"Your SMS app should open to let you choose the recipient and send it from your own number.": "Vaš program za SMS bi se trebao otvoriti kako biste mogli odabrati primatelja i poslati poruku sa svog broja.",
"Your email app should open to let you choose the recipient and send it from your own address.": "Vaš program za e-poštu bi se trebao otvoriti kako biste mogli odabrati primatelja i poslati poruku s vlastite adrese.",
"days": "dani",
"deleted": "izbrisano",
"deny": "odbij",
"directories": "direktoriji",
"file": "datoteka",
"files": "datoteke",
"folder": "mapa",
"full documentation": "potpuna dokumentacija",
"items": "stavke",
"modified": "promijenjeno",
"permit": "dozvoli",
"seconds": "sekunde",
"theme": {
"name": {
"black": "Crna",
"dark": "Tamna",
"default": "Zadano",
"light": "Svijetla"
}
},
"unknown device": "nepoznat uređaj",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} želi dijeliti mapu „{{folder}}“.",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} želi dijeliti mapu „{{folderlabel}}“ ({{folder}}).",
"{%reintroducer%} might reintroduce this device.": "{{reintroducer}} bi možda mogao ponovno uvesti ovaj uređaj."
}


@@ -82,6 +82,7 @@
"Custom Range": "Выбрать диапазон",
"Danger!": "Опасно!",
"Database Location": "Расположение базы данных",
"Debug": "Отладка",
"Debugging Facilities": "Средства отладки",
"Default": "По умолчанию",
"Default Configuration": "Настройки по умолчанию",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Ограничение входящей скорости (КиБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Неправильные настройки могут повредить содержимое папок и сделать Syncthing неработоспособным.",
"Incorrect user name or password.": "Неверное имя пользователя или пароль.",
"Info": "Информация",
"Internally used paths:": "Внутренние используемые пути:",
"Introduced By": "Рекомендовано",
"Introducer": "Рекомендатель",
@@ -255,7 +257,7 @@
"Metadata Only": "Только метаданные",
"Minimum Free Disk Space": "Минимальное свободное место на диске",
"Mod. Device": "Изм. устройство",
"Mod. Time": "Посл. изм.",
"Mod. Time": "Время изменения",
"More than a month ago": "Больше месяца назад",
"More than a week ago": "Больше недели назад",
"More than a year ago": "Больше года назад",
@@ -542,7 +544,7 @@
"items": "объекты",
"modified": "изменено",
"permit": "разрешить",
"seconds": "сек.",
"seconds": "секунд",
"theme": {
"name": {
"black": "Чёрная",


@@ -82,6 +82,7 @@
"Custom Range": "Обрати діапазон",
"Danger!": "Небезпечно!",
"Database Location": "Розташування бази даних",
"Debug": "Налагодження",
"Debugging Facilities": "Засоби налагодження",
"Default": "Типово",
"Default Configuration": "Типові налаштування",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Ліміт швидкості завантаження (КіБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Невірна конфігурація може пошкодити вміст вашої теки та зробити Syncthing недієздатним.",
"Incorrect user name or password.": "Невірний логін або пароль.",
"Info": "Інформація",
"Internally used paths:": "Шляхи, що використовуються внутрішньо:",
"Introduced By": "Рекомендовано",
"Introducer": "Рекомендувач",
@@ -254,13 +256,13 @@
"Maximum total size": "Максимальний загальний розмір",
"Metadata Only": "Тільки метадані",
"Minimum Free Disk Space": "Мінімальний вільний простір на диску",
"Mod. Device": "Модифікований пристрій:",
"Mod. Time": "Час модифікації:",
"Mod. Device": "Модифікований пристрій",
"Mod. Time": "Час модифікації",
"More than a month ago": "Більше місяця тому",
"More than a week ago": "Більше тижня тому",
"More than a year ago": "Більше року тому",
"Move to top of queue": "Пересунути у початок черги",
"Multi level wildcard (matches multiple directory levels)": "Багаторівнева маска (пошук збігів в усіх піддиректоріях) ",
"Multi level wildcard (matches multiple directory levels)": "Багаторівнева маска (пошук збігів в усіх піддиректоріях)",
"Never": "Ніколи",
"New Device": "Новий пристрій",
"New Folder": "Нова тека",
@@ -287,7 +289,7 @@
"Password": "Пароль",
"Path": "Шлях",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Шлях до папки на локальному комп’ютері. Буде створений, якщо не існує. Символ тильди (~) може бути використаний як ярлик для",
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "Шлях, де повинні зберігатися версії (залиште порожнім для зберігання в .stversions усередині директорії)",
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "Шлях, де повинні зберігатися версії (залиште порожнім для зберігання в .stversions усередині директорії).",
"Paths": "Шляхи",
"Pause": "Пауза",
"Pause All": "Призупинити все",
@@ -298,7 +300,7 @@
"Periodic scanning at given interval and enabled watching for changes": "Періодичне сканування через визначений інтервал та увімкнене відстеження змін",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Періодичне сканування через визначений інтервал та невдале відстеження змін, повторні спроби кожну 1 хв.:",
"Permanently add it to the ignore list, suppressing further notifications.": "Додати до чорного списку, щоб ігнорувати подальші сповіщення.",
"Please consult the release notes before performing a major upgrade.": "Будь ласка, перегляньте примітки до випуску перед мажорним оновленням. ",
"Please consult the release notes before performing a major upgrade.": "Будь ласка, перегляньте примітки до випуску перед мажорним оновленням.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Будь ласка, встановіть у налаштуваннях ім'я користувача та пароль до панелі керування.",
"Please wait": "Будь ласка, зачекайте",
"Prefix indicating that the file can be deleted if preventing directory removal": "Префікс означає, що файл може бути видалений при запобіганні видаленню директорії",
@@ -375,11 +377,11 @@
"Shutdown Complete": "Вимикання завершене",
"Simple": "Просте",
"Simple File Versioning": "Просте версіювання",
"Single level wildcard (matches within a directory only)": "Однорівнева маска (пошук збігів лише в середині директорії) ",
"Single level wildcard (matches within a directory only)": "Однорівнева маска (пошук збігів лише в середині директорії)",
"Size": "Розмір",
"Smallest First": "Спершу найменші",
"Some discovery methods could not be established for finding other devices or announcing this device:": "Деякі методи виявлення не вдалося встановити для пошуку інших пристроїв або оголошення цього пристрою:",
"Some items could not be restored:": "Деякі елементи не можуть бути відновлені: ",
"Some items could not be restored:": "Деякі елементи не можуть бути відновлені:",
"Some listening addresses could not be enabled to accept connections:": "Деякі адреси прослуховування не можуть приймати підключення:",
"Source Code": "Сирцевий код",
"Stable releases and release candidates": "Стабільні випуски та реліз-кандидати",
@@ -448,14 +450,14 @@
"The number of versions must be a number and cannot be blank.": "Кількість версій повинна бути цифрою та не може бути порожньою.",
"The path cannot be blank.": "Шлях не може бути порожнім.",
"The rate limit is applied to the accumulated traffic of all connections to this device.": "Обмеження швидкості накладається на сумарний трафік всіх під'єднань до цього пристрою.",
"The rate limit must be a non-negative number (0: no limit)": "Швидкість має бути додатнім числом.",
"The rate limit must be a non-negative number (0: no limit)": "Обмеження швидкості має бути невід’ємним числом (0: без обмежень)",
"The remote device has not accepted sharing this folder.": "Віддалений пристрій не прийняв спільний доступ до цієї папки.",
"The remote device has paused this folder.": "Віддалений пристрій призупинив синхронізацію цієї папки.",
"The rescan interval must be a non-negative number of seconds.": "Інтервал повторного сканування повинен бути невід’ємною кількістю секунд.",
"There are no devices to share this folder with.": "Немає пристроїв, які мають доступ до цієї папки.",
"There are no file versions to restore.": "Немає версій для відновлення.",
"There are no folders to share with this device.": "Немає папок для спільного використання з цим пристроєм.",
"They are retried automatically and will be synced when the error is resolved.": "Вони будуть автоматично повторно синхронізовані, коли помилку буде усунено. ",
"They are retried automatically and will be synced when the error is resolved.": "Вони будуть автоматично повторно синхронізовані, коли помилку буде усунено.",
"This Device": "Локальний пристрій",
"This Month": "Цього місяця",
"This can easily give hackers access to read and change any files on your computer.": "Це легко може дати хакерам доступ на читання та внесення змін до будь-яких файлів на вашому комп'ютері.",
@@ -463,7 +465,7 @@
"This is a major version upgrade.": "Це мажорне оновлення.",
"This setting controls the free space required on the home (i.e., index database) disk.": "Це налаштування визначає необхідний вільний простір на домашньому (тобто той, що містить базу даних) диску.",
"Time": "Час",
"Time the item was last modified": "Час останньої зміни елемента:",
"Time the item was last modified": "Час останньої зміни елемента",
"To connect with the Syncthing device named \"{%devicename%}\", add a new remote device on your end with this ID:": "Щоб підключитися до пристрою Syncthing з назвою \"{{devicename}}\", додайте новий віддалений пристрій із свого боку за цим ID:",
"To permit a rule, have the checkbox checked. To deny a rule, leave it unchecked.": "Аби застосувати правило, зазначте поле. Аби відмінити правило, залишить поле порожнім.",
"Today": "Сьогодні",


@@ -1 +1 @@
var langPrettyprint = {"ar":"Arabic","bg":"Bulgarian","ca":"Catalan","ca@valencia":"Valencian","cs":"Czech","da":"Danish","de":"German","el":"Greek","en":"English","en-GB":"English (United Kingdom)","es":"Spanish","eu":"Basque","fil":"Filipino","fr":"French","fy":"Frisian","ga":"Irish","he-IL":"Hebrew (Israel)","hi":"Hindi","hu":"Hungarian","id":"Indonesian","it":"Italian","ja":"Japanese","ko-KR":"Korean","lt":"Lithuanian","nl":"Dutch","pl":"Polish","pt-BR":"Portuguese (Brazil)","pt-PT":"Portuguese (Portugal)","ro-RO":"Romanian","ru":"Russian","sk":"Slovak","sl":"Slovenian","sv":"Swedish","tr":"Turkish","uk":"Ukrainian","zh-CN":"Chinese (Simplified Han script)","zh-HK":"Chinese (Traditional Han script, Hong Kong)","zh-TW":"Chinese (Traditional Han script)"}
var langPrettyprint = {"ar":"Arabic","bg":"Bulgarian","ca":"Catalan","ca@valencia":"Valencian","cs":"Czech","da":"Danish","de":"German","el":"Greek","en":"English","en-GB":"English (United Kingdom)","es":"Spanish","eu":"Basque","fil":"Filipino","fr":"French","fy":"Frisian","ga":"Irish","he-IL":"Hebrew (Israel)","hi":"Hindi","hr":"Croatian","hu":"Hungarian","id":"Indonesian","it":"Italian","ja":"Japanese","ko-KR":"Korean","lt":"Lithuanian","nl":"Dutch","pl":"Polish","pt-BR":"Portuguese (Brazil)","pt-PT":"Portuguese (Portugal)","ro-RO":"Romanian","ru":"Russian","sk":"Slovak","sl":"Slovenian","sv":"Swedish","tr":"Turkish","uk":"Ukrainian","zh-CN":"Chinese (Simplified Han script)","zh-HK":"Chinese (Traditional Han script, Hong Kong)","zh-TW":"Chinese (Traditional Han script)"}


@@ -1 +1 @@
var validLangs = ["ar","bg","ca","ca@valencia","cs","da","de","el","en","en-GB","es","eu","fil","fr","fy","ga","he-IL","hi","hu","id","it","ja","ko-KR","lt","nl","pl","pt-BR","pt-PT","ro-RO","ru","sk","sl","sv","tr","uk","zh-CN","zh-HK","zh-TW"]
var validLangs = ["ar","bg","ca","ca@valencia","cs","da","de","el","en","en-GB","es","eu","fil","fr","fy","ga","he-IL","hi","hr","hu","id","it","ja","ko-KR","lt","nl","pl","pt-BR","pt-PT","ro-RO","ru","sk","sl","sv","tr","uk","zh-CN","zh-HK","zh-TW"]


@@ -30,7 +30,7 @@
<h4 class="text-center" translate>The Syncthing Authors</h4>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Simon Frei, Tomasz Wilczyński, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, bt90, Caleb Callaway, Daniel Harte, Emil Lundberg, Eric P, Evgeny Kuznetsov, greatroar, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Ross Smith II, Stefan Tatschner, Wulf Weich, Adam Piggott, Adel Qalieh, Aleksey Vasenev, Alessandro G., Alex Ionescu, Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Andreas Sommer, andresvia, Andrew Rabert, Andrey D, andyleap, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, ardevd, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Ashish Bhate, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Beat Reichenbach, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benno Fünfstück, Benny Ng, boomsquared, Boqin Qin, Boris Rybalkin, Brendan Long, Catfriend1, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Christian Kujau, Christian Prescott, chucic, cjc7373, Colin Kennedy, Cromefire_, Cyprien Devillez, d-volution, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Daniel Padrta, Daniil Gentili, Darshil Chanpura, dashangcun, David Rimmer, DeflateAwning, Denis A., Dennis Wilson, derekriemer, DerRockWolf, desbma, Devon G. 
Redekopp, digital, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, domain, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Eng Zer Jun, entity0xfe, Eric Lesiuta, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, georgespatton, ghjklw, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, guangwu, gudvinr, Gusted, Han Boetes, HansK-p, Harrison Jones, Hazem Krimi, Heiko Zuerker, Hireworks, Hugo Locurcio, Iain Barnett, Ian Johnson, ignacy123, Iskander Sharipov, Jaakko Hannikainen, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jaspitta, Jaya Chithra, Jaya Kumar, Jeffery To, jelle van der Waa, Jens Diemer, Jochen Voss, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jose Manuel Delicado, jtagcat, Julian Lehrhuber, Jörg Thalheim, Jędrzej Kula, Kapil Sareen, Karol Różycki, Kebin Liu, Keith Harrison, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., klemens, Kurt Fitzner, kylosus, Lars Lehtonen, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, LSmithx2, Lukas Lihotzki, Luke Hamburg, luzpaz, Majed Abdulaziz, Marc Laporte, Marcel Meyer, Marcin Dziadus, Marcus B Spencer, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Mateusz Naściszewski, Mateusz Ż, mathias4833, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maximilian, Michael Jephcote, Michael Rienstra, MichaIng, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, mv1005, Nate Morrison, nf, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, orangekame3, otbutz, overkill, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Paul Donald, Pawel Palenica, perewa, Peter Badida, Peter Dave 
Hello, Peter Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Philippe Schommers, Phill Luby, Piotr Bejda, polyfloyd, pullmerge, Quentin Hibon, Rahmi Pruitt, red_led, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, rubenbe, Ruslan Yevdokymov, Ryan Qian, Ryan Sullivan, Sacheendra Talluri, Scott Klupfel, sec65, Sergey Mishin, Sertonix, Severin von Wnuck-Lipinski, Shaarad Dalvi, Simon Mwepu, Simon Pickup, Sly_tom_cat, Sonu Kumar Saw, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Sven Bachmann, Sébastien WENSKE, Taylor Khan, Terrance, TheCreeper, Thomas, Thomas Hipp, Tim Abell, Tim Howes, Tobias Frölich, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tommy van der Vorst, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, vapatel2, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, villekalliomaki, Vladimir Rusinov, wangguoliang, WangXi, Will Rouesnel, William A. Kennington III, wouter bolsterlee, xarx00, Xavier O., xjtdy888, Yannic A., yparitcher, 佛跳墙, 落心
Jakob Borg, Audrius Butkevicius, Simon Frei, Tomasz Wilczyński, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, bt90, Caleb Callaway, Daniel Harte, Emil Lundberg, Eric P, Evgeny Kuznetsov, greatroar, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Ross Smith II, Stefan Tatschner, Tommy van der Vorst, Wulf Weich, Adam Piggott, Adel Qalieh, Aleksey Vasenev, Alessandro G., Alex Ionescu, Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Andreas Sommer, andresvia, Andrew Rabert, Andrey D, andyleap, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, ardevd, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Ashish Bhate, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Beat Reichenbach, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benno Fünfstück, Benny Ng, boomsquared, Boqin Qin, Boris Rybalkin, Brendan Long, Catfriend1, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Christian Kujau, Christian Prescott, chucic, cjc7373, Colin Kennedy, Cromefire_, Cyprien Devillez, d-volution, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Daniel Padrta, Daniil Gentili, Darshil Chanpura, dashangcun, David Rimmer, DeflateAwning, Denis A., Dennis Wilson, derekriemer, DerRockWolf, desbma, Devon G. 
Redekopp, digital, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, domain, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Eng Zer Jun, entity0xfe, Eric Lesiuta, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, georgespatton, ghjklw, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, guangwu, gudvinr, Gusted, Han Boetes, HansK-p, Harrison Jones, Hazem Krimi, Heiko Zuerker, Hireworks, Hugo Locurcio, Iain Barnett, Ian Johnson, ignacy123, Iskander Sharipov, Jaakko Hannikainen, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jaspitta, Jaya Chithra, Jaya Kumar, Jeffery To, jelle van der Waa, Jens Diemer, Jochen Voss, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jose Manuel Delicado, jtagcat, Julian Lehrhuber, Jörg Thalheim, Jędrzej Kula, Kapil Sareen, Karol Różycki, Kebin Liu, Keith Harrison, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., klemens, Kurt Fitzner, kylosus, Lars Lehtonen, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, LSmithx2, Lukas Lihotzki, Luke Hamburg, luzpaz, Majed Abdulaziz, Marc Laporte, Marcel Meyer, Marcin Dziadus, Marcus B Spencer, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Mateusz Naściszewski, Mateusz Ż, mathias4833, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maximilian, Michael Jephcote, Michael Rienstra, MichaIng, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, mv1005, Nate Morrison, nf, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, orangekame3, otbutz, overkill, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Paul Donald, Pawel Palenica, perewa, Peter Badida, Peter Dave 
Hello, Peter Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Philippe Schommers, Phill Luby, Piotr Bejda, polyfloyd, pullmerge, Quentin Hibon, Rahmi Pruitt, red_led, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, rubenbe, Ruslan Yevdokymov, Ryan Qian, Ryan Sullivan, Sacheendra Talluri, Scott Klupfel, sec65, Sergey Mishin, Sertonix, Severin von Wnuck-Lipinski, Shaarad Dalvi, Simon Mwepu, Simon Pickup, Sly_tom_cat, Sonu Kumar Saw, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Sven Bachmann, Sébastien WENSKE, Taylor Khan, Terrance, TheCreeper, Thomas, Thomas Hipp, Tim Abell, Tim Howes, Tobias Frölich, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, vapatel2, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, villekalliomaki, Vladimir Rusinov, wangguoliang, WangXi, Will Rouesnel, William A. Kennington III, wouter bolsterlee, xarx00, Xavier O., xjtdy888, Yannic A., yparitcher, 佛跳墙, 落心
</div>
</div>
</div>


@@ -159,7 +159,7 @@
<div class="row">
<span class="col-md-8" translate>Incoming Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" />
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxRecvKbps.$valid && deviceEditor.maxRecvKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>
@@ -168,7 +168,7 @@
<div class="row">
<span class="col-md-8" translate>Outgoing Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" />
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxSendKbps.$valid && deviceEditor.maxSendKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>


@@ -146,7 +146,7 @@
<div class="form-group" ng-if="internalVersioningEnabled()" ng-class="{'has-error': folderEditor.cleanupIntervalS.$invalid && folderEditor.cleanupIntervalS.$dirty}">
<label translate for="cleanupIntervalS">Cleanup Interval</label>
<div class="input-group">
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" aria-required="true" />
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" step="3600" aria-required="true" />
<div class="input-group-addon" translate>seconds</div>
</div>
<p class="help-block">


@@ -194,7 +194,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxRecvKbps.$invalid && settingsEditor.MaxRecvKbps.$dirty}">
<label translate for="MaxRecvKbps">Incoming Rate Limit (KiB/s)</label>
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" />
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxRecvKbps.$error.min && settingsEditor.MaxRecvKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>
@@ -203,7 +203,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxSendKbps.$invalid && settingsEditor.MaxSendKbps.$dirty}">
<label translate for="MaxSendKbps">Outgoing Rate Limit (KiB/s)</label>
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" />
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxSendKbps.$error.min && settingsEditor.MaxSendKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>


@@ -7,9 +7,11 @@
package sqlite
import (
"context"
"database/sql"
"embed"
"io/fs"
"log/slog"
"net/url"
"path/filepath"
"strconv"
@@ -19,11 +21,16 @@ import (
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 3
const (
currentSchemaVersion = 5
applicationIDMain = 0x53546d6e // "STmn", Syncthing main database
applicationIDFolder = 0x53546664 // "STfd", Syncthing folder database
)
//go:embed sql/**
var embedded embed.FS
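The two application IDs above are just the ASCII bytes of the tags "STmn" and "STfd" packed big-endian into a 32-bit integer, which is the form SQLite stores via `PRAGMA application_id`. A quick standalone check (`asciiID` is a hypothetical helper for illustration, not part of the codebase):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// asciiID packs a four-character tag into a uint32, big-endian,
// matching how the application ID constants above are derived.
func asciiID(tag string) uint32 {
	return binary.BigEndian.Uint32([]byte(tag))
}

func main() {
	fmt.Printf("0x%x 0x%x\n", asciiID("STmn"), asciiID("STfd"))
	// prints 0x53546d6e 0x53546664, matching the constants above
}
```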
@@ -81,13 +88,44 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
},
}
// Create a specific connection for the schema setup and migration to
// run in. We do this because we need to disable foreign keys for the
// duration, which is a thing that needs to happen outside of a
// transaction and affects the connection it's run on. So we need to a)
// make sure all our commands run on this specific connection (which the
// transaction accomplishes naturally) and b) make sure these pragmas
// don't leak to anyone else afterwards.
ctx := context.TODO()
conn, err := db.sql.Connx(ctx)
if err != nil {
return nil, wrap(err)
}
defer func() {
_, _ = conn.ExecContext(ctx, "PRAGMA foreign_keys = ON")
_, _ = conn.ExecContext(ctx, "PRAGMA legacy_alter_table = OFF")
conn.Close()
}()
if _, err := conn.ExecContext(ctx, "PRAGMA foreign_keys = OFF"); err != nil {
return nil, wrap(err)
}
if _, err := conn.ExecContext(ctx, "PRAGMA legacy_alter_table = ON"); err != nil {
return nil, wrap(err)
}
tx, err := conn.BeginTxx(ctx, nil)
if err != nil {
return nil, wrap(err)
}
defer tx.Rollback()
for _, script := range schemaScripts {
if err := db.runScripts(script); err != nil {
if err := db.runScripts(tx, script); err != nil {
return nil, wrap(err)
}
}
ver, _ := db.getAppliedSchemaVersion()
ver, _ := db.getAppliedSchemaVersion(tx)
shouldVacuum := false
if ver.SchemaVersion > 0 {
filter := func(scr string) bool {
scr = filepath.Base(scr)
@@ -99,20 +137,53 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
if err != nil {
return false
}
return int(n) > ver.SchemaVersion
if int(n) > ver.SchemaVersion {
slog.Info("Applying database migration", slogutil.FilePath(db.baseName), slog.String("script", scr))
shouldVacuum = true
return true
}
return false
}
for _, script := range migrationScripts {
if err := db.runScripts(script, filter); err != nil {
if err := db.runScripts(tx, script, filter); err != nil {
return nil, wrap(err)
}
}
// Run the initial schema scripts once more. This is generally a
// no-op. However, dropping a table removes associated triggers etc,
// and that's a thing we sometimes do in migrations. To avoid having
// to repeat the setup of associated triggers and indexes in the
// migration, we re-run the initial schema scripts.
for _, script := range schemaScripts {
if err := db.runScripts(tx, script); err != nil {
return nil, wrap(err)
}
}
// Finally, ensure nothing we've done along the way has violated key integrity.
if _, err := conn.ExecContext(ctx, "PRAGMA foreign_key_check"); err != nil {
return nil, wrap(err)
}
}
// Set the current schema version, if not already set
if err := db.setAppliedSchemaVersion(currentSchemaVersion); err != nil {
if err := db.setAppliedSchemaVersion(tx, currentSchemaVersion); err != nil {
return nil, wrap(err)
}
if err := tx.Commit(); err != nil {
return nil, wrap(err)
}
if shouldVacuum {
// We applied migrations and should take the opportunity to vacuum
// the database.
if err := db.vacuumAndOptimize(); err != nil {
return nil, wrap(err)
}
}
return db, nil
}
@@ -188,6 +259,20 @@ func (s *baseDB) expandTemplateVars(tpl string) string {
return sb.String()
}
func (s *baseDB) vacuumAndOptimize() error {
stmts := []string{
"VACUUM;",
"PRAGMA optimize;",
"PRAGMA wal_checkpoint(truncate);",
}
for _, stmt := range stmts {
if _, err := s.sql.Exec(stmt); err != nil {
return wrap(err, stmt)
}
}
return nil
}
type stmt interface {
Exec(args ...any) (sql.Result, error)
Get(dest any, args ...any) error
@@ -204,18 +289,12 @@ func (f failedStmt) Get(_ any, _ ...any) error { return f.err }
func (f failedStmt) Queryx(_ ...any) (*sqlx.Rows, error) { return nil, f.err }
func (f failedStmt) Select(_ any, _ ...any) error { return f.err }
func (s *baseDB) runScripts(glob string, filter ...func(s string) bool) error {
func (s *baseDB) runScripts(tx *sqlx.Tx, glob string, filter ...func(s string) bool) error {
scripts, err := fs.Glob(embedded, glob)
if err != nil {
return wrap(err)
}
tx, err := s.sql.Begin()
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
nextScript:
for _, scr := range scripts {
for _, fn := range filter {
@@ -233,12 +312,17 @@ nextScript:
// also statement-internal semicolons in the triggers.
for _, stmt := range strings.Split(string(bs), "\n;") {
if _, err := tx.Exec(s.expandTemplateVars(stmt)); err != nil {
return wrap(err, stmt)
if strings.Contains(stmt, "syncthing:ignore-failure") {
// We're ok with this failing. Just note it.
slog.Debug("Script failed, but with ignore-failure annotation", slog.String("script", scr), slogutil.Error(wrap(err, stmt)))
} else {
return wrap(err, stmt)
}
}
}
}
return wrap(tx.Commit())
return nil
}
type schemaVersion struct {
@@ -251,20 +335,20 @@ func (s *schemaVersion) AppliedTime() time.Time {
return time.Unix(0, s.AppliedAt)
}
func (s *baseDB) setAppliedSchemaVersion(ver int) error {
_, err := s.stmt(`
func (s *baseDB) setAppliedSchemaVersion(tx *sqlx.Tx, ver int) error {
_, err := tx.Exec(`
INSERT OR IGNORE INTO schemamigrations (schema_version, applied_at, syncthing_version)
VALUES (?, ?, ?)
`).Exec(ver, time.Now().UnixNano(), build.LongVersion)
`, ver, time.Now().UnixNano(), build.LongVersion)
return wrap(err)
}
func (s *baseDB) getAppliedSchemaVersion() (schemaVersion, error) {
func (s *baseDB) getAppliedSchemaVersion(tx *sqlx.Tx) (schemaVersion, error) {
var v schemaVersion
err := s.stmt(`
err := tx.Get(&v, `
SELECT schema_version as schemaversion, applied_at as appliedat, syncthing_version as syncthingversion FROM schemamigrations
ORDER BY schema_version DESC
LIMIT 1
`).Get(&v)
`)
return v, wrap(err)
}

View File

@@ -7,13 +7,14 @@
package sqlite
import (
"context"
"fmt"
"os"
"testing"
"time"
"github.com/syncthing/syncthing/internal/timeutil"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
@@ -21,7 +22,7 @@ import (
var globalFi protocol.FileInfo
func BenchmarkUpdate(b *testing.B) {
db, err := OpenTemp()
db, err := Open(b.TempDir())
if err != nil {
b.Fatal(err)
}
@@ -30,19 +31,20 @@ func BenchmarkUpdate(b *testing.B) {
b.Fatal(err)
}
})
svc := db.Service(time.Hour).(*Service)
fs := make([]protocol.FileInfo, 100)
t0 := time.Now()
seed := 0
size := 1000
const numBlocks = 500
fdb, err := db.getFolderDB(folderID, true)
if err != nil {
b.Fatal(err)
}
size := 10000
for size < 200_000 {
t0 := time.Now()
if err := svc.periodic(context.Background()); err != nil {
b.Fatal(err)
}
b.Log("garbage collect in", time.Since(t0))
for {
local, err := db.CountLocal(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -53,17 +55,28 @@ func BenchmarkUpdate(b *testing.B) {
}
fs := make([]protocol.FileInfo, 1000)
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.Run(fmt.Sprintf("Insert100Loc@%d", size), func(b *testing.B) {
var files, blocks int
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM files`).Scan(&files); err != nil {
b.Fatal(err)
}
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM blocks`).Scan(&blocks); err != nil {
b.Fatal(err)
}
d := time.Since(t0)
b.Logf("t=%s, files=%d, blocks=%d, files/s=%.01f, blocks/s=%.01f", d, files, blocks, float64(files)/d.Seconds(), float64(blocks)/d.Seconds())
b.Run(fmt.Sprintf("n=Insert100Loc/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
@@ -72,7 +85,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepBlocks100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepBlocks100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -86,7 +99,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepSame100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepSame100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Version = fs[i].Version.Update(42)
@@ -98,7 +111,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("Insert100Rem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=Insert100Rem/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -112,7 +125,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetGlobal100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetGlobal100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
_, ok, err := db.GetGlobalFile(folderID, fs[i].Name)
@@ -127,7 +140,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalSequenced@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalSequenced/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
cur, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
@@ -146,7 +159,21 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetDeviceSequenceLoc@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=AllLocalBlocksWithHash/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllLocalBlocksWithHash(folderID, globalFi.Blocks[0].Hash)
for range it {
count++
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "blocks/s")
})
b.Run(fmt.Sprintf("n=GetDeviceSequenceLoc/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -154,7 +181,7 @@ func BenchmarkUpdate(b *testing.B) {
}
}
})
b.Run(fmt.Sprintf("GetDeviceSequenceRem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetDeviceSequenceRem/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.DeviceID{42})
if err != nil {
@@ -163,7 +190,7 @@ func BenchmarkUpdate(b *testing.B) {
}
})
b.Run(fmt.Sprintf("RemoteNeed@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RemoteNeed/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0)
@@ -178,7 +205,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalNeed100Largest@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalNeed100Largest/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderLargestFirst, 100, 0)
@@ -193,16 +220,16 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
size <<= 1
size += 1000
}
}
func TestBenchmarkDropAllRemote(t *testing.T) {
if testing.Short() {
if testing.Short() || os.Getenv("LONG_TEST") == "" {
t.Skip("slow test")
}
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -241,3 +268,61 @@ func TestBenchmarkDropAllRemote(t *testing.T) {
d := time.Since(t0)
t.Log("drop all took", d)
}
func TestBenchmarkSizeManyFilesRemotes(t *testing.T) {
// Reports the database size for a setup with many files and many remote
// devices each announcing every file, with fairly long file names and
// "worst case" version vectors.
if testing.Short() || os.Getenv("LONG_TEST") == "" {
t.Skip("slow test")
}
dir := t.TempDir()
db, err := Open(dir)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// This is equivalent to about 800 GiB in 100k files (i.e., 8 MiB per
// file), shared between 31 devices where each has touched every file.
const numFiles = 1e5
const numRemotes = 30
const numBlocks = 64
const filenameLen = 64
fs := make([]protocol.FileInfo, 1000)
n := 0
seq := 0
for n < numFiles {
for i := range fs {
seq++
fs[i] = genFile(rand.String(filenameLen), numBlocks, seq)
for r := range numRemotes {
fs[i].Version = fs[i].Version.Update(42 + protocol.ShortID(r))
}
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
t.Fatal(err)
}
for r := range numRemotes {
if err := db.Update(folderID, protocol.DeviceID{byte(42 + r)}, fs); err != nil {
t.Fatal(err)
}
}
n += len(fs)
t.Log(n, (numRemotes+1)*n)
}
if err := db.Close(); err != nil {
t.Fatal(err)
}
size := osutil.DirSize(dir)
t.Logf("Total size: %.02f MiB", float64(size)/1024/1024)
}

View File

@@ -17,7 +17,7 @@ import (
func TestNeed(t *testing.T) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -115,7 +115,7 @@ func TestDropRecalcsGlobal(t *testing.T) {
func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -181,7 +181,7 @@ func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
func TestNeedDeleted(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -224,7 +224,7 @@ func TestNeedDeleted(t *testing.T) {
func TestDontNeedIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -271,7 +271,7 @@ func TestDontNeedIgnored(t *testing.T) {
func TestDontNeedRemoteInvalid(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -322,7 +322,7 @@ func TestDontNeedRemoteInvalid(t *testing.T) {
func TestRemoteDontNeedLocalIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -364,7 +364,7 @@ func TestRemoteDontNeedLocalIgnored(t *testing.T) {
func TestLocalDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -406,7 +406,7 @@ func TestLocalDontNeedDeletedMissing(t *testing.T) {
func TestRemoteDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -474,7 +474,7 @@ func TestRemoteDontNeedDeletedMissing(t *testing.T) {
func TestNeedRemoteSymlinkAndDir(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -517,7 +517,7 @@ func TestNeedRemoteSymlinkAndDir(t *testing.T) {
func TestNeedPagination(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -583,7 +583,7 @@ func TestDeletedAfterConflict(t *testing.T) {
// 23NHXGS FILE TreeSizeFreeSetup.exe 445 --- 2025-06-23T03:16:10.2804841Z 13832808 -nG---- HZJYWFM:1751507473 7B4kLitF
// JKX6ZDN FILE TreeSizeFreeSetup.exe 320 --- 2025-06-23T03:16:10.2804841Z 13832808 ------- JKX6ZDN:1750992570 7B4kLitF
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -15,7 +15,7 @@ import (
func TestIndexIDs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -16,7 +16,7 @@ import (
func TestBlocks(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}
@@ -89,7 +89,7 @@ func TestBlocks(t *testing.T) {
func TestBlocksDeleted(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}
@@ -141,7 +141,7 @@ func TestBlocksDeleted(t *testing.T) {
func TestRemoteSequence(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -14,7 +14,7 @@ import (
func TestMtimePairs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"log/slog"
"os"
"path/filepath"
@@ -15,6 +16,7 @@ import (
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
)
const (
@@ -52,8 +54,7 @@ func Open(path string, opts ...Option) (*DB, error) {
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -65,6 +66,8 @@ func Open(path string, opts ...Option) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, maxDBConns, pragmas, schemas, migrations)
if err != nil {
@@ -86,6 +89,10 @@ func Open(path string, opts ...Option) (*DB, error) {
slog.Warn("Failed to clean dropped folders", slogutil.Error(err))
}
if err := db.startFolderDatabases(); err != nil {
return nil, wrap(err)
}
return db, nil
}
@@ -95,11 +102,10 @@ func Open(path string, opts ...Option) (*DB, error) {
func OpenForMigration(path string) (*DB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -111,6 +117,8 @@ func OpenForMigration(path string) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, 1, pragmas, schemas, migrations)
if err != nil {
@@ -131,19 +139,6 @@ func OpenForMigration(path string) (*DB, error) {
return db, nil
}
func OpenTemp() (*DB, error) {
// SQLite has a memory mode, but it works differently with concurrency
// compared to what we need with the WAL mode. So, no memory databases
// for now.
dir, err := os.MkdirTemp("", "syncthing-db")
if err != nil {
return nil, wrap(err)
}
path := filepath.Join(dir, "db")
slog.Debug("Test DB", slogutil.FilePath(path))
return Open(path)
}
func (s *DB) Close() error {
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()
@@ -153,3 +148,24 @@ func (s *DB) Close() error {
}
return wrap(s.baseDB.Close())
}
func initTmpDir(path string) {
if build.IsWindows || build.IsDarwin || os.Getenv("SQLITE_TMPDIR") != "" {
// The platform doesn't use SQLITE_TMPDIR, isn't likely to have a
// tiny RAM-backed temp directory, or the variable is already set.
return
}
// Attempt to override the SQLite temporary directory by setting the
// env var prior to the (first) database being opened and hence
// SQLite becoming initialized. We set the temp dir to the same
// place we store the database, in the hope that there will be
// enough space there for the operations it needs to perform, as
// opposed to /tmp and similar, on some systems.
dbTmpDir := filepath.Join(path, ".tmp")
if err := os.MkdirAll(dbTmpDir, 0o700); err == nil {
os.Setenv("SQLITE_TMPDIR", dbTmpDir)
} else {
slog.Warn("Failed to create temp directory for SQLite", slogutil.FilePath(dbTmpDir), slogutil.Error(err))
}
}

View File

@@ -14,5 +14,5 @@ import (
const (
dbDriver = "sqlite3"
commonOptions = "_fk=true&_rt=true&_cache_size=-65536&_sync=1&_txlock=immediate"
commonOptions = "_fk=true&_rt=true&_sync=1&_txlock=immediate"
)

View File

@@ -15,7 +15,7 @@ import (
const (
dbDriver = "sqlite"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=cache_size(-65536)&_pragma=synchronous(1)"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=synchronous(1)&_txlock=immediate"
)
func init() {

View File

@@ -8,19 +8,28 @@ package sqlite
import (
"context"
"encoding/binary"
"fmt"
"log/slog"
"math/rand"
"strings"
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
)
const (
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"
gcMinChunks = 5
gcChunkSize = 100_000 // approximate number of rows to process in a single gc query
gcMaxRuntime = 5 * time.Minute // max time to spend on gc, per table, per run
)
func (s *DB) Service(maintenanceInterval time.Duration) suture.Service {
@@ -91,16 +100,44 @@ func (s *Service) periodic(ctx context.Context) error {
}
return wrap(s.sdb.forEachFolder(func(fdb *folderDB) error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
// Get the current device sequence, for comparison in the next step.
seq, err := fdb.GetDeviceSequence(protocol.LocalDeviceID)
if err != nil {
return wrap(err)
}
// Get the last successful GC sequence. If it's the same as the
// current sequence, nothing has changed and we can skip the GC
// entirely.
meta := db.NewTyped(fdb, internalMetaPrefix)
if prev, _, err := meta.Int64(lastSuccessfulGCSeqKey); err != nil {
return wrap(err)
} else if seq == prev {
slog.DebugContext(ctx, "Skipping unnecessary GC", "folder", fdb.folderID, "fdb", fdb.baseName)
return nil
}
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
// Run the GC steps, in a function to be able to use a deferred
// unlock.
if err := func() error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
return wrap(err)
}
if err := garbageCollectNamesAndVersions(ctx, fdb); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
}(); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
// Update the successful GC sequence.
return wrap(meta.PutInt64(lastSuccessfulGCSeqKey, seq))
}))
}
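The sequence gating added above is a generic change-detection pattern: record the device sequence after a successful GC pass, and skip the next pass entirely if the sequence hasn't moved. A self-contained sketch of that pattern (the meta store and GC function here are stand-ins, not the real API):

```go
package main

import "fmt"

// metaStore is a stand-in for the typed key/value metadata table.
type metaStore map[string]int64

const lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"

// maybeGC runs gc only when seq has advanced since the last
// successful run, and records the new sequence only on success, so
// a failed pass is retried next time.
func maybeGC(meta metaStore, seq int64, gc func() error) (ran bool, err error) {
	if meta[lastSuccessfulGCSeqKey] == seq {
		return false, nil // nothing changed; skip entirely
	}
	if err := gc(); err != nil {
		return true, err // sequence not recorded; retried next run
	}
	meta[lastSuccessfulGCSeqKey] = seq
	return true, nil
}

func main() {
	meta := metaStore{}
	gc := func() error { return nil }
	ran, _ := maybeGC(meta, 42, gc)
	fmt.Println(ran) // first run: true
	ran, _ = maybeGC(meta, 42, gc)
	fmt.Println(ran) // unchanged sequence: false
	ran, _ = maybeGC(meta, 43, gc)
	fmt.Println(ran) // advanced sequence: true
}
```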
@@ -118,8 +155,36 @@ func tidy(ctx context.Context, db *sqlx.DB) error {
return nil
}
func garbageCollectNamesAndVersions(ctx context.Context, fdb *folderDB) error {
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName)
res, err := fdb.stmt(`
DELETE FROM file_names
WHERE NOT EXISTS (SELECT 1 FROM files f WHERE f.name_idx = idx)
`).Exec()
if err != nil {
return wrap(err, "delete names")
}
if aff, err := res.RowsAffected(); err == nil {
l.DebugContext(ctx, "Removed old file names", "affected", aff)
}
res, err = fdb.stmt(`
DELETE FROM file_versions
WHERE NOT EXISTS (SELECT 1 FROM files f WHERE f.version_idx = idx)
`).Exec()
if err != nil {
return wrap(err, "delete versions")
}
if aff, err := res.RowsAffected(); err == nil {
l.DebugContext(ctx, "Removed old file versions", "affected", aff)
}
return nil
}
func garbageCollectOldDeletedLocked(ctx context.Context, fdb *folderDB) error {
l := slog.With("fdb", fdb.baseDB)
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName)
if fdb.deleteRetention <= 0 {
slog.DebugContext(ctx, "Delete retention is infinite, skipping cleanup")
return nil
@@ -171,37 +236,108 @@ func garbageCollectBlocklistsAndBlocksLocked(ctx context.Context, fdb *folderDB)
}
defer tx.Rollback() //nolint:errcheck
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocklists
WHERE NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = blocklists.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocklists")
} else {
slog.DebugContext(ctx, "Blocklist GC", "fdb", fdb.baseName, "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
}
return slog.Int64("rows", rows)
}))
}
// Both blocklists and blocks refer to blocklists_hash from the files table.
for _, table := range []string{"blocklists", "blocks"} {
// Count the number of rows
var rows int64
if err := tx.GetContext(ctx, &rows, `SELECT count(*) FROM `+table); err != nil {
return wrap(err)
}
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocks
WHERE NOT EXISTS (
SELECT 1 FROM blocklists WHERE blocklists.blocklist_hash = blocks.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocks")
} else {
slog.DebugContext(ctx, "Blocks GC", "fdb", fdb.baseName, "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
chunks := max(gcMinChunks, rows/gcChunkSize)
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName, "table", table, "rows", rows, "chunks", chunks)
// Process rows in chunks up to a given time limit. We always use at
// least gcMinChunks chunks, then increase the number as the number of rows
// exceeds gcMinChunks*gcChunkSize.
t0 := time.Now()
for i, br := range randomBlobRanges(int(chunks)) {
if d := time.Since(t0); d > gcMaxRuntime {
l.InfoContext(ctx, "GC was interrupted due to exceeding time limit", "processed", i, "runtime", time.Since(t0))
break
}
return slog.Int64("rows", rows)
}))
// The limit column must be an indexed column with a mostly random distribution of blobs.
// That's the blocklist_hash column for blocklists, and the hash column for blocks.
limitColumn := table + ".blocklist_hash"
if table == "blocks" {
limitColumn = "blocks.hash"
}
q := fmt.Sprintf(`
DELETE FROM %s
WHERE %s AND NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = %s.blocklist_hash
)`, table, br.SQL(limitColumn), table)
if res, err := tx.ExecContext(ctx, q); err != nil {
return wrap(err, "delete from "+table)
} else {
l.DebugContext(ctx, "GC query result", "processed", i, "runtime", time.Since(t0), "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
}
return slog.Int64("rows", rows)
}))
}
}
}
return wrap(tx.Commit())
}
// blobRange defines a range for blob searching. A range is open ended if
// start or end is nil.
type blobRange struct {
start, end []byte
}
// SQL returns the SQL where clause for the given range, e.g.
// `column >= x'49249248' AND column < x'6db6db6c'`
func (r blobRange) SQL(name string) string {
var sb strings.Builder
if r.start != nil {
fmt.Fprintf(&sb, "%s >= x'%x'", name, r.start)
}
if r.start != nil && r.end != nil {
sb.WriteString(" AND ")
}
if r.end != nil {
fmt.Fprintf(&sb, "%s < x'%x'", name, r.end)
}
return sb.String()
}
// randomBlobRanges returns n blobRanges in random order
func randomBlobRanges(n int) []blobRange {
ranges := blobRanges(n)
rand.Shuffle(len(ranges), func(i, j int) { ranges[i], ranges[j] = ranges[j], ranges[i] })
return ranges
}
// blobRanges returns n blobRanges
func blobRanges(n int) []blobRange {
// We use three byte (24 bit) prefixes to get fairly granular ranges and easy bit
// conversions.
rangeSize := (1 << 24) / n
ranges := make([]blobRange, 0, n)
var prev []byte
for i := range n {
var pref []byte
if i < n-1 {
end := (i + 1) * rangeSize
pref = intToBlob(end)
}
ranges = append(ranges, blobRange{prev, pref})
prev = pref
}
return ranges
}
func intToBlob(n int) []byte {
var pref [4]byte
binary.BigEndian.PutUint32(pref[:], uint32(n)) //nolint:gosec
// first byte is always zero and not part of the range
return pref[1:]
}

View File

@@ -0,0 +1,37 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"bytes"
"fmt"
"strings"
"testing"
)
func TestBlobRange(t *testing.T) {
exp := `
hash < x'249249'
hash >= x'249249' AND hash < x'492492'
hash >= x'492492' AND hash < x'6db6db'
hash >= x'6db6db' AND hash < x'924924'
hash >= x'924924' AND hash < x'b6db6d'
hash >= x'b6db6d' AND hash < x'db6db6'
hash >= x'db6db6'
`
ranges := blobRanges(7)
buf := new(bytes.Buffer)
for _, r := range ranges {
fmt.Fprintln(buf, r.SQL("hash"))
}
if strings.TrimSpace(buf.String()) != strings.TrimSpace(exp) {
t.Log(buf.String())
t.Error("unexpected output")
}
}
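Pulling the pieces together, the same 24-bit partitioning can be run standalone for a different range count; `rangeClauses` below mirrors the combination of `blobRanges`, `blobRange.SQL`, and `intToBlob` from the diff (a sketch for illustration, not the production code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// intToBlob follows the scheme above: a 24-bit big-endian prefix,
// dropping the always-zero leading byte of the uint32.
func intToBlob(n int) []byte {
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], uint32(n))
	return b[1:]
}

// rangeClauses builds the WHERE fragments for n contiguous ranges
// over a column; the first and last ranges are open ended.
func rangeClauses(n int, col string) []string {
	rangeSize := (1 << 24) / n
	out := make([]string, 0, n)
	var prev []byte
	for i := 0; i < n; i++ {
		var end []byte
		if i < n-1 {
			end = intToBlob((i + 1) * rangeSize)
		}
		switch {
		case prev == nil:
			out = append(out, fmt.Sprintf("%s < x'%x'", col, end))
		case end == nil:
			out = append(out, fmt.Sprintf("%s >= x'%x'", col, prev))
		default:
			out = append(out, fmt.Sprintf("%s >= x'%x' AND %s < x'%x'", col, prev, col, end))
		}
		prev = end
	}
	return out
}

func main() {
	for _, c := range rangeClauses(4, "hash") {
		fmt.Println(c)
	}
	// hash < x'400000'
	// hash >= x'400000' AND hash < x'800000'
	// hash >= x'800000' AND hash < x'c00000'
	// hash >= x'c00000'
}
```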

View File

@@ -36,7 +36,7 @@ const (
func TestBasics(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -81,7 +81,12 @@ func TestBasics(t *testing.T) {
)
t.Run("SchemaVersion", func(t *testing.T) {
ver, err := sdb.getAppliedSchemaVersion()
tx, err := sdb.sql.Beginx()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
ver, err := sdb.getAppliedSchemaVersion(tx)
if err != nil {
t.Fatal(err)
}
@@ -427,7 +432,7 @@ func TestBasics(t *testing.T) {
func TestPrefixGlobbing(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -496,7 +501,7 @@ func TestPrefixGlobbing(t *testing.T) {
func TestPrefixGlobbingStar(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -529,7 +534,7 @@ func TestPrefixGlobbingStar(t *testing.T) {
}
func TestAvailability(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -596,7 +601,7 @@ func TestAvailability(t *testing.T) {
}
func TestDropFilesNamed(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -640,7 +645,7 @@ func TestDropFilesNamed(t *testing.T) {
}
func TestDropFolder(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -700,7 +705,7 @@ func TestDropFolder(t *testing.T) {
}
func TestDropDevice(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -764,7 +769,7 @@ func TestDropDevice(t *testing.T) {
}
func TestDropAllFiles(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -926,7 +931,7 @@ func TestConcurrentUpdateSelect(t *testing.T) {
func TestAllForBlocksHash(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -988,7 +993,7 @@ func TestAllForBlocksHash(t *testing.T) {
func TestBlocklistGarbageCollection(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1067,7 +1072,7 @@ func TestBlocklistGarbageCollection(t *testing.T) {
func TestInsertLargeFile(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1119,7 +1124,7 @@ func TestStrangeDeletedGlobalBug(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"errors"
"fmt"
"log/slog"
"os"
@@ -77,6 +78,22 @@ func (s *DB) cleanDroppedFolders() error {
return nil
}
// startFolderDatabases loads all existing folder databases, thus causing
// migrations to apply.
func (s *DB) startFolderDatabases() error {
ids, err := s.ListFolders()
if err != nil {
return wrap(err)
}
for _, id := range ids {
_, err := s.getFolderDB(id, false)
if err != nil && !errors.Is(err, errNoSuchFolder) {
return wrap(err)
}
}
return nil
}
// wrap returns the error wrapped with the calling function name and
// optional extra context strings as prefix. A nil error wraps to nil.
func wrap(err error, context ...string) error {

View File

@@ -84,7 +84,7 @@ func (s *folderDB) needSizeRemote(device protocol.DeviceID) (db.Counts, error) {
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND NOT EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
WHERE f.name_idx = g.name_idx AND f.version_idx = g.version_idx AND d.device_id = ?
)
GROUP BY g.type, g.local_flags, g.deleted
@@ -94,7 +94,7 @@ func (s *folderDB) needSizeRemote(device protocol.DeviceID) (db.Counts, error) {
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
WHERE f.name_idx = g.name_idx AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
)
GROUP BY g.type, g.local_flags, g.deleted
`).Select(&res, device.String(),

View File

@@ -27,7 +27,8 @@ func (s *folderDB) GetGlobalFile(file string) (protocol.FileInfo, bool, error) {
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
WHERE f.name = ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name = ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
`).Get(&ind, file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
@@ -49,8 +50,9 @@ func (s *folderDB) GetGlobalAvailability(file string) ([]protocol.DeviceID, erro
err := s.stmt(`
SELECT d.device_id FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
INNER JOIN files g ON g.version = f.version AND g.name = f.name
WHERE g.name = ? AND g.local_flags & {{.FlagLocalGlobal}} != 0 AND f.device_idx != {{.LocalDeviceIdx}}
INNER JOIN files g ON g.version_idx = f.version_idx AND g.name_idx = f.name_idx
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name = ? AND g.local_flags & {{.FlagLocalGlobal}} != 0 AND f.device_idx != {{.LocalDeviceIdx}}
ORDER BY d.device_id
`).Select(&devStrs, file)
if errors.Is(err, sql.ErrNoRows) {
@@ -74,9 +76,10 @@ func (s *folderDB) GetGlobalAvailability(file string) ([]protocol.DeviceID, erro
func (s *folderDB) AllGlobalFiles() (iter.Seq[db.FileMetadata], func() error) {
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
ORDER BY n.name
`).Queryx())
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
@@ -93,9 +96,10 @@ func (s *folderDB) AllGlobalFilesPrefix(prefix string) (iter.Seq[db.FileMetadata
end := prefixEnd(prefix)
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
WHERE f.name >= ? AND f.name < ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name >= ? AND n.name < ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY n.name
`).Queryx(prefix, end))
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
@@ -109,7 +113,7 @@ func (s *folderDB) AllNeededGlobalFiles(device protocol.DeviceID, order config.P
case config.PullOrderRandom:
selectOpts = "ORDER BY RANDOM()"
case config.PullOrderAlphabetic:
selectOpts = "ORDER BY g.name ASC"
selectOpts = "ORDER BY n.name ASC"
case config.PullOrderSmallestFirst:
selectOpts = "ORDER BY g.size ASC"
case config.PullOrderLargestFirst:
@@ -137,9 +141,10 @@ func (s *folderDB) AllNeededGlobalFiles(device protocol.DeviceID, order config.P
func (s *folderDB) neededGlobalFilesLocal(selectOpts string) (iter.Seq[protocol.FileInfo], func() error) {
// Select all the non-ignored files with the need bit set.
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalIgnored}} = 0 AND g.local_flags & {{.FlagLocalNeeded}} != 0
` + selectOpts).Queryx())
return itererr.Map(it, errFn, indirectFI.FileInfo)
@@ -155,24 +160,26 @@ func (s *folderDB) neededGlobalFilesRemote(device protocol.DeviceID, selectOpts
// non-deleted and valid remote file (of any version)
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND NOT EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
WHERE f.name_idx = g.name_idx AND f.version_idx = g.version_idx AND d.device_id = ?
)
UNION ALL
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
WHERE f.name_idx = g.name_idx AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
)
`+selectOpts).Queryx(
device.String(),

View File

@@ -32,7 +32,8 @@ func (s *folderDB) GetDeviceFile(device protocol.DeviceID, file string) (protoco
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON f.device_idx = d.idx
WHERE d.device_id = ? AND f.name = ?
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE d.device_id = ? AND n.name = ?
`).Get(&ind, device.String(), file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
@@ -87,14 +88,16 @@ func (s *folderDB) AllLocalFilesWithPrefix(device protocol.DeviceID, prefix stri
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON d.idx = f.device_idx
WHERE d.device_id = ? AND f.name >= ? AND f.name < ?
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE d.device_id = ? AND n.name >= ? AND n.name < ?
`, device.String(), prefix, end))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) AllLocalFilesWithBlocksHash(h []byte) (iter.Seq[db.FileMetadata], func() error) {
return iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE f.device_idx = {{.LocalDeviceIdx}} AND f.blocklist_hash = ?
`).Queryx(h))
}
@@ -104,7 +107,8 @@ func (s *folderDB) AllLocalBlocksWithHash(hash []byte) (iter.Seq[db.BlockMapEntr
// & blocklists is deferred (garbage collected) while the files list is
// not. This filters out blocks that are in fact deleted.
return iterStructs[db.BlockMapEntry](s.stmt(`
SELECT f.blocklist_hash as blocklisthash, b.idx as blockindex, b.offset, b.size, f.name as filename FROM files f
SELECT f.blocklist_hash as blocklisthash, b.idx as blockindex, b.offset, b.size, n.name as filename FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
LEFT JOIN blocks b ON f.blocklist_hash = b.blocklist_hash
WHERE f.device_idx = {{.LocalDeviceIdx}} AND b.hash = ?
`).Queryx(hash))
@@ -170,10 +174,12 @@ func (s *folderDB) DebugFilePattern(out io.Writer, name string) error {
}
name = "%" + name + "%"
res := itererr.Zip(iterStructs[hashFileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags, f.version, f.blocklist_hash as blocklisthash, d.device_id as deviceid FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags, v.version, f.blocklist_hash as blocklisthash, d.device_id as deviceid FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name LIKE ?
ORDER BY f.name, f.device_idx
INNER JOIN file_names n ON n.idx = f.name_idx
INNER JOIN file_versions v ON v.idx = f.version_idx
WHERE n.name LIKE ?
ORDER BY n.name, f.device_idx
`).Queryx(name)))
delMap := map[bool]string{

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"time"
"github.com/syncthing/syncthing/lib/protocol"
@@ -25,8 +26,7 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",
@@ -64,11 +64,10 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
func openFolderDBForMigration(folder, path string, deleteRetention time.Duration) (*folderDB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",
@@ -96,16 +95,13 @@ func openFolderDBForMigration(folder, path string, deleteRetention time.Duration
func (s *folderDB) deviceIdxLocked(deviceID protocol.DeviceID) (int64, error) {
devStr := deviceID.String()
if _, err := s.stmt(`
INSERT OR IGNORE INTO devices(device_id)
VALUES (?)
`).Exec(devStr); err != nil {
return 0, wrap(err)
}
var idx int64
if err := s.stmt(`
SELECT idx FROM devices
WHERE device_id = ?
INSERT INTO devices(device_id)
VALUES (?)
ON CONFLICT(device_id) DO UPDATE
SET device_id = excluded.device_id
RETURNING idx
`).Get(&idx, devStr); err != nil {
return 0, wrap(err)
}
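The `INSERT ... ON CONFLICT DO UPDATE ... RETURNING idx` rewrite above interns a device ID to its integer index in a single round trip, the same idea the migration applies to file names and versions. A minimal stdlib-only sketch of that interning idea (no SQLite involved; all names here are hypothetical):

```go
package main

import "fmt"

// intern maps repeated strings (device IDs, file names, version vectors)
// to small integer indices, mirroring the devices/file_names/file_versions
// tables: repeating an integer is cheaper than repeating a long string.
type intern struct {
	idx  map[string]int64
	next int64
}

func newIntern() *intern { return &intern{idx: make(map[string]int64)} }

// id returns the existing index for s, or assigns a new one — the
// in-memory equivalent of INSERT ... ON CONFLICT ... RETURNING idx.
func (in *intern) id(s string) int64 {
	if i, ok := in.idx[s]; ok {
		return i
	}
	in.next++
	in.idx[s] = in.next
	return in.next
}

func main() {
	names := newIntern()
	a := names.id("docs/readme.md")
	b := names.id("docs/readme.md") // repeated name: same index
	fmt.Println(a, b, a == b)       // prints "1 1 true"
}
```

The single-statement upsert also avoids the insert-then-select race the previous two-statement version had outside a transaction.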

View File

@@ -46,9 +46,33 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
//nolint:sqlclosecheck
insertNameStmt, err := txp.Preparex(`
INSERT INTO file_names(name)
VALUES (?)
ON CONFLICT(name) DO UPDATE
SET name = excluded.name
RETURNING idx
`)
if err != nil {
return wrap(err, "prepare insert name")
}
//nolint:sqlclosecheck
insertVersionStmt, err := txp.Preparex(`
INSERT INTO file_versions (version)
VALUES (?)
ON CONFLICT(version) DO UPDATE
SET version = excluded.version
RETURNING idx
`)
if err != nil {
return wrap(err, "prepare insert version")
}
//nolint:sqlclosecheck
insertFileStmt, err := txp.Preparex(`
INSERT OR REPLACE INTO files (device_idx, remote_sequence, name, type, modified, size, version, deleted, local_flags, blocklist_hash)
INSERT OR REPLACE INTO files (device_idx, remote_sequence, type, modified, size, deleted, local_flags, blocklist_hash, name_idx, version_idx)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
RETURNING sequence
`)
@@ -102,8 +126,19 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
prevRemoteSeq = f.Sequence
remoteSeq = &f.Sequence
}
var nameIdx int64
if err := insertNameStmt.Get(&nameIdx, f.Name); err != nil {
return wrap(err, "insert name")
}
var versionIdx int64
if err := insertVersionStmt.Get(&versionIdx, f.Version.String()); err != nil {
return wrap(err, "insert version")
}
var localSeq int64
if err := insertFileStmt.Get(&localSeq, deviceIdx, remoteSeq, f.Name, f.Type, f.ModTime().UnixNano(), f.Size, f.Version.String(), f.IsDeleted(), f.LocalFlags, blockshash); err != nil {
if err := insertFileStmt.Get(&localSeq, deviceIdx, remoteSeq, f.Type, f.ModTime().UnixNano(), f.Size, f.IsDeleted(), f.LocalFlags, blockshash, nameIdx, versionIdx); err != nil {
return wrap(err, "insert file")
}
@@ -114,11 +149,9 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
if err != nil {
return wrap(err, "marshal blocklist")
}
if _, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
if res, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
return wrap(err, "insert blocklist")
}
if device == protocol.LocalDeviceID {
} else if aff, _ := res.RowsAffected(); aff != 0 && device == protocol.LocalDeviceID {
// Insert all blocks
if err := s.insertBlocksLocked(txp, f.BlocksHash, f.Blocks); err != nil {
return wrap(err, "insert blocks")
@@ -248,7 +281,9 @@ func (s *folderDB) DropFilesNamed(device protocol.DeviceID, names []string) erro
query, args, err := sqlx.In(`
DELETE FROM files
WHERE device_idx = ? AND name IN (?)
WHERE device_idx = ? AND name_idx IN (
SELECT idx FROM file_names WHERE name IN (?)
)
`, deviceIdx, names)
if err != nil {
return wrap(err)
@@ -301,12 +336,13 @@ func (s *folderDB) recalcGlobalForFolder(txp *txPreparedStmts) error {
// recalculate.
//nolint:sqlclosecheck
namesStmt, err := txp.Preparex(`
SELECT f.name FROM files f
SELECT n.name FROM files f
INNER JOIN file_names n ON n.idx = f.name_idx
WHERE NOT EXISTS (
SELECT 1 FROM files g
WHERE g.name = f.name AND g.local_flags & ? != 0
WHERE g.name_idx = f.name_idx AND g.local_flags & ? != 0
)
GROUP BY name
GROUP BY n.name
`)
if err != nil {
return wrap(err)
@@ -331,11 +367,13 @@ func (s *folderDB) recalcGlobalForFolder(txp *txPreparedStmts) error {
func (s *folderDB) recalcGlobalForFile(txp *txPreparedStmts, file string) error {
//nolint:sqlclosecheck
selStmt, err := txp.Preparex(`
SELECT name, device_idx, sequence, modified, version, deleted, local_flags FROM files
WHERE name = ?
SELECT n.name, f.device_idx, f.sequence, f.modified, v.version, f.deleted, f.local_flags FROM files f
INNER JOIN file_versions v ON v.idx = f.version_idx
INNER JOIN file_names n ON n.idx = f.name_idx
WHERE n.name = ?
`)
if err != nil {
return wrap(err)
return wrap(err, "prepare select")
}
es, err := itererr.Collect(iterStructs[fileRow](selStmt.Queryx(file)))
if err != nil {
@@ -391,10 +429,10 @@ func (s *folderDB) recalcGlobalForFile(txp *txPreparedStmts, file string) error
//nolint:sqlclosecheck
upStmt, err = txp.Preparex(`
UPDATE files SET local_flags = local_flags & ?
WHERE name = ? AND sequence != ? AND local_flags & ? != 0
WHERE name_idx = (SELECT idx FROM file_names WHERE name = ?) AND sequence != ? AND local_flags & ? != 0
`)
if err != nil {
return wrap(err)
return wrap(err, "prepare update")
}
if _, err := upStmt.Exec(^(protocol.FlagLocalNeeded | protocol.FlagLocalGlobal), global.Name, global.Sequence, protocol.FlagLocalNeeded|protocol.FlagLocalGlobal); err != nil {
return wrap(err)
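The blocklist change above only inserts the per-block rows when `RowsAffected()` reports the blocklist row was actually new, skipping the expensive block expansion for duplicate blocklists. A hedged sketch of that guard, with an in-memory store standing in for the database (names are illustrative, not the real API):

```go
package main

import "fmt"

type store struct {
	blocklists map[string]bool
	blockRows  int
}

// insertBlocklist reports whether the row was newly inserted, playing
// the role of checking res.RowsAffected() after the blocklist insert.
func (s *store) insertBlocklist(hash string) bool {
	if s.blocklists[hash] {
		return false
	}
	s.blocklists[hash] = true
	return true
}

func (s *store) update(hash string, nblocks int, local bool) {
	// Only expand into per-block rows when the blocklist is new and the
	// file is local, mirroring the RowsAffected check in Update.
	if s.insertBlocklist(hash) && local {
		s.blockRows += nblocks
	}
}

func main() {
	s := &store{blocklists: make(map[string]bool)}
	s.update("h1", 100, true)
	s.update("h1", 100, true) // same blocklist again: no extra rows
	fmt.Println(s.blockRows)  // prints "100"
}
```

Since inserting millions of blocks is common (per the schema comment), short-circuiting on an unchanged blocklist is where the win comes from.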

View File

@@ -2,7 +2,7 @@ These SQL scripts are embedded in the binary.
Scripts in `schema/` are run at every startup, in alphanumerical order.
Scripts in `migrations/` are run when a migration is needed; the must begin
Scripts in `migrations/` are run when a migration is needed; they must begin
with a number that equals the schema version that results from that
migration. Migrations are not run on initial database creation, so the
scripts in `schema/` should create the latest version.

View File

@@ -0,0 +1,51 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Copy blocks to new table with fewer indexes
DROP TABLE IF EXISTS blocks_v4
;
CREATE TABLE blocks_v4 (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;
INSERT INTO blocks_v4 (hash, blocklist_hash, idx, offset, size)
SELECT hash, blocklist_hash, idx, offset, size FROM blocks ORDER BY hash, blocklist_hash, idx
;
DROP TABLE blocks
;
ALTER TABLE blocks_v4 RENAME TO blocks
;
-- Copy blocklists to new table with fewer indexes
DROP TABLE IF EXISTS blocklists_v4
;
CREATE TABLE blocklists_v4 (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT, WITHOUT ROWID
;
INSERT INTO blocklists_v4 (blocklist_hash, blprotobuf)
SELECT blocklist_hash, blprotobuf FROM blocklists ORDER BY blocklist_hash
;
DROP TABLE blocklists
;
ALTER TABLE blocklists_v4 RENAME TO blocklists
;

View File

@@ -0,0 +1,53 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Grab all unique names into the names table
INSERT INTO file_names (idx, name) SELECT DISTINCT null, name FROM files
;
-- Grab all unique versions into the versions table
INSERT INTO file_versions (idx, version) SELECT DISTINCT null, version FROM files
;
-- Create the new files table
DROP TABLE IF EXISTS files_v5
;
CREATE TABLE files_v5 (
device_idx INTEGER NOT NULL,
sequence INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
remote_sequence INTEGER,
name_idx INTEGER NOT NULL, -- changed
type INTEGER NOT NULL,
modified INTEGER NOT NULL,
size INTEGER NOT NULL,
version_idx INTEGER NOT NULL, -- changed
deleted INTEGER NOT NULL,
local_flags INTEGER NOT NULL,
blocklist_hash BLOB,
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE,
FOREIGN KEY(name_idx) REFERENCES file_names(idx), -- added
FOREIGN KEY(version_idx) REFERENCES file_versions(idx) -- added
) STRICT
;
-- Populate the new files table and move it in place
INSERT INTO files_v5
SELECT f.device_idx, f.sequence, f.remote_sequence, n.idx as name_idx, f.type, f.modified, f.size, v.idx as version_idx, f.deleted, f.local_flags, f.blocklist_hash
FROM files f
INNER JOIN file_names n ON n.name = f.name
INNER JOIN file_versions v ON v.version = f.version
;
DROP TABLE files
;
ALTER TABLE files_v5 RENAME TO files
;

View File

@@ -6,9 +6,8 @@
-- Schema migrations hold the list of historical migrations applied
CREATE TABLE IF NOT EXISTS schemamigrations (
schema_version INTEGER NOT NULL,
schema_version INTEGER NOT NULL PRIMARY KEY,
applied_at INTEGER NOT NULL, -- unix nanos
syncthing_version TEXT NOT NULL COLLATE BINARY,
PRIMARY KEY(schema_version)
syncthing_version TEXT NOT NULL COLLATE BINARY
) STRICT
;

View File

@@ -9,5 +9,5 @@
CREATE TABLE IF NOT EXISTS kv (
key TEXT NOT NULL PRIMARY KEY COLLATE BINARY,
value BLOB NOT NULL
) STRICT
) STRICT, WITHOUT ROWID
;

View File

@@ -25,15 +25,27 @@ CREATE TABLE IF NOT EXISTS files (
device_idx INTEGER NOT NULL, -- actual device ID or LocalDeviceID
sequence INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, -- our local database sequence, for each and every entry
remote_sequence INTEGER, -- remote device's sequence number, null for local or synthetic entries
name TEXT NOT NULL COLLATE BINARY,
name_idx INTEGER NOT NULL,
type INTEGER NOT NULL, -- protocol.FileInfoType
modified INTEGER NOT NULL, -- Unix nanos
size INTEGER NOT NULL,
version TEXT NOT NULL COLLATE BINARY,
version_idx INTEGER NOT NULL,
deleted INTEGER NOT NULL, -- boolean
local_flags INTEGER NOT NULL,
blocklist_hash BLOB, -- null when there are no blocks
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE,
FOREIGN KEY(name_idx) REFERENCES file_names(idx),
FOREIGN KEY(version_idx) REFERENCES file_versions(idx)
) STRICT
;
CREATE TABLE IF NOT EXISTS file_names (
idx INTEGER NOT NULL PRIMARY KEY,
name TEXT NOT NULL UNIQUE COLLATE BINARY
) STRICT
;
CREATE TABLE IF NOT EXISTS file_versions (
idx INTEGER NOT NULL PRIMARY KEY,
version TEXT NOT NULL UNIQUE COLLATE BINARY
) STRICT
;
-- FileInfos store the actual protobuf object. We do this separately to keep
@@ -49,11 +61,17 @@ CREATE UNIQUE INDEX IF NOT EXISTS files_remote_sequence ON files (device_idx, re
WHERE remote_sequence IS NOT NULL
;
-- There can be only one file per folder, device, and name
CREATE UNIQUE INDEX IF NOT EXISTS files_device_name ON files (device_idx, name)
;
-- We want to be able to look up & iterate files based on just folder and name
CREATE INDEX IF NOT EXISTS files_name_only ON files (name)
CREATE UNIQUE INDEX IF NOT EXISTS files_device_name ON files (device_idx, name_idx)
;
-- We want to be able to look up & iterate files based on blocks hash
CREATE INDEX IF NOT EXISTS files_blocklist_hash_only ON files (blocklist_hash, device_idx) WHERE blocklist_hash IS NOT NULL
;
-- We need to look by name_idx or version_idx for garbage collection.
-- This will fail pre-migration for v4 schemas, which is fine.
-- syncthing:ignore-failure
CREATE INDEX IF NOT EXISTS files_name_idx_only ON files (name_idx)
;
-- This will fail pre-migration for v4 schemas, which is fine.
-- syncthing:ignore-failure
CREATE INDEX IF NOT EXISTS files_version_idx_only ON files (version_idx)
;

View File

@@ -6,10 +6,9 @@
-- indexids holds the index ID and maximum sequence for a given device and folder
CREATE TABLE IF NOT EXISTS indexids (
device_idx INTEGER NOT NULL,
device_idx INTEGER NOT NULL PRIMARY KEY,
index_id TEXT NOT NULL COLLATE BINARY,
sequence INTEGER NOT NULL DEFAULT 0,
PRIMARY KEY(device_idx),
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT, WITHOUT ROWID
;

View File

@@ -14,21 +14,21 @@
CREATE TABLE IF NOT EXISTS blocklists (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT
) STRICT, WITHOUT ROWID
;
-- Blocks
--
-- For all local files we store the blocks individually for quick lookup. A
-- given block can exist in multiple blocklists and at multiple offsets in a
-- blocklist.
-- blocklist. We eschew most indexes here as inserting millions of blocks is
-- common and performance is critical.
CREATE TABLE IF NOT EXISTS blocks (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx),
FOREIGN KEY(blocklist_hash) REFERENCES blocklists(blocklist_hash) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
) STRICT
PRIMARY KEY(hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;

View File

@@ -6,9 +6,8 @@
--- Backing for the MtimeFS
CREATE TABLE IF NOT EXISTS mtimes (
name TEXT NOT NULL,
name TEXT NOT NULL PRIMARY KEY,
ondisk INTEGER NOT NULL, -- unix nanos
virtual INTEGER NOT NULL, -- unix nanos
PRIMARY KEY(name)
virtual INTEGER NOT NULL -- unix nanos
) STRICT, WITHOUT ROWID
;

View File

@@ -17,7 +17,7 @@ import (
func TestNamespacedInt(t *testing.T) {
t.Parallel()
ldb, err := sqlite.OpenTemp()
ldb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -18,14 +18,30 @@ import (
"time"
)
type formattingHandler struct {
attrs []slog.Attr
groups []string
type LineFormat struct {
TimestampFormat string
LevelString bool
LevelSyslog bool
}
type formattingOptions struct {
LineFormat
out io.Writer
recs []*lineRecorder
timeOverride time.Time
}
type formattingHandler struct {
attrs []slog.Attr
groups []string
opts *formattingOptions
}
func SetLineFormat(f LineFormat) {
globalFormatter.LineFormat = f
}
var _ slog.Handler = (*formattingHandler)(nil)
func (h *formattingHandler) Enabled(context.Context, slog.Level) bool {
@@ -83,19 +99,19 @@ func (h *formattingHandler) Handle(_ context.Context, rec slog.Record) error {
}
line := Line{
When: cmp.Or(h.timeOverride, rec.Time),
When: cmp.Or(h.opts.timeOverride, rec.Time),
Message: sb.String(),
Level: rec.Level,
}
// If there is a recorder, record the line.
for _, rec := range h.recs {
for _, rec := range h.opts.recs {
rec.record(line)
}
// If there's an output, print the line.
if h.out != nil {
_, _ = line.WriteTo(h.out)
if h.opts.out != nil {
_, _ = line.WriteTo(h.opts.out, h.opts.LineFormat)
}
return nil
}
@@ -143,11 +159,9 @@ func (h *formattingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
}
}
return &formattingHandler{
attrs: append(h.attrs, attrs...),
groups: h.groups,
recs: h.recs,
out: h.out,
timeOverride: h.timeOverride,
attrs: append(h.attrs, attrs...),
groups: h.groups,
opts: h.opts,
}
}
@@ -156,11 +170,9 @@ func (h *formattingHandler) WithGroup(name string) slog.Handler {
return h
}
return &formattingHandler{
attrs: h.attrs,
groups: append([]string{name}, h.groups...),
recs: h.recs,
out: h.out,
timeOverride: h.timeOverride,
attrs: h.attrs,
groups: append([]string{name}, h.groups...),
opts: h.opts,
}
}

View File

@@ -17,8 +17,11 @@ import (
func TestFormattingHandler(t *testing.T) {
buf := new(bytes.Buffer)
h := &formattingHandler{
out: buf,
timeOverride: time.Unix(1234567890, 0).In(time.UTC),
opts: &formattingOptions{
LineFormat: DefaultLineFormat,
out: buf,
timeOverride: time.Unix(1234567890, 0).In(time.UTC),
},
}
l := slog.New(h).With("a", "a")

View File

@@ -9,6 +9,7 @@ package slogutil
import (
"log/slog"
"maps"
"strings"
"sync"
)
@@ -39,6 +40,24 @@ func SetDefaultLevel(level slog.Level) {
globalLevels.SetDefault(level)
}
func SetLevelOverrides(sttrace string) {
pkgs := strings.Split(sttrace, ",")
for _, pkg := range pkgs {
pkg = strings.TrimSpace(pkg)
if pkg == "" {
continue
}
level := slog.LevelDebug
if cutPkg, levelStr, ok := strings.Cut(pkg, ":"); ok {
pkg = cutPkg
if err := level.UnmarshalText([]byte(levelStr)); err != nil {
slog.Warn("Bad log level requested in STTRACE", slog.String("pkg", pkg), slog.String("level", levelStr), Error(err))
}
}
globalLevels.Set(pkg, level)
}
}
type levelTracker struct {
mut sync.RWMutex
defLevel slog.Level

View File

@@ -7,6 +7,7 @@
package slogutil
import (
"bytes"
"encoding/json"
"fmt"
"io"
@@ -22,13 +23,22 @@ type Line struct {
Level slog.Level `json:"level"`
}
func (l *Line) WriteTo(w io.Writer) (int64, error) {
n, err := fmt.Fprintf(w, "%s %s %s\n", l.timeStr(), l.levelStr(), l.Message)
return int64(n), err
}
func (l *Line) timeStr() string {
return l.When.Format("2006-01-02 15:04:05")
func (l *Line) WriteTo(w io.Writer, f LineFormat) (int64, error) {
buf := new(bytes.Buffer)
if f.LevelSyslog {
_, _ = fmt.Fprintf(buf, "<%d>", l.syslogPriority())
}
if f.TimestampFormat != "" {
buf.WriteString(l.When.Format(f.TimestampFormat))
buf.WriteRune(' ')
}
if f.LevelString {
buf.WriteString(l.levelStr())
buf.WriteRune(' ')
}
buf.WriteString(l.Message)
buf.WriteRune('\n')
return buf.WriteTo(w)
}
func (l *Line) levelStr() string {
@@ -51,6 +61,19 @@ func (l *Line) levelStr() string {
}
}
func (l *Line) syslogPriority() int {
switch {
case l.Level < slog.LevelInfo:
return 7
case l.Level < slog.LevelWarn:
return 6
case l.Level < slog.LevelError:
return 4
default:
return 3
}
}
func (l *Line) MarshalJSON() ([]byte, error) {
// Custom marshal to get short level strings instead of default JSON serialisation
return json.Marshal(map[string]any{

View File

@@ -10,20 +10,26 @@ import (
"io"
"log/slog"
"os"
"strings"
"time"
)
var (
GlobalRecorder = &lineRecorder{level: -1000}
ErrorRecorder = &lineRecorder{level: slog.LevelError}
globalLevels = &levelTracker{
GlobalRecorder = &lineRecorder{level: -1000}
ErrorRecorder = &lineRecorder{level: slog.LevelError}
DefaultLineFormat = LineFormat{
TimestampFormat: time.DateTime,
LevelString: true,
}
globalLevels = &levelTracker{
levels: make(map[string]slog.Level),
descrs: make(map[string]string),
}
slogDef = slog.New(&formattingHandler{
recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
out: logWriter(),
})
globalFormatter = &formattingOptions{
LineFormat: DefaultLineFormat,
recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
out: logWriter(),
}
slogDef = slog.New(&formattingHandler{opts: globalFormatter})
)
func logWriter() io.Writer {
@@ -38,21 +44,4 @@ func logWriter() io.Writer {
func init() {
slog.SetDefault(slogDef)
// Handle legacy STTRACE var
pkgs := strings.Split(os.Getenv("STTRACE"), ",")
for _, pkg := range pkgs {
pkg = strings.TrimSpace(pkg)
if pkg == "" {
continue
}
level := slog.LevelDebug
if cutPkg, levelStr, ok := strings.Cut(pkg, ":"); ok {
pkg = cutPkg
if err := level.UnmarshalText([]byte(levelStr)); err != nil {
slog.Warn("Bad log level requested in STTRACE", slog.String("pkg", pkg), slog.String("level", levelStr), Error(err))
}
}
globalLevels.Set(pkg, level)
}
}

View File

@@ -25,9 +25,10 @@ import (
)
const (
maxSessionLifetime = 7 * 24 * time.Hour
maxActiveSessions = 25
randomTokenLength = 64
maxSessionLifetime = 7 * 24 * time.Hour
maxActiveSessions = 25
randomTokenLength = 64
maxLoginRequestSize = 1 << 10 // one kibibyte for username+password
)
func emitLoginAttempt(success bool, username string, r *http.Request, evLogger events.Logger) {
@@ -182,7 +183,7 @@ func (m *basicAuthAndSessionMiddleware) passwordAuthHandler(w http.ResponseWrite
Password string
StayLoggedIn bool
}
if err := unmarshalTo(r.Body, &req); err != nil {
if err := unmarshalTo(http.MaxBytesReader(w, r.Body, maxLoginRequestSize), &req); err != nil {
l.Debugln("Failed to parse username and password:", err)
http.Error(w, "Failed to parse username and password.", http.StatusBadRequest)
return

View File

@@ -130,7 +130,7 @@ func (c *mockClock) wind(t time.Duration) {
func TestTokenManager(t *testing.T) {
t.Parallel()
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -82,7 +82,7 @@ func TestStopAfterBrokenConfig(t *testing.T) {
}
w := config.Wrap("/dev/null", cfg, protocol.LocalDeviceID, events.NoopLogger)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1049,7 +1049,7 @@ func startHTTPWithShutdownTimeout(t *testing.T, cfg config.Wrapper, shutdownTime
// Instantiate the API service
urService := ur.New(cfg, m, connections, false)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1567,7 +1567,7 @@ func TestEventMasks(t *testing.T) {
cfg := newMockedConfig()
defSub := new(eventmocks.BufferedSubscription)
diskSub := new(eventmocks.BufferedSubscription)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -23,6 +23,15 @@ func getRedactedConfig(s *service) config.Configuration {
if rawConf.GUI.User != "" {
rawConf.GUI.User = "REDACTED"
}
for folderIdx, folderCfg := range rawConf.Folders {
for deviceIdx, deviceCfg := range folderCfg.Devices {
if deviceCfg.EncryptionPassword != "" {
rawConf.Folders[folderIdx].Devices[deviceIdx].EncryptionPassword = "REDACTED"
}
}
}
return rawConf
}

View File

@@ -1809,60 +1809,6 @@ func (fake *Wrapper) UnsubscribeArgsForCall(i int) config.Committer {
func (fake *Wrapper) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.configPathMutex.RLock()
defer fake.configPathMutex.RUnlock()
fake.defaultDeviceMutex.RLock()
defer fake.defaultDeviceMutex.RUnlock()
fake.defaultFolderMutex.RLock()
defer fake.defaultFolderMutex.RUnlock()
fake.defaultIgnoresMutex.RLock()
defer fake.defaultIgnoresMutex.RUnlock()
fake.deviceMutex.RLock()
defer fake.deviceMutex.RUnlock()
fake.deviceListMutex.RLock()
defer fake.deviceListMutex.RUnlock()
fake.devicesMutex.RLock()
defer fake.devicesMutex.RUnlock()
fake.folderMutex.RLock()
defer fake.folderMutex.RUnlock()
fake.folderListMutex.RLock()
defer fake.folderListMutex.RUnlock()
fake.folderPasswordsMutex.RLock()
defer fake.folderPasswordsMutex.RUnlock()
fake.foldersMutex.RLock()
defer fake.foldersMutex.RUnlock()
fake.gUIMutex.RLock()
defer fake.gUIMutex.RUnlock()
fake.ignoredDeviceMutex.RLock()
defer fake.ignoredDeviceMutex.RUnlock()
fake.ignoredDevicesMutex.RLock()
defer fake.ignoredDevicesMutex.RUnlock()
fake.ignoredFolderMutex.RLock()
defer fake.ignoredFolderMutex.RUnlock()
fake.lDAPMutex.RLock()
defer fake.lDAPMutex.RUnlock()
fake.modifyMutex.RLock()
defer fake.modifyMutex.RUnlock()
fake.myIDMutex.RLock()
defer fake.myIDMutex.RUnlock()
fake.optionsMutex.RLock()
defer fake.optionsMutex.RUnlock()
fake.rawCopyMutex.RLock()
defer fake.rawCopyMutex.RUnlock()
fake.removeDeviceMutex.RLock()
defer fake.removeDeviceMutex.RUnlock()
fake.removeFolderMutex.RLock()
defer fake.removeFolderMutex.RUnlock()
fake.requiresRestartMutex.RLock()
defer fake.requiresRestartMutex.RUnlock()
fake.saveMutex.RLock()
defer fake.saveMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.subscribeMutex.RLock()
defer fake.subscribeMutex.RUnlock()
fake.unsubscribeMutex.RLock()
defer fake.unsubscribeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
//go:generate go tool counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
package config

View File

@@ -403,18 +403,6 @@ func (fake *Service) ServeReturnsOnCall(i int, result1 error) {
func (fake *Service) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.allAddressesMutex.RLock()
defer fake.allAddressesMutex.RUnlock()
fake.connectionStatusMutex.RLock()
defer fake.connectionStatusMutex.RUnlock()
fake.externalAddressesMutex.RLock()
defer fake.externalAddressesMutex.RUnlock()
fake.listenerStatusMutex.RLock()
defer fake.listenerStatusMutex.RUnlock()
fake.nATTypeMutex.RLock()
defer fake.nATTypeMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/service.go --fake-name Service . Service
//go:generate go tool counterfeiter -o mocks/service.go --fake-name Service . Service
package connections

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/manager.go --fake-name Manager . Manager
//go:generate go tool counterfeiter -o mocks/manager.go --fake-name Manager . Manager
package discover

View File

@@ -420,18 +420,6 @@ func (fake *Manager) StringReturnsOnCall(i int, result1 string) {
func (fake *Manager) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.cacheMutex.RLock()
defer fake.cacheMutex.RUnlock()
fake.childErrorsMutex.RLock()
defer fake.childErrorsMutex.RUnlock()
fake.errorMutex.RLock()
defer fake.errorMutex.RUnlock()
fake.lookupMutex.RLock()
defer fake.lookupMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
//go:generate go tool counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
// Package events provides event subscription and polling functionality.
package events

View File

@@ -160,10 +160,6 @@ func (fake *BufferedSubscription) SinceReturnsOnCall(i int, result1 []events.Eve
func (fake *BufferedSubscription) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.maskMutex.RLock()
defer fake.maskMutex.RUnlock()
fake.sinceMutex.RLock()
defer fake.sinceMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -459,6 +459,7 @@ func TestRecvOnlyRevertOwnID(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go func() {
defer cancel()
for {
select {
case <-ctx.Done():
@@ -466,9 +467,9 @@ func TestRecvOnlyRevertOwnID(t *testing.T) {
case <-sub.C():
if file, _ := m.testCurrentFolderFile(f.ID, name); file.Deleted {
t.Error("local file was deleted")
cancel()
return
} else if file.IsEquivalent(fi, f.modTimeWindow) {
cancel() // That's what we are waiting for
return // That's what we are waiting for
}
}
}

View File

@@ -491,15 +491,26 @@ nextFile:
continue nextFile
}
// Verify there is some availability for the file before we start
// processing it
devices := f.model.fileAvailability(f.FolderConfiguration, fi)
if len(devices) > 0 {
if err := f.handleFile(fi, copyChan); err != nil {
f.newPullError(fileName, err)
}
if len(devices) == 0 {
f.newPullError(fileName, errNotAvailable)
f.queue.Done(fileName)
continue
}
f.newPullError(fileName, errNotAvailable)
f.queue.Done(fileName)
// Verify we have space to handle the file before we start
// creating temp files etc.
if err := f.CheckAvailableSpace(uint64(fi.Size)); err != nil { //nolint:gosec
f.newPullError(fileName, err)
f.queue.Done(fileName)
continue
}
if err := f.handleFile(fi, copyChan); err != nil {
f.newPullError(fileName, err)
}
}
return changed, fileDeletions, dirDeletions, nil
@@ -1327,13 +1338,6 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
}
for state := range in {
if err := f.CheckAvailableSpace(uint64(state.file.Size)); err != nil { //nolint:gosec
state.fail(err)
// Nothing more to do for this failed file, since it would use too much disk space
Note: this edit replaces the line below; emitting as replace instead.
out <- state.sharedPullerState
continue
}
if f.Type != config.FolderTypeReceiveEncrypted {
f.model.progressEmitter.Register(state.sharedPullerState)
}

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
//go:generate go tool counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
package model

View File

@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/events"
)
@@ -125,11 +126,12 @@ func (s *stateTracker) setState(newState folderState) {
eventData["duration"] = time.Since(s.changed).Seconds()
}
slog.Debug("Folder changed state", "folder", s.folderID, "state", newState, "from", s.current)
s.current = newState
s.changed = time.Now().Truncate(time.Second)
s.evLogger.Log(events.StateChanged, eventData)
slog.Info("Folder changed state", "folder", s.folderID, "state", newState)
}
// getState returns the current state, the time when it last changed, and the
@@ -156,6 +158,12 @@ func (s *stateTracker) setError(err error) {
"from": s.current.String(),
}
if err != nil && s.current != FolderError {
slog.Warn("Folder is in error state", slog.String("folder", s.folderID), slogutil.Error(err))
} else if err == nil && s.current == FolderError {
slog.Info("Folder error state was cleared", slog.String("folder", s.folderID))
}
if err != nil {
eventData["error"] = err.Error()
s.current = FolderError

View File

@@ -165,10 +165,6 @@ func (fake *FolderSummaryService) SummaryReturnsOnCall(i int, result1 *model.Fol
func (fake *FolderSummaryService) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.summaryMutex.RLock()
defer fake.summaryMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -3837,112 +3837,6 @@ func (fake *Model) WatchErrorReturnsOnCall(i int, result1 error) {
func (fake *Model) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.addConnectionMutex.RLock()
defer fake.addConnectionMutex.RUnlock()
fake.allGlobalFilesMutex.RLock()
defer fake.allGlobalFilesMutex.RUnlock()
fake.availabilityMutex.RLock()
defer fake.availabilityMutex.RUnlock()
fake.bringToFrontMutex.RLock()
defer fake.bringToFrontMutex.RUnlock()
fake.closedMutex.RLock()
defer fake.closedMutex.RUnlock()
fake.clusterConfigMutex.RLock()
defer fake.clusterConfigMutex.RUnlock()
fake.completionMutex.RLock()
defer fake.completionMutex.RUnlock()
fake.connectedToMutex.RLock()
defer fake.connectedToMutex.RUnlock()
fake.connectionStatsMutex.RLock()
defer fake.connectionStatsMutex.RUnlock()
fake.currentFolderFileMutex.RLock()
defer fake.currentFolderFileMutex.RUnlock()
fake.currentGlobalFileMutex.RLock()
defer fake.currentGlobalFileMutex.RUnlock()
fake.currentIgnoresMutex.RLock()
defer fake.currentIgnoresMutex.RUnlock()
fake.delayScanMutex.RLock()
defer fake.delayScanMutex.RUnlock()
fake.deviceStatisticsMutex.RLock()
defer fake.deviceStatisticsMutex.RUnlock()
fake.dismissPendingDeviceMutex.RLock()
defer fake.dismissPendingDeviceMutex.RUnlock()
fake.dismissPendingFolderMutex.RLock()
defer fake.dismissPendingFolderMutex.RUnlock()
fake.downloadProgressMutex.RLock()
defer fake.downloadProgressMutex.RUnlock()
fake.folderErrorsMutex.RLock()
defer fake.folderErrorsMutex.RUnlock()
fake.folderProgressBytesCompletedMutex.RLock()
defer fake.folderProgressBytesCompletedMutex.RUnlock()
fake.folderStatisticsMutex.RLock()
defer fake.folderStatisticsMutex.RUnlock()
fake.getFolderVersionsMutex.RLock()
defer fake.getFolderVersionsMutex.RUnlock()
fake.globalDirectoryTreeMutex.RLock()
defer fake.globalDirectoryTreeMutex.RUnlock()
fake.globalSizeMutex.RLock()
defer fake.globalSizeMutex.RUnlock()
fake.indexMutex.RLock()
defer fake.indexMutex.RUnlock()
fake.indexUpdateMutex.RLock()
defer fake.indexUpdateMutex.RUnlock()
fake.loadIgnoresMutex.RLock()
defer fake.loadIgnoresMutex.RUnlock()
fake.localChangedFolderFilesMutex.RLock()
defer fake.localChangedFolderFilesMutex.RUnlock()
fake.localFilesMutex.RLock()
defer fake.localFilesMutex.RUnlock()
fake.localFilesSequencedMutex.RLock()
defer fake.localFilesSequencedMutex.RUnlock()
fake.localSizeMutex.RLock()
defer fake.localSizeMutex.RUnlock()
fake.needFolderFilesMutex.RLock()
defer fake.needFolderFilesMutex.RUnlock()
fake.needSizeMutex.RLock()
defer fake.needSizeMutex.RUnlock()
fake.onHelloMutex.RLock()
defer fake.onHelloMutex.RUnlock()
fake.overrideMutex.RLock()
defer fake.overrideMutex.RUnlock()
fake.pendingDevicesMutex.RLock()
defer fake.pendingDevicesMutex.RUnlock()
fake.pendingFoldersMutex.RLock()
defer fake.pendingFoldersMutex.RUnlock()
fake.receiveOnlySizeMutex.RLock()
defer fake.receiveOnlySizeMutex.RUnlock()
fake.remoteNeedFolderFilesMutex.RLock()
defer fake.remoteNeedFolderFilesMutex.RUnlock()
fake.remoteSequencesMutex.RLock()
defer fake.remoteSequencesMutex.RUnlock()
fake.requestMutex.RLock()
defer fake.requestMutex.RUnlock()
fake.requestGlobalMutex.RLock()
defer fake.requestGlobalMutex.RUnlock()
fake.resetFolderMutex.RLock()
defer fake.resetFolderMutex.RUnlock()
fake.restoreFolderVersionsMutex.RLock()
defer fake.restoreFolderVersionsMutex.RUnlock()
fake.revertMutex.RLock()
defer fake.revertMutex.RUnlock()
fake.scanFolderMutex.RLock()
defer fake.scanFolderMutex.RUnlock()
fake.scanFolderSubdirsMutex.RLock()
defer fake.scanFolderSubdirsMutex.RUnlock()
fake.scanFoldersMutex.RLock()
defer fake.scanFoldersMutex.RUnlock()
fake.sequenceMutex.RLock()
defer fake.sequenceMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.setIgnoresMutex.RLock()
defer fake.setIgnoresMutex.RUnlock()
fake.stateMutex.RLock()
defer fake.stateMutex.RUnlock()
fake.usageReportingStatsMutex.RLock()
defer fake.usageReportingStatsMutex.RUnlock()
fake.watchErrorMutex.RLock()
defer fake.watchErrorMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/model.go --fake-name Model . Model
//go:generate go tool counterfeiter -o mocks/model.go --fake-name Model . Model
package model
@@ -2549,6 +2548,12 @@ func (m *model) numHashers(folder string) int {
m.mut.RLock()
folderCfg := m.folderCfgs[folder]
numFolders := max(1, len(m.folderCfgs))
// MaxFolderConcurrency already limits the number of scanned folders, so
// prefer it over the overall number of folders to avoid limiting performance
// further for no reason.
if concurrency := m.cfg.Options().MaxFolderConcurrency(); concurrency > 0 {
numFolders = min(numFolders, concurrency)
}
m.mut.RUnlock()
if folderCfg.Hashers > 0 {
@@ -2556,16 +2561,17 @@ func (m *model) numHashers(folder string) int {
return folderCfg.Hashers
}
numCpus := runtime.GOMAXPROCS(-1)
if build.IsWindows || build.IsDarwin || build.IsIOS || build.IsAndroid {
// Interactive operating systems; don't load the system too heavily by
// default.
return 1
numCpus = max(1, numCpus/4)
}
// For other operating systems and architectures, let's try to get some
// work done... Divide the available CPU cores among the configured
// folders.
if perFolder := runtime.GOMAXPROCS(-1) / numFolders; perFolder > 0 {
if perFolder := numCpus / numFolders; perFolder > 0 {
return perFolder
}

View File

@@ -324,6 +324,15 @@ func (s *sharedPullerState) finalClose() (bool, error) {
return false, nil
}
if s.writer == nil {
// If we didn't even create a temp file up to this point, now is the
// time to do so. This also truncates the file to the correct size
// if we're using sparse files.
if err := s.addWriterLocked(); err != nil {
return false, err
}
}
if len(s.file.Encrypted) > 0 {
if err := s.finalizeEncrypted(); err != nil && s.err == nil {
// This is our error as we weren't errored before.
@@ -355,11 +364,6 @@ func (s *sharedPullerState) finalClose() (bool, error) {
// folder from encrypted data we can extract this FileInfo from the end of
// the file and regain the original metadata.
func (s *sharedPullerState) finalizeEncrypted() error {
if s.writer == nil {
if err := s.addWriterLocked(); err != nil {
return err
}
}
trailerSize, err := writeEncryptionTrailer(s.file, s.writer)
if err != nil {
return err

View File

@@ -149,7 +149,7 @@ type testModel struct {
func newModel(t testing.TB, cfg config.Wrapper, id protocol.DeviceID, protectedFiles []string) *testModel {
t.Helper()
evLogger := events.NewLogger()
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -8,6 +8,7 @@
package osutil
import (
"os"
"path/filepath"
"strings"
"sync"
@@ -142,3 +143,21 @@ func IsDeleted(ffs fs.Filesystem, name string) bool {
}
return false
}
func DirSize(location string) int64 {
entries, err := os.ReadDir(location)
if err != nil {
return 0
}
var size int64
for _, entry := range entries {
fi, err := entry.Info()
if err != nil {
continue
}
size += fi.Size()
}
return size
}

View File

@@ -582,24 +582,6 @@ func (fake *mockedConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *mockedConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -1144,44 +1144,6 @@ func (fake *Connection) TypeReturnsOnCall(i int, result1 string) {
func (fake *Connection) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.closeMutex.RLock()
defer fake.closeMutex.RUnlock()
fake.closedMutex.RLock()
defer fake.closedMutex.RUnlock()
fake.clusterConfigMutex.RLock()
defer fake.clusterConfigMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.deviceIDMutex.RLock()
defer fake.deviceIDMutex.RUnlock()
fake.downloadProgressMutex.RLock()
defer fake.downloadProgressMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.indexMutex.RLock()
defer fake.indexMutex.RUnlock()
fake.indexUpdateMutex.RLock()
defer fake.indexUpdateMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.requestMutex.RLock()
defer fake.requestMutex.RUnlock()
fake.startMutex.RLock()
defer fake.startMutex.RUnlock()
fake.statisticsMutex.RLock()
defer fake.statisticsMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -584,24 +584,6 @@ func (fake *ConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *ConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.connectionIDMutex.RLock()
defer fake.connectionIDMutex.RUnlock()
fake.cryptoMutex.RLock()
defer fake.cryptoMutex.RUnlock()
fake.establishedAtMutex.RLock()
defer fake.establishedAtMutex.RUnlock()
fake.isLocalMutex.RLock()
defer fake.isLocalMutex.RUnlock()
fake.priorityMutex.RLock()
defer fake.priorityMutex.RUnlock()
fake.remoteAddrMutex.RLock()
defer fake.remoteAddrMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
fake.transportMutex.RLock()
defer fake.transportMutex.RUnlock()
fake.typeMutex.RLock()
defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,14 +4,12 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
// Prevents import loop, for internal testing
//go:generate counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
//go:generate go tool counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
//go:generate go run ../../script/prune_mocks.go -t mocked_connection_info_test.go
//go:generate counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
//go:generate counterfeiter -o mocks/connection.go --fake-name Connection . Connection
//go:generate go tool counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
//go:generate go tool counterfeiter -o mocks/connection.go --fake-name Connection . Connection
package protocol

View File

@@ -1,7 +1,6 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
//go:generate -command genxdr go run github.com/calmh/xdr/cmd/genxdr
//go:generate genxdr -o packets_xdr.go packets.go
//go:generate go tool genxdr -o packets_xdr.go packets.go
package protocol

View File

@@ -31,6 +31,8 @@ var bufPool = sync.Pool{
},
}
const hashLength = sha256.Size
var hashPool = sync.Pool{
New: func() any {
return sha256.New()
@@ -43,9 +45,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
counter = &noopCounter{}
}
hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
const hashLength = sha256.Size
var blocks []protocol.BlockInfo
var hashes, thisHash []byte
@@ -62,8 +61,14 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hashes = make([]byte, 0, hashLength*numBlocks)
}
hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
// A 32k buffer is used for copying into the hash function.
buf := bufPool.Get().(*[bufSize]byte)[:] //nolint:forcetypeassert
defer func() {
bufPool.Put((*[bufSize]byte)(buf))
hf.Reset()
hashPool.Put(hf)
}()
var offset int64
lr := io.LimitReader(r, int64(blocksize)).(*io.LimitedReader)
@@ -102,9 +107,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hf.Reset()
}
bufPool.Put((*[bufSize]byte)(buf))
hf.Reset()
hashPool.Put(hf)
if len(blocks) == 0 {
// Empty file

View File

@@ -18,7 +18,7 @@ import (
)
func TestDeviceStat(t *testing.T) {
sdb, err := sqlite.OpenTemp()
sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -72,7 +72,7 @@ func TestStartupFail(t *testing.T) {
}, protocol.LocalDeviceID, events.NoopLogger)
defer os.Remove(cfg.ConfigPath())
sdb, err := sqlite.OpenTemp()
sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -13,7 +13,6 @@ import (
"fmt"
"io"
"log/slog"
"net/http"
"os"
"sync"
"time"
@@ -158,7 +157,8 @@ func OpenDatabase(path string, deleteRetention time.Duration) (db.DB, error) {
}
// Attempts migration of the old (LevelDB-based) database type to the new (SQLite-based) type
func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiAddr string) error {
// This will attempt to provide a temporary API server during the migration, if `apiAddr` is not empty.
func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration) error {
oldDBDir := locations.Get(locations.LegacyDatabase)
if _, err := os.Lstat(oldDBDir); err != nil {
// No old database
@@ -172,12 +172,6 @@ func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiA
}
defer be.Close()
// Start a temporary API server during the migration
api := migratingAPI{addr: apiAddr}
apiCtx, cancel := context.WithCancel(ctx)
defer cancel()
go api.Serve(apiCtx)
sdb, err := sqlite.OpenForMigration(locations.Get(locations.Database))
if err != nil {
return err
@@ -233,7 +227,7 @@ func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiA
if time.Since(t1) > 10*time.Second {
d := time.Since(t0) + 1
t1 = time.Now()
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "filesrate", float64(files)/d.Seconds())
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "blocksrate", float64(blocks)/d.Seconds(), "filesrate", float64(files)/d.Seconds())
}
}
}
@@ -292,27 +286,3 @@ func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiA
slog.Info("Migration complete", "files", totFiles, "blocks", totBlocks/1000, "duration", time.Since(t0).Truncate(time.Second))
return nil
}
type migratingAPI struct {
addr string
}
func (m migratingAPI) Serve(ctx context.Context) error {
srv := &http.Server{
Addr: m.addr,
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("*** Database migration in progress ***\n\n"))
for _, line := range slogutil.GlobalRecorder.Since(time.Time{}) {
line.WriteTo(w)
}
}),
}
go func() {
slog.InfoContext(ctx, "Starting temporary GUI/API during migration", slogutil.Address(m.addr))
err := srv.ListenAndServe()
slog.InfoContext(ctx, "Temporary GUI/API closed", slogutil.Address(m.addr), slogutil.Error(err))
}()
<-ctx.Done()
return srv.Close()
}

View File

@@ -83,6 +83,8 @@ func SecureDefaultWithTLS12() *tls.Config {
// We've put some thought into this choice and would like it to
// matter.
PreferServerCipherSuites: true,
// We support HTTP/2 and HTTP/1.1
NextProtos: []string{"h2", "http/1.1"},
ClientSessionCache: tls.NewLRUClientSessionCache(0),
}

View File

@@ -188,6 +188,13 @@ type Report struct {
DistDist string `json:"distDist" metric:"distribution,gaugeVec:distribution"`
DistOS string `json:"distOS" metric:"distribution,gaugeVec:os"`
DistArch string `json:"distArch" metric:"distribution,gaugeVec:arch"`
// Database counts
Database struct {
ModernCSQLite bool `json:"modernCSQLite" metric:"database_engine{engine=modernc-sqlite},gauge"`
MattnSQLite bool `json:"mattnSQLite" metric:"database_engine{engine=mattn-sqlite},gauge"`
LevelDB bool `json:"levelDB" metric:"database_engine{engine=leveldb},gauge"`
} `json:"database"`
}
func New() *Report {

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "STDISCOSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "STRELAYSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,10 +27,15 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH SYNOPSIS
.SH OVERVIEW
.sp
This page covers how to configure Syncthing for file synchronization, including device setup, folder configuration, connection settings, and various configuration methods through the web GUI, command\-line, or direct file editing.
.SH CONFIGURATION FILE LOCATIONS
.sp
The default configuration and database directory locations are:
.INDENT 0.0
.INDENT 3.5
.sp
@@ -143,7 +148,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>2</maxConcurrentWrites>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -245,7 +250,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>2</maxConcurrentWrites>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -335,7 +340,7 @@ GUI.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>2</maxConcurrentWrites>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -602,7 +607,7 @@ folder is located on a FAT partition, and \fB0\fP otherwise.
.TP
.B maxConcurrentWrites
Maximum number of concurrent write operations while syncing. Increasing this might increase or
decrease disk performance, depending on the underlying storage. Default is \fB2\fP\&.
decrease disk performance, depending on the underlying storage. Default is \fB16\fP\&.
.UNINDENT
.INDENT 0.0
.TP
@@ -1550,7 +1555,7 @@ are set, \fI\%\-\-auditfile\fP takes priority.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>2</maxConcurrentWrites>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

Some files were not shown because too many files have changed in this diff