Compare commits

..

26 Commits

Author SHA1 Message Date
Jakob Borg
3382ccc3f1 chore(model): slightly deflake TestRecvOnlyRevertOwnID (#10390)
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-12 09:41:47 +00:00
Jakob Borg
9ee208b441 chore(sqlite): use normalised tables for file names and versions (#10383)
This changes the files table to use normalisation for the names and
versions. The idea is that these are often common between all remote
devices, and repeating an integer is more efficient than repeating a
long string. A new benchmark bears this out; for a database with 100k
files shared between 31 devices, with some worst-case assumptions on
version vector size, the database is reduced in size by 50% and the test
finishes quicker:

    Current:
        db_bench_test.go:322: Total size: 6263.70 MiB
    --- PASS: TestBenchmarkSizeManyFilesRemotes (1084.89s)

    New:
        db_bench_test.go:326: Total size: 3049.95 MiB
    --- PASS: TestBenchmarkSizeManyFilesRemotes (776.97s)

The other benchmarks end up about the same within the margin of
variability, with one possible exception being that RemoteNeed seems to
be a little slower on average:

                                          old files/s   new files/s
    Update/n=RemoteNeed/size=1000-8            5.051k        4.654k
    Update/n=RemoteNeed/size=2000-8            5.201k        4.384k
    Update/n=RemoteNeed/size=4000-8            4.943k        4.242k
    Update/n=RemoteNeed/size=8000-8            5.099k        3.527k
    Update/n=RemoteNeed/size=16000-8           3.686k        3.847k
    Update/n=RemoteNeed/size=30000-8           4.456k        3.482k

I'm not sure why; possibly that query can be optimised anyhow.
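
For illustration, a minimal sketch of the interning pattern the new schema
relies on, assuming a plain `database/sql` handle and the `file_names` table
from the diff below; the helper name is hypothetical:

    package sketch

    import "database/sql"

    // internName returns the integer index for a file name, inserting the
    // name on first use. Rows in files then carry the index, so a name
    // repeated by 31 devices costs one string plus 31 small integers.
    func internName(db *sql.DB, name string) (int64, error) {
        var idx int64
        err := db.QueryRow(`
            INSERT INTO file_names(name) VALUES (?)
            ON CONFLICT(name) DO UPDATE SET name = excluded.name
            RETURNING idx
        `, name).Scan(&idx)
        return idx, err
    }

The upsert-with-RETURNING form resolves the index in a single round trip,
matching the prepared statements in the actual change.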

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-12 09:27:41 +00:00
Jakob Borg
dd90e8ec7a fix(api): limit size of allowed authentication request (#10386)
We have a slightly naive io.ReadAll on the authentication handler, which
can result in unlimited memory consumption from an unauthenticated API
endpoint. Add a reasonable limit there.
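
A hedged sketch of the general fix, not the exact handler code: capping the
body with `http.MaxBytesReader` before `io.ReadAll` means an unauthenticated
client can no longer make the server buffer an arbitrarily large request. The
1 MiB cap here is illustrative:

    package sketch

    import (
        "io"
        "net/http"
    )

    func authHandler(w http.ResponseWriter, r *http.Request) {
        const maxAuthBody = 1 << 20 // illustrative 1 MiB limit
        body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxAuthBody))
        if err != nil {
            http.Error(w, "oversized or unreadable request body", http.StatusRequestEntityTooLarge)
            return
        }
        _ = body // parse the credentials as before
    }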

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-11 10:11:29 +00:00
Jakob Borg
aa6ae0f3b0 fix(sqlite): add _txlock=immediate to modernc implementation (#10384)
For symmetry with the CGO variant.

https://pkg.go.dev/modernc.org/sqlite#Driver.Open
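
A minimal sketch of what the option amounts to when opening the pure Go
driver; the file name is illustrative:

    package main

    import (
        "database/sql"
        "log"

        _ "modernc.org/sqlite"
    )

    func main() {
        // _txlock=immediate makes transactions take the write lock up
        // front rather than upgrading from a read lock mid-transaction,
        // which avoids busy errors on lock upgrade under concurrency.
        db, err := sql.Open("sqlite", "file:main.db?_txlock=immediate")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    }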

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-11 06:16:31 +00:00
Jakob Borg
e8b256793a chore: clean up migrated database (#10381)
Remove the migrated v0.14.0 database format after two weeks. Remove a
few old patterns that are no longer relevant. Ensure the cleanup runs in
both the config and database directories.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-10 12:23:35 +02:00
Catfriend1
8233279a65 chore(ursrv): update regex patterns for Syncthing-Fork entries (#10380)
Update regex patterns for Syncthing-Fork entries

Signed-off-by: Catfriend1 <16361913+Catfriend1@users.noreply.github.com>
2025-09-09 14:34:12 +02:00
Jakob Borg
8e5d5802cc chore(ursrv): calculate more fine-grained percentiles
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-09 07:37:39 +02:00
Jakob Borg
25ae01b0d7 chore(sqlite): skip database GC entirely when it's provably unnecessary (#10379)
Store the sequence number of the last GC sweep in a KV entry. Next time, if it
matches the current sequence we can skip GC entirely, because nothing has been
added or removed.
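
A condensed sketch of the skip logic; `kvStore` and `gcSweep` are hypothetical
stand-ins for the real typed KV wrapper and the actual GC steps:

    package sketch

    type kvStore interface {
        Int64(key string) (int64, bool, error)
        PutInt64(key string, v int64) error
    }

    const lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"

    func maybeGC(kv kvStore, currentSeq int64, gcSweep func() error) error {
        prev, _, err := kv.Int64(lastSuccessfulGCSeqKey)
        if err != nil {
            return err
        }
        if currentSeq == prev {
            return nil // nothing added or removed since the last sweep
        }
        if err := gcSweep(); err != nil {
            return err
        }
        // Record the sequence only after a fully successful sweep.
        return kv.PutInt64(lastSuccessfulGCSeqKey, currentSeq)
    }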

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-08 08:55:04 +02:00
Syncthing Release Automation
66583927f8 chore(gui, man, authors): update docs, translations, and contributors
2025-09-08 03:52:21 +00:00
Simon Frei
f0328abeaa chore(scanner): always return values to the pools when hashing blocks (#10377)
There are some return statements in between, but putting the values back
on those paths isn't my motivation (they are hardly ever taken); I just
find this more readable. Same with moving `hashLength`: placed next to
the pool, its connection with `sha256.New()` is clearer.

Followup to:
chore(scanner): reduce memory pressure by using pools inside hasher #10222
6e26fab3a0
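
A generic sketch of the pattern the change enforces, not the scanner's actual
code: keeping the hashers in a `sync.Pool` and returning them with `defer`
means every return path, including the rarely taken early ones, gives the
value back:

    package sketch

    import (
        "crypto/sha256"
        "hash"
        "sync"
    )

    var hasherPool = sync.Pool{New: func() any { return sha256.New() }}

    func hashBlock(block []byte) []byte {
        h := hasherPool.Get().(hash.Hash)
        defer hasherPool.Put(h) // runs on every return path
        h.Reset()
        h.Write(block)
        return h.Sum(nil)
    }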

Signed-off-by: Simon Frei <freisim93@gmail.com>
2025-09-07 17:00:19 +02:00
Jakob Borg
4b8d07d91c fix(sqlite): explicitly set temporary directory location (fixes #10368) (#10376)
On Unixes, avoid /tmp, which is otherwise likely to become the chosen default.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:47 +02:00
Jakob Borg
c33daca3b4 fix(sqlite): less impactful periodic garbage collection (#10374)
Periodic garbage collection can take a long time on large folders. The worst
step is the one for blocks, which are typically orders of magnitude more
numerous than files or block lists.

This improves the situation by running the blocks GC in a number of smaller
range chunks, in random order, and stopping after a time limit. At most ten
minutes per run will be spent garbage collecting blocklists and blocks.

With this, we're not guaranteed to complete a full GC on every run, but
we'll make some progress and get there eventually.
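
A condensed, hypothetical sketch of the loop; `deleteChunk` stands in for the
ranged DELETE built from the `blobRange` helpers in the diff further down, and
the constants follow the change:

    package sketch

    import (
        "math/rand"
        "time"
    )

    func chunkedGC(rows int64, deleteChunk func(chunk int) error) error {
        const (
            gcMinChunks  = 5
            gcChunkSize  = 100_000
            gcMaxRuntime = 5 * time.Minute // per table, per run
        )
        chunks := int(max(gcMinChunks, rows/gcChunkSize))
        order := rand.Perm(chunks) // random order spreads progress across runs
        t0 := time.Now()
        for _, c := range order {
            if time.Since(t0) > gcMaxRuntime {
                break // stop early; a later run continues elsewhere
            }
            if err := deleteChunk(c); err != nil {
                return err
            }
        }
        return nil
    }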

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:29 +02:00
Amin Vakil
a533f453f8 build: trigger nightly build only on syncthing repo (#10375)
Signed-off-by: Amin Vakil <info@aminvakil.com>
2025-09-07 14:03:33 +02:00
Jakob Borg
3c9e87d994 build: exclude illumos from cross building
Now that we have a native build for it.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 11:51:15 +02:00
Jakob Borg
f0180cb014 fix(sqlite): avoid rowid on kv table (#10367)
No migration on this as it has no practical impact, just a slight
cleanup for new installations.

Also a refactor of how we declare single-column primary keys, for
consistency.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 09:31:07 +00:00
Jakob Borg
a99a730c0c fix(tlsutil): support HTTP/2 on GUI/API connections (#10366)
By not setting ALPN we were implicitly rejecting HTTP/2, completely
unnecessarily.
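
A minimal sketch of the principle, not Syncthing's actual tlsutil code: a TLS
listener only permits HTTP/2 if `"h2"` is advertised via ALPN:

    package sketch

    import "crypto/tls"

    func guiTLSConfig(cert tls.Certificate) *tls.Config {
        return &tls.Config{
            Certificates: []tls.Certificate{cert},
            MinVersion:   tls.VersionTLS12,
            // Omitting NextProtos implicitly rejects HTTP/2.
            NextProtos: []string{"h2", "http/1.1"},
        }
    }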

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:57:39 +02:00
Jakob Borg
36254473a3 chore(slogutil): add configurable logging format (fixes #10352) (#10354)
This adds several options for configuring how timestamps and severity
levels are formatted in the log output, making it more suitable for
integration with log systems like systemd.

      --log-format-timestamp="2006-01-02 15:04:05"
         Format for timestamp, set to empty to disable timestamps ($STLOGFORMATTIMESTAMP)

      --[no-]log-format-level-string
         Whether to include level string in log line ($STLOGFORMATLEVELSTRING)

      --[no-]log-format-level-syslog
         Whether to include level as syslog prefix in log line ($STLOGFORMATLEVELSYSLOG)

So, to get log output suitable for systemd (syslog prefix, no level
string, no timestamp) we can pass `--log-format-timestamp=""
--no-log-format-level-string --log-format-level-syslog` or,
equivalently, set `STLOGFORMATTIMESTAMP="" STLOGFORMATLEVELSTRING=false
STLOGFORMATLEVELSYSLOG=true`.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:52:49 +02:00
Jakob Borg
800596139e chore(sqlite): stamp files with application_id
No practical effect, just a tiny bit of fun to stamp the database files
with an application ID that identifies them.
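
For reference, the stamp is just a 32-bit tag readable with
`PRAGMA application_id`; the constants below match the diff and spell "STmn"
and "STfd" as ASCII bytes:

    package main

    import "fmt"

    const (
        applicationIDMain   = 0x53546d6e // "STmn", Syncthing main database
        applicationIDFolder = 0x53546664 // "STfd", Syncthing folder database
    )

    func main() {
        // Issued once at open time; inspect later with: PRAGMA application_id;
        fmt.Printf("PRAGMA application_id = %d\n", applicationIDMain)
        fmt.Printf("PRAGMA application_id = %d\n", applicationIDFolder)
    }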

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:15:38 +02:00
Jakob Borg
f48782e4df fix(sqlite): revert to default page cache size (#10362)
While we're figuring out optimal defaults, reduce the page cache size to
the compiled-in default. On my computer this makes no difference in
benchmarks. In forum threads, it solved the problem of massive memory
usage during initial scan.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:07:51 +02:00
Jakob Borg
922cc7544e docs: we now do binaries for illumos again
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 21:38:30 +02:00
Tommy van der Vorst
9e262d84de fix(api): redact device encryption passwords in support bundle config (#10359)
* fix(api): redact device encryption passwords in support bundle config

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>

* Update lib/api/support_bundle.go

Signed-off-by: Jakob Borg <jakob@kastelo.net>

---------

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 18:22:59 +00:00
Jakob Borg
42db6280e6 fix(model): earlier free-space check (fixes #10347) (#10348)
Since #10332 we'd create the temp file when closing out the puller state
for a file, but this is inappropriate if the reason we're bailing out is
that there isn't space for it to begin with. Instead, do the
free space check before we even start copying/pulling.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 16:53:30 +00:00
Albert Lee
8d8adae310 build: package for illumos using vmactions/omnios-vm (#10328)
Use GitHub Actions to build illumos/amd64 package.

Signed-off-by: Albert Lee <trisk@forkgnu.org>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 08:51:44 +00:00
Jakob Borg
12ba4b6aea chore(model): adjust folder state logging (fixes #10350) (#10353)
Removes the chitter-chatter of folder state changes from the info level,
while adding the error state at warning level and a corresponding
clearing of the error state at info level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 07:38:06 +00:00
Jakob Borg
372e3c26b0 fix(db): remove temp_store = MEMORY pragmas (#10343)
This reduces database migration memory usage in my test scenario from
3.8 GB to 440 MB. In principle I don't think we're causing many temp
tables to be generated anyway in normal usage, but if we do and someone
can benchmark a performance difference, we can add a tunable. I ran the
database benchmark before and after and didn't see a difference above
the noise level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 09:27:53 +02:00
Jakob Borg
01e2426a56 fix(syncthing): properly report kibibytes RSS in Linux perfstats
The value from getrusage is already in KiB, while on macOS it's in
bytes.
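
A hedged, Unix-only sketch of the normalization, using `runtime.GOOS` where
the real code uses the build package:

    package sketch

    import (
        "runtime"
        "syscall"
    )

    // maxRSSKiB returns the peak resident set size in KiB.
    func maxRSSKiB() int64 {
        var ru syscall.Rusage
        if err := syscall.Getrusage(syscall.RUSAGE_SELF, &ru); err != nil {
            return 0
        }
        rss := ru.Maxrss
        if runtime.GOOS == "darwin" {
            rss /= 1024 // macOS reports bytes; Linux reports KiB
        }
        return int64(rss)
    }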

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 07:52:19 +02:00
58 changed files with 842 additions and 276 deletions

View File

@@ -159,6 +159,7 @@ jobs:
needs:
- build-test
- package-linux
- package-illumos
- package-cross
- package-source
- package-debian
@@ -337,6 +338,39 @@ jobs:
*.tar.gz
compat.json
package-illumos:
runs-on: ubuntu-latest
name: Package for illumos
needs:
- facts
env:
VERSION: ${{ needs.facts.outputs.version }}
GO_VERSION: ${{ needs.facts.outputs.go-version }}
steps:
- uses: actions/checkout@v4
- name: Build syncthing in OmniOS VM
uses: vmactions/omnios-vm@v1
with:
envs: "VERSION GO_VERSION CGO_ENABLED"
usesh: true
prepare: |
pkg install developer/gcc14 web/curl archiver/gnu-tar
run: |
curl -L "https://go.dev/dl/go$GO_VERSION.illumos-amd64.tar.gz" | gtar xzf -
export PATH="$GITHUB_WORKSPACE/go/bin:$PATH"
go version
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" tar "$tgt"
done
env:
CGO_ENABLED: "1"
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: packages-illumos
path: "*.tar.gz"
#
# macOS. The entire build runs in the release environment because code
# signing is part of the build process, so it is limited to release
@@ -503,6 +537,7 @@ jobs:
| grep -v aix/ppc64 \
| grep -v android/ \
| grep -v darwin/ \
| grep -v illumos/ \
| grep -v ios/ \
| grep -v js/ \
| grep -v linux/ \
@@ -588,6 +623,7 @@ jobs:
needs:
- codesign-windows
- package-linux
- package-illumos
- package-macos
- package-cross
- package-source

View File

@@ -8,6 +8,7 @@ on:
jobs:
trigger-nightly:
if: github.repository_owner == 'syncthing'
runs-on: ubuntu-latest
name: Push to release-nightly to trigger build
steps:

View File

@@ -8,6 +8,7 @@ package serve
import (
"context"
"fmt"
"log/slog"
"reflect"
"slices"
@@ -335,12 +336,12 @@ func (q *metricSummary) Collect(c chan<- prometheus.Metric) {
}
slices.Sort(vs)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[0], append(labelVals, "0")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*5/100], append(labelVals, "0.05")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)/2], append(labelVals, "0.5")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*9/10], append(labelVals, "0.9")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)*95/100], append(labelVals, "0.95")...)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[len(vs)-1], append(labelVals, "1")...)
pctiles := []float64{0, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.975, 0.99, 1}
for _, pct := range pctiles {
idx := int(float64(len(vs)-1) * pct)
c <- prometheus.MustNewConstMetric(q.qDesc, prometheus.GaugeValue, vs[idx], append(labelVals, fmt.Sprint(pct))...)
}
}
}

View File

@@ -79,7 +79,8 @@ var (
{regexp.MustCompile(`\svagrant@bullseye`), "F-Droid"},
{regexp.MustCompile(`\svagrant@bookworm`), "F-Droid"},
{regexp.MustCompile(`Anwender@NET2017`), "Syncthing-Fork (3rd party)"},
{regexp.MustCompile(`\sreproducible-build@Catfriend1-syncthing-android`), "Syncthing-Fork Catfriend1 (3rd party)"},
{regexp.MustCompile(`\sreproducible-build@nel0x-syncthing-android-gplay`), "Syncthing-Fork nel0x (3rd party)"},
{regexp.MustCompile(`\sbuilduser@(archlinux|svetlemodry)`), "Arch (3rd party)"},
{regexp.MustCompile(`\ssyncthing@archlinux`), "Arch (3rd party)"},

View File

@@ -164,6 +164,9 @@ type serveCmd struct {
LogLevel slog.Level `help:"Log level for all packages (DEBUG,INFO,WARN,ERROR)" env:"STLOGLEVEL" default:"INFO"`
LogMaxFiles int `name:"log-max-old-files" help:"Number of old files to keep (zero to keep only current)" default:"${logMaxFiles}" placeholder:"N" env:"STLOGMAXOLDFILES"`
LogMaxSize int `help:"Maximum size of any file (zero to disable log rotation)" default:"${logMaxSize}" placeholder:"BYTES" env:"STLOGMAXSIZE"`
LogFormatTimestamp string `name:"log-format-timestamp" help:"Format for timestamp, set to empty to disable timestamps" env:"STLOGFORMATTIMESTAMP" default:"${timestampFormat}"`
LogFormatLevelString bool `name:"log-format-level-string" help:"Whether to include level string in log line" env:"STLOGFORMATLEVELSTRING" default:"${levelString}" negatable:""`
LogFormatLevelSyslog bool `name:"log-format-level-syslog" help:"Whether to include level as syslog prefix in log line" env:"STLOGFORMATLEVELSYSLOG" default:"${levelSyslog}" negatable:""`
NoBrowser bool `help:"Do not start browser" env:"STNOBROWSER"`
NoPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup" env:"STNOPORTPROBING"`
NoRestart bool `help:"Do not restart Syncthing when exiting due to API/GUI command, upgrade, or crash" env:"STNORESTART"`
@@ -186,10 +189,13 @@ type serveCmd struct {
}
func defaultVars() kong.Vars {
vars := kong.Vars{}
vars["logMaxSize"] = strconv.Itoa(10 << 20) // 10 MiB
vars["logMaxFiles"] = "3" // plus the current one
vars := kong.Vars{
"logMaxSize": strconv.Itoa(10 << 20), // 10 MiB
"logMaxFiles": "3", // plus the current one
"levelString": strconv.FormatBool(slogutil.DefaultLineFormat.LevelString),
"levelSyslog": strconv.FormatBool(slogutil.DefaultLineFormat.LevelSyslog),
"timestampFormat": slogutil.DefaultLineFormat.TimestampFormat,
}
// On non-Windows, we explicitly default to "-" which means stdout. On
// Windows, the "default" options.logFile will later be replaced with the
@@ -262,8 +268,14 @@ func (c *serveCmd) Run() error {
osutil.HideConsole()
}
// The default log level for all packages
// Customize the logging early
slogutil.SetLineFormat(slogutil.LineFormat{
TimestampFormat: c.LogFormatTimestamp,
LevelString: c.LogFormatLevelString,
LevelSyslog: c.LogFormatLevelSyslog,
})
slogutil.SetDefaultLevel(c.LogLevel)
slogutil.SetLevelOverrides(os.Getenv("STTRACE"))
// Treat an explicitly empty log file name as no log file
if c.LogFile == "" {
@@ -769,40 +781,39 @@ func initialAutoUpgradeCheck(misc *db.Typed) (upgrade.Release, error) {
// suitable time after they have gone out of fashion.
func cleanConfigDirectory() {
patterns := map[string]time.Duration{
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.11.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.13.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist
"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
"tmp-index-sorter.*": time.Minute, // these should never exist on startup
"support-bundle-*": 30 * 24 * time.Hour, // keep old support bundle zip or folder for a month
"csrftokens.txt": 0, // deprecated, remove immediately
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index-v0.14.0.db-migrated": 14 * 24 * time.Hour, // keep old index format for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"support-bundle-*": 30 * 24 * time.Hour, // keep old support bundle zip or folder for a month
}
for pat, dur := range patterns {
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, locations.GetBaseDir(locations.ConfigBaseDir))
files, err := fs.Glob(pat)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
continue
}
for _, file := range files {
info, err := fs.Lstat(file)
locations := slices.Compact([]string{
locations.GetBaseDir(locations.ConfigBaseDir),
locations.GetBaseDir(locations.DataBaseDir),
})
for _, loc := range locations {
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, loc)
for pat, dur := range patterns {
entries, err := fs.Glob(pat)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
continue
}
if time.Since(info.ModTime()) > dur {
if err = fs.RemoveAll(file); err != nil {
for _, entry := range entries {
info, err := fs.Lstat(entry)
if err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
} else {
slog.Warn("Cleaned away old file", slogutil.FilePath(filepath.Base(file)))
continue
}
if time.Since(info.ModTime()) > dur {
if err = fs.RemoveAll(entry); err != nil {
slog.Warn("Failed to clean config directory", slogutil.Error(err))
} else {
slog.Warn("Cleaned away old file", slogutil.FilePath(filepath.Base(entry)))
}
}
}
}
@@ -1039,7 +1050,7 @@ func (m migratingAPI) Serve(ctx context.Context) error {
w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("*** Database migration in progress ***\n\n"))
for _, line := range slogutil.GlobalRecorder.Since(time.Time{}) {
line.WriteTo(w)
_, _ = line.WriteTo(w, slogutil.DefaultLineFormat)
}
}),
}

View File

@@ -16,7 +16,9 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/exp/constraints"
)
@@ -48,14 +50,19 @@ func savePerfStats(file string) {
in, out := protocol.TotalInOut()
timeDiff := t.Sub(prevTime)
rss := curRus.Maxrss
if build.IsDarwin {
rss /= 1024
}
fmt.Fprintf(fd, "%.03f\t%f\t%d\t%d\t%.0f\t%.0f\t%d\n",
t.Sub(t0).Seconds(),
rate(cpusec(&prevRus), cpusec(&curRus), timeDiff, 1),
(curMem.Sys-curMem.HeapReleased)/1024,
curRus.Maxrss/1024,
rss,
rate(prevIn, in, timeDiff, 1e3),
rate(prevOut, out, timeDiff, 1e3),
dirsize(locations.Get(locations.Database))/1024,
osutil.DirSize(locations.Get(locations.Database))/1024,
)
prevTime = t
@@ -78,21 +85,3 @@ func rate[T number](prev, cur T, d time.Duration, div float64) float64 {
rate := float64(diff) / d.Seconds() / div
return rate
}
func dirsize(location string) int64 {
entries, err := os.ReadDir(location)
if err != nil {
return 0
}
var size int64
for _, entry := range entries {
fi, err := entry.Info()
if err != nil {
continue
}
size += fi.Size()
}
return size
}

View File

@@ -7,6 +7,9 @@ StartLimitBurst=4
[Service]
User=%i
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1

View File

@@ -5,7 +5,10 @@ StartLimitIntervalSec=60
StartLimitBurst=4
[Service]
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart --logflags=0
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4

View File

@@ -177,7 +177,7 @@
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Вида „{{receiveEncrypted}}“ може да бъде избран само при добавяне на папка.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Видът папката „{{receiveEncrypted}}“ не може да бъде променян след нейното създаване. Трябва да я премахнете, изтриете или разшифровате съдържанието и да добавите папката отново.",
"Folders": "Папки",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете нейната първопричина или потърсете помощ ако не съумявате.",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете първопричината или ако не съумявате потърсете помощ.",
"Forever": "Завинаги",
"Full Rescan Interval (s)": "Интервал на пълно обхождане (секунди)",
"GUI": "Интерфейс",

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Redes permitidas",
"Alphabetic": "Alfabética",
"Altered by ignoring deletes.": "Cambiado por ignorar o borrado.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Sempre acendido cando o cartafol é de tipo \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Un comando externo xestiona as versións. Ten que eliminar o ficheiro do cartafol compartido. Si a ruta ao aplicativo contén espazos, deberían ir acotados.",
"Anonymous Usage Reporting": "Informe anónimo de uso",
"Anonymous usage report format has changed. Would you like to move to the new format?": "O formato do informe de uso anónimo cambiou. Quere usar o novo formato?",
@@ -52,6 +53,7 @@
"Body:": "Corpo:",
"Bugs": "Erros",
"Cancel": "Cancelar",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Non se pode activar cando o cartafol é de tipo \"{{foldertype}}\".",
"Changelog": "Rexistro de cambios",
"Clean out after": "Limpar despois",
"Cleaning Versions": "Limpando Versións",
@@ -80,6 +82,7 @@
"Custom Range": "Rango personalizado",
"Danger!": "Perigo!",
"Database Location": "Localización da Base de Datos",
"Debug": "Depurar",
"Debugging Facilities": "Ferramentas de depuración",
"Default": "Predeterminado",
"Default Configuration": "Configuración Predeterminada",
@@ -140,6 +143,7 @@
"Enables sending extended attributes to other devices, and applying incoming extended attributes. May require running with elevated privileges.": "Activa o envío de atributos extendidos a outros dispositivos, e aplicar os atributos extendidos recibidos. Podería requerir a execución con privilexios elevados.",
"Enables sending extended attributes to other devices, but not applying incoming extended attributes. This can have a significant performance impact. Always enabled when \"Sync Extended Attributes\" is enabled.": "Activa o envío de atributos extendidos a outros dispositivos, pero non aplica atributos extendidos que se reciben. Isto podería afectar significativamente ao rendemento. Sempre está activado cando «Sincr Atributos Extendidos\" está activado.",
"Enables sending ownership information to other devices, and applying incoming ownership information. Typically requires running with elevated privileges.": "Activa o envío de información sobre a propiedade a outros dispositivos, e aplica a información sobre a propiedade cando se recibe. Normalmente require a execución con privilexios elevados.",
"Enables sending ownership information to other devices, but not applying incoming ownership information. This can have a significant performance impact. Always enabled when \"Sync Ownership\" is enabled.": "Activa o envío a outros dispositivos de información sobre a propiedade, pero non aplica información entrante sobre a propiedade. Isto pode afectar en gran medida ao rendemento. Está sempre activado cando \"Sincronización da propiedade\" está activada.",
"Enter a non-negative number (e.g., \"2.35\") and select a unit. Percentages are as part of the total disk size.": "Introduza un número non negativo (por exemplo, \"2.35\") e seleccione unha unidade. As porcentaxes son como partes totais do tamaño do disco.",
"Enter a non-privileged port number (1024 - 65535).": "Introduza un número de porto non privilexiado (1024-65535).",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduza direccións separadas por comas (\"tcp://ip:porto\", \"tcp://host:porto\") ou \"dynamic\" para realizar o descubrimento automático da dirección.",

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"context"
"database/sql"
"embed"
"io/fs"
@@ -25,7 +26,11 @@ import (
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 4
const (
currentSchemaVersion = 5
applicationIDMain = 0x53546d6e // "STmn", Syncthing main database
applicationIDFolder = 0x53546664 // "STfd", Syncthing folder database
)
//go:embed sql/**
var embedded embed.FS
@@ -83,7 +88,31 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
},
}
tx, err := db.sql.Beginx()
// Create a specific connection for the schema setup and migration to
// run in. We do this because we need to disable foreign keys for the
// duration, which is a thing that needs to happen outside of a
// transaction and affects the connection it's run on. So we need to a)
// make sure all our commands run on this specific connection (which the
// transaction accomplishes naturally) and b) make sure these pragmas
// don't leak to anyone else afterwards.
ctx := context.TODO()
conn, err := db.sql.Connx(ctx)
if err != nil {
return nil, wrap(err)
}
defer func() {
_, _ = conn.ExecContext(ctx, "PRAGMA foreign_keys = ON")
_, _ = conn.ExecContext(ctx, "PRAGMA legacy_alter_table = OFF")
conn.Close()
}()
if _, err := conn.ExecContext(ctx, "PRAGMA foreign_keys = OFF"); err != nil {
return nil, wrap(err)
}
if _, err := conn.ExecContext(ctx, "PRAGMA legacy_alter_table = ON"); err != nil {
return nil, wrap(err)
}
tx, err := conn.BeginTxx(ctx, nil)
if err != nil {
return nil, wrap(err)
}
@@ -120,6 +149,22 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
return nil, wrap(err)
}
}
// Run the initial schema scripts once more. This is generally a
// no-op. However, dropping a table removes associated triggers etc,
// and that's a thing we sometimes do in migrations. To avoid having
// to repeat the setup of associated triggers and indexes in the
// migration, we re-run the initial schema scripts.
for _, script := range schemaScripts {
if err := db.runScripts(tx, script); err != nil {
return nil, wrap(err)
}
}
// Finally, ensure nothing we've done along the way has violated key integrity.
if _, err := conn.ExecContext(ctx, "PRAGMA foreign_key_check"); err != nil {
return nil, wrap(err)
}
}
// Set the current schema version, if not already set
@@ -267,7 +312,12 @@ nextScript:
// also statement-internal semicolons in the triggers.
for _, stmt := range strings.Split(string(bs), "\n;") {
if _, err := tx.Exec(s.expandTemplateVars(stmt)); err != nil {
return wrap(err, stmt)
if strings.Contains(stmt, "syncthing:ignore-failure") {
// We're ok with this failing. Just note it.
slog.Debug("Script failed, but with ignore-failure annotation", slog.String("script", scr), slogutil.Error(wrap(err, stmt)))
} else {
return wrap(err, stmt)
}
}
}
}

View File

@@ -8,11 +8,13 @@ package sqlite
import (
"fmt"
"os"
"testing"
"time"
"github.com/syncthing/syncthing/internal/timeutil"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
)
@@ -223,7 +225,7 @@ func BenchmarkUpdate(b *testing.B) {
}
func TestBenchmarkDropAllRemote(t *testing.T) {
if testing.Short() {
if testing.Short() || os.Getenv("LONG_TEST") == "" {
t.Skip("slow test")
}
@@ -266,3 +268,61 @@ func TestBenchmarkDropAllRemote(t *testing.T) {
d := time.Since(t0)
t.Log("drop all took", d)
}
func TestBenchmarkSizeManyFilesRemotes(t *testing.T) {
// Reports the database size for a setup with many files and many remote
// devices each announcing every file, with fairly long file names and
// "worst case" version vectors.
if testing.Short() || os.Getenv("LONG_TEST") == "" {
t.Skip("slow test")
}
dir := t.TempDir()
db, err := Open(dir)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
if err := db.Close(); err != nil {
t.Fatal(err)
}
})
// This is equivalent to about 800 GiB in 100k files (i.e., 8 MiB per
// file), shared between 31 devices where each has touched every file.
const numFiles = 1e5
const numRemotes = 30
const numBlocks = 64
const filenameLen = 64
fs := make([]protocol.FileInfo, 1000)
n := 0
seq := 0
for n < numFiles {
for i := range fs {
seq++
fs[i] = genFile(rand.String(filenameLen), numBlocks, seq)
for r := range numRemotes {
fs[i].Version = fs[i].Version.Update(42 + protocol.ShortID(r))
}
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
t.Fatal(err)
}
for r := range numRemotes {
if err := db.Update(folderID, protocol.DeviceID{byte(42 + r)}, fs); err != nil {
t.Fatal(err)
}
}
n += len(fs)
t.Log(n, (numRemotes+1)*n)
}
if err := db.Close(); err != nil {
t.Fatal(err)
}
size := osutil.DirSize(dir)
t.Logf("Total size: %.02f MiB", float64(size)/1024/1024)
}

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"log/slog"
"os"
"path/filepath"
@@ -15,6 +16,7 @@ import (
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
)
const (
@@ -52,8 +54,7 @@ func Open(path string, opts ...Option) (*DB, error) {
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -65,6 +66,8 @@ func Open(path string, opts ...Option) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, maxDBConns, pragmas, schemas, migrations)
if err != nil {
@@ -99,11 +102,10 @@ func Open(path string, opts ...Option) (*DB, error) {
func OpenForMigration(path string) (*DB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -115,6 +117,8 @@ func OpenForMigration(path string) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, 1, pragmas, schemas, migrations)
if err != nil {
@@ -144,3 +148,24 @@ func (s *DB) Close() error {
}
return wrap(s.baseDB.Close())
}
func initTmpDir(path string) {
if build.IsWindows || build.IsDarwin || os.Getenv("SQLITE_TMPDIR") != "" {
// Doesn't use SQLITE_TMPDIR, isn't likely to have a tiny
// ram-backed temp directory, or already set to something.
return
}
// Attempt to override the SQLite temporary directory by setting the
// env var prior to the (first) database being opened and hence
// SQLite becoming initialized. We set the temp dir to the same
// place we store the database, in the hope that there will be
// enough space there for the operations it needs to perform, as
// opposed to /tmp and similar, on some systems.
dbTmpDir := filepath.Join(path, ".tmp")
if err := os.MkdirAll(dbTmpDir, 0o700); err == nil {
os.Setenv("SQLITE_TMPDIR", dbTmpDir)
} else {
slog.Warn("Failed to create temp directory for SQLite", slogutil.FilePath(dbTmpDir), slogutil.Error(err))
}
}

View File

@@ -14,5 +14,5 @@ import (
const (
dbDriver = "sqlite3"
commonOptions = "_fk=true&_rt=true&_cache_size=-65536&_sync=1&_txlock=immediate"
commonOptions = "_fk=true&_rt=true&_sync=1&_txlock=immediate"
)

View File

@@ -15,7 +15,7 @@ import (
const (
dbDriver = "sqlite"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=cache_size(-65536)&_pragma=synchronous(1)"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=synchronous(1)&_txlock=immediate"
)
func init() {

View File

@@ -8,19 +8,28 @@ package sqlite
import (
"context"
"encoding/binary"
"fmt"
"log/slog"
"math/rand"
"strings"
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
)
const (
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"
gcMinChunks = 5
gcChunkSize = 100_000 // approximate number of rows to process in a single gc query
gcMaxRuntime = 5 * time.Minute // max time to spend on gc, per table, per run
)
func (s *DB) Service(maintenanceInterval time.Duration) suture.Service {
@@ -91,16 +100,44 @@ func (s *Service) periodic(ctx context.Context) error {
}
return wrap(s.sdb.forEachFolder(func(fdb *folderDB) error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
// Get the current device sequence, for comparison in the next step.
seq, err := fdb.GetDeviceSequence(protocol.LocalDeviceID)
if err != nil {
return wrap(err)
}
// Get the last successful GC sequence. If it's the same as the
// current sequence, nothing has changed and we can skip the GC
// entirely.
meta := db.NewTyped(fdb, internalMetaPrefix)
if prev, _, err := meta.Int64(lastSuccessfulGCSeqKey); err != nil {
return wrap(err)
} else if seq == prev {
slog.DebugContext(ctx, "Skipping unnecessary GC", "folder", fdb.folderID, "fdb", fdb.baseName)
return nil
}
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
// Run the GC steps, in a function to be able to use a deferred
// unlock.
if err := func() error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
return wrap(err)
}
if err := garbageCollectNamesAndVersions(ctx, fdb); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
}(); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
// Update the successful GC sequence.
return wrap(meta.PutInt64(lastSuccessfulGCSeqKey, seq))
}))
}
@@ -118,8 +155,36 @@ func tidy(ctx context.Context, db *sqlx.DB) error {
return nil
}
func garbageCollectNamesAndVersions(ctx context.Context, fdb *folderDB) error {
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName)
res, err := fdb.stmt(`
DELETE FROM file_names
WHERE NOT EXISTS (SELECT 1 FROM files f WHERE f.name_idx = idx)
`).Exec()
if err != nil {
return wrap(err, "delete names")
}
if aff, err := res.RowsAffected(); err == nil {
l.DebugContext(ctx, "Removed old file names", "affected", aff)
}
res, err = fdb.stmt(`
DELETE FROM file_versions
WHERE NOT EXISTS (SELECT 1 FROM files f WHERE f.version_idx = idx)
`).Exec()
if err != nil {
return wrap(err, "delete versions")
}
if aff, err := res.RowsAffected(); err == nil {
l.DebugContext(ctx, "Removed old file versions", "affected", aff)
}
return nil
}
func garbageCollectOldDeletedLocked(ctx context.Context, fdb *folderDB) error {
l := slog.With("fdb", fdb.baseDB)
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName)
if fdb.deleteRetention <= 0 {
slog.DebugContext(ctx, "Delete retention is infinite, skipping cleanup")
return nil
@@ -171,37 +236,108 @@ func garbageCollectBlocklistsAndBlocksLocked(ctx context.Context, fdb *folderDB)
}
defer tx.Rollback() //nolint:errcheck
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocklists
WHERE NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = blocklists.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocklists")
} else {
slog.DebugContext(ctx, "Blocklist GC", "fdb", fdb.baseName, "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
}
return slog.Int64("rows", rows)
}))
}
// Both blocklists and blocks refer to blocklists_hash from the files table.
for _, table := range []string{"blocklists", "blocks"} {
// Count the number of rows
var rows int64
if err := tx.GetContext(ctx, &rows, `SELECT count(*) FROM `+table); err != nil {
return wrap(err)
}
if res, err := tx.ExecContext(ctx, `
DELETE FROM blocks
WHERE NOT EXISTS (
SELECT 1 FROM blocklists WHERE blocklists.blocklist_hash = blocks.blocklist_hash
)`); err != nil {
return wrap(err, "delete blocks")
} else {
slog.DebugContext(ctx, "Blocks GC", "fdb", fdb.baseName, "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
chunks := max(gcMinChunks, rows/gcChunkSize)
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName, "table", table, "rows", rows, "chunks", chunks)
// Process rows in chunks up to a given time limit. We always use at
// least gcMinChunks chunks, then increase the number as the number of rows
// exceeds gcMinChunks*gcChunkSize.
t0 := time.Now()
for i, br := range randomBlobRanges(int(chunks)) {
if d := time.Since(t0); d > gcMaxRuntime {
l.InfoContext(ctx, "GC was interrupted due to exceeding time limit", "processed", i, "runtime", time.Since(t0))
break
}
return slog.Int64("rows", rows)
}))
// The limit column must be an indexed column with a mostly random distribution of blobs.
// That's the blocklist_hash column for blocklists, and the hash column for blocks.
limitColumn := table + ".blocklist_hash"
if table == "blocks" {
limitColumn = "blocks.hash"
}
q := fmt.Sprintf(`
DELETE FROM %s
WHERE %s AND NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = %s.blocklist_hash
)`, table, br.SQL(limitColumn), table)
if res, err := tx.ExecContext(ctx, q); err != nil {
return wrap(err, "delete from "+table)
} else {
l.DebugContext(ctx, "GC query result", "processed", i, "runtime", time.Since(t0), "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
}
return slog.Int64("rows", rows)
}))
}
}
}
return wrap(tx.Commit())
}
// blobRange defines a range for blob searching. A range is open ended if
// start or end is nil.
type blobRange struct {
start, end []byte
}
// SQL returns the SQL where clause for the given range, e.g.
// `column >= x'49249248' AND column < x'6db6db6c'`
func (r blobRange) SQL(name string) string {
var sb strings.Builder
if r.start != nil {
fmt.Fprintf(&sb, "%s >= x'%x'", name, r.start)
}
if r.start != nil && r.end != nil {
sb.WriteString(" AND ")
}
if r.end != nil {
fmt.Fprintf(&sb, "%s < x'%x'", name, r.end)
}
return sb.String()
}
// randomBlobRanges returns n blobRanges in random order
func randomBlobRanges(n int) []blobRange {
ranges := blobRanges(n)
rand.Shuffle(len(ranges), func(i, j int) { ranges[i], ranges[j] = ranges[j], ranges[i] })
return ranges
}
// blobRanges returns n blobRanges
func blobRanges(n int) []blobRange {
// We use three byte (24 bit) prefixes to get fairly granular ranges and easy bit
// conversions.
rangeSize := (1 << 24) / n
ranges := make([]blobRange, 0, n)
var prev []byte
for i := range n {
var pref []byte
if i < n-1 {
end := (i + 1) * rangeSize
pref = intToBlob(end)
}
ranges = append(ranges, blobRange{prev, pref})
prev = pref
}
return ranges
}
func intToBlob(n int) []byte {
var pref [4]byte
binary.BigEndian.PutUint32(pref[:], uint32(n)) //nolint:gosec
// first byte is always zero and not part of the range
return pref[1:]
}

View File

@@ -0,0 +1,37 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"bytes"
"fmt"
"strings"
"testing"
)
func TestBlobRange(t *testing.T) {
exp := `
hash < x'249249'
hash >= x'249249' AND hash < x'492492'
hash >= x'492492' AND hash < x'6db6db'
hash >= x'6db6db' AND hash < x'924924'
hash >= x'924924' AND hash < x'b6db6d'
hash >= x'b6db6d' AND hash < x'db6db6'
hash >= x'db6db6'
`
ranges := blobRanges(7)
buf := new(bytes.Buffer)
for _, r := range ranges {
fmt.Fprintln(buf, r.SQL("hash"))
}
if strings.TrimSpace(buf.String()) != strings.TrimSpace(exp) {
t.Log(buf.String())
t.Error("unexpected output")
}
}

View File

@@ -84,7 +84,7 @@ func (s *folderDB) needSizeRemote(device protocol.DeviceID) (db.Counts, error) {
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND NOT EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
WHERE f.name_idx = g.name_idx AND f.version_idx = g.version_idx AND d.device_id = ?
)
GROUP BY g.type, g.local_flags, g.deleted
@@ -94,7 +94,7 @@ func (s *folderDB) needSizeRemote(device protocol.DeviceID) (db.Counts, error) {
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
WHERE f.name_idx = g.name_idx AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
)
GROUP BY g.type, g.local_flags, g.deleted
`).Select(&res, device.String(),

View File

@@ -27,7 +27,8 @@ func (s *folderDB) GetGlobalFile(file string) (protocol.FileInfo, bool, error) {
SELECT fi.fiprotobuf, bl.blprotobuf FROM fileinfos fi
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
WHERE f.name = ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name = ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
`).Get(&ind, file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
@@ -49,8 +50,9 @@ func (s *folderDB) GetGlobalAvailability(file string) ([]protocol.DeviceID, erro
err := s.stmt(`
SELECT d.device_id FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
INNER JOIN files g ON g.version = f.version AND g.name = f.name
WHERE g.name = ? AND g.local_flags & {{.FlagLocalGlobal}} != 0 AND f.device_idx != {{.LocalDeviceIdx}}
INNER JOIN files g ON g.version_idx = f.version_idx AND g.name_idx = f.name_idx
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name = ? AND g.local_flags & {{.FlagLocalGlobal}} != 0 AND f.device_idx != {{.LocalDeviceIdx}}
ORDER BY d.device_id
`).Select(&devStrs, file)
if errors.Is(err, sql.ErrNoRows) {
@@ -74,9 +76,10 @@ func (s *folderDB) GetGlobalAvailability(file string) ([]protocol.DeviceID, erro
func (s *folderDB) AllGlobalFiles() (iter.Seq[db.FileMetadata], func() error) {
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
ORDER BY n.name
`).Queryx())
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
@@ -93,9 +96,10 @@ func (s *folderDB) AllGlobalFilesPrefix(prefix string) (iter.Seq[db.FileMetadata
end := prefixEnd(prefix)
it, errFn := iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
WHERE f.name >= ? AND f.name < ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY f.name
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE n.name >= ? AND n.name < ? AND f.local_flags & {{.FlagLocalGlobal}} != 0
ORDER BY n.name
`).Queryx(prefix, end))
return itererr.Map(it, errFn, func(m db.FileMetadata) (db.FileMetadata, error) {
m.Name = osutil.NativeFilename(m.Name)
@@ -109,7 +113,7 @@ func (s *folderDB) AllNeededGlobalFiles(device protocol.DeviceID, order config.P
case config.PullOrderRandom:
selectOpts = "ORDER BY RANDOM()"
case config.PullOrderAlphabetic:
selectOpts = "ORDER BY g.name ASC"
selectOpts = "ORDER BY n.name ASC"
case config.PullOrderSmallestFirst:
selectOpts = "ORDER BY g.size ASC"
case config.PullOrderLargestFirst:
@@ -137,9 +141,10 @@ func (s *folderDB) AllNeededGlobalFiles(device protocol.DeviceID, order config.P
func (s *folderDB) neededGlobalFilesLocal(selectOpts string) (iter.Seq[protocol.FileInfo], func() error) {
// Select all the non-ignored files with the need bit set.
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalIgnored}} = 0 AND g.local_flags & {{.FlagLocalNeeded}} != 0
` + selectOpts).Queryx())
return itererr.Map(it, errFn, indirectFI.FileInfo)
@@ -155,24 +160,26 @@ func (s *folderDB) neededGlobalFilesRemote(device protocol.DeviceID, selectOpts
// non-deleted and valid remote file (of any version)
it, errFn := iterStructs[indirectFI](s.stmt(`
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND NOT g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND NOT EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND f.version = g.version AND d.device_id = ?
WHERE f.name_idx = g.name_idx AND f.version_idx = g.version_idx AND d.device_id = ?
)
UNION ALL
SELECT fi.fiprotobuf, bl.blprotobuf, g.name, g.size, g.modified FROM fileinfos fi
SELECT fi.fiprotobuf, bl.blprotobuf, n.name, g.size, g.modified FROM fileinfos fi
INNER JOIN files g on fi.sequence = g.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = g.blocklist_hash
INNER JOIN file_names n ON g.name_idx = n.idx
WHERE g.local_flags & {{.FlagLocalGlobal}} != 0 AND g.deleted AND g.local_flags & {{.LocalInvalidFlags}} = 0 AND EXISTS (
SELECT 1 FROM FILES f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name = g.name AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
WHERE f.name_idx = g.name_idx AND d.device_id = ? AND NOT f.deleted AND f.local_flags & {{.LocalInvalidFlags}} = 0
)
`+selectOpts).Queryx(
device.String(),

View File

@@ -32,7 +32,8 @@ func (s *folderDB) GetDeviceFile(device protocol.DeviceID, file string) (protoco
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON f.device_idx = d.idx
WHERE d.device_id = ? AND f.name = ?
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE d.device_id = ? AND n.name = ?
`).Get(&ind, device.String(), file)
if errors.Is(err, sql.ErrNoRows) {
return protocol.FileInfo{}, false, nil
@@ -87,14 +88,16 @@ func (s *folderDB) AllLocalFilesWithPrefix(device protocol.DeviceID, prefix stri
INNER JOIN files f on fi.sequence = f.sequence
LEFT JOIN blocklists bl ON bl.blocklist_hash = f.blocklist_hash
INNER JOIN devices d ON d.idx = f.device_idx
WHERE d.device_id = ? AND f.name >= ? AND f.name < ?
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE d.device_id = ? AND n.name >= ? AND n.name < ?
`, device.String(), prefix, end))
return itererr.Map(it, errFn, indirectFI.FileInfo)
}
func (s *folderDB) AllLocalFilesWithBlocksHash(h []byte) (iter.Seq[db.FileMetadata], func() error) {
return iterStructs[db.FileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
WHERE f.device_idx = {{.LocalDeviceIdx}} AND f.blocklist_hash = ?
`).Queryx(h))
}
@@ -104,7 +107,8 @@ func (s *folderDB) AllLocalBlocksWithHash(hash []byte) (iter.Seq[db.BlockMapEntr
// & blocklists is deferred (garbage collected) while the files list is
// not. This filters out blocks that are in fact deleted.
return iterStructs[db.BlockMapEntry](s.stmt(`
SELECT f.blocklist_hash as blocklisthash, b.idx as blockindex, b.offset, b.size, f.name as filename FROM files f
SELECT f.blocklist_hash as blocklisthash, b.idx as blockindex, b.offset, b.size, n.name as filename FROM files f
INNER JOIN file_names n ON f.name_idx = n.idx
LEFT JOIN blocks b ON f.blocklist_hash = b.blocklist_hash
WHERE f.device_idx = {{.LocalDeviceIdx}} AND b.hash = ?
`).Queryx(hash))
@@ -170,10 +174,12 @@ func (s *folderDB) DebugFilePattern(out io.Writer, name string) error {
}
name = "%" + name + "%"
res := itererr.Zip(iterStructs[hashFileMetadata](s.stmt(`
SELECT f.sequence, f.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags, f.version, f.blocklist_hash as blocklisthash, d.device_id as deviceid FROM files f
SELECT f.sequence, n.name, f.type, f.modified as modnanos, f.size, f.deleted, f.local_flags as localflags, v.version, f.blocklist_hash as blocklisthash, d.device_id as deviceid FROM files f
INNER JOIN devices d ON d.idx = f.device_idx
WHERE f.name LIKE ?
ORDER BY f.name, f.device_idx
INNER JOIN file_names n ON n.idx = f.name_idx
INNER JOIN file_versions v ON v.idx = f.version_idx
WHERE n.name LIKE ?
ORDER BY n.name, f.device_idx
`).Queryx(name)))
delMap := map[bool]string{

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"time"
"github.com/syncthing/syncthing/lib/protocol"
@@ -25,8 +26,7 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",
@@ -64,11 +64,10 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
func openFolderDBForMigration(folder, path string, deleteRetention time.Duration) (*folderDB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",
@@ -96,16 +95,13 @@ func openFolderDBForMigration(folder, path string, deleteRetention time.Duration
func (s *folderDB) deviceIdxLocked(deviceID protocol.DeviceID) (int64, error) {
devStr := deviceID.String()
if _, err := s.stmt(`
INSERT OR IGNORE INTO devices(device_id)
VALUES (?)
`).Exec(devStr); err != nil {
return 0, wrap(err)
}
var idx int64
if err := s.stmt(`
SELECT idx FROM devices
WHERE device_id = ?
INSERT INTO devices(device_id)
VALUES (?)
ON CONFLICT(device_id) DO UPDATE
SET device_id = excluded.device_id
RETURNING idx
`).Get(&idx, devStr); err != nil {
return 0, wrap(err)
}

View File

@@ -46,9 +46,33 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
defer tx.Rollback() //nolint:errcheck
txp := &txPreparedStmts{Tx: tx}
//nolint:sqlclosecheck
insertNameStmt, err := txp.Preparex(`
INSERT INTO file_names(name)
VALUES (?)
ON CONFLICT(name) DO UPDATE
SET name = excluded.name
RETURNING idx
`)
if err != nil {
return wrap(err, "prepare insert name")
}
//nolint:sqlclosecheck
insertVersionStmt, err := txp.Preparex(`
INSERT INTO file_versions (version)
VALUES (?)
ON CONFLICT(version) DO UPDATE
SET version = excluded.version
RETURNING idx
`)
if err != nil {
return wrap(err, "prepare insert version")
}
//nolint:sqlclosecheck
insertFileStmt, err := txp.Preparex(`
INSERT OR REPLACE INTO files (device_idx, remote_sequence, name, type, modified, size, version, deleted, local_flags, blocklist_hash)
INSERT OR REPLACE INTO files (device_idx, remote_sequence, type, modified, size, deleted, local_flags, blocklist_hash, name_idx, version_idx)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
RETURNING sequence
`)
@@ -102,8 +126,19 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
prevRemoteSeq = f.Sequence
remoteSeq = &f.Sequence
}
var nameIdx int64
if err := insertNameStmt.Get(&nameIdx, f.Name); err != nil {
return wrap(err, "insert name")
}
var versionIdx int64
if err := insertVersionStmt.Get(&versionIdx, f.Version.String()); err != nil {
return wrap(err, "insert version")
}
var localSeq int64
if err := insertFileStmt.Get(&localSeq, deviceIdx, remoteSeq, f.Name, f.Type, f.ModTime().UnixNano(), f.Size, f.Version.String(), f.IsDeleted(), f.LocalFlags, blockshash); err != nil {
if err := insertFileStmt.Get(&localSeq, deviceIdx, remoteSeq, f.Type, f.ModTime().UnixNano(), f.Size, f.IsDeleted(), f.LocalFlags, blockshash, nameIdx, versionIdx); err != nil {
return wrap(err, "insert file")
}
@@ -246,7 +281,9 @@ func (s *folderDB) DropFilesNamed(device protocol.DeviceID, names []string) erro
query, args, err := sqlx.In(`
DELETE FROM files
WHERE device_idx = ? AND name IN (?)
WHERE device_idx = ? AND name_idx IN (
SELECT idx FROM file_names WHERE name IN (?)
)
`, deviceIdx, names)
if err != nil {
return wrap(err)
@@ -299,12 +336,13 @@ func (s *folderDB) recalcGlobalForFolder(txp *txPreparedStmts) error {
// recalculate.
//nolint:sqlclosecheck
namesStmt, err := txp.Preparex(`
SELECT f.name FROM files f
SELECT n.name FROM files f
INNER JOIN file_names n ON n.idx = f.name_idx
WHERE NOT EXISTS (
SELECT 1 FROM files g
WHERE g.name = f.name AND g.local_flags & ? != 0
WHERE g.name_idx = f.name_idx AND g.local_flags & ? != 0
)
GROUP BY name
GROUP BY n.name
`)
if err != nil {
return wrap(err)
@@ -329,11 +367,13 @@ func (s *folderDB) recalcGlobalForFolder(txp *txPreparedStmts) error {
func (s *folderDB) recalcGlobalForFile(txp *txPreparedStmts, file string) error {
//nolint:sqlclosecheck
selStmt, err := txp.Preparex(`
SELECT name, device_idx, sequence, modified, version, deleted, local_flags FROM files
WHERE name = ?
SELECT n.name, f.device_idx, f.sequence, f.modified, v.version, f.deleted, f.local_flags FROM files f
INNER JOIN file_versions v ON v.idx = f.version_idx
INNER JOIN file_names n ON n.idx = f.name_idx
WHERE n.name = ?
`)
if err != nil {
return wrap(err)
return wrap(err, "prepare select")
}
es, err := itererr.Collect(iterStructs[fileRow](selStmt.Queryx(file)))
if err != nil {
@@ -389,10 +429,10 @@ func (s *folderDB) recalcGlobalForFile(txp *txPreparedStmts, file string) error
//nolint:sqlclosecheck
upStmt, err = txp.Preparex(`
UPDATE files SET local_flags = local_flags & ?
-WHERE name = ? AND sequence != ? AND local_flags & ? != 0
+WHERE name_idx = (SELECT idx FROM file_names WHERE name = ?) AND sequence != ? AND local_flags & ? != 0
`)
if err != nil {
-return wrap(err)
+return wrap(err, "prepare update")
}
if _, err := upStmt.Exec(^(protocol.FlagLocalNeeded | protocol.FlagLocalGlobal), global.Name, global.Sequence, protocol.FlagLocalNeeded|protocol.FlagLocalGlobal); err != nil {
return wrap(err)
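
The ON CONFLICT ... DO UPDATE SET name = excluded.name RETURNING idx upsert above interns a string and returns its integer key in a single round trip, whether or not the row already existed; the otherwise pointless DO UPDATE is what makes a pre-existing row visible to RETURNING. A minimal sketch of the same pattern with plain database/sql (the helper name is invented for illustration, and RETURNING needs SQLite 3.35 or later):

    import "database/sql"

    // internName maps a file name to its integer index in file_names,
    // inserting it on first use. The no-op DO UPDATE clause makes an
    // already existing row visible to the RETURNING clause.
    func internName(tx *sql.Tx, name string) (int64, error) {
        var idx int64
        err := tx.QueryRow(`
            INSERT INTO file_names(name) VALUES (?)
            ON CONFLICT(name) DO UPDATE SET name = excluded.name
            RETURNING idx
        `, name).Scan(&idx)
        return idx, err
    }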

View File

@@ -0,0 +1,53 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Grab all unique names into the names table
INSERT INTO file_names (idx, name) SELECT DISTINCT null, name FROM files
;
-- Grab all unique versions into the versions table
INSERT INTO file_versions (idx, version) SELECT DISTINCT null, version FROM files
;
-- Create the new files table
DROP TABLE IF EXISTS files_v5
;
CREATE TABLE files_v5 (
device_idx INTEGER NOT NULL,
sequence INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
remote_sequence INTEGER,
name_idx INTEGER NOT NULL, -- changed
type INTEGER NOT NULL,
modified INTEGER NOT NULL,
size INTEGER NOT NULL,
version_idx INTEGER NOT NULL, -- changed
deleted INTEGER NOT NULL,
local_flags INTEGER NOT NULL,
blocklist_hash BLOB,
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE,
FOREIGN KEY(name_idx) REFERENCES file_names(idx), -- added
FOREIGN KEY(version_idx) REFERENCES file_versions(idx) -- added
) STRICT
;
-- Populate the new files table and move it in place
INSERT INTO files_v5
SELECT f.device_idx, f.sequence, f.remote_sequence, n.idx as name_idx, f.type, f.modified, f.size, v.idx as version_idx, f.deleted, f.local_flags, f.blocklist_hash
FROM files f
INNER JOIN file_names n ON n.name = f.name
INNER JOIN file_versions v ON v.version = f.version
;
DROP TABLE files
;
ALTER TABLE files_v5 RENAME TO files
;
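
Since SQLite's ALTER TABLE cannot change existing columns or their constraints, the migration uses the standard rebuild recipe: populate the lookup tables, create files_v5 with the new shape, backfill it by joining the old rows against the lookups, then drop and rename. A hedged sketch of how such a multi-statement script can be applied atomically (the project's actual migration runner is not shown in this diff and may differ):

    import (
        "database/sql"
        "fmt"
    )

    // applyMigration executes each statement of a migration script in
    // one transaction; SQLite DDL is transactional, so a failure rolls
    // the schema back to its previous state.
    func applyMigration(db *sql.DB, stmts []string) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback() //nolint:errcheck
        for _, stmt := range stmts {
            if _, err := tx.Exec(stmt); err != nil {
                return fmt.Errorf("migration: %w", err)
            }
        }
        return tx.Commit()
    }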

View File

@@ -6,9 +6,8 @@
-- Schema migrations hold the list of historical migrations applied
CREATE TABLE IF NOT EXISTS schemamigrations (
-schema_version INTEGER NOT NULL,
+schema_version INTEGER NOT NULL PRIMARY KEY,
applied_at INTEGER NOT NULL, -- unix nanos
-syncthing_version TEXT NOT NULL COLLATE BINARY,
-PRIMARY KEY(schema_version)
+syncthing_version TEXT NOT NULL COLLATE BINARY
) STRICT
;

View File

@@ -9,5 +9,5 @@
CREATE TABLE IF NOT EXISTS kv (
key TEXT NOT NULL PRIMARY KEY COLLATE BINARY,
value BLOB NOT NULL
-) STRICT
+) STRICT, WITHOUT ROWID
;

View File

@@ -25,15 +25,27 @@ CREATE TABLE IF NOT EXISTS files (
device_idx INTEGER NOT NULL, -- actual device ID or LocalDeviceID
sequence INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, -- our local database sequence, for each and every entry
remote_sequence INTEGER, -- remote device's sequence number, null for local or synthetic entries
-name TEXT NOT NULL COLLATE BINARY,
+name_idx INTEGER NOT NULL,
type INTEGER NOT NULL, -- protocol.FileInfoType
modified INTEGER NOT NULL, -- Unix nanos
size INTEGER NOT NULL,
-version TEXT NOT NULL COLLATE BINARY,
+version_idx INTEGER NOT NULL,
deleted INTEGER NOT NULL, -- boolean
local_flags INTEGER NOT NULL,
blocklist_hash BLOB, -- null when there are no blocks
-FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
+FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE,
+FOREIGN KEY(name_idx) REFERENCES file_names(idx),
+FOREIGN KEY(version_idx) REFERENCES file_versions(idx)
) STRICT
;
+CREATE TABLE IF NOT EXISTS file_names (
+idx INTEGER NOT NULL PRIMARY KEY,
+name TEXT NOT NULL UNIQUE COLLATE BINARY
+) STRICT
+;
+CREATE TABLE IF NOT EXISTS file_versions (
+idx INTEGER NOT NULL PRIMARY KEY,
+version TEXT NOT NULL UNIQUE COLLATE BINARY
+) STRICT
+;
-- FileInfos store the actual protobuf object. We do this separately to keep
@@ -49,11 +61,17 @@ CREATE UNIQUE INDEX IF NOT EXISTS files_remote_sequence ON files (device_idx, re
WHERE remote_sequence IS NOT NULL
;
-- There can be only one file per folder, device, and name
-CREATE UNIQUE INDEX IF NOT EXISTS files_device_name ON files (device_idx, name)
-;
--- We want to be able to look up & iterate files based on just folder and name
-CREATE INDEX IF NOT EXISTS files_name_only ON files (name)
+CREATE UNIQUE INDEX IF NOT EXISTS files_device_name ON files (device_idx, name_idx)
;
-- We want to be able to look up & iterate files based on blocks hash
CREATE INDEX IF NOT EXISTS files_blocklist_hash_only ON files (blocklist_hash, device_idx) WHERE blocklist_hash IS NOT NULL
;
+-- We need to look by name_idx or version_idx for garbage collection.
+-- This will fail pre-migration for v4 schemas, which is fine.
+-- syncthing:ignore-failure
+CREATE INDEX IF NOT EXISTS files_name_idx_only ON files (name_idx)
+;
+-- This will fail pre-migration for v4 schemas, which is fine.
+-- syncthing:ignore-failure
+CREATE INDEX IF NOT EXISTS files_version_idx_only ON files (version_idx)
+;
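
The two trailing single-column indexes exist so that garbage collection can find orphaned rows in the new lookup tables without scanning all of files. A sketch of what such a sweep might look like (illustrative; the actual GC query is not part of this diff):

    // pruneNames deletes file_names rows no longer referenced by any
    // files row; files_name_idx_only makes the NOT EXISTS probe cheap.
    func pruneNames(tx *sql.Tx) error {
        _, err := tx.Exec(`
            DELETE FROM file_names
            WHERE NOT EXISTS (
                SELECT 1 FROM files f WHERE f.name_idx = file_names.idx
            )
        `)
        return err
    }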

View File

@@ -6,10 +6,9 @@
-- indexids holds the index ID and maximum sequence for a given device and folder
CREATE TABLE IF NOT EXISTS indexids (
-device_idx INTEGER NOT NULL,
+device_idx INTEGER NOT NULL PRIMARY KEY,
index_id TEXT NOT NULL COLLATE BINARY,
sequence INTEGER NOT NULL DEFAULT 0,
-PRIMARY KEY(device_idx),
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT, WITHOUT ROWID
;

View File

@@ -6,9 +6,8 @@
--- Backing for the MtimeFS
CREATE TABLE IF NOT EXISTS mtimes (
-name TEXT NOT NULL,
+name TEXT NOT NULL PRIMARY KEY,
ondisk INTEGER NOT NULL, -- unix nanos
-virtual INTEGER NOT NULL, -- unix nanos
-PRIMARY KEY(name)
+virtual INTEGER NOT NULL -- unix nanos
) STRICT, WITHOUT ROWID
;

View File

@@ -18,14 +18,30 @@ import (
"time"
)
-type formattingHandler struct {
-attrs []slog.Attr
-groups []string
+type LineFormat struct {
+TimestampFormat string
+LevelString bool
+LevelSyslog bool
+}
+type formattingOptions struct {
+LineFormat
out io.Writer
recs []*lineRecorder
timeOverride time.Time
}
+type formattingHandler struct {
+attrs []slog.Attr
+groups []string
+opts *formattingOptions
+}
+func SetLineFormat(f LineFormat) {
+globalFormatter.LineFormat = f
+}
var _ slog.Handler = (*formattingHandler)(nil)
func (h *formattingHandler) Enabled(context.Context, slog.Level) bool {
@@ -83,19 +99,19 @@ func (h *formattingHandler) Handle(_ context.Context, rec slog.Record) error {
}
line := Line{
-When: cmp.Or(h.timeOverride, rec.Time),
+When: cmp.Or(h.opts.timeOverride, rec.Time),
Message: sb.String(),
Level: rec.Level,
}
// If there is a recorder, record the line.
-for _, rec := range h.recs {
+for _, rec := range h.opts.recs {
rec.record(line)
}
// If there's an output, print the line.
-if h.out != nil {
-_, _ = line.WriteTo(h.out)
+if h.opts.out != nil {
+_, _ = line.WriteTo(h.opts.out, h.opts.LineFormat)
}
return nil
}
@@ -143,11 +159,9 @@ func (h *formattingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
}
}
return &formattingHandler{
-attrs: append(h.attrs, attrs...),
-groups: h.groups,
-recs: h.recs,
-out: h.out,
-timeOverride: h.timeOverride,
+attrs: append(h.attrs, attrs...),
+groups: h.groups,
+opts: h.opts,
}
}
@@ -156,11 +170,9 @@ func (h *formattingHandler) WithGroup(name string) slog.Handler {
return h
}
return &formattingHandler{
-attrs: h.attrs,
-groups: append([]string{name}, h.groups...),
-recs: h.recs,
-out: h.out,
-timeOverride: h.timeOverride,
+attrs: h.attrs,
+groups: append([]string{name}, h.groups...),
+opts: h.opts,
}
}
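
The net effect of this refactor is that format state lives in one shared formattingOptions value instead of being copied into every derived handler, so SetLineFormat can change the output format process-wide at runtime. Assuming the package is imported as slogutil, usage might look like:

    // Full timestamps, keep the textual level word.
    slogutil.SetLineFormat(slogutil.LineFormat{
        TimestampFormat: time.RFC3339,
        LevelString:     true,
    })

    // Under a syslog-style supervisor: numeric <priority> prefix,
    // no timestamp, no level word.
    slogutil.SetLineFormat(slogutil.LineFormat{LevelSyslog: true})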

View File

@@ -17,8 +17,11 @@ import (
func TestFormattingHandler(t *testing.T) {
buf := new(bytes.Buffer)
h := &formattingHandler{
-out: buf,
-timeOverride: time.Unix(1234567890, 0).In(time.UTC),
+opts: &formattingOptions{
+LineFormat: DefaultLineFormat,
+out: buf,
+timeOverride: time.Unix(1234567890, 0).In(time.UTC),
+},
}
l := slog.New(h).With("a", "a")

View File

@@ -9,6 +9,7 @@ package slogutil
import (
"log/slog"
"maps"
"strings"
"sync"
)
@@ -39,6 +40,24 @@ func SetDefaultLevel(level slog.Level) {
globalLevels.SetDefault(level)
}
func SetLevelOverrides(sttrace string) {
pkgs := strings.Split(sttrace, ",")
for _, pkg := range pkgs {
pkg = strings.TrimSpace(pkg)
if pkg == "" {
continue
}
level := slog.LevelDebug
if cutPkg, levelStr, ok := strings.Cut(pkg, ":"); ok {
pkg = cutPkg
if err := level.UnmarshalText([]byte(levelStr)); err != nil {
slog.Warn("Bad log level requested in STTRACE", slog.String("pkg", pkg), slog.String("level", levelStr), Error(err))
}
}
globalLevels.Set(pkg, level)
}
}
type levelTracker struct {
mut sync.RWMutex
defLevel slog.Level
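
SetLevelOverrides takes over the comma-separated STTRACE syntax previously parsed in this package's init (see the logging setup diff below): each entry is a package name, optionally suffixed with :LEVEL, defaulting to debug. Note that when the level string fails to parse, the code warns but still installs the override at the debug default. For example:

    // "model" at the debug default, "scanner" only at WARN and above.
    slogutil.SetLevelOverrides("model, scanner:WARN")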

View File

@@ -7,6 +7,7 @@
package slogutil
import (
"bytes"
"encoding/json"
"fmt"
"io"
@@ -22,13 +23,22 @@ type Line struct {
Level slog.Level `json:"level"`
}
-func (l *Line) WriteTo(w io.Writer) (int64, error) {
-n, err := fmt.Fprintf(w, "%s %s %s\n", l.timeStr(), l.levelStr(), l.Message)
-return int64(n), err
-}
-func (l *Line) timeStr() string {
-return l.When.Format("2006-01-02 15:04:05")
+func (l *Line) WriteTo(w io.Writer, f LineFormat) (int64, error) {
+buf := new(bytes.Buffer)
+if f.LevelSyslog {
+_, _ = fmt.Fprintf(buf, "<%d>", l.syslogPriority())
+}
+if f.TimestampFormat != "" {
+buf.WriteString(l.When.Format(f.TimestampFormat))
+buf.WriteRune(' ')
+}
+if f.LevelString {
+buf.WriteString(l.levelStr())
+buf.WriteRune(' ')
+}
+buf.WriteString(l.Message)
+buf.WriteRune('\n')
+return buf.WriteTo(w)
}
func (l *Line) levelStr() string {
@@ -51,6 +61,19 @@ func (l *Line) levelStr() string {
}
}
func (l *Line) syslogPriority() int {
switch {
case l.Level < slog.LevelInfo:
return 7
case l.Level < slog.LevelWarn:
return 6
case l.Level < slog.LevelError:
return 4
default:
return 3
}
}
func (l *Line) MarshalJSON() ([]byte, error) {
// Custom marshal to get short level strings instead of default JSON serialisation
return json.Marshal(map[string]any{
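
With the format flags in place, a single Line can render several ways; syslogPriority maps slog levels onto the standard syslog severities (7 debug, 6 informational, 4 warning, 3 error), which is what a syslog- or journald-consuming supervisor expects at the front of each line. Roughly (the exact level token depends on levelStr, not shown in full here):

    l := slogutil.Line{When: time.Now(), Message: "ready", Level: slog.LevelInfo}

    _, _ = l.WriteTo(os.Stdout, slogutil.DefaultLineFormat)
    // e.g. "2025-09-12 09:41:47 INF ready"

    _, _ = l.WriteTo(os.Stdout, slogutil.LineFormat{LevelSyslog: true})
    // "<6>ready"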

View File

@@ -10,20 +10,26 @@ import (
"io"
"log/slog"
"os"
"strings"
"time"
)
var (
-GlobalRecorder = &lineRecorder{level: -1000}
-ErrorRecorder = &lineRecorder{level: slog.LevelError}
-globalLevels = &levelTracker{
+GlobalRecorder = &lineRecorder{level: -1000}
+ErrorRecorder = &lineRecorder{level: slog.LevelError}
+DefaultLineFormat = LineFormat{
+TimestampFormat: time.DateTime,
+LevelString: true,
+}
+globalLevels = &levelTracker{
levels: make(map[string]slog.Level),
descrs: make(map[string]string),
}
-slogDef = slog.New(&formattingHandler{
-recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
-out: logWriter(),
-})
+globalFormatter = &formattingOptions{
+LineFormat: DefaultLineFormat,
+recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
+out: logWriter(),
+}
+slogDef = slog.New(&formattingHandler{opts: globalFormatter})
)
func logWriter() io.Writer {
@@ -38,21 +44,4 @@ func logWriter() io.Writer {
func init() {
slog.SetDefault(slogDef)
-// Handle legacy STTRACE var
-pkgs := strings.Split(os.Getenv("STTRACE"), ",")
-for _, pkg := range pkgs {
-pkg = strings.TrimSpace(pkg)
-if pkg == "" {
-continue
-}
-level := slog.LevelDebug
-if cutPkg, levelStr, ok := strings.Cut(pkg, ":"); ok {
-pkg = cutPkg
-if err := level.UnmarshalText([]byte(levelStr)); err != nil {
-slog.Warn("Bad log level requested in STTRACE", slog.String("pkg", pkg), slog.String("level", levelStr), Error(err))
-}
-}
-globalLevels.Set(pkg, level)
-}
}

View File

@@ -25,9 +25,10 @@ import (
)
const (
-maxSessionLifetime = 7 * 24 * time.Hour
-maxActiveSessions = 25
-randomTokenLength = 64
+maxSessionLifetime = 7 * 24 * time.Hour
+maxActiveSessions = 25
+randomTokenLength = 64
+maxLoginRequestSize = 1 << 10 // one kibibyte for username+password
)
func emitLoginAttempt(success bool, username string, r *http.Request, evLogger events.Logger) {
@@ -182,7 +183,7 @@ func (m *basicAuthAndSessionMiddleware) passwordAuthHandler(w http.ResponseWrite
Password string
StayLoggedIn bool
}
-if err := unmarshalTo(r.Body, &req); err != nil {
+if err := unmarshalTo(http.MaxBytesReader(w, r.Body, maxLoginRequestSize), &req); err != nil {
l.Debugln("Failed to parse username and password:", err)
http.Error(w, "Failed to parse username and password.", http.StatusBadRequest)
return
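
http.MaxBytesReader wraps the body so that reading beyond the limit returns an error (and marks the connection to be closed) instead of letting an unauthenticated client feed the handler an arbitrarily large payload; one kibibyte is ample for a username/password JSON document. The general shape of the pattern, as a standalone sketch:

    // limitedDecode decodes a JSON request body of at most maxSize
    // bytes; larger bodies produce a decode error rather than
    // unbounded memory use.
    func limitedDecode(w http.ResponseWriter, r *http.Request, v any, maxSize int64) error {
        return json.NewDecoder(http.MaxBytesReader(w, r.Body, maxSize)).Decode(v)
    }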

View File

@@ -23,6 +23,15 @@ func getRedactedConfig(s *service) config.Configuration {
if rawConf.GUI.User != "" {
rawConf.GUI.User = "REDACTED"
}
for folderIdx, folderCfg := range rawConf.Folders {
for deviceIdx, deviceCfg := range folderCfg.Devices {
if deviceCfg.EncryptionPassword != "" {
rawConf.Folders[folderIdx].Devices[deviceIdx].EncryptionPassword = "REDACTED"
}
}
}
return rawConf
}

View File

@@ -459,6 +459,7 @@ func TestRecvOnlyRevertOwnID(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go func() {
+defer cancel()
for {
select {
case <-ctx.Done():
@@ -466,9 +467,9 @@ func TestRecvOnlyRevertOwnID(t *testing.T) {
case <-sub.C():
if file, _ := m.testCurrentFolderFile(f.ID, name); file.Deleted {
t.Error("local file was deleted")
-cancel()
return
} else if file.IsEquivalent(fi, f.modTimeWindow) {
-cancel() // That's what we are waiting for
+return // That's what we are waiting for
}
}
}

View File

@@ -491,15 +491,26 @@ nextFile:
continue nextFile
}
// Verify there is some availability for the file before we start
// processing it
devices := f.model.fileAvailability(f.FolderConfiguration, fi)
-if len(devices) > 0 {
-if err := f.handleFile(fi, copyChan); err != nil {
-f.newPullError(fileName, err)
-}
+if len(devices) == 0 {
+f.newPullError(fileName, errNotAvailable)
+f.queue.Done(fileName)
continue
}
-f.newPullError(fileName, errNotAvailable)
-f.queue.Done(fileName)
+// Verify we have space to handle the file before we start
+// creating temp files etc.
+if err := f.CheckAvailableSpace(uint64(fi.Size)); err != nil { //nolint:gosec
+f.newPullError(fileName, err)
+f.queue.Done(fileName)
+continue
+}
+if err := f.handleFile(fi, copyChan); err != nil {
+f.newPullError(fileName, err)
+}
}
return changed, fileDeletions, dirDeletions, nil
@@ -1327,13 +1338,6 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
}
for state := range in {
-if err := f.CheckAvailableSpace(uint64(state.file.Size)); err != nil { //nolint:gosec
-state.fail(err)
-// Nothing more to do for this failed file, since it would use too much disk space
-out <- state.sharedPullerState
-continue
-}
if f.Type != config.FolderTypeReceiveEncrypted {
f.model.progressEmitter.Register(state.sharedPullerState)
}
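
Taken together, the two hunks move both cheap preconditions, peer availability and free disk space, ahead of handleFile, so a file that cannot be pulled is failed and dequeued before any temp files or copier state exist; previously the space check sat in copierRoutine, after the file had already entered the pipeline. The resulting guard-clause ordering, as a runnable schematic with invented types:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotAvailable = errors.New("no connected device has the file")

    type file struct {
        name      string
        size      int64
        available bool
    }

    // pull mirrors the new ordering: cheap guard clauses first, the
    // expensive handleFile equivalent only for files that pass both.
    func pull(files []file, free int64) {
        for _, f := range files {
            if !f.available {
                fmt.Println(f.name, "skipped:", errNotAvailable)
                continue
            }
            if f.size > free {
                fmt.Println(f.name, "skipped: insufficient space")
                continue
            }
            fmt.Println(f.name, "pulled")
        }
    }

    func main() {
        pull([]file{{"a", 10, true}, {"b", 10, false}, {"c", 99, true}}, 50)
    }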

View File

@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/events"
)
@@ -125,11 +126,12 @@ func (s *stateTracker) setState(newState folderState) {
eventData["duration"] = time.Since(s.changed).Seconds()
}
slog.Debug("Folder changed state", "folder", s.folderID, "state", newState, "from", s.current)
s.current = newState
s.changed = time.Now().Truncate(time.Second)
s.evLogger.Log(events.StateChanged, eventData)
slog.Info("Folder changed state", "folder", s.folderID, "state", newState)
}
// getState returns the current state, the time when it last changed, and the
@@ -156,6 +158,12 @@ func (s *stateTracker) setError(err error) {
"from": s.current.String(),
}
if err != nil && s.current != FolderError {
slog.Warn("Folder is in error state", slog.String("folder", s.folderID), slogutil.Error(err))
} else if err == nil && s.current == FolderError {
slog.Info("Folder error state was cleared", slog.String("folder", s.folderID))
}
if err != nil {
eventData["error"] = err.Error()
s.current = FolderError

View File

@@ -8,6 +8,7 @@
package osutil
import (
"os"
"path/filepath"
"strings"
"sync"
@@ -142,3 +143,21 @@ func IsDeleted(ffs fs.Filesystem, name string) bool {
}
return false
}
func DirSize(location string) int64 {
entries, err := os.ReadDir(location)
if err != nil {
return 0
}
var size int64
for _, entry := range entries {
fi, err := entry.Info()
if err != nil {
continue
}
size += fi.Size()
}
return size
}
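
Note that DirSize sums only the immediate entries of the directory, without recursing, and reports 0 on any read error; that is adequate for a rough size report on a flat data directory but would undercount nested trees. Hypothetical usage (the path is invented):

    size := osutil.DirSize(filepath.Join(dataDir, "db"))
    slog.Info("Approximate database size", "bytes", size)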

View File

@@ -31,6 +31,8 @@ var bufPool = sync.Pool{
},
}
+const hashLength = sha256.Size
var hashPool = sync.Pool{
New: func() any {
return sha256.New()
@@ -43,9 +45,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
counter = &noopCounter{}
}
-hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
-const hashLength = sha256.Size
var blocks []protocol.BlockInfo
var hashes, thisHash []byte
@@ -62,8 +61,14 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hashes = make([]byte, 0, hashLength*numBlocks)
}
+hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
// A 32k buffer is used for copying into the hash function.
buf := bufPool.Get().(*[bufSize]byte)[:] //nolint:forcetypeassert
+defer func() {
+bufPool.Put((*[bufSize]byte)(buf))
+hf.Reset()
+hashPool.Put(hf)
+}()
var offset int64
lr := io.LimitReader(r, int64(blocksize)).(*io.LimitedReader)
@@ -102,9 +107,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hf.Reset()
}
-bufPool.Put((*[bufSize]byte)(buf))
-hf.Reset()
-hashPool.Put(hf)
if len(blocks) == 0 {
// Empty file
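
The point of the change: the pooled hash is now acquired next to the buffer and both are released in a single defer, so every exit path (including errors mid-scan) returns them to their pools, where the old code only did so on the success path at the bottom. The general defer-based pool pattern, as a standalone sketch:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "hash"
        "sync"
    )

    var hashPool = sync.Pool{New: func() any { return sha256.New() }}

    // digest hashes data with a pooled hash.Hash; the deferred
    // Reset+Put runs on every return path, so nothing leaks.
    func digest(data []byte) []byte {
        hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
        defer func() {
            hf.Reset()
            hashPool.Put(hf)
        }()
        hf.Write(data)
        return hf.Sum(nil)
    }

    func main() {
        fmt.Printf("%x\n", digest([]byte("hello")))
    }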

View File

@@ -83,6 +83,8 @@ func SecureDefaultWithTLS12() *tls.Config {
// We've put some thought into this choice and would like it to
// matter.
PreferServerCipherSuites: true,
// We support HTTP/2 and HTTP/1.1
NextProtos: []string{"h2", "http/1.1"},
ClientSessionCache: tls.NewLRUClientSessionCache(0),
}
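
Advertising both protocols via ALPN lets clients negotiate HTTP/2 during the TLS handshake while keeping HTTP/1.1 as the fallback; Go's net/http serves HTTP/2 automatically on a TLS listener when "h2" is the negotiated protocol. A hedged usage sketch (handler and file names illustrative):

    srv := &http.Server{
        Addr:      ":8443",
        Handler:   mux, // some http.Handler
        TLSConfig: tlsutil.SecureDefaultWithTLS12(), // now offers h2 + http/1.1
    }
    // Clients that offer "h2" get HTTP/2; others fall back to HTTP/1.1.
    log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))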

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "STDISCOSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "STRELAYSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH OVERVIEW
@@ -148,7 +148,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
-<maxConcurrentWrites>2</maxConcurrentWrites>
+<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -250,7 +250,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
-<maxConcurrentWrites>2</maxConcurrentWrites>
+<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -340,7 +340,7 @@ GUI.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
-<maxConcurrentWrites>2</maxConcurrentWrites>
+<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -607,7 +607,7 @@ folder is located on a FAT partition, and \fB0\fP otherwise.
.TP
.B maxConcurrentWrites
Maximum number of concurrent write operations while syncing. Increasing this might increase or
-decrease disk performance, depending on the underlying storage. Default is \fB2\fP\&.
+decrease disk performance, depending on the underlying storage. Default is \fB16\fP\&.
.UNINDENT
.INDENT 0.0
.TP
@@ -1555,7 +1555,7 @@ are set, \fI\%\-\-auditfile\fP takes priority.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
-<maxConcurrentWrites>2</maxConcurrentWrites>
+<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp
@@ -156,7 +156,7 @@ Returns the current configuration.
\(dqmarkerName\(dq: \(dq.stfolder\(dq,
\(dqcopyOwnershipFromParent\(dq: false,
\(dqmodTimeWindowS\(dq: 0,
-\(dqmaxConcurrentWrites\(dq: 2,
+\(dqmaxConcurrentWrites\(dq: 16,
\(dqdisableFsync\(dq: false,
\(dqblockPullOrder\(dq: \(dqstandard\(dq,
\(dqcopyRangeMethod\(dq: \(dqstandard\(dq,
@@ -328,7 +328,7 @@ Returns the current configuration.
\(dqmarkerName\(dq: \(dq.stfolder\(dq,
\(dqcopyOwnershipFromParent\(dq: false,
\(dqmodTimeWindowS\(dq: 0,
-\(dqmaxConcurrentWrites\(dq: 2,
+\(dqmaxConcurrentWrites\(dq: 16,
\(dqdisableFsync\(dq: false,
\(dqblockPullOrder\(dq: \(dqstandard\(dq,
\(dqcopyRangeMethod\(dq: \(dqstandard\(dq,
@@ -398,14 +398,14 @@ config, modify the needed parts and post it again.
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
-Return format changed in versions 0.13.0, 1.19.0 and 1.23.0.
+Return format changed in versions 0.13.0, 0.14.14, 1.2.0, 1.19.0, 1.23.0 and 1.25.0.
.UNINDENT
.UNINDENT
.sp
Returns the list of configured devices and some metadata associated
-with them. The list also contains the local device itself as not connected.
+with them.
.sp
-The connection types are \fBTCP (Client)\fP, \fBTCP (Server)\fP, \fBRelay (Client)\fP and \fBRelay (Server)\fP\&.
+The connection types are \fBtcp\-client\fP, \fBtcp\-server\fP, \fBrelay\-client\fP, \fBrelay\-server\fP, \fBquic\-client\fP and \fBquic\-server\fP\&.
.INDENT 0.0
.INDENT 3.5
.sp
@@ -446,7 +446,7 @@ The connection types are \fBTCP (Client)\fP, \fBTCP (Server)\fP, \fBRelay (Clien
\(dqoutBytesTotal\(dq: 550,
\(dqpaused\(dq: false,
\(dqstartedAt\(dq: \(dq2015\-11\-07T00:09:47Z\(dq,
-\(dqtype\(dq: \(dqTCP (Client)\(dq
+\(dqtype\(dq: \(dqtcp\-client\(dq
}
},
\(dqtotal\(dq: {

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING" "1" "Aug 29, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing \- Syncthing
.SH SYNOPSIS

View File

@@ -41,7 +41,7 @@
cross compilation with SQLite:
- dragonfly/amd64
-- illumos/amd64 and solaris/amd64
+- solaris/amd64
- linux/ppc64
- netbsd/*
- openbsd/386 and openbsd/arm