Compare commits

...

23 Commits

Author SHA1 Message Date
Jakob Borg
25ae01b0d7 chore(sqlite): skip database GC entirely when it's provably unnecessary (#10379)
Store the sequence number of the last GC sweep in a KV. Next time, if it
matches we can just skip GC because nothing has been added or removed.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-08 08:55:04 +02:00
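The idea in this commit can be sketched as follows. This is an illustrative stand-in, not Syncthing's actual code: `KV` and `maybeGC` are invented names, and the real implementation compares the local device sequence against a value stored via the database's typed KV layer.

```go
// Sketch of "skip GC when provably unnecessary": record the sequence
// number after a successful sweep, and skip the next sweep if the
// sequence hasn't moved (nothing was added or removed).
package main

import "fmt"

// KV is a stand-in for the database key-value store.
type KV map[string]int64

const lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"

// maybeGC runs gc only when currentSeq differs from the sequence
// recorded after the last successful sweep.
func maybeGC(kv KV, currentSeq int64, gc func() error) (bool, error) {
	if prev, ok := kv[lastSuccessfulGCSeqKey]; ok && prev == currentSeq {
		return false, nil // nothing changed since the last sweep
	}
	if err := gc(); err != nil {
		return false, err
	}
	kv[lastSuccessfulGCSeqKey] = currentSeq // record the successful sweep
	return true, nil
}

func main() {
	kv := KV{}
	ran, _ := maybeGC(kv, 42, func() error { return nil })
	fmt.Println(ran) // true: first sweep runs
	ran, _ = maybeGC(kv, 42, func() error { return nil })
	fmt.Println(ran) // false: same sequence, sweep skipped
}
```

Note that the sequence is only recorded when the sweep succeeds, so an interrupted GC is retried on the next run.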
Syncthing Release Automation
66583927f8 chore(gui, man, authors): update docs, translations, and contributors 2025-09-08 03:52:21 +00:00
Simon Frei
f0328abeaa chore(scanner): always return values to the pools when hashing blocks (#10377)
There are some return statements in between, but putting the values
back isn't my motivation (that path hardly ever happens); I just find
this more readable. Same with moving `hashLength`: placed next to the
pool, its connection with `sha256.New()` is clearer.

Followup to:
chore(scanner): reduce memory pressure by using pools inside hasher #10222
6e26fab3a0

Signed-off-by: Simon Frei <freisim93@gmail.com>
2025-09-07 17:00:19 +02:00
Jakob Borg
4b8d07d91c fix(sqlite): explicitly set temporary directory location (fixes #10368) (#10376)
On Unixes, avoid /tmp, which would otherwise likely become the chosen default.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:47 +02:00
Jakob Borg
c33daca3b4 fix(sqlite): less impactful periodic garbage collection (#10374)
Periodic garbage collection can take a long time on large folders. The worst
step is the one for blocks, which are typically orders of magnitude more
numerous than files or block lists.

This improves the situation by running the blocks GC in a number of smaller
range chunks, in random order, and stopping after a time limit. At most ten
minutes per run will be spent garbage collecting blocklists and blocks.

With this, we're not guaranteed to complete a full GC on every run, but
we'll make some progress and get there eventually.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-07 14:04:29 +02:00
Amin Vakil
a533f453f8 build: trigger nightly build only on syncthing repo (#10375)
Signed-off-by: Amin Vakil <info@aminvakil.com>
2025-09-07 14:03:33 +02:00
Jakob Borg
3c9e87d994 build: exclude illumos from cross building
Now that we have a native build for it.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 11:51:15 +02:00
Jakob Borg
f0180cb014 fix(sqlite): avoid rowid on kv table (#10367)
No migration on this as it has no practical impact, just a slight
cleanup for new installations.

Also a refactor of how we declare single column primary keys, for
consistency.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 09:31:07 +00:00
Jakob Borg
a99a730c0c fix(tlsutil): support HTTP/2 on GUI/API connections (#10366)
By not setting ALPN we were implicitly rejecting HTTP/2, completely
unnecessarily.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:57:39 +02:00
Jakob Borg
36254473a3 chore(slogutil): add configurable logging format (fixes #10352) (#10354)
This adds several options for configuring the log format of timestamps
and severity levels, making it more suitable for integration with log
systems like systemd.

      --log-format-timestamp="2006-01-02 15:04:05"
         Format for timestamp, set to empty to disable timestamps ($STLOGFORMATTIMESTAMP)

      --[no-]log-format-level-string
         Whether to include level string in log line ($STLOGFORMATLEVELSTRING)

      --[no-]log-format-level-syslog
         Whether to include level as syslog prefix in log line ($STLOGFORMATLEVELSYSLOG)

So, to get output suitable for systemd (syslog prefix, no level
string, no timestamp) we can pass `--log-format-timestamp=""
--no-log-format-level-string --log-format-level-syslog` or,
equivalently, set `STLOGFORMATTIMESTAMP="" STLOGFORMATLEVELSTRING=false
STLOGFORMATLEVELSYSLOG=true`.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-05 10:52:49 +02:00
Jakob Borg
800596139e chore(sqlite): stamp files with application_id
No practical effect, just a tiny bit of fun to stamp the database files
with an application ID that identifies them.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:15:38 +02:00
Jakob Borg
f48782e4df fix(sqlite): revert to default page cache size (#10362)
While we're figuring out optimal defaults, reduce the page cache size to
the compiled-in default. On my computer this makes no difference in
benchmarks. In forum threads, it solved the problem of massive memory
usage during initial scan.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 23:07:51 +02:00
Jakob Borg
922cc7544e docs: we now do binaries for illumos again
Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 21:38:30 +02:00
Tommy van der Vorst
9e262d84de fix(api): redact device encryption passwords in support bundle config (#10359)
* fix(api): redact device encryption passwords in support bundle config

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>

* Update lib/api/support_bundle.go

Signed-off-by: Jakob Borg <jakob@kastelo.net>

---------

Signed-off-by: Tommy van der Vorst <tommy@pixelspark.nl>
Signed-off-by: Jakob Borg <jakob@kastelo.net>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 18:22:59 +00:00
Jakob Borg
42db6280e6 fix(model): earlier free-space check (fixes #10347) (#10348)
Since #10332 we'd create the temp file when closing out the puller state
for a file, but this is inappropriate if the reason we're bailing out is
that there isn't space for it to begin with. Instead, do the
free space check before we even start copying/pulling.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 16:53:30 +00:00
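The ordering fix above can be illustrated with a small sketch. The function names, the reserved margin, and the plain byte-count comparison are all assumptions for illustration; Syncthing's real check lives in its puller and filesystem layers:

```go
// Sketch of a pre-flight free-space check: fail before any
// copying/pulling, so no temp file is created for a transfer that
// cannot complete anyway.
package main

import "fmt"

func checkFreeSpace(avail, need, minFree uint64) error {
	if avail < need+minFree {
		return fmt.Errorf("insufficient space: need %d bytes plus %d reserved, have %d",
			need, minFree, avail)
	}
	return nil
}

func pullFile(avail, size uint64) error {
	// The check happens first, before puller state or temp files exist.
	if err := checkFreeSpace(avail, size, 1<<20); err != nil {
		return err
	}
	// ...only now create the temp file and start copying/pulling.
	return nil
}

func main() {
	// Bails out up front instead of after creating a temp file.
	fmt.Println(pullFile(10<<20, 100<<20))
}
```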
Albert Lee
8d8adae310 build: package for illumos using vmactions/omnios-vm (#10328)
Use GitHub Actions to build illumos/amd64 package.

Signed-off-by: Albert Lee <trisk@forkgnu.org>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 08:51:44 +00:00
Jakob Borg
12ba4b6aea chore(model): adjust folder state logging (fixes #10350) (#10353)
Removes the chitter-chatter of folder state changes from the info level,
while adding the error state at warning level and a corresponding
clearing of the error state at info level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-04 07:38:06 +00:00
Jakob Borg
372e3c26b0 fix(db): remove temp_store = MEMORY pragmas (#10343)
This reduces database migration memory usage in my test scenario from
3.8 GB to 440 MB. In principle I don't think we're causing many temp
tables to be generated anyway in normal usage, but if we do and someone
can benchmark a performance difference, we can add a tunable. I ran the
database benchmark before and after and didn't see a difference above
the noise level.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 09:27:53 +02:00
Jakob Borg
01e2426a56 fix(syncthing): properly report kibibytes RSS in Linux perfstats
On Linux, the value from getrusage is already in KiB, while on macOS
it's in bytes.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-03 07:52:19 +02:00
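The normalization can be expressed as a tiny helper, mirroring the change visible in the perfstats diff below (the function name is illustrative):

```go
// getrusage's Maxrss is reported in KiB on Linux but in bytes on
// macOS; normalize to KiB before logging.
package main

import (
	"fmt"
	"runtime"
)

func maxrssKiB(maxrss int64) int64 {
	if runtime.GOOS == "darwin" {
		return maxrss / 1024 // macOS: bytes -> KiB
	}
	return maxrss // Linux and friends: already KiB
}

func main() {
	fmt.Println(maxrssKiB(2048)) // 2 on macOS, 2048 elsewhere
}
```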
Tommy van der Vorst
6e9ccf7211 fix(db): only vacuum database on startup when a migration script was actually run (#10339) 2025-09-02 12:03:22 -07:00
Jakob Borg
4986fc1676 docs: minor formatting fixup of previous 2025-09-02 09:19:43 +02:00
Jakob Borg
5ff050e665 docs: update contribution guidelines from the docs site (#10336)
This copies the relevant parts of the contribution guidelines in the
docs, for the purpose of keeping them in a single place. The in-docs
contribution guidelines can become a link to this document.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 09:16:36 +02:00
Jakob Borg
fc40dc8af2 docs: add DCO requirement to contribution guidelines (#10333)
This adds the requirement to have a DCO sign-off on commits.

Signed-off-by: Jakob Borg <jakob@kastelo.net>
2025-09-02 08:24:03 +02:00
47 changed files with 621 additions and 178 deletions

View File

@@ -159,6 +159,7 @@ jobs:
needs:
- build-test
- package-linux
- package-illumos
- package-cross
- package-source
- package-debian
@@ -337,6 +338,39 @@ jobs:
*.tar.gz
compat.json
package-illumos:
runs-on: ubuntu-latest
name: Package for illumos
needs:
- facts
env:
VERSION: ${{ needs.facts.outputs.version }}
GO_VERSION: ${{ needs.facts.outputs.go-version }}
steps:
- uses: actions/checkout@v4
- name: Build syncthing in OmniOS VM
uses: vmactions/omnios-vm@v1
with:
envs: "VERSION GO_VERSION CGO_ENABLED"
usesh: true
prepare: |
pkg install developer/gcc14 web/curl archiver/gnu-tar
run: |
curl -L "https://go.dev/dl/go$GO_VERSION.illumos-amd64.tar.gz" | gtar xzf -
export PATH="$GITHUB_WORKSPACE/go/bin:$PATH"
go version
for tgt in syncthing stdiscosrv strelaysrv ; do
go run build.go -tags "${{env.TAGS}}" tar "$tgt"
done
env:
CGO_ENABLED: "1"
- name: Archive artifacts
uses: actions/upload-artifact@v4
with:
name: packages-illumos
path: "*.tar.gz"
#
# macOS. The entire build runs in the release environment because code
# signing is part of the build process, so it is limited to release
@@ -503,6 +537,7 @@ jobs:
| grep -v aix/ppc64 \
| grep -v android/ \
| grep -v darwin/ \
| grep -v illumos/ \
| grep -v ios/ \
| grep -v js/ \
| grep -v linux/ \
@@ -588,6 +623,7 @@ jobs:
needs:
- codesign-windows
- package-linux
- package-illumos
- package-macos
- package-cross
- package-source

View File

@@ -8,6 +8,7 @@ on:
jobs:
trigger-nightly:
if: github.repository_owner == 'syncthing'
runs-on: ubuntu-latest
name: Push to release-nightly to trigger build
steps:

View File

@@ -34,19 +34,163 @@ Note that the previously used service at
retired and we kindly ask you to sign up on Weblate for continued
involvement.
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! See the [Contribution
Guidelines](https://docs.syncthing.net/dev/contributing.html) for the full
story on committing code.
## Contributing Documentation
Updates to the [documentation site](https://docs.syncthing.net/) can be
made as pull requests on the [documentation
repository](https://github.com/syncthing/docs).
## Contributing Code
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Here's a short rundown of
what you need to keep in mind:
- Don't worry. You are not expected to get everything right on the first
attempt, we'll guide you through it.
- Make sure there is an
[issue](https://github.com/syncthing/syncthing/issues) that describes the
change you want to do. If the thing you want to do does not have an issue
yet, please file one before starting work on it.
- Fork the repository and make your changes in a new branch. Once it's ready
for review, create a pull request.
### Authorship
All code authors are listed in the AUTHORS file. When your first pull
request is accepted your details are added to the AUTHORS file and the list
of authors in the GUI. Commits must be made with the same name and email as
listed in the AUTHORS file. To accomplish this, ensure that your git
configuration is set correctly prior to making your first commit:
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to use
your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### The Developer Certificate of Origin (DCO)
The Syncthing project requires the Developer Certificate of Origin (DCO)
sign-off on pull requests (PRs). This means that all commit messages must
contain a signature line to indicate that the developer accepts the DCO.
The DCO is a lightweight way for contributors to certify that they wrote (or
otherwise have the right to submit) the code and changes they are
contributing to the project. Here is the full [text of the
DCO](https://developercertificate.org):
---
By making a contribution to this project, I certify that:
1. The contribution was created in whole or in part by me and I have the
right to submit it under the open source license indicated in the file;
or
2. The contribution is based upon previous work that, to the best of my
knowledge, is covered under an appropriate open source license and I have
the right under that license to submit that work with modifications,
whether created in whole or in part by me, under the same open source
license (unless I am permitted to submit under a different license), as
indicated in the file; or
3. The contribution was provided directly to me by some other person who
certified (1), (2) or (3) and I have not modified it.
4. I understand and agree that this project and the contribution are public
and that a record of the contribution (including all personal information
I submit with it, including my sign-off) is maintained indefinitely and
may be redistributed consistent with this project or the open source
license(s) involved.
---
Contributors indicate that they adhere to these requirements by adding
a `Signed-off-by` line to their commit messages. For example:
This is my commit message
Signed-off-by: Random J Developer <random@developer.example.org>
The name and email address in this line must match those of the committing
author, and be the same as what you want in the AUTHORS file as per above.
### Coding Style
#### General
- All text files use Unix line endings. The git settings already present in
the repository attempt to enforce this.
- When making changes, follow the brace and parenthesis style of the
surrounding code.
#### Go Specific
- Follow the conventions laid out in [Effective
Go](https://go.dev/doc/effective_go) as much as makes sense. The review
guidelines in [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments) should
generally be followed.
- Each commit should be `go fmt` clean.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
### Commits
- Commit messages (and pull request titles) should follow the [conventional
commits](https://www.conventionalcommits.org/en/v1.0.0/) specification and
be in lower case.
- We use a scope description in the commit message subject. This is the
component of Syncthing that the commit affects. For example, `gui`,
`protocol`, `scanner`, `upnp`, etc -- typically, the part after
`internal/`, `lib/` or `cmd/` in the package path. If the commit doesn't
affect a specific component, such as for changes to the build system or
documentation, the scope should be omitted. The same goes for changes that
affect many components which would be cumbersome to list.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject. A correctly
formatted commit message subject looks like this:
feat(dialer): add env var to disable proxy fallback (fixes #3006)
- If the commit message subject doesn't say it all, one or more paragraphs of
describing text should be added to the commit message. This should explain
why the change is made and what it accomplishes.
- When drafting a pull request, please feel free to add commits with
corrections and merge from `main` when necessary. This provides a clear
timeline of changes and simplifies review. Do not, in general, rebase your
commits, as this makes review harder.
- Pull requests are merged to `main` using squash merge. The "stream of
consciousness" set of commits described in the previous point will be reduced
to a single commit at merge time. The pull request title and description will
be used as the commit message.
### Tests
Yes please, do add tests when adding features or fixing bugs. Also, when a
pull request is filed a number of automatic tests are run on the code. This
includes:
- That the code actually builds and the test suite passes.
- That the code is correctly formatted (`go fmt`).
- That the commits are based on a reasonably recent `main`.
- That the output from `go lint` and `go vet` is clean. (This checks for a
number of potential problems the compiler doesn't catch.)
## Licensing
All contributions are made available under the same license as the already
@@ -59,10 +203,6 @@ otherwise stated this means MPLv2, but there are exceptions:
- The documentation (man/...) is licensed under the Creative Commons
Attribution 4.0 International License.
- Projects under vendor/... are copyright by and licensed from their
respective original authors. Contributions should be made to the original
project, not here.
Regardless of the license in effect, you retain the copyright to your
contribution.

View File

@@ -164,6 +164,9 @@ type serveCmd struct {
LogLevel slog.Level `help:"Log level for all packages (DEBUG,INFO,WARN,ERROR)" env:"STLOGLEVEL" default:"INFO"`
LogMaxFiles int `name:"log-max-old-files" help:"Number of old files to keep (zero to keep only current)" default:"${logMaxFiles}" placeholder:"N" env:"STLOGMAXOLDFILES"`
LogMaxSize int `help:"Maximum size of any file (zero to disable log rotation)" default:"${logMaxSize}" placeholder:"BYTES" env:"STLOGMAXSIZE"`
LogFormatTimestamp string `name:"log-format-timestamp" help:"Format for timestamp, set to empty to disable timestamps" env:"STLOGFORMATTIMESTAMP" default:"${timestampFormat}"`
LogFormatLevelString bool `name:"log-format-level-string" help:"Whether to include level string in log line" env:"STLOGFORMATLEVELSTRING" default:"${levelString}" negatable:""`
LogFormatLevelSyslog bool `name:"log-format-level-syslog" help:"Whether to include level as syslog prefix in log line" env:"STLOGFORMATLEVELSYSLOG" default:"${levelSyslog}" negatable:""`
NoBrowser bool `help:"Do not start browser" env:"STNOBROWSER"`
NoPortProbing bool `help:"Don't try to find free ports for GUI and listen addresses on first startup" env:"STNOPORTPROBING"`
NoRestart bool `help:"Do not restart Syncthing when exiting due to API/GUI command, upgrade, or crash" env:"STNORESTART"`
@@ -186,10 +189,13 @@ type serveCmd struct {
}
func defaultVars() kong.Vars {
vars := kong.Vars{}
vars["logMaxSize"] = strconv.Itoa(10 << 20) // 10 MiB
vars["logMaxFiles"] = "3" // plus the current one
vars := kong.Vars{
"logMaxSize": strconv.Itoa(10 << 20), // 10 MiB
"logMaxFiles": "3", // plus the current one
"levelString": strconv.FormatBool(slogutil.DefaultLineFormat.LevelString),
"levelSyslog": strconv.FormatBool(slogutil.DefaultLineFormat.LevelSyslog),
"timestampFormat": slogutil.DefaultLineFormat.TimestampFormat,
}
// On non-Windows, we explicitly default to "-" which means stdout. On
// Windows, the "default" options.logFile will later be replaced with the
@@ -262,8 +268,14 @@ func (c *serveCmd) Run() error {
osutil.HideConsole()
}
// The default log level for all packages
// Customize the logging early
slogutil.SetLineFormat(slogutil.LineFormat{
TimestampFormat: c.LogFormatTimestamp,
LevelString: c.LogFormatLevelString,
LevelSyslog: c.LogFormatLevelSyslog,
})
slogutil.SetDefaultLevel(c.LogLevel)
slogutil.SetLevelOverrides(os.Getenv("STTRACE"))
// Treat an explicitly empty log file name as no log file
if c.LogFile == "" {
@@ -1039,7 +1051,7 @@ func (m migratingAPI) Serve(ctx context.Context) error {
w.Header().Set("Content-Type", "text/plain")
w.Write([]byte("*** Database migration in progress ***\n\n"))
for _, line := range slogutil.GlobalRecorder.Since(time.Time{}) {
line.WriteTo(w)
_, _ = line.WriteTo(w, slogutil.DefaultLineFormat)
}
}),
}

View File

@@ -16,6 +16,7 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/exp/constraints"
@@ -48,11 +49,16 @@ func savePerfStats(file string) {
in, out := protocol.TotalInOut()
timeDiff := t.Sub(prevTime)
rss := curRus.Maxrss
if build.IsDarwin {
rss /= 1024
}
fmt.Fprintf(fd, "%.03f\t%f\t%d\t%d\t%.0f\t%.0f\t%d\n",
t.Sub(t0).Seconds(),
rate(cpusec(&prevRus), cpusec(&curRus), timeDiff, 1),
(curMem.Sys-curMem.HeapReleased)/1024,
curRus.Maxrss/1024,
rss,
rate(prevIn, in, timeDiff, 1e3),
rate(prevOut, out, timeDiff, 1e3),
dirsize(locations.Get(locations.Database))/1024,

View File

@@ -7,6 +7,9 @@ StartLimitBurst=4
[Service]
User=%i
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1

View File

@@ -5,7 +5,10 @@ StartLimitIntervalSec=60
StartLimitBurst=4
[Service]
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart --logflags=0
Environment="STLOGFORMATTIMESTAMP="
Environment="STLOGFORMATLEVELSTRING=false"
Environment="STLOGFORMATLEVELSYSLOG=true"
ExecStart=/usr/bin/syncthing serve --no-browser --no-restart
Restart=on-failure
RestartSec=1
SuccessExitStatus=3 4

View File

@@ -177,7 +177,7 @@
"Folder type \"{%receiveEncrypted%}\" can only be set when adding a new folder.": "Вида „{{receiveEncrypted}}“ може да бъде избран само при добавяне на папка.",
"Folder type \"{%receiveEncrypted%}\" cannot be changed after adding the folder. You need to remove the folder, delete or decrypt the data on disk, and add the folder again.": "Видът папката „{{receiveEncrypted}}“ не може да бъде променян след нейното създаване. Трябва да я премахнете, изтриете или разшифровате съдържанието и да добавите папката отново.",
"Folders": "Папки",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете нейната първопричина или потърсете помощ ако не съумявате.",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Грешка при започване на наблюдението за промени на следните папки. Всяка минута ще бъде извършван нов опит, така че грешката скоро може да изчезне. Ако все пак не изчезне, отстранете първопричината или ако не съумявате потърсете помощ.",
"Forever": "Завинаги",
"Full Rescan Interval (s)": "Интервал на пълно обхождане (секунди)",
"GUI": "Интерфейс",

View File

@@ -27,6 +27,7 @@
"Allowed Networks": "Redes permitidas",
"Alphabetic": "Alfabética",
"Altered by ignoring deletes.": "Cambiado por ignorar o borrado.",
"Always turned on when the folder type is \"{%foldertype%}\".": "Sempre acendido cando o cartafol é de tipo \"{{foldertype}}\".",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Un comando externo xestiona as versións. Ten que eliminar o ficheiro do cartafol compartido. Si a ruta ao aplicativo contén espazos, deberían ir acotados.",
"Anonymous Usage Reporting": "Informe anónimo de uso",
"Anonymous usage report format has changed. Would you like to move to the new format?": "O formato do informe de uso anónimo cambiou. Quere usar o novo formato?",
@@ -52,6 +53,7 @@
"Body:": "Corpo:",
"Bugs": "Erros",
"Cancel": "Cancelar",
"Cannot be enabled when the folder type is \"{%foldertype%}\".": "Non se pode activar cando o cartafol é de tipo \"{{foldertype}}\".",
"Changelog": "Rexistro de cambios",
"Clean out after": "Limpar despois",
"Cleaning Versions": "Limpando Versións",
@@ -80,6 +82,7 @@
"Custom Range": "Rango personalizado",
"Danger!": "Perigo!",
"Database Location": "Localización da Base de Datos",
"Debug": "Depurar",
"Debugging Facilities": "Ferramentas de depuración",
"Default": "Predeterminado",
"Default Configuration": "Configuración Predeterminada",
@@ -140,6 +143,7 @@
"Enables sending extended attributes to other devices, and applying incoming extended attributes. May require running with elevated privileges.": "Activa o envío de atributos extendidos a outros dispositivos, e aplicar os atributos extendidos recibidos. Podería requerir a execución con privilexios elevados.",
"Enables sending extended attributes to other devices, but not applying incoming extended attributes. This can have a significant performance impact. Always enabled when \"Sync Extended Attributes\" is enabled.": "Activa o envío de atributos extendidos a outros dispositivos, pero non aplica atributos extendidos que se reciben. Isto podería afectar significativamente ao rendemento. Sempre está activado cando «Sincr Atributos Extendidos\" está activado.",
"Enables sending ownership information to other devices, and applying incoming ownership information. Typically requires running with elevated privileges.": "Activa o envío de información sobre a propiedade a outros dispositivos, e aplica a información sobre a propiedade cando se recibe. Normalmente require a execución con privilexios elevados.",
"Enables sending ownership information to other devices, but not applying incoming ownership information. This can have a significant performance impact. Always enabled when \"Sync Ownership\" is enabled.": "Activa o envío a outros dispositivos de información sobre a propiedade, pero non aplica información entrante sobre a propiedade. Isto pode afectar en gran medida ao rendemento. Está sempre activado cando \"Sincronización da propiedade\" está activada.",
"Enter a non-negative number (e.g., \"2.35\") and select a unit. Percentages are as part of the total disk size.": "Introduza un número non negativo (por exemplo, \"2.35\") e seleccione unha unidade. As porcentaxes son como partes totais do tamaño do disco.",
"Enter a non-privileged port number (1024 - 65535).": "Introduza un número de porto non privilexiado (1024-65535).",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduza direccións separadas por comas (\"tcp://ip:porto\", \"tcp://host:porto\") ou \"dynamic\" para realizar o descubrimento automático da dirección.",

View File

@@ -25,7 +25,11 @@ import (
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 4
const (
currentSchemaVersion = 4
applicationIDMain = 0x53546d6e // "STmn", Syncthing main database
applicationIDFolder = 0x53546664 // "STfd", Syncthing folder database
)
//go:embed sql/**
var embedded embed.FS
@@ -110,6 +114,7 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
}
if int(n) > ver.SchemaVersion {
slog.Info("Applying database migration", slogutil.FilePath(db.baseName), slog.String("script", scr))
shouldVacuum = true
return true
}
return false
@@ -118,7 +123,6 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
if err := db.runScripts(tx, script, filter); err != nil {
return nil, wrap(err)
}
shouldVacuum = true
}
}

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"log/slog"
"os"
"path/filepath"
@@ -15,6 +16,7 @@ import (
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
)
const (
@@ -52,8 +54,7 @@ func Open(path string, opts ...Option) (*DB, error) {
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -65,6 +66,8 @@ func Open(path string, opts ...Option) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, maxDBConns, pragmas, schemas, migrations)
if err != nil {
@@ -99,11 +102,10 @@ func Open(path string, opts ...Option) (*DB, error) {
func OpenForMigration(path string) (*DB, error) {
pragmas := []string{
"journal_mode = OFF",
"default_temp_store = MEMORY",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDMain),
}
schemas := []string{
"sql/schema/common/*",
@@ -115,6 +117,8 @@ func OpenForMigration(path string) (*DB, error) {
}
_ = os.MkdirAll(path, 0o700)
initTmpDir(path)
mainPath := filepath.Join(path, "main.db")
mainBase, err := openBase(mainPath, 1, pragmas, schemas, migrations)
if err != nil {
@@ -144,3 +148,24 @@ func (s *DB) Close() error {
}
return wrap(s.baseDB.Close())
}
func initTmpDir(path string) {
if build.IsWindows || build.IsDarwin || os.Getenv("SQLITE_TMPDIR") != "" {
// Doesn't use SQLITE_TMPDIR, isn't likely to have a tiny
// ram-backed temp directory, or already set to something.
return
}
// Attempt to override the SQLite temporary directory by setting the
// env var prior to the (first) database being opened and hence
// SQLite becoming initialized. We set the temp dir to the same
// place we store the database, in the hope that there will be
// enough space there for the operations it needs to perform, as
// opposed to /tmp and similar, on some systems.
dbTmpDir := filepath.Join(path, ".tmp")
if err := os.MkdirAll(dbTmpDir, 0o700); err == nil {
os.Setenv("SQLITE_TMPDIR", dbTmpDir)
} else {
slog.Warn("Failed to create temp directory for SQLite", slogutil.FilePath(dbTmpDir), slogutil.Error(err))
}
}

View File

@@ -14,5 +14,5 @@ import (
const (
dbDriver = "sqlite3"
commonOptions = "_fk=true&_rt=true&_cache_size=-65536&_sync=1&_txlock=immediate"
commonOptions = "_fk=true&_rt=true&_sync=1&_txlock=immediate"
)

View File

@@ -15,7 +15,7 @@ import (
const (
dbDriver = "sqlite"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=cache_size(-65536)&_pragma=synchronous(1)"
commonOptions = "_pragma=foreign_keys(1)&_pragma=recursive_triggers(1)&_pragma=synchronous(1)"
)
func init() {

View File

@@ -8,19 +8,28 @@ package sqlite
import (
"context"
"encoding/binary"
"fmt"
"log/slog"
"math/rand"
"strings"
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/thejerf/suture/v4"
)
const (
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
internalMetaPrefix = "dbsvc"
lastMaintKey = "lastMaint"
lastSuccessfulGCSeqKey = "lastSuccessfulGCSeq"
gcMinChunks = 5
gcChunkSize = 100_000 // approximate number of rows to process in a single gc query
gcMaxRuntime = 5 * time.Minute // max time to spend on gc, per table, per run
)
func (s *DB) Service(maintenanceInterval time.Duration) suture.Service {
@@ -91,16 +100,41 @@ func (s *Service) periodic(ctx context.Context) error {
}
return wrap(s.sdb.forEachFolder(func(fdb *folderDB) error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
// Get the current device sequence, for comparison in the next step.
seq, err := fdb.GetDeviceSequence(protocol.LocalDeviceID)
if err != nil {
return wrap(err)
}
// Get the last successful GC sequence. If it's the same as the
// current sequence, nothing has changed and we can skip the GC
// entirely.
meta := db.NewTyped(fdb, internalMetaPrefix)
if prev, _, err := meta.Int64(lastSuccessfulGCSeqKey); err != nil {
return wrap(err)
} else if seq == prev {
slog.DebugContext(ctx, "Skipping unnecessary GC", "folder", fdb.folderID, "fdb", fdb.baseName)
return nil
}
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
// Run the GC steps, in a function to be able to use a deferred
// unlock.
if err := func() error {
fdb.updateLock.Lock()
defer fdb.updateLock.Unlock()
if err := garbageCollectOldDeletedLocked(ctx, fdb); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
}(); err != nil {
return wrap(err)
}
if err := garbageCollectBlocklistsAndBlocksLocked(ctx, fdb); err != nil {
return wrap(err)
}
return tidy(ctx, fdb.sql)
// Update the successful GC sequence.
return wrap(meta.PutInt64(lastSuccessfulGCSeqKey, seq))
}))
}
@@ -119,7 +153,7 @@ func tidy(ctx context.Context, db *sqlx.DB) error {
}
func garbageCollectOldDeletedLocked(ctx context.Context, fdb *folderDB) error {
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName)
if fdb.deleteRetention <= 0 {
slog.DebugContext(ctx, "Delete retention is infinite, skipping cleanup")
return nil
@@ -171,37 +205,108 @@ func garbageCollectBlocklistsAndBlocksLocked(ctx context.Context, fdb *folderDB)
}
defer tx.Rollback() //nolint:errcheck
// Both blocklists and blocks refer to blocklist_hash from the files table.
for _, table := range []string{"blocklists", "blocks"} {
// Count the number of rows
var rows int64
if err := tx.GetContext(ctx, &rows, `SELECT count(*) FROM `+table); err != nil {
return wrap(err)
}
chunks := max(gcMinChunks, rows/gcChunkSize)
l := slog.With("folder", fdb.folderID, "fdb", fdb.baseName, "table", table, "rows", rows, "chunks", chunks)
// Process rows in chunks up to a given time limit. We always use at
// least gcMinChunks chunks, then increase the number as the number of rows
// exceeds gcMinChunks*gcChunkSize.
t0 := time.Now()
for i, br := range randomBlobRanges(int(chunks)) {
if d := time.Since(t0); d > gcMaxRuntime {
l.InfoContext(ctx, "GC was interrupted due to exceeding time limit", "processed", i, "runtime", time.Since(t0))
break
}
// The limit column must be an indexed column with a mostly random distribution of blobs.
// That's the blocklist_hash column for blocklists, and the hash column for blocks.
limitColumn := table + ".blocklist_hash"
if table == "blocks" {
limitColumn = "blocks.hash"
}
q := fmt.Sprintf(`
DELETE FROM %s
WHERE %s AND NOT EXISTS (
SELECT 1 FROM files WHERE files.blocklist_hash = %s.blocklist_hash
)`, table, br.SQL(limitColumn), table)
if res, err := tx.ExecContext(ctx, q); err != nil {
return wrap(err, "delete from "+table)
} else {
l.DebugContext(ctx, "GC query result", "processed", i, "runtime", time.Since(t0), "result", slogutil.Expensive(func() any {
rows, err := res.RowsAffected()
if err != nil {
return slogutil.Error(err)
}
return slog.Int64("rows", rows)
}))
}
}
}
return wrap(tx.Commit())
}
// blobRange defines a range for blob searching. A range is open ended if
// start or end is nil.
type blobRange struct {
start, end []byte
}
// SQL returns the SQL where clause for the given range, e.g.
// `column >= x'49249248' AND column < x'6db6db6c'`
func (r blobRange) SQL(name string) string {
var sb strings.Builder
if r.start != nil {
fmt.Fprintf(&sb, "%s >= x'%x'", name, r.start)
}
if r.start != nil && r.end != nil {
sb.WriteString(" AND ")
}
if r.end != nil {
fmt.Fprintf(&sb, "%s < x'%x'", name, r.end)
}
return sb.String()
}
// randomBlobRanges returns n blobRanges in random order
func randomBlobRanges(n int) []blobRange {
ranges := blobRanges(n)
rand.Shuffle(len(ranges), func(i, j int) { ranges[i], ranges[j] = ranges[j], ranges[i] })
return ranges
}
// blobRanges returns n blobRanges
func blobRanges(n int) []blobRange {
// We use three byte (24 bit) prefixes to get fairly granular ranges and easy bit
// conversions.
rangeSize := (1 << 24) / n
ranges := make([]blobRange, 0, n)
var prev []byte
for i := range n {
var pref []byte
if i < n-1 {
end := (i + 1) * rangeSize
pref = intToBlob(end)
}
ranges = append(ranges, blobRange{prev, pref})
prev = pref
}
return ranges
}
func intToBlob(n int) []byte {
var pref [4]byte
binary.BigEndian.PutUint32(pref[:], uint32(n)) //nolint:gosec
// first byte is always zero and not part of the range
return pref[1:]
}

View File

@@ -0,0 +1,37 @@
// Copyright (C) 2025 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package sqlite
import (
"bytes"
"fmt"
"strings"
"testing"
)
func TestBlobRange(t *testing.T) {
exp := `
hash < x'249249'
hash >= x'249249' AND hash < x'492492'
hash >= x'492492' AND hash < x'6db6db'
hash >= x'6db6db' AND hash < x'924924'
hash >= x'924924' AND hash < x'b6db6d'
hash >= x'b6db6d' AND hash < x'db6db6'
hash >= x'db6db6'
`
ranges := blobRanges(7)
buf := new(bytes.Buffer)
for _, r := range ranges {
fmt.Fprintln(buf, r.SQL("hash"))
}
if strings.TrimSpace(buf.String()) != strings.TrimSpace(exp) {
t.Log(buf.String())
t.Error("unexpected output")
}
}

View File

@@ -7,6 +7,7 @@
package sqlite
import (
"fmt"
"time"
"github.com/syncthing/syncthing/lib/protocol"
@@ -25,8 +26,7 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
"journal_mode = WAL",
"optimize = 0x10002",
"auto_vacuum = INCREMENTAL",
"temp_store = MEMORY",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",
@@ -64,11 +64,10 @@ func openFolderDB(folder, path string, deleteRetention time.Duration) (*folderDB
func openFolderDBForMigration(folder, path string, deleteRetention time.Duration) (*folderDB, error) {
pragmas := []string{
"journal_mode = OFF",
"temp_store = MEMORY",
"foreign_keys = 0",
"synchronous = 0",
"locking_mode = EXCLUSIVE",
fmt.Sprintf("application_id = %d", applicationIDFolder),
}
schemas := []string{
"sql/schema/common/*",

View File

@@ -6,9 +6,8 @@
-- Schema migrations hold the list of historical migrations applied
CREATE TABLE IF NOT EXISTS schemamigrations (
schema_version INTEGER NOT NULL PRIMARY KEY,
applied_at INTEGER NOT NULL, -- unix nanos
syncthing_version TEXT NOT NULL COLLATE BINARY
) STRICT
;

View File

@@ -9,5 +9,5 @@
CREATE TABLE IF NOT EXISTS kv (
key TEXT NOT NULL PRIMARY KEY COLLATE BINARY,
value BLOB NOT NULL
) STRICT, WITHOUT ROWID
;

View File

@@ -6,10 +6,9 @@
-- indexids holds the index ID and maximum sequence for a given device and folder
CREATE TABLE IF NOT EXISTS indexids (
device_idx INTEGER NOT NULL PRIMARY KEY,
index_id TEXT NOT NULL COLLATE BINARY,
sequence INTEGER NOT NULL DEFAULT 0,
FOREIGN KEY(device_idx) REFERENCES devices(idx) ON DELETE CASCADE
) STRICT, WITHOUT ROWID
;

View File

@@ -6,9 +6,8 @@
--- Backing for the MtimeFS
CREATE TABLE IF NOT EXISTS mtimes (
name TEXT NOT NULL PRIMARY KEY,
ondisk INTEGER NOT NULL, -- unix nanos
virtual INTEGER NOT NULL -- unix nanos
) STRICT, WITHOUT ROWID
;

View File

@@ -18,14 +18,30 @@ import (
"time"
)
type LineFormat struct {
TimestampFormat string
LevelString bool
LevelSyslog bool
}
type formattingOptions struct {
LineFormat
out io.Writer
recs []*lineRecorder
timeOverride time.Time
}
type formattingHandler struct {
attrs []slog.Attr
groups []string
opts *formattingOptions
}
func SetLineFormat(f LineFormat) {
globalFormatter.LineFormat = f
}
var _ slog.Handler = (*formattingHandler)(nil)
func (h *formattingHandler) Enabled(context.Context, slog.Level) bool {
@@ -83,19 +99,19 @@ func (h *formattingHandler) Handle(_ context.Context, rec slog.Record) error {
}
line := Line{
When: cmp.Or(h.opts.timeOverride, rec.Time),
Message: sb.String(),
Level: rec.Level,
}
// If there is a recorder, record the line.
for _, rec := range h.opts.recs {
rec.record(line)
}
// If there's an output, print the line.
if h.opts.out != nil {
_, _ = line.WriteTo(h.opts.out, h.opts.LineFormat)
}
return nil
}
@@ -143,11 +159,9 @@ func (h *formattingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
}
}
return &formattingHandler{
attrs: append(h.attrs, attrs...),
groups: h.groups,
opts: h.opts,
}
}
@@ -156,11 +170,9 @@ func (h *formattingHandler) WithGroup(name string) slog.Handler {
return h
}
return &formattingHandler{
attrs: h.attrs,
groups: append([]string{name}, h.groups...),
opts: h.opts,
}
}

View File

@@ -17,8 +17,11 @@ import (
func TestFormattingHandler(t *testing.T) {
buf := new(bytes.Buffer)
h := &formattingHandler{
opts: &formattingOptions{
LineFormat: DefaultLineFormat,
out: buf,
timeOverride: time.Unix(1234567890, 0).In(time.UTC),
},
}
l := slog.New(h).With("a", "a")

View File

@@ -9,6 +9,7 @@ package slogutil
import (
"log/slog"
"maps"
"strings"
"sync"
)
@@ -39,6 +40,24 @@ func SetDefaultLevel(level slog.Level) {
globalLevels.SetDefault(level)
}
func SetLevelOverrides(sttrace string) {
pkgs := strings.Split(sttrace, ",")
for _, pkg := range pkgs {
pkg = strings.TrimSpace(pkg)
if pkg == "" {
continue
}
level := slog.LevelDebug
if cutPkg, levelStr, ok := strings.Cut(pkg, ":"); ok {
pkg = cutPkg
if err := level.UnmarshalText([]byte(levelStr)); err != nil {
slog.Warn("Bad log level requested in STTRACE", slog.String("pkg", pkg), slog.String("level", levelStr), Error(err))
}
}
globalLevels.Set(pkg, level)
}
}
type levelTracker struct {
mut sync.RWMutex
defLevel slog.Level

View File

@@ -7,6 +7,7 @@
package slogutil
import (
"bytes"
"encoding/json"
"fmt"
"io"
@@ -22,13 +23,22 @@ type Line struct {
Level slog.Level `json:"level"`
}
func (l *Line) WriteTo(w io.Writer, f LineFormat) (int64, error) {
buf := new(bytes.Buffer)
if f.LevelSyslog {
_, _ = fmt.Fprintf(buf, "<%d>", l.syslogPriority())
}
if f.TimestampFormat != "" {
buf.WriteString(l.When.Format(f.TimestampFormat))
buf.WriteRune(' ')
}
if f.LevelString {
buf.WriteString(l.levelStr())
buf.WriteRune(' ')
}
buf.WriteString(l.Message)
buf.WriteRune('\n')
return buf.WriteTo(w)
}
func (l *Line) levelStr() string {
@@ -51,6 +61,19 @@ func (l *Line) levelStr() string {
}
}
func (l *Line) syslogPriority() int {
switch {
case l.Level < slog.LevelInfo:
return 7
case l.Level < slog.LevelWarn:
return 6
case l.Level < slog.LevelError:
return 4
default:
return 3
}
}
func (l *Line) MarshalJSON() ([]byte, error) {
// Custom marshal to get short level strings instead of default JSON serialisation
return json.Marshal(map[string]any{

View File

@@ -10,20 +10,26 @@ import (
"io"
"log/slog"
"os"
"time"
)
var (
GlobalRecorder = &lineRecorder{level: -1000}
ErrorRecorder = &lineRecorder{level: slog.LevelError}
DefaultLineFormat = LineFormat{
TimestampFormat: time.DateTime,
LevelString: true,
}
globalLevels = &levelTracker{
levels: make(map[string]slog.Level),
descrs: make(map[string]string),
}
globalFormatter = &formattingOptions{
LineFormat: DefaultLineFormat,
recs: []*lineRecorder{GlobalRecorder, ErrorRecorder},
out: logWriter(),
}
slogDef = slog.New(&formattingHandler{opts: globalFormatter})
)
func logWriter() io.Writer {
@@ -38,21 +44,4 @@ func logWriter() io.Writer {
func init() {
slog.SetDefault(slogDef)
}

View File

@@ -23,6 +23,15 @@ func getRedactedConfig(s *service) config.Configuration {
if rawConf.GUI.User != "" {
rawConf.GUI.User = "REDACTED"
}
for folderIdx, folderCfg := range rawConf.Folders {
for deviceIdx, deviceCfg := range folderCfg.Devices {
if deviceCfg.EncryptionPassword != "" {
rawConf.Folders[folderIdx].Devices[deviceIdx].EncryptionPassword = "REDACTED"
}
}
}
return rawConf
}

View File

@@ -491,15 +491,26 @@ nextFile:
continue nextFile
}
// Verify there is some availability for the file before we start
// processing it
devices := f.model.fileAvailability(f.FolderConfiguration, fi)
if len(devices) == 0 {
f.newPullError(fileName, errNotAvailable)
f.queue.Done(fileName)
continue
}
// Verify we have space to handle the file before we start
// creating temp files etc.
if err := f.CheckAvailableSpace(uint64(fi.Size)); err != nil { //nolint:gosec
f.newPullError(fileName, err)
f.queue.Done(fileName)
continue
}
if err := f.handleFile(fi, copyChan); err != nil {
f.newPullError(fileName, err)
}
}
return changed, fileDeletions, dirDeletions, nil
@@ -1327,13 +1338,6 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
}
for state := range in {
if f.Type != config.FolderTypeReceiveEncrypted {
f.model.progressEmitter.Register(state.sharedPullerState)
}

View File

@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/events"
)
@@ -125,11 +126,12 @@ func (s *stateTracker) setState(newState folderState) {
eventData["duration"] = time.Since(s.changed).Seconds()
}
s.current = newState
s.changed = time.Now().Truncate(time.Second)
s.evLogger.Log(events.StateChanged, eventData)
slog.Info("Folder changed state", "folder", s.folderID, "state", newState)
}
// getState returns the current state, the time when it last changed, and the
@@ -156,6 +158,12 @@ func (s *stateTracker) setError(err error) {
"from": s.current.String(),
}
if err != nil && s.current != FolderError {
slog.Warn("Folder is in error state", slog.String("folder", s.folderID), slogutil.Error(err))
} else if err == nil && s.current == FolderError {
slog.Info("Folder error state was cleared", slog.String("folder", s.folderID))
}
if err != nil {
eventData["error"] = err.Error()
s.current = FolderError

View File

@@ -31,6 +31,8 @@ var bufPool = sync.Pool{
},
}
const hashLength = sha256.Size
var hashPool = sync.Pool{
New: func() any {
return sha256.New()
@@ -43,9 +45,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
counter = &noopCounter{}
}
var blocks []protocol.BlockInfo
var hashes, thisHash []byte
@@ -62,8 +61,14 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hashes = make([]byte, 0, hashLength*numBlocks)
}
hf := hashPool.Get().(hash.Hash) //nolint:forcetypeassert
// A 32k buffer is used for copying into the hash function.
buf := bufPool.Get().(*[bufSize]byte)[:] //nolint:forcetypeassert
defer func() {
bufPool.Put((*[bufSize]byte)(buf))
hf.Reset()
hashPool.Put(hf)
}()
var offset int64
lr := io.LimitReader(r, int64(blocksize)).(*io.LimitedReader)
@@ -102,9 +107,6 @@ func Blocks(ctx context.Context, r io.Reader, blocksize int, sizehint int64, cou
hf.Reset()
}
if len(blocks) == 0 {
// Empty file

View File

@@ -83,6 +83,8 @@ func SecureDefaultWithTLS12() *tls.Config {
// We've put some thought into this choice and would like it to
// matter.
PreferServerCipherSuites: true,
// We support HTTP/2 and HTTP/1.1
NextProtos: []string{"h2", "http/1.1"},
ClientSessionCache: tls.NewLRUClientSessionCache(0),
}

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH OVERVIEW
@@ -148,7 +148,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -250,7 +250,7 @@ may no longer correspond to the defaults.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -340,7 +340,7 @@ GUI.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>
@@ -607,7 +607,7 @@ folder is located on a FAT partition, and \fB0\fP otherwise.
.TP
.B maxConcurrentWrites
Maximum number of concurrent write operations while syncing. Increasing this might increase or
decrease disk performance, depending on the underlying storage. Default is \fB16\fP\&.
.UNINDENT
.INDENT 0.0
.TP
@@ -1555,7 +1555,7 @@ are set, \fI\%\-\-auditfile\fP takes priority.
<markerName>.stfolder</markerName>
<copyOwnershipFromParent>false</copyOwnershipFromParent>
<modTimeWindowS>0</modTimeWindowS>
<maxConcurrentWrites>16</maxConcurrentWrites>
<disableFsync>false</disableFsync>
<blockPullOrder>standard</blockPullOrder>
<copyRangeMethod>standard</copyRangeMethod>

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp
@@ -156,7 +156,7 @@ Returns the current configuration.
\(dqmarkerName\(dq: \(dq.stfolder\(dq,
\(dqcopyOwnershipFromParent\(dq: false,
\(dqmodTimeWindowS\(dq: 0,
\(dqmaxConcurrentWrites\(dq: 16,
\(dqdisableFsync\(dq: false,
\(dqblockPullOrder\(dq: \(dqstandard\(dq,
\(dqcopyRangeMethod\(dq: \(dqstandard\(dq,
@@ -328,7 +328,7 @@ Returns the current configuration.
\(dqmarkerName\(dq: \(dq.stfolder\(dq,
\(dqcopyOwnershipFromParent\(dq: false,
\(dqmodTimeWindowS\(dq: 0,
\(dqmaxConcurrentWrites\(dq: 16,
\(dqdisableFsync\(dq: false,
\(dqblockPullOrder\(dq: \(dqstandard\(dq,
\(dqcopyRangeMethod\(dq: \(dqstandard\(dq,
@@ -398,14 +398,14 @@ config, modify the needed parts and post it again.
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
Return format changed in versions 0.13.0, 0.14.14, 1.2.0, 1.19.0, 1.23.0 and 1.25.0.
.UNINDENT
.UNINDENT
.sp
Returns the list of configured devices and some metadata associated
with them.
.sp
The connection types are \fBtcp\-client\fP, \fBtcp\-server\fP, \fBrelay\-client\fP, \fBrelay\-server\fP, \fBquic\-client\fP and \fBquic\-server\fP\&.
.INDENT 0.0
.INDENT 3.5
.sp
@@ -446,7 +446,7 @@ The connection types are \fBTCP (Client)\fP, \fBTCP (Server)\fP, \fBRelay (Clien
\(dqoutBytesTotal\(dq: 550,
\(dqpaused\(dq: false,
\(dqstartedAt\(dq: \(dq2015\-11\-07T00:09:47Z\(dq,
\(dqtype\(dq: \(dqtcp\-client\(dq
}
},
\(dqtotal\(dq: {

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING" "1" "Sep 06, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing \- Syncthing
.SH SYNOPSIS

View File

@@ -41,7 +41,7 @@
cross compilation with SQLite:
- dragonfly/amd64
- solaris/amd64
- linux/ppc64
- netbsd/*
- openbsd/386 and openbsd/arm