Compare commits

...

18 Commits

Author SHA1 Message Date
Jakob Borg
e41d6b9c1e fix(db): apply all migrations and schema in one transaction 2025-08-31 12:43:41 +02:00
Jakob Borg
21ad99c80a Revert "chore(db): update schema version in the same transaction as migration (#10321)"
This reverts commit 4459438245.
2025-08-31 12:43:41 +02:00
Jakob Borg
4ad3f07691 chore(db): migration for previous commits (#10319)
Recreate the blocks and block lists tables.

---------

Co-authored-by: bt90 <btom1990@googlemail.com>
2025-08-31 09:27:33 +02:00
Simon Frei
4459438245 chore(db): update schema version in the same transaction as migration (#10321)
Just to be entirely sure that if a migration succeeds, the schema
version is always updated as well. Currently, if one migration succeeds
but a later one doesn't, the changes of the first migration apply while
the recorded version stays behind; if that migration is
breaking/non-idempotent, it will fail when rerun on the next start
(otherwise the rerun is just pointless).

Unfortunately, with the current `db.runScripts` this wasn't easy to do,
so I had to do quite a bit of refactoring. I am also ensuring the right
order of migrations now, though I assume that was already the case
lexicographically; can't hurt to be safe.
2025-08-30 13:18:31 +02:00
Jakob Borg
2306c6d989 chore(db): benchmark output, migration blocks/s output (#10320)
Just minor tweaks
2025-08-29 14:58:38 +00:00
Tomasz Wilczyński
0de55ef262 chore(gui): use step of 3600 for versions cleanup interval (#10317)
Currently, the input field has no step defined, meaning that the arrow
keys change it by the default step of "1". Considering that the field's
default value is "3600" seconds (one hour), it is unlikely that the user
wants to adjust it in such small increments.

For this reason, change the default step to "3600" (one hour). If the
user needs more granular control, they can still input the value
in seconds manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:57:27 +02:00
Tomasz Wilczyński
d083682418 chore(gui): use steps of 1024 KiB for bandwidth rate limits (#10316)
Currently, the bandwidth limit input fields have no step defined, so
the arrow keys change them by the default step of "1". Since these
fields are measured in KiB, it makes more sense to use larger steps,
such as "1024" (1 MiB); in most cases the user is unlikely to need
byte-level control over the limits.

Note that these steps only apply to increasing the values by using the
arrow keys, and the user is still allowed to input any value they want
manually.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2025-08-29 15:56:55 +02:00
Jakob Borg
c918299eab refactor(db): slightly improve insert performance (#10318)
This just removes an unnecessary foreign key constraint; we already do
the garbage collection manually in the database service.
However, as part of getting here I tried a couple of other variants
along the way:

- Changing the order of the primary key from `(hash, blocklist_hash,
idx)` to `(blocklist_hash, idx, hash)` so that inserts would be
naturally ordered. However this requires a new index `on blocks (hash)`
so that we can still look up blocks by hash, and turns out to be
strictly worse than what we already have.
- Removing the primary key entirely and the `WITHOUT ROWID` to make it a
rowid table without any required order, and an index as above. This is
faster when the table is small, but becomes slower when it's large (due
to dual indexes I guess).

These are the benchmark results from current `main`, the second
alternative below ("Index(hash)") and this proposal that retains the
combined primary key ("combined"). Overall it ends up being about 65%
faster.

<img width="764" height="452" alt="Screenshot 2025-08-29 at 14 36 28"
src="https://github.com/user-attachments/assets/bff3f9d1-916a-485f-91b7-b54b477f5aac"
/>

Ref #10264
2025-08-29 15:26:23 +02:00
bt90
b59443f136 chore(db): avoid rowid for blocks and blocklists (#10315)
### Purpose

Noticed "large" autgenerated indices on blocks and blocklists in
https://forum.syncthing.net/t/database-or-disk-is-full-might-be-syncthing-might-be-qnap/24930/7

They both have a primary key and don't need rowids.

2025-08-29 11:12:39 +02:00
Tomasz Wilczyński
7189a3ebff fix(model): consider number of CPU cores when calculating hashers on interactive OS (#10284) (#10286)
Currently, the number of hashers is always set to 1 on interactive
operating systems, which are defined as Windows, macOS, iOS, and
Android. However, with modern multicore CPUs, it does not make much
sense to limit performance so much.

For example, without this fix, a CPU with 16 cores / 32 threads is
still limited to using just a single thread to hash files per folder,
which may severely affect its performance.

For this reason, instead of using a fixed value, calculate the number
dynamically, so that it equals one-fourth of the total number of CPU
cores. This way, the number of hashers will still end up being just 1 on
a slower 4-thread CPU, but it will be allowed to take larger values when
the number of threads is higher, increasing hashing performance in the
process.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 10:04:08 +00:00
Tomasz Wilczyński
6ed4cca691 fix(model): consider MaxFolderConcurrency when calculating number of hashers (#10285)
Currently, the number of hashers, with the exception of some specific
operating systems or when defined manually, equals the number of CPU
cores divided by the overall number of folders, and it does not take
into account the value of MaxFolderConcurrency at all. This leads to
artificial performance limits even when MaxFolderConcurrency is set to
values lower than the number of cores.

For example, let's say that the number of folders is 50 and
MaxFolderConcurrency is set to a value of 4 on a 16-core CPU. With the
old calculation, the number of hashers would still end up being just 1
due to the large number of folders. However, with the new calculation,
the number of hashers in this case will be 4, leading to better hashing
performance per folder.

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Co-authored-by: Jakob Borg <jakob@kastelo.net>
2025-08-26 11:33:58 +02:00
Tommy van der Vorst
958f51ace6 fix(cmd): only start temporary API server during migration if it's enabled (#10284) 2025-08-25 05:46:23 +00:00
Syncthing Release Automation
07f1320e00 chore(gui, man, authors): update docs, translations, and contributors 2025-08-25 03:57:29 +00:00
Jakob Borg
3da449cfa3 chore(ursrv): count database engines 2025-08-24 22:35:00 +02:00
Jakob Borg
655ef63c74 chore(ursrv): separate calculation from serving metrics 2025-08-24 22:34:58 +02:00
Jakob Borg
01257e838b build: use Go 1.24 tools pattern (#10281) 2025-08-24 12:17:20 +00:00
Simon Frei
e54f51c9c5 chore(db): cleanup DB in tests and remove OpenTemp (#10282)
Filled up my tmpfs with test DBs when running benchmarks :)
2025-08-24 09:58:56 +00:00
Simon Frei
a259a009c8 chore(db): adjust db bench name to improve benchstat grouping (#10283)
The benchstat tool allows custom grouping when comparing with what it
calls "sub-name configuration keys":

https://pkg.go.dev/golang.org/x/perf@v0.0.0-20250813145418-2f7363a06fe1/cmd/benchstat#hdr-Configuring_comparisons

That's quite useful for these benchmarks, as we basically have two
independent configs: the type of benchmark and the size. Real example
usage for the prepared named statements PR (results are rubbish for
unrelated reasons):

```
$ benchstat -row ".name /n" bench-main.out bench-prepared.out
goos: linux
goarch: amd64
pkg: github.com/syncthing/syncthing/internal/db/sqlite
cpu: Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
                            │ bench-main-20250823_014059.out │   bench-prepared-20250823_022849.out   │
                            │             sec/op             │     sec/op      vs base                │
Update Insert100Loc                           248.5m ±  8% ¹   157.7m ±  7% ¹  -36.54% (p=0.000 n=50)
Update RepBlocks100                           253.7m ±  4% ¹   163.6m ±  7% ¹  -35.49% (p=0.000 n=50)
Update RepSame100                            130.42m ±  3% ¹   60.26m ±  2% ¹  -53.80% (p=0.000 n=50)
Update Insert100Rem                           38.54m ±  5% ¹   21.94m ±  1% ¹  -43.07% (p=0.000 n=50)
Update GetGlobal100                          10.897m ±  4% ¹   4.231m ±  1% ¹  -61.17% (p=0.000 n=50)
Update LocalSequenced                         7.560m ±  5% ¹   3.124m ±  2% ¹  -58.68% (p=0.000 n=50)
Update GetDeviceSequenceLoc                  17.554µ ±  6% ¹   8.400µ ±  1% ¹  -52.15% (n=50)
Update GetDeviceSequenceRem                  17.727µ ±  4% ¹   8.237µ ±  2% ¹  -53.54% (p=0.000 n=50)
Update RemoteNeed                              4.147 ± 77% ¹    1.903 ± 78% ¹  -54.11% (p=0.000 n=50)
Update LocalNeed100Largest                   21.516m ± 22% ¹   9.312m ± 47% ¹  -56.72% (p=0.000 n=50)
geomean                                       15.35m           7.486m          -51.22%
¹ benchmarks vary in .fullname
```
2025-08-23 16:12:55 +02:00
68 changed files with 389 additions and 466 deletions

View File

@@ -64,6 +64,12 @@ linters:
# relax the slog rules for debug lines, for now
- linters: [sloglint]
source: Debug
# contexts are irrelevant for SQLite
- linters: [noctx]
text: database/sql
# Rollback errors can be ignored
- linters: [errcheck]
source: Rollback
settings:
sloglint:
context: "scope"

View File

@@ -32,6 +32,21 @@ var (
Subsystem: "ursrv_v2",
Name: "collect_seconds_last",
})
metricsRecalcsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalcs_total",
})
metricsRecalcSecondsTotal = promauto.NewCounter(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_total",
})
metricsRecalcSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",
Name: "recalc_seconds_last",
})
metricsWriteSecondsLast = promauto.NewGauge(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "ursrv_v2",

View File

@@ -7,6 +7,8 @@
package serve
import (
"context"
"log/slog"
"reflect"
"slices"
"strconv"
@@ -28,7 +30,7 @@ type metricsSet struct {
gaugeVecLabels map[string][]string
summaries map[string]*metricSummary
collectMut sync.Mutex
collectMut sync.RWMutex
collectCutoff time.Duration
}
@@ -108,6 +110,60 @@ func nameConstLabels(name string) (string, prometheus.Labels) {
return name, m
}
func (s *metricsSet) Serve(ctx context.Context) error {
s.recalc()
const recalcInterval = 5 * time.Minute
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer := time.NewTimer(next)
defer recalcTimer.Stop()
for {
select {
case <-recalcTimer.C:
s.recalc()
next := time.Until(time.Now().Truncate(recalcInterval).Add(recalcInterval))
recalcTimer.Reset(next)
case <-ctx.Done():
return ctx.Err()
}
}
}
func (s *metricsSet) recalc() {
s.collectMut.Lock()
defer s.collectMut.Unlock()
t0 := time.Now()
defer func() {
dur := time.Since(t0)
slog.Info("Metrics recalculated", "d", dur.String())
metricsRecalcSecondsLast.Set(dur.Seconds())
metricsRecalcSecondsTotal.Add(dur.Seconds())
metricsRecalcsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
}
func (s *metricsSet) addReport(r *contract.Report) {
gaugeVecs := make(map[string][]string)
s.addReportStruct(reflect.ValueOf(r).Elem(), gaugeVecs)
@@ -198,8 +254,8 @@ func (s *metricsSet) Describe(c chan<- *prometheus.Desc) {
}
func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
s.collectMut.Lock()
defer s.collectMut.Unlock()
s.collectMut.RLock()
defer s.collectMut.RUnlock()
t0 := time.Now()
defer func() {
@@ -209,26 +265,6 @@ func (s *metricsSet) Collect(c chan<- prometheus.Metric) {
metricsCollectsTotal.Inc()
}()
for _, g := range s.gauges {
g.Set(0)
}
for _, g := range s.gaugeVecs {
g.Reset()
}
for _, g := range s.summaries {
g.Reset()
}
cutoff := time.Now().Add(s.collectCutoff)
s.srv.reports.Range(func(key string, r *contract.Report) bool {
if s.collectCutoff < 0 && r.Received.Before(cutoff) {
s.srv.reports.Delete(key)
return true
}
s.addReport(r)
return true
})
for _, g := range s.gauges {
c <- g
}

View File

@@ -33,6 +33,7 @@ import (
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/geoip"
"github.com/syncthing/syncthing/lib/ur/contract"
"github.com/thejerf/suture/v4"
)
type CLI struct {
@@ -187,7 +188,12 @@ func (cli *CLI) Run() error {
// New external metrics endpoint accepts reports from clients and serves
// aggregated usage reporting metrics.
main := suture.NewSimple("main")
main.ServeBackground(context.Background())
ms := newMetricsSet(srv)
main.Add(ms)
reg := prometheus.NewRegistry()
reg.MustRegister(ms)
@@ -198,7 +204,7 @@ func (cli *CLI) Run() error {
metricsSrv := http.Server{
ReadTimeout: 5 * time.Second,
WriteTimeout: 15 * time.Second,
WriteTimeout: 60 * time.Second,
Handler: mux,
}
@@ -227,6 +233,11 @@ func (cli *CLI) downloadDumpFile(blobs blob.Store) error {
}
func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
t0 := time.Now()
defer func() {
metricsWriteSecondsLast.Set(float64(time.Since(t0)))
}()
fd, err := os.Create(cli.DumpFile + ".tmp")
if err != nil {
return fmt.Errorf("creating dump file: %w", err)
@@ -245,9 +256,10 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
if err := os.Rename(cli.DumpFile+".tmp", cli.DumpFile); err != nil {
return fmt.Errorf("renaming dump file: %w", err)
}
slog.Info("Dump file saved")
slog.Info("Dump file saved", "d", time.Since(t0).String())
if blobs != nil {
t1 := time.Now()
key := fmt.Sprintf("reports-%s.jsons.gz", time.Now().UTC().Format("2006-01-02"))
fd, err := os.Open(cli.DumpFile)
if err != nil {
@@ -257,7 +269,7 @@ func (cli *CLI) saveDumpFile(srv *server, blobs blob.Store) error {
return fmt.Errorf("uploading dump file: %w", err)
}
_ = fd.Close()
slog.Info("Dump file uploaded")
slog.Info("Dump file uploaded", "d", time.Since(t1).String())
}
return nil
@@ -369,6 +381,13 @@ func (s *server) addReport(rep *contract.Report) bool {
rep.DistOS = rep.OS
rep.DistArch = rep.Arch
if strings.HasPrefix(rep.Version, "v2.") {
rep.Database.ModernCSQLite = strings.Contains(rep.LongVersion, "modernc-sqlite")
rep.Database.MattnSQLite = !rep.Database.ModernCSQLite
} else {
rep.Database.LevelDB = true
}
_, loaded := s.reports.LoadAndStore(rep.UniqueID, rep)
return loaded
}
@@ -388,6 +407,7 @@ func (s *server) save(w io.Writer) error {
}
func (s *server) load(r io.Reader) {
t0 := time.Now()
dec := json.NewDecoder(r)
s.reports.Clear()
for {
@@ -400,7 +420,7 @@ func (s *server) load(r io.Reader) {
}
s.addReport(&rep)
}
slog.Info("Loaded reports", "count", s.reports.Size())
slog.Info("Loaded reports", "count", s.reports.Size(), "d", time.Since(t0).String())
}
var (

View File

@@ -479,7 +479,11 @@ func (c *serveCmd) syncthingMain() {
})
}
if err := syncthing.TryMigrateDatabase(ctx, c.DBDeleteRetentionInterval, cfgWrapper.GUI().Address()); err != nil {
var tempApiAddress string
if cfgWrapper.GUI().Enabled {
tempApiAddress = cfgWrapper.GUI().Address()
}
if err := syncthing.TryMigrateDatabase(ctx, c.DBDeleteRetentionInterval, tempApiAddress); err != nil {
slog.Error("Failed to migrate old-style database", slogutil.Error(err))
os.Exit(1)
}

go.mod
View File

@@ -24,7 +24,6 @@ require (
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/maruel/panicparse/v2 v2.5.0
github.com/mattn/go-sqlite3 v1.14.31
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3
github.com/maxmind/geoipupdate/v6 v6.1.0
github.com/miscreant/miscreant.go v0.0.0-20200214223636-26d376326b75
github.com/oschwald/geoip2-golang v1.13.0
@@ -49,7 +48,6 @@ require (
golang.org/x/sys v0.35.0
golang.org/x/text v0.28.0
golang.org/x/time v0.12.0
golang.org/x/tools v0.36.0
google.golang.org/protobuf v1.36.7
modernc.org/sqlite v1.38.2
sigs.k8s.io/yaml v1.6.0
@@ -79,6 +77,7 @@ require (
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20240909124753-873cd0166683 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/maxbrunsfeld/counterfeiter/v6 v6.11.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/nxadm/tail v1.4.11 // indirect
@@ -103,6 +102,7 @@ require (
go.yaml.in/yaml/v2 v2.4.2 // indirect
golang.org/x/mod v0.27.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/tools v0.36.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/mathutil v1.7.1 // indirect
@@ -114,3 +114,9 @@ replace github.com/gobwas/glob v0.2.3 => github.com/calmh/glob v0.0.0-2022061508
// https://github.com/mattn/go-sqlite3/pull/1338
replace github.com/mattn/go-sqlite3 v1.14.31 => github.com/calmh/go-sqlite3 v1.14.32-0.20250812195006-80712c77b76a
tool (
github.com/calmh/xdr/cmd/genxdr
github.com/maxbrunsfeld/counterfeiter/v6
golang.org/x/tools/cmd/goimports
)

View File

@@ -82,6 +82,7 @@
"Custom Range": "Custom na Saklaw",
"Danger!": "Panganib!",
"Database Location": "Lokasyon ng Database",
"Debug": "Debug",
"Debugging Facilities": "Mga Facility ng Pag-debug",
"Default": "Default",
"Default Configuration": "Default na Configuration",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Rate Limit ng Papasok (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Maaring sirain ng maling pagsasaayos ang nilalaman ng iyong mga folder at gawing inoperable ang Syncthing.",
"Incorrect user name or password.": "Maling user name o password.",
"Info": "Impormasyon",
"Internally used paths:": "Mga internal na ginamit na path:",
"Introduced By": "Ipinakilala Ni/Ng",
"Introducer": "Tagapagpakilala",
@@ -227,6 +229,7 @@
"Learn more": "Matuto pa",
"Learn more at {%url%}": "Matuto pa sa {{url}}",
"Limit": "Limitasyon",
"Limit Bandwidth in LAN": "Limitahan ang Bandwidth sa LAN",
"Listener Failures": "Mga Pagbibigo ng Listener",
"Listener Status": "Status ng Listener",
"Listeners": "Mga Listener",

View File

@@ -82,6 +82,7 @@
"Custom Range": "Plage personnalisée",
"Danger!": "Attention !",
"Database Location": "Emplacement de la base de données",
"Debug": "Débogage",
"Debugging Facilities": "Outils de débogage",
"Default": "Par défaut",
"Default Configuration": "Préférences pour les créations (non rétroactif)",
@@ -210,6 +211,7 @@
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Incorrect user name or password.": "Nom d'utilisateur ou mot de passe incorrect.",
"Info": "Informations",
"Internally used paths:": "Chemins utilisés en interne :",
"Introduced By": "Introduit par",
"Introducer": "Appareil introducteur",
@@ -322,7 +324,7 @@
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Les versions préliminaires contiennent les dernières fonctionnalités et derniers correctifs. Elles sont identiques aux traditionnelles mises à jour bimensuelles.",
"Remote Devices": "Autres appareils",
"Remote GUI": "IHM distant",
"Remove": "Supprimer",
"Remove": "Enlever",
"Remove Device": "Supprimer l'appareil",
"Remove Folder": "Supprimer le partage",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant du partage. Doit être le même sur tous les appareils concernés (généré aléatoirement, mais modifiable à la création, par exemple pour faire entrer un appareil dans un partage pré-existant actuellement non connecté mais dont on connaît déjà l'ID, ou s'il n'y a personne à l'autre bout pour vous inviter à y participer).",

View File

@@ -159,7 +159,7 @@
<div class="row">
<span class="col-md-8" translate>Incoming Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" />
<input name="maxRecvKbps" id="maxRecvKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxRecvKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxRecvKbps.$valid && deviceEditor.maxRecvKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>
@@ -168,7 +168,7 @@
<div class="row">
<span class="col-md-8" translate>Outgoing Rate Limit (KiB/s)</span>
<div class="col-md-4">
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" />
<input name="maxSendKbps" id="maxSendKbps" class="form-control" type="number" pattern="\d+" ng-model="currentDevice.maxSendKbps" min="0" step="1024" />
</div>
</div>
<p class="help-block" ng-if="!deviceEditor.maxSendKbps.$valid && deviceEditor.maxSendKbps.$dirty" translate>The rate limit must be a non-negative number (0: no limit)</p>

View File

@@ -146,7 +146,7 @@
<div class="form-group" ng-if="internalVersioningEnabled()" ng-class="{'has-error': folderEditor.cleanupIntervalS.$invalid && folderEditor.cleanupIntervalS.$dirty}">
<label translate for="cleanupIntervalS">Cleanup Interval</label>
<div class="input-group">
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" aria-required="true" />
<input name="cleanupIntervalS" id="cleanupIntervalS" class="form-control text-right" type="number" ng-model="currentFolder._guiVersioning.cleanupIntervalS" required="" min="0" max="31536000" step="3600" aria-required="true" />
<div class="input-group-addon" translate>seconds</div>
</div>
<p class="help-block">

View File

@@ -194,7 +194,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxRecvKbps.$invalid && settingsEditor.MaxRecvKbps.$dirty}">
<label translate for="MaxRecvKbps">Incoming Rate Limit (KiB/s)</label>
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" />
<input id="MaxRecvKbps" name="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.maxRecvKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxRecvKbps.$error.min && settingsEditor.MaxRecvKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>
@@ -203,7 +203,7 @@
<div class="col-md-6">
<div class="form-group" ng-class="{'has-error': settingsEditor.MaxSendKbps.$invalid && settingsEditor.MaxSendKbps.$dirty}">
<label translate for="MaxSendKbps">Outgoing Rate Limit (KiB/s)</label>
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" />
<input id="MaxSendKbps" name="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.maxSendKbps" min="0" step="1024" />
<p class="help-block">
<span translate ng-if="settingsEditor.MaxSendKbps.$error.min && settingsEditor.MaxSendKbps.$dirty">The rate limit must be a non-negative number (0: no limit)</span>
</p>

View File

@@ -10,6 +10,7 @@ import (
"database/sql"
"embed"
"io/fs"
"log/slog"
"net/url"
"path/filepath"
"strconv"
@@ -19,11 +20,12 @@ import (
"time"
"github.com/jmoiron/sqlx"
"github.com/syncthing/syncthing/internal/slogutil"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/protocol"
)
const currentSchemaVersion = 3
const currentSchemaVersion = 4
//go:embed sql/**
var embedded embed.FS
@@ -81,13 +83,20 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
},
}
tx, err := db.sql.Beginx()
if err != nil {
return nil, wrap(err)
}
defer tx.Rollback()
for _, script := range schemaScripts {
if err := db.runScripts(script); err != nil {
if err := db.runScripts(tx, script); err != nil {
return nil, wrap(err)
}
}
ver, _ := db.getAppliedSchemaVersion()
ver, _ := db.getAppliedSchemaVersion(tx)
shouldVacuum := false
if ver.SchemaVersion > 0 {
filter := func(scr string) bool {
scr = filepath.Base(scr)
@@ -99,20 +108,37 @@ func openBase(path string, maxConns int, pragmas, schemaScripts, migrationScript
if err != nil {
return false
}
return int(n) > ver.SchemaVersion
if int(n) > ver.SchemaVersion {
slog.Info("Applying database migration", slogutil.FilePath(db.baseName), slog.String("script", scr))
return true
}
return false
}
for _, script := range migrationScripts {
if err := db.runScripts(script, filter); err != nil {
if err := db.runScripts(tx, script, filter); err != nil {
return nil, wrap(err)
}
shouldVacuum = true
}
}
// Set the current schema version, if not already set
if err := db.setAppliedSchemaVersion(currentSchemaVersion); err != nil {
if err := db.setAppliedSchemaVersion(tx, currentSchemaVersion); err != nil {
return nil, wrap(err)
}
if err := tx.Commit(); err != nil {
return nil, wrap(err)
}
if shouldVacuum {
// We applied migrations and should take the opportunity to vacuum
// the database.
if err := db.vacuumAndOptimize(); err != nil {
return nil, wrap(err)
}
}
return db, nil
}
@@ -188,6 +214,20 @@ func (s *baseDB) expandTemplateVars(tpl string) string {
return sb.String()
}
func (s *baseDB) vacuumAndOptimize() error {
stmts := []string{
"VACUUM;",
"PRAGMA optimize;",
"PRAGMA wal_checkpoint(truncate);",
}
for _, stmt := range stmts {
if _, err := s.sql.Exec(stmt); err != nil {
return wrap(err, stmt)
}
}
return nil
}
type stmt interface {
Exec(args ...any) (sql.Result, error)
Get(dest any, args ...any) error
@@ -204,18 +244,12 @@ func (f failedStmt) Get(_ any, _ ...any) error { return f.err }
func (f failedStmt) Queryx(_ ...any) (*sqlx.Rows, error) { return nil, f.err }
func (f failedStmt) Select(_ any, _ ...any) error { return f.err }
func (s *baseDB) runScripts(glob string, filter ...func(s string) bool) error {
func (s *baseDB) runScripts(tx *sqlx.Tx, glob string, filter ...func(s string) bool) error {
scripts, err := fs.Glob(embedded, glob)
if err != nil {
return wrap(err)
}
tx, err := s.sql.Begin()
if err != nil {
return wrap(err)
}
defer tx.Rollback() //nolint:errcheck
nextScript:
for _, scr := range scripts {
for _, fn := range filter {
@@ -238,7 +272,7 @@ nextScript:
}
}
return wrap(tx.Commit())
return nil
}
type schemaVersion struct {
@@ -251,20 +285,20 @@ func (s *schemaVersion) AppliedTime() time.Time {
return time.Unix(0, s.AppliedAt)
}
func (s *baseDB) setAppliedSchemaVersion(ver int) error {
_, err := s.stmt(`
func (s *baseDB) setAppliedSchemaVersion(tx *sqlx.Tx, ver int) error {
_, err := tx.Exec(`
INSERT OR IGNORE INTO schemamigrations (schema_version, applied_at, syncthing_version)
VALUES (?, ?, ?)
`).Exec(ver, time.Now().UnixNano(), build.LongVersion)
`, ver, time.Now().UnixNano(), build.LongVersion)
return wrap(err)
}
func (s *baseDB) getAppliedSchemaVersion() (schemaVersion, error) {
func (s *baseDB) getAppliedSchemaVersion(tx *sqlx.Tx) (schemaVersion, error) {
var v schemaVersion
err := s.stmt(`
err := tx.Get(&v, `
SELECT schema_version as schemaversion, applied_at as appliedat, syncthing_version as syncthingversion FROM schemamigrations
ORDER BY schema_version DESC
LIMIT 1
`).Get(&v)
`)
return v, wrap(err)
}

View File

@@ -7,7 +7,6 @@
package sqlite
import (
"context"
"fmt"
"testing"
"time"
@@ -21,7 +20,7 @@ import (
var globalFi protocol.FileInfo
func BenchmarkUpdate(b *testing.B) {
db, err := OpenTemp()
db, err := Open(b.TempDir())
if err != nil {
b.Fatal(err)
}
@@ -30,19 +29,20 @@ func BenchmarkUpdate(b *testing.B) {
b.Fatal(err)
}
})
svc := db.Service(time.Hour).(*Service)
fs := make([]protocol.FileInfo, 100)
t0 := time.Now()
seed := 0
size := 1000
const numBlocks = 500
fdb, err := db.getFolderDB(folderID, true)
if err != nil {
b.Fatal(err)
}
size := 10000
for size < 200_000 {
t0 := time.Now()
if err := svc.periodic(context.Background()); err != nil {
b.Fatal(err)
}
b.Log("garbage collect in", time.Since(t0))
for {
local, err := db.CountLocal(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -53,17 +53,28 @@ func BenchmarkUpdate(b *testing.B) {
}
fs := make([]protocol.FileInfo, 1000)
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
}
}
b.Run(fmt.Sprintf("Insert100Loc@%d", size), func(b *testing.B) {
var files, blocks int
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM files`).Scan(&files); err != nil {
b.Fatal(err)
}
if err := fdb.sql.QueryRowx(`SELECT count(*) FROM blocks`).Scan(&blocks); err != nil {
b.Fatal(err)
}
d := time.Since(t0)
b.Logf("t=%s, files=%d, blocks=%d, files/s=%.01f, blocks/s=%.01f", d, files, blocks, float64(files)/d.Seconds(), float64(blocks)/d.Seconds())
b.Run(fmt.Sprintf("n=Insert100Loc/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i] = genFile(rand.String(24), 64, 0)
fs[i] = genFile(rand.String(24), numBlocks, 0)
}
if err := db.Update(folderID, protocol.LocalDeviceID, fs); err != nil {
b.Fatal(err)
@@ -72,7 +83,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepBlocks100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepBlocks100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -86,7 +97,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("RepSame100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RepSame100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Version = fs[i].Version.Update(42)
@@ -98,7 +109,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("Insert100Rem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=Insert100Rem/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
fs[i].Blocks = genBlocks(fs[i].Name, seed, 64)
@@ -112,7 +123,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetGlobal100@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetGlobal100/size=%d", size), func(b *testing.B) {
for range b.N {
for i := range fs {
_, ok, err := db.GetGlobalFile(folderID, fs[i].Name)
@@ -127,7 +138,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(b.N)*100.0/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalSequenced@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalSequenced/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
cur, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
@@ -146,7 +157,21 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("GetDeviceSequenceLoc@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=AllLocalBlocksWithHash/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllLocalBlocksWithHash(folderID, globalFi.Blocks[0].Hash)
for range it {
count++
}
if err := errFn(); err != nil {
b.Fatal(err)
}
}
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "blocks/s")
})
b.Run(fmt.Sprintf("n=GetDeviceSequenceLoc/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.LocalDeviceID)
if err != nil {
@@ -154,7 +179,7 @@ func BenchmarkUpdate(b *testing.B) {
}
}
})
b.Run(fmt.Sprintf("GetDeviceSequenceRem@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=GetDeviceSequenceRem/size=%d", size), func(b *testing.B) {
for range b.N {
_, err := db.GetDeviceSequence(folderID, protocol.DeviceID{42})
if err != nil {
@@ -163,7 +188,7 @@ func BenchmarkUpdate(b *testing.B) {
}
})
b.Run(fmt.Sprintf("RemoteNeed@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=RemoteNeed/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.DeviceID{42}, config.PullOrderAlphabetic, 0, 0)
@@ -178,7 +203,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
b.Run(fmt.Sprintf("LocalNeed100Largest@%d", size), func(b *testing.B) {
b.Run(fmt.Sprintf("n=LocalNeed100Largest/size=%d", size), func(b *testing.B) {
count := 0
for range b.N {
it, errFn := db.AllNeededGlobalFiles(folderID, protocol.LocalDeviceID, config.PullOrderLargestFirst, 100, 0)
@@ -193,7 +218,7 @@ func BenchmarkUpdate(b *testing.B) {
b.ReportMetric(float64(count)/b.Elapsed().Seconds(), "files/s")
})
size <<= 1
size += 1000
}
}
@@ -202,7 +227,7 @@ func TestBenchmarkDropAllRemote(t *testing.T) {
t.Skip("slow test")
}
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -17,7 +17,7 @@ import (
func TestNeed(t *testing.T) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -115,7 +115,7 @@ func TestDropRecalcsGlobal(t *testing.T) {
func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
t.Helper()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -181,7 +181,7 @@ func testDropWithDropper(t *testing.T, dropper func(t *testing.T, db *DB)) {
func TestNeedDeleted(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -224,7 +224,7 @@ func TestNeedDeleted(t *testing.T) {
func TestDontNeedIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -271,7 +271,7 @@ func TestDontNeedIgnored(t *testing.T) {
func TestDontNeedRemoteInvalid(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -322,7 +322,7 @@ func TestDontNeedRemoteInvalid(t *testing.T) {
func TestRemoteDontNeedLocalIgnored(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -364,7 +364,7 @@ func TestRemoteDontNeedLocalIgnored(t *testing.T) {
func TestLocalDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -406,7 +406,7 @@ func TestLocalDontNeedDeletedMissing(t *testing.T) {
func TestRemoteDontNeedDeletedMissing(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -474,7 +474,7 @@ func TestRemoteDontNeedDeletedMissing(t *testing.T) {
func TestNeedRemoteSymlinkAndDir(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -517,7 +517,7 @@ func TestNeedRemoteSymlinkAndDir(t *testing.T) {
func TestNeedPagination(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -583,7 +583,7 @@ func TestDeletedAfterConflict(t *testing.T) {
// 23NHXGS FILE TreeSizeFreeSetup.exe 445 --- 2025-06-23T03:16:10.2804841Z 13832808 -nG---- HZJYWFM:1751507473 7B4kLitF
// JKX6ZDN FILE TreeSizeFreeSetup.exe 320 --- 2025-06-23T03:16:10.2804841Z 13832808 ------- JKX6ZDN:1750992570 7B4kLitF
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -15,7 +15,7 @@ import (
func TestIndexIDs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -16,7 +16,7 @@ import (
func TestBlocks(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}
@@ -89,7 +89,7 @@ func TestBlocks(t *testing.T) {
func TestBlocksDeleted(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}
@@ -141,7 +141,7 @@ func TestBlocksDeleted(t *testing.T) {
func TestRemoteSequence(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -14,7 +14,7 @@ import (
func TestMtimePairs(t *testing.T) {
t.Parallel()
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal()
}

View File

@@ -131,19 +131,6 @@ func OpenForMigration(path string) (*DB, error) {
return db, nil
}
func OpenTemp() (*DB, error) {
// SQLite has a memory mode, but it works differently with concurrency
// compared to what we need with the WAL mode. So, no memory databases
// for now.
dir, err := os.MkdirTemp("", "syncthing-db")
if err != nil {
return nil, wrap(err)
}
path := filepath.Join(dir, "db")
slog.Debug("Test DB", slogutil.FilePath(path))
return Open(path)
}
func (s *DB) Close() error {
s.folderDBsMut.Lock()
defer s.folderDBsMut.Unlock()

View File

@@ -36,7 +36,7 @@ const (
func TestBasics(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -81,7 +81,12 @@ func TestBasics(t *testing.T) {
)
t.Run("SchemaVersion", func(t *testing.T) {
ver, err := sdb.getAppliedSchemaVersion()
tx, err := sdb.sql.Beginx()
if err != nil {
t.Fatal(err)
}
defer tx.Rollback()
ver, err := sdb.getAppliedSchemaVersion(tx)
if err != nil {
t.Fatal(err)
}
@@ -427,7 +432,7 @@ func TestBasics(t *testing.T) {
func TestPrefixGlobbing(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -496,7 +501,7 @@ func TestPrefixGlobbing(t *testing.T) {
func TestPrefixGlobbingStar(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -529,7 +534,7 @@ func TestPrefixGlobbingStar(t *testing.T) {
}
func TestAvailability(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -596,7 +601,7 @@ func TestAvailability(t *testing.T) {
}
func TestDropFilesNamed(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -640,7 +645,7 @@ func TestDropFilesNamed(t *testing.T) {
}
func TestDropFolder(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -700,7 +705,7 @@ func TestDropFolder(t *testing.T) {
}
func TestDropDevice(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -764,7 +769,7 @@ func TestDropDevice(t *testing.T) {
}
func TestDropAllFiles(t *testing.T) {
db, err := OpenTemp()
db, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -926,7 +931,7 @@ func TestConcurrentUpdateSelect(t *testing.T) {
func TestAllForBlocksHash(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -988,7 +993,7 @@ func TestAllForBlocksHash(t *testing.T) {
func TestBlocklistGarbageCollection(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1067,7 +1072,7 @@ func TestBlocklistGarbageCollection(t *testing.T) {
func TestInsertLargeFile(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1119,7 +1124,7 @@ func TestStrangeDeletedGlobalBug(t *testing.T) {
t.Parallel()
sdb, err := OpenTemp()
sdb, err := Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -114,11 +114,9 @@ func (s *folderDB) Update(device protocol.DeviceID, fs []protocol.FileInfo) erro
if err != nil {
return wrap(err, "marshal blocklist")
}
if _, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
if res, err := insertBlockListStmt.Exec(f.BlocksHash, bs); err != nil {
return wrap(err, "insert blocklist")
}
if device == protocol.LocalDeviceID {
} else if aff, _ := res.RowsAffected(); aff != 0 && device == protocol.LocalDeviceID {
// Insert all blocks
if err := s.insertBlocksLocked(txp, f.BlocksHash, f.Blocks); err != nil {
return wrap(err, "insert blocks")

View File

@@ -2,7 +2,7 @@ These SQL scripts are embedded in the binary.
Scripts in `schema/` are run at every startup, in alphanumerical order.
Scripts in `migrations/` are run when a migration is needed; the must begin
Scripts in `migrations/` are run when a migration is needed; they must begin
with a number that equals the schema version that results from that
migration. Migrations are not run on initial database creation, so the
scripts in `schema/` should create the latest version.

View File

@@ -0,0 +1,51 @@
-- Copyright (C) 2025 The Syncthing Authors.
--
-- This Source Code Form is subject to the terms of the Mozilla Public
-- License, v. 2.0. If a copy of the MPL was not distributed with this file,
-- You can obtain one at https://mozilla.org/MPL/2.0/.
-- Copy blocks to new table with fewer indexes
DROP TABLE IF EXISTS blocks_v4
;
CREATE TABLE blocks_v4 (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;
INSERT INTO blocks_v4 (hash, blocklist_hash, idx, offset, size)
SELECT hash, blocklist_hash, idx, offset, size FROM blocks ORDER BY hash, blocklist_hash, idx
;
DROP TABLE blocks
;
ALTER TABLE blocks_v4 RENAME TO blocks
;
-- Copy blocklists to new table with fewer indexes
DROP TABLE IF EXISTS blocklists_v4
;
CREATE TABLE blocklists_v4 (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT, WITHOUT ROWID
;
INSERT INTO blocklists_v4 (blocklist_hash, blprotobuf)
SELECT blocklist_hash, blprotobuf FROM blocklists ORDER BY blocklist_hash
;
DROP TABLE blocklists
;
ALTER TABLE blocklists_v4 RENAME TO blocklists
;

View File

@@ -14,21 +14,21 @@
CREATE TABLE IF NOT EXISTS blocklists (
blocklist_hash BLOB NOT NULL PRIMARY KEY,
blprotobuf BLOB NOT NULL
) STRICT
) STRICT, WITHOUT ROWID
;
-- Blocks
--
-- For all local files we store the blocks individually for quick lookup. A
-- given block can exist in multiple blocklists and at multiple offsets in a
-- blocklist.
-- blocklist. We eschew most indexes here as inserting millions of blocks is
-- common and performance is critical.
CREATE TABLE IF NOT EXISTS blocks (
hash BLOB NOT NULL,
blocklist_hash BLOB NOT NULL,
idx INTEGER NOT NULL,
offset INTEGER NOT NULL,
size INTEGER NOT NULL,
PRIMARY KEY (hash, blocklist_hash, idx),
FOREIGN KEY(blocklist_hash) REFERENCES blocklists(blocklist_hash) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED
) STRICT
PRIMARY KEY(hash, blocklist_hash, idx)
) STRICT, WITHOUT ROWID
;

View File

@@ -17,7 +17,7 @@ import (
func TestNamespacedInt(t *testing.T) {
t.Parallel()
ldb, err := sqlite.OpenTemp()
ldb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -130,7 +130,7 @@ func (c *mockClock) wind(t time.Duration) {
func TestTokenManager(t *testing.T) {
t.Parallel()
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -82,7 +82,7 @@ func TestStopAfterBrokenConfig(t *testing.T) {
}
w := config.Wrap("/dev/null", cfg, protocol.LocalDeviceID, events.NoopLogger)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1049,7 +1049,7 @@ func startHTTPWithShutdownTimeout(t *testing.T, cfg config.Wrapper, shutdownTime
// Instantiate the API service
urService := ur.New(cfg, m, connections, false)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}
@@ -1567,7 +1567,7 @@ func TestEventMasks(t *testing.T) {
cfg := newMockedConfig()
defSub := new(eventmocks.BufferedSubscription)
diskSub := new(eventmocks.BufferedSubscription)
mdb, err := sqlite.OpenTemp()
mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -1809,60 +1809,6 @@ func (fake *Wrapper) UnsubscribeArgsForCall(i int) config.Committer {
func (fake *Wrapper) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.configPathMutex.RLock()
defer fake.configPathMutex.RUnlock()
fake.defaultDeviceMutex.RLock()
defer fake.defaultDeviceMutex.RUnlock()
fake.defaultFolderMutex.RLock()
defer fake.defaultFolderMutex.RUnlock()
fake.defaultIgnoresMutex.RLock()
defer fake.defaultIgnoresMutex.RUnlock()
fake.deviceMutex.RLock()
defer fake.deviceMutex.RUnlock()
fake.deviceListMutex.RLock()
defer fake.deviceListMutex.RUnlock()
fake.devicesMutex.RLock()
defer fake.devicesMutex.RUnlock()
fake.folderMutex.RLock()
defer fake.folderMutex.RUnlock()
fake.folderListMutex.RLock()
defer fake.folderListMutex.RUnlock()
fake.folderPasswordsMutex.RLock()
defer fake.folderPasswordsMutex.RUnlock()
fake.foldersMutex.RLock()
defer fake.foldersMutex.RUnlock()
fake.gUIMutex.RLock()
defer fake.gUIMutex.RUnlock()
fake.ignoredDeviceMutex.RLock()
defer fake.ignoredDeviceMutex.RUnlock()
fake.ignoredDevicesMutex.RLock()
defer fake.ignoredDevicesMutex.RUnlock()
fake.ignoredFolderMutex.RLock()
defer fake.ignoredFolderMutex.RUnlock()
fake.lDAPMutex.RLock()
defer fake.lDAPMutex.RUnlock()
fake.modifyMutex.RLock()
defer fake.modifyMutex.RUnlock()
fake.myIDMutex.RLock()
defer fake.myIDMutex.RUnlock()
fake.optionsMutex.RLock()
defer fake.optionsMutex.RUnlock()
fake.rawCopyMutex.RLock()
defer fake.rawCopyMutex.RUnlock()
fake.removeDeviceMutex.RLock()
defer fake.removeDeviceMutex.RUnlock()
fake.removeFolderMutex.RLock()
defer fake.removeFolderMutex.RUnlock()
fake.requiresRestartMutex.RLock()
defer fake.requiresRestartMutex.RUnlock()
fake.saveMutex.RLock()
defer fake.saveMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.subscribeMutex.RLock()
defer fake.subscribeMutex.RUnlock()
fake.unsubscribeMutex.RLock()
defer fake.unsubscribeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
//go:generate go tool counterfeiter -o mocks/mocked_wrapper.go --fake-name Wrapper . Wrapper
package config

View File

@@ -403,18 +403,6 @@ func (fake *Service) ServeReturnsOnCall(i int, result1 error) {
func (fake *Service) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.allAddressesMutex.RLock()
defer fake.allAddressesMutex.RUnlock()
fake.connectionStatusMutex.RLock()
defer fake.connectionStatusMutex.RUnlock()
fake.externalAddressesMutex.RLock()
defer fake.externalAddressesMutex.RUnlock()
fake.listenerStatusMutex.RLock()
defer fake.listenerStatusMutex.RUnlock()
fake.nATTypeMutex.RLock()
defer fake.nATTypeMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/service.go --fake-name Service . Service
//go:generate go tool counterfeiter -o mocks/service.go --fake-name Service . Service
package connections

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/manager.go --fake-name Manager . Manager
//go:generate go tool counterfeiter -o mocks/manager.go --fake-name Manager . Manager
package discover

View File

@@ -420,18 +420,6 @@ func (fake *Manager) StringReturnsOnCall(i int, result1 string) {
func (fake *Manager) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.cacheMutex.RLock()
defer fake.cacheMutex.RUnlock()
fake.childErrorsMutex.RLock()
defer fake.childErrorsMutex.RUnlock()
fake.errorMutex.RLock()
defer fake.errorMutex.RUnlock()
fake.lookupMutex.RLock()
defer fake.lookupMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.stringMutex.RLock()
defer fake.stringMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
//go:generate go tool counterfeiter -o mocks/buffered_subscription.go --fake-name BufferedSubscription . BufferedSubscription
// Package events provides event subscription and polling functionality.
package events

View File

@@ -160,10 +160,6 @@ func (fake *BufferedSubscription) SinceReturnsOnCall(i int, result1 []events.Eve
func (fake *BufferedSubscription) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.maskMutex.RLock()
defer fake.maskMutex.RUnlock()
fake.sinceMutex.RLock()
defer fake.sinceMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
//go:generate go tool counterfeiter -o mocks/folderSummaryService.go --fake-name FolderSummaryService . FolderSummaryService
package model

View File

@@ -165,10 +165,6 @@ func (fake *FolderSummaryService) SummaryReturnsOnCall(i int, result1 *model.Fol
func (fake *FolderSummaryService) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.summaryMutex.RLock()
defer fake.summaryMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -3837,112 +3837,6 @@ func (fake *Model) WatchErrorReturnsOnCall(i int, result1 error) {
func (fake *Model) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
fake.addConnectionMutex.RLock()
defer fake.addConnectionMutex.RUnlock()
fake.allGlobalFilesMutex.RLock()
defer fake.allGlobalFilesMutex.RUnlock()
fake.availabilityMutex.RLock()
defer fake.availabilityMutex.RUnlock()
fake.bringToFrontMutex.RLock()
defer fake.bringToFrontMutex.RUnlock()
fake.closedMutex.RLock()
defer fake.closedMutex.RUnlock()
fake.clusterConfigMutex.RLock()
defer fake.clusterConfigMutex.RUnlock()
fake.completionMutex.RLock()
defer fake.completionMutex.RUnlock()
fake.connectedToMutex.RLock()
defer fake.connectedToMutex.RUnlock()
fake.connectionStatsMutex.RLock()
defer fake.connectionStatsMutex.RUnlock()
fake.currentFolderFileMutex.RLock()
defer fake.currentFolderFileMutex.RUnlock()
fake.currentGlobalFileMutex.RLock()
defer fake.currentGlobalFileMutex.RUnlock()
fake.currentIgnoresMutex.RLock()
defer fake.currentIgnoresMutex.RUnlock()
fake.delayScanMutex.RLock()
defer fake.delayScanMutex.RUnlock()
fake.deviceStatisticsMutex.RLock()
defer fake.deviceStatisticsMutex.RUnlock()
fake.dismissPendingDeviceMutex.RLock()
defer fake.dismissPendingDeviceMutex.RUnlock()
fake.dismissPendingFolderMutex.RLock()
defer fake.dismissPendingFolderMutex.RUnlock()
fake.downloadProgressMutex.RLock()
defer fake.downloadProgressMutex.RUnlock()
fake.folderErrorsMutex.RLock()
defer fake.folderErrorsMutex.RUnlock()
fake.folderProgressBytesCompletedMutex.RLock()
defer fake.folderProgressBytesCompletedMutex.RUnlock()
fake.folderStatisticsMutex.RLock()
defer fake.folderStatisticsMutex.RUnlock()
fake.getFolderVersionsMutex.RLock()
defer fake.getFolderVersionsMutex.RUnlock()
fake.globalDirectoryTreeMutex.RLock()
defer fake.globalDirectoryTreeMutex.RUnlock()
fake.globalSizeMutex.RLock()
defer fake.globalSizeMutex.RUnlock()
fake.indexMutex.RLock()
defer fake.indexMutex.RUnlock()
fake.indexUpdateMutex.RLock()
defer fake.indexUpdateMutex.RUnlock()
fake.loadIgnoresMutex.RLock()
defer fake.loadIgnoresMutex.RUnlock()
fake.localChangedFolderFilesMutex.RLock()
defer fake.localChangedFolderFilesMutex.RUnlock()
fake.localFilesMutex.RLock()
defer fake.localFilesMutex.RUnlock()
fake.localFilesSequencedMutex.RLock()
defer fake.localFilesSequencedMutex.RUnlock()
fake.localSizeMutex.RLock()
defer fake.localSizeMutex.RUnlock()
fake.needFolderFilesMutex.RLock()
defer fake.needFolderFilesMutex.RUnlock()
fake.needSizeMutex.RLock()
defer fake.needSizeMutex.RUnlock()
fake.onHelloMutex.RLock()
defer fake.onHelloMutex.RUnlock()
fake.overrideMutex.RLock()
defer fake.overrideMutex.RUnlock()
fake.pendingDevicesMutex.RLock()
defer fake.pendingDevicesMutex.RUnlock()
fake.pendingFoldersMutex.RLock()
defer fake.pendingFoldersMutex.RUnlock()
fake.receiveOnlySizeMutex.RLock()
defer fake.receiveOnlySizeMutex.RUnlock()
fake.remoteNeedFolderFilesMutex.RLock()
defer fake.remoteNeedFolderFilesMutex.RUnlock()
fake.remoteSequencesMutex.RLock()
defer fake.remoteSequencesMutex.RUnlock()
fake.requestMutex.RLock()
defer fake.requestMutex.RUnlock()
fake.requestGlobalMutex.RLock()
defer fake.requestGlobalMutex.RUnlock()
fake.resetFolderMutex.RLock()
defer fake.resetFolderMutex.RUnlock()
fake.restoreFolderVersionsMutex.RLock()
defer fake.restoreFolderVersionsMutex.RUnlock()
fake.revertMutex.RLock()
defer fake.revertMutex.RUnlock()
fake.scanFolderMutex.RLock()
defer fake.scanFolderMutex.RUnlock()
fake.scanFolderSubdirsMutex.RLock()
defer fake.scanFolderSubdirsMutex.RUnlock()
fake.scanFoldersMutex.RLock()
defer fake.scanFoldersMutex.RUnlock()
fake.sequenceMutex.RLock()
defer fake.sequenceMutex.RUnlock()
fake.serveMutex.RLock()
defer fake.serveMutex.RUnlock()
fake.setIgnoresMutex.RLock()
defer fake.setIgnoresMutex.RUnlock()
fake.stateMutex.RLock()
defer fake.stateMutex.RUnlock()
fake.usageReportingStatsMutex.RLock()
defer fake.usageReportingStatsMutex.RUnlock()
fake.watchErrorMutex.RLock()
defer fake.watchErrorMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,8 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
//go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
//go:generate counterfeiter -o mocks/model.go --fake-name Model . Model
//go:generate go tool counterfeiter -o mocks/model.go --fake-name Model . Model
package model
@@ -2549,6 +2548,12 @@ func (m *model) numHashers(folder string) int {
m.mut.RLock()
folderCfg := m.folderCfgs[folder]
numFolders := max(1, len(m.folderCfgs))
// MaxFolderConcurrency already limits the number of scanned folders, so
// prefer it over the overall number of folders to avoid limiting performance
// further for no reason.
if concurrency := m.cfg.Options().MaxFolderConcurrency(); concurrency > 0 {
numFolders = min(numFolders, concurrency)
}
m.mut.RUnlock()
if folderCfg.Hashers > 0 {
@@ -2556,16 +2561,17 @@ func (m *model) numHashers(folder string) int {
return folderCfg.Hashers
}
numCpus := runtime.GOMAXPROCS(-1)
if build.IsWindows || build.IsDarwin || build.IsIOS || build.IsAndroid {
// Interactive operating systems; don't load the system too heavily by
// default.
return 1
numCpus = max(1, numCpus/4)
}
// For other operating systems and architectures, lets try to get some
// work done... Divide the available CPU cores among the configured
// folders.
- if perFolder := runtime.GOMAXPROCS(-1) / numFolders; perFolder > 0 {
+ if perFolder := numCpus / numFolders; perFolder > 0 {
return perFolder
}

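Taken together, these two hunks soften the old hard `return 1` on interactive systems to a quarter of GOMAXPROCS, and cap the per-folder divisor at MaxFolderConcurrency. A hedged sketch of the resulting policy as a standalone function (not the actual Syncthing source; parameter names are illustrative):

func effectiveHashers(configured, folders, maxFolderConcurrency, gomaxprocs int, interactiveOS bool) int {
	if configured > 0 {
		return configured // an explicit per-folder setting still wins
	}
	n := max(1, folders)
	if maxFolderConcurrency > 0 {
		// Only this many folders scan at once, so don't dilute the CPU
		// budget across folders that cannot run concurrently anyway.
		n = min(n, maxFolderConcurrency)
	}
	cpus := gomaxprocs
	if interactiveOS {
		cpus = max(1, cpus/4) // leave headroom on desktop and mobile systems
	}
	if perFolder := cpus / n; perFolder > 0 {
		return perFolder
	}
	return 1 // more folders than cores: one hasher each
}

For example, with GOMAXPROCS=8, ten folders and MaxFolderConcurrency=4, a server OS now gets 8/4 = 2 hashers per folder, where the old expression 8/10 rounded to zero.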
View File

@@ -149,7 +149,7 @@ type testModel struct {
func newModel(t testing.TB, cfg config.Wrapper, id protocol.DeviceID, protectedFiles []string) *testModel {
t.Helper()
evLogger := events.NewLogger()
- mdb, err := sqlite.OpenTemp()
+ mdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

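The tests in this compare consistently replace sqlite.OpenTemp() with sqlite.Open(t.TempDir()), tying the database files to the test's own temporary directory. A minimal hedged sketch of the pattern (TestSomething is hypothetical; only sqlite.Open comes from the diff):

func TestSomething(t *testing.T) {
	// t.TempDir() is created fresh per test and removed automatically when
	// the test completes, so the SQLite files never outlive the test run.
	sdb, err := sqlite.Open(t.TempDir())
	if err != nil {
		t.Fatal(err)
	}
	_ = sdb // ... exercise the database ...
}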
View File

@@ -582,24 +582,6 @@ func (fake *mockedConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *mockedConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
- fake.connectionIDMutex.RLock()
- defer fake.connectionIDMutex.RUnlock()
- fake.cryptoMutex.RLock()
- defer fake.cryptoMutex.RUnlock()
- fake.establishedAtMutex.RLock()
- defer fake.establishedAtMutex.RUnlock()
- fake.isLocalMutex.RLock()
- defer fake.isLocalMutex.RUnlock()
- fake.priorityMutex.RLock()
- defer fake.priorityMutex.RUnlock()
- fake.remoteAddrMutex.RLock()
- defer fake.remoteAddrMutex.RUnlock()
- fake.stringMutex.RLock()
- defer fake.stringMutex.RUnlock()
- fake.transportMutex.RLock()
- defer fake.transportMutex.RUnlock()
- fake.typeMutex.RLock()
- defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -1144,44 +1144,6 @@ func (fake *Connection) TypeReturnsOnCall(i int, result1 string) {
func (fake *Connection) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
- fake.closeMutex.RLock()
- defer fake.closeMutex.RUnlock()
- fake.closedMutex.RLock()
- defer fake.closedMutex.RUnlock()
- fake.clusterConfigMutex.RLock()
- defer fake.clusterConfigMutex.RUnlock()
- fake.connectionIDMutex.RLock()
- defer fake.connectionIDMutex.RUnlock()
- fake.cryptoMutex.RLock()
- defer fake.cryptoMutex.RUnlock()
- fake.deviceIDMutex.RLock()
- defer fake.deviceIDMutex.RUnlock()
- fake.downloadProgressMutex.RLock()
- defer fake.downloadProgressMutex.RUnlock()
- fake.establishedAtMutex.RLock()
- defer fake.establishedAtMutex.RUnlock()
- fake.indexMutex.RLock()
- defer fake.indexMutex.RUnlock()
- fake.indexUpdateMutex.RLock()
- defer fake.indexUpdateMutex.RUnlock()
- fake.isLocalMutex.RLock()
- defer fake.isLocalMutex.RUnlock()
- fake.priorityMutex.RLock()
- defer fake.priorityMutex.RUnlock()
- fake.remoteAddrMutex.RLock()
- defer fake.remoteAddrMutex.RUnlock()
- fake.requestMutex.RLock()
- defer fake.requestMutex.RUnlock()
- fake.startMutex.RLock()
- defer fake.startMutex.RUnlock()
- fake.statisticsMutex.RLock()
- defer fake.statisticsMutex.RUnlock()
- fake.stringMutex.RLock()
- defer fake.stringMutex.RUnlock()
- fake.transportMutex.RLock()
- defer fake.transportMutex.RUnlock()
- fake.typeMutex.RLock()
- defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -584,24 +584,6 @@ func (fake *ConnectionInfo) TypeReturnsOnCall(i int, result1 string) {
func (fake *ConnectionInfo) Invocations() map[string][][]interface{} {
fake.invocationsMutex.RLock()
defer fake.invocationsMutex.RUnlock()
- fake.connectionIDMutex.RLock()
- defer fake.connectionIDMutex.RUnlock()
- fake.cryptoMutex.RLock()
- defer fake.cryptoMutex.RUnlock()
- fake.establishedAtMutex.RLock()
- defer fake.establishedAtMutex.RUnlock()
- fake.isLocalMutex.RLock()
- defer fake.isLocalMutex.RUnlock()
- fake.priorityMutex.RLock()
- defer fake.priorityMutex.RUnlock()
- fake.remoteAddrMutex.RLock()
- defer fake.remoteAddrMutex.RUnlock()
- fake.stringMutex.RLock()
- defer fake.stringMutex.RUnlock()
- fake.transportMutex.RLock()
- defer fake.transportMutex.RUnlock()
- fake.typeMutex.RLock()
- defer fake.typeMutex.RUnlock()
copiedInvocations := map[string][][]interface{}{}
for key, value := range fake.invocations {
copiedInvocations[key] = value

View File

@@ -4,14 +4,12 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
- //go:generate -command counterfeiter go run github.com/maxbrunsfeld/counterfeiter/v6
// Prevents import loop, for internal testing
- //go:generate counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
+ //go:generate go tool counterfeiter -o mocked_connection_info_test.go --fake-name mockedConnectionInfo . ConnectionInfo
//go:generate go run ../../script/prune_mocks.go -t mocked_connection_info_test.go
- //go:generate counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
- //go:generate counterfeiter -o mocks/connection.go --fake-name Connection . Connection
+ //go:generate go tool counterfeiter -o mocks/connection_info.go --fake-name ConnectionInfo . ConnectionInfo
+ //go:generate go tool counterfeiter -o mocks/connection.go --fake-name Connection . Connection
package protocol

View File

@@ -1,7 +1,6 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
- //go:generate -command genxdr go run github.com/calmh/xdr/cmd/genxdr
- //go:generate genxdr -o packets_xdr.go packets.go
+ //go:generate go tool genxdr -o packets_xdr.go packets.go
package protocol

View File

@@ -18,7 +18,7 @@ import (
)
func TestDeviceStat(t *testing.T) {
- sdb, err := sqlite.OpenTemp()
+ sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -191,6 +191,10 @@ func (a *App) startup() error {
if _, ok := cfgFolders[folder]; !ok {
slog.Info("Cleaning metadata for dropped folder", "folder", folder)
a.sdb.DropFolder(folder)
+ } else {
+ // Open the folder database, causing it to apply migrations
+ // early when appropriate.
+ _, _ = a.sdb.GetDeviceSequence(folder, protocol.LocalDeviceID)
}
}

View File

@@ -72,7 +72,7 @@ func TestStartupFail(t *testing.T) {
}, protocol.LocalDeviceID, events.NoopLogger)
defer os.Remove(cfg.ConfigPath())
- sdb, err := sqlite.OpenTemp()
+ sdb, err := sqlite.Open(t.TempDir())
if err != nil {
t.Fatal(err)
}

View File

@@ -158,6 +158,7 @@ func OpenDatabase(path string, deleteRetention time.Duration) (db.DB, error) {
}
// Attempts migration of the old (LevelDB-based) database type to the new (SQLite-based) type
+ // This will attempt to provide a temporary API server during the migration, if `apiAddr` is not empty.
func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiAddr string) error {
oldDBDir := locations.Get(locations.LegacyDatabase)
if _, err := os.Lstat(oldDBDir); err != nil {
@@ -173,10 +174,12 @@ func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiA
defer be.Close()
// Start a temporary API server during the migration
- api := migratingAPI{addr: apiAddr}
- apiCtx, cancel := context.WithCancel(ctx)
- defer cancel()
- go api.Serve(apiCtx)
+ if apiAddr != "" {
+ api := migratingAPI{addr: apiAddr}
+ apiCtx, cancel := context.WithCancel(ctx)
+ defer cancel()
+ go api.Serve(apiCtx)
+ }
sdb, err := sqlite.OpenForMigration(locations.Get(locations.Database))
if err != nil {
@@ -233,7 +236,7 @@ func TryMigrateDatabase(ctx context.Context, deleteRetention time.Duration, apiA
if time.Since(t1) > 10*time.Second {
d := time.Since(t0) + 1
t1 = time.Now()
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "filesrate", float64(files)/d.Seconds())
slog.Info("Still migrating folder", "folder", folder, "files", files, "blocks", blocks, "duration", d.Truncate(time.Second), "blocksrate", float64(blocks)/d.Seconds(), "filesrate", float64(files)/d.Seconds())
}
}
}

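With the guard above, callers that pass an empty apiAddr no longer get a stray temporary listener during migration. A hedged usage sketch (the call site and retention value are illustrative; only the signature comes from the hunk):

// Hypothetical startup call site: migrate the legacy LevelDB database,
// skipping the temporary API server by passing an empty address.
if err := TryMigrateDatabase(ctx, 24*time.Hour, ""); err != nil {
	log.Fatalln("database migration:", err)
}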
View File

@@ -188,6 +188,13 @@ type Report struct {
DistDist string `json:"distDist" metric:"distribution,gaugeVec:distribution"`
DistOS string `json:"distOS" metric:"distribution,gaugeVec:os"`
DistArch string `json:"distArch" metric:"distribution,gaugeVec:arch"`
+ // Database counts
+ Database struct {
+ ModernCSQLite bool `json:"modernCSQLite" metric:"database_engine{engine=modernc-sqlite},gauge"`
+ MattnSQLite bool `json:"mattnSQLite" metric:"database_engine{engine=mattn-sqlite},gauge"`
+ LevelDB bool `json:"levelDB" metric:"database_engine{engine=leveldb},gauge"`
+ } `json:"database"`
}
func New() *Report {

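The metric tags flatten the three booleans into one database_engine gauge vector, one label per engine. A hedged sketch of how a reporter might set exactly one flag (the helper and engine strings are assumptions; only the field names come from the hunk):

func setDatabaseEngine(r *Report, engine string) {
	// At most one engine flag should be true in any given report.
	switch engine {
	case "modernc": // pure-Go SQLite build
		r.Database.ModernCSQLite = true
	case "mattn": // cgo SQLite build
		r.Database.MattnSQLite = true
	case "leveldb": // legacy database, not yet migrated
		r.Database.LevelDB = true
	}
}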
View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "STDISCOSRV" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "STRELAYSRV" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP

View File

@@ -28,7 +28,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING" "1" "Aug 14, 2025" "v2.0.0" "Syncthing"
.TH "SYNCTHING" "1" "Aug 21, 2025" "v2.0.0" "Syncthing"
.SH NAME
syncthing \- Syncthing
.SH SYNOPSIS

View File

@@ -4,8 +4,8 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
- //go:build ignore
- // +build ignore
+ //go:build tools
+ // +build tools
package main

View File

@@ -35,7 +35,7 @@ func main() {
if err != nil {
log.Fatal(err)
}
err = exec.Command("go", "run", "golang.org/x/tools/cmd/goimports", "-w", path).Run()
err = exec.Command("go", "tool", "goimports", "-w", path).Run()
if err != nil {
log.Fatal(err)
}

View File

@@ -1,15 +0,0 @@
- // This file is never built. It serves to establish dependencies on tools
- // used by go generate and build.go. See
- // https://github.com/golang/go/wiki/Modules#how-can-i-track-tool-dependencies-for-a-module
- //go:build tools
- // +build tools
- package tools
- import (
- _ "github.com/calmh/xdr"
- _ "github.com/coreos/go-semver/semver"
- _ "github.com/maxbrunsfeld/counterfeiter/v6"
- _ "golang.org/x/tools/cmd/goimports"
- )
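
With the blank-import tools.go removed, the go tool invocations in the go:generate directives above rely on Go 1.24's tool dependency tracking in go.mod instead. An illustrative sketch of the equivalent go.mod entries (this file is not part of the compare; versions live in the require block as usual):

go 1.24

tool (
	github.com/calmh/xdr/cmd/genxdr
	github.com/maxbrunsfeld/counterfeiter/v6
	golang.org/x/tools/cmd/goimports
)

Each entry can be added with go get -tool <module>, after which go tool <name> builds, caches, and runs the tool on demand.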