Compare commits

...

22 Commits

Author SHA1 Message Date
bt90
467522d04d lib/connections: Allow IPv6 ULA in discovery announcements (fixes #7456) (#9048)
The allowed IPv4 ranges are the same as before, but we now also accept IPv6 addresses in the ULA range FC00::/7. These addresses don't require an interface identifier and are roughly equivalent to the IPv4 private ranges.

Typical use cases:

VPN interface IPs: WireGuard, OpenVPN, Tailscale, ...
Fixed IPv6 LAN addressing while the provider assigns a dynamic prefix, e.g. as used by Pi-hole.
https://cs.opensource.google/go/go/+/refs/tags/go1.21.0:src/net/ip.go;l=146
2023-08-23 12:28:48 +02:00
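For context on the ULA change above: the linked Go source is the `IsPrivate` check, which covers the IPv4 private ranges as well as the IPv6 ULA range fc00::/7. A minimal standalone sketch of that classification (illustration only, not Syncthing's actual announcement filtering):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// An RFC 1918 IPv4 address, an RFC 4193 ULA address, and a
	// documentation-range IPv6 address for comparison.
	for _, s := range []string{"192.168.1.10", "fd12:3456:789a::1", "2001:db8::1"} {
		ip := net.ParseIP(s)
		// net.IP.IsPrivate reports true for the IPv4 private ranges and
		// for the IPv6 unique local address range fc00::/7.
		fmt.Printf("%-20s private: %v\n", s, ip.IsPrivate())
	}
}
```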
bt90
3147285c60 lib/beacon: Check FlagRunning (#9051) 2023-08-22 11:27:43 +02:00
Jakob Borg
acd767b30b all: Remove lib/util package (#9049)
Grab-bag packages are nasty; this cleans things up a little by splitting it
into the topical packages semaphore, netutil, stringutil, and structutil.
2023-08-21 19:44:33 +02:00
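The hunks further down in this comparison show what the split looks like at call sites. A rough before/after sketch using the new package names (signatures assumed unchanged from the old `util` helpers, as the diffs suggest):

```go
package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/netutil"
	"github.com/syncthing/syncthing/lib/semaphore"
	"github.com/syncthing/syncthing/lib/stringutil"
)

func main() {
	// util.Address("tcp", ...)       -> netutil.AddressURL("tcp", ...)
	fmt.Println(netutil.AddressURL("tcp", "0.0.0.0:22000"))

	// util.UniqueTrimmedStrings(...) -> stringutil.UniqueTrimmedStrings(...)
	fmt.Println(stringutil.UniqueTrimmedStrings([]string{" tcp://a ", "tcp://a", "tcp://b"}))

	// util.NewSemaphore(n)           -> semaphore.New(n)
	sem := semaphore.New(4)
	_ = sem

	// util.SetDefaults(&cfg)         -> structutil.SetDefaults(&cfg)
	// (omitted here: it operates on tagged configuration structs)
}
```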
Jakob Borg
40b3b9ad15 lib/model: Clean up index handler life cycle (fixes #9021) (#9038)
Co-authored-by: Simon Frei <freisim93@gmail.com>
2023-08-21 18:39:13 +02:00
bt90
c2c6133aa5 lib/osutil, lib/upnp: Check FlagRunning (fixes #8767) (#9047) 2023-08-21 14:49:28 +00:00
Jakob Borg
ccec8a4cdb build: Update dependencies (#9046) 2023-08-21 15:56:02 +02:00
Jakob Borg
cbf0e31f69 all: Use Go 1.21, new QUIC API (#9040) 2023-08-21 15:25:52 +02:00
Syncthing Release Automation
c40dae315b gui, man, authors: Update docs, translations, and contributors 2023-08-21 03:45:38 +00:00
Jakob Borg
ac0ce1c38f script: Remove find-metrics which belongs in docs 2023-08-17 12:27:56 +02:00
Jakob Borg
72c683aaca gui: Fix inadvertently always-false comparison (ref #7726) 2023-08-16 11:51:45 +02:00
Syncthing Release Automation
8042bd1a54 gui, man, authors: Update docs, translations, and contributors 2023-08-14 03:45:48 +00:00
Jakob Borg
462389934b cmd/stupgrades: Serve friendlier URLs for upgrade assets (fixes #9033) 2023-08-09 21:01:15 +02:00
Jakob Borg
b347c14bd1 build: Use correct range specification for Go version
The old `^1.20.7` means `1.x.y, >= 1.20.7` which allows 1.21.0, which
was not intended. The new `~1.20.7` means `1.20.x, >= 1.20.7`, which is
safer.
2023-08-09 16:05:11 +02:00
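The `^` vs `~` semantics described above can be verified with any semver library; the sketch below uses github.com/Masterminds/semver/v3 purely for illustration (the actual build tooling may parse these ranges differently):

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

func main() {
	caret, err := semver.NewConstraint("^1.20.7") // 1.x.y, >= 1.20.7
	if err != nil {
		panic(err)
	}
	tilde, err := semver.NewConstraint("~1.20.7") // 1.20.x, >= 1.20.7
	if err != nil {
		panic(err)
	}

	for _, s := range []string{"1.20.7", "1.20.12", "1.21.0"} {
		v := semver.MustParse(s)
		// Only the caret range admits 1.21.0, which is the behaviour the
		// commit message calls unintended.
		fmt.Printf("%-8s ^1.20.7: %-5v ~1.20.7: %v\n", s, caret.Check(v), tilde.Check(v))
	}
}
```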
Jakob Borg
8dfec6983b build: WASM is not a thing we need to try to compile for 2023-08-09 11:02:43 +02:00
Jakob Borg
9ebf2dae7b build: Ability to manually trigger Actions builds 2023-08-09 10:50:07 +02:00
André Colomb
a8cacdca94 lib/versioner: Minor fixes in comments and error message (#9031)
* lib/versioner: Factor out DefaultPath constant.

Replace several instances where .stversions is named literally so that they
all use the same definition in the versioner package. Exceptions are the
packages that cannot import versioner without creating a cyclic dependency,
and some tests which combine the versions base path with other components.

* lib/versioner: Fix comment about trash can in simple versioner.

* lib/versioner: Fix wrong versioning type string in error message.

The error message shows the folder type instead of the versioning
type, although the correct field is used in the comparison.
2023-08-09 07:10:06 +00:00
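A minimal sketch of what the factored-out constant amounts to (the name comes from the commit message; the real declaration and its documentation may differ):

```go
// Package versioner sketch: the default versions folder name defined once.
package versioner

import "path/filepath"

// DefaultPath is the name of the versions folder inside a synced folder,
// previously spelled out literally as ".stversions" in several packages.
const DefaultPath = ".stversions"

// versionsDir joins a folder root with the default versions directory.
func versionsDir(folderRoot string) string {
	return filepath.Join(folderRoot, DefaultPath)
}
```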
Jakob Borg
8b87cd5229 lib/model: Reinstate setting folder idle state (#9029) 2023-08-08 07:24:02 +02:00
Syncthing Release Automation
e09146ee03 gui, man, authors: Update docs, translations, and contributors 2023-08-07 03:45:35 +00:00
Jakob Borg
b9c08d3814 all: Add Prometheus-style metrics to expose some internal performance counters (fixes #5175) (#9003) 2023-08-04 19:57:30 +02:00
Jakob Borg
58042b3129 build: Increase Go version to 1.20.7 2023-08-03 08:11:16 +02:00
Keith Harrison
eed12f3ec5 lib/config: Allow sharing already encrypted folder with untrusted devices (fixes #8965) (#9012)
The safety check added in v1.23.6 introduced a bug: it unshares folders with untrusted devices if the folder does not have an encryption password set, regardless of whether the folder is shared with the untrusted device as encrypted or not. This prevents sharing with untrusted devices in some cases where sharing would actually be encrypted.

The patch preserves the safety check but permits sharing folders with untrusted devices if they are shared as encrypted.

Signed-off-by: kewiha <keithh@protonmail.com>
2023-08-02 07:14:53 +00:00
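The actual change appears in the lib/config hunk later in this comparison; reduced to its essence, the check now skips the unsharing step when either a password is set or the folder itself is of the receive-encrypted type. A simplified, self-contained sketch (types invented for illustration):

```go
package main

import "fmt"

type folderType int

const (
	folderTypeSendReceive folderType = iota
	folderTypeReceiveEncrypted
)

type folderDevice struct {
	encryptionPassword string
	untrusted          bool
}

// keepSharing mirrors the relaxed safety check: sharing with an untrusted
// device stays allowed if an encryption password is set, or if the folder
// is itself receive-encrypted (the local data is already encrypted).
func keepSharing(t folderType, d folderDevice) bool {
	if d.encryptionPassword != "" || t == folderTypeReceiveEncrypted {
		return true // no further check required
	}
	return !d.untrusted // plaintext folder: only share with trusted devices
}

func main() {
	fmt.Println(keepSharing(folderTypeReceiveEncrypted, folderDevice{untrusted: true})) // true
	fmt.Println(keepSharing(folderTypeSendReceive, folderDevice{untrusted: true}))      // false
}
```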
tomasz1986
5323928159 gui: Use case-insensitive and backslash-agnostic versions filter (fixes #7973) (#8995)
Currently, the versions filter is case-sensitive regardless of the
underlying OS. With this change, the filter becomes case-insensitive
everywhere, which is more user-friendly and makes it easier to search
for files whose exact case the user may not remember.

In addition, forward slashes and backslashes are no longer distinguished,
whether used as path separators or as part of a file or directory
name (which is unlikely but possible on some platforms).

Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
2023-08-01 14:20:01 +02:00
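The GUI implementation is the JavaScript shown in the gui hunk below; the same normalization, expressed as a small Go sketch for clarity:

```go
package main

import (
	"fmt"
	"strings"
)

// matchesFilter reports whether path matches filter using the behaviour the
// commit describes: case-insensitive, with backslashes treated the same as
// forward slashes so either can be used as a path separator.
func matchesFilter(path, filter string) bool {
	norm := func(s string) string {
		return strings.ToLower(strings.ReplaceAll(s, `\`, "/"))
	}
	return strings.Contains(norm(path), norm(filter))
}

func main() {
	fmt.Println(matchesFilter(`Photos\2023\IMG_0001.jpg`, "photos/2023")) // true
	fmt.Println(matchesFilter("Documents/Report.pdf", "report"))          // true
	fmt.Println(matchesFilter("Documents/Report.pdf", "invoice"))         // false
}
```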
101 changed files with 1861 additions and 737 deletions

View File

@@ -6,7 +6,7 @@ on:
- infrastructure
env:
GO_VERSION: "^1.20.5"
GO_VERSION: "^1.21.0"
CGO_ENABLED: "0"
BUILD_USER: docker
BUILD_HOST: github.syncthing.net

View File

@@ -6,12 +6,13 @@ on:
schedule:
# Run nightly build at 05:00 UTC
- cron: '00 05 * * *'
workflow_dispatch:
env:
# The go version to use for builds. We set check-latest to true when
# installing, so we get the latest patch version that matches the
# expression.
GO_VERSION: "^1.20.3"
GO_VERSION: "~1.21.0"
# Optimize compatibility on the slow archictures.
GO386: softfloat
@@ -47,7 +48,7 @@ jobs:
runner: ["windows-latest", "ubuntu-latest", "macos-latest"]
# The oldest version in this list should match what we have in our go.mod.
# Variables don't seem to be supported here, or we could have done something nice.
go: ["1.19", "1.20"]
go: ["1.20", "1.21"]
runs-on: ${{ matrix.runner }}
steps:
- name: Set git to use LF
@@ -389,6 +390,7 @@ jobs:
| grep -v nacl/ \
| grep -v plan9/ \
| grep -v windows/ \
| grep -v /wasm \
)
for plat in $platforms; do

View File

@@ -177,6 +177,7 @@ K.B.Dharun Krishna <kbdharunkrishna@gmail.com>
Kalle Laine <pahakalle@protonmail.com>
Karol Różycki (krozycki) <rozycki.karol@gmail.com>
Kebin Liu <lkebin@gmail.com>
Keith Harrison <keithh@protonmail.com>
Keith Turner <kturner@apache.org>
Kelong Cong (kc1212) <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
Ken'ichi Kamada (kamadak) <kamada@nanohz.org>

View File

@@ -68,6 +68,16 @@ func (p *githubReleases) ServeHTTP(w http.ResponseWriter, _ *http.Request) {
sort.Sort(upgrade.SortByRelease(rels))
rels = filterForLatest(rels)
// Move the URL used for browser downloads to the URL field, and remove
// the browser URL field. This avoids going via the GitHub API for
// downloads, since Syncthing uses the URL field.
for _, rel := range rels {
for j, asset := range rel.Assets {
rel.Assets[j].URL = asset.BrowserURL
rel.Assets[j].BrowserURL = ""
}
}
buf := new(bytes.Buffer)
_ = json.NewEncoder(buf).Encode(rels)

34
go.mod
View File

@@ -1,9 +1,8 @@
module github.com/syncthing/syncthing
go 1.19
go 1.20
require (
github.com/AudriusButkevicius/pfilter v0.0.11
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f
github.com/alecthomas/kong v0.8.0
github.com/calmh/incontainer v0.0.0-20221224152218-b3e71b103d7a
@@ -22,7 +21,7 @@ require (
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v0.0.4 // indirect
github.com/greatroar/blobloom v0.7.2
github.com/hashicorp/golang-lru/v2 v2.0.4
github.com/hashicorp/golang-lru/v2 v2.0.5
github.com/jackpal/gateway v1.0.10
github.com/jackpal/go-nat-pmp v1.0.2
github.com/julienschmidt/httprouter v1.3.0
@@ -38,45 +37,44 @@ require (
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.16.0
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.11.0 // indirect
github.com/quic-go/quic-go v0.34.0
github.com/prometheus/procfs v0.11.1 // indirect
github.com/quic-go/quic-go v0.38.0
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
github.com/sasha-s/go-deadlock v0.3.1
github.com/shirou/gopsutil/v3 v3.23.6
github.com/shirou/gopsutil/v3 v3.23.7
github.com/syncthing/notify v0.0.0-20210616190510-c6b7342338d2
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d
github.com/thejerf/suture/v4 v4.0.2
github.com/urfave/cli v1.22.14
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
golang.org/x/crypto v0.11.0
golang.org/x/crypto v0.12.0
golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63 // indirect
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.12.0
golang.org/x/sys v0.10.0
golang.org/x/text v0.11.0
golang.org/x/net v0.14.0
golang.org/x/sys v0.11.0
golang.org/x/text v0.12.0
golang.org/x/time v0.3.0
golang.org/x/tools v0.11.0
golang.org/x/tools v0.12.1-0.20230815132531-74c255bcf846
google.golang.org/protobuf v1.31.0
)
require (
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8 // indirect
github.com/google/pprof v0.0.0-20230821062121-407c9e7a662f // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/onsi/ginkgo/v2 v2.11.0 // indirect
github.com/oschwald/maxminddb-golang v1.11.0 // indirect
github.com/petermattis/goid v0.0.0-20230518223814-80aa455d8761 // indirect
github.com/oschwald/maxminddb-golang v1.12.0 // indirect
github.com/petermattis/goid v0.0.0-20230808133559-b036b712a89b // indirect
github.com/power-devops/perfstat v0.0.0-20221212215047-62379fc7944b // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/quic-go/qtls-go1-19 v0.3.2 // indirect
github.com/quic-go/qtls-go1-20 v0.2.2 // indirect
github.com/quic-go/qtls-go1-20 v0.3.2 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
golang.org/x/exp v0.0.0-20230711023510-fffb14384f22 // indirect
)
// https://github.com/gobwas/glob/pull/55

64
go.sum
View File

@@ -1,5 +1,3 @@
github.com/AudriusButkevicius/pfilter v0.0.11 h1:6emuvqNeH1gGlqkML35pEizyPcaxdAN4JO9sdgwcx78=
github.com/AudriusButkevicius/pfilter v0.0.11/go.mod h1:4eF1UYuEhoycTlr9IOP1sb0lL9u4nfAIouRqt2xJbzM=
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f h1:GmH5lT+moM7PbAJFBq57nH9WJ+wRnBXr/tyaYWbSAx8=
github.com/AudriusButkevicius/recli v0.0.7-0.20220911121932-d000ce8fbf0f/go.mod h1:Nhfib1j/VFnLrXL9cHgA+/n2O6P5THuWelOnbfPNd78=
github.com/Azure/go-ntlmssp v0.0.0-20221128193559-754e69321358 h1:mFRzDkZVAjdal+s7s0MwaRv9igoPqLRdzOLzw/8Xvq8=
@@ -50,8 +48,9 @@ github.com/go-asn1-ber/asn1-ber v1.5.4/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkPro
github.com/go-ldap/ldap/v3 v3.4.5 h1:ekEKmaDrpvR2yf5Nc/DClsGG9lAmdDixe44mLzlW5r8=
github.com/go-ldap/ldap/v3 v3.4.5/go.mod h1:bMGIq3AGbytbaMwf8wdv5Phdxz0FWHTIYMSzyrYgnQs=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
@@ -81,12 +80,12 @@ github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8 h1:n6vlPhxsA+BW/XsS5+uqi7GyzaLa5MH7qlSLBZtRdiA=
github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8/go.mod h1:Jh3hGz2jkYak8qXPD19ryItVnUgpgeqzdkY/D0EaeuA=
github.com/google/pprof v0.0.0-20230821062121-407c9e7a662f h1:pDhu5sgp8yJlEF/g6osliIIpF9K4F5jvkULXa4daRDQ=
github.com/google/pprof v0.0.0-20230821062121-407c9e7a662f/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/greatroar/blobloom v0.7.2 h1:F30MGLHOcb4zr0pwCPTcKdlTM70rEgkf+LzdUPc5ss8=
github.com/greatroar/blobloom v0.7.2/go.mod h1:mjMJ1hh1wjGVfr93QIHJ6FfDNVrA0IELv8OvMHJxHKs=
github.com/hashicorp/golang-lru/v2 v2.0.4 h1:7GHuZcgid37q8o5i3QI9KMT4nCWQQ3Kx3Ov6bb9MfK0=
github.com/hashicorp/golang-lru/v2 v2.0.4/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hashicorp/golang-lru/v2 v2.0.5 h1:wW7h1TG88eUIJ2i69gaE3uNVtEPIagzhGvHgwfx2Vm4=
github.com/hashicorp/golang-lru/v2 v2.0.5/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hexops/gotextdiff v1.0.3 h1:gitA9+qJrrTCsiCl7+kh75nPqQt1cx4ZkudSTLoUqJM=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
@@ -136,11 +135,11 @@ github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9
github.com/onsi/gomega v1.27.8 h1:gegWiwZjBsf2DgiSbf5hpokZ98JVDMcWkUiigk6/KXc=
github.com/oschwald/geoip2-golang v1.9.0 h1:uvD3O6fXAXs+usU+UGExshpdP13GAqp4GBrzN7IgKZc=
github.com/oschwald/geoip2-golang v1.9.0/go.mod h1:BHK6TvDyATVQhKNbQBdrj9eAvuwOMi2zSFXizL3K81Y=
github.com/oschwald/maxminddb-golang v1.11.0 h1:aSXMqYR/EPNjGE8epgqwDay+P30hCBZIveY0WZbAWh0=
github.com/oschwald/maxminddb-golang v1.11.0/go.mod h1:YmVI+H0zh3ySFR3w+oz8PCfglAFj3PuCmui13+P9zDg=
github.com/oschwald/maxminddb-golang v1.12.0 h1:9FnTOD0YOhP7DGxGsq4glzpGy5+w7pq50AS6wALUMYs=
github.com/oschwald/maxminddb-golang v1.12.0/go.mod h1:q0Nob5lTCqyQ8WT6FYgS1L7PXKVVbgiymefNwIjPzgY=
github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5/go.mod h1:jvVRKCrJTQWu0XVbaOlby/2lO20uSCHEMzzplHXte1o=
github.com/petermattis/goid v0.0.0-20230518223814-80aa455d8761 h1:W04oB3d0J01W5jgYRGKsV8LCM6g9EkCvPkZcmFuy0OE=
github.com/petermattis/goid v0.0.0-20230518223814-80aa455d8761/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/petermattis/goid v0.0.0-20230808133559-b036b712a89b h1:vab8deKC4QoIfm9fJM59iuNz1ELGsuLoYYpiF+pHiG8=
github.com/petermattis/goid v0.0.0-20230808133559-b036b712a89b/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -157,14 +156,12 @@ github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUo
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/procfs v0.11.0 h1:5EAgkfkMl659uZPbe9AS2N68a7Cc1TJbPEuGzFuRbyk=
github.com/prometheus/procfs v0.11.0/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/quic-go/qtls-go1-19 v0.3.2 h1:tFxjCFcTQzK+oMxG6Zcvp4Dq8dx4yD3dDiIiyc86Z5U=
github.com/quic-go/qtls-go1-19 v0.3.2/go.mod h1:ySOI96ew8lnoKPtSqx2BlI5wCpUVPT05RMAlajtnyOI=
github.com/quic-go/qtls-go1-20 v0.2.2 h1:WLOPx6OY/hxtTxKV1Zrq20FtXtDEkeY00CGQm8GEa3E=
github.com/quic-go/qtls-go1-20 v0.2.2/go.mod h1:JKtK6mjbAVcUTN/9jZpvLbGxvdWIKS8uT7EiStoU1SM=
github.com/quic-go/quic-go v0.34.0 h1:OvOJ9LFjTySgwOTYUZmNoq0FzVicP8YujpV0kB7m2lU=
github.com/quic-go/quic-go v0.34.0/go.mod h1:+4CVgVppm0FNjpG3UcX8Joi/frKOH7/ciD5yGcwOO1g=
github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
github.com/quic-go/qtls-go1-20 v0.3.2 h1:rRgN3WfnKbyik4dBV8A6girlJVxGand/d+jVKbQq5GI=
github.com/quic-go/qtls-go1-20 v0.3.2/go.mod h1:X9Nh97ZL80Z+bX/gUXMbipO6OxdiDi58b/fMC9mAL+k=
github.com/quic-go/quic-go v0.38.0 h1:T45lASr5q/TrVwt+jrVccmqHhPL2XuSyoCLVCpfOSLc=
github.com/quic-go/quic-go v0.38.0/go.mod h1:MPCuRq7KBK2hNcfKj/1iD1BGuN3eAYMeNxp3T42LRUg=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
@@ -172,8 +169,8 @@ github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD
github.com/sasha-s/go-deadlock v0.3.1 h1:sqv7fDNShgjcaxkO0JNcOAlr8B9+cV5Ey/OB71efZx0=
github.com/sasha-s/go-deadlock v0.3.1/go.mod h1:F73l+cr82YSh10GxyRI6qZiCgK64VaZjwesgfQ1/iLM=
github.com/sclevine/spec v1.4.0 h1:z/Q9idDcay5m5irkZ28M7PtQM4aOISzOpj4bUPkDee8=
github.com/shirou/gopsutil/v3 v3.23.6 h1:5y46WPI9QBKBbK7EEccUPNXpJpNrvPuTD0O2zHEHT08=
github.com/shirou/gopsutil/v3 v3.23.6/go.mod h1:j7QX50DrXYggrpN30W0Mo+I4/8U2UUIQrnrhqUeWrAU=
github.com/shirou/gopsutil/v3 v3.23.7 h1:C+fHO8hfIppoJ1WdsVm1RoI0RwXoNdfTK7yWXV0wVj4=
github.com/shirou/gopsutil/v3 v3.23.7/go.mod h1:c4gnmoRC0hQuaLqvxnx1//VXQ0Ms/X9UnJF8pddY5z4=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -210,10 +207,10 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.11.0 h1:6Ewdq3tDic1mg5xRO4milcWCfMVQhI4NkqWWvqejpuA=
golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
golang.org/x/exp v0.0.0-20230711023510-fffb14384f22 h1:FqrVOBQxQ8r/UwwXibI0KMolVhvFiGobSfdE33deHJM=
golang.org/x/exp v0.0.0-20230711023510-fffb14384f22/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/crypto v0.12.0 h1:tFM/ta59kqch6LlvYnPa0yx5a83cL2nHflFhYKvv9Yk=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63 h1:m64FZMko/V45gv0bNmrNYoDEq8U5YUhetc9cBWKS1TQ=
golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63/go.mod h1:0v4NqG35kSWCMzLaMeX+IQrlSnVE/bqGSyC2cz/9Le8=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -235,8 +232,8 @@ golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.12.0 h1:cfawfvKITfUsFCeJIHJrbSxpeu/E81khclypR0GVT50=
golang.org/x/net v0.12.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -271,12 +268,13 @@ golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -287,8 +285,8 @@ golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.11.0 h1:LAntKIrcmeSKERyiOh0XMV39LXS8IE9UL2yP7+f5ij4=
golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -299,8 +297,8 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.11.0 h1:EMCa6U9S2LtZXLAMoWiR/R8dAQFRqbAitmbJ2UKhoi8=
golang.org/x/tools v0.11.0/go.mod h1:anzJrxPjNtfgiYQYirP2CPGzGLxrH2u2QBhn6Bf3qY8=
golang.org/x/tools v0.12.1-0.20230815132531-74c255bcf846 h1:Vve/L0v7CXXuxUmaMGIEK/dEeq7uiqb5qBgQrZzIE7E=
golang.org/x/tools v0.12.1-0.20230815132531-74c255bcf846/go.mod h1:Sc0INKfu04TlqNoRA1hgpFZbhYXHPr4V5DzpSBTPqQM=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -422,7 +422,7 @@
"The interval, in seconds, for running cleanup in the versions directory. Zero to disable periodic cleaning.": "Das Intervall, in Sekunden, zwischen den Bereinigungen im Versionsverzeichnis. Null um das regelmäßige Bereinigen zu deaktivieren.",
"The maximum age must be a number and cannot be blank.": "Das Höchstalter muss angegeben werden und eine Zahl sein.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Die längste Zeit, die alte Versionen vorgehalten werden (in Tagen) (0 um alte Versionen für immer zu behalten).",
"The number of days must be a number and cannot be blank.": "Die Anzahl von Versionen muss eine Ganzzahl und darf nicht leer sein.",
"The number of days must be a number and cannot be blank.": "Die Anzahl der Tage muss eine Ganzzahl sein und darf nicht leer sein.",
"The number of days to keep files in the trash can. Zero means forever.": "Dauer in Tagen für welche die Dateien aufgehoben werden sollen. 0 bedeutet für immer.",
"The number of old versions to keep, per file.": "Anzahl der alten Versionen, die von jeder Datei behalten werden sollen.",
"The number of versions must be a number and cannot be blank.": "Die Anzahl von Versionen muss eine Ganzzahl und darf nicht leer sein.",

View File

@@ -290,8 +290,8 @@
"Preview": "Aperçu",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
"QR code": "Code QR",
"QUIC LAN": "QUIC LAN",
"QUIC WAN": "QUIC WAN",
"QUIC LAN": "LAN QUIC",
"QUIC WAN": "WAN QUIC",
"QUIC connections are in most cases considered suboptimal": "Les connexions QUIC sont généralement peu performantes",
"Quick guide to supported patterns": "Guide rapide des masques compatibles ci-dessous",
"Random": "Aléatoire",
@@ -355,7 +355,7 @@
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de l'appareil dans l'état du groupe. Sera diffusé aux autres appareils comme nom convivial optionnel par défaut.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Nom convivial local affiché à la place de l'ID de l'appareil dans la plupart des écrans. Si laissé vide, c'est le nom convivial local de l'appareil distant qui sera utilisé. (Modifiable ultérieurement).",
"Shutdown": "Arrêter",
"Shutdown Complete": "Arrêté",
"Shutdown Complete": "Arrêt complet",
"Simple": "Suivi simplifié",
"Simple File Versioning": "Suivi simplifié des versions",
"Single level wildcard (matches within a directory only)": "N'importe quel nombre (dont 0) de n'importe quels caractères (sauf le séparateur de répertoires)",
@@ -369,7 +369,7 @@
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "Les versions stables sont reportées d'environ deux semaines. Pendant ce temps elles sont testées en tant que versions préliminaires.",
"Stable releases only": "Seulement les versions stables",
"Staggered": "Versions échelonnées",
"Staggered File Versioning": "Versions échelonnées",
"Staggered File Versioning": "Versions échelonnées des fichiers",
"Start Browser": "Lancer le navigateur web",
"Statistics": "Statistiques",
"Stopped": "Arrêté",

View File

@@ -509,7 +509,7 @@
"Your SMS app should open to let you choose the recipient and send it from your own number.": "A sua aplicação de SMS deverá abrir para deixar escolher o destinatário e enviar a partir do seu próprio número.",
"Your email app should open to let you choose the recipient and send it from your own address.": "A sua aplicação de email deverá abrir para deixar escolher o destinatário e enviar a partir do seu próprio endereço.",
"days": "dias",
"deleted": "eliminada",
"deleted": "eliminado",
"deny": "negar",
"directories": "pastas",
"file": "ficheiro",
@@ -517,7 +517,7 @@
"folder": "pasta",
"full documentation": "documentação completa",
"items": "itens",
"modified": "modificada",
"modified": "modificado",
"permit": "permitir",
"seconds": "segundos",
"theme-name-black": "Preto",

View File

@@ -26,7 +26,7 @@
<h4 class="text-center" translate>The Syncthing Authors</h4>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Evgeny Kuznetsov, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Tomasz Wilczyński, Wulf Weich, greatroar, Aaron Bieber, Adam Piggott, Adel Qalieh, Alan Pope, Alberto Donato, Aleksey Vasenev, Alessandro G., Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Andreas Sommer, Andrew Dunham, Andrew Meyer, Andrew Rabert, Andrey D, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benjamin Nater, Benno Fünfstück, Benny Ng, Boqin Qin, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Chris Tonkinson, Christian Kujau, Christian Prescott, Colin Kennedy, Cromefire_, Cyprien Devillez, Dale Visser, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Devon G. Redekopp, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Emil Lundberg, Eng Zer Jun, Eric Lesiuta, Eric P, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, Han Boetes, HansK-p, Harrison Jones, Heiko Zuerker, Hugo Locurcio, Iain Barnett, Ian Johnson, Ikko Ashimine, Ilya Brin, Iskander Sharipov, Jaakko Hannikainen, Jacek Szafarkiewicz, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jauder Ho, Jaya Chithra, Jaya Kumar, Jeffery To, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jonathan Cross, Jonta, Jose Manuel Delicado, Jörg Thalheim, Jędrzej Kula, K.B.Dharun Krishna, Kalle Laine, Karol Różycki, Kebin Liu, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., Kurt Fitzner, LSmithx2, Lars Lehtonen, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Lukas Lihotzki, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Mateusz Naściszewski, Mateusz Ż, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maxime Thirouin, MichaIng, Michael Jephcote, Michael Rienstra, Michael Tilli, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, Naveen, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, Otiel, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Quentin Hibon, Rahmi Pruitt, Richard Hartmann, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, Ross Smith II, Ruslan Yevdokymov, Ryan Qian, Sacheendra Talluri, Scott Klupfel, Shaarad Dalvi, 
Simon Mwepu, Sly_tom_cat, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tommy Thorn, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, Vladimir Rusinov, Will Rouesnel, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, boomsquared, bt90, chenrui, chucic, cui fliter, derekriemer, desbma, entity0xfe, georgespatton, ghjklw, guangwu, ignacy123, janost, jaseg, jelle van der Waa, jtagcat, klemens, luzpaz, marco-m, mclang, mv1005, otbutz, overkill, perewa, red_led, rubenbe, sec65, villekalliomaki, wangguoliang, wouter bolsterlee, xarx00, xjtdy888, 佛跳墙, 落心
Jakob Borg, Audrius Butkevicius, Jesse Lucas, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, André Colomb, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Evgeny Kuznetsov, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Tomasz Wilczyński, Wulf Weich, greatroar, Aaron Bieber, Adam Piggott, Adel Qalieh, Alan Pope, Alberto Donato, Aleksey Vasenev, Alessandro G., Alex Lindeman, Alex Xu, Alexander Seiler, Alexandre Alves, Aman Gupta, Andreas Sommer, Andrew Dunham, Andrew Meyer, Andrew Rabert, Andrey D, Anjan Momi, Anthony Goeckner, Antoine Lamielle, Anur, Aranjedeath, Arkadiusz Tymiński, Aroun, Arthur Axel fREW Schmidt, Artur Zubilewicz, Aurélien Rainone, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benjamin Nater, Benno Fünfstück, Benny Ng, Boqin Qin, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chih-Hsuan Yen, Choongkyu, Chris Howie, Chris Joel, Chris Tonkinson, Christian Kujau, Christian Prescott, Colin Kennedy, Cromefire_, Cyprien Devillez, Dale Visser, Dan, Daniel Barczyk, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Devon G. Redekopp, Dimitri Papadopoulos Orfanos, Dmitry Saveliev, Domenic Horner, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Emil Lundberg, Eng Zer Jun, Eric Lesiuta, Eric P, Erik Meitner, Evan Spensley, Federico Castagnini, Felix, Felix Ableitner, Felix Lampe, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gahl Saraf, Gilli Sigurdsson, Gleb Sinyavskiy, Graham Miln, Greg, Han Boetes, HansK-p, Harrison Jones, Heiko Zuerker, Hugo Locurcio, Iain Barnett, Ian Johnson, Ikko Ashimine, Ilya Brin, Iskander Sharipov, Jaakko Hannikainen, Jacek Szafarkiewicz, Jack Croft, Jacob, Jake Peterson, James O'Beirne, James Patterson, Jaroslav Lichtblau, Jaroslav Malec, Jauder Ho, Jaya Chithra, Jaya Kumar, Jeffery To, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonas Thelemann, Jonathan, Jonathan Cross, Jonta, Jose Manuel Delicado, Jörg Thalheim, Jędrzej Kula, K.B.Dharun Krishna, Kalle Laine, Karol Różycki, Kebin Liu, Keith Harrison, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin Bushiri, Kevin White, Jr., Kurt Fitzner, LSmithx2, Lars Lehtonen, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Lukas Lihotzki, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Marcus Legendre, Mario Majila, Mark Pulford, Martchus, Mateusz Naściszewski, Mateusz Ż, Matic Potočnik, Matt Burke, Matt Robenolt, Matteo Ruina, Maurizio Tomasi, Max, Max Schulze, MaximAL, Maxime Thirouin, MichaIng, Michael Jephcote, Michael Rienstra, Michael Tilli, Migelo, Mike Boone, MikeLund, MikolajTwarog, Mingxuan Lin, Naveen, Nicholas Rishel, Nick Busey, Nico Stapelbroek, Nicolas Braud-Santoni, Nicolas Perraut, Niels Peter Roest, Nils Jakobi, NinoM4ster, Nitroretro, NoLooseEnds, Oliver Freyermuth, Otiel, Oyebanji Jacob Mayowa, Pablo, Pascal Jungblut, Paul Brit, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phani Rithvij, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Quentin Hibon, Rahmi Pruitt, Richard Hartmann, Robert Carosi, Roberto Santalla, Robin Schoonover, Roman Zaynetdinov, Ross Smith II, Ruslan Yevdokymov, Ryan Qian, Sacheendra Talluri, Scott Klupfel, 
Shaarad Dalvi, Simon Mwepu, Sly_tom_cat, Stefan Kuntz, Steven Eckhoff, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Klauser, Tobias Nygren, Tobias Tom, Tom Jakubowski, Tommy Thorn, Tully Robinson, Tyler Brazier, Tyler Kropp, Unrud, Veeti Paananen, Victor Buinsky, Vik, Vil Brekin, Vladimir Rusinov, Will Rouesnel, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, boomsquared, bt90, chenrui, chucic, cui fliter, derekriemer, desbma, entity0xfe, georgespatton, ghjklw, guangwu, ignacy123, janost, jaseg, jelle van der Waa, jtagcat, klemens, luzpaz, marco-m, mclang, mv1005, otbutz, overkill, perewa, red_led, rubenbe, sec65, villekalliomaki, wangguoliang, wouter bolsterlee, xarx00, xjtdy888, 佛跳墙, 落心
</div>
</div>
</div>

View File

@@ -2778,9 +2778,17 @@ angular.module('syncthing.core')
$scope.restoreVersions.tree.filterNodes(function (node) {
if (node.folder) return false;
if ($scope.restoreVersions.filters.text && node.key.indexOf($scope.restoreVersions.filters.text) < 0) {
return false;
if ($scope.restoreVersions.filters.text) {
// Use case-insensitive filter and convert backslashes to
// forward slashes to allow using them as path separators.
var filterText = $scope.restoreVersions.filters.text.toLowerCase().replace(/\\/g, '/');
var versionPath = node.key.toLowerCase().replace(/\\/g, '/');
if (versionPath.indexOf(filterText) < 0) {
return false;
}
}
if ($scope.restoreVersions.filterVersions(node.data.versions).length == 0) {
return false;
}
@@ -2869,7 +2877,7 @@ angular.module('syncthing.core')
};
$scope.hasReceiveOnlyChanged = function (folderCfg) {
if (!folderCfg || folderCfg.type !== ["receiveonly", "receiveencrypted"].indexOf(folderCfg.type) === -1) {
if (!folderCfg || ["receiveonly", "receiveencrypted"].indexOf(folderCfg.type) === -1) {
return false;
}
var counts = $scope.model[folderCfg.id];

View File

@@ -32,6 +32,7 @@ import (
"github.com/calmh/incontainer"
"github.com/julienschmidt/httprouter"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/rcrowley/go-metrics"
"github.com/thejerf/suture/v4"
"github.com/vitrun/qart/qr"
@@ -351,6 +352,15 @@ func (s *service) Serve(ctx context.Context) error {
// Handle the special meta.js path
mux.HandleFunc("/meta.js", s.getJSMetadata)
// Handle Prometheus metrics
promHttpHandler := promhttp.Handler()
mux.HandleFunc("/metrics", func(w http.ResponseWriter, req *http.Request) {
// fetching metrics counts as an event, for the purpose of whether
// we should prepare folder summaries etc.
s.fss.OnEventRequest()
promHttpHandler.ServeHTTP(w, req)
})
guiCfg := s.cfg.GUI()
// Wrap everything in CSRF protection. The /rest prefix should be
@@ -1214,6 +1224,12 @@ func (s *service) getSupportBundle(w http.ResponseWriter, r *http.Request) {
}
}
// Metrics data as text
buf := bytes.NewBuffer(nil)
wr := bufferedResponseWriter{Writer: buf}
promhttp.Handler().ServeHTTP(wr, &http.Request{Method: http.MethodGet})
files = append(files, fileEntry{name: "metrics.txt", data: buf.Bytes()})
// Heap and CPU Proofs as a pprof extension
var heapBuffer, cpuBuffer bytes.Buffer
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, build.Version, time.Now().Format("150405")) // hhmmss
@@ -2043,3 +2059,12 @@ func httpError(w http.ResponseWriter, err error) {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
type bufferedResponseWriter struct {
io.Writer
}
func (w bufferedResponseWriter) WriteHeader(int) {}
func (w bufferedResponseWriter) Header() http.Header {
return http.Header{}
}

View File

@@ -44,8 +44,8 @@ import (
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/ur"
"github.com/syncthing/syncthing/lib/util"
"github.com/thejerf/suture/v4"
"golang.org/x/exp/slices"
)
var (
@@ -1313,7 +1313,7 @@ func TestBrowse(t *testing.T) {
for _, tc := range cases {
ret := browseFiles(ffs, tc.current)
if !util.EqualStrings(ret, tc.returns) {
if !slices.Equal(ret, tc.returns) {
t.Errorf("browseFiles(%q) => %q, expected %q", tc.current, ret, tc.returns)
}
}

View File

@@ -15,7 +15,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/structutil"
)
type configMuxBuilder struct {
@@ -212,7 +212,7 @@ func (c *configMuxBuilder) registerDefaultFolder(path string) {
c.HandlerFunc(http.MethodPut, path, func(w http.ResponseWriter, r *http.Request) {
var cfg config.FolderConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
c.adjustFolder(w, r, cfg, true)
})
@@ -228,7 +228,7 @@ func (c *configMuxBuilder) registerDefaultDevice(path string) {
c.HandlerFunc(http.MethodPut, path, func(w http.ResponseWriter, r *http.Request) {
var cfg config.DeviceConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
c.adjustDevice(w, r, cfg, true)
})
@@ -266,7 +266,7 @@ func (c *configMuxBuilder) registerOptions(path string) {
c.HandlerFunc(http.MethodPut, path, func(w http.ResponseWriter, r *http.Request) {
var cfg config.OptionsConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
c.adjustOptions(w, r, cfg)
})
@@ -282,7 +282,7 @@ func (c *configMuxBuilder) registerLDAP(path string) {
c.HandlerFunc(http.MethodPut, path, func(w http.ResponseWriter, r *http.Request) {
var cfg config.LDAPConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
c.adjustLDAP(w, r, cfg)
})
@@ -298,7 +298,7 @@ func (c *configMuxBuilder) registerGUI(path string) {
c.HandlerFunc(http.MethodPut, path, func(w http.ResponseWriter, r *http.Request) {
var cfg config.GUIConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
c.adjustGUI(w, r, cfg)
})

View File

@@ -52,7 +52,7 @@ func writeBroadcasts(ctx context.Context, inbox <-chan []byte, port int) error {
var dsts []net.IP
for _, intf := range intfs {
if intf.Flags&net.FlagBroadcast == 0 {
if intf.Flags&net.FlagRunning == 0 || intf.Flags&net.FlagBroadcast == 0 {
continue
}

View File

@@ -67,7 +67,7 @@ func writeMulticasts(ctx context.Context, inbox <-chan []byte, addr string) erro
success := 0
for _, intf := range intfs {
if intf.Flags&net.FlagMulticast == 0 {
if intf.Flags&net.FlagRunning == 0 || intf.Flags&net.FlagMulticast == 0 {
continue
}

View File

@@ -16,14 +16,16 @@ import (
"net"
"net/url"
"os"
"reflect"
"sort"
"strconv"
"strings"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/netutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/structutil"
)
const (
@@ -42,9 +44,9 @@ var (
// "consumer" of the configuration as we don't want these saved to the
// config.
DefaultListenAddresses = []string{
util.Address("tcp", net.JoinHostPort("0.0.0.0", strconv.Itoa(DefaultTCPPort))),
netutil.AddressURL("tcp", net.JoinHostPort("0.0.0.0", strconv.Itoa(DefaultTCPPort))),
"dynamic+https://relays.syncthing.net/endpoint",
util.Address("quic", net.JoinHostPort("0.0.0.0", strconv.Itoa(DefaultQUICPort))),
netutil.AddressURL("quic", net.JoinHostPort("0.0.0.0", strconv.Itoa(DefaultQUICPort))),
}
DefaultGUIPort = 8384
// DefaultDiscoveryServersV4 should be substituted when the configuration
@@ -101,7 +103,7 @@ func New(myID protocol.DeviceID) Configuration {
cfg.Options.UnackedNotificationIDs = []string{"authenticationUserAndPassword"}
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
// Can't happen.
if err := cfg.prepare(myID); err != nil {
@@ -127,9 +129,9 @@ func (cfg *Configuration) ProbeFreePorts() error {
cfg.Options.RawListenAddresses = []string{"default"}
} else {
cfg.Options.RawListenAddresses = []string{
util.Address("tcp", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
netutil.AddressURL("tcp", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
"dynamic+https://relays.syncthing.net/endpoint",
util.Address("quic", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
netutil.AddressURL("quic", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
}
}
@@ -144,7 +146,7 @@ type xmlConfiguration struct {
func ReadXML(r io.Reader, myID protocol.DeviceID) (Configuration, int, error) {
var cfg xmlConfiguration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
if err := xml.NewDecoder(r).Decode(&cfg); err != nil {
return Configuration{}, 0, err
@@ -166,7 +168,7 @@ func ReadJSON(r io.Reader, myID protocol.DeviceID) (Configuration, error) {
var cfg Configuration
util.SetDefaults(&cfg)
structutil.SetDefaults(&cfg)
if err := json.Unmarshal(bs, &cfg); err != nil {
return Configuration{}, err
@@ -259,7 +261,7 @@ func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
cfg.removeDeprecatedProtocols()
util.FillNilExceptDeprecated(cfg)
structutil.FillNilExceptDeprecated(cfg)
// TestIssue1750 relies on migrations happening after preparing options.
cfg.applyMigrations()
@@ -556,8 +558,8 @@ loop:
func ensureNoUntrustedTrustingSharing(f *FolderConfiguration, devices []FolderDeviceConfiguration, existingDevices map[protocol.DeviceID]*DeviceConfiguration) []FolderDeviceConfiguration {
for i := 0; i < len(devices); i++ {
dev := devices[i]
if dev.EncryptionPassword != "" {
// There's a password set, no check required
if dev.EncryptionPassword != "" || f.Type == FolderTypeReceiveEncrypted {
// There's a password set or the folder is received encrypted, no check required
continue
}
if devCfg := existingDevices[dev.DeviceID]; devCfg.Untrusted {
@@ -636,7 +638,7 @@ func (defaults *Defaults) prepare(myID protocol.DeviceID, existingDevices map[pr
}
func ensureZeroForNodefault(empty interface{}, target interface{}) {
util.CopyMatchingTag(empty, target, "nodefault", func(v string) bool {
copyMatchingTag(empty, target, "nodefault", func(v string) bool {
if len(v) > 0 && v != "true" {
panic(fmt.Sprintf(`unexpected tag value: %s. expected untagged or "true"`, v))
}
@@ -644,6 +646,36 @@ func ensureZeroForNodefault(empty interface{}, target interface{}) {
})
}
// copyMatchingTag copies fields tagged tag:"value" from "from" struct onto "to" struct.
func copyMatchingTag(from interface{}, to interface{}, tag string, shouldCopy func(value string) bool) {
fromStruct := reflect.ValueOf(from).Elem()
fromType := fromStruct.Type()
toStruct := reflect.ValueOf(to).Elem()
toType := toStruct.Type()
if fromType != toType {
panic(fmt.Sprintf("non equal types: %s != %s", fromType, toType))
}
for i := 0; i < toStruct.NumField(); i++ {
fromField := fromStruct.Field(i)
toField := toStruct.Field(i)
if !toField.CanSet() {
// Unexported fields
continue
}
structTag := toType.Field(i).Tag
v := structTag.Get(tag)
if shouldCopy(v) {
toField.Set(fromField)
}
}
}
func (i Ignores) Copy() Ignores {
out := Ignores{Lines: make([]string, len(i.Lines))}
copy(out.Lines, i.Lines)

View File

@@ -1597,3 +1597,61 @@ func handleFile(name string) {
fd.Write(origin)
fd.Close()
}
func TestCopyMatching(t *testing.T) {
type Nested struct {
A int
}
type Test struct {
CopyA int
CopyB []string
CopyC Nested
CopyD *Nested
NoCopy int `restart:"true"`
}
from := Test{
CopyA: 1,
CopyB: []string{"friend", "foe"},
CopyC: Nested{
A: 2,
},
CopyD: &Nested{
A: 3,
},
NoCopy: 4,
}
to := Test{
CopyA: 11,
CopyB: []string{"foot", "toe"},
CopyC: Nested{
A: 22,
},
CopyD: &Nested{
A: 33,
},
NoCopy: 44,
}
// Copy empty fields
copyMatchingTag(&from, &to, "restart", func(v string) bool {
return v != "true"
})
if to.CopyA != 1 {
t.Error("CopyA")
}
if len(to.CopyB) != 2 || to.CopyB[0] != "friend" || to.CopyB[1] != "foe" {
t.Error("CopyB")
}
if to.CopyC.A != 2 {
t.Error("CopyC")
}
if to.CopyD.A != 3 {
t.Error("CopyC")
}
if to.NoCopy != 44 {
t.Error("NoCopy")
}
}

View File

@@ -21,7 +21,6 @@ import (
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
)
var (
@@ -244,7 +243,7 @@ func (f FolderConfiguration) RequiresRestartOnly() FolderConfiguration {
// copier, yet should not cause a restart.
blank := FolderConfiguration{}
util.CopyMatchingTag(&blank, &copy, "restart", func(v string) bool {
copyMatchingTag(&blank, &copy, "restart", func(v string) bool {
if len(v) > 0 && v != "false" {
panic(fmt.Sprintf(`unexpected tag value: %s. expected untagged or "false"`, v))
}

View File

@@ -17,8 +17,8 @@ import (
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/netutil"
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/syncthing/syncthing/lib/util"
)
// migrations is the set of config migration functions, with their target
@@ -197,11 +197,11 @@ func migrateToConfigV24(cfg *Configuration) {
}
func migrateToConfigV23(cfg *Configuration) {
permBits := fs.FileMode(0777)
permBits := fs.FileMode(0o777)
if build.IsWindows {
// Windows has no umask so we must chose a safer set of bits to
// begin with.
permBits = 0700
permBits = 0o700
}
// Upgrade code remains hardcoded for .stfolder despite configurable
@@ -391,14 +391,14 @@ func migrateToConfigV12(cfg *Configuration) {
// Change listen address schema
for i, addr := range cfg.Options.RawListenAddresses {
if len(addr) > 0 && !strings.HasPrefix(addr, "tcp://") {
cfg.Options.RawListenAddresses[i] = util.Address("tcp", addr)
cfg.Options.RawListenAddresses[i] = netutil.AddressURL("tcp", addr)
}
}
for i, device := range cfg.Devices {
for j, addr := range device.Addresses {
if addr != "dynamic" && addr != "" {
cfg.Devices[i].Addresses[j] = util.Address("tcp", addr)
cfg.Devices[i].Addresses[j] = netutil.AddressURL("tcp", addr)
}
}
}

View File

@@ -12,7 +12,8 @@ import (
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/structutil"
)
func (opts OptionsConfiguration) Copy() OptionsConfiguration {
@@ -29,10 +30,10 @@ func (opts OptionsConfiguration) Copy() OptionsConfiguration {
}
func (opts *OptionsConfiguration) prepare(guiPWIsSet bool) {
util.FillNilSlices(opts)
structutil.FillNilSlices(opts)
opts.RawListenAddresses = util.UniqueTrimmedStrings(opts.RawListenAddresses)
opts.RawGlobalAnnServers = util.UniqueTrimmedStrings(opts.RawGlobalAnnServers)
opts.RawListenAddresses = stringutil.UniqueTrimmedStrings(opts.RawListenAddresses)
opts.RawGlobalAnnServers = stringutil.UniqueTrimmedStrings(opts.RawGlobalAnnServers)
// Very short reconnection intervals are annoying
if opts.ReconnectIntervalS < 5 {
@@ -71,7 +72,7 @@ func (opts *OptionsConfiguration) prepare(guiPWIsSet bool) {
func (opts OptionsConfiguration) RequiresRestartOnly() OptionsConfiguration {
optsCopy := opts
blank := OptionsConfiguration{}
util.CopyMatchingTag(&blank, &optsCopy, "restart", func(v string) bool {
copyMatchingTag(&blank, &optsCopy, "restart", func(v string) bool {
if len(v) > 0 && v != "true" {
panic(fmt.Sprintf(`unexpected tag value: %s. Expected untagged or "true"`, v))
}
@@ -94,7 +95,7 @@ func (opts OptionsConfiguration) ListenAddresses() []string {
addresses = append(addresses, addr)
}
}
return util.UniqueTrimmedStrings(addresses)
return stringutil.UniqueTrimmedStrings(addresses)
}
func (opts OptionsConfiguration) StunServers() []string {
@@ -116,7 +117,7 @@ func (opts OptionsConfiguration) StunServers() []string {
}
}
addresses = util.UniqueTrimmedStrings(addresses)
addresses = stringutil.UniqueTrimmedStrings(addresses)
return addresses
}
@@ -135,7 +136,7 @@ func (opts OptionsConfiguration) GlobalDiscoveryServers() []string {
servers = append(servers, srv)
}
}
return util.UniqueTrimmedStrings(servers)
return stringutil.UniqueTrimmedStrings(servers)
}
func (opts OptionsConfiguration) MaxFolderConcurrency() int {

View File

@@ -10,7 +10,7 @@ import (
"testing"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/structutil"
)
type TestStruct struct {
@@ -20,7 +20,7 @@ type TestStruct struct {
func TestSizeDefaults(t *testing.T) {
x := &TestStruct{}
util.SetDefaults(x)
structutil.SetDefaults(x)
if !x.Size.Percentage() {
t.Error("not percentage")

View File

@@ -12,7 +12,7 @@ import (
"sort"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/structutil"
)
// internalVersioningConfiguration is used in XML serialization
@@ -39,7 +39,7 @@ func (c VersioningConfiguration) Copy() VersioningConfiguration {
}
func (c *VersioningConfiguration) UnmarshalJSON(data []byte) error {
util.SetDefaults(c)
structutil.SetDefaults(c)
type noCustomUnmarshal VersioningConfiguration
ptr := (*noCustomUnmarshal)(c)
return json.Unmarshal(data, ptr)
@@ -47,7 +47,7 @@ func (c *VersioningConfiguration) UnmarshalJSON(data []byte) error {
func (c *VersioningConfiguration) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
var intCfg internalVersioningConfiguration
util.SetDefaults(&intCfg)
structutil.SetDefaults(&intCfg)
if err := d.DecodeElement(&intCfg, &start); err != nil {
return err
}

View File

@@ -52,27 +52,23 @@ func (d *quicDialer) Dial(ctx context.Context, _ protocol.DeviceID, uri *url.URL
return internalConn{}, err
}
var conn net.PacketConn
// We need to track who created the conn.
// Given we always pass the connection to quic, it assumes it's a remote connection it never closes it,
// So our wrapper around it needs to close it, but it only needs to close it if it's not the listening connection.
// If we created the conn we need to close it at the end. If we got a
// Transport from the registry we have no conn to close.
var createdConn net.PacketConn
listenConn := d.registry.Get(uri.Scheme, packetConnUnspecified)
if listenConn != nil {
conn = listenConn.(net.PacketConn)
} else {
transport, _ := d.registry.Get(uri.Scheme, transportConnUnspecified).(*quic.Transport)
if transport == nil {
if packetConn, err := net.ListenPacket("udp", ":0"); err != nil {
return internalConn{}, err
} else {
conn = packetConn
createdConn = packetConn
transport = &quic.Transport{Conn: packetConn}
}
}
ctx, cancel := context.WithTimeout(ctx, quicOperationTimeout)
defer cancel()
session, err := quic.DialContext(ctx, conn, addr, uri.Host, d.tlsCfg, quicConfig)
session, err := transport.Dial(ctx, addr, d.tlsCfg, quicConfig)
if err != nil {
if createdConn != nil {
_ = createdConn.Close()

View File

@@ -95,17 +95,22 @@ func (t *quicListener) serve(ctx context.Context) error {
l.Infoln("Listen (BEP/quic):", err)
return err
}
defer func() { _ = udpConn.Close() }()
defer udpConn.Close()
svc, conn := stun.New(t.cfg, t, udpConn)
defer conn.Close()
tracer := &writeTrackingTracer{}
quicTransport := &quic.Transport{
Conn: udpConn,
Tracer: tracer,
}
defer quicTransport.Close()
svc := stun.New(t.cfg, t, &transportPacketConn{tran: quicTransport}, tracer)
go svc.Serve(ctx)
t.registry.Register(t.uri.Scheme, conn)
defer t.registry.Unregister(t.uri.Scheme, conn)
t.registry.Register(t.uri.Scheme, quicTransport)
defer t.registry.Unregister(t.uri.Scheme, quicTransport)
listener, err := quic.Listen(conn, t.tlsCfg, quicConfig)
listener, err := quicTransport.Listen(t.tlsCfg, quicConfig)
if err != nil {
l.Infoln("Listen (BEP/quic):", err)
return err

View File

@@ -10,20 +10,22 @@
package connections
import (
"context"
"crypto/tls"
"net"
"net/url"
"sync/atomic"
"time"
"github.com/quic-go/quic-go"
"github.com/quic-go/quic-go/logging"
"github.com/syncthing/syncthing/lib/osutil"
)
var quicConfig = &quic.Config{
ConnectionIDLength: 4,
MaxIdleTimeout: 30 * time.Second,
KeepAlivePeriod: 15 * time.Second,
MaxIdleTimeout: 30 * time.Second,
KeepAlivePeriod: 15 * time.Second,
}
func quicNetwork(uri *url.URL) string {
@@ -61,11 +63,75 @@ func (q *quicTlsConn) Close() error {
}
func (q *quicTlsConn) ConnectionState() tls.ConnectionState {
return q.Connection.ConnectionState().TLS.ConnectionState
return q.Connection.ConnectionState().TLS
}
func packetConnUnspecified(conn interface{}) bool {
addr := conn.(net.PacketConn).LocalAddr()
func transportConnUnspecified(conn any) bool {
tran, ok := conn.(*quic.Transport)
if !ok {
return false
}
addr := tran.Conn.LocalAddr()
ip, err := osutil.IPFromAddr(addr)
return err == nil && ip.IsUnspecified()
}
type writeTrackingTracer struct {
lastWrite atomic.Int64 // unix nanos
}
func (t *writeTrackingTracer) SentPacket(net.Addr, *logging.Header, logging.ByteCount, []logging.Frame) {
t.lastWrite.Store(time.Now().UnixNano())
}
func (t *writeTrackingTracer) SentVersionNegotiationPacket(_ net.Addr, dest, src logging.ArbitraryLenConnectionID, _ []quic.VersionNumber) {
t.lastWrite.Store(time.Now().UnixNano())
}
func (t *writeTrackingTracer) DroppedPacket(net.Addr, logging.PacketType, logging.ByteCount, logging.PacketDropReason) {
}
func (t *writeTrackingTracer) LastWrite() time.Time {
return time.Unix(0, t.lastWrite.Load())
}
// A transportPacketConn is a net.PacketConn that uses a quic.Transport.
type transportPacketConn struct {
tran *quic.Transport
readDeadline atomic.Value // time.Time
}
func (t *transportPacketConn) ReadFrom(p []byte) (n int, addr net.Addr, err error) {
ctx := context.Background()
if deadline, ok := t.readDeadline.Load().(time.Time); ok && !deadline.IsZero() {
var cancel context.CancelFunc
ctx, cancel = context.WithDeadline(ctx, deadline)
defer cancel()
}
return t.tran.ReadNonQUICPacket(ctx, p)
}
func (t *transportPacketConn) WriteTo(p []byte, addr net.Addr) (n int, err error) {
return t.tran.WriteTo(p, addr)
}
func (t *transportPacketConn) Close() error {
return errUnsupported
}
func (t *transportPacketConn) LocalAddr() net.Addr {
return t.tran.Conn.LocalAddr()
}
func (t *transportPacketConn) SetDeadline(deadline time.Time) error {
return t.SetReadDeadline(deadline)
}
func (t *transportPacketConn) SetReadDeadline(deadline time.Time) error {
t.readDeadline.Store(deadline)
return nil
}
func (t *transportPacketConn) SetWriteDeadline(_ time.Time) error {
return nil // yolo
}
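
The transportPacketConn adapter above lets non-QUIC datagrams (for instance STUN traffic) be read through the same quic.Transport that carries the BEP connections, so one UDP socket can serve both. A minimal sketch of how a caller might drive it, assuming the transportPacketConn type from this diff and quic-go's Transport; serveNonQUIC and handlePacket are placeholders, not code from this change:

func serveNonQUIC(udpConn net.PacketConn, handlePacket func([]byte, net.Addr)) error {
	tran := &quic.Transport{Conn: udpConn}
	defer tran.Close()
	pc := &transportPacketConn{tran: tran}
	buf := make([]byte, 1500)
	for {
		// The deadline is enforced via the context that ReadFrom builds internally;
		// an expired deadline surfaces here as an error like any other.
		_ = pc.SetReadDeadline(time.Now().Add(30 * time.Second))
		n, addr, err := pc.ReadFrom(buf)
		if err != nil {
			return err
		}
		handlePacket(buf[:n], addr)
	}
}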

View File

@@ -23,6 +23,8 @@ import (
stdsync "sync"
"time"
"golang.org/x/exp/slices"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections/registry"
"github.com/syncthing/syncthing/lib/discover"
@@ -30,9 +32,10 @@ import (
"github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
// Registers NAT service providers
_ "github.com/syncthing/syncthing/lib/pmp"
@@ -582,7 +585,7 @@ func (s *service) dialDevices(ctx context.Context, now time.Time, cfg config.Con
// allowed additional number of connections (if limited).
numConns := 0
var numConnsMut stdsync.Mutex
dialSemaphore := util.NewSemaphore(dialMaxParallel)
dialSemaphore := semaphore.New(dialMaxParallel)
dialWG := new(stdsync.WaitGroup)
dialCtx, dialCancel := context.WithCancel(ctx)
defer func() {
@@ -698,7 +701,7 @@ func (s *service) resolveDeviceAddrs(ctx context.Context, cfg config.DeviceConfi
addrs = append(addrs, addr)
}
}
return util.UniqueTrimmedStrings(addrs)
return stringutil.UniqueTrimmedStrings(addrs)
}
type lanChecker struct {
@@ -875,7 +878,7 @@ func (s *service) checkAndSignalConnectLoopOnUpdatedDevices(from, to config.Conf
if oldDev, ok := oldDevices[dev.DeviceID]; !ok || oldDev.Paused {
s.dialNowDevices[dev.DeviceID] = struct{}{}
dial = true
} else if !util.EqualStrings(oldDev.Addresses, dev.Addresses) {
} else if !slices.Equal(oldDev.Addresses, dev.Addresses) {
dial = true
}
}
@@ -905,7 +908,7 @@ func (s *service) AllAddresses() []string {
}
}
s.listenersMut.RUnlock()
return util.UniqueTrimmedStrings(addrs)
return stringutil.UniqueTrimmedStrings(addrs)
}
func (s *service) ExternalAddresses() []string {
@@ -920,7 +923,7 @@ func (s *service) ExternalAddresses() []string {
}
}
s.listenersMut.RUnlock()
return util.UniqueTrimmedStrings(addrs)
return stringutil.UniqueTrimmedStrings(addrs)
}
func (s *service) ListenerStatus() map[string]ListenerStatusEntry {
@@ -1079,7 +1082,7 @@ func IsAllowedNetwork(host string, allowed []string) bool {
return false
}
func (s *service) dialParallel(ctx context.Context, deviceID protocol.DeviceID, dialTargets []dialTarget, parentSema *util.Semaphore) (internalConn, bool) {
func (s *service) dialParallel(ctx context.Context, deviceID protocol.DeviceID, dialTargets []dialTarget, parentSema *semaphore.Semaphore) (internalConn, bool) {
// Group targets into buckets by priority
dialTargetBuckets := make(map[int][]dialTarget, len(dialTargets))
for _, tgt := range dialTargets {
@@ -1095,7 +1098,7 @@ func (s *service) dialParallel(ctx context.Context, deviceID protocol.DeviceID,
// Sort the priorities so that we dial lowest first (which means highest...)
sort.Ints(priorities)
sema := util.MultiSemaphore{util.NewSemaphore(dialMaxParallelPerDevice), parentSema}
sema := semaphore.MultiSemaphore{semaphore.New(dialMaxParallelPerDevice), parentSema}
for _, prio := range priorities {
tgts := dialTargetBuckets[prio]
res := make(chan internalConn, len(tgts))

View File

@@ -71,12 +71,9 @@ func getHostPortsForAllAdapters(port int) []string {
portStr := strconv.Itoa(port)
for _, network := range nets {
// Only IPv4 addresses, as v6 link local require an interface identifiers to work correctly
// And non link local in theory are globally routable anyway.
if network.IP.To4() == nil {
continue
}
if network.IP.IsLinkLocalUnicast() || (isV4Local(network.IP) && network.IP.IsGlobalUnicast()) {
// Only accept IPv4 link-local unicast and the private ranges defined in RFC 1918 and RFC 4193
// IPv6 link-local addresses require an interface identifier to work correctly
if (network.IP.To4() != nil && network.IP.IsLinkLocalUnicast()) || network.IP.IsPrivate() {
hostPorts = append(hostPorts, net.JoinHostPort(network.IP.String(), portStr))
}
}
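
The rewritten filter relies on net.IP.IsPrivate, which matches both the RFC 1918 IPv4 ranges and the RFC 4193 IPv6 ULA range (fc00::/7), plus an explicit allowance for IPv4 link-local addresses. A quick standalone illustration of which addresses get through the same condition (not code from this change):

for _, s := range []string{"192.168.1.10", "10.0.0.5", "fd12:3456::1", "169.254.10.1", "fe80::1", "2001:db8::1"} {
	ip := net.ParseIP(s)
	ok := (ip.To4() != nil && ip.IsLinkLocalUnicast()) || ip.IsPrivate()
	fmt.Printf("%-15s %v\n", s, ok) // true, true, true, true, false, false
}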
@@ -107,17 +104,6 @@ func resolve(network, hostPort string) (net.IP, int, error) {
return net.IPv4zero, 0, net.UnknownNetworkError(network)
}
func isV4Local(ip net.IP) bool {
// See https://go-review.googlesource.com/c/go/+/162998/
// We only take the V4 part of that.
if ip4 := ip.To4(); ip4 != nil {
return ip4[0] == 10 ||
(ip4[0] == 172 && ip4[1]&0xf0 == 16) ||
(ip4[0] == 192 && ip4[1] == 168)
}
return false
}
func maybeReplacePort(uri *url.URL, laddr net.Addr) *url.URL {
if laddr == nil {
return uri

View File

@@ -23,9 +23,9 @@ import (
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
"github.com/thejerf/suture/v4"
)
@@ -1042,7 +1042,7 @@ func (db *Lowlevel) loadMetadataTracker(folder string) (*metadataTracker, error)
}
if age := time.Since(meta.Created()); age > db.recheckInterval {
l.Infof("Stored folder metadata for %q is %v old; recalculating", folder, util.NiceDurationString(age))
l.Infof("Stored folder metadata for %q is %v old; recalculating", folder, stringutil.NiceDurationString(age))
return db.getMetaAndCheck(folder)
}

View File

@@ -22,9 +22,9 @@ import (
"github.com/syncthing/syncthing/lib/connections/registry"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
)
// The Manager aggregates results from multiple Finders. Each Finder has
@@ -158,7 +158,7 @@ func (m *manager) Lookup(ctx context.Context, deviceID protocol.DeviceID) (addre
}
m.mut.RUnlock()
addresses = util.UniqueTrimmedStrings(addresses)
addresses = stringutil.UniqueTrimmedStrings(addresses)
sort.Strings(addresses)
l.Debugln("lookup results for", deviceID)
@@ -223,7 +223,7 @@ func (m *manager) Cache() map[protocol.DeviceID]CacheEntry {
m.mut.RUnlock()
for k, v := range res {
v.Addresses = util.UniqueTrimmedStrings(v.Addresses)
v.Addresses = stringutil.UniqueTrimmedStrings(v.Addresses)
res[k] = v
}

View File

@@ -297,6 +297,7 @@ loop:
case e := <-l.events:
// Incoming events get sent
l.sendEvent(e)
metricEvents.WithLabelValues(e.Type.String(), metricEventStateCreated).Inc()
case fn := <-l.funcs:
// Subscriptions are handled here.
@@ -345,9 +346,11 @@ func (l *logger) sendEvent(e Event) {
select {
case s.events <- e:
metricEvents.WithLabelValues(e.Type.String(), metricEventStateDelivered).Inc()
case <-l.timeout.C:
// if s.events is not ready, drop the event
timedOut = true
metricEvents.WithLabelValues(e.Type.String(), metricEventStateDropped).Inc()
}
// If stop returns false it already sent something to the

lib/events/metrics.go (new file, 25 lines)
View File

@@ -0,0 +1,25 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package events
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var metricEvents = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "events",
Name: "total",
Help: "Total number of created/forwarded/dropped events",
}, []string{"event", "state"})
const (
metricEventStateCreated = "created"
metricEventStateDelivered = "delivered"
metricEventStateDropped = "dropped"
)
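
The counter vector is keyed by event type and delivery state, and the logger changes earlier in this diff increment it in three places: when an event is created, when it is delivered to a subscriber, and when a slow subscriber causes it to be dropped. Because promauto registers the vector with the default registry at init time, exposing it needs nothing beyond the stock client_golang handler; the lines below show the generic pattern only, not how Syncthing's own API wires it up:

// As in the hunk above, on event creation:
metricEvents.WithLabelValues(e.Type.String(), metricEventStateCreated).Inc()

// Generic exposure via the default registry
// (import "net/http" and "github.com/prometheus/client_golang/prometheus/promhttp"):
http.Handle("/metrics", promhttp.Handler())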

View File

@@ -28,6 +28,7 @@ const (
filesystemWrapperTypeError
filesystemWrapperTypeWalk
filesystemWrapperTypeLog
filesystemWrapperTypeMetrics
)
type XattrFilter interface {
@@ -275,6 +276,8 @@ func NewFilesystem(fsType FilesystemType, uri string, opts ...Option) Filesystem
fs = mtimeOpt.apply(fs)
}
fs = &metricsFS{next: fs}
if l.ShouldDebug("walkfs") {
return NewWalkFilesystem(&logFilesystem{fs})
}
@@ -290,7 +293,8 @@ func NewFilesystem(fsType FilesystemType, uri string, opts ...Option) Filesystem
// root, represents an internal file that should always be ignored. The file
// path must be clean (i.e., in canonical shortest form).
func IsInternal(file string) bool {
// fs cannot import config, so we hard code .stfolder here (config.DefaultMarkerName)
// fs cannot import config or versioner, so we hard code .stfolder
// (config.DefaultMarkerName) and .stversions (versioner.DefaultPath)
internals := []string{".stfolder", ".stignore", ".stversions"}
for _, internal := range internals {
if file == internal {

View File

@@ -320,7 +320,15 @@ func TestCopyRange(tttt *testing.T) {
t.Fatal(err)
}
if err := impl(src.(basicFile), dst.(basicFile), testCase.srcOffset, testCase.dstOffset, testCase.copySize); err != nil {
srcBasic, ok := unwrap(src).(basicFile)
if !ok {
t.Fatal("src file is not a basic file")
}
dstBasic, ok := unwrap(dst).(basicFile)
if !ok {
t.Fatal("dst file is not a basic file")
}
if err := impl(srcBasic, dstBasic, testCase.srcOffset, testCase.dstOffset, testCase.copySize); err != nil {
if err == syscall.ENOTSUP {
// Test runner can adjust directory in which to run the tests, that allow broader tests.
t.Skip("Not supported on the current filesystem, set STFSTESTPATH env var.")

lib/fs/metrics.go (new file, 339 lines)
View File

@@ -0,0 +1,339 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package fs
import (
"context"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/syncthing/syncthing/lib/protocol"
)
var (
metricTotalOperationSeconds = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "fs",
Name: "operation_seconds_total",
Help: "Total time spent in filesystem operations, per filesystem root and operation",
}, []string{"root", "operation"})
metricTotalOperationsCount = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "fs",
Name: "operations_total",
Help: "Total number of filesystem operations, per filesystem root and operation",
}, []string{"root", "operation"})
metricTotalBytesCount = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "fs",
Name: "operation_bytes_total",
Help: "Total number of filesystem bytes transferred, per filesystem root and operation",
}, []string{"root", "operation"})
)
const (
// fs operations
metricOpChmod = "chmod"
metricOpLchmod = "lchmod"
metricOpChtimes = "chtimes"
metricOpCreate = "create"
metricOpCreateSymlink = "createsymlink"
metricOpDirNames = "dirnames"
metricOpLstat = "lstat"
metricOpMkdir = "mdkir"
metricOpMkdirAll = "mkdirall"
metricOpOpen = "open"
metricOpOpenFile = "openfile"
metricOpReadSymlink = "readsymlink"
metricOpRemove = "remove"
metricOpRemoveAll = "removeall"
metricOpRename = "rename"
metricOpStat = "stat"
metricOpSymlinksSupported = "symlinkssupported"
metricOpWalk = "walk"
metricOpWatch = "watch"
metricOpHide = "hide"
metricOpUnhide = "unhide"
metricOpGlob = "glob"
metricOpRoots = "roots"
metricOpUsage = "usage"
metricOpType = "type"
metricOpURI = "uri"
metricOpOptions = "options"
metricOpSameFile = "samefile"
metricOpPlatformData = "platformdata"
metricOpGetXattr = "getxattr"
metricOpSetXattr = "setxattr"
// file operations
metricOpRead = "read"
metricOpReadAt = "readat"
metricOpWrite = "write"
metricOpWriteAt = "writeat"
metricOpTruncate = "truncate"
metricOpSeek = "seek"
metricOpSync = "sync"
metricOpClose = "close"
metricOpName = "name"
)
type metricsFS struct {
next Filesystem
}
var _ Filesystem = (*metricsFS)(nil)
func (m *metricsFS) account(op string) func(bytes int) {
t0 := time.Now()
root := m.next.URI()
return func(bytes int) {
metricTotalOperationSeconds.WithLabelValues(root, op).Add(time.Since(t0).Seconds())
metricTotalOperationsCount.WithLabelValues(root, op).Inc()
if bytes >= 0 {
metricTotalBytesCount.WithLabelValues(root, op).Add(float64(bytes))
}
}
}
func (m *metricsFS) Chmod(name string, mode FileMode) error {
defer m.account(metricOpChmod)(-1)
return m.next.Chmod(name, mode)
}
func (m *metricsFS) Lchown(name string, uid, gid string) error {
defer m.account(metricOpLchmod)(-1)
return m.next.Lchown(name, uid, gid)
}
func (m *metricsFS) Chtimes(name string, atime time.Time, mtime time.Time) error {
defer m.account(metricOpChtimes)(-1)
return m.next.Chtimes(name, atime, mtime)
}
func (m *metricsFS) Create(name string) (File, error) {
defer m.account(metricOpCreate)(-1)
f, err := m.next.Create(name)
if err != nil {
return nil, err
}
return &metricsFile{next: f, fs: m}, nil
}
func (m *metricsFS) CreateSymlink(target, name string) error {
defer m.account(metricOpCreateSymlink)(-1)
return m.next.CreateSymlink(target, name)
}
func (m *metricsFS) DirNames(name string) ([]string, error) {
defer m.account(metricOpDirNames)(-1)
return m.next.DirNames(name)
}
func (m *metricsFS) Lstat(name string) (FileInfo, error) {
defer m.account(metricOpLstat)(-1)
return m.next.Lstat(name)
}
func (m *metricsFS) Mkdir(name string, perm FileMode) error {
defer m.account(metricOpMkdir)(-1)
return m.next.Mkdir(name, perm)
}
func (m *metricsFS) MkdirAll(name string, perm FileMode) error {
defer m.account(metricOpMkdirAll)(-1)
return m.next.MkdirAll(name, perm)
}
func (m *metricsFS) Open(name string) (File, error) {
defer m.account(metricOpOpen)(-1)
f, err := m.next.Open(name)
if err != nil {
return nil, err
}
return &metricsFile{next: f, fs: m}, nil
}
func (m *metricsFS) OpenFile(name string, flags int, mode FileMode) (File, error) {
defer m.account(metricOpOpenFile)(-1)
f, err := m.next.OpenFile(name, flags, mode)
if err != nil {
return nil, err
}
return &metricsFile{next: f, fs: m}, nil
}
func (m *metricsFS) ReadSymlink(name string) (string, error) {
defer m.account(metricOpReadSymlink)(-1)
return m.next.ReadSymlink(name)
}
func (m *metricsFS) Remove(name string) error {
defer m.account(metricOpRemove)(-1)
return m.next.Remove(name)
}
func (m *metricsFS) RemoveAll(name string) error {
defer m.account(metricOpRemoveAll)(-1)
return m.next.RemoveAll(name)
}
func (m *metricsFS) Rename(oldname, newname string) error {
defer m.account(metricOpRename)(-1)
return m.next.Rename(oldname, newname)
}
func (m *metricsFS) Stat(name string) (FileInfo, error) {
defer m.account(metricOpStat)(-1)
return m.next.Stat(name)
}
func (m *metricsFS) SymlinksSupported() bool {
defer m.account(metricOpSymlinksSupported)(-1)
return m.next.SymlinksSupported()
}
func (m *metricsFS) Walk(name string, walkFn WalkFunc) error {
defer m.account(metricOpWalk)(-1)
return m.next.Walk(name, walkFn)
}
func (m *metricsFS) Watch(path string, ignore Matcher, ctx context.Context, ignorePerms bool) (<-chan Event, <-chan error, error) {
defer m.account(metricOpWatch)(-1)
return m.next.Watch(path, ignore, ctx, ignorePerms)
}
func (m *metricsFS) Hide(name string) error {
defer m.account(metricOpHide)(-1)
return m.next.Hide(name)
}
func (m *metricsFS) Unhide(name string) error {
defer m.account(metricOpUnhide)(-1)
return m.next.Unhide(name)
}
func (m *metricsFS) Glob(pattern string) ([]string, error) {
defer m.account(metricOpGlob)(-1)
return m.next.Glob(pattern)
}
func (m *metricsFS) Roots() ([]string, error) {
defer m.account(metricOpRoots)(-1)
return m.next.Roots()
}
func (m *metricsFS) Usage(name string) (Usage, error) {
defer m.account(metricOpUsage)(-1)
return m.next.Usage(name)
}
func (m *metricsFS) Type() FilesystemType {
defer m.account(metricOpType)(-1)
return m.next.Type()
}
func (m *metricsFS) URI() string {
defer m.account(metricOpURI)(-1)
return m.next.URI()
}
func (m *metricsFS) Options() []Option {
defer m.account(metricOpOptions)(-1)
return m.next.Options()
}
func (m *metricsFS) SameFile(fi1, fi2 FileInfo) bool {
defer m.account(metricOpSameFile)(-1)
return m.next.SameFile(fi1, fi2)
}
func (m *metricsFS) PlatformData(name string, withOwnership, withXattrs bool, xattrFilter XattrFilter) (protocol.PlatformData, error) {
defer m.account(metricOpPlatformData)(-1)
return m.next.PlatformData(name, withOwnership, withXattrs, xattrFilter)
}
func (m *metricsFS) GetXattr(name string, xattrFilter XattrFilter) ([]protocol.Xattr, error) {
defer m.account(metricOpGetXattr)(-1)
return m.next.GetXattr(name, xattrFilter)
}
func (m *metricsFS) SetXattr(path string, xattrs []protocol.Xattr, xattrFilter XattrFilter) error {
defer m.account(metricOpSetXattr)(-1)
return m.next.SetXattr(path, xattrs, xattrFilter)
}
func (m *metricsFS) underlying() (Filesystem, bool) {
return m.next, true
}
func (m *metricsFS) wrapperType() filesystemWrapperType {
return filesystemWrapperTypeMetrics
}
type metricsFile struct {
fs *metricsFS
next File
}
func (m *metricsFile) Read(p []byte) (n int, err error) {
acc := m.fs.account(metricOpRead)
defer func() { acc(n) }()
return m.next.Read(p)
}
func (m *metricsFile) ReadAt(p []byte, off int64) (n int, err error) {
acc := m.fs.account(metricOpReadAt)
defer func() { acc(n) }()
return m.next.ReadAt(p, off)
}
func (m *metricsFile) Seek(offset int64, whence int) (int64, error) {
defer m.fs.account(metricOpSeek)(-1)
return m.next.Seek(offset, whence)
}
func (m *metricsFile) Stat() (FileInfo, error) {
defer m.fs.account(metricOpStat)(-1)
return m.next.Stat()
}
func (m *metricsFile) Sync() error {
defer m.fs.account(metricOpSync)(-1)
return m.next.Sync()
}
func (m *metricsFile) Truncate(size int64) error {
defer m.fs.account(metricOpTruncate)(-1)
return m.next.Truncate(size)
}
func (m *metricsFile) Write(p []byte) (n int, err error) {
acc := m.fs.account(metricOpWrite)
defer func() { acc(n) }()
return m.next.Write(p)
}
func (m *metricsFile) WriteAt(p []byte, off int64) (n int, err error) {
acc := m.fs.account(metricOpWriteAt)
defer func() { acc(n) }()
return m.next.WriteAt(p, off)
}
func (m *metricsFile) Close() error {
defer m.fs.account(metricOpClose)(-1)
return m.next.Close()
}
func (m *metricsFile) Name() string {
defer m.fs.account(metricOpName)(-1)
return m.next.Name()
}
func (m *metricsFile) unwrap() File {
return m.next
}
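
A note on the accounting pattern used throughout this file: defer m.account(op)(-1) evaluates m.account(op) immediately, capturing the start time, and defers only the returned closure, while the Read/Write methods keep the closure and feed it the named return value so the byte count is known at exit. Spelled out, the first form is equivalent to:

done := m.account(metricOpStat) // account() runs now, t0 is captured here
defer done(-1)                  // only the returned closure runs on return
return m.next.Stat(name)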

View File

@@ -260,6 +260,8 @@ func newMtimeFS(path string, db database, options ...MtimeFSOption) *mtimeFS {
}
func newMtimeFSWithWalk(path string, db database, options ...MtimeFSOption) (*mtimeFS, *walkFilesystem) {
wfs := NewFilesystem(FilesystemTypeBasic, path, NewMtimeOption(db, options...)).(*walkFilesystem)
return wfs.Filesystem.(*mtimeFS), wfs
fs := NewFilesystem(FilesystemTypeBasic, path, NewMtimeOption(db, options...))
wfs, _ := unwrapFilesystem(fs, filesystemWrapperTypeWalk)
mfs, _ := unwrapFilesystem(fs, filesystemWrapperTypeMtime)
return mfs.(*mtimeFS), wfs.(*walkFilesystem)
}
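
unwrapFilesystem itself is not shown in this diff; judging from its call sites and the underlying()/wrapperType() hooks that metricsFS implements above, it presumably walks the wrapper chain until it finds the requested wrapper type. A rough sketch of that idea, deliberately under a different name since the real implementation is not part of this excerpt:

func unwrapToType(fs Filesystem, want filesystemWrapperType) (Filesystem, bool) {
	for {
		w, ok := fs.(interface {
			underlying() (Filesystem, bool)
			wrapperType() filesystemWrapperType
		})
		if !ok {
			return nil, false
		}
		if w.wrapperType() == want {
			return fs, true
		}
		next, ok := w.underlying()
		if !ok {
			return nil, false
		}
		fs = next
	}
}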

View File

@@ -14,6 +14,7 @@ import (
"github.com/syncthing/syncthing/lib/protocol"
protocolmocks "github.com/syncthing/syncthing/lib/protocol/mocks"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/scanner"
)
@@ -36,10 +37,11 @@ func newFakeConnection(id protocol.DeviceID, model Model) *fakeConnection {
f.CloseCalls(func(err error) {
f.closeOnce.Do(func() {
close(f.closed)
model.Closed(f, err)
})
model.Closed(f, err)
f.ClosedReturns(f.closed)
})
f.StringReturns(rand.String(8))
return f
}

View File

@@ -24,10 +24,11 @@ import (
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/stats"
"github.com/syncthing/syncthing/lib/stringutil"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/versioner"
"github.com/syncthing/syncthing/lib/watchaggregator"
)
@@ -39,7 +40,7 @@ type folder struct {
stateTracker
config.FolderConfiguration
*stats.FolderStatisticsReference
ioLimiter *util.Semaphore
ioLimiter *semaphore.Semaphore
localFlags uint32
@@ -95,7 +96,7 @@ type puller interface {
pull() (bool, error) // true when successful and should not be retried
}
func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, evLogger events.Logger, ioLimiter *util.Semaphore, ver versioner.Versioner) folder {
func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, evLogger events.Logger, ioLimiter *semaphore.Semaphore, ver versioner.Versioner) folder {
f := folder{
stateTracker: newStateTracker(cfg.ID, evLogger),
FolderConfiguration: cfg,
@@ -137,6 +138,9 @@ func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg conf
f.pullPause = f.pullBasePause()
f.pullFailTimer = time.NewTimer(0)
<-f.pullFailTimer.C
registerFolderMetrics(f.ID)
return f
}
@@ -423,7 +427,7 @@ func (f *folder) pull() (success bool, err error) {
// Pulling failed, try again later.
delay := f.pullPause + time.Since(startTime)
l.Infof("Folder %v isn't making sync progress - retrying in %v.", f.Description(), util.NiceDurationString(delay))
l.Infof("Folder %v isn't making sync progress - retrying in %v.", f.Description(), stringutil.NiceDurationString(delay))
f.pullFailTimer.Reset(delay)
return false, err
@@ -459,6 +463,11 @@ func (f *folder) scanSubdirs(subDirs []string) error {
}
defer f.ioLimiter.Give(1)
metricFolderScans.WithLabelValues(f.ID).Inc()
ctx, cancel := context.WithCancel(f.ctx)
defer cancel()
go addTimeUntilCancelled(ctx, metricFolderScanSeconds.WithLabelValues(f.ID))
for i := range subDirs {
sub := osutil.NativeFilename(subDirs[i])
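
scanSubdirs now also starts addTimeUntilCancelled in a goroutine, which keeps adding elapsed wall-clock time to the per-folder scan-seconds counter until the scan's context is cancelled. That helper is not part of this excerpt; a function with the behaviour its call site implies could look roughly like the sketch below (different name, and the one-second tick interval is an assumption):

func addElapsedUntilCancelled(ctx context.Context, counter prometheus.Counter) {
	last := time.Now()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case now := <-ticker.C:
			counter.Add(now.Sub(last).Seconds())
			last = now
		case <-ctx.Done():
			counter.Add(time.Since(last).Seconds()) // account for the final partial tick
			return
		}
	}
}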

View File

@@ -16,7 +16,7 @@ import (
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/versioner"
)
@@ -28,7 +28,7 @@ type receiveEncryptedFolder struct {
*sendReceiveFolder
}
func newReceiveEncryptedFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *util.Semaphore) service {
func newReceiveEncryptedFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *semaphore.Semaphore) service {
f := &receiveEncryptedFolder{newSendReceiveFolder(model, fset, ignores, cfg, ver, evLogger, ioLimiter).(*sendReceiveFolder)}
f.localFlags = protocol.FlagLocalReceiveOnly // gets propagated to the scanner, and set on locally changed files
return f

View File

@@ -15,7 +15,7 @@ import (
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/ignore"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/versioner"
)
@@ -57,7 +57,7 @@ type receiveOnlyFolder struct {
*sendReceiveFolder
}
func newReceiveOnlyFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *util.Semaphore) service {
func newReceiveOnlyFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *semaphore.Semaphore) service {
sr := newSendReceiveFolder(model, fset, ignores, cfg, ver, evLogger, ioLimiter).(*sendReceiveFolder)
sr.localFlags = protocol.FlagLocalReceiveOnly // gets propagated to the scanner, and set on locally changed files
return &receiveOnlyFolder{sr}

View File

@@ -12,7 +12,7 @@ import (
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/ignore"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/versioner"
)
@@ -24,7 +24,7 @@ type sendOnlyFolder struct {
folder
}
func newSendOnlyFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, _ versioner.Versioner, evLogger events.Logger, ioLimiter *util.Semaphore) service {
func newSendOnlyFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, _ versioner.Versioner, evLogger events.Logger, ioLimiter *semaphore.Semaphore) service {
f := &sendOnlyFolder{
folder: newFolder(model, fset, ignores, cfg, evLogger, ioLimiter, nil),
}

View File

@@ -8,6 +8,7 @@ package model
import (
"bytes"
"context"
"errors"
"fmt"
"io"
@@ -26,9 +27,9 @@ import (
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/versioner"
"github.com/syncthing/syncthing/lib/weakhash"
)
@@ -124,17 +125,17 @@ type sendReceiveFolder struct {
queue *jobQueue
blockPullReorderer blockPullReorderer
writeLimiter *util.Semaphore
writeLimiter *semaphore.Semaphore
tempPullErrors map[string]string // pull errors that might be just transient
}
func newSendReceiveFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *util.Semaphore) service {
func newSendReceiveFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, ver versioner.Versioner, evLogger events.Logger, ioLimiter *semaphore.Semaphore) service {
f := &sendReceiveFolder{
folder: newFolder(model, fset, ignores, cfg, evLogger, ioLimiter, ver),
queue: newJobQueue(),
blockPullReorderer: newBlockPullReorderer(cfg.BlockPullOrder, model.id, cfg.DeviceIDs()),
writeLimiter: util.NewSemaphore(cfg.MaxConcurrentWrites),
writeLimiter: semaphore.New(cfg.MaxConcurrentWrites),
}
f.folder.puller = f
@@ -162,12 +163,16 @@ func (f *sendReceiveFolder) pull() (bool, error) {
scanChan := make(chan string)
go f.pullScannerRoutine(scanChan)
defer func() {
close(scanChan)
f.setState(FolderIdle)
}()
metricFolderPulls.WithLabelValues(f.ID).Inc()
ctx, cancel := context.WithCancel(f.ctx)
defer cancel()
go addTimeUntilCancelled(ctx, metricFolderPullSeconds.WithLabelValues(f.ID))
changed := 0
f.errorsMut.Lock()
@@ -573,9 +578,9 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, snap *db.Snapshot,
})
}()
mode := fs.FileMode(file.Permissions & 0777)
mode := fs.FileMode(file.Permissions & 0o777)
if f.IgnorePerms || file.NoPermissions {
mode = 0777
mode = 0o777
}
if shouldDebug() {
@@ -705,7 +710,7 @@ func (f *sendReceiveFolder) checkParent(file string, scanChan chan<- string) boo
return true
}
l.Debugf("%v creating parent directory of %v", f, file)
if err := f.mtimefs.MkdirAll(parent, 0755); err != nil {
if err := f.mtimefs.MkdirAll(parent, 0o755); err != nil {
f.newPullError(file, fmt.Errorf("creating parent dir: %w", err))
return false
}
@@ -1136,12 +1141,12 @@ func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, snap *db.Snapshot
func (f *sendReceiveFolder) reuseBlocks(blocks []protocol.BlockInfo, reused []int, file protocol.FileInfo, tempName string) ([]protocol.BlockInfo, []int) {
// Check for an old temporary file which might have some blocks we could
// reuse.
tempBlocks, err := scanner.HashFile(f.ctx, f.mtimefs, tempName, file.BlockSize(), nil, false)
tempBlocks, err := scanner.HashFile(f.ctx, f.ID, f.mtimefs, tempName, file.BlockSize(), nil, false)
if err != nil {
var caseErr *fs.ErrCaseConflict
if errors.As(err, &caseErr) {
if rerr := f.mtimefs.Rename(caseErr.Real, tempName); rerr == nil {
tempBlocks, err = scanner.HashFile(f.ctx, f.mtimefs, tempName, file.BlockSize(), nil, false)
tempBlocks, err = scanner.HashFile(f.ctx, f.ID, f.mtimefs, tempName, file.BlockSize(), nil, false)
}
}
}
@@ -1235,7 +1240,7 @@ func (f *sendReceiveFolder) shortcutFile(file protocol.FileInfo, dbUpdateChan ch
f.queue.Done(file.Name)
if !f.IgnorePerms && !file.NoPermissions {
if err = f.mtimefs.Chmod(file.Name, fs.FileMode(file.Permissions&0777)); err != nil {
if err = f.mtimefs.Chmod(file.Name, fs.FileMode(file.Permissions&0o777)); err != nil {
f.newPullError(file.Name, fmt.Errorf("shortcut file (setting permissions): %w", err))
return
}
@@ -1249,7 +1254,7 @@ func (f *sendReceiveFolder) shortcutFile(file protocol.FileInfo, dbUpdateChan ch
// Still need to re-write the trailer with the new encrypted fileinfo.
if f.Type == config.FolderTypeReceiveEncrypted {
err = inWritableDir(func(path string) error {
fd, err := f.mtimefs.OpenFile(path, fs.OptReadWrite, 0666)
fd, err := f.mtimefs.OpenFile(path, fs.OptReadWrite, 0o666)
if err != nil {
return err
}
@@ -1329,7 +1334,7 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
// block of all zeroes, so then we should not skip it.
// Pretend we copied it.
state.copiedFromOrigin()
state.skippedSparseBlock(block.Size)
state.copyDone(block)
continue
}
@@ -1348,9 +1353,9 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
state.fail(fmt.Errorf("dst write: %w", err))
}
if offset == block.Offset {
state.copiedFromOrigin()
state.copiedFromOrigin(block.Size)
} else {
state.copiedFromOriginShifted()
state.copiedFromOriginShifted(block.Size)
}
return false
@@ -1398,7 +1403,9 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
state.fail(fmt.Errorf("dst write: %w", err))
}
if path == state.file.Name {
state.copiedFromOrigin()
state.copiedFromOrigin(block.Size)
} else {
state.copiedFromElsewhere(block.Size)
}
return true
})
@@ -1485,7 +1492,7 @@ func (*sendReceiveFolder) verifyBuffer(buf []byte, block protocol.BlockInfo) err
}
func (f *sendReceiveFolder) pullerRoutine(snap *db.Snapshot, in <-chan pullBlockState, out chan<- *sharedPullerState) {
requestLimiter := util.NewSemaphore(f.PullerMaxPendingKiB * 1024)
requestLimiter := semaphore.New(f.PullerMaxPendingKiB * 1024)
wg := sync.NewWaitGroup()
for state := range in {
@@ -1608,7 +1615,7 @@ loop:
func (f *sendReceiveFolder) performFinish(file, curFile protocol.FileInfo, hasCurFile bool, tempName string, snap *db.Snapshot, dbUpdateChan chan<- dbUpdateJob, scanChan chan<- string) error {
// Set the correct permission bits on the new file
if !f.IgnorePerms && !file.NoPermissions {
if err := f.mtimefs.Chmod(tempName, fs.FileMode(file.Permissions&0777)); err != nil {
if err := f.mtimefs.Chmod(tempName, fs.FileMode(file.Permissions&0o777)); err != nil {
return fmt.Errorf("setting permissions: %w", err)
}
}

View File

@@ -299,7 +299,7 @@ func TestCopierFinder(t *testing.T) {
}
// Verify that the fetched blocks have actually been written to the temp file
blks, err := scanner.HashFile(context.TODO(), f.Filesystem(nil), tempFile, protocol.MinBlockSize, nil, false)
blks, err := scanner.HashFile(context.TODO(), f.ID, f.Filesystem(nil), tempFile, protocol.MinBlockSize, nil, false)
if err != nil {
t.Log(err)
}

View File

@@ -396,6 +396,24 @@ func (c *folderSummaryService) sendSummary(ctx context.Context, folder string) {
Summary: data,
})
metricFolderSummary.WithLabelValues(folder, metricScopeGlobal, metricTypeFiles).Set(float64(data.GlobalFiles))
metricFolderSummary.WithLabelValues(folder, metricScopeGlobal, metricTypeDirectories).Set(float64(data.GlobalDirectories))
metricFolderSummary.WithLabelValues(folder, metricScopeGlobal, metricTypeSymlinks).Set(float64(data.GlobalSymlinks))
metricFolderSummary.WithLabelValues(folder, metricScopeGlobal, metricTypeDeleted).Set(float64(data.GlobalDeleted))
metricFolderSummary.WithLabelValues(folder, metricScopeGlobal, metricTypeBytes).Set(float64(data.GlobalBytes))
metricFolderSummary.WithLabelValues(folder, metricScopeLocal, metricTypeFiles).Set(float64(data.LocalFiles))
metricFolderSummary.WithLabelValues(folder, metricScopeLocal, metricTypeDirectories).Set(float64(data.LocalDirectories))
metricFolderSummary.WithLabelValues(folder, metricScopeLocal, metricTypeSymlinks).Set(float64(data.LocalSymlinks))
metricFolderSummary.WithLabelValues(folder, metricScopeLocal, metricTypeDeleted).Set(float64(data.LocalDeleted))
metricFolderSummary.WithLabelValues(folder, metricScopeLocal, metricTypeBytes).Set(float64(data.LocalBytes))
metricFolderSummary.WithLabelValues(folder, metricScopeNeed, metricTypeFiles).Set(float64(data.NeedFiles))
metricFolderSummary.WithLabelValues(folder, metricScopeNeed, metricTypeDirectories).Set(float64(data.NeedDirectories))
metricFolderSummary.WithLabelValues(folder, metricScopeNeed, metricTypeSymlinks).Set(float64(data.NeedSymlinks))
metricFolderSummary.WithLabelValues(folder, metricScopeNeed, metricTypeDeleted).Set(float64(data.NeedDeletes))
metricFolderSummary.WithLabelValues(folder, metricScopeNeed, metricTypeBytes).Set(float64(data.NeedBytes))
for _, devCfg := range c.cfg.Folders()[folder].Devices {
select {
case <-ctx.Done():

View File

@@ -111,6 +111,10 @@ func (s *stateTracker) setState(newState folderState) {
return
}
defer func() {
metricFolderState.WithLabelValues(s.folderID).Set(float64(s.current))
}()
/* This should hold later...
if s.current != FolderIdle && (newState == FolderScanning || newState == FolderSyncing) {
panic("illegal state transition " + s.current.String() + " -> " + newState.String())
@@ -148,6 +152,10 @@ func (s *stateTracker) setError(err error) {
s.mut.Lock()
defer s.mut.Unlock()
defer func() {
metricFolderState.WithLabelValues(s.folderID).Set(float64(s.current))
}()
eventData := map[string]interface{}{
"folder": s.folderID,
"from": s.current.String(),

View File

@@ -12,8 +12,6 @@ import (
"sync"
"time"
"github.com/thejerf/suture/v4"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/events"
@@ -28,7 +26,6 @@ type indexHandler struct {
folderIsReceiveEncrypted bool
prevSequence int64
evLogger events.Logger
token suture.ServiceToken
cond *sync.Cond
paused bool
@@ -373,11 +370,10 @@ func (s *indexHandler) String() string {
}
type indexHandlerRegistry struct {
sup *suture.Supervisor
evLogger events.Logger
conn protocol.Connection
downloads *deviceDownloadState
indexHandlers map[string]*indexHandler
indexHandlers *serviceMap[string, *indexHandler]
startInfos map[string]*clusterConfigDeviceInfo
folderStates map[string]*indexHandlerFolderState
mut sync.Mutex
@@ -389,27 +385,16 @@ type indexHandlerFolderState struct {
runner service
}
func newIndexHandlerRegistry(conn protocol.Connection, downloads *deviceDownloadState, closed chan struct{}, parentSup *suture.Supervisor, evLogger events.Logger) *indexHandlerRegistry {
func newIndexHandlerRegistry(conn protocol.Connection, downloads *deviceDownloadState, evLogger events.Logger) *indexHandlerRegistry {
r := &indexHandlerRegistry{
evLogger: evLogger,
conn: conn,
downloads: downloads,
evLogger: evLogger,
indexHandlers: make(map[string]*indexHandler),
indexHandlers: newServiceMap[string, *indexHandler](evLogger),
startInfos: make(map[string]*clusterConfigDeviceInfo),
folderStates: make(map[string]*indexHandlerFolderState),
mut: sync.Mutex{},
}
r.sup = suture.New(r.String(), svcutil.SpecWithDebugLogger(l))
ourToken := parentSup.Add(r.sup)
r.sup.Add(svcutil.AsService(func(ctx context.Context) error {
select {
case <-ctx.Done():
return ctx.Err()
case <-closed:
parentSup.Remove(ourToken)
}
return nil
}, fmt.Sprintf("%v/waitForClosed", r)))
return r
}
@@ -417,20 +402,18 @@ func (r *indexHandlerRegistry) String() string {
return fmt.Sprintf("indexHandlerRegistry/%v", r.conn.DeviceID().Short())
}
func (r *indexHandlerRegistry) GetSupervisor() *suture.Supervisor {
return r.sup
func (r *indexHandlerRegistry) Serve(ctx context.Context) error {
// Running the index handler registry means running the individual index
// handler children.
return r.indexHandlers.Serve(ctx)
}
func (r *indexHandlerRegistry) startLocked(folder config.FolderConfiguration, fset *db.FileSet, runner service, startInfo *clusterConfigDeviceInfo) {
if is, ok := r.indexHandlers[folder.ID]; ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexHandlers, folder.ID)
}
r.indexHandlers.RemoveAndWait(folder.ID, 0)
delete(r.startInfos, folder.ID)
is := newIndexHandler(r.conn, r.downloads, folder, fset, runner, startInfo, r.evLogger)
is.token = r.sup.Add(is)
r.indexHandlers[folder.ID] = is
r.indexHandlers.Add(folder.ID, is)
// This new connection might help us get in sync.
runner.SchedulePull()
@@ -444,9 +427,7 @@ func (r *indexHandlerRegistry) AddIndexInfo(folder string, startInfo *clusterCon
r.mut.Lock()
defer r.mut.Unlock()
if is, ok := r.indexHandlers[folder]; ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexHandlers, folder)
if r.indexHandlers.RemoveAndWait(folder, 0) {
l.Debugf("Removed index sender for device %v and folder %v due to added pending", r.conn.DeviceID().Short(), folder)
}
folderState, ok := r.folderStates[folder]
@@ -465,10 +446,7 @@ func (r *indexHandlerRegistry) Remove(folder string) {
defer r.mut.Unlock()
l.Debugf("Removing index handler for device %v and folder %v", r.conn.DeviceID().Short(), folder)
if is, ok := r.indexHandlers[folder]; ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexHandlers, folder)
}
r.indexHandlers.RemoveAndWait(folder, 0)
delete(r.startInfos, folder)
l.Debugf("Removed index handler for device %v and folder %v", r.conn.DeviceID().Short(), folder)
}
@@ -480,13 +458,12 @@ func (r *indexHandlerRegistry) RemoveAllExcept(except map[string]remoteFolderSta
r.mut.Lock()
defer r.mut.Unlock()
for folder, is := range r.indexHandlers {
r.indexHandlers.Each(func(folder string, is *indexHandler) {
if _, ok := except[folder]; !ok {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexHandlers, folder)
r.indexHandlers.RemoveAndWait(folder, 0)
l.Debugf("Removed index handler for device %v and folder %v (removeAllExcept)", r.conn.DeviceID().Short(), folder)
}
}
})
for folder := range r.startInfos {
if _, ok := except[folder]; !ok {
delete(r.startInfos, folder)
@@ -518,7 +495,7 @@ func (r *indexHandlerRegistry) RegisterFolderState(folder config.FolderConfigura
func (r *indexHandlerRegistry) folderPausedLocked(folder string) {
l.Debugf("Pausing index handler for device %v and folder %v", r.conn.DeviceID().Short(), folder)
delete(r.folderStates, folder)
if is, ok := r.indexHandlers[folder]; ok {
if is, ok := r.indexHandlers.Get(folder); ok {
is.pause()
l.Debugf("Paused index handler for device %v and folder %v", r.conn.DeviceID().Short(), folder)
} else {
@@ -536,11 +513,10 @@ func (r *indexHandlerRegistry) folderRunningLocked(folder config.FolderConfigura
runner: runner,
}
is, isOk := r.indexHandlers[folder.ID]
is, isOk := r.indexHandlers.Get(folder.ID)
if info, ok := r.startInfos[folder.ID]; ok {
if isOk {
r.sup.RemoveAndWait(is.token, 0)
delete(r.indexHandlers, folder.ID)
r.indexHandlers.RemoveAndWait(folder.ID, 0)
l.Debugf("Removed index handler for device %v and folder %v in resume", r.conn.DeviceID().Short(), folder.ID)
}
r.startLocked(folder, fset, runner, info)
@@ -557,7 +533,7 @@ func (r *indexHandlerRegistry) folderRunningLocked(folder config.FolderConfigura
func (r *indexHandlerRegistry) ReceiveIndex(folder string, fs []protocol.FileInfo, update bool, op string) error {
r.mut.Lock()
defer r.mut.Unlock()
is, isOk := r.indexHandlers[folder]
is, isOk := r.indexHandlers.Get(folder)
if !isOk {
l.Infof("%v for nonexistent or paused folder %q", op, folder)
return fmt.Errorf("%s: %w", folder, ErrFolderMissing)
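
With these changes the registry no longer hands out and tracks suture service tokens itself; the serviceMap added at the end of this diff owns starting and stopping the individual index handlers, and the call sites reduce to the map-style operations seen above. In outline, with behaviour inferred from how the methods are used here rather than from their full definitions:

r.indexHandlers.Add(folder.ID, is)              // store the handler and start it under the map's supervisor
h, ok := r.indexHandlers.Get(folder.ID)         // plain lookup, no lifecycle effect
r.indexHandlers.RemoveAndWait(folder.ID, 0)     // stop the handler, wait for it to exit, then forget it
r.indexHandlers.Each(func(id string, h *indexHandler) {
	// visit every registered handler
})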

lib/model/metrics.go (new file, 93 lines)
View File

@@ -0,0 +1,93 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricFolderState = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_state",
Help: "Current folder state",
}, []string{"folder"})
metricFolderSummary = promauto.NewGaugeVec(prometheus.GaugeOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_summary",
Help: "Current folder summary data (counts for global/local/need files/directories/symlinks/deleted/bytes)",
}, []string{"folder", "scope", "type"})
metricFolderPulls = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_pulls_total",
Help: "Total number of folder pull iterations, per folder ID",
}, []string{"folder"})
metricFolderPullSeconds = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_pull_seconds_total",
Help: "Total time spent in folder pull iterations, per folder ID",
}, []string{"folder"})
metricFolderScans = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_scans_total",
Help: "Total number of folder scan iterations, per folder ID",
}, []string{"folder"})
metricFolderScanSeconds = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_scan_seconds_total",
Help: "Total time spent in folder scan iterations, per folder ID",
}, []string{"folder"})
metricFolderProcessedBytesTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "model",
Name: "folder_processed_bytes_total",
Help: "Total amount of data processed during folder syncing, per folder ID and data source (network/local_origin/local_other/local_shifted/skipped)",
}, []string{"folder", "source"})
)
const (
metricSourceNetwork = "network" // from the network
metricSourceLocalOrigin = "local_origin" // from the existing version of the local file
metricSourceLocalOther = "local_other" // from a different local file
metricSourceLocalShifted = "local_shifted" // from the existing version of the local file, rolling hash shifted
metricSourceSkipped = "skipped" // block of all zeroes, invented out of thin air
metricScopeGlobal = "global"
metricScopeLocal = "local"
metricScopeNeed = "need"
metricTypeFiles = "files"
metricTypeDirectories = "directories"
metricTypeSymlinks = "symlinks"
metricTypeDeleted = "deleted"
metricTypeBytes = "bytes"
)
func registerFolderMetrics(folderID string) {
// Register metrics for this folder, so that counters are present even
// when zero.
metricFolderState.WithLabelValues(folderID)
metricFolderPulls.WithLabelValues(folderID)
metricFolderPullSeconds.WithLabelValues(folderID)
metricFolderScans.WithLabelValues(folderID)
metricFolderScanSeconds.WithLabelValues(folderID)
metricFolderProcessedBytesTotal.WithLabelValues(folderID, metricSourceNetwork)
metricFolderProcessedBytesTotal.WithLabelValues(folderID, metricSourceLocalOrigin)
metricFolderProcessedBytesTotal.WithLabelValues(folderID, metricSourceLocalOther)
metricFolderProcessedBytesTotal.WithLabelValues(folderID, metricSourceLocalShifted)
metricFolderProcessedBytesTotal.WithLabelValues(folderID, metricSourceSkipped)
}
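
Pre-registering the label combinations means a Prometheus scrape sees zero-valued series for every configured folder immediately, instead of series popping into existence on first use. The source label lines up with the sharedPullerState calls changed earlier in this diff (copiedFromOrigin, copiedFromOriginShifted, copiedFromElsewhere, skippedSparseBlock, plus blocks pulled over the network); the actual increments live in code not shown in this excerpt, but they amount to something like the following, with folderID and block standing in as placeholders:

// e.g. when a block is reused unshifted from the existing local file:
metricFolderProcessedBytesTotal.
	WithLabelValues(folderID, metricSourceLocalOrigin).
	Add(float64(block.Size))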

View File

@@ -38,11 +38,11 @@ import (
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/stats"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/ur/contract"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/versioner"
)
@@ -136,10 +136,10 @@ type model struct {
shortID protocol.ShortID
// globalRequestLimiter limits the amount of data in concurrent incoming
// requests
globalRequestLimiter *util.Semaphore
globalRequestLimiter *semaphore.Semaphore
// folderIOLimiter limits the number of concurrent I/O heavy operations,
// such as scans and pulls.
folderIOLimiter *util.Semaphore
folderIOLimiter *semaphore.Semaphore
fatalChan chan error
started chan struct{}
keyGen *protocol.KeyGenerator
@@ -160,12 +160,12 @@ type model struct {
// fields protected by pmut
pmut sync.RWMutex
conn map[protocol.DeviceID]protocol.Connection
connRequestLimiters map[protocol.DeviceID]*util.Semaphore
connRequestLimiters map[protocol.DeviceID]*semaphore.Semaphore
closed map[protocol.DeviceID]chan struct{}
helloMessages map[protocol.DeviceID]protocol.Hello
deviceDownloads map[protocol.DeviceID]*deviceDownloadState
remoteFolderStates map[protocol.DeviceID]map[string]remoteFolderState // deviceID -> folders
indexHandlers map[protocol.DeviceID]*indexHandlerRegistry
indexHandlers *serviceMap[protocol.DeviceID, *indexHandlerRegistry]
// for testing only
foldersRunning atomic.Int32
@@ -173,7 +173,7 @@ type model struct {
var _ config.Verifier = &model{}
type folderFactory func(*model, *db.FileSet, *ignore.Matcher, config.FolderConfiguration, versioner.Versioner, events.Logger, *util.Semaphore) service
type folderFactory func(*model, *db.FileSet, *ignore.Matcher, config.FolderConfiguration, versioner.Versioner, events.Logger, *semaphore.Semaphore) service
var folderFactories = make(map[config.FolderType]folderFactory)
@@ -222,8 +222,8 @@ func NewModel(cfg config.Wrapper, id protocol.DeviceID, clientName, clientVersio
finder: db.NewBlockFinder(ldb),
progressEmitter: NewProgressEmitter(cfg, evLogger),
shortID: id.Short(),
globalRequestLimiter: util.NewSemaphore(1024 * cfg.Options().MaxConcurrentIncomingRequestKiB()),
folderIOLimiter: util.NewSemaphore(cfg.Options().MaxFolderConcurrency()),
globalRequestLimiter: semaphore.New(1024 * cfg.Options().MaxConcurrentIncomingRequestKiB()),
folderIOLimiter: semaphore.New(cfg.Options().MaxFolderConcurrency()),
fatalChan: make(chan error),
started: make(chan struct{}),
keyGen: keyGen,
@@ -243,17 +243,18 @@ func NewModel(cfg config.Wrapper, id protocol.DeviceID, clientName, clientVersio
// fields protected by pmut
pmut: sync.NewRWMutex(),
conn: make(map[protocol.DeviceID]protocol.Connection),
connRequestLimiters: make(map[protocol.DeviceID]*util.Semaphore),
connRequestLimiters: make(map[protocol.DeviceID]*semaphore.Semaphore),
closed: make(map[protocol.DeviceID]chan struct{}),
helloMessages: make(map[protocol.DeviceID]protocol.Hello),
deviceDownloads: make(map[protocol.DeviceID]*deviceDownloadState),
remoteFolderStates: make(map[protocol.DeviceID]map[string]remoteFolderState),
indexHandlers: make(map[protocol.DeviceID]*indexHandlerRegistry),
indexHandlers: newServiceMap[protocol.DeviceID, *indexHandlerRegistry](evLogger),
}
for devID := range cfg.Devices() {
m.deviceStatRefs[devID] = stats.NewDeviceStatisticsReference(m.db, devID)
}
m.Add(m.progressEmitter)
m.Add(m.indexHandlers)
m.Add(svcutil.AsService(m.serve, m.String()))
return m
@@ -399,7 +400,7 @@ func (m *model) addAndStartFolderLockedWithIgnores(cfg config.FolderConfiguratio
// These are our metadata files, and they should always be hidden.
ffs := cfg.Filesystem(nil)
_ = ffs.Hide(config.DefaultMarkerName)
_ = ffs.Hide(".stversions")
_ = ffs.Hide(versioner.DefaultPath)
_ = ffs.Hide(".stignore")
var ver versioner.Versioner
@@ -487,9 +488,9 @@ func (m *model) removeFolder(cfg config.FolderConfiguration) {
}
m.cleanupFolderLocked(cfg)
for _, r := range m.indexHandlers {
m.indexHandlers.Each(func(_ protocol.DeviceID, r *indexHandlerRegistry) {
r.Remove(cfg.ID)
}
})
m.fmut.Unlock()
m.pmut.RUnlock()
@@ -563,9 +564,9 @@ func (m *model) restartFolder(from, to config.FolderConfiguration, cacheIgnoredF
// Care needs to be taken because we already hold fmut and the lock order
// must be the same everywhere. As fmut is acquired first, this is fine.
m.pmut.RLock()
for _, indexRegistry := range m.indexHandlers {
indexRegistry.RegisterFolderState(to, fset, m.folderRunners[to.ID])
}
m.indexHandlers.Each(func(_ protocol.DeviceID, r *indexHandlerRegistry) {
r.RegisterFolderState(to, fset, m.folderRunners[to.ID])
})
m.pmut.RUnlock()
var infoMsg string
@@ -601,9 +602,9 @@ func (m *model) newFolder(cfg config.FolderConfiguration, cacheIgnoredFiles bool
// Care needs to be taken because we already hold fmut and the lock order
// must be the same everywhere. As fmut is acquired first, this is fine.
m.pmut.RLock()
for _, indexRegistry := range m.indexHandlers {
indexRegistry.RegisterFolderState(cfg, fset, m.folderRunners[cfg.ID])
}
m.indexHandlers.Each(func(_ protocol.DeviceID, r *indexHandlerRegistry) {
r.RegisterFolderState(cfg, fset, m.folderRunners[cfg.ID])
})
m.pmut.RUnlock()
return nil
@@ -1138,7 +1139,7 @@ func (m *model) handleIndex(conn protocol.Connection, folder string, fs []protoc
}
m.pmut.RLock()
indexHandler, ok := m.indexHandlers[deviceID]
indexHandler, ok := m.indexHandlers.Get(deviceID)
m.pmut.RUnlock()
if !ok {
// This should be impossible, as an index handler always exists for an
@@ -1170,7 +1171,7 @@ func (m *model) ClusterConfig(conn protocol.Connection, cm protocol.ClusterConfi
l.Debugf("Handling ClusterConfig from %v", deviceID.Short())
m.pmut.RLock()
indexHandlerRegistry, ok := m.indexHandlers[deviceID]
indexHandlerRegistry, ok := m.indexHandlers.Get(deviceID)
m.pmut.RUnlock()
if !ok {
panic("bug: ClusterConfig called on closed or nonexistent connection")
@@ -1792,7 +1793,7 @@ func (m *model) Closed(conn protocol.Connection, err error) {
delete(m.remoteFolderStates, device)
closed := m.closed[device]
delete(m.closed, device)
delete(m.indexHandlers, device)
m.indexHandlers.RemoveAndWait(device, 0)
m.pmut.Unlock()
m.progressEmitter.temporaryIndexUnsubscribe(conn)
@@ -1965,8 +1966,8 @@ func (m *model) Request(conn protocol.Connection, folder, name string, _, size i
// skipping nil limiters, then returns a requestResponse of the given size.
// When the requestResponse is closed the limiters are given back the bytes,
// in reverse order.
func newLimitedRequestResponse(size int, limiters ...*util.Semaphore) *requestResponse {
multi := util.MultiSemaphore(limiters)
func newLimitedRequestResponse(size int, limiters ...*semaphore.Semaphore) *requestResponse {
multi := semaphore.MultiSemaphore(limiters)
multi.Take(size)
res := newRequestResponse(size)
@@ -2251,18 +2252,18 @@ func (m *model) AddConnection(conn protocol.Connection, hello protocol.Hello) {
closed := make(chan struct{})
m.closed[deviceID] = closed
m.deviceDownloads[deviceID] = newDeviceDownloadState()
indexRegistry := newIndexHandlerRegistry(conn, m.deviceDownloads[deviceID], closed, m.Supervisor, m.evLogger)
indexRegistry := newIndexHandlerRegistry(conn, m.deviceDownloads[deviceID], m.evLogger)
for id, fcfg := range m.folderCfgs {
indexRegistry.RegisterFolderState(fcfg, m.folderFiles[id], m.folderRunners[id])
}
m.indexHandlers[deviceID] = indexRegistry
m.indexHandlers.Add(deviceID, indexRegistry)
m.fmut.RUnlock()
// 0: default, <0: no limiting
switch {
case device.MaxRequestKiB > 0:
m.connRequestLimiters[deviceID] = util.NewSemaphore(1024 * device.MaxRequestKiB)
m.connRequestLimiters[deviceID] = semaphore.New(1024 * device.MaxRequestKiB)
case device.MaxRequestKiB == 0:
m.connRequestLimiters[deviceID] = util.NewSemaphore(1024 * defaultPullerPendingKiB)
m.connRequestLimiters[deviceID] = semaphore.New(1024 * defaultPullerPendingKiB)
}
m.helloMessages[deviceID] = hello

View File

@@ -35,8 +35,8 @@ import (
"github.com/syncthing/syncthing/lib/protocol"
protocolmocks "github.com/syncthing/syncthing/lib/protocol/mocks"
srand "github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/testutils"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/semaphore"
"github.com/syncthing/syncthing/lib/testutil"
"github.com/syncthing/syncthing/lib/versioner"
)
@@ -1337,8 +1337,9 @@ func TestAutoAcceptEnc(t *testing.T) {
// Earlier tests might cause the connection to get closed, thus ClusterConfig
// would panic.
clusterConfig := func(deviceID protocol.DeviceID, cm protocol.ClusterConfig) {
m.AddConnection(newFakeConnection(deviceID, m), protocol.Hello{})
m.ClusterConfig(&protocolmocks.Connection{DeviceIDStub: func() protocol.DeviceID { return deviceID }}, cm)
conn := newFakeConnection(deviceID, m)
m.AddConnection(conn, protocol.Hello{})
m.ClusterConfig(conn, cm)
}
clusterConfig(device1, basicCC())
@@ -2647,7 +2648,7 @@ func TestVersionRestore(t *testing.T) {
file = filepath.FromSlash(file)
}
tag := version.In(time.Local).Truncate(time.Second).Format(versioner.TimeFormat)
taggedName := filepath.Join(".stversions", versioner.TagFilename(file, tag))
taggedName := filepath.Join(versioner.DefaultPath, versioner.TagFilename(file, tag))
fd, err := filesystem.Open(file)
if err != nil {
t.Error(err)
@@ -2680,7 +2681,7 @@ func TestVersionRestore(t *testing.T) {
}
for _, version := range versions {
if version.VersionTime.Equal(beforeRestore) || version.VersionTime.After(beforeRestore) {
fd, err := filesystem.Open(".stversions/" + versioner.TagFilename(file, version.VersionTime.Format(versioner.TimeFormat)))
fd, err := filesystem.Open(versioner.DefaultPath + "/" + versioner.TagFilename(file, version.VersionTime.Format(versioner.TimeFormat)))
must(t, err)
defer fd.Close()
@@ -2967,10 +2968,10 @@ func TestConnCloseOnRestart(t *testing.T) {
m := setupModel(t, w)
defer cleanupModelAndRemoveDir(m, fcfg.Filesystem(nil).URI())
br := &testutils.BlockingRW{}
nw := &testutils.NoopRW{}
br := &testutil.BlockingRW{}
nw := &testutil.NoopRW{}
ci := &protocolmocks.ConnectionInfo{}
m.AddConnection(protocol.NewConnection(device1, br, nw, testutils.NoopCloser{}, m, ci, protocol.CompressionNever, nil, m.keyGen), protocol.Hello{})
m.AddConnection(protocol.NewConnection(device1, br, nw, testutil.NoopCloser{}, m, ci, protocol.CompressionNever, nil, m.keyGen), protocol.Hello{})
m.pmut.RLock()
if len(m.closed) != 1 {
t.Fatalf("Expected just one conn (len(m.closed) == %v)", len(m.closed))
@@ -3112,9 +3113,9 @@ func TestDeviceWasSeen(t *testing.T) {
}
func TestNewLimitedRequestResponse(t *testing.T) {
l0 := util.NewSemaphore(0)
l1 := util.NewSemaphore(1024)
l2 := (*util.Semaphore)(nil)
l0 := semaphore.New(0)
l1 := semaphore.New(1024)
l2 := (*semaphore.Semaphore)(nil)
// Should take 500 bytes from any non-unlimited non-nil limiters.
res := newLimitedRequestResponse(500, l0, l1, l2)

View File

@@ -91,7 +91,7 @@ func TestProgressEmitter(t *testing.T) {
expectEvent(w, t, 1)
expectTimeout(w, t)
s.copiedFromOrigin()
s.copiedFromOrigin(1)
expectEvent(w, t, 1)
expectTimeout(w, t)

View File

@@ -13,6 +13,7 @@ import (
"time"
"github.com/d4l3k/messagediff"
"golang.org/x/exp/slices"
)
func TestJobQueue(t *testing.T) {
@@ -282,7 +283,6 @@ func BenchmarkJobQueuePushPopDone10k(b *testing.B) {
q.Done(n)
}
}
}
func TestQueuePagination(t *testing.T) {
@@ -302,21 +302,21 @@ func TestQueuePagination(t *testing.T) {
progress, queued, skip = q.Jobs(1, 5)
if len(progress) != 0 || len(queued) != 5 || skip != 0 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(queued, names[:5]) {
} else if !slices.Equal(queued, names[:5]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[:5])
}
progress, queued, skip = q.Jobs(2, 5)
if len(progress) != 0 || len(queued) != 5 || skip != 5 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(queued, names[5:]) {
} else if !slices.Equal(queued, names[5:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[5:])
}
progress, queued, skip = q.Jobs(2, 7)
if len(progress) != 0 || len(queued) != 3 || skip != 7 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(queued, names[7:]) {
} else if !slices.Equal(queued, names[7:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[7:])
}
@@ -338,23 +338,23 @@ func TestQueuePagination(t *testing.T) {
progress, queued, skip = q.Jobs(1, 5)
if len(progress) != 1 || len(queued) != 4 || skip != 0 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(progress, names[:1]) {
} else if !slices.Equal(progress, names[:1]) {
t.Errorf("Wrong elements in progress, got %v, expected %v", progress, names[:1])
} else if !equalStrings(queued, names[1:5]) {
} else if !slices.Equal(queued, names[1:5]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[1:5])
}
progress, queued, skip = q.Jobs(2, 5)
if len(progress) != 0 || len(queued) != 5 || skip != 5 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(queued, names[5:]) {
} else if !slices.Equal(queued, names[5:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[5:])
}
progress, queued, skip = q.Jobs(2, 7)
if len(progress) != 0 || len(queued) != 3 || skip != 7 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(queued, names[7:]) {
} else if !slices.Equal(queued, names[7:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[7:])
}
@@ -378,25 +378,25 @@ func TestQueuePagination(t *testing.T) {
progress, queued, skip = q.Jobs(1, 5)
if len(progress) != 5 || len(queued) != 0 || skip != 0 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(progress, names[:5]) {
} else if !slices.Equal(progress, names[:5]) {
t.Errorf("Wrong elements in progress, got %v, expected %v", progress, names[:5])
}
progress, queued, skip = q.Jobs(2, 5)
if len(progress) != 3 || len(queued) != 2 || skip != 5 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(progress, names[5:8]) {
} else if !slices.Equal(progress, names[5:8]) {
t.Errorf("Wrong elements in progress, got %v, expected %v", progress, names[5:8])
} else if !equalStrings(queued, names[8:]) {
} else if !slices.Equal(queued, names[8:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[8:])
}
progress, queued, skip = q.Jobs(2, 7)
if len(progress) != 1 || len(queued) != 2 || skip != 7 {
t.Error("Wrong length", len(progress), len(queued), 0)
} else if !equalStrings(progress, names[7:8]) {
} else if !slices.Equal(progress, names[7:8]) {
t.Errorf("Wrong elements in progress, got %v, expected %v", progress, names[7:8])
} else if !equalStrings(queued, names[8:]) {
} else if !slices.Equal(queued, names[8:]) {
t.Errorf("Wrong elements in queued, got %v, expected %v", queued, names[8:])
}
@@ -405,15 +405,3 @@ func TestQueuePagination(t *testing.T) {
t.Error("Wrong length", len(progress), len(queued), 0)
}
}
func equalStrings(first, second []string) bool {
if len(first) != len(second) {
return false
}
for i := range first {
if first[i] != second[i] {
return false
}
}
return true
}
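For reference, the equalStrings helper removed above is a one-to-one replacement for slices.Equal from golang.org/x/exp/slices, which this test file now imports. A minimal standalone sketch of the equivalence (file and slice names are illustrative):

package main

import (
	"fmt"

	"golang.org/x/exp/slices"
)

func main() {
	names := []string{"f1", "f2", "f3", "f4"}
	queued := []string{"f1", "f2", "f3"}

	// slices.Equal reports whether both slices have the same length and
	// equal elements in the same order, matching the removed helper.
	fmt.Println(slices.Equal(queued, names[:3])) // true
	fmt.Println(slices.Equal(queued, names))     // false
}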

103
lib/model/service_map.go Normal file
View File

@@ -0,0 +1,103 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import (
"context"
"fmt"
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/svcutil"
"github.com/thejerf/suture/v4"
)
// A serviceMap is a utility map of arbitrary keys to a suture.Service of
// some kind, where adding and removing services ensures they are properly
// started and stopped on the given Supervisor. The serviceMap is itself a
// suture.Service and should be added to a Supervisor.
// Not safe for concurrent use.
type serviceMap[K comparable, S suture.Service] struct {
services map[K]S
tokens map[K]suture.ServiceToken
supervisor *suture.Supervisor
eventLogger events.Logger
}
func newServiceMap[K comparable, S suture.Service](eventLogger events.Logger) *serviceMap[K, S] {
m := &serviceMap[K, S]{
services: make(map[K]S),
tokens: make(map[K]suture.ServiceToken),
eventLogger: eventLogger,
}
m.supervisor = suture.New(m.String(), svcutil.SpecWithDebugLogger(l))
return m
}
// Add adds a service to the map, starting it on the supervisor. If there is
// already a service at the given key, it is removed first.
func (s *serviceMap[K, S]) Add(k K, v S) {
if tok, ok := s.tokens[k]; ok {
// There is already a service at this key, remove it first.
s.supervisor.Remove(tok)
s.eventLogger.Log(events.Failure, fmt.Sprintf("%s replaced service at key %v", s, k))
}
s.services[k] = v
s.tokens[k] = s.supervisor.Add(v)
}
// Get returns the service at the given key, or the empty value and false if
// there is no service at that key.
func (s *serviceMap[K, S]) Get(k K) (v S, ok bool) {
v, ok = s.services[k]
return
}
// Remove removes the service at the given key, stopping it on the supervisor.
// If there is no service at the given key, nothing happens. The return value
// indicates whether a service was removed.
func (s *serviceMap[K, S]) Remove(k K) (found bool) {
if tok, ok := s.tokens[k]; ok {
found = true
s.supervisor.Remove(tok)
}
delete(s.services, k)
delete(s.tokens, k)
return
}
// RemoveAndWait removes the service at the given key, stopping it on the
// supervisor and waiting up to the given timeout for it to stop. If there is
// no service at the given key, nothing happens. The return value indicates
// whether a service was removed.
func (s *serviceMap[K, S]) RemoveAndWait(k K, timeout time.Duration) (found bool) {
if tok, ok := s.tokens[k]; ok {
found = true
s.supervisor.RemoveAndWait(tok, timeout)
}
delete(s.services, k)
delete(s.tokens, k)
return found
}
// Each calls the given function for each service in the map.
func (s *serviceMap[K, S]) Each(fn func(K, S)) {
for key, svc := range s.services {
fn(key, svc)
}
}
// Suture implementation
func (s *serviceMap[K, S]) Serve(ctx context.Context) error {
return s.supervisor.Serve(ctx)
}
func (s *serviceMap[K, S]) String() string {
var kv K
var sv S
return fmt.Sprintf("serviceMap[%T, %T]@%p", kv, sv, s)
}

View File

@@ -0,0 +1,156 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import (
"context"
"strings"
"testing"
"github.com/syncthing/syncthing/lib/events"
"github.com/thejerf/suture/v4"
)
func TestServiceMap(t *testing.T) {
t.Parallel()
ctx, cancel := context.WithCancel(context.Background())
t.Cleanup(cancel)
sup := suture.NewSimple("TestServiceMap")
sup.ServeBackground(ctx)
t.Run("SimpleAddRemove", func(t *testing.T) {
t.Parallel()
sm := newServiceMap[string, *dummyService](events.NoopLogger)
sup.Add(sm)
// Add two services. They should start.
d1 := newDummyService()
d2 := newDummyService()
sm.Add("d1", d1)
sm.Add("d2", d2)
<-d1.started
<-d2.started
// Remove them. They should stop.
if !sm.Remove("d1") {
t.Errorf("Remove failed")
}
if !sm.Remove("d2") {
t.Errorf("Remove failed")
}
<-d1.stopped
<-d2.stopped
})
t.Run("OverwriteImpliesRemove", func(t *testing.T) {
t.Parallel()
sm := newServiceMap[string, *dummyService](events.NoopLogger)
sup.Add(sm)
d1 := newDummyService()
d2 := newDummyService()
// Add d1, it should start.
sm.Add("k", d1)
<-d1.started
// Add d2, with the same key. The previous one should stop as we're
// doing a replace.
sm.Add("k", d2)
<-d1.stopped
<-d2.started
if !sm.Remove("k") {
t.Errorf("Remove failed")
}
<-d2.stopped
})
t.Run("IterateWithRemoveAndWait", func(t *testing.T) {
t.Parallel()
sm := newServiceMap[string, *dummyService](events.NoopLogger)
sup.Add(sm)
// Add four services.
d1 := newDummyService()
d2 := newDummyService()
d3 := newDummyService()
d4 := newDummyService()
sm.Add("keep1", d1)
sm.Add("remove2", d2)
sm.Add("keep3", d3)
sm.Add("remove4", d4)
<-d1.started
<-d2.started
<-d3.started
<-d4.started
// Remove two of them from within the iterator.
sm.Each(func(k string, v *dummyService) {
if strings.HasPrefix(k, "remove") {
sm.RemoveAndWait(k, 0)
}
})
// They should have stopped.
<-d2.stopped
<-d4.stopped
// They should not be in the map anymore.
if _, ok := sm.Get("remove2"); ok {
t.Errorf("Service still in map")
}
if _, ok := sm.Get("remove4"); ok {
t.Errorf("Service still in map")
}
// The other two should still be running.
if _, ok := sm.Get("keep1"); !ok {
t.Errorf("Service not in map")
}
if _, ok := sm.Get("keep3"); !ok {
t.Errorf("Service not in map")
}
})
}
type dummyService struct {
started chan struct{}
stopped chan struct{}
}
func newDummyService() *dummyService {
return &dummyService{
started: make(chan struct{}),
stopped: make(chan struct{}),
}
}
func (d *dummyService) Serve(ctx context.Context) error {
close(d.started)
defer close(d.stopped)
<-ctx.Done()
return nil
}

View File

@@ -152,11 +152,11 @@ func (s *sharedPullerState) tempFileInWritableDir(_ string) error {
// permissions will be set to the final value later, but in the meantime
// we don't want to have a temporary file with looser permissions than
// the final outcome.
mode := fs.FileMode(s.file.Permissions) | 0600
mode := fs.FileMode(s.file.Permissions) | 0o600
if s.ignorePerms {
// When ignorePerms is set we use a very permissive mode and let the
// system umask filter it.
mode = 0666
mode = 0o666
}
// Attempt to create the temp file
@@ -261,19 +261,34 @@ func (s *sharedPullerState) copyDone(block protocol.BlockInfo) {
s.mut.Unlock()
}
func (s *sharedPullerState) copiedFromOrigin() {
func (s *sharedPullerState) copiedFromOrigin(bytes int) {
s.mut.Lock()
s.copyOrigin++
s.updated = time.Now()
s.mut.Unlock()
metricFolderProcessedBytesTotal.WithLabelValues(s.folder, metricSourceLocalOrigin).Add(float64(bytes))
}
func (s *sharedPullerState) copiedFromOriginShifted() {
func (s *sharedPullerState) copiedFromElsewhere(bytes int) {
metricFolderProcessedBytesTotal.WithLabelValues(s.folder, metricSourceLocalOther).Add(float64(bytes))
}
func (s *sharedPullerState) skippedSparseBlock(bytes int) {
// Pretend we copied it; historically, skipped sparse blocks have been
// counted as copies from the origin.
s.mut.Lock()
s.copyOrigin++
s.updated = time.Now()
s.mut.Unlock()
metricFolderProcessedBytesTotal.WithLabelValues(s.folder, metricSourceSkipped).Add(float64(bytes))
}
func (s *sharedPullerState) copiedFromOriginShifted(bytes int) {
s.mut.Lock()
s.copyOrigin++
s.copyOriginShifted++
s.updated = time.Now()
s.mut.Unlock()
metricFolderProcessedBytesTotal.WithLabelValues(s.folder, metricSourceLocalShifted).Add(float64(bytes))
}
func (s *sharedPullerState) pullStarted() {
@@ -295,6 +310,7 @@ func (s *sharedPullerState) pullDone(block protocol.BlockInfo) {
s.availableUpdated = time.Now()
l.Debugln("sharedPullerState", s.folder, s.file.Name, "pullNeeded done ->", s.pullNeeded)
s.mut.Unlock()
metricFolderProcessedBytesTotal.WithLabelValues(s.folder, metricSourceNetwork).Add(float64(block.Size))
}
// finalClose atomically closes and returns closed status of a file. A true

View File

@@ -7,6 +7,7 @@
package model
import (
"context"
"errors"
"fmt"
"path/filepath"
@@ -14,6 +15,7 @@ import (
"sync"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ur"
@@ -117,11 +119,11 @@ func inWritableDir(fn func(string) error, targetFs fs.Filesystem, path string, i
const permBits = fs.ModePerm | fs.ModeSetuid | fs.ModeSetgid | fs.ModeSticky
var parentErr error
if mode := info.Mode() & permBits; mode&0200 == 0 {
if mode := info.Mode() & permBits; mode&0o200 == 0 {
// A non-writeable directory (for this user; we assume that's the
// relevant part). Temporarily change the mode so we can delete the
// file or directory inside it.
parentErr = targetFs.Chmod(dir, mode|0700)
parentErr = targetFs.Chmod(dir, mode|0o700)
if parentErr != nil {
l.Debugf("Failed to make parent directory writable: %v", parentErr)
} else {
@@ -148,3 +150,27 @@ func inWritableDir(fn func(string) error, targetFs fs.Filesystem, path string, i
}
return err
}
// addTimeUntilCancelled adds time to the counter for the duration of the
// Context. We do this piecemeal so that polling the counter during a long
// operation shows a relevant value, instead of the counter just increasing
// by a large amount at the end of the operation.
func addTimeUntilCancelled(ctx context.Context, counter prometheus.Counter) {
t0 := time.Now()
defer func() {
counter.Add(time.Since(t0).Seconds())
}()
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
for {
select {
case t := <-ticker.C:
counter.Add(t.Sub(t0).Seconds())
t0 = t
case <-ctx.Done():
return
}
}
}
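addTimeUntilCancelled is unexported, so the standalone sketch below simply reproduces the pattern to show what the piecemeal accumulation buys: a scraper reading the counter while the operation is still running already sees most of the elapsed time, instead of one large jump at the end. Names outside the helper are illustrative, not part of this diff.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

// accumulateUntilCancelled mirrors the helper above: it adds elapsed seconds
// to the counter once per second until the context is cancelled, then adds
// the remainder.
func accumulateUntilCancelled(ctx context.Context, counter prometheus.Counter) {
	t0 := time.Now()
	defer func() { counter.Add(time.Since(t0).Seconds()) }()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case t := <-ticker.C:
			counter.Add(t.Sub(t0).Seconds())
			t0 = t
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	counter := prometheus.NewCounter(prometheus.CounterOpts{Name: "demo_busy_seconds_total"})
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	accumulateUntilCancelled(ctx, counter) // blocks for roughly three seconds

	// Read the final value; a scrape one second into the run would already
	// have shown roughly 1.0 here.
	m := &dto.Metric{}
	_ = counter.Write(m)
	fmt.Printf("%.1f seconds accounted\n", m.GetCounter().GetValue())
}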

18
lib/netutil/netutil.go Normal file
View File

@@ -0,0 +1,18 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package netutil
import "net/url"
// AddressURL constructs a URL from the given network and hostname.
func AddressURL(network, host string) string {
u := url.URL{
Scheme: network,
Host: host,
}
return u.String()
}

View File

@@ -0,0 +1,28 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package netutil
import "testing"
func TestAddress(t *testing.T) {
tests := []struct {
network string
host string
result string
}{
{"tcp", "google.com", "tcp://google.com"},
{"foo", "google", "foo://google"},
{"123", "456", "123://456"},
}
for _, test := range tests {
result := AddressURL(test.network, test.host)
if result != test.result {
t.Errorf("%s != %s", result, test.result)
}
}
}

View File

@@ -18,7 +18,7 @@ func GetLans() ([]*net.IPNet, error) {
var addrs []net.Addr
for _, currentIf := range ifs {
if currentIf.Flags&net.FlagUp != net.FlagUp {
if currentIf.Flags&net.FlagRunning == 0 {
continue
}
currentAddrs, err := currentIf.Addrs()

View File

@@ -19,7 +19,7 @@ import (
"github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/svcutil"
)
func init() {
@@ -28,7 +28,7 @@ func init() {
func Discover(ctx context.Context, renewal, timeout time.Duration) []nat.Device {
var ip net.IP
err := util.CallWithContext(ctx, func() error {
err := svcutil.CallWithContext(ctx, func() error {
var err error
ip, err = gateway.DiscoverGateway()
return err
@@ -46,7 +46,7 @@ func Discover(ctx context.Context, renewal, timeout time.Duration) []nat.Device
c := natpmp.NewClientWithTimeout(ip, timeout)
// Try contacting the gateway; if it does not respond, assume it does not
// speak NAT-PMP.
err = util.CallWithContext(ctx, func() error {
err = svcutil.CallWithContext(ctx, func() error {
_, ierr := c.GetExternalAddress()
return ierr
})
@@ -104,7 +104,7 @@ func (w *wrapper) AddPortMapping(ctx context.Context, protocol nat.Protocol, int
duration = w.renewal
}
var result *natpmp.AddPortMappingResult
err := util.CallWithContext(ctx, func() error {
err := svcutil.CallWithContext(ctx, func() error {
var err error
result, err = w.client.AddPortMapping(strings.ToLower(string(protocol)), internalPort, externalPort, int(duration/time.Second))
return err
@@ -118,7 +118,7 @@ func (w *wrapper) AddPortMapping(ctx context.Context, protocol nat.Protocol, int
func (w *wrapper) GetExternalIPAddress(ctx context.Context) (net.IP, error) {
var result *natpmp.GetExternalAddressResult
err := util.CallWithContext(ctx, func() error {
err := svcutil.CallWithContext(ctx, func() error {
var err error
result, err = w.client.GetExternalAddress()
return err

View File

@@ -10,7 +10,7 @@ import (
"testing"
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/testutils"
"github.com/syncthing/syncthing/lib/testutil"
)
func BenchmarkRequestsRawTCP(b *testing.B) {
@@ -60,9 +60,9 @@ func benchmarkRequestsTLS(b *testing.B, conn0, conn1 net.Conn) {
func benchmarkRequestsConnPair(b *testing.B, conn0, conn1 net.Conn) {
// Start up Connections on them
c0 := NewConnection(LocalDeviceID, conn0, conn0, testutils.NoopCloser{}, new(fakeModel), new(mockedConnectionInfo), CompressionMetadata, nil, testKeyGen)
c0 := NewConnection(LocalDeviceID, conn0, conn0, testutil.NoopCloser{}, new(fakeModel), new(mockedConnectionInfo), CompressionMetadata, nil, testKeyGen)
c0.Start()
c1 := NewConnection(LocalDeviceID, conn1, conn1, testutils.NoopCloser{}, new(fakeModel), new(mockedConnectionInfo), CompressionMetadata, nil, testKeyGen)
c1 := NewConnection(LocalDeviceID, conn1, conn1, testutil.NoopCloser{}, new(fakeModel), new(mockedConnectionInfo), CompressionMetadata, nil, testKeyGen)
c1.Start()
// Satisfy the assertions in the protocol by sending an initial cluster config

View File

@@ -10,8 +10,9 @@ import (
type countingReader struct {
io.Reader
tot atomic.Int64 // bytes
last atomic.Int64 // unix nanos
idString string
tot atomic.Int64 // bytes
last atomic.Int64 // unix nanos
}
var (
@@ -24,6 +25,7 @@ func (c *countingReader) Read(bs []byte) (int, error) {
c.tot.Add(int64(n))
totalIncoming.Add(int64(n))
c.last.Store(time.Now().UnixNano())
metricDeviceRecvBytes.WithLabelValues(c.idString).Add(float64(n))
return n, err
}
@@ -35,8 +37,9 @@ func (c *countingReader) Last() time.Time {
type countingWriter struct {
io.Writer
tot atomic.Int64 // bytes
last atomic.Int64 // unix nanos
idString string
tot atomic.Int64 // bytes
last atomic.Int64 // unix nanos
}
func (c *countingWriter) Write(bs []byte) (int, error) {
@@ -44,6 +47,7 @@ func (c *countingWriter) Write(bs []byte) (int, error) {
c.tot.Add(int64(n))
totalOutgoing.Add(int64(n))
c.last.Store(time.Now().UnixNano())
metricDeviceSentBytes.WithLabelValues(c.idString).Add(float64(n))
return n, err
}

62
lib/protocol/metrics.go Normal file
View File

@@ -0,0 +1,62 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package protocol
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricDeviceSentBytes = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "sent_bytes_total",
Help: "Total amount of data sent, per device",
}, []string{"device"})
metricDeviceSentUncompressedBytes = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "sent_uncompressed_bytes_total",
Help: "Total amount of data sent, before compression, per device",
}, []string{"device"})
metricDeviceSentMessages = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "sent_messages_total",
Help: "Total number of messages sent, per device",
}, []string{"device"})
metricDeviceRecvBytes = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "recv_bytes_total",
Help: "Total amount of data received, per device",
}, []string{"device"})
metricDeviceRecvDecompressedBytes = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "recv_decompressed_bytes_total",
Help: "Total amount of data received, after decompression, per device",
}, []string{"device"})
metricDeviceRecvMessages = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "protocol",
Name: "recv_messages_total",
Help: "Total number of messages received, per device",
}, []string{"device"})
)
func registerDeviceMetrics(deviceID string) {
// Register metrics for this device, so that counters are present even
// when zero.
metricDeviceSentBytes.WithLabelValues(deviceID)
metricDeviceSentUncompressedBytes.WithLabelValues(deviceID)
metricDeviceSentMessages.WithLabelValues(deviceID)
metricDeviceRecvBytes.WithLabelValues(deviceID)
metricDeviceRecvMessages.WithLabelValues(deviceID)
}
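These counter vectors use promauto, so they register themselves with the default registry; any handler serving that registry exposes them. The sketch below shows the generic client_golang wiring with a demo namespace and port, not Syncthing's own REST endpoint:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var sentBytes = promauto.NewCounterVec(prometheus.CounterOpts{
	Namespace: "demo",
	Subsystem: "protocol",
	Name:      "sent_bytes_total",
	Help:      "Total amount of data sent, per device",
}, []string{"device"})

func main() {
	// Touching the label once makes the series visible at zero, which is
	// what registerDeviceMetrics does for each connection.
	sentBytes.WithLabelValues("DEVICE-A")

	// Counting then happens on the hot path, e.g. in a Write wrapper.
	sentBytes.WithLabelValues("DEVICE-A").Add(1024)

	// Scrape at http://localhost:9090/metrics
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":9090", nil)
}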

View File

@@ -183,6 +183,7 @@ type rawConnection struct {
ConnectionInfo
deviceID DeviceID
idString string
model contextLessModel
startTime time.Time
@@ -263,12 +264,15 @@ func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, closer
}
func newRawConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, closer io.Closer, receiver contextLessModel, connInfo ConnectionInfo, compress Compression) *rawConnection {
cr := &countingReader{Reader: reader}
cw := &countingWriter{Writer: writer}
idString := deviceID.String()
cr := &countingReader{Reader: reader, idString: idString}
cw := &countingWriter{Writer: writer, idString: idString}
registerDeviceMetrics(idString)
return &rawConnection{
ConnectionInfo: connInfo,
deviceID: deviceID,
idString: deviceID.String(),
model: receiver,
cr: cr,
cw: cw,
@@ -445,6 +449,8 @@ func (c *rawConnection) dispatcherLoop() (err error) {
return ErrClosed
}
metricDeviceRecvMessages.WithLabelValues(c.idString).Inc()
msgContext, err := messageContext(msg)
if err != nil {
return fmt.Errorf("protocol error: %w", err)
@@ -553,6 +559,8 @@ func (c *rawConnection) readMessageAfterHeader(hdr Header, fourByteBuf []byte) (
// ... and is then unmarshalled
metricDeviceRecvDecompressedBytes.WithLabelValues(c.idString).Add(float64(4 + len(buf)))
msg, err := newMessage(hdr.Type)
if err != nil {
BufferPool.Put(buf)
@@ -593,6 +601,8 @@ func (c *rawConnection) readHeader(fourByteBuf []byte) (Header, error) {
return Header{}, fmt.Errorf("unmarshalling header: %w", err)
}
metricDeviceRecvDecompressedBytes.WithLabelValues(c.idString).Add(float64(2 + len(buf)))
return hdr, nil
}
@@ -758,6 +768,10 @@ func (c *rawConnection) writeMessage(msg message) error {
msgContext, _ := messageContext(msg)
l.Debugf("Writing %v", msgContext)
defer func() {
metricDeviceSentMessages.WithLabelValues(c.idString).Inc()
}()
size := msg.ProtoSize()
hdr := Header{
Type: typeOf(msg),
@@ -784,6 +798,8 @@ func (c *rawConnection) writeMessage(msg message) error {
}
}
metricDeviceSentUncompressedBytes.WithLabelValues(c.idString).Add(float64(totSize))
// Header length
binary.BigEndian.PutUint16(buf, uint16(hdrSize))
// Header
@@ -817,6 +833,9 @@ func (c *rawConnection) writeCompressedMessage(msg message, marshaled []byte) (o
}
cOverhead := 2 + hdrSize + 4
metricDeviceSentUncompressedBytes.WithLabelValues(c.idString).Add(float64(cOverhead + len(marshaled)))
// The compressed size may be at most n-n/32 = .96875*n bytes,
// I.e., if we can't save at least 3.125% bandwidth, we forgo compression.
// This number is arbitrary but cheap to compute.

View File

@@ -19,7 +19,7 @@ import (
lz4 "github.com/pierrec/lz4/v4"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/testutils"
"github.com/syncthing/syncthing/lib/testutil"
)
var (
@@ -32,10 +32,10 @@ func TestPing(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutils.NoopCloser{}, newTestModel(), new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutil.NoopCloser{}, newTestModel(), new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c0.Start()
defer closeAndWait(c0, ar, bw)
c1 := getRawConnection(NewConnection(c1ID, br, aw, testutils.NoopCloser{}, newTestModel(), new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c1 := getRawConnection(NewConnection(c1ID, br, aw, testutil.NoopCloser{}, newTestModel(), new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c1.Start()
defer closeAndWait(c1, ar, bw)
c0.ClusterConfig(ClusterConfig{})
@@ -58,10 +58,10 @@ func TestClose(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutils.NoopCloser{}, m0, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutil.NoopCloser{}, m0, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c0.Start()
defer closeAndWait(c0, ar, bw)
c1 := NewConnection(c1ID, br, aw, testutils.NoopCloser{}, m1, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen)
c1 := NewConnection(c1ID, br, aw, testutil.NoopCloser{}, m1, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen)
c1.Start()
defer closeAndWait(c1, ar, bw)
c0.ClusterConfig(ClusterConfig{})
@@ -102,8 +102,8 @@ func TestCloseOnBlockingSend(t *testing.T) {
m := newTestModel()
rw := testutils.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutils.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
rw := testutil.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutil.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c.Start()
defer closeAndWait(c, rw)
@@ -154,10 +154,10 @@ func TestCloseRace(t *testing.T) {
ar, aw := io.Pipe()
br, bw := io.Pipe()
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutils.NoopCloser{}, m0, new(mockedConnectionInfo), CompressionNever, nil, testKeyGen))
c0 := getRawConnection(NewConnection(c0ID, ar, bw, testutil.NoopCloser{}, m0, new(mockedConnectionInfo), CompressionNever, nil, testKeyGen))
c0.Start()
defer closeAndWait(c0, ar, bw)
c1 := NewConnection(c1ID, br, aw, testutils.NoopCloser{}, m1, new(mockedConnectionInfo), CompressionNever, nil, testKeyGen)
c1 := NewConnection(c1ID, br, aw, testutil.NoopCloser{}, m1, new(mockedConnectionInfo), CompressionNever, nil, testKeyGen)
c1.Start()
defer closeAndWait(c1, ar, bw)
c0.ClusterConfig(ClusterConfig{})
@@ -193,8 +193,8 @@ func TestCloseRace(t *testing.T) {
func TestClusterConfigFirst(t *testing.T) {
m := newTestModel()
rw := testutils.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, &testutils.NoopRW{}, testutils.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
rw := testutil.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, &testutil.NoopRW{}, testutil.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c.Start()
defer closeAndWait(c, rw)
@@ -245,8 +245,8 @@ func TestCloseTimeout(t *testing.T) {
m := newTestModel()
rw := testutils.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutils.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
rw := testutil.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutil.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c.Start()
defer closeAndWait(c, rw)
@@ -898,8 +898,8 @@ func TestSha256OfEmptyBlock(t *testing.T) {
func TestClusterConfigAfterClose(t *testing.T) {
m := newTestModel()
rw := testutils.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutils.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
rw := testutil.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, rw, testutil.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
c.Start()
defer closeAndWait(c, rw)
@@ -922,8 +922,8 @@ func TestDispatcherToCloseDeadlock(t *testing.T) {
// Verify that we don't deadlock when calling Close() from within one of
// the model callbacks (ClusterConfig).
m := newTestModel()
rw := testutils.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, &testutils.NoopRW{}, testutils.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
rw := testutil.NewBlockingRW()
c := getRawConnection(NewConnection(c0ID, rw, &testutil.NoopRW{}, testutil.NoopCloser{}, m, new(mockedConnectionInfo), CompressionAlways, nil, testKeyGen))
m.ccFn = func(ClusterConfig) {
c.Close(errManual)
}

View File

@@ -16,7 +16,7 @@ import (
)
// HashFile hashes the file and returns a list of blocks representing it.
func HashFile(ctx context.Context, fs fs.Filesystem, path string, blockSize int, counter Counter, useWeakHashes bool) ([]protocol.BlockInfo, error) {
func HashFile(ctx context.Context, folderID string, fs fs.Filesystem, path string, blockSize int, counter Counter, useWeakHashes bool) ([]protocol.BlockInfo, error) {
fd, err := fs.Open(path)
if err != nil {
l.Debugln("open:", err)
@@ -42,6 +42,8 @@ func HashFile(ctx context.Context, fs fs.Filesystem, path string, blockSize int,
return nil, err
}
metricHashedBytes.WithLabelValues(folderID).Add(float64(size))
// Recheck the size and modtime again. If they differ, the file changed
// while we were reading it and our hash results are invalid.
@@ -62,22 +64,24 @@ func HashFile(ctx context.Context, fs fs.Filesystem, path string, blockSize int,
// workers are used in parallel. The outbox will become closed when the inbox
// is closed and all items handled.
type parallelHasher struct {
fs fs.Filesystem
outbox chan<- ScanResult
inbox <-chan protocol.FileInfo
counter Counter
done chan<- struct{}
wg sync.WaitGroup
folderID string
fs fs.Filesystem
outbox chan<- ScanResult
inbox <-chan protocol.FileInfo
counter Counter
done chan<- struct{}
wg sync.WaitGroup
}
func newParallelHasher(ctx context.Context, fs fs.Filesystem, workers int, outbox chan<- ScanResult, inbox <-chan protocol.FileInfo, counter Counter, done chan<- struct{}) {
func newParallelHasher(ctx context.Context, folderID string, fs fs.Filesystem, workers int, outbox chan<- ScanResult, inbox <-chan protocol.FileInfo, counter Counter, done chan<- struct{}) {
ph := &parallelHasher{
fs: fs,
outbox: outbox,
inbox: inbox,
counter: counter,
done: done,
wg: sync.NewWaitGroup(),
folderID: folderID,
fs: fs,
outbox: outbox,
inbox: inbox,
counter: counter,
done: done,
wg: sync.NewWaitGroup(),
}
ph.wg.Add(workers)
@@ -104,7 +108,7 @@ func (ph *parallelHasher) hashFiles(ctx context.Context) {
panic("Bug. Asked to hash a directory or a deleted file.")
}
blocks, err := HashFile(ctx, ph.fs, f.Name, f.BlockSize(), ph.counter, true)
blocks, err := HashFile(ctx, ph.folderID, ph.fs, f.Name, f.BlockSize(), ph.counter, true)
if err != nil {
handleError(ctx, "hashing", f.Name, err, ph.outbox)
continue

35
lib/scanner/metrics.go Normal file
View File

@@ -0,0 +1,35 @@
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package scanner
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
metricHashedBytes = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "scanner",
Name: "hashed_bytes_total",
Help: "Total amount of data hashed, per folder",
}, []string{"folder"})
metricScannedItems = promauto.NewCounterVec(prometheus.CounterOpts{
Namespace: "syncthing",
Subsystem: "scanner",
Name: "scanned_items_total",
Help: "Total number of items (files/directories) inspected, per folder",
}, []string{"folder"})
)
func registerFolderMetrics(folderID string) {
// Register metrics for this folder, so that counters are present even
// when zero.
metricHashedBytes.WithLabelValues(folderID)
metricScannedItems.WithLabelValues(folderID)
}

View File

@@ -115,7 +115,7 @@ type fakeInfo struct {
}
func (f fakeInfo) Name() string { return f.name }
func (fakeInfo) Mode() fs.FileMode { return 0755 }
func (fakeInfo) Mode() fs.FileMode { return 0o755 }
func (f fakeInfo) Size() int64 { return f.size }
func (fakeInfo) ModTime() time.Time { return time.Unix(1234567890, 0) }
func (f fakeInfo) IsDir() bool {

View File

@@ -104,6 +104,7 @@ func newWalker(cfg Config) *walker {
w.Matcher = ignore.New(w.Filesystem)
}
registerFolderMetrics(w.Folder)
return w
}
@@ -132,7 +133,7 @@ func (w *walker) walk(ctx context.Context) chan ScanResult {
// We're not required to emit scan progress events, just kick off hashers,
// and feed inputs directly from the walker.
if w.ProgressTickIntervalS < 0 {
newParallelHasher(ctx, w.Filesystem, w.Hashers, finishedChan, toHashChan, nil, nil)
newParallelHasher(ctx, w.Folder, w.Filesystem, w.Hashers, finishedChan, toHashChan, nil, nil)
return finishedChan
}
@@ -163,7 +164,7 @@ func (w *walker) walk(ctx context.Context) chan ScanResult {
done := make(chan struct{})
progress := newByteCounter()
newParallelHasher(ctx, w.Filesystem, w.Hashers, finishedChan, realToHashChan, progress, done)
newParallelHasher(ctx, w.Folder, w.Filesystem, w.Hashers, finishedChan, realToHashChan, progress, done)
// A routine which actually emits the FolderScanProgress events
// every w.ProgressTicker ticks, until the hasher routines terminate.
@@ -255,6 +256,8 @@ func (w *walker) walkAndHashFiles(ctx context.Context, toHashChan chan<- protoco
default:
}
metricScannedItems.WithLabelValues(w.Folder).Inc()
// Return value used when we are returning early and don't want to
// process the item. For directories, this means do-not-descend.
var skip error // nil
@@ -599,7 +602,7 @@ func (w *walker) updateFileInfo(dst, src protocol.FileInfo) protocol.FileInfo {
if dst.Type == protocol.FileInfoTypeFile && build.IsWindows {
// If we have an existing index entry, copy the executable bits
// from there.
dst.Permissions |= (src.Permissions & 0111)
dst.Permissions |= (src.Permissions & 0o111)
}
dst.Version = src.Version.Update(w.ShortID)
dst.ModifiedBy = w.ShortID

View File

@@ -635,7 +635,7 @@ func BenchmarkHashFile(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := HashFile(context.TODO(), testFs, testdataName, protocol.MinBlockSize, nil, true); err != nil {
if _, err := HashFile(context.TODO(), "", testFs, testdataName, protocol.MinBlockSize, nil, true); err != nil {
b.Fatal(err)
}
}

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package util
package semaphore
import (
"context"
@@ -18,7 +18,7 @@ type Semaphore struct {
cond *sync.Cond
}
func NewSemaphore(max int) *Semaphore {
func New(max int) *Semaphore {
if max < 0 {
max = 0
}
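A short usage sketch of the renamed package, assuming the Take and Give methods exercised by the tests below: Take blocks until the requested amount fits within the capacity, Give returns it.

package main

import (
	"fmt"
	"sync"

	"github.com/syncthing/syncthing/lib/semaphore"
)

func main() {
	// Allow at most 1 MiB of request data in flight at a time.
	sem := semaphore.New(1 << 20)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			size := 512 << 10 // pretend each request carries 512 KiB
			sem.Take(size)    // blocks until capacity is available
			defer sem.Give(size)
			fmt.Println("processing request", n)
		}(i)
	}
	wg.Wait()
}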

View File

@@ -4,14 +4,16 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package util
package semaphore
import "testing"
func TestZeroByteSemaphore(_ *testing.T) {
func TestZeroByteSemaphore(t *testing.T) {
t.Parallel()
// A semaphore with zero capacity is just a no-op.
s := NewSemaphore(0)
s := New(0)
// None of these should block or panic
s.Take(123)
@@ -20,9 +22,11 @@ func TestZeroByteSemaphore(_ *testing.T) {
}
func TestByteSemaphoreCapChangeUp(t *testing.T) {
t.Parallel()
// Waiting takes should unblock when the capacity increases
s := NewSemaphore(100)
s := New(100)
s.Take(75)
if s.available != 25 {
@@ -43,9 +47,11 @@ func TestByteSemaphoreCapChangeUp(t *testing.T) {
}
func TestByteSemaphoreCapChangeDown1(t *testing.T) {
t.Parallel()
// Things should make sense when capacity is adjusted down
s := NewSemaphore(100)
s := New(100)
s.Take(75)
if s.available != 25 {
@@ -64,9 +70,11 @@ func TestByteSemaphoreCapChangeDown1(t *testing.T) {
}
func TestByteSemaphoreCapChangeDown2(t *testing.T) {
t.Parallel()
// Things should make sense when capacity is adjusted down, different case
s := NewSemaphore(100)
s := New(100)
s.Take(75)
if s.available != 25 {
@@ -85,9 +93,11 @@ func TestByteSemaphoreCapChangeDown2(t *testing.T) {
}
func TestByteSemaphoreGiveMore(t *testing.T) {
t.Parallel()
// We shouldn't end up with more available than we have capacity...
s := NewSemaphore(100)
s := New(100)
s.Take(150)
if s.available != 0 {

View File

@@ -0,0 +1,46 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package stringutil
import (
"strings"
"time"
)
// UniqueTrimmedStrings returns a list of all unique strings in ss,
// in the order in which they first appear in ss, after trimming away
// leading and trailing spaces.
func UniqueTrimmedStrings(ss []string) []string {
m := make(map[string]struct{}, len(ss))
us := make([]string, 0, len(ss))
for _, v := range ss {
v = strings.Trim(v, " ")
if _, ok := m[v]; ok {
continue
}
m[v] = struct{}{}
us = append(us, v)
}
return us
}
func NiceDurationString(d time.Duration) string {
switch {
case d > 24*time.Hour:
d = d.Round(time.Hour)
case d > time.Hour:
d = d.Round(time.Minute)
case d > time.Minute:
d = d.Round(time.Second)
case d > time.Second:
d = d.Round(time.Millisecond)
case d > time.Millisecond:
d = d.Round(time.Microsecond)
}
return d.String()
}
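Both helpers move here unchanged from lib/util; a small usage sketch:

package main

import (
	"fmt"
	"time"

	"github.com/syncthing/syncthing/lib/stringutil"
)

func main() {
	addrs := []string{" tcp://a ", "tcp://a", "tcp://b "}
	fmt.Println(stringutil.UniqueTrimmedStrings(addrs))
	// [tcp://a tcp://b]

	d := 90*time.Minute + 34*time.Second + 120*time.Millisecond
	fmt.Println(stringutil.NiceDurationString(d))
	// 1h31m0s (durations over an hour are rounded to the minute)
}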

View File

@@ -0,0 +1,51 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package stringutil
import (
"testing"
)
func TestUniqueStrings(t *testing.T) {
tests := []struct {
input []string
expected []string
}{
{
[]string{"a", "b"},
[]string{"a", "b"},
},
{
[]string{"a", "a"},
[]string{"a"},
},
{
[]string{"a", "a", "a", "a"},
[]string{"a"},
},
{
nil,
nil,
},
{
[]string{" a ", " a ", "b ", " b"},
[]string{"a", "b"},
},
}
for _, test := range tests {
result := UniqueTrimmedStrings(test.input)
if len(result) != len(test.expected) {
t.Errorf("%s != %s", result, test.expected)
}
for i := range result {
if test.expected[i] != result[i] {
t.Errorf("%s != %s", result, test.expected)
}
}
}
}

View File

@@ -1,19 +1,15 @@
// Copyright (C) 2016 The Syncthing Authors.
// Copyright (C) 2023 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package util
package structutil
import (
"context"
"fmt"
"net/url"
"reflect"
"strconv"
"strings"
"time"
)
type defaultParser interface {
@@ -21,7 +17,7 @@ type defaultParser interface {
}
// SetDefaults sets default values on a struct, based on the default annotation.
func SetDefaults(data interface{}) {
func SetDefaults(data any) {
s := reflect.ValueOf(data).Elem()
t := s.Type()
@@ -86,63 +82,15 @@ func SetDefaults(data interface{}) {
}
}
// CopyMatchingTag copies fields tagged tag:"value" from "from" struct onto "to" struct.
func CopyMatchingTag(from interface{}, to interface{}, tag string, shouldCopy func(value string) bool) {
fromStruct := reflect.ValueOf(from).Elem()
fromType := fromStruct.Type()
toStruct := reflect.ValueOf(to).Elem()
toType := toStruct.Type()
if fromType != toType {
panic(fmt.Sprintf("non equal types: %s != %s", fromType, toType))
}
for i := 0; i < toStruct.NumField(); i++ {
fromField := fromStruct.Field(i)
toField := toStruct.Field(i)
if !toField.CanSet() {
// Unexported fields
continue
}
structTag := toType.Field(i).Tag
v := structTag.Get(tag)
if shouldCopy(v) {
toField.Set(fromField)
}
}
}
// UniqueTrimmedStrings returns a list of all unique strings in ss,
// in the order in which they first appear in ss, after trimming away
// leading and trailing spaces.
func UniqueTrimmedStrings(ss []string) []string {
var m = make(map[string]struct{}, len(ss))
var us = make([]string, 0, len(ss))
for _, v := range ss {
v = strings.Trim(v, " ")
if _, ok := m[v]; ok {
continue
}
m[v] = struct{}{}
us = append(us, v)
}
return us
}
func FillNilExceptDeprecated(data interface{}) {
func FillNilExceptDeprecated(data any) {
fillNil(data, true)
}
func FillNil(data interface{}) {
func FillNil(data any) {
fillNil(data, false)
}
func fillNil(data interface{}, skipDeprecated bool) {
func fillNil(data any, skipDeprecated bool) {
s := reflect.ValueOf(data).Elem()
t := s.Type()
for i := 0; i < s.NumField(); i++ {
@@ -190,7 +138,7 @@ func fillNil(data interface{}, skipDeprecated bool) {
}
// FillNilSlices sets default value on slices that are still nil.
func FillNilSlices(data interface{}) error {
func FillNilSlices(data any) error {
s := reflect.ValueOf(data).Elem()
t := s.Type()
@@ -220,55 +168,3 @@ func FillNilSlices(data interface{}) error {
}
return nil
}
// Address constructs a URL from the given network and hostname.
func Address(network, host string) string {
u := url.URL{
Scheme: network,
Host: host,
}
return u.String()
}
func CallWithContext(ctx context.Context, fn func() error) error {
var err error
done := make(chan struct{})
go func() {
err = fn()
close(done)
}()
select {
case <-done:
return err
case <-ctx.Done():
return ctx.Err()
}
}
func NiceDurationString(d time.Duration) string {
switch {
case d > 24*time.Hour:
d = d.Round(time.Hour)
case d > time.Hour:
d = d.Round(time.Minute)
case d > time.Minute:
d = d.Round(time.Second)
case d > time.Second:
d = d.Round(time.Millisecond)
case d > time.Millisecond:
d = d.Round(time.Microsecond)
}
return d.String()
}
func EqualStrings(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
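SetDefaults and the FillNil helpers keep their behaviour; only the import path changes. A small sketch, assuming the `default` struct tag used throughout the config types (field names here are invented for illustration):

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/structutil"
)

type Options struct {
	ListenAddress string `default:"tcp://0.0.0.0:22000"`
	MaxConns      int    `default:"8"`
	Enabled       bool   `default:"true"`
}

func main() {
	var opts Options
	structutil.SetDefaults(&opts) // fills each field from its default tag
	fmt.Printf("%+v\n", opts)
}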

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package util
package structutil
import (
"testing"
@@ -55,46 +55,6 @@ func TestSetDefaults(t *testing.T) {
}
}
func TestUniqueStrings(t *testing.T) {
tests := []struct {
input []string
expected []string
}{
{
[]string{"a", "b"},
[]string{"a", "b"},
},
{
[]string{"a", "a"},
[]string{"a"},
},
{
[]string{"a", "a", "a", "a"},
[]string{"a"},
},
{
nil,
nil,
},
{
[]string{" a ", " a ", "b ", " b"},
[]string{"a", "b"},
},
}
for _, test := range tests {
result := UniqueTrimmedStrings(test.input)
if len(result) != len(test.expected) {
t.Errorf("%s != %s", result, test.expected)
}
for i := range result {
if test.expected[i] != result[i] {
t.Errorf("%s != %s", result, test.expected)
}
}
}
}
func TestFillNillSlices(t *testing.T) {
// Nil
x := &struct {
@@ -148,83 +108,6 @@ func TestFillNillSlices(t *testing.T) {
}
}
func TestAddress(t *testing.T) {
tests := []struct {
network string
host string
result string
}{
{"tcp", "google.com", "tcp://google.com"},
{"foo", "google", "foo://google"},
{"123", "456", "123://456"},
}
for _, test := range tests {
result := Address(test.network, test.host)
if result != test.result {
t.Errorf("%s != %s", result, test.result)
}
}
}
func TestCopyMatching(t *testing.T) {
type Nested struct {
A int
}
type Test struct {
CopyA int
CopyB []string
CopyC Nested
CopyD *Nested
NoCopy int `restart:"true"`
}
from := Test{
CopyA: 1,
CopyB: []string{"friend", "foe"},
CopyC: Nested{
A: 2,
},
CopyD: &Nested{
A: 3,
},
NoCopy: 4,
}
to := Test{
CopyA: 11,
CopyB: []string{"foot", "toe"},
CopyC: Nested{
A: 22,
},
CopyD: &Nested{
A: 33,
},
NoCopy: 44,
}
// Copy empty fields
CopyMatchingTag(&from, &to, "restart", func(v string) bool {
return v != "true"
})
if to.CopyA != 1 {
t.Error("CopyA")
}
if len(to.CopyB) != 2 || to.CopyB[0] != "friend" || to.CopyB[1] != "foe" {
t.Error("CopyB")
}
if to.CopyC.A != 2 {
t.Error("CopyC")
}
if to.CopyD.A != 3 {
t.Error("CopyC")
}
if to.NoCopy != 44 {
t.Error("NoCopy")
}
}
func TestFillNil(t *testing.T) {
type A struct {
Slice []int

View File

@@ -9,20 +9,20 @@ package stun
import (
"context"
"net"
"sync/atomic"
"time"
"github.com/AudriusButkevicius/pfilter"
"github.com/ccding/go-stun/stun"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/svcutil"
)
const stunRetryInterval = 5 * time.Minute
type Host = stun.Host
type NATType = stun.NATType
type (
Host = stun.Host
NATType = stun.NATType
)
// NAT types.
@@ -38,38 +38,6 @@ const (
NATSymmetricUDPFirewall = stun.NATSymmetricUDPFirewall
)
type writeTrackingUdpConn struct {
// Needs to be UDPConn not PacketConn, as pfilter checks for WriteMsgUDP/ReadMsgUDP
// and even if we embed UDPConn here, in place of a PacketConn, seems the interface
// check fails.
*net.UDPConn
lastWrite atomic.Int64
}
func (c *writeTrackingUdpConn) WriteTo(p []byte, addr net.Addr) (n int, err error) {
c.lastWrite.Store(time.Now().Unix())
return c.UDPConn.WriteTo(p, addr)
}
func (c *writeTrackingUdpConn) WriteMsgUDP(b, oob []byte, addr *net.UDPAddr) (n, oobn int, err error) {
c.lastWrite.Store(time.Now().Unix())
return c.UDPConn.WriteMsgUDP(b, oob, addr)
}
func (c *writeTrackingUdpConn) WriteToUDP(b []byte, addr *net.UDPAddr) (int, error) {
c.lastWrite.Store(time.Now().Unix())
return c.UDPConn.WriteToUDP(b, addr)
}
func (c *writeTrackingUdpConn) Write(b []byte) (int, error) {
c.lastWrite.Store(time.Now().Unix())
return c.UDPConn.Write(b)
}
func (c *writeTrackingUdpConn) getLastWrite() time.Time {
return time.Unix(c.lastWrite.Load(), 0)
}
type Subscriber interface {
OnNATTypeChanged(natType NATType)
OnExternalAddressChanged(address *Host, via string)
@@ -79,30 +47,21 @@ type Service struct {
name string
cfg config.Wrapper
subscriber Subscriber
stunConn net.PacketConn
client *stun.Client
writeTrackingUdpConn *writeTrackingUdpConn
lastWriter LastWriter
natType NATType
addr *Host
}
func New(cfg config.Wrapper, subscriber Subscriber, conn *net.UDPConn) (*Service, net.PacketConn) {
// Wrap the original connection to track writes on it
writeTrackingUdpConn := &writeTrackingUdpConn{UDPConn: conn}
// Wrap it in a filter and split it up, so that stun packets arrive on stun conn, others arrive on the data conn
filterConn := pfilter.NewPacketFilter(writeTrackingUdpConn)
otherDataConn := filterConn.NewConn(otherDataPriority, nil)
stunConn := filterConn.NewConn(stunFilterPriority, &stunFilter{
ids: make(map[string]time.Time),
})
filterConn.Start()
type LastWriter interface {
LastWrite() time.Time
}
func New(cfg config.Wrapper, subscriber Subscriber, conn net.PacketConn, lastWriter LastWriter) *Service {
// Construct the client to use the stun conn
client := stun.NewClientWithConnection(stunConn)
client := stun.NewClientWithConnection(conn)
client.SetSoftwareName("") // Explicitly unset this, seems to freak some servers out.
// Return the service and the other conn to the client
@@ -117,15 +76,14 @@ func New(cfg config.Wrapper, subscriber Subscriber, conn *net.UDPConn) (*Service
cfg: cfg,
subscriber: subscriber,
stunConn: stunConn,
client: client,
writeTrackingUdpConn: writeTrackingUdpConn,
lastWriter: lastWriter,
natType: NATUnknown,
addr: nil,
}
return s, otherDataConn
return s
}
func (s *Service) Serve(ctx context.Context) error {
@@ -134,13 +92,6 @@ func (s *Service) Serve(ctx context.Context) error {
s.setExternalAddress(nil, "")
}()
// Closing s.stunConn unblocks operations that use the connection
// (Discover, Keepalive) and might otherwise block us from returning.
go func() {
<-ctx.Done()
_ = s.stunConn.Close()
}()
timer := time.NewTimer(time.Millisecond)
for {
@@ -208,7 +159,7 @@ func (s *Service) runStunForServer(ctx context.Context, addr string) {
var natType stun.NATType
var extAddr *stun.Host
err = util.CallWithContext(ctx, func() error {
err = svcutil.CallWithContext(ctx, func() error {
natType, extAddr, err = s.client.Discover()
return err
})
@@ -244,6 +195,7 @@ func (s *Service) stunKeepAlive(ctx context.Context, addr string, extAddr *Host)
l.Debugf("%s starting stun keepalive via %s, next sleep %s", s, addr, nextSleep)
var ourLastWrite time.Time
for {
if areDifferent(s.addr, extAddr) {
// If the port has changed (addresses are not equal but the hosts are equal),
@@ -264,7 +216,10 @@ func (s *Service) stunKeepAlive(ctx context.Context, addr string, extAddr *Host)
}
// Adjust the keepalives to fire only nextSleep after last write.
lastWrite := s.writeTrackingUdpConn.getLastWrite()
lastWrite := ourLastWrite
if quicLastWrite := s.lastWriter.LastWrite(); quicLastWrite.After(lastWrite) {
lastWrite = quicLastWrite
}
minSleep := time.Duration(s.cfg.Options().StunKeepaliveMinS) * time.Second
if nextSleep < minSleep {
nextSleep = minSleep
@@ -293,7 +248,7 @@ func (s *Service) stunKeepAlive(ctx context.Context, addr string, extAddr *Host)
}
// Check if any writes happened while we were sleeping; if they did, sleep again
lastWrite = s.writeTrackingUdpConn.getLastWrite()
lastWrite = s.lastWriter.LastWrite()
if gap := time.Since(lastWrite); gap < nextSleep {
l.Debugf("%s stun last write gap less than next sleep: %s < %s. Will try later", s, gap, nextSleep)
goto tryLater
@@ -306,6 +261,7 @@ func (s *Service) stunKeepAlive(ctx context.Context, addr string, extAddr *Host)
l.Debugf("%s stun keepalive on %s: %s (%v)", s, addr, err, extAddr)
return
}
ourLastWrite = time.Now()
}
}
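The service no longer wraps the UDP socket itself; it only needs something that reports when data was last written (in practice the QUIC side of the connection). A hypothetical minimal implementation of the LastWriter interface, for illustration only:

package main

import (
	"fmt"
	"io"
	"os"
	"sync/atomic"
	"time"
)

// writeTracker wraps any io.Writer and records the time of the last write,
// satisfying the stun LastWriter interface (LastWrite() time.Time).
type writeTracker struct {
	io.Writer
	last atomic.Int64 // unix nanos
}

func (w *writeTracker) Write(p []byte) (int, error) {
	w.last.Store(time.Now().UnixNano())
	return w.Writer.Write(p)
}

func (w *writeTracker) LastWrite() time.Time {
	return time.Unix(0, w.last.Load())
}

func main() {
	w := &writeTracker{Writer: os.Stdout}
	fmt.Fprintln(w, "hello")
	fmt.Println("last write:", w.LastWrite())
}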

View File

@@ -223,3 +223,18 @@ func asNonContextError(ctx context.Context, err error) error {
}
return err
}
func CallWithContext(ctx context.Context, fn func() error) error {
var err error
done := make(chan struct{})
go func() {
err = fn()
close(done)
}()
select {
case <-done:
return err
case <-ctx.Done():
return ctx.Err()
}
}
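CallWithContext moves here from lib/util with unchanged semantics: fn runs in its own goroutine and the call returns as soon as either fn finishes or the context is done. Note that a cancelled context only abandons the wait; the goroutine keeps running until fn returns. A short usage sketch:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/syncthing/syncthing/lib/svcutil"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	err := svcutil.CallWithContext(ctx, func() error {
		time.Sleep(time.Second) // e.g. a gateway discovery that may hang
		return nil
	})
	fmt.Println(err) // context deadline exceeded: we stopped waiting after 100ms
}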

View File

@@ -4,7 +4,7 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package testutils
package testutil
import (
"errors"
@@ -25,6 +25,7 @@ func NewBlockingRW() *BlockingRW {
closeOnce: sync.Once{},
}
}
func (rw *BlockingRW) Read(_ []byte) (int, error) {
<-rw.c
return 0, ErrClosed

View File

@@ -35,7 +35,7 @@ type Asset struct {
// The browser URL is needed for human readable links in the output created
// by cmd/stupgrades.
BrowserURL string `json:"browser_download_url"`
BrowserURL string `json:"browser_download_url,omitempty"`
}
var (

View File

@@ -47,7 +47,6 @@ import (
"sync"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/osutil"
@@ -99,8 +98,7 @@ func Discover(ctx context.Context, _, timeout time.Duration) []nat.Device {
wg := &sync.WaitGroup{}
for _, intf := range interfaces {
// Interface flags seem to always be 0 on Windows
if !build.IsWindows && (intf.Flags&net.FlagUp == 0 || intf.Flags&net.FlagMulticast == 0) {
if intf.Flags&net.FlagRunning == 0 || intf.Flags&net.FlagMulticast == 0 {
continue
}

View File

@@ -14,7 +14,7 @@ import (
"strconv"
"time"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/structutil"
)
type Report struct {
@@ -179,7 +179,7 @@ type Report struct {
func New() *Report {
r := &Report{}
util.FillNil(r)
structutil.FillNil(r)
return r
}

View File

@@ -32,7 +32,7 @@ type simple struct {
func newSimple(cfg config.FolderConfiguration) Versioner {
var keep, err = strconv.Atoi(cfg.Versioning.Params["keep"])
cleanoutDays, _ := strconv.Atoi(cfg.Versioning.Params["cleanoutDays"])
// On error we default to 0, "do not clean out the trash can"
// On error we default to 0, "do not clean out the versioned items"
if err != nil {
keep = 5 // A reasonable default

View File

@@ -81,7 +81,7 @@ func TestSimpleVersioningVersionCount(t *testing.T) {
t.Error(err)
}
n, err := fs.DirNames(".stversions")
n, err := fs.DirNames(DefaultPath)
if err != nil {
t.Error(err)
}

View File

@@ -19,7 +19,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/stringutil"
)
var (
@@ -28,6 +28,10 @@ var (
errFileAlreadyExists = errors.New("file already exists")
)
const (
DefaultPath = ".stversions"
)
// TagFilename inserts ~tag just before the extension of the filename.
func TagFilename(name, tag string) string {
dir, file := filepath.Dir(name), filepath.Base(name)
@@ -122,7 +126,6 @@ func retrieveVersions(fileSystem fs.Filesystem) (map[string][]FileVersion, error
return nil
})
if err != nil {
return nil, err
}
@@ -149,7 +152,7 @@ func archiveFile(method fs.CopyRangeMethod, srcFs, dstFs fs.Filesystem, filePath
if err != nil {
if fs.IsNotExist(err) {
l.Debugln("creating versions dir")
err := dstFs.MkdirAll(".", 0755)
err := dstFs.MkdirAll(".", 0o755)
if err != nil {
return err
}
@@ -162,7 +165,7 @@ func archiveFile(method fs.CopyRangeMethod, srcFs, dstFs fs.Filesystem, filePath
file := filepath.Base(filePath)
inFolderPath := filepath.Dir(filePath)
err = dstFs.MkdirAll(inFolderPath, 0755)
err = dstFs.MkdirAll(inFolderPath, 0o755)
if err != nil && !fs.IsExist(err) {
l.Debugln("archiving", filePath, err)
return err
@@ -249,7 +252,7 @@ func restoreFile(method fs.CopyRangeMethod, src, dst fs.Filesystem, filePath str
return err
}
_ = dst.MkdirAll(filepath.Dir(filePath), 0755)
_ = dst.MkdirAll(filepath.Dir(filePath), 0o755)
err := osutil.RenameOrCopy(method, src, dst, sourceFile, filePath)
_ = dst.Chtimes(filePath, sourceMtime, sourceMtime)
return err
@@ -258,7 +261,7 @@ func restoreFile(method fs.CopyRangeMethod, src, dst fs.Filesystem, filePath str
func versionerFsFromFolderCfg(cfg config.FolderConfiguration) (versionsFs fs.Filesystem) {
folderFs := cfg.Filesystem(nil)
if cfg.Versioning.FSPath == "" {
versionsFs = fs.NewFilesystem(folderFs.Type(), filepath.Join(folderFs.URI(), ".stversions"))
versionsFs = fs.NewFilesystem(folderFs.Type(), filepath.Join(folderFs.URI(), DefaultPath))
} else if cfg.Versioning.FSType == fs.FilesystemTypeBasic && !filepath.IsAbs(cfg.Versioning.FSPath) {
// We only know how to deal with relative folders for basic filesystems, as that's the only one we know
// how to check if it's absolute or relative.
@@ -281,7 +284,7 @@ func findAllVersions(fs fs.Filesystem, filePath string) []string {
l.Warnln("globbing:", err, "for", pattern)
return nil
}
versions = util.UniqueTrimmedStrings(versions)
versions = stringutil.UniqueTrimmedStrings(versions)
sort.Strings(versions)
return versions

View File

@@ -44,7 +44,7 @@ const (
func New(cfg config.FolderConfiguration) (Versioner, error) {
fac, ok := factories[cfg.Versioning.Type]
if !ok {
return nil, fmt.Errorf("requested versioning type %q does not exist", cfg.Type)
return nil, fmt.Errorf("requested versioning type %q does not exist", cfg.Versioning.Type)
}
return &versionerWithErrorContext{

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STDISCOSRV" "1" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "STDISCOSRV" "1" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.SH SYNOPSIS
@@ -438,6 +438,41 @@ configuration:
.nf
.ft C
RemoteIPHeader X\-Forwarded\-For
.ft P
.fi
.UNINDENT
.UNINDENT
.SS Caddy
.sp
The following lines must be added to the Caddyfile:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
discovery.example.com {
reverse_proxy 192.0.2.1:8443 {
header_up X\-Forwarded\-For {http.request.remote.host}
header_up X\-Client\-Port {http.request.remote.port}
header_up X\-Tls\-Client\-Cert\-Der\-Base64 {http.request.tls.client.certificate_der_base64}
}
tls {
client_auth {
mode request
}
}
}
.ft P
.fi
.UNINDENT
.UNINDENT
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
.ft P
.fi
.UNINDENT

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "STRELAYSRV" "1" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "STRELAYSRV" "1" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-BEP" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.SH INTRODUCTION AND DEFINITIONS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-CONFIG" "5" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.SH SYNOPSIS
@@ -193,7 +193,6 @@ may no longer correspond to the defaults.
<urURL>https://data.syncthing.net/newdata</urURL>
<urPostInsecurely>false</urPostInsecurely>
<urInitialDelayS>1800</urInitialDelayS>
<restartOnWakeup>true</restartOnWakeup>
<autoUpgradeIntervalH>12</autoUpgradeIntervalH>
<upgradeToPreReleases>false</upgradeToPreReleases>
<keepTemporariesH>24</keepTemporariesH>
@@ -1141,7 +1140,6 @@ Search filter for user searches.
<urURL>https://data.syncthing.net/newdata</urURL>
<urPostInsecurely>false</urPostInsecurely>
<urInitialDelayS>1800</urInitialDelayS>
<restartOnWakeup>true</restartOnWakeup>
<autoUpgradeIntervalH>12</autoUpgradeIntervalH>
<upgradeToPreReleases>false</upgradeToPreReleases>
<keepTemporariesH>24</keepTemporariesH>
@@ -1304,12 +1302,6 @@ the system to stabilize before reporting statistics.
.UNINDENT
.INDENT 0.0
.TP
.B restartOnWakeup
Whether to perform a restart of Syncthing when it is detected that we are
waking from sleep mode (i.e. an unfolding laptop).
.UNINDENT
.INDENT 0.0
.TP
.B autoUpgradeIntervalH
Check for a newer version after this many hours. Set to \fB0\fP to disable
automatic upgrades.

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-DEVICE-IDS" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-EVENT-API" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.SH DESCRIPTION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-FAQ" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.INDENT 0.0

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-GLOBALDISCO" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.SH ANNOUNCEMENTS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-LOCALDISCO" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.SH MODE OF OPERATION

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-NETWORKING" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.SH ROUTER SETUP

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-RELAY" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.SH WHAT IS A RELAY?

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-REST-API" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.sp
@@ -249,7 +249,6 @@ Returns the current configuration.
"urURL": "https://data.syncthing.net/newdata",
"urPostInsecurely": false,
"urInitialDelayS": 1800,
"restartOnWakeup": true,
"autoUpgradeIntervalH": 12,
"upgradeToPreReleases": false,
"keepTemporariesH": 24,

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-SECURITY" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.sp

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-STIGNORE" "5" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.SH SYNOPSIS

View File

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "SYNCTHING-VERSIONING" "7" "Jul 30, 2023" "v1.23.6" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "Aug 18, 2023" "v1.23.7" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.sp
@@ -47,6 +47,13 @@ Bob. If Alice changes a file locally on her own computer Syncthing will
not and can not archive the old version.
.UNINDENT
.UNINDENT
.sp
The applicable configuration options for each versioning strategy are described
below. For most of them it’s possible to specify where the versions are stored,
with the default being the \fB\&.stversions\fP folder inside the shared folder path.
If you set a custom version path, please ensure that it’s on the same partition
or filesystem as the regular folder path, as moving files there may otherwise
fail.
.SH TRASH CAN FILE VERSIONING
.sp
This versioning strategy emulates the common “trash can” approach. When a file
@@ -54,34 +61,27 @@ is deleted or replaced due to a change on a remote device, it is moved to
the trash can in the \fB\&.stversions\fP folder. If a file with the same name was
already in the trash can it is replaced.
.sp
A configuration option is available to clean the trash can from files older
than a specified number of days. If this is set to a positive number of days,
files will be removed when they have been in the trash can that long. Setting
this to zero prevents any files from being removed from the trash can
automatically.
A \fI\%configuration option\fP is
available to clean the trash can from files older than a specified number of
days. If this is set to a positive number of days, files will be removed when
they have been in the trash can that long. Setting this to zero prevents any
files from being removed from the trash can automatically.
.SH SIMPLE FILE VERSIONING
.sp
With “Simple File Versioning” files are moved to the \fB\&.stversions\fP folder
(inside your shared folder) when replaced or deleted on a remote device. This
option also takes a value in an input titled “Keep Versions” which tells
Syncthing how many old versions of the file it should keep. For example, if
you set this value to 5, if a file is replaced 5 times on a remote device, you
will see 5 time\-stamped versions on that file in the “.stversions” folder on
the other devices sharing the same folder.
With “Simple File Versioning” files are moved to the \fB\&.stversions\fP folder when
replaced or deleted on a remote device. In addition to the
\fI\%cleanoutDays\fP option, this strategy also takes a
value in an input titled “Keep Versions” which tells Syncthing how many old
versions of the file it should \fI\%keep\fP\&. For
example, if you set this value to 5, if a file is replaced 5 times on a remote
device, you will see 5 time\-stamped versions on that file in the \fB\&.stversions\fP
folder on the other devices sharing the same folder.
.SH STAGGERED FILE VERSIONING
.sp
With “Staggered File Versioning” files are also moved to a different folder
when replaced or deleted on a remote device (just like “Simple File
Versioning”), however, versions are automatically deleted if they are older
than the maximum age or exceed the number of files allowed in an interval.
.sp
With this versioning method it’s possible to specify where the versions are
stored, with the default being the \fB\&.stversions\fP folder inside the normal
folder path. If you set a custom version path, please ensure that it’s on the
same partition or filesystem as the regular folder path, as moving files there
may otherwise fail. You can use an absolute path (this is recommended) or a
relative path. Relative paths are interpreted relative to Syncthing’s current
or startup directory.
With “Staggered File Versioning” files are also moved to the \fB\&.stversions\fP
folder when replaced or deleted on a remote device (just like “Simple File
Versioning”), however, versions are automatically deleted if they are older than
the maximum age or exceed the number of files allowed in an interval.
.sp
The following intervals are used and they each have a maximum number of files
that will be kept for each.
@@ -102,8 +102,9 @@ Until maximum age, the oldest version in every week is kept.
.TP
.B Maximum Age
The maximum time to keep a version in days. For example, to keep replaced or
deleted files in the .stversions folder for an entire year, use 365. If
only for 10 days, use 10.
deleted files in the \fB\&.stversions\fP folder for an entire year, use 365. If
only for 10 days, use 10. Corresponds to the
\fI\%maxAge\fP option.
\fBNote: Set to 0 to keep versions forever.\fP
.UNINDENT
.sp
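
As a rough illustration of the pruning rule described above (within each interval at most one version per step is kept, and anything older than the maximum age is removed), here is a hedged, self-contained Go sketch. The interval table and the greedy oldest-first selection are assumptions drawn from the documentation's summary, not Syncthing's actual staggered versioner.

package main

import (
	"fmt"
	"time"
)

type interval struct {
	step time.Duration // keep at most one version per step
	end  time.Duration // applies to versions younger than this
}

// Assumed interval table; the real values live in the staggered versioner.
var intervals = []interval{
	{30 * time.Second, time.Hour},
	{time.Hour, 24 * time.Hour},
	{24 * time.Hour, 30 * 24 * time.Hour},
	{7 * 24 * time.Hour, 365 * 24 * time.Hour},
}

// keepVersions returns the version timestamps (given oldest first) that would
// survive a cleanup run at time now, for a maximum age in days (0 = forever).
func keepVersions(versions []time.Time, now time.Time, maxAgeDays int) []time.Time {
	maxAge := time.Duration(maxAgeDays) * 24 * time.Hour
	var kept []time.Time
	var lastKept time.Time
	for _, v := range versions {
		age := now.Sub(v)
		if maxAgeDays > 0 && age > maxAge {
			continue // older than the maximum age: delete
		}
		// Pick the step size for this version's age.
		step := intervals[len(intervals)-1].step
		for _, iv := range intervals {
			if age < iv.end {
				step = iv.step
				break
			}
		}
		// Keep it only if it is at least one step newer than the last kept one.
		if lastKept.IsZero() || v.Sub(lastKept) >= step {
			kept = append(kept, v)
			lastKept = v
		}
	}
	return kept
}

func main() {
	now := time.Now()
	versions := []time.Time{
		now.Add(-400 * 24 * time.Hour),
		now.Add(-40 * 24 * time.Hour),
		now.Add(-39 * 24 * time.Hour),
		now.Add(-2 * time.Hour),
		now.Add(-90 * time.Minute),
		now.Add(-10 * time.Minute),
	}
	for _, v := range keepVersions(versions, now, 365) {
		fmt.Println(v.Format(time.RFC3339))
	}
}
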
@@ -116,12 +117,12 @@ For more info, check the \fI\%unit test file\fP <\fBhttps://github.com/syncthing
that shows which versions are deleted for a specific run.
.SH EXTERNAL FILE VERSIONING
.sp
This versioning method delegates the decision on what to do to an
external command (e.g. a program or a command line script). Just prior
to a file being replaced, the command will be executed. The file needs
to be removed from the folder in the process, or otherwise Syncthing
will report an error. The command can use the following templated
arguments:
This versioning strategy delegates the decision on what to do to an
\fI\%external command\fP (e.g. a program or a
command line script). Just prior to a file being replaced, the command will be
executed. The file needs to be removed from the folder in the process, or
otherwise Syncthing will report an error. The command can use the following
templated arguments:
.INDENT 0.0
.TP
.B %FOLDER_PATH%
@@ -291,6 +292,98 @@ The only caveat that you should be aware of is that if your Syncthing
folder is located on a portable storage, such as a USB stick, or if you
have the Recycle Bin disabled, then the script will end up deleting all
files permanently.
.SH CONFIGURATION PARAMETER REFERENCE
.sp
The versioning settings are grouped into their own section of each folder in the
\fBconfiguration file\fP\&. The following shows an
example of such a section in the XML:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
<folder id="...">
<versioning type="simple">
<cleanupIntervalS>3600</cleanupIntervalS>
<fsPath></fsPath>
<fsType>basic</fsType>
<param key="cleanoutDays" val="0"></param>
<param key="keep" val="5"></param>
</versioning>
</folder>
.ft P
.fi
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B versioning.type
Selects one of the versioning strategies: \fBtrashcan\fP, \fBsimple\fP,
\fBstaggered\fP, \fBexternal\fP or leave empty to disable versioning completely.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.fsPath
Overrides the path where old versions of files are stored and defaults to
\fB\&.stversions\fP if left empty. An absolute or relative path can be
specified. The latter is interpreted relative to the shared folder path, if
the \fI\%fsType\fP is configured as \fBbasic\fP\&. Ignored
for the \fBexternal\fP versioning strategy.
.sp
This option used to be stored under the keys \fBfsPath\fP or \fBversionsPath\fP
in the \fI\%params\fP element.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.fsType
The internal file system implementation used to access this versions folder.
Only applies if \fI\%fsPath\fP is also set non\-empty,
otherwise the \fBfilesystemType\fP from the folder element is used
instead. Refer to that option description for possible values. Ignored for
the \fBexternal\fP versioning strategy.
.sp
This option used to be stored under the key \fBfsType\fP in the
\fI\%params\fP element.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.cleanupIntervalS
The interval, in seconds, for running cleanup in the versions folder. Zero
to disable periodic cleaning. Limited to one year (31536000 seconds).
Ignored for the \fBexternal\fP versioning strategy.
.sp
This option used to be stored under the key \fBcleanInterval\fP in the
\fI\%params\fP element.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.params
Each versioning strategy can have configuration parameters specific to its
implementation under this element.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.params.cleanoutDays
The number of days to keep files in the versions folder. Zero means to keep
forever. Older elements encountered during cleanup are removed.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.params.keep
The number of old versions to keep, per file.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.params.maxAge
The maximum time to keep a version, in seconds. Zero means to keep forever.
.UNINDENT
.INDENT 0.0
.TP
.B versioning.params.command
External command to execute for storing a file version about to be replaced
or deleted. If the path to the application contains spaces, it should be
quoted.
.UNINDENT
.SH AUTHOR
The Syncthing Authors
.SH COPYRIGHT
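
To make the parameter reference above a bit more concrete, here is a hedged Go sketch that unmarshals the example <versioning> element from the reference into a struct and flattens the params into a map, roughly as a versioner implementation might consume them. The struct layout simply mirrors the XML shown in the documentation; it is illustrative only and is not Syncthing's real configuration code.

package main

import (
	"encoding/xml"
	"fmt"
)

type VersioningParam struct {
	Key string `xml:"key,attr"`
	Val string `xml:"val,attr"`
}

type VersioningConfiguration struct {
	Type             string            `xml:"type,attr"`
	CleanupIntervalS int               `xml:"cleanupIntervalS"`
	FSPath           string            `xml:"fsPath"`
	FSType           string            `xml:"fsType"`
	Params           []VersioningParam `xml:"param"`
}

func main() {
	raw := `<versioning type="simple">
  <cleanupIntervalS>3600</cleanupIntervalS>
  <fsPath></fsPath>
  <fsType>basic</fsType>
  <param key="cleanoutDays" val="0"></param>
  <param key="keep" val="5"></param>
</versioning>`

	var v VersioningConfiguration
	if err := xml.Unmarshal([]byte(raw), &v); err != nil {
		panic(err)
	}
	// Flatten the strategy-specific params into a map for easy lookup.
	params := make(map[string]string, len(v.Params))
	for _, p := range v.Params {
		params[p.Key] = p.Val
	}
	fmt.Println(v.Type, v.CleanupIntervalS, params["keep"], params["cleanoutDays"])
}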

Some files were not shown because too many files have changed in this diff.