Compare commits


45 Commits

Author SHA1 Message Date
Simon Frei
f6f696c6c5 lib/config: Prevent nil deref in debug logging (fixes #5955) (#5956) 2019-08-15 15:51:09 +02:00
Simon Frei
cf40ed6cec lib/connections: Return exported intf from exported function (#5947) 2019-08-13 09:33:33 +02:00
Simon Frei
6fa02d5081 lib/model: Fix a few more problematic locks (ref #5929) (#5944) 2019-08-13 09:04:43 +02:00
Simon Frei
86e35f1879 lib/model: Less locking in ClusterConfig (#5943) 2019-08-11 19:30:24 +02:00
Jakob Borg
720a6bf62e lib/tlsutil: Remove hardcoded curve preferences (fixes #5940) (#5942)
They are arguably outdated and we are better off trusting the standard
library than trying to keep up with it ourselves.
2019-08-11 19:01:57 +02:00
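As a hedged illustration (not the actual lib/tlsutil code): in Go, "trusting the standard library" here simply means leaving CurvePreferences unset on the tls.Config, so crypto/tls applies its own current default curve ordering. Names below are made up for the sketch.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// newTLSConfig builds a server-side TLS config. CurvePreferences is left
// unset on purpose: crypto/tls then uses its own default curve ordering,
// which stays current with the standard library itself.
func newTLSConfig(cert tls.Certificate) *tls.Config {
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   tls.VersionTLS12,
		// No CurvePreferences here: the stdlib defaults apply.
	}
}

func main() {
	cfg := newTLSConfig(tls.Certificate{})
	fmt.Println(len(cfg.CurvePreferences) == 0) // true: defaults in effect
}
```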
Simon Frei
4a619e74f2 lib/model: Fix incorrect locking (#5939) 2019-08-11 16:10:30 +02:00
Audrius Butkevicius
58ef5368f8 lib/connections: Validate device id before assuming success (fixes #5934) (#5935)
* lib/connections: Validate device id before assuming success (fixes #5934)

* Vet
2019-08-09 12:31:42 +01:00
Cromefire_
7b37d453f9 build, etc: Add systemd units and ufw rules for relay and discovery (fixes #5115) (#5350) 2019-08-08 18:04:52 +02:00
Oliver Freyermuth
edf2399ce6 Add NATSymmetricUDPFirewall to punchable NATs (#5931)
NATSymmetricUDPFirewall is actually not NAT at all, but means the machine has a global IP address and a UDP firewall in front (the RFC calls it Symmetric UDP Firewall). This can be punched through fine, both in theory and in practical testing.
2019-08-06 12:26:02 +01:00
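As a rough, self-contained sketch of the idea (the type and names below are illustrative, not syncthing's actual STUN code): a symmetric UDP firewall is a public address behind a stateful UDP firewall, so an outgoing probe opens a return path just as it does for the cone NAT types, and the classification can join the punchable list.

```go
package main

import "fmt"

// NATType mirrors the classification a STUN probe returns. The names are
// illustrative; syncthing takes them from its STUN library.
type NATType string

const (
	NATNone                 NATType = "none"
	NATFull                 NATType = "full cone"
	NATRestricted           NATType = "restricted"
	NATPortRestricted       NATType = "port restricted"
	NATSymmetric            NATType = "symmetric"
	NATSymmetricUDPFirewall NATType = "symmetric UDP firewall"
)

// isPunchable reports whether UDP hole punching is expected to work for the
// given classification. A symmetric UDP firewall is really a public address
// behind a stateful UDP firewall: outgoing traffic opens the return path,
// so it belongs in this list alongside the cone NAT types.
func isPunchable(t NATType) bool {
	switch t {
	case NATNone, NATFull, NATRestricted, NATPortRestricted, NATSymmetricUDPFirewall:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(isPunchable(NATSymmetricUDPFirewall)) // true
}
```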
dependabot-preview[bot]
d43b0a4395 build(deps): bump github.com/urfave/cli from 1.20.0 to 1.21.0 (#5928)
Bumps [github.com/urfave/cli](https://github.com/urfave/cli) from 1.20.0 to 1.21.0.
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/master/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v1.20.0...v1.21.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-08-05 12:18:35 +02:00
Jakob Borg
f2efd08e0f Merge branch 'release'
* release:
  cmd/syncthing: Print version information early (fixes #5891) (#5893)
2019-08-05 07:16:07 +02:00
Jakob Borg
61b9f7bd55 lib/config: Format bytes in insufficient space errors (fixes #5920) (#5921) 2019-08-02 14:43:05 +02:00
Simon Frei
1475c0344a gui: Parse strings before copying options object (fixes #5824) (#5922) 2019-08-02 12:46:27 +02:00
Simon Frei
77cc87dfca lib/connections: Log errors in relay clients (#5917) 2019-08-01 17:37:58 +02:00
Jakob Borg
bc7dd02e2b authors: Update carstenhag (Moter8) (fixes #5919) 2019-08-01 17:35:45 +02:00
Simon Frei
8a06cf0973 lib/model: Unflake TestPullInvalidIgnored (#5918) 2019-08-01 11:07:41 +02:00
Simon Frei
05835ed81f all: Remove potentially problematic errors from panics (fixes #5839) (#5912) 2019-07-31 10:53:35 +02:00
Simon Frei
df522576ac lib/model: Don't call t.Fatal in goroutines (fixes #5901) (#5903) 2019-07-30 17:50:51 +02:00
Simon Frei
d681ac11fe lib/config: Handle empty Fstype for mtime-window (#5906) 2019-07-30 15:23:00 +02:00
Simon Frei
7d5f7d508d lib/syncthing: Stop only once (fixes #5908) (#5909) 2019-07-29 20:07:19 +02:00
Simon Frei
1d182e4631 lib/fs: Use gopsutils for disk usage (#5905) 2019-07-29 20:06:17 +02:00
Simon Frei
710f5c199f lib/db: Don't use global fileset in benchmarks (#5902) 2019-07-28 22:31:24 +02:00
Simon Frei
fd847d4efe lib/model: Fix flakyness of TestRequestRemoteRenameChanged (#5904) 2019-07-28 22:29:31 +02:00
Jakob Borg
15e51fc045 cmd/stcrashreceiver: Store and serve compressed reports (#5892)
This changes the on disk format for new raw reports to be gzip
compressed. Also adds the ability to serve these reports in plain text,
to insulate web browsers from the change (previously we just served the
raw reports from disk using Caddy).
2019-07-28 11:13:04 +02:00
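For illustration, a minimal standalone sketch of the storage scheme described above (file names and paths are made up; the real handler is in the diff below): reports are gzipped before hitting disk and gunzipped again when served, so HTTP clients keep seeing plain text.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
	"io/ioutil"
	"log"
	"os"
)

// storeReport writes the raw report bytes gzip-compressed to path.
func storeReport(path string, report []byte) error {
	buf := new(bytes.Buffer)
	gw := gzip.NewWriter(buf)
	_, _ = gw.Write(report) // writes to an in-memory buffer, cannot fail
	gw.Close()
	return ioutil.WriteFile(path, buf.Bytes(), 0644)
}

// serveReport decompresses a stored report so the caller gets plain text,
// insulating it from the on-disk format.
func serveReport(path string, w io.Writer) error {
	fd, err := os.Open(path)
	if err != nil {
		return err
	}
	defer fd.Close()
	gr, err := gzip.NewReader(fd)
	if err != nil {
		return err
	}
	_, err = io.Copy(w, gr)
	return err
}

func main() {
	if err := storeReport("report.gz", []byte("panic: example crash\n")); err != nil {
		log.Fatal(err)
	}
	if err := serveReport("report.gz", os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```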
Jakob Borg
c1c976aa2b lib/model: Don't panic on failed chmod-back on directory (fixes #5836) (#5896)
* lib/model: Don't panic on failed chmod-back on directory (fixes #5836)

This makes the "in writable dir"-wrapper log chmod-back errors instead
of panicking. To do that we need a logger so the function moved into the
model package which is also the only place it's used. The tests came
along.

(The test also exercised osutil.RenameOrCopy like some sort of
piggybacking. I removed that.)
2019-07-28 10:25:05 +02:00
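A minimal sketch of the behaviour described above, assuming a wrapper roughly like this (the real helper now lives in the model package and differs in detail): the parent directory is made writable for the duration of the callback, and a failed chmod-back is logged rather than escalated to a panic.

```go
package main

import (
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
)

// inWritableDir makes the parent directory of path writable, runs fn on
// path, and then restores the original permissions. If the chmod-back
// fails we log it; it is not a reason to panic.
func inWritableDir(fn func(string) error, path string) error {
	dir := filepath.Dir(path)
	info, err := os.Stat(dir)
	if err != nil {
		return err
	}
	mode := info.Mode().Perm()
	if err := os.Chmod(dir, mode|0700); err != nil {
		return err
	}
	defer func() {
		if err := os.Chmod(dir, mode); err != nil {
			log.Println("Failed to restore directory permissions:", err)
		}
	}()
	return fn(path)
}

func main() {
	dir, err := ioutil.TempDir("", "writable-dir-example")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	// The wrapped operation itself may still fail; that error is returned as-is.
	err = inWritableDir(os.Remove, filepath.Join(dir, "does-not-exist"))
	log.Println("callback result:", err)
}
```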
Jakob Borg
159d1a68e1 lib/api: Don't log random stuff in the HTTP server (fixes #5738) (#5897) 2019-07-28 09:49:07 +02:00
Simon Frei
dd850f66bb lib/api: Close server before exiting serve (fixes #5866) (#5895) 2019-07-28 08:03:55 +02:00
Jakob Borg
d0c3697152 cmd/syncthing: Print version information early (fixes #5891) (#5893) 2019-07-27 20:36:33 +02:00
Jakob Borg
669bcb748f lib/config, lib/model: Don't save on every pending folder/device update (fixes #5888) (#5890)
Wrapper methods generally don't save by themselves.
2019-07-27 11:05:00 +01:00
Jakob Borg
4e22a96602 cmd/syncthing: Print version information early (fixes #5891) (#5893) 2019-07-27 10:58:39 +01:00
Jakob Borg
a992559abc lib/db: Add hacky way to adjust database parameters (#5889)
This adds a set of magical environment variables that can be used to
tweak the database parameters. It's totally undocumented and not
intended to be a long term or supported thing.

It's ugly, but there is a backstory. I have a couple of large
installations where the database options are inefficient or otherwise
suboptimal (24/7 compaction going on and stuff like that). I don't
*know* the correct database parameters, nor yet the formula or method to
derive them by, so this requires experimentation. Experimentation needs
to happen partly in production, and rolling out new builds for every
tweak isn't practical. This provides override points for all reasonable
values, while not changing anything by default.

Ideally, at the end of such experimentation, we'll know which values are
relevant to change and in what manner, and can provide a more user
friendly knob to do so - or do it automatically based on the database
size.
2019-07-26 22:18:42 +02:00
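The mechanism itself is small: read an integer from an environment variable and fall back to the compiled-in default when it is unset or unparsable. A hedged sketch follows (the real helper is debugEnvValue in lib/db, visible further down in this diff; the variable-name prefix below is a placeholder, since the knobs are intentionally undocumented).

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// debugEnvValue returns the integer value of STDEBUG_<key> if it is set and
// parses, and def otherwise. Unset or malformed values silently keep the
// default, so nothing changes unless you opt in. (The prefix is a
// placeholder for this sketch.)
func debugEnvValue(key string, def int) int {
	v, ok := os.LookupEnv("STDEBUG_" + key)
	if !ok {
		return def
	}
	n, err := strconv.Atoi(v)
	if err != nil {
		return def
	}
	return n
}

func main() {
	// With STDEBUG_WriteBuffer unset this prints the compiled-in default.
	fmt.Println(debugEnvValue("WriteBuffer", 16<<20))
}
```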
Simon Frei
46e72d76b5 cmd/syncthing, lib/syncthing: Create library utils (ref #4085) (#5871) 2019-07-23 23:39:20 +02:00
Simon Frei
7a4c88d4e4 lib: Add mtime window when comparing files (#5852) 2019-07-23 21:48:53 +02:00
Simon Frei
35f40e9a58 lib/model: Create new file-set after stopping folder (fixes #5882) (#5883) 2019-07-23 20:39:25 +02:00
Simon Frei
5de9b677c2 lib/fs: Fix kqueue event list (fixes #5308) (#5885) 2019-07-23 14:11:15 +02:00
Simon Frei
6f08162376 lib/model: Remove incorrect/useless panics (#5881) 2019-07-23 10:51:16 +02:00
Simon Frei
7b3d9a8dca lib/syncthing: Refactor to use util.AsService (#5858) 2019-07-23 10:50:37 +02:00
Simon Frei
942659fb06 lib/model, lib/nat: More service termination speedup (#5884) 2019-07-23 10:49:22 +02:00
dependabot-preview[bot]
15c262184b build(deps): bump github.com/maruel/panicparse from 1.2.1 to 1.3.0 (#5879)
Bumps [github.com/maruel/panicparse](https://github.com/maruel/panicparse) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/maruel/panicparse/releases)
- [Commits](https://github.com/maruel/panicparse/compare/v1.2.1...v1.3.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-22 19:46:52 +01:00
dependabot-preview[bot]
484fa0592e build(deps): bump github.com/lib/pq from 1.1.1 to 1.2.0 (#5878)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.1.1 to 1.2.0.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.1.1...v1.2.0)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-07-22 08:07:21 +01:00
Simon Frei
b5b54ff057 lib/model: No watch-error on missing folder (fixes #5833) (#5876) 2019-07-19 19:41:16 +02:00
Simon Frei
4d3432af3e lib: Ensure timely service termination (fixes #5860) (#5863) 2019-07-19 19:40:40 +02:00
Simon Frei
1cb55904bc lib/model: Prevent panic in NeedFolderFiles (fixes #5872) (#5875) 2019-07-19 19:39:52 +02:00
Simon Frei
2b622d0774 lib/model: Close conn on dev pause (fixes #5873) (#5874) 2019-07-19 19:37:29 +02:00
Simon Frei
1894123d3c lib/syncthing: Modify exit status before stopping (fixes #5869) (#5870) 2019-07-18 20:49:00 +02:00
65 changed files with 1381 additions and 871 deletions

View File

@@ -46,7 +46,7 @@ Brandon Philips (philips) <brandon@ifup.org>
Brendan Long (brendanlong) <self@brendanlong.com>
Brian R. Becker (brbecker) <brbecker@gmail.com>
Caleb Callaway (cqcallaw) <enlightened.despot@gmail.com>
Carsten Hagemann (Moter8) <moter8@gmail.com>
Carsten Hagemann (carstenhag) <moter8@gmail.com> <carsten@chagemann.de>
Cathryne Linenweaver (Cathryne) <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com> <katrinleinweber@MAC.local>
Cedric Staniewski (xduugu) <cedric@gmx.ca>
Chris Howie (cdhowie) <me@chrishowie.com>

View File

@@ -57,11 +57,13 @@ type target struct {
name string
debname string
debdeps []string
debpre string
debpost string
description string
buildPkg string
binaryName string
archiveFiles []archiveFile
systemdServices []string
installationFiles []archiveFile
tags []string
}
@@ -128,6 +130,7 @@ var targets = map[string]target{
name: "stdiscosrv",
debname: "syncthing-discosrv",
debdeps: []string{"libc6"},
debpre: "cmd/stdiscosrv/scripts/preinst",
description: "Syncthing Discovery Server",
buildPkg: "github.com/syncthing/syncthing/cmd/stdiscosrv",
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
@@ -137,12 +140,17 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/stdiscosrv/etc/linux-systemd/stdiscosrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/syncthing-discosrv/README.txt", perm: 0644},
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-discosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
{src: "cmd/stdiscosrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-discosrv", perm: 0644},
{src: "cmd/stdiscosrv/etc/firewall-ufw/stdiscosrv", dst: "deb/etc/ufw/applications.d/stdiscosrv", perm: 0644},
},
tags: []string{"purego"},
},
@@ -150,6 +158,7 @@ var targets = map[string]target{
name: "strelaysrv",
debname: "syncthing-relaysrv",
debdeps: []string{"libc6"},
debpre: "cmd/strelaysrv/scripts/preinst",
description: "Syncthing Relay Server",
buildPkg: "github.com/syncthing/syncthing/cmd/strelaysrv",
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
@@ -160,6 +169,9 @@ var targets = map[string]target{
{src: "LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
systemdServices: []string{
"cmd/strelaysrv/etc/linux-systemd/strelaysrv.service",
},
installationFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/syncthing-relaysrv/README.txt", perm: 0644},
@@ -167,6 +179,8 @@ var targets = map[string]target{
{src: "LICENSE", dst: "deb/usr/share/doc/syncthing-relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/syncthing-relaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
{src: "cmd/strelaysrv/etc/linux-systemd/default", dst: "deb/etc/default/syncthing-relaysrv", perm: 0644},
{src: "cmd/strelaysrv/etc/firewall-ufw/strelaysrv", dst: "deb/etc/ufw/applications.d/strelaysrv", perm: 0644},
},
},
"strelaypoolsrv": {
@@ -555,9 +569,15 @@ func buildDeb(target target) {
for _, dep := range target.debdeps {
args = append(args, "-d", dep)
}
for _, service := range target.systemdServices {
args = append(args, "--deb-systemd", service)
}
if target.debpost != "" {
args = append(args, "--after-upgrade", target.debpost)
}
if target.debpre != "" {
args = append(args, "--before-install", target.debpre)
}
runPrint("fpm", args...)
}

View File

@@ -13,6 +13,8 @@
package main
import (
"bytes"
"compress/gzip"
"flag"
"io"
"io/ioutil"
@@ -52,12 +54,12 @@ func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// The final path component should be a SHA256 hash in hex, so 64 hex
// characters. We don't care about case on the request but use lower
// case internally.
base := strings.ToLower(path.Base(req.URL.Path))
if len(base) != 64 {
reportID := strings.ToLower(path.Base(req.URL.Path))
if len(reportID) != 64 {
http.Error(w, "Bad request", http.StatusBadRequest)
return
}
for _, c := range base {
for _, c := range reportID {
if c >= 'a' && c <= 'f' {
continue
}
@@ -68,40 +70,57 @@ func (r *crashReceiver) ServeHTTP(w http.ResponseWriter, req *http.Request) {
return
}
// The location of the report on disk, compressed
fullPath := filepath.Join(r.dir, r.dirFor(reportID), reportID) + ".gz"
switch req.Method {
case http.MethodGet:
r.serveGet(fullPath, w, req)
case http.MethodHead:
r.serveHead(base, w, req)
r.serveHead(fullPath, w, req)
case http.MethodPut:
r.servePut(base, w, req)
r.servePut(reportID, fullPath, w, req)
default:
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
// serveGet responds to GET requests by serving the uncompressed report.
func (r *crashReceiver) serveGet(fullPath string, w http.ResponseWriter, _ *http.Request) {
fd, err := os.Open(fullPath)
if err != nil {
http.Error(w, "Not found", http.StatusNotFound)
return
}
defer fd.Close()
gr, err := gzip.NewReader(fd)
if err != nil {
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
_, _ = io.Copy(w, gr) // best effort
}
// serveHead responds to HEAD requests by checking if the named report
// already exists in the system.
func (r *crashReceiver) serveHead(base string, w http.ResponseWriter, _ *http.Request) {
path := filepath.Join(r.dirFor(base), base)
if _, err := os.Lstat(path); err != nil {
func (r *crashReceiver) serveHead(fullPath string, w http.ResponseWriter, _ *http.Request) {
if _, err := os.Lstat(fullPath); err != nil {
http.Error(w, "Not found", http.StatusNotFound)
}
// 200 OK
}
// servePut accepts and stores the given report.
func (r *crashReceiver) servePut(base string, w http.ResponseWriter, req *http.Request) {
path := filepath.Join(r.dirFor(base), base)
fullPath := filepath.Join(r.dir, path)
func (r *crashReceiver) servePut(reportID, fullPath string, w http.ResponseWriter, req *http.Request) {
// Ensure the destination directory exists
if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
log.Printf("Creating directory for report %s: %v", base, err)
log.Println("Creating directory:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Read at most maxRequestSize of report data.
log.Println("Receiving report", base)
log.Println("Receiving report", reportID)
lr := io.LimitReader(req.Body, maxRequestSize)
bs, err := ioutil.ReadAll(lr)
if err != nil {
@@ -110,10 +129,16 @@ func (r *crashReceiver) servePut(base string, w http.ResponseWriter, req *http.R
return
}
// Create an output file
err = ioutil.WriteFile(fullPath, bs, 0644)
// Compress the report for storage
buf := new(bytes.Buffer)
gw := gzip.NewWriter(buf)
_, _ = gw.Write(bs) // can't fail
gw.Close()
// Create an output file with the compressed report
err = ioutil.WriteFile(fullPath, buf.Bytes(), 0644)
if err != nil {
log.Printf("Creating file for report %s: %v", base, err)
log.Println("Saving report:", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
@@ -122,7 +147,7 @@ func (r *crashReceiver) servePut(base string, w http.ResponseWriter, req *http.R
if r.dsn != "" {
go func() {
// There's no need for the client to have to wait for this part.
if err := sendReport(r.dsn, path, bs); err != nil {
if err := sendReport(r.dsn, reportID, bs); err != nil {
log.Println("Failed to send report:", err)
}
}()

View File

@@ -0,0 +1,4 @@
[stdiscosrv]
title=Syncthing discovery server
description=Lets syncthing clients discover each other
ports=8443/tcp

View File

@@ -0,0 +1,3 @@
# Default settings for syncthing-discosrv (stdiscosrv).
## Add Options here:
DISCOSRV_OPTS=

View File

@@ -0,0 +1,25 @@
[Unit]
Description=Syncthing Discovery Server
After=network.target
Documentation=man:stdiscosrv(1)
[Service]
WorkingDirectory=/var/lib/syncthing-discosrv
EnvironmentFile=/etc/default/syncthing-discosrv
ExecStart=/usr/bin/stdiscosrv $DISCOSRV_OPTS
# Hardening
User=syncthing-discosrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-discosrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-discosrv.service

View File

@@ -0,0 +1,4 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-discosrv --ingroup syncthing syncthing-discosrv

View File

@@ -0,0 +1,9 @@
[strelaysrv]
title=Syncthing relay server
description=Proxies traffic of syncthing clients behind firewalls
ports=22067/tcp
[strelaysrv-metrics]
title=Syncthing relay metrics
description=Provides metrics about the syncthing relay server
ports=22070/tcp

View File

@@ -0,0 +1,5 @@
# Default settings for syncthing-relaysrv (strelaysrv).
NAT=true
## Add Options here:
RELAYSRV_OPTS=

View File

@@ -1,17 +1,25 @@
[Unit]
Description=Syncthing relay server
Description=Syncthing Relay Server
After=network.target
Documentation=man:strelaysrv(1)
[Service]
User=strelaysrv
Group=strelaysrv
ExecStart=/usr/bin/strelaysrv
WorkingDirectory=/var/lib/strelaysrv
WorkingDirectory=/var/lib/syncthing-relaysrv
EnvironmentFile=/etc/default/syncthing-relaysrv
ExecStart=/usr/bin/strelaysrv -nat=${NAT} $RELAYSRV_OPTS
PrivateTmp=true
ProtectSystem=full
ProtectHome=true
# Hardening
User=syncthing-relaysrv
Group=syncthing
ProtectSystem=strict
ReadWritePaths=/var/lib/syncthing-relaysrv
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
SystemCallArchitectures=native
MemoryDenyWriteExecute=true
[Install]
WantedBy=multi-user.target
Alias=syncthing-relaysrv.service

View File

@@ -0,0 +1,4 @@
#!/bin/bash
addgroup --system syncthing
adduser --system --home /var/lib/syncthing-relaysrv --ingroup syncthing syncthing-relaysrv

View File

@@ -22,11 +22,15 @@ func init() {
panic("Couldn't find block profiler")
}
l.Debugln("Starting block profiling")
go saveBlockingProfiles(profiler)
go func() {
err := saveBlockingProfiles(profiler) // Only returns on error
l.Warnln("Block profiler failed:", err)
panic("Block profiler failed")
}()
}
}
func saveBlockingProfiles(profiler *pprof.Profile) {
func saveBlockingProfiles(profiler *pprof.Profile) error {
runtime.SetBlockProfileRate(1)
t0 := time.Now()
@@ -35,16 +39,16 @@ func saveBlockingProfiles(profiler *pprof.Profile) {
fd, err := os.Create(fmt.Sprintf("block-%05d-%07d.pprof", syscall.Getpid(), startms))
if err != nil {
panic(err)
return err
}
err = profiler.WriteTo(fd, 0)
if err != nil {
panic(err)
return err
}
err = fd.Close()
if err != nil {
panic(err)
return err
}
}
return nil
}

View File

@@ -23,11 +23,15 @@ func init() {
rate = i
}
l.Debugln("Starting heap profiling")
go saveHeapProfiles(rate)
go func() {
err := saveHeapProfiles(rate) // Only returns on error
l.Warnln("Heap profiler failed:", err)
panic("Heap profiler failed")
}()
}
}
func saveHeapProfiles(rate int) {
func saveHeapProfiles(rate int) error {
runtime.MemProfileRate = rate
var memstats, prevMemstats runtime.MemStats
@@ -38,21 +42,21 @@ func saveHeapProfiles(rate int) {
if memstats.HeapInuse > prevMemstats.HeapInuse {
fd, err := os.Create(name + ".tmp")
if err != nil {
panic(err)
return err
}
err = pprof.WriteHeapProfile(fd)
if err != nil {
panic(err)
return err
}
err = fd.Close()
if err != nil {
panic(err)
return err
}
os.Remove(name) // Error deliberately ignored
err = os.Rename(name+".tmp", name)
if err != nil {
panic(err)
return err
}
prevMemstats = memstats

View File

@@ -437,7 +437,7 @@ func generate(generateDir string) error {
l.Warnln("Config exists; will not overwrite.")
return nil
}
cfg, err := defaultConfig(cfgFile, myID)
cfg, err := syncthing.DefaultConfig(cfgFile, myID, noDefaultFolder)
if err != nil {
return err
}
@@ -548,26 +548,25 @@ func upgradeViaRest() error {
}
func syncthingMain(runtimeOptions RuntimeOptions) {
// Set a log prefix similar to the ID we will have later on, or early log
// lines look ugly.
l.SetPrefix("[start] ")
// Print our version information up front, so any crash that happens
// early etc. will have it available.
l.Infoln(build.LongVersion)
// Ensure that we have a certificate and key.
cert, err := tls.LoadX509KeyPair(
cert, err := syncthing.LoadOrGenerateCertificate(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Infof("Generating ECDSA key and certificate for %s...", tlsDefaultCommonName)
cert, err = tlsutil.NewCertificate(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
tlsDefaultCommonName,
)
if err != nil {
l.Warnln("Failed to generate certificate:", err)
os.Exit(1)
}
l.Warnln("Failed to load/generate certificate:", err)
os.Exit(1)
}
myID := protocol.NewDeviceID(cert.Certificate[0])
cfg, err := loadConfigAtStartup(runtimeOptions.allowNewerConfig, myID)
cfg, err := syncthing.LoadConfigAtStartup(locations.Get(locations.ConfigFile), cert, runtimeOptions.allowNewerConfig, noDefaultFolder)
if err != nil {
l.Warnln("Failed to initialize config:", err)
os.Exit(exitError)
@@ -690,74 +689,12 @@ func loadOrDefaultConfig(myID protocol.DeviceID) (config.Wrapper, error) {
cfg, err := config.Load(cfgFile, myID)
if err != nil {
cfg, err = defaultConfig(cfgFile, myID)
cfg, err = syncthing.DefaultConfig(cfgFile, myID, noDefaultFolder)
}
return cfg, err
}
func loadConfigAtStartup(allowNewerConfig bool, myID protocol.DeviceID) (config.Wrapper, error) {
cfgFile := locations.Get(locations.ConfigFile)
cfg, err := config.Load(cfgFile, myID)
if os.IsNotExist(err) {
cfg, err = defaultConfig(cfgFile, myID)
if err != nil {
return nil, errors.Wrap(err, "failed to generate default config")
}
err = cfg.Save()
if err != nil {
return nil, errors.Wrap(err, "failed to save default config")
}
l.Infof("Default config saved. Edit %s to taste (with Syncthing stopped) or use the GUI", cfg.ConfigPath())
} else if err == io.EOF {
return nil, errors.New("Failed to load config: unexpected end of file. Truncated or empty configuration?")
} else if err != nil {
return nil, errors.Wrap(err, "failed to load config")
}
if cfg.RawCopy().OriginalVersion != config.CurrentVersion {
if cfg.RawCopy().OriginalVersion == config.CurrentVersion+1101 {
l.Infof("Now, THAT's what we call a config from the future! Don't worry. As long as you hit that wire with the connecting hook at precisely eighty-eight miles per hour the instant the lightning strikes the tower... everything will be fine.")
}
if cfg.RawCopy().OriginalVersion > config.CurrentVersion && !allowNewerConfig {
return nil, fmt.Errorf("Config file version (%d) is newer than supported version (%d). If this is expected, use -allow-newer-config to override.", cfg.RawCopy().OriginalVersion, config.CurrentVersion)
}
err = archiveAndSaveConfig(cfg)
if err != nil {
return nil, errors.Wrap(err, "config archive")
}
}
return cfg, nil
}
func archiveAndSaveConfig(cfg config.Wrapper) error {
// Copy the existing config to an archive copy
archivePath := cfg.ConfigPath() + fmt.Sprintf(".v%d", cfg.RawCopy().OriginalVersion)
l.Infoln("Archiving a copy of old config file format at:", archivePath)
if err := copyFile(cfg.ConfigPath(), archivePath); err != nil {
return err
}
// Do a regular atomic config save
return cfg.Save()
}
func copyFile(src, dst string) error {
bs, err := ioutil.ReadFile(src)
if err != nil {
return err
}
if err := ioutil.WriteFile(dst, bs, 0600); err != nil {
// Attempt to clean up
os.Remove(dst)
return err
}
return nil
}
func auditWriter(auditFile string) io.Writer {
var fd io.Writer
var err error
@@ -790,22 +727,6 @@ func auditWriter(auditFile string) io.Writer {
return fd
}
func defaultConfig(cfgFile string, myID protocol.DeviceID) (config.Wrapper, error) {
newCfg, err := config.NewWithFreePorts(myID)
if err != nil {
return nil, err
}
if noDefaultFolder {
l.Infoln("We will skip creation of a default folder on first start since the proper envvar is set")
return config.Wrap(cfgFile, newCfg), nil
}
newCfg.Folders = append(newCfg.Folders, config.NewFolderConfiguration(myID, "default", "Default Folder", fs.FilesystemTypeBasic, locations.Get(locations.DefFolder)))
l.Infoln("Default folder created and/or linked to new config")
return config.Wrap(cfgFile, newCfg), nil
}
func resetDB() error {
return os.RemoveAll(locations.Get(locations.Database))
}

View File

@@ -102,7 +102,8 @@ func monitorMain(runtimeOptions RuntimeOptions) {
l.Infoln("Starting syncthing")
err = cmd.Start()
if err != nil {
panic(err)
l.Warnln("Error starting the main Syncthing process:", err)
panic("Error starting the main Syncthing process")
}
stdoutMut.Lock()

go.mod
View File

@@ -5,7 +5,6 @@ require (
github.com/AudriusButkevicius/pfilter v0.0.0-20190627213056-c55ef6137fc6
github.com/AudriusButkevicius/recli v0.0.5
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e
github.com/calmh/du v1.0.1
github.com/calmh/xdr v1.1.0
github.com/ccding/go-stun v0.0.0-20180726100737-be486d185f3d
github.com/certifi/gocertifi v0.0.0-20190506164543-d2eda7129713 // indirect
@@ -19,9 +18,9 @@ require (
github.com/jackpal/gateway v0.0.0-20161225004348-5795ac81146e
github.com/kballard/go-shellquote v0.0.0-20170619183022-cd60e84ee657
github.com/kr/pretty v0.1.0 // indirect
github.com/lib/pq v1.1.1
github.com/lib/pq v1.2.0
github.com/lucas-clemente/quic-go v0.11.2
github.com/maruel/panicparse v1.2.1
github.com/maruel/panicparse v1.3.0
github.com/mattn/go-isatty v0.0.7
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338
github.com/onsi/ginkgo v1.8.0 // indirect
@@ -33,10 +32,11 @@ require (
github.com/prometheus/client_golang v0.9.4
github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9
github.com/sasha-s/go-deadlock v0.2.0
github.com/shirou/gopsutil v0.0.0-20190714054239-47ef3260b6bf
github.com/syncthing/notify v0.0.0-20190709140112-69c7a957d3e2
github.com/syndtr/goleveldb v1.0.1-0.20190318030020-c3a204f8e965
github.com/thejerf/suture v3.0.2+incompatible
github.com/urfave/cli v1.20.0
github.com/urfave/cli v1.21.0
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980
@@ -46,7 +46,6 @@ require (
gopkg.in/asn1-ber.v1 v1.0.0-20170511165959-379148ca0225 // indirect
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
gopkg.in/ldap.v2 v2.5.1
gopkg.in/yaml.v2 v2.2.2 // indirect
)
go 1.12

go.sum
View File

@@ -4,6 +4,9 @@ github.com/AudriusButkevicius/pfilter v0.0.0-20190627213056-c55ef6137fc6 h1:Apvc
github.com/AudriusButkevicius/pfilter v0.0.0-20190627213056-c55ef6137fc6/go.mod h1:1N0EEx/irz4B1qV17wW82TFbjQrE7oX316Cki6eDY0Q=
github.com/AudriusButkevicius/recli v0.0.5 h1:xUa55PvWTHBm17T6RvjElRO3y5tALpdceH86vhzQ5wg=
github.com/AudriusButkevicius/recli v0.0.5/go.mod h1:Q2E26yc6RvWWEz/TJ/goUp6yXvipYdJI096hpoaqsNs=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6 h1:fLjPD/aNc3UIOA6tDi6QXUemppXK3P9BI7mr2hd6gx8=
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
@@ -12,8 +15,6 @@ github.com/beorn7/perks v1.0.0 h1:HWo1m869IqiPhD389kmkxeTalrjNbbJTC8LXupb+sl0=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e h1:2augTYh6E+XoNrrivZJBadpThP/dsvYKj0nzqfQ8tM4=
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e/go.mod h1:0YdlkowM3VswSROI7qDxhRvJ3sLhlFrRRwjwegp5jy4=
github.com/calmh/du v1.0.1 h1:uDCrDbXVVPrzxSNRkpj6nqSfwrl5uRWH3zvrJgl7RRo=
github.com/calmh/du v1.0.1/go.mod h1:pHNccp4cXQeyDaiV3S7t5GN+eGOgynF0VSLxJjk9tLU=
github.com/calmh/xdr v1.1.0 h1:U/Dd4CXNLoo8EiQ4ulJUXkgO1/EyQLgDKLgpY1SOoJE=
github.com/calmh/xdr v1.1.0/go.mod h1:E8sz2ByAdXC8MbANf1LCRYzedSnnc+/sXXJs/PVqoeg=
github.com/ccding/go-stun v0.0.0-20180726100737-be486d185f3d h1:As4937T5NVbJ/DmZT9z33pyLEprMd6CUSfhbmMY57Io=
@@ -37,6 +38,8 @@ github.com/getsentry/raven-go v0.2.0 h1:no+xWJRb5ZI7eE8TWgIq1jLulQiIoLG0IfYxv5JY
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E=
github.com/go-ole/go-ole v1.2.1/go.mod h1:7FAglXiTm7HKlQRDeOQ6ZNUHidzCWXuZWq/1dTyBNF8=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobwas/glob v0.0.0-20170212200151-51eb1ee00b6d h1:IngNQgbqr5ZOU0exk395Szrvkzes9Ilk1fmJfkw7d+M=
github.com/gobwas/glob v0.0.0-20170212200151-51eb1ee00b6d/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
@@ -72,18 +75,24 @@ github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.1.1 h1:sJZmqHoEaY7f+NPP8pgLB/WxulyR3fewgCM2qaSlBb4=
github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0 h1:LXpIM/LZ5xGFhOpXAQUIMM1HdyqzVYM13zNdjCEEcA0=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lucas-clemente/quic-go v0.11.2 h1:Mop0ac3zALaBR3wGs6j8OYe/tcFvFsxTUFMkE/7yUOI=
github.com/lucas-clemente/quic-go v0.11.2/go.mod h1:PpMmPfPKO9nKJ/psF49ESTAGQSdfXxlg1otPbEB2nOw=
github.com/marten-seemann/qtls v0.2.3 h1:0yWJ43C62LsZt08vuQJDK1uC1czUc3FJeCLPoNAI4vA=
github.com/marten-seemann/qtls v0.2.3/go.mod h1:xzjG7avBwGGbdZ8dTGxlBnLArsVKLvwmjgmPuiQEcYk=
github.com/maruel/panicparse v1.2.1 h1:mNlHGiakrixj+AwF/qRpTwnj+zsWYPRLQ7wRqnJsfO0=
github.com/maruel/panicparse v1.2.1/go.mod h1:vszMjr5QQ4F5FSRfraldcIA/BCw5xrdLL+zEcU2nRBs=
github.com/maruel/panicparse v1.3.0 h1:1Ep/RaYoSL1r5rTILHQQbyzHG8T4UP5ZbQTYTo4bdDc=
github.com/maruel/panicparse v1.3.0/go.mod h1:vszMjr5QQ4F5FSRfraldcIA/BCw5xrdLL+zEcU2nRBs=
github.com/mattn/go-colorable v0.1.1 h1:G1f5SKeVxmagw/IyvzvtZE4Gybcc4Tr1tf7I8z0XgOg=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7 h1:UvyT9uN+3r7yLEYSlJsbQGdsaB/a0DlgWP3pql6iwOc=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b h1:j7+1HpAFS1zy5+Q4qx1fWh90gTKwiN4QCGoY9TWyyO4=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338 h1:USW1+zAUkUSvk097CAX/i8KR3r6f+DHNhk6Xe025Oyw=
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
@@ -126,6 +135,11 @@ github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9 h1:jmLW6izPBVl
github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/sasha-s/go-deadlock v0.2.0 h1:lMqc+fUb7RrFS3gQLtoQsJ7/6TV/pAIFvBsqX73DK8Y=
github.com/sasha-s/go-deadlock v0.2.0/go.mod h1:StQn567HiB1fF2yJ44N9au7wOhrPS3iZqiDbRupzT10=
github.com/shirou/gopsutil v0.0.0-20190714054239-47ef3260b6bf h1:c9SV5NzG4KOk448TUE7iqCmb4E4y79CZF4zDdc1Jx3Q=
github.com/shirou/gopsutil v0.0.0-20190714054239-47ef3260b6bf/go.mod h1:WWnYX4lzhCH5h/3YBfyVA3VbLYjlMZZAQcW9ojMexNc=
github.com/shirou/gopsutil v2.19.6+incompatible h1:49/Gru26Lne9Cl3IoAVDZVM09hvkSrUodgIIsCVRwbs=
github.com/shirou/gopsutil v2.19.6+incompatible/go.mod h1:WWnYX4lzhCH5h/3YBfyVA3VbLYjlMZZAQcW9ojMexNc=
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4/go.mod h1:qsXQc7+bwAM3Q1u/4XEfrquwF8Lw7D7y5cD8CuHnfIc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -141,6 +155,8 @@ github.com/thejerf/suture v3.0.2+incompatible h1:GtMydYcnK4zBJ0KL6Lx9vLzl6Oozb65
github.com/thejerf/suture v3.0.2+incompatible/go.mod h1:ibKwrVj+Uzf3XZdAiNWUouPaAbSoemxOHLmJmwheEMc=
github.com/urfave/cli v1.20.0 h1:fDqGv3UG/4jbVl/QkFwEdddtEDjh/5Ov6X+0B/3bPaw=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.21.0 h1:wYSSj06510qPIzGSua9ZqsncMmWE3Zr55KBERygyrxE=
github.com/urfave/cli v1.21.0/go.mod h1:lxDj6qX9Q6lWQxIrbrT0nwecwUtRnhVZAJjJZrVUZZQ=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0 h1:okhMind4q9H1OxF44gNegWkiP4H/gsTFLalHFa4OOUI=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXceWtbTHq6lqcTvYPBKLNejBEbnUsQJtU=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=

View File

@@ -1279,6 +1279,13 @@ angular.module('syncthing.core')
$scope.protocolChanged = true;
}
// Parse strings to arrays before copying over
['listenAddresses', 'globalAnnounceServers'].forEach(function (key) {
$scope.tmpOptions[key] = $scope.tmpOptions["_" + key + "Str"].split(/[ ,]+/).map(function (x) {
return x.trim();
});
});
// Apply new settings locally
$scope.thisDeviceIn($scope.tmpDevices).name = $scope.tmpOptions.deviceName;
$scope.config.options = angular.copy($scope.tmpOptions);
@@ -1292,12 +1299,6 @@ angular.module('syncthing.core')
// here as well...
$scope.devices = $scope.config.devices;
['listenAddresses', 'globalAnnounceServers'].forEach(function (key) {
$scope.config.options[key] = $scope.config.options["_" + key + "Str"].split(/[ ,]+/).map(function (x) {
return x.trim();
});
});
$scope.saveConfig(function () {
if (themeChanged) {
document.location.reload(true);

View File

@@ -13,6 +13,7 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net"
"net/http"
"net/url"
@@ -79,7 +80,6 @@ type service struct {
contr Controller
noUpgrade bool
tlsDefaultCommonName string
stop chan struct{} // signals intentional stop
configChanged chan struct{} // signals intentional listener close due to config change
started chan string // signals startup complete by sending the listener address, for testing only
startedOnce chan struct{} // the service has started successfully at least once
@@ -338,6 +338,9 @@ func (s *service) serve(stop chan struct{}) {
// ReadTimeout must be longer than SyncthingController $scope.refresh
// interval to avoid HTTP keepalive/GUI refresh race.
ReadTimeout: 15 * time.Second,
// Prevent the HTTP server from logging stuff on its own. The things we
// care about we log ourselves from the handlers.
ErrorLog: log.New(ioutil.Discard, "", 0),
}
l.Infoln("GUI and API listening on", listener.Addr())
@@ -375,6 +378,7 @@ func (s *service) serve(stop chan struct{}) {
// Restart due to listen/serve failure
l.Warnln("GUI/API:", err, "(restarting)")
}
srv.Close()
}
// Complete implements suture.IsCompletable, which signifies to the supervisor

View File

@@ -27,3 +27,7 @@ func (m *mockedConnections) NATType() string {
func (m *mockedConnections) Serve() {}
func (m *mockedConnections) Stop() {}
func (m *mockedConnections) ExternalAddresses() []string { return nil }
func (m *mockedConnections) AllAddresses() []string { return nil }

View File

@@ -8,7 +8,6 @@ package beacon
import (
"net"
stdsync "sync"
"github.com/thejerf/suture"
)
@@ -24,21 +23,3 @@ type Interface interface {
Recv() ([]byte, net.Addr)
Error() error
}
type errorHolder struct {
err error
mut stdsync.Mutex // uses stdlib sync as I want this to be trivially embeddable, and there is no risk of blocking
}
func (e *errorHolder) setError(err error) {
e.mut.Lock()
e.err = err
e.mut.Unlock()
}
func (e *errorHolder) Error() error {
e.mut.Lock()
err := e.err
e.mut.Unlock()
return err
}

View File

@@ -11,8 +11,9 @@ import (
"net"
"time"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/util"
)
type Broadcast struct {
@@ -44,16 +45,16 @@ func NewBroadcast(port int) *Broadcast {
}
b.br = &broadcastReader{
port: port,
outbox: b.outbox,
connMut: sync.NewMutex(),
port: port,
outbox: b.outbox,
}
b.br.ServiceWithError = util.AsServiceWithError(b.br.serve)
b.Add(b.br)
b.bw = &broadcastWriter{
port: port,
inbox: b.inbox,
connMut: sync.NewMutex(),
port: port,
inbox: b.inbox,
}
b.bw.ServiceWithError = util.AsServiceWithError(b.bw.serve)
b.Add(b.bw)
return b
@@ -76,34 +77,42 @@ func (b *Broadcast) Error() error {
}
type broadcastWriter struct {
port int
inbox chan []byte
conn *net.UDPConn
connMut sync.Mutex
errorHolder
util.ServiceWithError
port int
inbox chan []byte
}
func (w *broadcastWriter) Serve() {
func (w *broadcastWriter) serve(stop chan struct{}) error {
l.Debugln(w, "starting")
defer l.Debugln(w, "stopping")
conn, err := net.ListenUDP("udp4", nil)
if err != nil {
l.Debugln(err)
w.setError(err)
return
return err
}
defer conn.Close()
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-stop:
case <-done:
}
conn.Close()
}()
w.connMut.Lock()
w.conn = conn
w.connMut.Unlock()
for {
var bs []byte
select {
case bs = <-w.inbox:
case <-stop:
return nil
}
for bs := range w.inbox {
addrs, err := net.InterfaceAddrs()
if err != nil {
l.Debugln(err)
w.setError(err)
w.SetError(err)
continue
}
@@ -134,14 +143,13 @@ func (w *broadcastWriter) Serve() {
// Write timeouts should not happen. We treat it as a fatal
// error on the socket.
l.Debugln(err)
w.setError(err)
return
return err
}
if err != nil {
// Some other error that we don't expect. Debug and continue.
l.Debugln(err)
w.setError(err)
w.SetError(err)
continue
}
@@ -150,57 +158,49 @@ func (w *broadcastWriter) Serve() {
}
if success > 0 {
w.setError(nil)
w.SetError(nil)
}
}
}
func (w *broadcastWriter) Stop() {
w.connMut.Lock()
if w.conn != nil {
w.conn.Close()
}
w.connMut.Unlock()
}
func (w *broadcastWriter) String() string {
return fmt.Sprintf("broadcastWriter@%p", w)
}
type broadcastReader struct {
port int
outbox chan recv
conn *net.UDPConn
connMut sync.Mutex
errorHolder
util.ServiceWithError
port int
outbox chan recv
}
func (r *broadcastReader) Serve() {
func (r *broadcastReader) serve(stop chan struct{}) error {
l.Debugln(r, "starting")
defer l.Debugln(r, "stopping")
conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: r.port})
if err != nil {
l.Debugln(err)
r.setError(err)
return
return err
}
defer conn.Close()
r.connMut.Lock()
r.conn = conn
r.connMut.Unlock()
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-stop:
case <-done:
}
conn.Close()
}()
bs := make([]byte, 65536)
for {
n, addr, err := conn.ReadFrom(bs)
if err != nil {
l.Debugln(err)
r.setError(err)
return
return err
}
r.setError(nil)
r.SetError(nil)
l.Debugf("recv %d bytes from %s", n, addr)
@@ -208,19 +208,12 @@ func (r *broadcastReader) Serve() {
copy(c, bs)
select {
case r.outbox <- recv{c, addr}:
case <-stop:
return nil
default:
l.Debugln("dropping message")
}
}
}
func (r *broadcastReader) Stop() {
r.connMut.Lock()
if r.conn != nil {
r.conn.Close()
}
r.connMut.Unlock()
}
func (r *broadcastReader) String() string {

View File

@@ -48,14 +48,14 @@ func NewMulticast(addr string) *Multicast {
addr: addr,
outbox: m.outbox,
}
m.mr.Service = util.AsService(m.mr.serve)
m.mr.ServiceWithError = util.AsServiceWithError(m.mr.serve)
m.Add(m.mr)
m.mw = &multicastWriter{
addr: addr,
inbox: m.inbox,
}
m.mw.Service = util.AsService(m.mw.serve)
m.mw.ServiceWithError = util.AsServiceWithError(m.mw.serve)
m.Add(m.mw)
return m
@@ -78,29 +78,35 @@ func (m *Multicast) Error() error {
}
type multicastWriter struct {
suture.Service
util.ServiceWithError
addr string
inbox <-chan []byte
errorHolder
}
func (w *multicastWriter) serve(stop chan struct{}) {
func (w *multicastWriter) serve(stop chan struct{}) error {
l.Debugln(w, "starting")
defer l.Debugln(w, "stopping")
gaddr, err := net.ResolveUDPAddr("udp6", w.addr)
if err != nil {
l.Debugln(err)
w.setError(err)
return
return err
}
conn, err := net.ListenPacket("udp6", ":0")
if err != nil {
l.Debugln(err)
w.setError(err)
return
return err
}
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-stop:
case <-done:
}
conn.Close()
}()
pconn := ipv6.NewPacketConn(conn)
@@ -113,14 +119,13 @@ func (w *multicastWriter) serve(stop chan struct{}) {
select {
case bs = <-w.inbox:
case <-stop:
return
return nil
}
intfs, err := net.Interfaces()
if err != nil {
l.Debugln(err)
w.setError(err)
return
return err
}
success := 0
@@ -132,7 +137,7 @@ func (w *multicastWriter) serve(stop chan struct{}) {
if err != nil {
l.Debugln(err, "on write to", gaddr, intf.Name)
w.setError(err)
w.SetError(err)
continue
}
@@ -142,16 +147,13 @@ func (w *multicastWriter) serve(stop chan struct{}) {
select {
case <-stop:
return
return nil
default:
}
}
if success > 0 {
w.setError(nil)
} else {
l.Debugln(err)
w.setError(err)
w.SetError(nil)
}
}
}
@@ -161,35 +163,40 @@ func (w *multicastWriter) String() string {
}
type multicastReader struct {
suture.Service
util.ServiceWithError
addr string
outbox chan<- recv
errorHolder
}
func (r *multicastReader) serve(stop chan struct{}) {
func (r *multicastReader) serve(stop chan struct{}) error {
l.Debugln(r, "starting")
defer l.Debugln(r, "stopping")
gaddr, err := net.ResolveUDPAddr("udp6", r.addr)
if err != nil {
l.Debugln(err)
r.setError(err)
return
return err
}
conn, err := net.ListenPacket("udp6", r.addr)
if err != nil {
l.Debugln(err)
r.setError(err)
return
return err
}
done := make(chan struct{})
defer close(done)
go func() {
select {
case <-stop:
case <-done:
}
conn.Close()
}()
intfs, err := net.Interfaces()
if err != nil {
l.Debugln(err)
r.setError(err)
return
return err
}
pconn := ipv6.NewPacketConn(conn)
@@ -206,16 +213,20 @@ func (r *multicastReader) serve(stop chan struct{}) {
if joined == 0 {
l.Debugln("no multicast interfaces available")
r.setError(errors.New("no multicast interfaces available"))
return
return errors.New("no multicast interfaces available")
}
bs := make([]byte, 65536)
for {
select {
case <-stop:
return nil
default:
}
n, _, addr, err := pconn.ReadFrom(bs)
if err != nil {
l.Debugln(err)
r.setError(err)
r.SetError(err)
continue
}
l.Debugf("recv %d bytes from %s", n, addr)
@@ -224,8 +235,6 @@ func (r *multicastReader) serve(stop chan struct{}) {
copy(c, bs)
select {
case r.outbox <- recv{c, addr}:
case <-stop:
return
default:
l.Debugln("dropping message")
}

View File

@@ -109,7 +109,8 @@ func New(myID protocol.DeviceID) Configuration {
// Can't happen.
if err := cfg.prepare(myID); err != nil {
panic("bug: error in preparing new folder: " + err.Error())
l.Warnln("bug: error in preparing new folder:", err)
panic("error in preparing new folder")
}
return cfg

View File

@@ -10,6 +10,10 @@ import (
"errors"
"fmt"
"runtime"
"strings"
"time"
"github.com/shirou/gopsutil/disk"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
@@ -53,8 +57,10 @@ type FolderConfiguration struct {
WeakHashThresholdPct int `xml:"weakHashThresholdPct" json:"weakHashThresholdPct"` // Use weak hash if more than X percent of the file has changed. Set to -1 to always use weak hash.
MarkerName string `xml:"markerName" json:"markerName"`
CopyOwnershipFromParent bool `xml:"copyOwnershipFromParent" json:"copyOwnershipFromParent"`
RawModTimeWindowS int `xml:"modTimeWindowS" json:"modTimeWindowS"`
cachedFilesystem fs.Filesystem
cachedFilesystem fs.Filesystem
cachedModTimeWindow time.Duration
DeprecatedReadOnly bool `xml:"ro,attr,omitempty" json:"-"`
DeprecatedMinDiskFreePct float64 `xml:"minDiskFreePct,omitempty" json:"-"`
@@ -111,6 +117,10 @@ func (f FolderConfiguration) Versioner() versioner.Versioner {
return versionerFactory(f.ID, f.Filesystem(), f.Versioning.Params)
}
func (f FolderConfiguration) ModTimeWindow() time.Duration {
return f.cachedModTimeWindow
}
func (f *FolderConfiguration) CreateMarker() error {
if err := f.CheckPath(); err != ErrMarkerMissing {
return err
@@ -233,6 +243,21 @@ func (f *FolderConfiguration) prepare() {
if f.MarkerName == "" {
f.MarkerName = DefaultMarkerName
}
switch {
case f.RawModTimeWindowS > 0:
f.cachedModTimeWindow = time.Duration(f.RawModTimeWindowS) * time.Second
case runtime.GOOS == "android":
if usage, err := disk.Usage(f.Filesystem().URI()); err != nil {
f.cachedModTimeWindow = 2 * time.Second
l.Debugf(`Detecting FS at "%v" on android: Setting mtime window to 2s: err == "%v"`, f.Path, err)
} else if usage.Fstype == "" || strings.Contains(strings.ToLower(usage.Fstype), "fat") {
f.cachedModTimeWindow = 2 * time.Second
l.Debugf(`Detecting FS at "%v" on android: Setting mtime window to 2s: usage.Fstype == "%v"`, f.Path, usage.Fstype)
} else {
l.Debugf(`Detecting FS at %v on android: Leaving mtime window at 0: usage.Fstype == "%v"`, f.Path, usage.Fstype)
}
}
}
// RequiresRestartOnly returns a copy with only the attributes that require

View File

@@ -87,11 +87,26 @@ func CheckFreeSpace(req Size, usage fs.Usage) error {
if req.Percentage() {
freePct := (float64(usage.Free) / float64(usage.Total)) * 100
if freePct < val {
return fmt.Errorf("%f %% < %v", freePct, req)
return fmt.Errorf("%.1f %% < %v", freePct, req)
}
} else if float64(usage.Free) < val {
return fmt.Errorf("%v < %v", usage.Free, req)
return fmt.Errorf("%sB < %v", formatSI(usage.Free), req)
}
return nil
}
func formatSI(b int64) string {
switch {
case b < 1000:
return fmt.Sprintf("%d ", b)
case b < 1000*1000:
return fmt.Sprintf("%.1f K", float64(b)/1000)
case b < 1000*1000*1000:
return fmt.Sprintf("%.1f M", float64(b)/(1000*1000))
case b < 1000*1000*1000*1000:
return fmt.Sprintf("%.1f G", float64(b)/(1000*1000*1000))
default:
return fmt.Sprintf("%.1f T", float64(b)/(1000*1000*1000*1000))
}
}

View File

@@ -91,3 +91,42 @@ func TestParseSize(t *testing.T) {
}
}
}
func TestFormatSI(t *testing.T) {
cases := []struct {
bytes int64
result string
}{
{
bytes: 0,
result: "0 ", // space for unit
},
{
bytes: 999,
result: "999 ",
},
{
bytes: 1000,
result: "1.0 K",
},
{
bytes: 1023 * 1000,
result: "1.0 M",
},
{
bytes: 5 * 1000 * 1000 * 1000,
result: "5.0 G",
},
{
bytes: 50000 * 1000 * 1000 * 1000 * 1000,
result: "50000.0 T",
},
}
for _, tc := range cases {
res := formatSI(tc.bytes)
if res != tc.result {
t.Errorf("formatSI(%d) => %q, expected %q", tc.bytes, res, tc.result)
}
}
}

View File

@@ -501,8 +501,6 @@ func (w *wrapper) MyName() string {
}
func (w *wrapper) AddOrUpdatePendingDevice(device protocol.DeviceID, name, address string) {
defer w.Save()
w.mut.Lock()
defer w.mut.Unlock()
@@ -524,8 +522,6 @@ func (w *wrapper) AddOrUpdatePendingDevice(device protocol.DeviceID, name, addre
}
func (w *wrapper) AddOrUpdatePendingFolder(id, label string, device protocol.DeviceID) {
defer w.Save()
w.mut.Lock()
defer w.mut.Unlock()

View File

@@ -36,7 +36,7 @@ type quicDialer struct {
tlsCfg *tls.Config
}
func (d *quicDialer) Dial(id protocol.DeviceID, uri *url.URL) (internalConn, error) {
func (d *quicDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error) {
uri = fixupPort(uri, config.DefaultQUICPort)
addr, err := net.ResolveUDPAddr("udp", uri.Host)

View File

@@ -16,6 +16,7 @@ import (
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/nat"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/util"
)
func init() {
@@ -26,6 +27,7 @@ func init() {
}
type relayListener struct {
util.ServiceWithError
onAddressesChangedNotifier
uri *url.URL
@@ -34,30 +36,22 @@ type relayListener struct {
conns chan internalConn
factory listenerFactory
err error
client client.RelayClient
mut sync.RWMutex
}
func (t *relayListener) Serve() {
t.mut.Lock()
t.err = nil
t.mut.Unlock()
func (t *relayListener) serve(stop chan struct{}) error {
clnt, err := client.NewClient(t.uri, t.tlsCfg.Certificates, nil, 10*time.Second)
invitations := clnt.Invitations()
if err != nil {
t.mut.Lock()
t.err = err
t.mut.Unlock()
l.Warnln("Listen (BEP/relay):", err)
return
l.Infoln("Listen (BEP/relay):", err)
return err
}
go clnt.Serve()
invitations := clnt.Invitations()
t.mut.Lock()
t.client = clnt
go clnt.Serve()
defer clnt.Stop()
t.mut.Unlock()
oldURI := clnt.URI()
@@ -69,7 +63,10 @@ func (t *relayListener) Serve() {
select {
case inv, ok := <-invitations:
if !ok {
return
if err := clnt.Error(); err != nil {
l.Infoln("Listen (BEP/relay):", err)
}
return err
}
conn, err := client.JoinSession(inv)
@@ -114,18 +111,13 @@ func (t *relayListener) Serve() {
oldURI = currentURI
t.notifyAddressesChanged(t)
}
case <-stop:
return nil
}
}
}
func (t *relayListener) Stop() {
t.mut.RLock()
if t.client != nil {
t.client.Stop()
}
t.mut.RUnlock()
}
func (t *relayListener) URI() *url.URL {
return t.uri
}
@@ -152,18 +144,16 @@ func (t *relayListener) LANAddresses() []*url.URL {
}
func (t *relayListener) Error() error {
t.mut.RLock()
err := t.err
var cerr error
if t.client != nil {
cerr = t.client.Error()
}
t.mut.RUnlock()
err := t.ServiceWithError.Error()
if err != nil {
return err
}
return cerr
t.mut.RLock()
defer t.mut.RUnlock()
if t.client != nil {
return t.client.Error()
}
return nil
}
func (t *relayListener) Factory() listenerFactory {
@@ -181,13 +171,15 @@ func (t *relayListener) NATType() string {
type relayListenerFactory struct{}
func (f *relayListenerFactory) New(uri *url.URL, cfg config.Wrapper, tlsCfg *tls.Config, conns chan internalConn, natService *nat.Service) genericListener {
return &relayListener{
t := &relayListener{
uri: uri,
cfg: cfg,
tlsCfg: tlsCfg,
conns: conns,
factory: f,
}
t.ServiceWithError = util.AsServiceWithError(t.serve)
return t
}
func (relayListenerFactory) Valid(cfg config.Configuration) error {

View File

@@ -90,6 +90,7 @@ var tlsVersionNames = map[uint16]string{
// dialers. Successful connections are handed to the model.
type Service interface {
suture.Service
discover.AddressLister
ListenerStatus() map[string]ListenerStatusEntry
ConnectionStatus() map[string]ConnectionStatusEntry
NATType() string
@@ -129,9 +130,7 @@ type service struct {
connectionStatus map[string]ConnectionStatusEntry // address -> latest error/status
}
func NewService(cfg config.Wrapper, myID protocol.DeviceID, mdl Model, tlsCfg *tls.Config, discoverer discover.Finder,
bepProtocolName string, tlsDefaultCommonName string) *service {
func NewService(cfg config.Wrapper, myID protocol.DeviceID, mdl Model, tlsCfg *tls.Config, discoverer discover.Finder, bepProtocolName string, tlsDefaultCommonName string) Service {
service := &service{
Supervisor: suture.New("connections.Service", suture.Spec{
Log: func(line string) {
@@ -231,7 +230,7 @@ func (s *service) handle(stop chan struct{}) {
continue
}
c.SetDeadline(time.Now().Add(20 * time.Second))
_ = c.SetDeadline(time.Now().Add(20 * time.Second))
hello, err := protocol.ExchangeHello(c, s.model.GetHello(remoteID))
if err != nil {
if protocol.IsVersionMismatch(err) {
@@ -255,7 +254,7 @@ func (s *service) handle(stop chan struct{}) {
c.Close()
continue
}
c.SetDeadline(time.Time{})
_ = c.SetDeadline(time.Time{})
// The Model will return an error for devices that we don't want to
// have a connection with for whatever reason, for example unknown devices.
@@ -850,8 +849,15 @@ func (s *service) dialParallel(deviceID protocol.DeviceID, dialTargets []dialTar
wg.Add(1)
go func(tgt dialTarget) {
conn, err := tgt.Dial()
s.setConnectionStatus(tgt.addr, err)
if err == nil {
// Closes the connection on error
err = s.validateIdentity(conn, deviceID)
}
s.setConnectionStatus(tgt.addr, err)
if err != nil {
l.Debugln("dialing", deviceID, tgt.uri, "error:", err)
} else {
l.Debugln("dialing", deviceID, tgt.uri, "success:", conn)
res <- conn
}
wg.Done()
@@ -884,3 +890,36 @@ func (s *service) dialParallel(deviceID protocol.DeviceID, dialTargets []dialTar
}
return internalConn{}, false
}
func (s *service) validateIdentity(c internalConn, expectedID protocol.DeviceID) error {
cs := c.ConnectionState()
// We should have received exactly one certificate from the other
// side. If we didn't, they don't have a device ID and we drop the
// connection.
certs := cs.PeerCertificates
if cl := len(certs); cl != 1 {
l.Infof("Got peer certificate list of length %d != 1 from peer at %s; protocol error", cl, c)
c.Close()
return fmt.Errorf("expected 1 certificate, got %d", cl)
}
remoteCert := certs[0]
remoteID := protocol.NewDeviceID(remoteCert.Raw)
// The device ID should not be that of ourselves. It can happen
// though, especially in the presence of NAT hairpinning, multiple
// clients between the same NAT gateway, and global discovery.
if remoteID == s.myID {
l.Infof("Connected to myself (%s) at %s - should not happen", remoteID, c)
c.Close()
return fmt.Errorf("connected to self")
}
// We should see the expected device ID
if !remoteID.Equals(expectedID) {
c.Close()
return fmt.Errorf("unexpected device id, expected %s got %s", expectedID, remoteID)
}
return nil
}

View File

@@ -215,11 +215,5 @@ type dialTarget struct {
func (t dialTarget) Dial() (internalConn, error) {
l.Debugln("dialing", t.deviceID, t.uri, "prio", t.priority)
conn, err := t.dialer.Dial(t.deviceID, t.uri)
if err != nil {
l.Debugln("dialing", t.deviceID, t.uri, "error:", err)
} else {
l.Debugln("dialing", t.deviceID, t.uri, "success:", conn)
}
return conn, err
return t.dialer.Dial(t.deviceID, t.uri)
}

View File

@@ -30,7 +30,7 @@ type tcpDialer struct {
tlsCfg *tls.Config
}
func (d *tcpDialer) Dial(id protocol.DeviceID, uri *url.URL) (internalConn, error) {
func (d *tcpDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error) {
uri = fixupPort(uri, config.DefaultTCPPort)
conn, err := dialer.DialTimeout(uri.Scheme, uri.Host, 10*time.Second)

View File

@@ -16,13 +16,13 @@ import (
)
var files, oneFile, firstHalf, secondHalf []protocol.FileInfo
var benchS *db.FileSet
func lazyInitBenchFileSet() {
if benchS != nil {
func lazyInitBenchFiles() {
if files != nil {
return
}
files = make([]protocol.FileInfo, 0, 1000)
for i := 0; i < 1000; i++ {
files = append(files, protocol.FileInfo{
Name: fmt.Sprintf("file%d", i),
@@ -35,11 +35,17 @@ func lazyInitBenchFileSet() {
firstHalf = files[:middle]
secondHalf = files[middle:]
oneFile = firstHalf[middle-1 : middle]
}
func getBenchFileSet() (*db.Lowlevel, *db.FileSet) {
lazyInitBenchFiles()
ldb := db.OpenMemory()
benchS = db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
benchS := db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
replace(benchS, remoteDevice0, files)
replace(benchS, protocol.LocalDeviceID, firstHalf)
return ldb, benchS
}
func BenchmarkReplaceAll(b *testing.B) {
@@ -56,7 +62,8 @@ func BenchmarkReplaceAll(b *testing.B) {
}
func BenchmarkUpdateOneChanged(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
changed := make([]protocol.FileInfo, 1)
changed[0] = oneFile[0]
@@ -75,7 +82,8 @@ func BenchmarkUpdateOneChanged(b *testing.B) {
}
func BenchmarkUpdate100Changed(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
unchanged := files[100:200]
changed := append([]protocol.FileInfo{}, unchanged...)
@@ -96,7 +104,8 @@ func BenchmarkUpdate100Changed(b *testing.B) {
}
func BenchmarkUpdate100ChangedRemote(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
unchanged := files[100:200]
changed := append([]protocol.FileInfo{}, unchanged...)
@@ -117,7 +126,8 @@ func BenchmarkUpdate100ChangedRemote(b *testing.B) {
}
func BenchmarkUpdateOneUnchanged(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -128,7 +138,8 @@ func BenchmarkUpdateOneUnchanged(b *testing.B) {
}
func BenchmarkNeedHalf(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -146,8 +157,6 @@ func BenchmarkNeedHalf(b *testing.B) {
}
func BenchmarkNeedHalfRemote(b *testing.B) {
lazyInitBenchFileSet()
ldb := db.OpenMemory()
defer ldb.Close()
fset := db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -170,7 +179,8 @@ func BenchmarkNeedHalfRemote(b *testing.B) {
}
func BenchmarkHave(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -188,7 +198,8 @@ func BenchmarkHave(b *testing.B) {
}
func BenchmarkGlobal(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -206,7 +217,8 @@ func BenchmarkGlobal(b *testing.B) {
}
func BenchmarkNeedHalfTruncated(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -224,7 +236,8 @@ func BenchmarkNeedHalfTruncated(b *testing.B) {
}
func BenchmarkHaveTruncated(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -242,7 +255,8 @@ func BenchmarkHaveTruncated(b *testing.B) {
}
func BenchmarkGlobalTruncated(b *testing.B) {
lazyInitBenchFileSet()
ldb, benchS := getBenchFileSet()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {

View File

@@ -167,7 +167,7 @@ func TestUpdate0to3(t *testing.T) {
t.Error("Unexpected additional file via sequence", f.FileName())
return true
}
if e := haveUpdate0to3[protocol.LocalDeviceID][0]; f.IsEquivalentOptional(e, true, true, 0) {
if e := haveUpdate0to3[protocol.LocalDeviceID][0]; f.IsEquivalentOptional(e, 0, true, true, 0) {
found = true
} else {
t.Errorf("Wrong file via sequence, got %v, expected %v", f, e)
@@ -192,7 +192,7 @@ func TestUpdate0to3(t *testing.T) {
}
f := fi.(protocol.FileInfo)
delete(need, f.Name)
if !f.IsEquivalentOptional(e, true, true, 0) {
if !f.IsEquivalentOptional(e, 0, true, true, 0) {
t.Errorf("Wrong needed file, got %v, expected %v", f, e)
}
return true

View File

@@ -8,6 +8,7 @@ package db
import (
"os"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -23,7 +24,10 @@ import (
const (
dbMaxOpenFiles = 100
dbWriteBuffer = 16 << 20
dbFlushBatch = dbWriteBuffer / 4 // Some leeway for any leveldb in-memory optimizations
)
var (
dbFlushBatch = debugEnvValue("WriteBuffer", dbWriteBuffer) / 4 // Some leeway for any leveldb in-memory optimizations
)
// Lowlevel is the lowest level database interface. It has a very simple
@@ -47,8 +51,32 @@ type Lowlevel struct {
// the database is erased and created from scratch.
func Open(location string) (*Lowlevel, error) {
opts := &opt.Options{
OpenFilesCacheCapacity: dbMaxOpenFiles,
WriteBuffer: dbWriteBuffer,
BlockCacheCapacity: debugEnvValue("BlockCacheCapacity", 0),
BlockCacheEvictRemoved: debugEnvValue("BlockCacheEvictRemoved", 0) != 0,
BlockRestartInterval: debugEnvValue("BlockRestartInterval", 0),
BlockSize: debugEnvValue("BlockSize", 0),
CompactionExpandLimitFactor: debugEnvValue("CompactionExpandLimitFactor", 0),
CompactionGPOverlapsFactor: debugEnvValue("CompactionGPOverlapsFactor", 0),
CompactionL0Trigger: debugEnvValue("CompactionL0Trigger", 0),
CompactionSourceLimitFactor: debugEnvValue("CompactionSourceLimitFactor", 0),
CompactionTableSize: debugEnvValue("CompactionTableSize", 0),
CompactionTableSizeMultiplier: float64(debugEnvValue("CompactionTableSizeMultiplier", 0)) / 10.0,
CompactionTotalSize: debugEnvValue("CompactionTotalSize", 0),
CompactionTotalSizeMultiplier: float64(debugEnvValue("CompactionTotalSizeMultiplier", 0)) / 10.0,
DisableBufferPool: debugEnvValue("DisableBufferPool", 0) != 0,
DisableBlockCache: debugEnvValue("DisableBlockCache", 0) != 0,
DisableCompactionBackoff: debugEnvValue("DisableCompactionBackoff", 0) != 0,
DisableLargeBatchTransaction: debugEnvValue("DisableLargeBatchTransaction", 0) != 0,
NoSync: debugEnvValue("NoSync", 0) != 0,
NoWriteMerge: debugEnvValue("NoWriteMerge", 0) != 0,
OpenFilesCacheCapacity: debugEnvValue("OpenFilesCacheCapacity", dbMaxOpenFiles),
WriteBuffer: debugEnvValue("WriteBuffer", dbWriteBuffer),
// The write slowdown and pause can be overridden, but even if they
// are not and the compaction trigger is overridden we need to
// adjust so that we don't pause writes for L0 compaction before we
// even *start* L0 compaction...
WriteL0SlowdownTrigger: debugEnvValue("WriteL0SlowdownTrigger", 2*debugEnvValue("CompactionL0Trigger", opt.DefaultCompactionL0Trigger)),
WriteL0PauseTrigger: debugEnvValue("WriteL0SlowdownTrigger", 3*debugEnvValue("CompactionL0Trigger", opt.DefaultCompactionL0Trigger)),
}
return open(location, opts)
}
@@ -80,6 +108,12 @@ func open(location string, opts *opt.Options) (*Lowlevel, error) {
if err != nil {
return nil, errorSuggestion{err, "is another instance of Syncthing running?"}
}
if debugEnvValue("CompactEverything", 0) != 0 {
if err := db.CompactRange(util.Range{}); err != nil {
l.Warnln("Compacting database:", err)
}
}
return NewLowlevel(db, location), nil
}
@@ -304,3 +338,11 @@ func (it *iter) execIfNotClosed(fn func() bool) bool {
}
return fn()
}
func debugEnvValue(key string, def int) int {
v, err := strconv.ParseInt(os.Getenv("STDEBUG_"+key), 10, 63)
if err != nil {
return def
}
return int(v)
}
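To illustrate how the STDEBUG_ overrides above resolve, here is a minimal standalone sketch (not Syncthing code) that reuses the same parsing logic; the 32 MiB value and the variable name are arbitrary examples chosen for the illustration:

package main

import (
	"fmt"
	"os"
	"strconv"
)

// Same fallback behaviour as the debugEnvValue in the diff: unset or
// unparsable variables return the default.
func debugEnvValue(key string, def int) int {
	v, err := strconv.ParseInt(os.Getenv("STDEBUG_"+key), 10, 63)
	if err != nil {
		return def
	}
	return int(v)
}

func main() {
	const dbWriteBuffer = 16 << 20                // default 16 MiB, as in the diff above
	os.Setenv("STDEBUG_WriteBuffer", "33554432")  // pretend the operator exported 32 MiB
	wb := debugEnvValue("WriteBuffer", dbWriteBuffer)
	fmt.Println("effective WriteBuffer:", wb)    // 33554432
	fmt.Println("effective dbFlushBatch:", wb/4) // 8388608, a quarter of the buffer
}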

View File

@@ -905,7 +905,7 @@ func TestWithHaveSequence(t *testing.T) {
i := 2
s.WithHaveSequence(int64(i), func(fi db.FileIntf) bool {
if f := fi.(protocol.FileInfo); !f.IsEquivalent(localHave[i-1]) {
if f := fi.(protocol.FileInfo); !f.IsEquivalent(localHave[i-1], 0) {
t.Fatalf("Got %v\nExpected %v", f, localHave[i-1])
}
i++
@@ -1004,7 +1004,7 @@ func TestMoveGlobalBack(t *testing.T) {
if need := needList(s, protocol.LocalDeviceID); len(need) != 1 {
t.Error("Expected 1 local need, got", need)
} else if !need[0].IsEquivalent(remote0Have[0]) {
} else if !need[0].IsEquivalent(remote0Have[0], 0) {
t.Errorf("Local need incorrect;\n A: %v !=\n E: %v", need[0], remote0Have[0])
}
@@ -1030,7 +1030,7 @@ func TestMoveGlobalBack(t *testing.T) {
if need := needList(s, remoteDevice0); len(need) != 1 {
t.Error("Expected 1 need for remote 0, got", need)
} else if !need[0].IsEquivalent(localHave[0]) {
} else if !need[0].IsEquivalent(localHave[0], 0) {
t.Errorf("Need for remote 0 incorrect;\n A: %v !=\n E: %v", need[0], localHave[0])
}
@@ -1066,7 +1066,7 @@ func TestIssue5007(t *testing.T) {
if need := needList(s, protocol.LocalDeviceID); len(need) != 1 {
t.Fatal("Expected 1 local need, got", need)
} else if !need[0].IsEquivalent(fs[0]) {
} else if !need[0].IsEquivalent(fs[0], 0) {
t.Fatalf("Local need incorrect;\n A: %v !=\n E: %v", need[0], fs[0])
}
@@ -1101,7 +1101,7 @@ func TestNeedDeleted(t *testing.T) {
if need := needList(s, protocol.LocalDeviceID); len(need) != 1 {
t.Fatal("Expected 1 local need, got", need)
} else if !need[0].IsEquivalent(fs[0]) {
} else if !need[0].IsEquivalent(fs[0], 0) {
t.Fatalf("Local need incorrect;\n A: %v !=\n E: %v", need[0], fs[0])
}
@@ -1243,7 +1243,7 @@ func TestNeedAfterUnignore(t *testing.T) {
if need := needList(s, protocol.LocalDeviceID); len(need) != 1 {
t.Fatal("Expected one local need, got", need)
} else if !need[0].IsEquivalent(remote) {
} else if !need[0].IsEquivalent(remote, 0) {
t.Fatalf("Got %v, expected %v", need[0], remote)
}
}
@@ -1287,7 +1287,7 @@ func TestNeedWithNewerInvalid(t *testing.T) {
if len(need) != 1 {
t.Fatal("Locally missing file should be needed")
}
if !need[0].IsEquivalent(file) {
if !need[0].IsEquivalent(file, 0) {
t.Fatalf("Got needed file %v, expected %v", need[0], file)
}
@@ -1302,7 +1302,7 @@ func TestNeedWithNewerInvalid(t *testing.T) {
if len(need) != 1 {
t.Fatal("Locally missing file should be needed regardless of invalid files")
}
if !need[0].IsEquivalent(file) {
if !need[0].IsEquivalent(file, 0) {
t.Fatalf("Got needed file %v, expected %v", need[0], file)
}
}

View File

@@ -15,7 +15,7 @@ import (
"strings"
"time"
"github.com/calmh/du"
"github.com/shirou/gopsutil/disk"
)
var (
@@ -266,11 +266,14 @@ func (f *BasicFilesystem) Usage(name string) (Usage, error) {
if err != nil {
return Usage{}, err
}
u, err := du.Get(name)
u, err := disk.Usage(name)
if err != nil {
return Usage{}, err
}
return Usage{
Free: u.FreeBytes,
Total: u.TotalBytes,
}, err
Free: int64(u.Free),
Total: int64(u.Total),
}, nil
}
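As a minimal, self-contained sketch (not part of the diff) of the gopsutil call the new Usage implementation relies on, assuming the github.com/shirou/gopsutil dependency is available and using "/" as an example path:

package main

import (
	"fmt"

	"github.com/shirou/gopsutil/disk"
)

func main() {
	u, err := disk.Usage("/")
	if err != nil {
		fmt.Println("usage lookup failed:", err)
		return
	}
	// gopsutil reports unsigned byte counts; Syncthing's Usage struct
	// converts them to int64, as seen in the diff above.
	fmt.Printf("free: %d bytes, total: %d bytes\n", int64(u.Free), int64(u.Total))
}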
func (f *BasicFilesystem) Type() FilesystemType {

View File

@@ -11,7 +11,9 @@ package fs
import "github.com/syncthing/notify"
const (
subEventMask = notify.NoteDelete | notify.NoteWrite | notify.NoteRename
permEventMask = notify.NoteAttrib
// Platform independent notify.Create is required, as kqueue does not have
// any event signalling file creation, but notify does generate those internally.
subEventMask = notify.NoteDelete | notify.NoteWrite | notify.NoteRename | notify.Create
permEventMask = notify.NoteAttrib | notify.NoteExtend
rmEventMask = notify.NoteDelete | notify.NoteRename
)

View File

@@ -46,7 +46,8 @@ const (
func init() {
err := expandLocations()
if err != nil {
panic(err)
fmt.Println(err)
panic("Failed to expand locations at init time")
}
}
@@ -124,7 +125,8 @@ func defaultConfigDir() string {
case "darwin":
dir, err := fs.ExpandTilde("~/Library/Application Support/Syncthing")
if err != nil {
panic(err)
fmt.Println(err)
panic("Failed to get default config dir")
}
return dir
@@ -134,7 +136,8 @@ func defaultConfigDir() string {
}
dir, err := fs.ExpandTilde("~/.config/syncthing")
if err != nil {
panic(err)
fmt.Println(err)
panic("Failed to get default config dir")
}
return dir
}
@@ -144,7 +147,8 @@ func defaultConfigDir() string {
func homeDir() string {
home, err := fs.ExpandTilde("~")
if err != nil {
panic(err)
fmt.Println(err)
panic("Failed to get user home dir")
}
return home
}

View File

@@ -347,6 +347,7 @@ func (f *folder) scanSubdirs(subDirs []string) error {
ShortID: f.shortID,
ProgressTickIntervalS: f.ScanProgressIntervalS,
LocalFlags: f.localFlags,
ModTimeWindow: f.ModTimeWindow(),
})
batchFn := func(fs []protocol.FileInfo) error {
@@ -365,7 +366,7 @@ func (f *folder) scanSubdirs(subDirs []string) error {
switch gf, ok := f.fset.GetGlobal(fs[i].Name); {
case !ok:
continue
case gf.IsEquivalentOptional(fs[i], false, false, protocol.FlagLocalReceiveOnly):
case gf.IsEquivalentOptional(fs[i], f.ModTimeWindow(), false, false, protocol.FlagLocalReceiveOnly):
// What we have locally is equivalent to the global file.
fs[i].Version = fs[i].Version.Merge(gf.Version)
fallthrough
@@ -423,6 +424,12 @@ func (f *folder) scanSubdirs(subDirs []string) error {
var iterError error
f.fset.WithPrefixedHaveTruncated(protocol.LocalDeviceID, sub, func(fi db.FileIntf) bool {
select {
case <-f.ctx.Done():
return false
default:
}
file := fi.(db.FileInfoTruncated)
if err := batch.flushIfFull(); err != nil {
@@ -507,6 +514,12 @@ func (f *folder) scanSubdirs(subDirs []string) error {
return true
})
select {
case <-f.ctx.Done():
return f.ctx.Err()
default:
}
if iterError == nil && len(toIgnore) > 0 {
for _, file := range toIgnore {
l.Debugln("marking file as ignored", f)
@@ -672,10 +685,11 @@ func (f *folder) setWatchError(err error) {
// scanOnWatchErr schedules a full scan immediately if an error occurred while watching.
func (f *folder) scanOnWatchErr() {
f.watchMut.Lock()
if f.watchErr != nil {
err := f.watchErr
f.watchMut.Unlock()
if err != nil {
f.Delay(0)
}
f.watchMut.Unlock()
}
func (f *folder) setError(err error) {

View File

@@ -68,13 +68,13 @@ func (f *sendOnlyFolder) pull() bool {
curFile, ok := f.fset.Get(protocol.LocalDeviceID, intf.FileName())
if !ok {
if intf.IsDeleted() {
panic("Should never get a deleted file as needed when we don't have it")
l.Debugln("Should never get a deleted file as needed when we don't have it")
}
return true
}
file := intf.(protocol.FileInfo)
if !file.IsEquivalentOptional(curFile, f.IgnorePerms, false, 0) {
if !file.IsEquivalentOptional(curFile, f.ModTimeWindow(), f.IgnorePerms, false, 0) {
return true
}

View File

@@ -594,9 +594,9 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
// Symlinks aren't checked for conflicts.
file.Version = file.Version.Merge(curFile.Version)
err = osutil.InWritableDir(func(name string) error {
err = f.inWritableDir(func(name string) error {
return f.moveForConflict(name, file.ModifiedBy.String(), scanChan)
}, f.fs, curFile.Name)
}, curFile.Name)
} else {
err = f.deleteItemOnDisk(curFile, scanChan)
}
@@ -633,7 +633,7 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
return f.fs.Chmod(path, mode|(info.Mode()&retainBits))
}
if err = osutil.InWritableDir(mkdir, f.fs, file.Name); err == nil {
if err = f.inWritableDir(mkdir, file.Name); err == nil {
dbUpdateChan <- dbUpdateJob{file, dbUpdateHandleDir}
} else {
f.newPullError(file.Name, errors.Wrap(err, "creating directory"))
@@ -748,9 +748,9 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
// Directories and symlinks aren't checked for conflicts.
file.Version = file.Version.Merge(curFile.Version)
err = osutil.InWritableDir(func(name string) error {
err = f.inWritableDir(func(name string) error {
return f.moveForConflict(name, file.ModifiedBy.String(), scanChan)
}, f.fs, curFile.Name)
}, curFile.Name)
} else {
err = f.deleteItemOnDisk(curFile, scanChan)
}
@@ -769,7 +769,7 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
return f.maybeCopyOwner(path)
}
if err = osutil.InWritableDir(createLink, f.fs, file.Name); err == nil {
if err = f.inWritableDir(createLink, file.Name); err == nil {
dbUpdateChan <- dbUpdateJob{file, dbUpdateHandleSymlink}
} else {
f.newPullError(file.Name, errors.Wrap(err, "symlink create"))
@@ -869,9 +869,9 @@ func (f *sendReceiveFolder) deleteFileWithCurrent(file, cur protocol.FileInfo, h
}
if f.versioner != nil && !cur.IsSymlink() {
err = osutil.InWritableDir(f.versioner.Archive, f.fs, file.Name)
err = f.inWritableDir(f.versioner.Archive, file.Name)
} else {
err = osutil.InWritableDir(f.fs.Remove, f.fs, file.Name)
err = f.inWritableDir(f.fs.Remove, file.Name)
}
if err == nil || fs.IsNotExist(err) {
@@ -953,7 +953,7 @@ func (f *sendReceiveFolder) renameFile(cur, source, target protocol.FileInfo, db
default:
var fi protocol.FileInfo
if fi, err = scanner.CreateFileInfo(stat, target.Name, f.fs); err == nil {
if !fi.IsEquivalentOptional(curTarget, f.IgnorePerms, true, protocol.LocalAllFlags) {
if !fi.IsEquivalentOptional(curTarget, f.ModTimeWindow(), f.IgnorePerms, true, protocol.LocalAllFlags) {
// Target changed
scanChan <- target.Name
err = errModified
@@ -971,7 +971,7 @@ func (f *sendReceiveFolder) renameFile(cur, source, target protocol.FileInfo, db
if err == nil {
err = osutil.Copy(f.fs, f.fs, source.Name, tempName)
if err == nil {
err = osutil.InWritableDir(f.versioner.Archive, f.fs, source.Name)
err = f.inWritableDir(f.versioner.Archive, source.Name)
}
}
} else {
@@ -1078,7 +1078,7 @@ func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, copyChan chan<- c
// Otherwise, discard the file ourselves in order for the
// sharedpuller not to panic when it fails to exclusively create a
// file which already exists
osutil.InWritableDir(f.fs.Remove, f.fs, tempName)
f.inWritableDir(f.fs.Remove, tempName)
}
} else {
// Copy the blocks, as we don't want to shuffle them on the FileInfo
@@ -1522,9 +1522,9 @@ func (f *sendReceiveFolder) performFinish(file, curFile protocol.FileInfo, hasCu
// Directories and symlinks aren't checked for conflicts.
file.Version = file.Version.Merge(curFile.Version)
err = osutil.InWritableDir(func(name string) error {
err = f.inWritableDir(func(name string) error {
return f.moveForConflict(name, file.ModifiedBy.String(), scanChan)
}, f.fs, curFile.Name)
}, curFile.Name)
} else {
err = f.deleteItemOnDisk(curFile, scanChan)
}
@@ -1825,10 +1825,10 @@ func (f *sendReceiveFolder) deleteItemOnDisk(item protocol.FileInfo, scanChan ch
// an error.
// Symlinks aren't archived.
return osutil.InWritableDir(f.versioner.Archive, f.fs, item.Name)
return f.inWritableDir(f.versioner.Archive, item.Name)
}
return osutil.InWritableDir(f.fs.Remove, f.fs, item.Name)
return f.inWritableDir(f.fs.Remove, item.Name)
}
// deleteDirOnDisk attempts to delete a directory. It checks for files/dirs inside
@@ -1879,7 +1879,7 @@ func (f *sendReceiveFolder) deleteDirOnDisk(dir string, scanChan chan<- string)
f.fs.RemoveAll(del)
}
err := osutil.InWritableDir(f.fs.Remove, f.fs, dir)
err := f.inWritableDir(f.fs.Remove, dir)
if err == nil || fs.IsNotExist(err) {
// It was removed or it doesn't exist to start with
return nil
@@ -1919,7 +1919,7 @@ func (f *sendReceiveFolder) scanIfItemChanged(stat fs.FileInfo, item protocol.Fi
return errors.Wrap(err, "comparing item on disk to db")
}
if !statItem.IsEquivalentOptional(item, f.IgnorePerms, true, protocol.LocalAllFlags) {
if !statItem.IsEquivalentOptional(item, f.ModTimeWindow(), f.IgnorePerms, true, protocol.LocalAllFlags) {
return errModified
}
@@ -1963,6 +1963,10 @@ func (f *sendReceiveFolder) maybeCopyOwner(path string) error {
return nil
}
func (f *sendReceiveFolder) inWritableDir(fn func(string) error, path string) error {
return inWritableDir(fn, f.fs, path, f.IgnorePerms)
}
// A []FileError is sent as part of an event and will be JSON serialized.
type FileError struct {
Path string `json:"path"`

View File

@@ -248,10 +248,8 @@ func (m *model) StartFolder(folder string) {
// Need to hold lock on m.fmut when calling this.
func (m *model) startFolderLocked(cfg config.FolderConfiguration) {
if err := m.checkFolderRunningLocked(cfg.ID); err == errFolderMissing {
l.Warnln("Cannot start nonexistent folder", cfg.Description())
panic("cannot start nonexistent folder")
} else if err == nil {
_, ok := m.folderRunners[cfg.ID]
if ok {
l.Warnln("Cannot start already running folder", cfg.Description())
panic("cannot start already running folder")
}
@@ -461,18 +459,16 @@ func (m *model) RestartFolder(from, to config.FolderConfiguration) {
errMsg = "restarting"
}
var fset *db.FileSet
if !to.Paused {
// Creating the fileset can take a long time (metadata calculation)
// so we do it outside of the lock.
fset = db.NewFileSet(to.ID, to.Filesystem(), m.db)
}
m.fmut.Lock()
defer m.fmut.Unlock()
m.tearDownFolderLocked(from, fmt.Errorf("%v folder %v", errMsg, to.Description()))
if !to.Paused {
// Creating the fileset can take a long time (metadata calculation)
// so we do it outside of the lock.
m.fmut.Unlock()
fset := db.NewFileSet(to.ID, to.Filesystem(), m.db)
m.fmut.Lock()
m.addFolderLocked(to, fset)
m.startFolderLocked(to)
}
@@ -601,10 +597,8 @@ func (info ConnectionInfo) MarshalJSON() ([]byte, error) {
// ConnectionStats returns a map with connection statistics for each device.
func (m *model) ConnectionStats() map[string]interface{} {
m.fmut.RLock()
m.pmut.RLock()
defer m.pmut.RUnlock()
defer m.fmut.RUnlock()
res := make(map[string]interface{})
devs := m.cfg.Devices()
@@ -771,8 +765,9 @@ func addSizeOfFile(s *db.Counts, f db.FileIntf) {
// files in the global model.
func (m *model) GlobalSize(folder string) db.Counts {
m.fmut.RLock()
defer m.fmut.RUnlock()
if rf, ok := m.folderFiles[folder]; ok {
rf, ok := m.folderFiles[folder]
m.fmut.RUnlock()
if ok {
return rf.GlobalSize()
}
return db.Counts{}
@@ -782,8 +777,9 @@ func (m *model) GlobalSize(folder string) db.Counts {
// files in the local folder.
func (m *model) LocalSize(folder string) db.Counts {
m.fmut.RLock()
defer m.fmut.RUnlock()
if rf, ok := m.folderFiles[folder]; ok {
rf, ok := m.folderFiles[folder]
m.fmut.RUnlock()
if ok {
return rf.LocalSize()
}
return db.Counts{}
@@ -794,8 +790,9 @@ func (m *model) LocalSize(folder string) db.Counts {
// folder.
func (m *model) ReceiveOnlyChangedSize(folder string) db.Counts {
m.fmut.RLock()
defer m.fmut.RUnlock()
if rf, ok := m.folderFiles[folder]; ok {
rf, ok := m.folderFiles[folder]
m.fmut.RUnlock()
if ok {
return rf.ReceiveOnlyChangedSize()
}
return db.Counts{}
@@ -847,6 +844,10 @@ func (m *model) NeedFolderFiles(folder string, page, perpage int) ([]db.FileInfo
if runnerOk {
progressNames, queuedNames, skipped := runner.Jobs(page, perpage)
progress = make([]db.FileInfoTruncated, len(progressNames))
queued = make([]db.FileInfoTruncated, len(queuedNames))
seen = make(map[string]struct{}, len(progressNames)+len(queuedNames))
for i, name := range progressNames {
if f, ok := rf.GetGlobalTruncated(name); ok {
progress[i] = f
@@ -1008,8 +1009,9 @@ func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []prot
}
m.pmut.RLock()
m.deviceDownloads[deviceID].Update(folder, makeForgetUpdate(fs))
downloads := m.deviceDownloads[deviceID]
m.pmut.RUnlock()
downloads.Update(folder, makeForgetUpdate(fs))
if !update {
files.Drop(deviceID)
@@ -1064,8 +1066,7 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
}
}
m.fmut.Lock()
defer m.fmut.Unlock()
m.fmut.RLock()
var paused []string
for _, folder := range cm.Folders {
cfg, ok := m.cfg.Folder(folder.ID)
@@ -1075,6 +1076,7 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
continue
}
m.cfg.AddOrUpdatePendingFolder(folder.ID, folder.Label, deviceID)
changed = true
events.Default.Log(events.FolderRejected, map[string]string{
"folder": folder.ID,
"folderLabel": folder.Label,
@@ -1185,6 +1187,7 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
// implementing suture.IsCompletable).
m.Add(is)
}
m.fmut.RUnlock()
m.pmut.Lock()
m.remotePausedFolders[deviceID] = paused
@@ -1653,9 +1656,9 @@ func (m *model) recheckFile(deviceID protocol.DeviceID, folderFs fs.Filesystem,
// to what we have in the database, yet the content we've read off the filesystem doesn't
// Something is fishy, invalidate the file and rescan it.
// The file will temporarily become invalid, which is ok as the content is messed up.
m.fmut.Lock()
m.fmut.RLock()
runner, ok := m.folderRunners[folder]
m.fmut.Unlock()
m.fmut.RUnlock()
if !ok {
l.Debugf("%v recheckFile: %s: %q / %q: Folder stopped before rescan could be scheduled", m, deviceID, folder, name)
return
@@ -1769,6 +1772,7 @@ func (m *model) OnHello(remoteID protocol.DeviceID, addr net.Addr, hello protoco
cfg, ok := m.cfg.Device(remoteID)
if !ok {
m.cfg.AddOrUpdatePendingDevice(remoteID, hello.DeviceName, addr.String())
_ = m.cfg.Save() // best effort
events.Default.Log(events.DeviceRejected, map[string]string{
"name": hello.DeviceName,
"device": remoteID.String(),
@@ -1885,9 +1889,10 @@ func (m *model) DownloadProgress(device protocol.DeviceID, folder string, update
}
m.pmut.RLock()
m.deviceDownloads[device].Update(folder, updates)
state := m.deviceDownloads[device].GetBlockCounts(folder)
downloads := m.deviceDownloads[device]
m.pmut.RUnlock()
downloads.Update(folder, updates)
state := downloads.GetBlockCounts(folder)
events.Default.Log(events.RemoteDownloadProgress, map[string]interface{}{
"device": device.String(),
@@ -2227,20 +2232,24 @@ func (m *model) State(folder string) (string, time.Time, error) {
func (m *model) FolderErrors(folder string) ([]FileError, error) {
m.fmut.RLock()
defer m.fmut.RUnlock()
if err := m.checkFolderRunningLocked(folder); err != nil {
err := m.checkFolderRunningLocked(folder)
runner := m.folderRunners[folder]
m.fmut.RUnlock()
if err != nil {
return nil, err
}
return m.folderRunners[folder].Errors(), nil
return runner.Errors(), nil
}
func (m *model) WatchError(folder string) error {
m.fmut.RLock()
defer m.fmut.RUnlock()
if err := m.checkFolderRunningLocked(folder); err != nil {
return err
err := m.checkFolderRunningLocked(folder)
runner := m.folderRunners[folder]
m.fmut.RUnlock()
if err != nil {
return nil // If the folder isn't running, there's no error to report.
}
return m.folderRunners[folder].WatchError()
return runner.WatchError()
}
func (m *model) Override(folder string) {
@@ -2549,6 +2558,7 @@ func (m *model) CommitConfiguration(from, to config.Configuration) bool {
if toCfg.Paused {
l.Infoln("Pausing", deviceID)
m.closeConn(deviceID, errDevicePaused)
events.Default.Log(events.DevicePaused, map[string]string{"device": deviceID.String()})
} else {
events.Default.Log(events.DeviceResumed, map[string]string{"device": deviceID.String()})

View File

@@ -26,6 +26,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
"github.com/syncthing/syncthing/lib/osutil"
@@ -3303,3 +3304,83 @@ func TestConnCloseOnRestart(t *testing.T) {
t.Fatal("Timed out before connection was closed")
}
}
func TestModTimeWindow(t *testing.T) {
w, fcfg := tmpDefaultWrapper()
tfs := fcfg.Filesystem()
fcfg.RawModTimeWindowS = 2
w.SetFolder(fcfg)
m := setupModel(w)
defer cleanupModelAndRemoveDir(m, tfs.URI())
name := "foo"
fd, err := tfs.Create(name)
must(t, err)
stat, err := fd.Stat()
must(t, err)
modTime := stat.ModTime()
fd.Close()
m.ScanFolders()
v := protocol.Vector{}
v = v.Update(myID.Short())
fi, ok := m.CurrentFolderFile("default", name)
if !ok {
t.Fatal("File missing")
}
if !fi.Version.Equal(v) {
t.Fatalf("Got version %v, expected %v", fi.Version, v)
}
err = tfs.Chtimes(name, time.Now(), modTime.Add(time.Second))
must(t, err)
m.ScanFolders()
// No change due to window
fi, _ = m.CurrentFolderFile("default", name)
if !fi.Version.Equal(v) {
t.Fatalf("Got version %v, expected %v", fi.Version, v)
}
err = tfs.Chtimes(name, time.Now(), modTime.Add(2*time.Second))
must(t, err)
m.ScanFolders()
v = v.Update(myID.Short())
fi, _ = m.CurrentFolderFile("default", name)
if !fi.Version.Equal(v) {
t.Fatalf("Got version %v, expected %v", fi.Version, v)
}
}
func TestDevicePause(t *testing.T) {
sub := events.Default.Subscribe(events.DevicePaused)
defer events.Default.Unsubscribe(sub)
m, _, fcfg := setupModelWithConnection()
defer cleanupModelAndRemoveDir(m, fcfg.Filesystem().URI())
m.pmut.RLock()
closed := m.closed[device1]
m.pmut.RUnlock()
dev := m.cfg.Devices()[device1]
dev.Paused = true
m.cfg.SetDevice(dev)
timeout := time.NewTimer(5 * time.Second)
select {
case <-sub.C():
select {
case <-closed:
case <-timeout.C:
t.Fatal("Timed out before connection was closed")
}
case <-timeout.C:
t.Fatal("Timed out before device was paused")
}
}

View File

@@ -13,6 +13,7 @@ import (
"os"
"path/filepath"
"runtime"
"strconv"
"strings"
"testing"
"time"
@@ -39,7 +40,7 @@ func TestRequestSimple(t *testing.T) {
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
t.Error("More than one index update sent")
default:
}
for _, f := range fs {
@@ -81,7 +82,7 @@ func TestSymlinkTraversalRead(t *testing.T) {
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
t.Error("More than one index update sent")
default:
}
for _, f := range fs {
@@ -187,7 +188,8 @@ func TestRequestCreateTmpSymlink(t *testing.T) {
if f.IsInvalid() {
goodIdx <- struct{}{}
} else {
t.Fatal("Received index with non-invalid temporary file")
t.Error("Received index with non-invalid temporary file")
close(goodIdx)
}
return
}
@@ -365,13 +367,12 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
expected := map[string]struct{}{ign: {}, ignExisting: {}}
// The indexes will normally arrive in one update, but it is possible
// that they arrive in separate ones.
secondIndex := false
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
secondIndex = true
for _, f := range fs {
if _, ok := expected[f.Name]; !ok {
t.Fatalf("Unexpected file %v was updated in index", f.Name)
t.Errorf("Unexpected file %v was updated in index", f.Name)
continue
}
if f.IsInvalid() {
t.Errorf("File %v is still marked as invalid", f.Name)
@@ -393,8 +394,6 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
}
if len(expected) == 0 {
close(done)
} else if secondIndex {
t.Fatal("Didn't receive index updates for all existing files, missing", expected)
}
}
// Make sure pulling doesn't interfere, as index updates are racy and
@@ -410,7 +409,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
select {
case <-time.After(5 * time.Second):
t.Fatalf("timed out before index was received")
t.Fatal("timed out before receiving index updates for all existing files, missing", expected)
case <-done:
}
}
@@ -419,18 +418,22 @@ func TestIssue4841(t *testing.T) {
m, fc, fcfg := setupModelWithConnection()
defer cleanupModelAndRemoveDir(m, fcfg.Filesystem().URI())
received := make(chan protocol.FileInfo)
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ string, fs []protocol.FileInfo) {
received <- fs
}
fc.mut.Unlock()
checkReceived := func(fs []protocol.FileInfo) protocol.FileInfo {
t.Helper()
if len(fs) != 1 {
t.Fatalf("Sent index with %d files, should be 1", len(fs))
}
if fs[0].Name != "foo" {
t.Fatalf(`Sent index with file %v, should be "foo"`, fs[0].Name)
}
received <- fs[0]
return fs[0]
}
fc.mut.Unlock()
// Setup file from remote that was ignored locally
folder := m.folderRunners[defaultFolderConfig.ID].(*sendReceiveFolder)
@@ -440,14 +443,16 @@ func TestIssue4841(t *testing.T) {
LocalFlags: protocol.FlagLocalIgnored,
Version: protocol.Vector{}.Update(device1.Short()),
}})
<-received
checkReceived(<-received)
// Scan without ignore patterns with "foo" not existing locally
if err := m.ScanFolder("default"); err != nil {
t.Fatal("Failed scanning:", err)
}
f := <-received
f := checkReceived(<-received)
if expected := (protocol.Vector{}.Update(myID.Short())); !f.Version.Equal(expected) {
t.Errorf("Got Version == %v, expected %v", f.Version, expected)
}
@@ -462,25 +467,29 @@ func TestRescanIfHaveInvalidContent(t *testing.T) {
must(t, ioutil.WriteFile(filepath.Join(tmpDir, "foo"), payload, 0777))
received := make(chan protocol.FileInfo)
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ string, fs []protocol.FileInfo) {
received <- fs
}
fc.mut.Unlock()
checkReceived := func(fs []protocol.FileInfo) protocol.FileInfo {
t.Helper()
if len(fs) != 1 {
t.Fatalf("Sent index with %d files, should be 1", len(fs))
}
if fs[0].Name != "foo" {
t.Fatalf(`Sent index with file %v, should be "foo"`, fs[0].Name)
}
received <- fs[0]
return fs[0]
}
fc.mut.Unlock()
// Scan without ignore patterns with "foo" not existing locally
if err := m.ScanFolder("default"); err != nil {
t.Fatal("Failed scanning:", err)
}
f := <-received
f := checkReceived(<-received)
if f.Blocks[0].WeakHash != 103547413 {
t.Fatalf("unexpected weak hash: %d != 103547413", f.Blocks[0].WeakHash)
}
@@ -505,7 +514,8 @@ func TestRescanIfHaveInvalidContent(t *testing.T) {
}
select {
case f := <-received:
case fs := <-received:
f := checkReceived(fs)
if f.Blocks[0].WeakHash != 41943361 {
t.Fatalf("unexpected weak hash: %d != 41943361", f.Blocks[0].WeakHash)
}
@@ -597,14 +607,24 @@ func TestRequestSymlinkWindows(t *testing.T) {
m, fc, fcfg := setupModelWithConnection()
defer cleanupModelAndRemoveDir(m, fcfg.Filesystem().URI())
done := make(chan struct{})
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
case <-received:
t.Error("More than one index update sent")
default:
}
received <- fs
}
fc.mut.Unlock()
fc.addFile("link", 0644, protocol.FileInfoTypeSymlink, nil)
fc.sendIndexUpdate()
select {
case fs := <-received:
close(received)
// expected first index
if len(fs) != 1 {
t.Fatalf("Expected just one file in index, got %v", fs)
@@ -616,15 +636,6 @@ func TestRequestSymlinkWindows(t *testing.T) {
if !f.IsInvalid() {
t.Errorf(`File info was not marked as invalid`)
}
close(done)
}
fc.mut.Unlock()
fc.addFile("link", 0644, protocol.FileInfoTypeSymlink, nil)
fc.sendIndexUpdate()
select {
case <-done:
case <-time.After(time.Second):
t.Fatalf("timed out before pull was finished")
}
@@ -666,18 +677,15 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
tmpDir := tfs.URI()
defer cleanupModelAndRemoveDir(m, tfs.URI())
done := make(chan struct{})
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
case <-received:
t.Error("More than one index update sent")
default:
}
if len(fs) != 2 {
t.Fatalf("Received index with %v indexes instead of 2", len(fs))
}
close(done)
received <- fs
}
fc.mut.Unlock()
@@ -693,7 +701,11 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
}
fc.sendIndexUpdate()
select {
case <-done:
case fs := <-received:
close(received)
if len(fs) != 2 {
t.Fatalf("Received index with %v indexes instead of 2", len(fs))
}
case <-time.After(10 * time.Second):
t.Fatal("timed out")
}
@@ -703,12 +715,15 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
}
var gotA, gotB, gotConfl bool
done = make(chan struct{})
bIntermediateVersion := protocol.Vector{}.Update(fc.id.Short()).Update(myID.Short())
bFinalVersion := bIntermediateVersion.Copy().Update(fc.id.Short())
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("Received more index updates than expected")
t.Error("Received more index updates than expected")
return
default:
}
for _, f := range fs {
@@ -722,7 +737,16 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
if gotB {
t.Error("Got more than one index update for", f.Name)
}
gotB = true
if f.Version.Equal(bIntermediateVersion) {
// This index entry might be superseded
// by the final one or sent before it separately.
break
}
if f.Version.Equal(bFinalVersion) {
gotB = true
break
}
t.Errorf("Got unexpected version %v for file %v in index update", f.Version, f.Name)
case strings.HasPrefix(f.Name, "b.sync-conflict-"):
if gotConfl {
t.Error("Got more than one index update for conflicts of", f.Name)
@@ -889,7 +913,7 @@ func TestRequestDeleteChanged(t *testing.T) {
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
t.Error("More than one index update sent")
default:
}
close(done)
@@ -912,7 +936,7 @@ func TestRequestDeleteChanged(t *testing.T) {
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Fatalf("More than one index update sent")
t.Error("More than one index update sent")
default:
}
close(done)
@@ -947,3 +971,46 @@ func TestRequestDeleteChanged(t *testing.T) {
}
}
}
func TestNeedFolderFiles(t *testing.T) {
m, fc, fcfg := setupModelWithConnection()
tfs := fcfg.Filesystem()
tmpDir := tfs.URI()
defer cleanupModelAndRemoveDir(m, tmpDir)
sub := events.Default.Subscribe(events.RemoteIndexUpdated)
defer events.Default.Unsubscribe(sub)
errPreventSync := errors.New("you aren't getting any of this")
fc.mut.Lock()
fc.requestFn = func(string, string, int64, int, []byte, bool) ([]byte, error) {
return nil, errPreventSync
}
fc.mut.Unlock()
data := []byte("foo")
num := 20
for i := 0; i < num; i++ {
fc.addFile(strconv.Itoa(i), 0644, protocol.FileInfoTypeFile, data)
}
fc.sendIndexUpdate()
select {
case <-sub.C():
case <-time.After(5 * time.Second):
t.Fatal("Timed out before receiving index")
}
progress, queued, rest := m.NeedFolderFiles(fcfg.ID, 1, 100)
if got := len(progress) + len(queued) + len(rest); got != num {
t.Errorf("Got %v needed items, expected %v", got, num)
}
exp := 10
for page := 1; page < 3; page++ {
progress, queued, rest := m.NeedFolderFiles(fcfg.ID, page, exp)
if got := len(progress) + len(queued) + len(rest); got != exp {
t.Errorf("Got %v needed items on page %v, expected %v", got, page, exp)
}
}
}

View File

@@ -8,7 +8,6 @@ package model
import (
"io"
"path/filepath"
"time"
"github.com/pkg/errors"
@@ -92,25 +91,16 @@ func (s *sharedPullerState) tempFile() (io.WriterAt, error) {
return lockedWriterAt{&s.mut, s.fd}, nil
}
// Ensure that the parent directory is writable. This is
// osutil.InWritableDir except we need to do more stuff so we duplicate it
// here.
dir := filepath.Dir(s.tempName)
if info, err := s.fs.Stat(dir); err != nil {
s.failLocked(errors.Wrap(err, "ensuring parent dir is writeable"))
if err := inWritableDir(s.tempFileInWritableDir, s.fs, s.tempName, s.ignorePerms); err != nil {
s.failLocked(err)
return nil, err
} else if info.Mode()&0200 == 0 {
err := s.fs.Chmod(dir, 0755)
if !s.ignorePerms && err == nil {
defer func() {
err := s.fs.Chmod(dir, info.Mode()&fs.ModePerm)
if err != nil {
panic(err)
}
}()
}
}
return lockedWriterAt{&s.mut, s.fd}, nil
}
// tempFileInWritableDir should only be called from tempFile.
func (s *sharedPullerState) tempFileInWritableDir(_ string) error {
// The permissions to use for the temporary file should be those of the
// final file, except we need user read & write at minimum. The
// permissions will be set to the final value later, but in the meantime
@@ -140,14 +130,12 @@ func (s *sharedPullerState) tempFile() (io.WriterAt, error) {
// what the umask dictates.
if err := s.fs.Chmod(s.tempName, mode); err != nil {
s.failLocked(errors.Wrap(err, "setting perms on temp file"))
return nil, err
return errors.Wrap(err, "setting perms on temp file")
}
}
fd, err := s.fs.OpenFile(s.tempName, flags, mode)
if err != nil {
s.failLocked(errors.Wrap(err, "opening temp file"))
return nil, err
return errors.Wrap(err, "opening temp file")
}
// Hide the temporary file
@@ -177,16 +165,14 @@ func (s *sharedPullerState) tempFile() (io.WriterAt, error) {
l.Debugln("failed to remove temporary file:", remErr)
}
s.failLocked(err)
return nil, err
return err
}
}
}
// Same fd will be used by all writers
s.fd = fd
return lockedWriterAt{&s.mut, s.fd}, nil
return nil
}
// fail sets the error on the puller state compose of error, and marks the

View File

@@ -8,8 +8,12 @@ package model
import (
"fmt"
"path/filepath"
"sync"
"time"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/fs"
)
type Holdable interface {
@@ -59,3 +63,39 @@ func (d *deadlockDetector) Watch(name string, mut sync.Locker) {
}
}()
}
// inWritableDir calls fn(path), while making sure that the directory
// containing `path` is writable for the duration of the call.
func inWritableDir(fn func(string) error, targetFs fs.Filesystem, path string, ignorePerms bool) error {
dir := filepath.Dir(path)
info, err := targetFs.Stat(dir)
if err != nil {
return err
}
if !info.IsDir() {
return errors.New("Not a directory: " + path)
}
if info.Mode()&0200 == 0 {
// A non-writeable directory (for this user; we assume that's the
// relevant part). Temporarily change the mode so we can delete the
// file or directory inside it.
if err := targetFs.Chmod(dir, 0755); err == nil {
// Chmod succeeded, we should change the permissions back on the way
// out. If we fail we log the error as we have irrevocably messed up
// at this point. :( (The operation we were called to wrap has
// succeeded or failed on its own so returning an error to the
// caller is inappropriate.)
defer func() {
if err := targetFs.Chmod(dir, info.Mode()&fs.ModePerm); err != nil && !fs.IsNotExist(err) {
logFn := l.Warnln
if ignorePerms {
logFn = l.Debugln
}
logFn("Failed to restore directory permissions after gaining write access:", err)
}
}()
}
}
return fn(path)
}

lib/model/utils_test.go (new file, 195 lines)
View File

@@ -0,0 +1,195 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import (
"os"
"runtime"
"testing"
"github.com/syncthing/syncthing/lib/fs"
)
func TestInWriteableDir(t *testing.T) {
dir := createTmpDir()
defer os.RemoveAll(dir)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, dir)
fs.Mkdir("testdata", 0700)
fs.Mkdir("testdata/rw", 0700)
fs.Mkdir("testdata/ro", 0500)
create := func(name string) error {
fd, err := fs.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
// These should succeed
err := inWritableDir(create, fs, "testdata/file", false)
if err != nil {
t.Error("testdata/file:", err)
}
err = inWritableDir(create, fs, "testdata/rw/foo", false)
if err != nil {
t.Error("testdata/rw/foo:", err)
}
err = inWritableDir(fs.Remove, fs, "testdata/rw/foo", false)
if err != nil {
t.Error("testdata/rw/foo:", err)
}
err = inWritableDir(create, fs, "testdata/ro/foo", false)
if err != nil {
t.Error("testdata/ro/foo:", err)
}
err = inWritableDir(fs.Remove, fs, "testdata/ro/foo", false)
if err != nil {
t.Error("testdata/ro/foo:", err)
}
// These should not
err = inWritableDir(create, fs, "testdata/nonexistent/foo", false)
if err == nil {
t.Error("testdata/nonexistent/foo returned nil error")
}
err = inWritableDir(create, fs, "testdata/file/foo", false)
if err == nil {
t.Error("testdata/file/foo returned nil error")
}
}
func TestOSWindowsRemove(t *testing.T) {
// os.Remove should remove read only things on windows
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
dir := createTmpDir()
defer os.RemoveAll(dir)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, dir)
defer fs.Chmod("testdata/windows/ro/readonlynew", 0700)
create := func(name string) error {
fd, err := fs.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
fs.Mkdir("testdata", 0700)
fs.Mkdir("testdata/windows", 0500)
fs.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
fs.Chmod("testdata/windows/ro/readonly", 0500)
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := inWritableDir(fs.Remove, fs, path, false)
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
}
}
func TestOSWindowsRemoveAll(t *testing.T) {
// os.RemoveAll should remove read only things on windows
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
dir := createTmpDir()
defer os.RemoveAll(dir)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, dir)
defer fs.Chmod("testdata/windows/ro/readonlynew", 0700)
create := func(name string) error {
fd, err := fs.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
fs.Mkdir("testdata", 0700)
fs.Mkdir("testdata/windows", 0500)
fs.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
fs.Chmod("testdata/windows/ro/readonly", 0500)
if err := fs.RemoveAll("testdata/windows"); err != nil {
t.Errorf("Unexpected error: %s", err)
}
}
func TestInWritableDirWindowsRename(t *testing.T) {
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
dir := createTmpDir()
defer os.RemoveAll(dir)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, dir)
defer fs.Chmod("testdata/windows/ro/readonlynew", 0700)
create := func(name string) error {
fd, err := fs.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
fs.Mkdir("testdata", 0700)
fs.Mkdir("testdata/windows", 0500)
fs.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
fs.Chmod("testdata/windows/ro/readonly", 0500)
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := fs.Rename(path, path+"new")
if err == nil {
t.Skipf("seem like this test doesn't work here")
return
}
}
rename := func(path string) error {
return fs.Rename(path, path+"new")
}
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := inWritableDir(rename, fs, path, false)
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
_, err = fs.Stat(path + "new")
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
}
}

View File

@@ -19,7 +19,7 @@ func Register(provider DiscoverFunc) {
providers = append(providers, provider)
}
func discoverAll(renewal, timeout time.Duration) map[string]Device {
func discoverAll(renewal, timeout time.Duration, stop chan struct{}) map[string]Device {
wg := &sync.WaitGroup{}
wg.Add(len(providers))
@@ -28,20 +28,32 @@ func discoverAll(renewal, timeout time.Duration) map[string]Device {
for _, discoverFunc := range providers {
go func(f DiscoverFunc) {
defer wg.Done()
for _, dev := range f(renewal, timeout) {
c <- dev
select {
case c <- dev:
case <-stop:
return
}
}
wg.Done()
}(discoverFunc)
}
nats := make(map[string]Device)
go func() {
for dev := range c {
nats[dev.ID()] = dev
defer close(done)
for {
select {
case dev, ok := <-c:
if !ok {
return
}
nats[dev.ID()] = dev
case <-stop:
return
}
}
close(done)
}()
wg.Wait()

View File

@@ -14,17 +14,21 @@ import (
stdsync "sync"
"time"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
)
// Service runs a loop for discovery of IGDs (Internet Gateway Devices) and
// setup/renewal of a port mapping.
type Service struct {
id protocol.DeviceID
cfg config.Wrapper
stop chan struct{}
suture.Service
id protocol.DeviceID
cfg config.Wrapper
mappings []*Mapping
timer *time.Timer
@@ -32,27 +36,28 @@ type Service struct {
}
func NewService(id protocol.DeviceID, cfg config.Wrapper) *Service {
return &Service{
s := &Service{
id: id,
cfg: cfg,
timer: time.NewTimer(0),
mut: sync.NewRWMutex(),
}
s.Service = util.AsService(s.serve)
return s
}
func (s *Service) Serve() {
func (s *Service) serve(stop chan struct{}) {
announce := stdsync.Once{}
s.mut.Lock()
s.timer.Reset(0)
s.stop = make(chan struct{})
s.mut.Unlock()
for {
select {
case <-s.timer.C:
if found := s.process(); found != -1 {
if found := s.process(stop); found != -1 {
announce.Do(func() {
suffix := "s"
if found == 1 {
@@ -61,7 +66,7 @@ func (s *Service) Serve() {
l.Infoln("Detected", found, "NAT service"+suffix)
})
}
case <-s.stop:
case <-stop:
s.timer.Stop()
s.mut.RLock()
for _, mapping := range s.mappings {
@@ -73,7 +78,7 @@ func (s *Service) Serve() {
}
}
func (s *Service) process() int {
func (s *Service) process(stop chan struct{}) int {
// toRenew are mappings which are due for renewal
// toUpdate are the remaining mappings, which will only be updated if one of
// the old IGDs has gone away, or a new IGD has appeared, but only if we
@@ -115,25 +120,19 @@ func (s *Service) process() int {
return -1
}
nats := discoverAll(time.Duration(s.cfg.Options().NATRenewalM)*time.Minute, time.Duration(s.cfg.Options().NATTimeoutS)*time.Second)
nats := discoverAll(time.Duration(s.cfg.Options().NATRenewalM)*time.Minute, time.Duration(s.cfg.Options().NATTimeoutS)*time.Second, stop)
for _, mapping := range toRenew {
s.updateMapping(mapping, nats, true)
s.updateMapping(mapping, nats, true, stop)
}
for _, mapping := range toUpdate {
s.updateMapping(mapping, nats, false)
s.updateMapping(mapping, nats, false, stop)
}
return len(nats)
}
func (s *Service) Stop() {
s.mut.RLock()
close(s.stop)
s.mut.RUnlock()
}
func (s *Service) NewMapping(protocol Protocol, ip net.IP, port int) *Mapping {
mapping := &Mapping{
protocol: protocol,
@@ -178,17 +177,17 @@ func (s *Service) RemoveMapping(mapping *Mapping) {
// acquire mappings for natds which the mapping was unaware of before.
// Optionally takes renew flag which indicates whether or not we should renew
// mappings with existing natds
func (s *Service) updateMapping(mapping *Mapping, nats map[string]Device, renew bool) {
func (s *Service) updateMapping(mapping *Mapping, nats map[string]Device, renew bool, stop chan struct{}) {
var added, removed []Address
renewalTime := time.Duration(s.cfg.Options().NATRenewalM) * time.Minute
mapping.expires = time.Now().Add(renewalTime)
newAdded, newRemoved := s.verifyExistingMappings(mapping, nats, renew)
newAdded, newRemoved := s.verifyExistingMappings(mapping, nats, renew, stop)
added = append(added, newAdded...)
removed = append(removed, newRemoved...)
newAdded, newRemoved = s.acquireNewMappings(mapping, nats)
newAdded, newRemoved = s.acquireNewMappings(mapping, nats, stop)
added = append(added, newAdded...)
removed = append(removed, newRemoved...)
@@ -197,12 +196,18 @@ func (s *Service) updateMapping(mapping *Mapping, nats map[string]Device, renew
}
}
func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Device, renew bool) ([]Address, []Address) {
func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Device, renew bool, stop chan struct{}) ([]Address, []Address) {
var added, removed []Address
leaseTime := time.Duration(s.cfg.Options().NATLeaseM) * time.Minute
for id, address := range mapping.addressMap() {
select {
case <-stop:
return nil, nil
default:
}
// Delete addresses for NATDevice's that do not exist anymore
nat, ok := nats[id]
if !ok {
@@ -220,7 +225,7 @@ func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Devic
l.Debugf("Renewing %s -> %s mapping on %s", mapping, address, id)
addr, err := s.tryNATDevice(nat, mapping.address.Port, address.Port, leaseTime)
addr, err := s.tryNATDevice(nat, mapping.address.Port, address.Port, leaseTime, stop)
if err != nil {
l.Debugf("Failed to renew %s -> mapping on %s", mapping, address, id)
mapping.removeAddress(id)
@@ -242,13 +247,19 @@ func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Devic
return added, removed
}
func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device) ([]Address, []Address) {
func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device, stop chan struct{}) ([]Address, []Address) {
var added, removed []Address
leaseTime := time.Duration(s.cfg.Options().NATLeaseM) * time.Minute
addrMap := mapping.addressMap()
for id, nat := range nats {
select {
case <-stop:
return nil, nil
default:
}
if _, ok := addrMap[id]; ok {
continue
}
@@ -263,7 +274,7 @@ func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device) (
l.Debugf("Acquiring %s mapping on %s", mapping, id)
addr, err := s.tryNATDevice(nat, mapping.address.Port, 0, leaseTime)
addr, err := s.tryNATDevice(nat, mapping.address.Port, 0, leaseTime, stop)
if err != nil {
l.Debugf("Failed to acquire %s mapping on %s", mapping, id)
continue
@@ -280,7 +291,7 @@ func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device) (
// tryNATDevice tries to acquire a port mapping for the given internal address to
// the given external port. If external port is 0, picks a pseudo-random port.
func (s *Service) tryNATDevice(natd Device, intPort, extPort int, leaseTime time.Duration) (Address, error) {
func (s *Service) tryNATDevice(natd Device, intPort, extPort int, leaseTime time.Duration, stop chan struct{}) (Address, error) {
var err error
var port int
@@ -301,6 +312,12 @@ func (s *Service) tryNATDevice(natd Device, intPort, extPort int, leaseTime time
}
for i := 0; i < 10; i++ {
select {
case <-stop:
return Address{}, nil
default:
}
// Then try up to ten random ports.
extPort = 1024 + predictableRand.Intn(65535-1024)
name := fmt.Sprintf("syncthing-%d", extPort)

View File

@@ -8,7 +8,6 @@
package osutil
import (
"errors"
"io"
"path/filepath"
"runtime"
@@ -81,38 +80,6 @@ func Copy(src, dst fs.Filesystem, from, to string) (err error) {
})
}
// InWritableDir calls fn(path), while making sure that the directory
// containing `path` is writable for the duration of the call.
func InWritableDir(fn func(string) error, fs fs.Filesystem, path string) error {
dir := filepath.Dir(path)
info, err := fs.Stat(dir)
if err != nil {
return err
}
if !info.IsDir() {
return errors.New("Not a directory: " + path)
}
if info.Mode()&0200 == 0 {
// A non-writeable directory (for this user; we assume that's the
// relevant part). Temporarily change the mode so we can delete the
// file or directory inside it.
err = fs.Chmod(dir, 0755)
if err == nil {
defer func() {
err = fs.Chmod(dir, info.Mode())
if err != nil {
// We managed to change the permission bits like a
// millisecond ago, so it'd be bizarre if we couldn't
// change it back.
panic(err)
}
}()
}
}
return fn(path)
}
// Tries hard to succeed on various systems by temporarily tweaking directory
// permissions and removing the destination file when necessary.
func withPreparedTarget(filesystem fs.Filesystem, from, to string, f func() error) error {

View File

@@ -18,196 +18,6 @@ import (
"github.com/syncthing/syncthing/lib/osutil"
)
func TestInWriteableDir(t *testing.T) {
err := os.RemoveAll("testdata")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll("testdata")
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, ".")
os.Mkdir("testdata", 0700)
os.Mkdir("testdata/rw", 0700)
os.Mkdir("testdata/ro", 0500)
create := func(name string) error {
fd, err := os.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
// These should succeed
err = osutil.InWritableDir(create, fs, "testdata/file")
if err != nil {
t.Error("testdata/file:", err)
}
err = osutil.InWritableDir(create, fs, "testdata/rw/foo")
if err != nil {
t.Error("testdata/rw/foo:", err)
}
err = osutil.InWritableDir(os.Remove, fs, "testdata/rw/foo")
if err != nil {
t.Error("testdata/rw/foo:", err)
}
err = osutil.InWritableDir(create, fs, "testdata/ro/foo")
if err != nil {
t.Error("testdata/ro/foo:", err)
}
err = osutil.InWritableDir(os.Remove, fs, "testdata/ro/foo")
if err != nil {
t.Error("testdata/ro/foo:", err)
}
// These should not
err = osutil.InWritableDir(create, fs, "testdata/nonexistent/foo")
if err == nil {
t.Error("testdata/nonexistent/foo returned nil error")
}
err = osutil.InWritableDir(create, fs, "testdata/file/foo")
if err == nil {
t.Error("testdata/file/foo returned nil error")
}
}
func TestInWritableDirWindowsRemove(t *testing.T) {
// os.Remove should remove read only things on windows
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
err := os.RemoveAll("testdata")
if err != nil {
t.Fatal(err)
}
defer os.Chmod("testdata/windows/ro/readonlynew", 0700)
defer os.RemoveAll("testdata")
create := func(name string) error {
fd, err := os.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
os.Mkdir("testdata", 0700)
os.Mkdir("testdata/windows", 0500)
os.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
os.Chmod("testdata/windows/ro/readonly", 0500)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, ".")
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := osutil.InWritableDir(os.Remove, fs, path)
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
}
}
func TestInWritableDirWindowsRemoveAll(t *testing.T) {
// os.RemoveAll should remove read only things on windows
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
err := os.RemoveAll("testdata")
if err != nil {
t.Fatal(err)
}
defer os.Chmod("testdata/windows/ro/readonlynew", 0700)
defer os.RemoveAll("testdata")
create := func(name string) error {
fd, err := os.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
os.Mkdir("testdata", 0700)
os.Mkdir("testdata/windows", 0500)
os.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
os.Chmod("testdata/windows/ro/readonly", 0500)
if err := os.RemoveAll("testdata/windows"); err != nil {
t.Errorf("Unexpected error: %s", err)
}
}
func TestInWritableDirWindowsRename(t *testing.T) {
if runtime.GOOS != "windows" {
t.Skipf("Tests not required")
return
}
err := os.RemoveAll("testdata")
if err != nil {
t.Fatal(err)
}
defer os.Chmod("testdata/windows/ro/readonlynew", 0700)
defer os.RemoveAll("testdata")
create := func(name string) error {
fd, err := os.Create(name)
if err != nil {
return err
}
fd.Close()
return nil
}
os.Mkdir("testdata", 0700)
os.Mkdir("testdata/windows", 0500)
os.Mkdir("testdata/windows/ro", 0500)
create("testdata/windows/ro/readonly")
os.Chmod("testdata/windows/ro/readonly", 0500)
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, ".")
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := os.Rename(path, path+"new")
if err == nil {
t.Skipf("seem like this test doesn't work here")
return
}
}
rename := func(path string) error {
return osutil.RenameOrCopy(fs, fs, path, path+"new")
}
for _, path := range []string{"testdata/windows/ro/readonly", "testdata/windows/ro", "testdata/windows"} {
err := osutil.InWritableDir(rename, fs, path)
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
_, err = os.Stat(path + "new")
if err != nil {
t.Errorf("Unexpected error %s: %s", path, err)
}
}
}
func TestIsDeleted(t *testing.T) {
type tc struct {
path string

View File

@@ -158,12 +158,12 @@ func (f FileInfo) IsEmpty() bool {
return f.Version.Counters == nil
}
func (f FileInfo) IsEquivalent(other FileInfo) bool {
return f.isEquivalent(other, false, false, 0)
func (f FileInfo) IsEquivalent(other FileInfo, modTimeWindow time.Duration) bool {
return f.isEquivalent(other, modTimeWindow, false, false, 0)
}
func (f FileInfo) IsEquivalentOptional(other FileInfo, ignorePerms bool, ignoreBlocks bool, ignoreFlags uint32) bool {
return f.isEquivalent(other, ignorePerms, ignoreBlocks, ignoreFlags)
func (f FileInfo) IsEquivalentOptional(other FileInfo, modTimeWindow time.Duration, ignorePerms bool, ignoreBlocks bool, ignoreFlags uint32) bool {
return f.isEquivalent(other, modTimeWindow, ignorePerms, ignoreBlocks, ignoreFlags)
}
// isEquivalent checks that the two file infos represent the same actual file content,
@@ -175,13 +175,13 @@ func (f FileInfo) IsEquivalentOptional(other FileInfo, ignorePerms bool, ignoreB
// - invalid flag
// - permissions, unless they are ignored
// A file is not "equivalent", if it has different
// - modification time
// - modification time (difference bigger than modTimeWindow)
// - size
// - blocks, unless there are no blocks to compare (scanning)
// A symlink is not "equivalent", if it has different
// - target
// A directory does not have anything specific to check.
func (f FileInfo) isEquivalent(other FileInfo, ignorePerms bool, ignoreBlocks bool, ignoreFlags uint32) bool {
func (f FileInfo) isEquivalent(other FileInfo, modTimeWindow time.Duration, ignorePerms bool, ignoreBlocks bool, ignoreFlags uint32) bool {
if f.MustRescan() || other.MustRescan() {
// These are per definition not equivalent because they don't
// represent a valid state, even if both happen to have the
@@ -203,7 +203,7 @@ func (f FileInfo) isEquivalent(other FileInfo, ignorePerms bool, ignoreBlocks bo
switch f.Type {
case FileInfoTypeFile:
return f.Size == other.Size && f.ModTime().Equal(other.ModTime()) && (ignoreBlocks || BlocksEqual(f.Blocks, other.Blocks))
return f.Size == other.Size && ModTimeEqual(f.ModTime(), other.ModTime(), modTimeWindow) && (ignoreBlocks || BlocksEqual(f.Blocks, other.Blocks))
case FileInfoTypeSymlink:
return f.SymlinkTarget == other.SymlinkTarget
case FileInfoTypeDirectory:
@@ -213,6 +213,17 @@ func (f FileInfo) isEquivalent(other FileInfo, ignorePerms bool, ignoreBlocks bo
return false
}
func ModTimeEqual(a, b time.Time, modTimeWindow time.Duration) bool {
if a.Equal(b) {
return true
}
diff := a.Sub(b)
if diff < 0 {
diff *= -1
}
return diff < modTimeWindow
}
func PermsEqual(a, b uint32) bool {
switch runtime.GOOS {
case "windows":
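For illustration, a minimal sketch of how the new modification-time window behaves, assuming the helper is exported from lib/protocol as shown above; the two-second window is an illustrative value (for example coarse FAT timestamps), not taken from this change:

package main

import (
	"fmt"
	"time"

	"github.com/syncthing/syncthing/lib/protocol"
)

func main() {
	a := time.Date(2019, 8, 1, 12, 0, 0, 0, time.UTC)
	b := a.Add(time.Second)

	// With a zero window only exactly equal timestamps match.
	fmt.Println(protocol.ModTimeEqual(a, b, 0)) // false

	// With a 2s window a one-second difference is treated as unchanged.
	fmt.Println(protocol.ModTimeEqual(a, b, 2*time.Second)) // true
}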

View File

@@ -770,10 +770,10 @@ func TestIsEquivalent(t *testing.T) {
continue
}
if res := tc.a.isEquivalent(tc.b, ignPerms, ignBlocks, tc.ignFlags); res != tc.eq {
if res := tc.a.isEquivalent(tc.b, 0, ignPerms, ignBlocks, tc.ignFlags); res != tc.eq {
t.Errorf("Case %d:\na: %v\nb: %v\na.IsEquivalent(b, %v, %v) => %v, expected %v", i, tc.a, tc.b, ignPerms, ignBlocks, res, tc.eq)
}
if res := tc.b.isEquivalent(tc.a, ignPerms, ignBlocks, tc.ignFlags); res != tc.eq {
if res := tc.b.isEquivalent(tc.a, 0, ignPerms, ignBlocks, tc.ignFlags); res != tc.eq {
t.Errorf("Case %d:\na: %v\nb: %v\nb.IsEquivalent(a, %v, %v) => %v, expected %v", i, tc.a, tc.b, ignPerms, ignBlocks, res, tc.eq)
}
}

View File

@@ -69,15 +69,7 @@ func (c *dynamicClient) serve(stop chan struct{}) error {
addrs = append(addrs, ruri.String())
}
defer func() {
c.mut.RLock()
if c.client != nil {
c.client.Stop()
}
c.mut.RUnlock()
}()
for _, addr := range relayAddressesOrder(addrs) {
for _, addr := range relayAddressesOrder(addrs, stop) {
select {
case <-stop:
l.Debugln(c, "stopping")
@@ -104,6 +96,15 @@ func (c *dynamicClient) serve(stop chan struct{}) error {
return fmt.Errorf("could not find a connectable relay")
}
func (c *dynamicClient) Stop() {
c.mut.RLock()
if c.client != nil {
c.client.Stop()
}
c.mut.RUnlock()
c.commonClient.Stop()
}
func (c *dynamicClient) Error() error {
c.mut.RLock()
defer c.mut.RUnlock()
@@ -147,7 +148,7 @@ type dynamicAnnouncement struct {
// the closest 50ms, and puts them in buckets of 50ms latency ranges. Then
// shuffles each bucket, and returns all addresses starting with the ones from
// the lowest latency bucket, ending with the highest latency bucket.
func relayAddressesOrder(input []string) []string {
func relayAddressesOrder(input []string, stop chan struct{}) []string {
buckets := make(map[int][]string)
for _, relay := range input {
@@ -159,6 +160,12 @@ func relayAddressesOrder(input []string) []string {
id := int(latency/time.Millisecond) / 50
buckets[id] = append(buckets[id], relay)
select {
case <-stop:
return nil
default:
}
}
var ids []int
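The bucketing described in the comment above can be sketched roughly as follows. The latency map is hypothetical stand-in data; the real code measures each relay over the network and, after this change, also checks the stop channel between measurements:

package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// fakeLatency is hypothetical stand-in data; the real code measures each
// relay over the network.
var fakeLatency = map[string]time.Duration{
	"relay://a.example": 30 * time.Millisecond,
	"relay://b.example": 40 * time.Millisecond,
	"relay://c.example": 120 * time.Millisecond,
}

// orderByLatencyBuckets groups addresses into 50ms latency buckets, shuffles
// each bucket, and emits the buckets from lowest to highest latency.
func orderByLatencyBuckets(input []string) []string {
	buckets := make(map[int][]string)
	var ids []int
	for _, relay := range input {
		id := int(fakeLatency[relay]/time.Millisecond) / 50
		if _, ok := buckets[id]; !ok {
			ids = append(ids, id)
		}
		buckets[id] = append(buckets[id], relay)
	}
	sort.Ints(ids)
	var out []string
	for _, id := range ids {
		bucket := buckets[id]
		rand.Shuffle(len(bucket), func(i, j int) { bucket[i], bucket[j] = bucket[j], bucket[i] })
		out = append(out, bucket...)
	}
	return out
}

func main() {
	fmt.Println(orderByLatencyBuckets([]string{"relay://a.example", "relay://b.example", "relay://c.example"}))
}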

View File

@@ -54,6 +54,8 @@ type Config struct {
ProgressTickIntervalS int
// Local flags to set on scanned files
LocalFlags uint32
// Modification times that differ by less than this window are considered unchanged.
ModTimeWindow time.Duration
}
type CurrentFiler interface {
@@ -346,7 +348,7 @@ func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileIn
f.RawBlockSize = int32(blockSize)
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.IgnorePerms, true, w.LocalFlags) {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
return nil
}
if curFile.ShouldConflict() {
@@ -379,7 +381,7 @@ func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo,
f.NoPermissions = w.IgnorePerms
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.IgnorePerms, true, w.LocalFlags) {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
return nil
}
if curFile.ShouldConflict() {
@@ -423,7 +425,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, info fs.FileIn
f = w.updateFileInfo(f, curFile)
if hasCurFile {
if curFile.IsEquivalentOptional(f, w.IgnorePerms, true, w.LocalFlags) {
if curFile.IsEquivalentOptional(f, w.ModTimeWindow, w.IgnorePerms, true, w.LocalFlags) {
return nil
}
if curFile.ShouldConflict() {
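A caller-side sketch of the new scanner option; only the field added in this change is set, the remaining Config fields are filled in as before, and the two-second value is illustrative rather than taken from this change:

package main

import (
	"fmt"
	"time"

	"github.com/syncthing/syncthing/lib/scanner"
)

func main() {
	cfg := scanner.Config{
		// ...folder, filesystem, matcher etc. as before...
		ModTimeWindow: 2 * time.Second,
	}
	fmt.Printf("mtime differences below %v are treated as unchanged\n", cfg.ModTimeWindow)
}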

View File

@@ -109,8 +109,8 @@ func New(cfg config.Wrapper, subscriber Subscriber, conn net.PacketConn) (*Servi
}
func (s *Service) Stop() {
s.Service.Stop()
_ = s.stunConn.Close()
s.Service.Stop()
}
func (s *Service) serve(stop chan struct{}) {
@@ -163,7 +163,11 @@ func (s *Service) serve(stop chan struct{}) {
// We failed to contact all provided stun servers or the nat is not punchable.
// Chillout for a while.
time.Sleep(stunRetryInterval)
select {
case <-time.After(stunRetryInterval):
case <-stop:
return
}
}
}
@@ -300,7 +304,7 @@ func (s *Service) String() string {
}
func (s *Service) isCurrentNATTypePunchable() bool {
return s.natType == NATNone || s.natType == NATPortRestricted || s.natType == NATRestricted || s.natType == NATFull
return s.natType == NATNone || s.natType == NATPortRestricted || s.natType == NATRestricted || s.natType == NATFull || s.natType == NATSymmetricUDPFirewall
}
func areDifferent(first, second *Host) bool {
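The change above replaces an unconditional time.Sleep with a wait that can be interrupted by the stop channel. The same pattern in isolation, as a rough sketch with illustrative names:

package main

import (
	"fmt"
	"time"
)

// sleepOrStop waits for the given duration but returns early, reporting
// false, if the stop channel is closed first.
func sleepOrStop(d time.Duration, stop <-chan struct{}) bool {
	select {
	case <-time.After(d):
		return true
	case <-stop:
		return false
	}
}

func main() {
	stop := make(chan struct{})
	go func() {
		time.Sleep(100 * time.Millisecond)
		close(stop)
	}()
	fmt.Println(sleepOrStop(time.Minute, stop)) // false: interrupted by stop
}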

View File

@@ -10,42 +10,38 @@ import (
"encoding/json"
"io"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/util"
)
// The auditService subscribes to events and writes these in JSON format, one
// event per line, to the specified writer.
type auditService struct {
w io.Writer // audit destination
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
stopped chan struct{} // signals stop complete
suture.Service
w io.Writer // audit destination
sub *events.Subscription
}
func newAuditService(w io.Writer) *auditService {
return &auditService{
w: w,
stop: make(chan struct{}),
started: make(chan struct{}),
stopped: make(chan struct{}),
s := &auditService{
w: w,
sub: events.Default.Subscribe(events.AllEvents),
}
s.Service = util.AsService(s.serve)
return s
}
// Serve runs the audit service.
func (s *auditService) Serve() {
defer close(s.stopped)
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
// serve runs the audit service.
func (s *auditService) serve(stop chan struct{}) {
enc := json.NewEncoder(s.w)
// We're ready to start processing events.
close(s.started)
for {
select {
case ev := <-sub.C():
case ev := <-s.sub.C():
enc.Encode(ev)
case <-s.stop:
case <-stop:
return
}
}
@@ -53,17 +49,6 @@ func (s *auditService) Serve() {
// Stop stops the audit service.
func (s *auditService) Stop() {
close(s.stop)
}
// WaitForStart returns once the audit service is ready to receive events, or
// immediately if it's already running.
func (s *auditService) WaitForStart() {
<-s.started
}
// WaitForStop returns once the audit service has stopped.
// (Needed by the tests.)
func (s *auditService) WaitForStop() {
<-s.stopped
s.Service.Stop()
events.Default.Unsubscribe(s.sub)
}
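A hypothetical service in the same shape as the rewritten auditService, using the util.AsService signature shown later in this change set: subscribe at construction, loop in serve, stop via the channel that AsService supplies, and unsubscribe in Stop. The printService name and body are illustrative only:

package main

import (
	"fmt"
	"time"

	"github.com/thejerf/suture"

	"github.com/syncthing/syncthing/lib/events"
	"github.com/syncthing/syncthing/lib/util"
)

type printService struct {
	suture.Service
	sub *events.Subscription
}

func newPrintService() *printService {
	s := &printService{sub: events.Default.Subscribe(events.AllEvents)}
	s.Service = util.AsService(s.serve)
	return s
}

func (s *printService) serve(stop chan struct{}) {
	for {
		select {
		case ev := <-s.sub.C():
			fmt.Println(ev.Type)
		case <-stop:
			return
		}
	}
}

func (s *printService) Stop() {
	s.Service.Stop()
	events.Default.Unsubscribe(s.sub)
}

func main() {
	s := newPrintService()
	go s.Serve()
	events.Default.Log(events.ConfigSaved, "demo")
	time.Sleep(10 * time.Millisecond)
	s.Stop()
}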

View File

@@ -17,13 +17,12 @@ import (
func TestAuditService(t *testing.T) {
buf := new(bytes.Buffer)
service := newAuditService(buf)
// Event sent before start, will not be logged
// Event sent before construction, will not be logged
events.Default.Log(events.ConfigSaved, "the first event")
service := newAuditService(buf)
go service.Serve()
service.WaitForStart()
// Event that should end up in the audit log
events.Default.Log(events.ConfigSaved, "the second event")
@@ -32,7 +31,6 @@ func TestAuditService(t *testing.T) {
time.Sleep(10 * time.Millisecond)
service.Stop()
service.WaitForStop()
// This event should not be logged, since we have stopped.
events.Default.Log(events.ConfigSaved, "the third event")
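The ordering the adjusted test relies on, sketched directly against the events package: anything logged before Subscribe is dropped, anything logged afterwards is buffered in the subscription until it is read (the event type and strings here are illustrative):

package main

import (
	"fmt"
	"time"

	"github.com/syncthing/syncthing/lib/events"
)

func main() {
	events.Default.Log(events.ConfigSaved, "before subscribe") // dropped

	sub := events.Default.Subscribe(events.AllEvents)
	defer events.Default.Unsubscribe(sub)

	events.Default.Log(events.ConfigSaved, "after subscribe") // buffered

	select {
	case ev := <-sub.C():
		fmt.Println(ev.Data) // "after subscribe"
	case <-time.After(time.Second):
		fmt.Println("timed out")
	}
}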

View File

@@ -16,6 +16,8 @@ import (
"sync"
"time"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/api"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
@@ -32,8 +34,6 @@ import (
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/tlsutil"
"github.com/syncthing/syncthing/lib/ur"
"github.com/thejerf/suture"
)
const (
@@ -73,6 +73,7 @@ type App struct {
exitStatus ExitStatus
err error
startOnce sync.Once
stopOnce sync.Once
stop chan struct{}
stopped chan struct{}
}
@@ -100,10 +101,7 @@ func (a *App) Run() ExitStatus {
func (a *App) Start() {
a.startOnce.Do(func() {
if err := a.startup(); err != nil {
close(a.stop)
a.exitStatus = ExitError
a.err = err
close(a.stopped)
a.stopWithErr(ExitError, err)
return
}
go a.run()
@@ -121,12 +119,8 @@ func (a *App) startup() error {
})
a.mainService.ServeBackground()
// Set a log prefix similar to the ID we will have later on, or early log
// lines look ugly.
l.SetPrefix("[start] ")
if a.opts.AuditWriter != nil {
a.startAuditing()
a.mainService.Add(newAuditService(a.opts.AuditWriter))
}
if a.opts.Verbose {
@@ -147,10 +141,9 @@ func (a *App) startup() error {
// report the error if there is one.
osutil.MaximizeOpenFileLimit()
// Figure out our device ID, set it as the log prefix and log it.
a.myID = protocol.NewDeviceID(a.cert.Certificate[0])
l.SetPrefix(fmt.Sprintf("[%s] ", a.myID.String()[:5]))
l.Infoln(build.LongVersion)
l.Infoln("My ID:", a.myID)
// Select SHA256 implementation and report. Affected by the
@@ -403,28 +396,20 @@ func (a *App) Error() error {
// Stop stops the app and sets its exit status to given reason, unless the app
// was already stopped before. In any case it returns the effective exit status.
func (a *App) Stop(stopReason ExitStatus) ExitStatus {
select {
case <-a.stopped:
case <-a.stop:
default:
close(a.stop)
}
<-a.stopped
// ExitSuccess is the default value for a.exitStatus. If another status
// was already set, ignore the stop reason given as argument to Stop.
if a.exitStatus == ExitSuccess {
a.exitStatus = stopReason
}
return a.exitStatus
return a.stopWithErr(stopReason, nil)
}
func (a *App) startAuditing() {
auditService := newAuditService(a.opts.AuditWriter)
a.mainService.Add(auditService)
// We wait for the audit service to fully start before we return, to
// ensure we capture all events from the start.
auditService.WaitForStart()
func (a *App) stopWithErr(stopReason ExitStatus, err error) ExitStatus {
a.stopOnce.Do(func() {
// ExitSuccess is the default value for a.exitStatus. If another status
// was already set, ignore the stop reason given as argument to Stop.
if a.exitStatus == ExitSuccess {
a.exitStatus = stopReason
a.err = err
}
close(a.stop)
})
return a.exitStatus
}
func (a *App) setupGUI(m model.Model, defaultSub, diskSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService connections.Service, urService *ur.Service, errors, systemLog logger.Recorder) error {
@@ -482,15 +467,3 @@ func (e *controller) Shutdown() {
func (e *controller) ExitUpgrading() {
e.Stop(ExitUpgrade)
}
func LoadCertificate(certFile, keyFile string) (tls.Certificate, error) {
return tls.LoadX509KeyPair(certFile, keyFile)
}
func LoadConfig(path string, cert tls.Certificate) (config.Wrapper, error) {
return config.Load(path, protocol.NewDeviceID(cert.Certificate[0]))
}
func OpenGoleveldb(path string) (*db.Lowlevel, error) {
return db.Open(path)
}
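The stopOnce handling introduced above follows the usual sync.Once shutdown idiom: the first Stop records the reason and closes the channel, later calls are no-ops that return the same status. A generic sketch with illustrative names:

package main

import (
	"fmt"
	"sync"
)

// shutdowner records the first stop reason and closes its channel exactly
// once; later Stop calls are no-ops that return the same status.
type shutdowner struct {
	stopOnce sync.Once
	stop     chan struct{}
	status   string
}

func newShutdowner() *shutdowner {
	return &shutdowner{stop: make(chan struct{})}
}

func (s *shutdowner) Stop(reason string) string {
	s.stopOnce.Do(func() {
		s.status = reason
		close(s.stop) // safe: only ever closed once
	})
	return s.status
}

func main() {
	s := newShutdowner()
	fmt.Println(s.Stop("upgrade"))  // upgrade
	fmt.Println(s.Stop("shutdown")) // still upgrade; the second call is ignored
}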

lib/syncthing/utils.go (new file, 126 lines)
View File

@@ -0,0 +1,126 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package syncthing
import (
"crypto/tls"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
)
func LoadOrGenerateCertificate(certFile, keyFile string) (tls.Certificate, error) {
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Infof("Generating ECDSA key and certificate for %s...", tlsDefaultCommonName)
return tlsutil.NewCertificate(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
tlsDefaultCommonName,
)
}
return cert, nil
}
func DefaultConfig(path string, myID protocol.DeviceID, noDefaultFolder bool) (config.Wrapper, error) {
newCfg, err := config.NewWithFreePorts(myID)
if err != nil {
return nil, err
}
if noDefaultFolder {
l.Infoln("We will skip creation of a default folder on first start")
return config.Wrap(path, newCfg), nil
}
newCfg.Folders = append(newCfg.Folders, config.NewFolderConfiguration(myID, "default", "Default Folder", fs.FilesystemTypeBasic, locations.Get(locations.DefFolder)))
l.Infoln("Default folder created and/or linked to new config")
return config.Wrap(path, newCfg), nil
}
// LoadConfigAtStartup loads an existing config. If it doesn't yet exist, it
// creates a default one, without the default folder if noDefaultFolder is true.
// Otherwise it checks the version, and archives and upgrades the config if
// necessary, or returns an error if the version isn't compatible.
func LoadConfigAtStartup(path string, cert tls.Certificate, allowNewerConfig, noDefaultFolder bool) (config.Wrapper, error) {
myID := protocol.NewDeviceID(cert.Certificate[0])
cfg, err := config.Load(path, myID)
if fs.IsNotExist(err) {
cfg, err = DefaultConfig(path, myID, noDefaultFolder)
if err != nil {
return nil, errors.Wrap(err, "failed to generate default config")
}
err = cfg.Save()
if err != nil {
return nil, errors.Wrap(err, "failed to save default config")
}
l.Infof("Default config saved. Edit %s to taste (with Syncthing stopped) or use the GUI", cfg.ConfigPath())
} else if err == io.EOF {
return nil, errors.New("failed to load config: unexpected end of file. Truncated or empty configuration?")
} else if err != nil {
return nil, errors.Wrap(err, "failed to load config")
}
if cfg.RawCopy().OriginalVersion != config.CurrentVersion {
if cfg.RawCopy().OriginalVersion == config.CurrentVersion+1101 {
l.Infof("Now, THAT's what we call a config from the future! Don't worry. As long as you hit that wire with the connecting hook at precisely eighty-eight miles per hour the instant the lightning strikes the tower... everything will be fine.")
}
if cfg.RawCopy().OriginalVersion > config.CurrentVersion && !allowNewerConfig {
return nil, fmt.Errorf("config file version (%d) is newer than supported version (%d). If this is expected, use -allow-newer-config to override.", cfg.RawCopy().OriginalVersion, config.CurrentVersion)
}
err = archiveAndSaveConfig(cfg)
if err != nil {
return nil, errors.Wrap(err, "config archive")
}
}
return cfg, nil
}
func archiveAndSaveConfig(cfg config.Wrapper) error {
// Copy the existing config to an archive copy
archivePath := cfg.ConfigPath() + fmt.Sprintf(".v%d", cfg.RawCopy().OriginalVersion)
l.Infoln("Archiving a copy of old config file format at:", archivePath)
if err := copyFile(cfg.ConfigPath(), archivePath); err != nil {
return err
}
// Do a regular atomic config save
return cfg.Save()
}
func copyFile(src, dst string) error {
bs, err := ioutil.ReadFile(src)
if err != nil {
return err
}
if err := ioutil.WriteFile(dst, bs, 0600); err != nil {
// Attempt to clean up
os.Remove(dst)
return err
}
return nil
}
func OpenGoleveldb(path string) (*db.Lowlevel, error) {
return db.Open(path)
}
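A hypothetical startup sequence using the new helpers; the locations constants are assumed to exist as elsewhere in the code base, and the flags are illustrative. Note that LoadOrGenerateCertificate as listed above reads from the default locations regardless of its arguments:

package main

import (
	"log"

	"github.com/syncthing/syncthing/lib/locations"
	"github.com/syncthing/syncthing/lib/syncthing"
)

func main() {
	cert, err := syncthing.LoadOrGenerateCertificate(
		locations.Get(locations.CertFile),
		locations.Get(locations.KeyFile),
	)
	if err != nil {
		log.Fatal(err)
	}

	// allowNewerConfig=false, noDefaultFolder=false: illustrative choices.
	cfg, err := syncthing.LoadConfigAtStartup(locations.Get(locations.ConfigFile), cert, false, false)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("config loaded from", cfg.ConfigPath())
}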

View File

@@ -9,45 +9,37 @@ package syncthing
import (
"fmt"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/util"
)
// The verbose logging service subscribes to events and prints these in
// verbose format to the console using INFO level.
type verboseService struct {
stop chan struct{} // signals time to stop
started chan struct{} // signals startup complete
suture.Service
sub *events.Subscription
}
func newVerboseService() *verboseService {
return &verboseService{
stop: make(chan struct{}),
started: make(chan struct{}),
s := &verboseService{
sub: events.Default.Subscribe(events.AllEvents),
}
s.Service = util.AsService(s.serve)
return s
}
// Serve runs the verbose logging service.
func (s *verboseService) Serve() {
sub := events.Default.Subscribe(events.AllEvents)
defer events.Default.Unsubscribe(sub)
select {
case <-s.started:
// The started channel has already been closed; do nothing.
default:
// This is the first time around. Indicate that we're ready to start
// processing events.
close(s.started)
}
// serve runs the verbose logging service.
func (s *verboseService) serve(stop chan struct{}) {
for {
select {
case ev := <-sub.C():
case ev := <-s.sub.C():
formatted := s.formatEvent(ev)
if formatted != "" {
l.Verboseln(formatted)
}
case <-s.stop:
case <-stop:
return
}
}
@@ -55,13 +47,9 @@ func (s *verboseService) Serve() {
// Stop stops the verbose logging service.
func (s *verboseService) Stop() {
close(s.stop)
}
s.Service.Stop()
events.Default.Unsubscribe(s.sub)
// WaitForStart returns once the verbose logging service is ready to receive
// events, or immediately if it's already running.
func (s *verboseService) WaitForStart() {
<-s.started
}
func (s *verboseService) formatEvent(ev events.Event) string {

View File

@@ -86,10 +86,6 @@ func SecureDefault() *tls.Config {
return &tls.Config{
// TLS 1.2 is the minimum we accept
MinVersion: tls.VersionTLS12,
// We want the longer curves at the front, because that's more
// secure (so the web tells me, don't ask me to explain the
// details).
CurvePreferences: []tls.CurveID{tls.CurveP521, tls.CurveP384, tls.CurveP256},
// The cipher suite lists built above. These are ignored in TLS 1.3.
CipherSuites: cs,
// We've put some thought into this choice and would like it to
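With CurvePreferences removed, the nil field means crypto/tls applies its own default curve ordering (X25519 and P-256 ahead of the larger NIST curves in recent Go releases). A minimal sketch of the resulting behaviour:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// CurvePreferences deliberately left unset; the standard library
	// default applies, which is the intent of the change above.
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
	}
	fmt.Println(cfg.CurvePreferences == nil) // true: defaults apply
}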

View File

@@ -187,6 +187,7 @@ func AsService(fn func(stop chan struct{})) suture.Service {
type ServiceWithError interface {
suture.Service
Error() error
SetError(error)
}
// AsServiceWithError does the same as AsService, except that it keeps track
@@ -244,3 +245,9 @@ func (s *service) Error() error {
defer s.mut.Unlock()
return s.err
}
func (s *service) SetError(err error) {
s.mut.Lock()
s.err = err
s.mut.Unlock()
}
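A hypothetical stand-in implementing the extended interface (suture.Service plus Error and SetError), showing why SetError is useful: supervising code can record a failure on a wrapped service without knowing its concrete type. Names and the error text are illustrative:

package main

import (
	"errors"
	"fmt"
	"sync"

	"github.com/thejerf/suture"
)

// errService satisfies suture.Service (embedded) and adds the Error/SetError
// pair, mirroring the shape of util.ServiceWithError.
type errService struct {
	suture.Service
	mut sync.Mutex
	err error
}

func (s *errService) Error() error {
	s.mut.Lock()
	defer s.mut.Unlock()
	return s.err
}

func (s *errService) SetError(err error) {
	s.mut.Lock()
	s.err = err
	s.mut.Unlock()
}

func main() {
	s := &errService{}
	s.SetError(errors.New("listener gave up"))
	fmt.Println(s.Error()) // the supervisor can now report this failure
}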