Compare commits

..

50 Commits

Author SHA1 Message Date
Audrius Butkevicius
f0f79a3e3e lib/scanner: Use standard adler32 when we don't need rolling (#5556)
* lib/scanner: Use standard adler32 when we don't need rolling

Seems the rolling adler32 implementation is super slow when executed on large blocks, even though I can't explain why (see the sketch after this commit entry).

BenchmarkFind1MFile-16    				     100	  18991667 ns/op	  55.21 MB/s	  398844 B/op	      20 allocs/op
BenchmarkBlock/adler32-131072/#00-16     		     200	   9726519 ns/op	1078.06 MB/s	 2654936 B/op	     163 allocs/op
BenchmarkBlock/bozo32-131072/#00-16      		      20	  73435540 ns/op	 142.79 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash32-131072/#00-16   		      20	  61482005 ns/op	 170.55 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/buzhash64-131072/#00-16   		      20	  61673660 ns/op	 170.02 MB/s	 2654928 B/op	     163 allocs/op
BenchmarkBlock/vanilla-adler32-131072/#00-16         	     300	   4377307 ns/op	2395.48 MB/s	 2654935 B/op	     163 allocs/op
BenchmarkBlock/adler32-16777216/#00-16               	       2	 544010100 ns/op	  19.27 MB/s	   65624 B/op	       5 allocs/op
BenchmarkBlock/bozo32-16777216/#00-16                	       1	4678108500 ns/op	   2.24 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash32-16777216/#00-16             	       1	3880370700 ns/op	   2.70 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/buzhash64-16777216/#00-16             	       1	3875911700 ns/op	   2.71 MB/s	51970144 B/op	      24 allocs/op
BenchmarkBlock/vanilla-adler32-16777216/#00-16       	     300	   4010279 ns/op	2614.72 MB/s	   65624 B/op	       5 allocs/op
BenchmarkRoll/adler32-131072/#00-16                  	    2000	    974279 ns/op	 134.53 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/bozo32-131072/#00-16                   	    2000	    791770 ns/op	 165.54 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-131072/#00-16                	    2000	    917409 ns/op	 142.87 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-131072/#00-16                	    2000	    881125 ns/op	 148.76 MB/s	     270 B/op	       0 allocs/op
BenchmarkRoll/adler32-16777216/#00-16                	      10	 124000400 ns/op	 135.30 MB/s	 7548937 B/op	       0 allocs/op
BenchmarkRoll/bozo32-16777216/#00-16                 	      10	 118008080 ns/op	 142.17 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash32-16777216/#00-16              	      10	 126794440 ns/op	 132.32 MB/s	 7548928 B/op	       0 allocs/op
BenchmarkRoll/buzhash64-16777216/#00-16              	      10	 126631960 ns/op	 132.49 MB/s	 7548928 B/op	       0 allocs/op

* Update benchmark_test.go

* gofmt

* fixup benchmark
2019-02-25 19:25:08 +01:00
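The numbers above compare per-block weak-hash throughput; the "vanilla-adler32" rows are the standard library. As a minimal illustrative sketch (not the repository's benchmark_test.go), the snippet below shows that "vanilla" case: hashing each block with one bulk call to hash/adler32, which is what this commit switches lib/scanner to when no rolling window is needed. The block size and input data are made up for the example.

package main

import (
	"fmt"
	"hash/adler32"
)

// blockWeakHashes computes one adler32 weak hash per fixed-size block using the
// standard library, processing each block in a single bulk call rather than
// rolling byte by byte.
func blockWeakHashes(data []byte, blockSize int) []uint32 {
	var hashes []uint32
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		hashes = append(hashes, adler32.Checksum(data[off:end]))
	}
	return hashes
}

func main() {
	data := make([]byte, 1<<20)              // 1 MiB stand-in for file content
	hashes := blockWeakHashes(data, 128<<10) // 128 KiB blocks, as in the benchmark names
	fmt.Printf("%d blocks, first weak hash %08x\n", len(hashes), hashes[0])
}
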
Simon Frei
4299af1c63 lib/config, lib/model: Use path from locations to check disk space for db (#5525) 2019-02-12 12:25:11 +00:00
Simon Frei
d85ef949be lib/model: Introduce setupModel test utility (#5524) 2019-02-12 12:18:13 +00:00
Audrius Butkevicius
dc929946fe all: Use new reflect based CLI (#5487) 2019-02-12 07:58:24 +01:00
Simon Frei
7bac927ac8 lib/model: Use functions to generate config (#5513) 2019-02-12 07:50:07 +01:00
Matt Robenolt
93b4597d1a gui: Localize items counts (#5520) 2019-02-12 07:49:14 +01:00
Evgeny Kuznetsov
0928628e7b etc: Add keywords to .desktop files (fixes #5365) (#5521) 2019-02-11 14:33:05 +01:00
Audrius Butkevicius
c5a79bdfe6 Fix builds on Windows without CGO (#5518) 2019-02-10 15:58:29 +01:00
Jakob Borg
04fdafa280 lib/db, lib/model: Remove dead code (#5517) 2019-02-08 16:42:58 +01:00
Jakob Borg
cb49136269 gui: Add missing translation string (fixes #5515) 2019-02-07 07:30:22 +01:00
Simon Frei
2f415d8f09 lib/model: Don't use LocalDeviceID as normal id in tests (#5512) 2019-02-06 09:32:03 +01:00
Jakob Borg
b076031bfa gui, man, authors: Update docs, translations, and contributors 2019-02-06 07:45:24 +01:00
Simon Frei
e538797ce1 lib/model: Improve TestIssue5063 (#5509)
* lib/model: Improve TestIssue5063

* add not that helpful issue comment
2019-02-05 23:07:21 +00:00
Simon Frei
5df8219bcb lib/model: Add progressEmitter to supervisor (model) (#5510) 2019-02-05 18:02:36 +00:00
Simon Frei
af4fb97538 lib/model: Fail test instead of panic due to closing channel twice (#5508) 2019-02-05 18:01:56 +00:00
Simon Frei
5d9d87f770 lib/model: Helperize test os and remove error return value (#5507) 2019-02-05 18:01:05 +00:00
Jakob Borg
c75cfcfd06 gui: Enable large blocks by default, add to config dialog (#5405)
Also moves a couple of </div> that were out of balance and reindents
accordingly, so try a whitespace-insensitive diff when reviewing...
2019-02-02 13:06:01 +01:00
Jakob Borg
6cdd1c5158 cmd/syncthing: Fixup previous commit 2019-02-02 12:54:26 +01:00
Simon Frei
41d037da1f build: Remove outdated & non-functional setup command (fixes #5454) (#5455) 2019-02-02 12:47:46 +01:00
Simon Frei
82afe73a9a cmd/syncthing, lib/config: Update default config creation (#5492)
Also remove dead code in config.Wrapper.
2019-02-02 12:43:57 +01:00
Jakob Borg
6452e16f15 golangci: Add config file 2019-02-02 12:34:57 +01:00
Iskander (Alex) Sharipov
ca47b4c218 cmd/syncthing: Correct strings.HasPrefix args order (#5498) 2019-02-02 12:18:19 +01:00
Jakob Borg
c2ddc83509 all: Revert the underscore sillyness 2019-02-02 12:16:27 +01:00
Jakob Borg
9fd270d78e all: A few more interesting linter fixes (#5502)
A couple of minor bugs and simplifications
2019-02-02 12:09:07 +01:00
Jakob Borg
0b2cabbc31 all: Even more boring linter fixes (#5501) 2019-02-02 11:45:17 +01:00
Jakob Borg
df5c1eaf01 all: Bunch of more linter fixes (#5500) 2019-02-02 11:02:28 +01:00
Jakob Borg
2111386ee4 all: Fix some linter errors (#5499)
I'm working through linter complaints; these are some fixes. Broad
categories:

1) Ignore errors where we can ignore errors: add the "_ = ..." construct
(see the sketch after this commit entry). You can argue that this is
annoying noise, but apart from silencing the linter it *does* serve the
purpose of highlighting that an error is being ignored. I think this is
OK, because the linter highlighted some error cases I wasn't aware of
(starting CPU profiles, for example).

2) Untyped constants where we thought we had set the type.

3) A real bug where we ineffectually assigned to a shadowed err.

4) Some dead code removed.

There'll be more of these, because not all packages are fixed, but the
diff was already large enough.
2019-02-02 10:11:42 +01:00
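Category 1 above refers to explicitly discarding an error with a blank assignment. A generic sketch of the construct (not code from this changeset), using the CPU-profile case mentioned in the message:

package main

import (
	"fmt"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.pprof")
	if err != nil {
		fmt.Fprintln(os.Stderr, "creating profile:", err)
		return
	}

	// "_ =" explicitly discards the error from StartCPUProfile (it can fail, e.g.
	// if profiling is already running); this silences errcheck-style linters while
	// documenting that the error is ignored on purpose.
	_ = pprof.StartCPUProfile(f)

	// ... work being profiled ...

	pprof.StopCPUProfile()

	// Same idea on cleanup: a failed Close of the profile file is deliberately ignored.
	_ = f.Close()
}
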
Simon Frei
583172dc8d lib/db: Fix race in NamespacedKV (#5496) 2019-02-01 09:54:21 +01:00
Jakob Borg
1529563332 docker: Build outside GOPATH (fixes #5495) 2019-01-31 20:38:33 +01:00
Simon Frei
5605877625 cmd/syncthing: Pass SIGTERM on in monitor (fixes #5493) (#5494) 2019-01-31 18:35:20 +01:00
Simon Frei
7236d56731 lib/model: In tests disable watching for changes by default (fixes #5246) (#5485) 2019-01-30 16:38:10 +01:00
Jakob Borg
47d68a0aec gui, man, authors: Update docs, translations, and contributors 2019-01-30 07:45:27 +01:00
Simon Frei
657be162dd test, lib/rc: Integration test fixes and polish (#5488) 2019-01-29 16:59:00 +01:00
Simon Frei
8815ef922b mod: Update dependencies and tidy (fixes #5311) (#5486) 2019-01-28 22:45:56 +01:00
Simon Frei
79d109a386 lib/config: Add omitempty to DeprecatedMinHomeDiskFreePct (fixes #5482) (#5484) 2019-01-28 11:46:28 +01:00
Jakob Borg
75dcff0a0e all: Copy owner/group from parent (fixes #5445) (#5479)
This adds a folder option "CopyOwnershipFromParent" which, when set,
makes Syncthing attempt to retain the owner/group information when
syncing files. Specifically, at the finisher stage we look at the parent
dir to get owner/group and then attempt a Lchown call on the temp file.
For this to succeed Syncthing must be running with the appropriate
permissions. On Linux this is CAP_FOWNER, which can be granted by the
service manager on startup or set on the binary in the filesystem. Other
operating systems do other things, but often it's not required to run as
full "root". On Windows this patch does nothing - ownership works
differently there and is generally less of a deal, as permissions are
inherited as ACLs anyway.

There are unit tests on the Lchown functionality, which requires the
above permissions to run. There is also a unit test on the folder which
uses the fake filesystem and hence does not need special permissions.
2019-01-25 09:52:21 +01:00
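As a rough sketch of the mechanism described above (Unix-only, and not the actual finisher code, which goes through Syncthing's fs abstraction): read the owner and group of the target's parent directory and apply them to the newly written file with Lchown. The function and path names here are invented for the example.

//go:build linux || darwin || freebsd

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// copyOwnershipFromParent applies the owner/group of path's parent directory to
// path itself. Assigning a foreign owner needs elevated privileges (the
// capability noted in the commit message, or running as root).
func copyOwnershipFromParent(path string) error {
	info, err := os.Lstat(filepath.Dir(path))
	if err != nil {
		return err
	}
	st, ok := info.Sys().(*syscall.Stat_t)
	if !ok {
		return fmt.Errorf("no unix stat info for %s", filepath.Dir(path))
	}
	// Lchown rather than Chown so a symlink at the target path is never followed.
	return os.Lchown(path, int(st.Uid), int(st.Gid))
}

func main() {
	if err := copyOwnershipFromParent("/tmp/example-tempfile"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
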
Jakob Borg
0e07f6bef4 vendor: rm -rf vendor (#5478)
This removes our vendored dependencies. They provide no value for our
own build or development processes. For our source releases, the build
job can accomplish the same thing by a "go mod vendor" to recreate the
vendor dir (from the cryptographically verified dependencies).
2019-01-24 09:03:00 +01:00
Simon Frei
a45ba70467 lib/model: Improve errors while pulling (#5474) 2019-01-24 08:18:55 +01:00
Simon Frei
42bd42df5a lib/db: Do all update operations on a single item at once (#5441)
To do so the BlockMap struct has been removed. It behaves like any other prefixed
part of the database, but was not integrated in the recent keyer refactor. Now
the database is only flushed when files are in a consistent state.
2019-01-23 10:22:33 +01:00
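The consistency point above, flushing the database only when a file's records are complete, is what a write batch provides. A generic goleveldb illustration (the key layout below is invented for the example, not Syncthing's actual keyer schema):

package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("example.db", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// All updates describing one file go into a single batch, so the stored state
	// never shows, say, a new device entry next to a stale global version.
	batch := new(leveldb.Batch)
	batch.Put([]byte("device/folder1/file.txt"), []byte("version=5"))
	batch.Put([]byte("global/folder1/file.txt"), []byte("version=5"))
	batch.Delete([]byte("block/oldhash/folder1/file.txt"))

	// One Write applies the whole batch atomically.
	if err := db.Write(batch, nil); err != nil {
		log.Fatal(err)
	}
}
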
Jakob Borg
6421693cce gui, man, authors: Update docs, translations, and contributors 2019-01-23 07:45:23 +01:00
Simon Frei
a371b15398 lib/db: Various polish (#5471)
naming: buf -> keyBuf
dedup: use getFileTrunc
manually inline insertFile
2019-01-20 10:24:39 +01:00
Jakob Borg
29e4b417f2 vendor: Update vendor dir from a80c0fda 2019-01-20 08:50:40 +01:00
Simon Frei
00fa77dd47 lib/db: Consistent use of buffers (#5470) 2019-01-20 08:47:20 +01:00
Simon Frei
df4d754197 lib/db: Minor polish (#5469) 2019-01-19 20:26:46 +01:00
Simon Frei
f3d735c56a lib/db: Fix, optimize and extend benchmarks (#5467) 2019-01-19 20:24:44 +01:00
Simon Frei
1d99db9bc6 lib/db: Deduplicate getFile* code (#5468) 2019-01-19 20:21:58 +01:00
Audrius Butkevicius
96bd691f55 lib/fs: Skip some tests on OpenBSD (fixes #5077) (#5466) 2019-01-19 08:28:57 +01:00
Simon Frei
22e133cce6 lib/db: Deduplicate comparing old and new items (#5465) 2019-01-18 21:19:56 +00:00
Audrius Butkevicius
a80c0fdae8 vendor: Update minio/sha256-simd 2019-01-18 20:30:40 +00:00
Simon Frei
1f87b874af lib/db: Add "dirty" function terminology to getGlobal (ref #5462) (#5463) 2019-01-18 13:01:39 +01:00
1402 changed files with 2458 additions and 471172 deletions

5
.golangci.yml Normal file
View File

@@ -0,0 +1,5 @@
service:
golangci-lint-version: 1.13.x
prepare:
- go run build.go assets
- GO111MODULE=on go mod vendor

View File

@@ -83,6 +83,7 @@ Heiko Zuerker (Smiley73) <heiko@zuerker.org>
Hugo Locurcio <hugo.locurcio@hugo.pro>
Iain Barnett <iainspeed@gmail.com>
Ian Johnson (anonymouse64) <ian.johnson@canonical.com> <person.uwsome@gmail.com>
Iskander (Alex) Sharipov <quasilyte@gmail.com>
Jaakko Hannikainen (jgke) <jgke@jgke.fi>
Jacek Szafarkiewicz (hadogenes) <szafar@linux.pl>
Jake Peterson (acogdev) <jake@acogdev.com>

View File

@@ -1,6 +1,6 @@
FROM golang:1.11 AS builder
WORKDIR /go/src/github.com/syncthing/syncthing
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
@@ -16,7 +16,7 @@ VOLUME ["/var/syncthing"]
RUN apk add --no-cache ca-certificates su-exec
COPY --from=builder /go/src/github.com/syncthing/syncthing/syncthing /bin/syncthing
COPY --from=builder /src/syncthing /bin/syncthing
ENV PUID=1000 PGID=1000

View File

@@ -34,21 +34,22 @@ import (
)
var (
versionRe = regexp.MustCompile(`-[0-9]{1,3}-g[0-9a-f]{5,10}`)
goarch string
goos string
noupgrade bool
version string
goCmd string
goVersion float64
race bool
debug = os.Getenv("BUILDDEBUG") != ""
extraTags string
installSuffix string
pkgdir string
cc string
debugBinary bool
timeout = "120s"
versionRe = regexp.MustCompile(`-[0-9]{1,3}-g[0-9a-f]{5,10}`)
goarch string
goos string
noupgrade bool
version string
goCmd string
goVersion float64
race bool
debug = os.Getenv("BUILDDEBUG") != ""
extraTags string
installSuffix string
pkgdir string
cc string
debugBinary bool
timeout = "120s"
gogoProtoVersion = "v1.2.0"
)
type target struct {
@@ -198,7 +199,7 @@ type dependencyRepo struct {
}
var dependencyRepos = []dependencyRepo{
{path: "protobuf", repo: "https://github.com/gogo/protobuf.git", commit: "v1.2.0"},
{path: "protobuf", repo: "https://github.com/gogo/protobuf.git", commit: gogoProtoVersion},
{path: "xdr", repo: "https://github.com/calmh/xdr.git", commit: "08e072f9cb16"},
}
@@ -253,9 +254,6 @@ func main() {
func runCommand(cmd string, target target) {
switch cmd {
case "setup":
setup()
case "install":
var tags []string
if noupgrade {
@@ -335,39 +333,12 @@ func parseFlags() {
flag.Parse()
}
func setup() {
packages := []string{
"github.com/alecthomas/gometalinter",
"github.com/AlekSi/gocov-xml",
"github.com/axw/gocov/gocov",
"github.com/FiloSottile/gvt",
"golang.org/x/lint/golint",
"github.com/gordonklaus/ineffassign",
"github.com/mdempsky/unconvert",
"github.com/mitchellh/go-wordwrap",
"github.com/opennota/check/cmd/...",
"github.com/tsenart/deadcode",
"golang.org/x/net/html",
"golang.org/x/tools/cmd/cover",
"honnef.co/go/tools/cmd/gosimple",
"honnef.co/go/tools/cmd/staticcheck",
"honnef.co/go/tools/cmd/unused",
"github.com/josephspurrier/goversioninfo",
}
for _, pkg := range packages {
fmt.Println(pkg)
runPrint(goCmd, "get", "-u", pkg)
}
runPrint(goCmd, "install", "-v", "github.com/syncthing/syncthing/vendor/github.com/gogo/protobuf/protoc-gen-gogofast")
}
func test(pkgs ...string) {
lazyRebuildAssets()
useRace := runtime.GOARCH == "amd64"
switch runtime.GOOS {
case "darwin", "linux", "freebsd", "windows":
case "darwin", "linux", "freebsd": // , "windows": # See https://github.com/golang/go/issues/27089
default:
useRace = false
}
@@ -761,6 +732,7 @@ func shouldRebuildAssets(target, srcdir string) bool {
}
func proto() {
runPrint(goCmd, "get", fmt.Sprintf("github.com/gogo/protobuf/protoc-gen-gogofast@%v", gogoProtoVersion))
os.MkdirAll("repos", 0755)
for _, dep := range dependencyRepos {
path := filepath.Join("repos", dep.path)
@@ -796,10 +768,10 @@ func ldflags() string {
b := new(bytes.Buffer)
b.WriteString("-w")
fmt.Fprintf(b, " -X main.Version%c%s", sep, version)
fmt.Fprintf(b, " -X main.BuildStamp%c%d", sep, buildStamp())
fmt.Fprintf(b, " -X main.BuildUser%c%s", sep, buildUser())
fmt.Fprintf(b, " -X main.BuildHost%c%s", sep, buildHost())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Version%c%s", sep, version)
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Stamp%c%d", sep, buildStamp())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.User%c%s", sep, buildUser())
fmt.Fprintf(b, " -X github.com/syncthing/syncthing/lib/build.Host%c%s", sep, buildHost())
return b.String()
}

View File

@@ -1,19 +0,0 @@
Copyright (C) 2014 Audrius Butkevičius
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,115 +1,95 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
type APIClient struct {
httpClient http.Client
endpoint string
apikey string
username string
password string
id string
csrf string
http.Client
cfg config.GUIConfiguration
apikey string
}
var instance *APIClient
func getClient(c *cli.Context) *APIClient {
if instance != nil {
return instance
}
endpoint := c.GlobalString("endpoint")
if !strings.HasPrefix(endpoint, "http") {
endpoint = "http://" + endpoint
}
func getClient(cfg config.GUIConfiguration) *APIClient {
httpClient := http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
InsecureSkipVerify: c.GlobalBool("insecure"),
InsecureSkipVerify: true,
},
DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
return net.Dial(cfg.Network(), cfg.Address())
},
},
}
client := APIClient{
httpClient: httpClient,
endpoint: endpoint,
apikey: c.GlobalString("apikey"),
username: c.GlobalString("username"),
password: c.GlobalString("password"),
return &APIClient{
Client: httpClient,
cfg: cfg,
apikey: cfg.APIKey,
}
if client.apikey == "" {
request, err := http.NewRequest("GET", client.endpoint, nil)
die(err)
response := client.handleRequest(request)
client.id = response.Header.Get("X-Syncthing-ID")
if client.id == "" {
die("Failed to get device ID")
}
for _, item := range response.Cookies() {
if item.Name == "CSRF-Token-"+client.id[:5] {
client.csrf = item.Value
goto csrffound
}
}
die("Failed to get CSRF token")
csrffound:
}
instance = &client
return &client
}
func (client *APIClient) handleRequest(request *http.Request) *http.Response {
if client.apikey != "" {
request.Header.Set("X-API-Key", client.apikey)
func (c *APIClient) Endpoint() string {
if c.cfg.Network() == "unix" {
return "http://unix/"
}
if client.username != "" || client.password != "" {
request.SetBasicAuth(client.username, client.password)
}
if client.csrf != "" {
request.Header.Set("X-CSRF-Token-"+client.id[:5], client.csrf)
url := c.cfg.URL()
if !strings.HasSuffix(url, "/") {
url += "/"
}
return url
}
response, err := client.httpClient.Do(request)
die(err)
func (c *APIClient) Do(req *http.Request) (*http.Response, error) {
req.Header.Set("X-API-Key", c.apikey)
resp, err := c.Client.Do(req)
if err != nil {
return nil, err
}
return resp, checkResponse(resp)
}
func (c *APIClient) Get(url string) (*http.Response, error) {
request, err := http.NewRequest("GET", c.Endpoint()+"rest/"+url, nil)
if err != nil {
return nil, err
}
return c.Do(request)
}
func (c *APIClient) Post(url, body string) (*http.Response, error) {
request, err := http.NewRequest("POST", c.Endpoint()+"rest/"+url, bytes.NewBufferString(body))
if err != nil {
return nil, err
}
return c.Do(request)
}
func checkResponse(response *http.Response) error {
if response.StatusCode == 404 {
die("Invalid endpoint or API call")
} else if response.StatusCode == 401 {
die("Invalid username or password")
return fmt.Errorf("Invalid endpoint or API call")
} else if response.StatusCode == 403 {
if client.apikey == "" {
die("Invalid CSRF token")
}
die("Invalid API key")
return fmt.Errorf("Invalid API key")
} else if response.StatusCode != 200 {
body := strings.TrimSpace(string(responseToBArray(response)))
if body != "" {
die(body)
data, err := responseToBArray(response)
if err != nil {
return err
}
die("Unknown HTTP status returned: " + response.Status)
body := strings.TrimSpace(string(data))
return fmt.Errorf("Unexpected HTTP status returned: %s\n%s", response.Status, body)
}
return response
}
func httpGet(c *cli.Context, url string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("GET", client.endpoint+"/rest/"+url, nil)
die(err)
return client.handleRequest(request)
}
func httpPost(c *cli.Context, url string, body string) *http.Response {
client := getClient(c)
request, err := http.NewRequest("POST", client.endpoint+"/rest/"+url, bytes.NewBufferString(body))
die(err)
return client.handleRequest(request)
return nil
}

View File

@@ -1,188 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "devices",
HideHelp: true,
Usage: "Device command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List registered devices",
Requires: &cli.Requires{},
Action: devicesList,
},
{
Name: "add",
Usage: "Add a new device",
Requires: &cli.Requires{"device id", "device name?"},
Action: devicesAdd,
},
{
Name: "remove",
Usage: "Remove an existing device",
Requires: &cli.Requires{"device id"},
Action: devicesRemove,
},
{
Name: "get",
Usage: "Get a property of a device",
Requires: &cli.Requires{"device id", "property"},
Action: devicesGet,
},
{
Name: "set",
Usage: "Set a property of a device",
Requires: &cli.Requires{"device id", "property", "value..."},
Action: devicesSet,
},
},
})
}
func devicesList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, device := range cfg.Devices {
if !first {
fmt.Fprintln(writer)
}
fmt.Fprintln(writer, "ID:\t", device.DeviceID, "\t")
fmt.Fprintln(writer, "Name:\t", device.Name, "\t(name)")
fmt.Fprintln(writer, "Address:\t", strings.Join(device.Addresses, " "), "\t(address)")
fmt.Fprintln(writer, "Compression:\t", device.Compression, "\t(compression)")
fmt.Fprintln(writer, "Certificate name:\t", device.CertName, "\t(certname)")
fmt.Fprintln(writer, "Introducer:\t", device.Introducer, "\t(introducer)")
first = false
}
writer.Flush()
}
func devicesAdd(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
newDevice := config.DeviceConfiguration{
DeviceID: id,
Name: nid,
Addresses: []string{"dynamic"},
}
if len(c.Args()) > 1 {
newDevice.Name = c.Args()[1]
}
if len(c.Args()) > 2 {
addresses := c.Args()[2:]
for _, item := range addresses {
if item == "dynamic" {
continue
}
validAddress(item)
}
newDevice.Addresses = addresses
}
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID == id {
die("Device " + nid + " already exists")
}
}
cfg.Devices = append(cfg.Devices, newDevice)
setConfig(c, cfg)
}
func devicesRemove(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
if nid == getMyID(c) {
die("Cannot remove yourself")
}
cfg := getConfig(c)
for i, device := range cfg.Devices {
if device.DeviceID == id {
last := len(cfg.Devices) - 1
cfg.Devices[i] = cfg.Devices[last]
cfg.Devices = cfg.Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + nid + " not found")
}
func devicesGet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
cfg := getConfig(c)
for _, device := range cfg.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
fmt.Println(device.Name)
case "address":
fmt.Println(strings.Join(device.Addresses, "\n"))
case "compression":
fmt.Println(device.Compression.String())
case "certname":
fmt.Println(device.CertName)
case "introducer":
fmt.Println(device.Introducer)
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
return
}
die("Device " + nid + " not found")
}
func devicesSet(c *cli.Context) {
nid := c.Args()[0]
id := parseDeviceID(nid)
arg := c.Args()[1]
config := getConfig(c)
for i, device := range config.Devices {
if device.DeviceID != id {
continue
}
switch strings.ToLower(arg) {
case "name":
config.Devices[i].Name = strings.Join(c.Args()[2:], " ")
case "address":
for _, item := range c.Args()[2:] {
if item == "dynamic" {
continue
}
validAddress(item)
}
config.Devices[i].Addresses = c.Args()[2:]
case "compression":
err := config.Devices[i].Compression.UnmarshalText([]byte(c.Args()[2]))
die(err)
case "certname":
config.Devices[i].CertName = strings.Join(c.Args()[2:], " ")
case "introducer":
config.Devices[i].Introducer = parseBool(c.Args()[2])
default:
die("Invalid property: " + arg + "\nAvailable properties: name, address, compression, certname, introducer")
}
setConfig(c, config)
return
}
die("Device " + nid + " not found")
}

View File

@@ -1,67 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Requires: &cli.Requires{},
Action: errorsShow,
},
{
Name: "push",
Usage: "Push an error to active clients",
Requires: &cli.Requires{"error message..."},
Action: errorsPush,
},
{
Name: "clear",
Usage: "Clear pending errors",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/error/clear"),
},
},
})
}
func errorsShow(c *cli.Context) {
response := httpGet(c, "system/error")
var data map[string][]map[string]interface{}
json.Unmarshal(responseToBArray(response), &data)
writer := newTableWriter()
for _, item := range data["errors"] {
time := item["when"].(string)[:19]
time = strings.Replace(time, "T", " ", 1)
err := item["message"].(string)
err = strings.TrimSpace(err)
fmt.Fprintln(writer, time+":\t"+err)
}
writer.Flush()
}
func errorsPush(c *cli.Context) {
err := strings.Join(c.Args(), " ")
response := httpPost(c, "system/error", strings.TrimSpace(err))
if response.StatusCode != 200 {
err = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
}

View File

@@ -1,361 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"path/filepath"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/fs"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "folders",
HideHelp: true,
Usage: "Folder command group",
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List available folders",
Requires: &cli.Requires{},
Action: foldersList,
},
{
Name: "add",
Usage: "Add a new folder",
Requires: &cli.Requires{"folder id", "directory"},
Action: foldersAdd,
},
{
Name: "remove",
Usage: "Remove an existing folder",
Requires: &cli.Requires{"folder id"},
Action: foldersRemove,
},
{
Name: "override",
Usage: "Override changes from other nodes for a master folder",
Requires: &cli.Requires{"folder id"},
Action: foldersOverride,
},
{
Name: "get",
Usage: "Get a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersGet,
},
{
Name: "set",
Usage: "Set a property of a folder",
Requires: &cli.Requires{"folder id", "property", "value..."},
Action: foldersSet,
},
{
Name: "unset",
Usage: "Unset a property of a folder",
Requires: &cli.Requires{"folder id", "property"},
Action: foldersUnset,
},
{
Name: "devices",
Usage: "Folder devices command group",
HideHelp: true,
Subcommands: []cli.Command{
{
Name: "list",
Usage: "List of devices which the folder is shared with",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesList,
},
{
Name: "add",
Usage: "Share a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesAdd,
},
{
Name: "remove",
Usage: "Unshare a folder with a device",
Requires: &cli.Requires{"folder id", "device id"},
Action: foldersDevicesRemove,
},
{
Name: "clear",
Usage: "Unshare a folder with all devices",
Requires: &cli.Requires{"folder id"},
Action: foldersDevicesClear,
},
},
},
},
})
}
func foldersList(c *cli.Context) {
cfg := getConfig(c)
first := true
writer := newTableWriter()
for _, folder := range cfg.Folders {
if !first {
fmt.Fprintln(writer)
}
fs := folder.Filesystem()
fmt.Fprintln(writer, "ID:\t", folder.ID, "\t")
fmt.Fprintln(writer, "Path:\t", fs.URI(), "\t(directory)")
fmt.Fprintln(writer, "Path type:\t", fs.Type(), "\t(directory-type)")
fmt.Fprintln(writer, "Folder type:\t", folder.Type, "\t(type)")
fmt.Fprintln(writer, "Ignore permissions:\t", folder.IgnorePerms, "\t(permissions)")
fmt.Fprintln(writer, "Rescan interval in seconds:\t", folder.RescanIntervalS, "\t(rescan)")
if folder.Versioning.Type != "" {
fmt.Fprintln(writer, "Versioning:\t", folder.Versioning.Type, "\t(versioning)")
for key, value := range folder.Versioning.Params {
fmt.Fprintf(writer, "Versioning %s:\t %s \t(versioning-%s)\n", key, value, key)
}
}
first = false
}
writer.Flush()
}
func foldersAdd(c *cli.Context) {
cfg := getConfig(c)
abs, err := filepath.Abs(c.Args()[1])
die(err)
folder := config.FolderConfiguration{
ID: c.Args()[0],
Path: filepath.Clean(abs),
FilesystemType: fs.FilesystemTypeBasic,
}
cfg.Folders = append(cfg.Folders, folder)
setConfig(c, cfg)
}
func foldersRemove(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for i, folder := range cfg.Folders {
if folder.ID == rid {
last := len(cfg.Folders) - 1
cfg.Folders[i] = cfg.Folders[last]
cfg.Folders = cfg.Folders[:last]
setConfig(c, cfg)
return
}
}
die("Folder " + rid + " not found")
}
func foldersOverride(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid && folder.Type == config.FolderTypeSendOnly {
response := httpPost(c, "db/override", "")
if response.StatusCode != 200 {
err := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
body := string(responseToBArray(response))
if body != "" {
err += "\nBody: " + body
}
die(err)
}
return
}
}
die("Folder " + rid + " not found or folder not master")
}
func foldersGet(c *cli.Context) {
cfg := getConfig(c)
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
value, ok := folder.Versioning.Params[arg]
if ok {
fmt.Println(value)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "directory":
fmt.Println(folder.Filesystem().URI())
case "directory-type":
fmt.Println(folder.Filesystem().Type())
case "type":
fmt.Println(folder.Type)
case "permissions":
fmt.Println(folder.IgnorePerms)
case "rescan":
fmt.Println(folder.RescanIntervalS)
case "versioning":
if folder.Versioning.Type != "" {
fmt.Println(folder.Versioning.Type)
}
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, directory-type, type, permissions, versioning, versioning-<key>")
}
return
}
die("Folder " + rid + " not found")
}
func foldersSet(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
val := strings.Join(c.Args()[2:], " ")
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
cfg.Folders[i].Versioning.Params[arg[11:]] = val
setConfig(c, cfg)
return
}
switch arg {
case "directory":
cfg.Folders[i].Path = val
case "directory-type":
var fsType fs.FilesystemType
fsType.UnmarshalText([]byte(val))
cfg.Folders[i].FilesystemType = fsType
case "type":
var t config.FolderType
if err := t.UnmarshalText([]byte(val)); err != nil {
die("Invalid folder type: " + err.Error())
}
cfg.Folders[i].Type = t
case "permissions":
cfg.Folders[i].IgnorePerms = parseBool(val)
case "rescan":
cfg.Folders[i].RescanIntervalS = parseInt(val)
case "versioning":
cfg.Folders[i].Versioning.Type = val
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: directory, master, permissions, versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersUnset(c *cli.Context) {
rid := c.Args()[0]
arg := strings.ToLower(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
if strings.HasPrefix(arg, "versioning-") {
arg = arg[11:]
if _, ok := folder.Versioning.Params[arg]; ok {
delete(cfg.Folders[i].Versioning.Params, arg)
setConfig(c, cfg)
return
}
die("Versioning property " + c.Args()[1][11:] + " not found")
}
switch arg {
case "versioning":
cfg.Folders[i].Versioning.Type = ""
cfg.Folders[i].Versioning.Params = make(map[string]string)
default:
die("Invalid property: " + c.Args()[1] + "\nAvailable properties: versioning, versioning-<key>")
}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesList(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for _, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
fmt.Println(device.DeviceID)
}
return
}
die("Folder " + rid + " not found")
}
func foldersDevicesAdd(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for _, device := range folder.Devices {
if device.DeviceID == nid {
die("Device " + c.Args()[1] + " is already part of this folder")
}
}
for _, device := range cfg.Devices {
if device.DeviceID == nid {
cfg.Folders[i].Devices = append(folder.Devices, config.FolderDeviceConfiguration{
DeviceID: device.DeviceID,
})
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found in device list")
}
die("Folder " + rid + " not found")
}
func foldersDevicesRemove(c *cli.Context) {
rid := c.Args()[0]
nid := parseDeviceID(c.Args()[1])
cfg := getConfig(c)
for ri, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
for ni, device := range folder.Devices {
if device.DeviceID == nid {
last := len(folder.Devices) - 1
cfg.Folders[ri].Devices[ni] = folder.Devices[last]
cfg.Folders[ri].Devices = cfg.Folders[ri].Devices[:last]
setConfig(c, cfg)
return
}
}
die("Device " + c.Args()[1] + " not found")
}
die("Folder " + rid + " not found")
}
func foldersDevicesClear(c *cli.Context) {
rid := c.Args()[0]
cfg := getConfig(c)
for i, folder := range cfg.Folders {
if folder.ID != rid {
continue
}
cfg.Folders[i].Devices = []config.FolderDeviceConfiguration{}
setConfig(c, cfg)
return
}
die("Folder " + rid + " not found")
}

View File

@@ -1,94 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, []cli.Command{
{
Name: "id",
Usage: "Get ID of the Syncthing client",
Requires: &cli.Requires{},
Action: generalID,
},
{
Name: "status",
Usage: "Configuration status, whether or not a restart is required for changes to take effect",
Requires: &cli.Requires{},
Action: generalStatus,
},
{
Name: "config",
Usage: "Configuration",
Requires: &cli.Requires{},
Action: generalConfiguration,
},
{
Name: "restart",
Usage: "Restart syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/restart"),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/shutdown"),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/reset"),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Requires: &cli.Requires{},
Action: wrappedHTTPPost("system/upgrade"),
},
{
Name: "version",
Usage: "Syncthing client version",
Requires: &cli.Requires{},
Action: generalVersion,
},
}...)
}
func generalID(c *cli.Context) {
fmt.Println(getMyID(c))
}
func generalStatus(c *cli.Context) {
response := httpGet(c, "system/config/insync")
var status struct{ ConfigInSync bool }
json.Unmarshal(responseToBArray(response), &status)
if !status.ConfigInSync {
die("Config out of sync")
}
fmt.Println("Config in sync")
}
func generalConfiguration(c *cli.Context) {
response := httpGet(c, "system/config")
var jsResponse interface{}
json.Unmarshal(responseToBArray(response), &jsResponse)
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.Encode(jsResponse)
}
func generalVersion(c *cli.Context) {
response := httpGet(c, "system/version")
version := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &version)
prettyPrintJSON(version)
}

View File

@@ -1,127 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "gui",
HideHelp: true,
Usage: "GUI command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all GUI configuration settings",
Requires: &cli.Requires{},
Action: guiDump,
},
{
Name: "get",
Usage: "Get a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiGet,
},
{
Name: "set",
Usage: "Set a GUI configuration setting",
Requires: &cli.Requires{"setting", "value"},
Action: guiSet,
},
{
Name: "unset",
Usage: "Unset a GUI configuration setting",
Requires: &cli.Requires{"setting"},
Action: guiUnset,
},
},
})
}
func guiDump(c *cli.Context) {
cfg := getConfig(c).GUI
writer := newTableWriter()
fmt.Fprintln(writer, "Enabled:\t", cfg.Enabled, "\t(enabled)")
fmt.Fprintln(writer, "Use HTTPS:\t", cfg.UseTLS(), "\t(tls)")
fmt.Fprintln(writer, "Listen Addresses:\t", cfg.Address(), "\t(address)")
if cfg.User != "" {
fmt.Fprintln(writer, "Authentication User:\t", cfg.User, "\t(username)")
fmt.Fprintln(writer, "Authentication Password:\t", cfg.Password, "\t(password)")
}
if cfg.APIKey != "" {
fmt.Fprintln(writer, "API Key:\t", cfg.APIKey, "\t(apikey)")
}
writer.Flush()
}
func guiGet(c *cli.Context) {
cfg := getConfig(c).GUI
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "enabled":
fmt.Println(cfg.Enabled)
case "tls":
fmt.Println(cfg.UseTLS())
case "address":
fmt.Println(cfg.Address())
case "user":
if cfg.User != "" {
fmt.Println(cfg.User)
}
case "password":
if cfg.User != "" {
fmt.Println(cfg.Password)
}
case "apikey":
if cfg.APIKey != "" {
fmt.Println(cfg.APIKey)
}
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
}
func guiSet(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "enabled":
cfg.GUI.Enabled = parseBool(val)
case "tls":
cfg.GUI.RawUseTLS = parseBool(val)
case "address":
validAddress(val)
cfg.GUI.RawAddress = val
case "user":
cfg.GUI.User = val
case "password":
cfg.GUI.Password = val
case "apikey":
cfg.GUI.APIKey = val
default:
die("Invalid setting: " + arg + "\nAvailable settings: enabled, tls, address, user, password, apikey")
}
setConfig(c, cfg)
}
func guiUnset(c *cli.Context) {
cfg := getConfig(c)
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "user":
cfg.GUI.User = ""
case "password":
cfg.GUI.Password = ""
case "apikey":
cfg.GUI.APIKey = ""
default:
die("Invalid setting: " + arg + "\nAvailable settings: user, password, apikey")
}
setConfig(c, cfg)
}

View File

@@ -1,173 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"fmt"
"strings"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "options",
HideHelp: true,
Usage: "Options command group",
Subcommands: []cli.Command{
{
Name: "dump",
Usage: "Show all Syncthing option settings",
Requires: &cli.Requires{},
Action: optionsDump,
},
{
Name: "get",
Usage: "Get a Syncthing option setting",
Requires: &cli.Requires{"setting"},
Action: optionsGet,
},
{
Name: "set",
Usage: "Set a Syncthing option setting",
Requires: &cli.Requires{"setting", "value..."},
Action: optionsSet,
},
},
})
}
func optionsDump(c *cli.Context) {
cfg := getConfig(c).Options
writer := newTableWriter()
fmt.Fprintln(writer, "Sync protocol listen addresses:\t", strings.Join(cfg.ListenAddresses, " "), "\t(addresses)")
fmt.Fprintln(writer, "Global discovery enabled:\t", cfg.GlobalAnnEnabled, "\t(globalannenabled)")
fmt.Fprintln(writer, "Global discovery servers:\t", strings.Join(cfg.GlobalAnnServers, " "), "\t(globalannserver)")
fmt.Fprintln(writer, "Local discovery enabled:\t", cfg.LocalAnnEnabled, "\t(localannenabled)")
fmt.Fprintln(writer, "Local discovery port:\t", cfg.LocalAnnPort, "\t(localannport)")
fmt.Fprintln(writer, "Outgoing rate limit in KiB/s:\t", cfg.MaxSendKbps, "\t(maxsend)")
fmt.Fprintln(writer, "Incoming rate limit in KiB/s:\t", cfg.MaxRecvKbps, "\t(maxrecv)")
fmt.Fprintln(writer, "Reconnect interval in seconds:\t", cfg.ReconnectIntervalS, "\t(reconnect)")
fmt.Fprintln(writer, "Start browser:\t", cfg.StartBrowser, "\t(browser)")
fmt.Fprintln(writer, "Enable UPnP:\t", cfg.NATEnabled, "\t(nat)")
fmt.Fprintln(writer, "UPnP Lease in minutes:\t", cfg.NATLeaseM, "\t(natlease)")
fmt.Fprintln(writer, "UPnP Renewal period in minutes:\t", cfg.NATRenewalM, "\t(natrenew)")
fmt.Fprintln(writer, "Restart on Wake Up:\t", cfg.RestartOnWakeup, "\t(wake)")
reporting := "unrecognized value"
switch cfg.URAccepted {
case -1:
reporting = "false"
case 0:
reporting = "undecided/false"
case 1:
reporting = "true"
}
fmt.Fprintln(writer, "Anonymous usage reporting:\t", reporting, "\t(reporting)")
writer.Flush()
}
func optionsGet(c *cli.Context) {
cfg := getConfig(c).Options
arg := c.Args()[0]
switch strings.ToLower(arg) {
case "address":
fmt.Println(strings.Join(cfg.ListenAddresses, "\n"))
case "globalannenabled":
fmt.Println(cfg.GlobalAnnEnabled)
case "globalannservers":
fmt.Println(strings.Join(cfg.GlobalAnnServers, "\n"))
case "localannenabled":
fmt.Println(cfg.LocalAnnEnabled)
case "localannport":
fmt.Println(cfg.LocalAnnPort)
case "maxsend":
fmt.Println(cfg.MaxSendKbps)
case "maxrecv":
fmt.Println(cfg.MaxRecvKbps)
case "reconnect":
fmt.Println(cfg.ReconnectIntervalS)
case "browser":
fmt.Println(cfg.StartBrowser)
case "nat":
fmt.Println(cfg.NATEnabled)
case "natlease":
fmt.Println(cfg.NATLeaseM)
case "natrenew":
fmt.Println(cfg.NATRenewalM)
case "reporting":
switch cfg.URAccepted {
case -1:
fmt.Println("false")
case 0:
fmt.Println("undecided/false")
case 1:
fmt.Println("true")
default:
fmt.Println("unknown")
}
case "wake":
fmt.Println(cfg.RestartOnWakeup)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
}
func optionsSet(c *cli.Context) {
config := getConfig(c)
arg := c.Args()[0]
val := c.Args()[1]
switch strings.ToLower(arg) {
case "address":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.ListenAddresses = c.Args().Tail()
case "globalannenabled":
config.Options.GlobalAnnEnabled = parseBool(val)
case "globalannserver":
for _, item := range c.Args().Tail() {
validAddress(item)
}
config.Options.GlobalAnnServers = c.Args().Tail()
case "localannenabled":
config.Options.LocalAnnEnabled = parseBool(val)
case "localannport":
config.Options.LocalAnnPort = parsePort(val)
case "maxsend":
config.Options.MaxSendKbps = parseUint(val)
case "maxrecv":
config.Options.MaxRecvKbps = parseUint(val)
case "reconnect":
config.Options.ReconnectIntervalS = parseUint(val)
case "browser":
config.Options.StartBrowser = parseBool(val)
case "nat":
config.Options.NATEnabled = parseBool(val)
case "natlease":
config.Options.NATLeaseM = parseUint(val)
case "natrenew":
config.Options.NATRenewalM = parseUint(val)
case "reporting":
switch strings.ToLower(val) {
case "u", "undecided", "unset":
config.Options.URAccepted = 0
default:
boolvalue := parseBool(val)
if boolvalue {
config.Options.URAccepted = 1
} else {
config.Options.URAccepted = -1
}
}
case "wake":
config.Options.RestartOnWakeup = parseBool(val)
default:
die("Invalid setting: " + arg + "\nAvailable settings: address, globalannenabled, globalannserver, localannenabled, localannport, maxsend, maxrecv, reconnect, browser, upnp, upnplease, upnprenew, reporting, wake")
}
setConfig(c, config)
}

View File

@@ -1,72 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
import (
"encoding/json"
"fmt"
"github.com/AudriusButkevicius/cli"
)
func init() {
cliCommands = append(cliCommands, cli.Command{
Name: "report",
HideHelp: true,
Usage: "Reporting command group",
Subcommands: []cli.Command{
{
Name: "system",
Usage: "Report system state",
Requires: &cli.Requires{},
Action: reportSystem,
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Requires: &cli.Requires{},
Action: reportConnections,
},
{
Name: "usage",
Usage: "Usage report",
Requires: &cli.Requires{},
Action: reportUsage,
},
},
})
}
func reportSystem(c *cli.Context) {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
prettyPrintJSON(data)
}
func reportConnections(c *cli.Context) {
response := httpGet(c, "system/connections")
data := make(map[string]map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
var overall map[string]interface{}
for key, value := range data {
if key == "total" {
overall = value
continue
}
value["Device ID"] = key
prettyPrintJSON(value)
fmt.Println()
}
if overall != nil {
fmt.Println("=== Overall statistics ===")
prettyPrintJSON(overall)
}
}
func reportUsage(c *cli.Context) {
response := httpGet(c, "svc/report")
report := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &report)
prettyPrintJSON(report)
}

60
cmd/stcli/errors.go Normal file
View File

@@ -0,0 +1,60 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"strings"
"github.com/urfave/cli"
)
var errorsCommand = cli.Command{
Name: "errors",
HideHelp: true,
Usage: "Error command group",
Subcommands: []cli.Command{
{
Name: "show",
Usage: "Show pending errors",
Action: expects(0, dumpOutput("system/error")),
},
{
Name: "push",
Usage: "Push an error to active clients",
ArgsUsage: "[error message]",
Action: expects(1, errorsPush),
},
{
Name: "clear",
Usage: "Clear pending errors",
Action: expects(0, emptyPost("system/error/clear")),
},
},
}
func errorsPush(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
errStr := strings.Join(c.Args(), " ")
response, err := client.Post("system/error", strings.TrimSpace(errStr))
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr = fmt.Sprint("Failed to push error\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return fmt.Errorf(errStr)
}
return nil
}

View File

@@ -1,31 +0,0 @@
// Copyright (C) 2014 Audrius Butkevičius
package main
var jsonAttributeLabels = map[string]string{
"folderMaxMiB": "Largest folder size in MiB",
"folderMaxFiles": "Largest folder file count",
"longVersion": "Long version",
"totMiB": "Total size in MiB",
"totFiles": "Total files",
"uniqueID": "Unique ID",
"numFolders": "Folder count",
"numDevices": "Device count",
"memoryUsageMiB": "Memory usage in MiB",
"memorySize": "Total memory in MiB",
"sha256Perf": "SHA256 Benchmark",
"At": "Last contacted",
"Completion": "Percent complete",
"InBytesTotal": "Total bytes received",
"OutBytesTotal": "Total bytes sent",
"ClientVersion": "Client version",
"alloc": "Memory allocated in bytes",
"sys": "Memory using in bytes",
"cpuPercent": "CPU load in percent",
"extAnnounceOK": "External announcments working",
"goroutines": "Number of Go routines",
"myID": "Client ID",
"tilde": "Tilde expands to",
"arch": "Architecture",
"os": "OS",
}

View File

@@ -1,63 +1,192 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"sort"
"bufio"
"crypto/tls"
"encoding/json"
"flag"
"log"
"os"
"reflect"
"strings"
"github.com/AudriusButkevicius/cli"
"github.com/AudriusButkevicius/recli"
"github.com/flynn-archive/go-shlex"
"github.com/mattn/go-isatty"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/urfave/cli"
)
type ByAlphabet []cli.Command
func (a ByAlphabet) Len() int { return len(a) }
func (a ByAlphabet) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByAlphabet) Less(i, j int) bool { return a[i].Name < a[j].Name }
var cliCommands []cli.Command
func main() {
app := cli.NewApp()
app.Name = "syncthing-cli"
app.Author = "Audrius Butkevičius"
app.Email = "audrius.butkevicius@gmail.com"
app.Usage = "Syncthing command line interface"
app.Version = "0.1"
app.HideHelp = true
// This is somewhat a hack around a chicken and egg problem.
// We need to set the home directory and potentially other flags to know where the syncthing instance is running
// in order to get its config ... which we then use to construct the actual CLI ... at which point it's too late
// to add flags there...
homeBaseDir := locations.GetBaseDir(locations.ConfigBaseDir)
guiCfg := config.GUIConfiguration{}
app.Flags = []cli.Flag{
flags := flag.NewFlagSet("", flag.ContinueOnError)
flags.StringVar(&guiCfg.RawAddress, "gui-address", guiCfg.RawAddress, "Override GUI address (e.g. \"http://192.0.2.42:8443\")")
flags.StringVar(&guiCfg.APIKey, "gui-apikey", guiCfg.APIKey, "Override GUI API key")
flags.StringVar(&homeBaseDir, "home", homeBaseDir, "Set configuration directory")
// Implement the same flags at the lower CLI, with the same default values (pre-parse), but do nothing with them.
// This is so that we could reuse os.Args
fakeFlags := []cli.Flag{
cli.StringFlag{
Name: "endpoint, e",
Value: "http://127.0.0.1:8384",
Usage: "End point to connect to",
EnvVar: "STENDPOINT",
Name: "gui-address",
Value: guiCfg.RawAddress,
Usage: "Override GUI address (e.g. \"http://192.0.2.42:8443\")",
},
cli.StringFlag{
Name: "apikey, k",
Value: "",
Usage: "API Key",
EnvVar: "STAPIKEY",
Name: "gui-apikey",
Value: guiCfg.APIKey,
Usage: "Override GUI API key",
},
cli.StringFlag{
Name: "username, u",
Value: "",
Usage: "Username",
EnvVar: "STUSERNAME",
},
cli.StringFlag{
Name: "password, p",
Value: "",
Usage: "Password",
EnvVar: "STPASSWORD",
},
cli.BoolFlag{
Name: "insecure, i",
Usage: "Do not verify SSL certificate",
EnvVar: "STINSECURE",
Name: "home",
Value: homeBaseDir,
Usage: "Set configuration directory",
},
}
sort.Sort(ByAlphabet(cliCommands))
app.Commands = cliCommands
app.RunAndExitOnError()
// Do not print usage of these flags, and ignore errors as this can't understand plenty of things
flags.Usage = func() {}
_ = flags.Parse(os.Args[1:])
// Now if the API key and address is not provided (we are not connecting to a remote instance),
// try to rip it out of the config.
if guiCfg.RawAddress == "" && guiCfg.APIKey == "" {
// Update the base directory
err := locations.SetBaseDir(locations.ConfigBaseDir, homeBaseDir)
if err != nil {
log.Fatal(errors.Wrap(err, "setting home"))
}
// Load the certs and get the ID
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
log.Fatal(errors.Wrap(err, "reading device ID"))
}
myID := protocol.NewDeviceID(cert.Certificate[0])
// Load the config
cfg, err := config.Load(locations.Get(locations.ConfigFile), myID)
if err != nil {
log.Fatalln(errors.Wrap(err, "loading config"))
}
guiCfg = cfg.GUI()
} else if guiCfg.Address() == "" || guiCfg.APIKey == "" {
log.Fatalln("Both -gui-address and -gui-apikey should be specified")
}
if guiCfg.Address() == "" {
log.Fatalln("Could not find GUI Address")
}
if guiCfg.APIKey == "" {
log.Fatalln("Could not find GUI API key")
}
client := getClient(guiCfg)
cfg, err := getConfig(client)
original := cfg.Copy()
if err != nil {
log.Fatalln(errors.Wrap(err, "getting config"))
}
// Copy the config and set the default flags
recliCfg := recli.DefaultConfig
recliCfg.IDTag.Name = "xml"
recliCfg.SkipTag.Name = "json"
commands, err := recli.New(recliCfg).Construct(&cfg)
if err != nil {
log.Fatalln(errors.Wrap(err, "config reflect"))
}
// Construct the actual CLI
app := cli.NewApp()
app.Name = "stcli"
app.HelpName = app.Name
app.Author = "The Syncthing Authors"
app.Usage = "Syncthing command line interface"
app.Version = strings.Replace(build.LongVersion, "syncthing", app.Name, 1)
app.Flags = fakeFlags
app.Metadata = map[string]interface{}{
"client": client,
}
app.Commands = []cli.Command{
{
Name: "config",
HideHelp: true,
Usage: "Configuration modification command group",
Subcommands: commands,
},
showCommand,
operationCommand,
errorsCommand,
}
tty := isatty.IsTerminal(os.Stdin.Fd()) || isatty.IsCygwinTerminal(os.Stdin.Fd())
if !tty {
// Not a TTY, consume from stdin
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
input, err := shlex.Split(scanner.Text())
if err != nil {
log.Fatalln(errors.Wrap(err, "parsing input"))
}
if len(input) == 0 {
continue
}
err = app.Run(append(os.Args, input...))
if err != nil {
log.Fatalln(err)
}
}
err = scanner.Err()
if err != nil {
log.Fatalln(err)
}
} else {
err = app.Run(os.Args)
if err != nil {
log.Fatalln(err)
}
}
if !reflect.DeepEqual(cfg, original) {
body, err := json.MarshalIndent(cfg, "", " ")
if err != nil {
log.Fatalln(err)
}
resp, err := client.Post("system/config", string(body))
if err != nil {
log.Fatalln(err)
}
if resp.StatusCode != 200 {
body, err := responseToBArray(resp)
if err != nil {
log.Fatalln(err)
}
log.Fatalln(string(body))
}
}
}

78
cmd/stcli/operations.go Normal file
View File

@@ -0,0 +1,78 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"fmt"
"github.com/urfave/cli"
)
var operationCommand = cli.Command{
Name: "operations",
HideHelp: true,
Usage: "Operation command group",
Subcommands: []cli.Command{
{
Name: "restart",
Usage: "Restart syncthing",
Action: expects(0, emptyPost("system/restart")),
},
{
Name: "shutdown",
Usage: "Shutdown syncthing",
Action: expects(0, emptyPost("system/shutdown")),
},
{
Name: "reset",
Usage: "Reset syncthing deleting all folders and devices",
Action: expects(0, emptyPost("system/reset")),
},
{
Name: "upgrade",
Usage: "Upgrade syncthing (if a newer version is available)",
Action: expects(0, emptyPost("system/upgrade")),
},
{
Name: "folder-override",
Usage: "Override changes on folder (remote for sendonly, local for receiveonly)",
ArgsUsage: "[folder id]",
Action: expects(1, foldersOverride),
},
},
}
func foldersOverride(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
cfg, err := getConfig(client)
if err != nil {
return err
}
rid := c.Args()[0]
for _, folder := range cfg.Folders {
if folder.ID == rid {
response, err := client.Post("db/override", "")
if err != nil {
return err
}
if response.StatusCode != 200 {
errStr := fmt.Sprint("Failed to override changes\nStatus code: ", response.StatusCode)
bytes, err := responseToBArray(response)
if err != nil {
return err
}
body := string(bytes)
if body != "" {
errStr += "\nBody: " + body
}
return fmt.Errorf(errStr)
}
return nil
}
}
return fmt.Errorf("Folder " + rid + " not found")
}

44
cmd/stcli/show.go Normal file
View File

@@ -0,0 +1,44 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"github.com/urfave/cli"
)
var showCommand = cli.Command{
Name: "show",
HideHelp: true,
Usage: "Show command group",
Subcommands: []cli.Command{
{
Name: "version",
Usage: "Show syncthing client version",
Action: expects(0, dumpOutput("system/version")),
},
{
Name: "config-status",
Usage: "Show configuration status, whether or not a restart is required for changes to take effect",
Action: expects(0, dumpOutput("system/config/insync")),
},
{
Name: "system",
Usage: "Show system status",
Action: expects(0, dumpOutput("system/status")),
},
{
Name: "connections",
Usage: "Report about connections to other devices",
Action: expects(0, dumpOutput("system/connections")),
},
{
Name: "usage",
Usage: "Show usage report",
Action: expects(0, dumpOutput("svc/report")),
},
},
}

View File

@@ -1,4 +1,8 @@
// Copyright (C) 2014 Audrius Butkevičius
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
@@ -8,78 +12,37 @@ import (
"io/ioutil"
"net/http"
"os"
"regexp"
"sort"
"strconv"
"strings"
"text/tabwriter"
"unicode"
"github.com/AudriusButkevicius/cli"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/urfave/cli"
)
func responseToBArray(response *http.Response) []byte {
defer response.Body.Close()
func responseToBArray(response *http.Response) ([]byte, error) {
bytes, err := ioutil.ReadAll(response.Body)
if err != nil {
die(err)
return nil, err
}
return bytes
return bytes, response.Body.Close()
}
func die(vals ...interface{}) {
if len(vals) > 1 || vals[0] != nil {
os.Stderr.WriteString(fmt.Sprintln(vals...))
os.Exit(1)
func emptyPost(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
_, err := client.Post(url, "")
return err
}
}
func wrappedHTTPPost(url string) func(c *cli.Context) {
return func(c *cli.Context) {
httpPost(c, url, "")
}
}
func prettyPrintJSON(json map[string]interface{}) {
writer := newTableWriter()
remap := make(map[string]interface{})
for k, v := range json {
key, ok := jsonAttributeLabels[k]
if !ok {
key = firstUpper(k)
func dumpOutput(url string) cli.ActionFunc {
return func(c *cli.Context) error {
client := c.App.Metadata["client"].(*APIClient)
response, err := client.Get(url)
if err != nil {
return err
}
remap[key] = v
return prettyPrintResponse(c, response)
}
jsonKeys := make([]string, 0, len(remap))
for key := range remap {
jsonKeys = append(jsonKeys, key)
}
sort.Strings(jsonKeys)
for _, k := range jsonKeys {
var value string
rvalue := remap[k]
switch rvalue.(type) {
case int, int16, int32, int64, uint, uint16, uint32, uint64, float32, float64:
value = fmt.Sprintf("%.0f", rvalue)
default:
value = fmt.Sprint(rvalue)
}
if value == "" {
continue
}
fmt.Fprintln(writer, k+":\t"+value)
}
writer.Flush()
}
func firstUpper(str string) string {
for i, v := range str {
return string(unicode.ToUpper(v)) + str[i+1:]
}
return ""
}
func newTableWriter() *tabwriter.Writer {
@@ -88,78 +51,51 @@ func newTableWriter() *tabwriter.Writer {
return writer
}
func getMyID(c *cli.Context) string {
response := httpGet(c, "system/status")
data := make(map[string]interface{})
json.Unmarshal(responseToBArray(response), &data)
return data["myID"].(string)
}
func getConfig(c *cli.Context) config.Configuration {
response := httpGet(c, "system/config")
config := config.Configuration{}
json.Unmarshal(responseToBArray(response), &config)
return config
}
func setConfig(c *cli.Context, cfg config.Configuration) {
body, err := json.Marshal(cfg)
die(err)
response := httpPost(c, "system/config", string(body))
if response.StatusCode != 200 {
die("Unexpected status code", response.StatusCode)
}
}
func parseBool(input string) bool {
val, err := strconv.ParseBool(input)
func getConfig(c *APIClient) (config.Configuration, error) {
cfg := config.Configuration{}
response, err := c.Get("system/config")
if err != nil {
die(input + " is not a valid value for a boolean")
return cfg, err
}
return val
}
func parseInt(input string) int {
val, err := strconv.ParseInt(input, 0, 64)
bytes, err := responseToBArray(response)
if err != nil {
die(input + " is not a valid value for an integer")
return cfg, err
}
return int(val)
err = json.Unmarshal(bytes, &cfg)
if err != nil {
return cfg, err
}
return cfg, nil
}
func parseUint(input string) int {
val, err := strconv.ParseUint(input, 0, 64)
func expects(n int, actionFunc cli.ActionFunc) cli.ActionFunc {
return func(ctx *cli.Context) error {
if ctx.NArg() != n {
plural := ""
if n != 1 {
plural = "s"
}
return fmt.Errorf("expected %d argument%s, got %d", n, plural, ctx.NArg())
}
return actionFunc(ctx)
}
}
func prettyPrintJSON(data interface{}) error {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(data)
}
func prettyPrintResponse(c *cli.Context, response *http.Response) error {
bytes, err := responseToBArray(response)
if err != nil {
die(input + " is not a valid value for an unsigned integer")
return err
}
return int(val)
}
func parsePort(input string) int {
port := parseUint(input)
if port < 1 || port > 65535 {
die(input + " is not a valid port\nExpected value between 1 and 65535")
}
return port
}
func validAddress(input string) {
tokens := strings.Split(input, ":")
if len(tokens) != 2 {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
matched, err := regexp.MatchString("^[a-zA-Z0-9]+([-a-zA-Z0-9.]+[-a-zA-Z0-9]+)?$", tokens[0])
die(err)
if !matched {
die(input + " is not a valid value for an address\nExpected format <ip or hostname>:<port>")
}
parsePort(tokens[1])
}
func parseDeviceID(input string) protocol.DeviceID {
device, err := protocol.DeviceIDFromString(input)
if err != nil {
die(input + " is not a valid device id")
}
return device
var data interface{}
if err := json.Unmarshal(bytes, &data); err != nil {
return err
}
// TODO: Check flag for pretty print format
return prettyPrintJSON(data)
}
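
A hedged sketch of how these helpers compose into a subcommand; the "ping" command below is illustrative only and is not part of this change:

    // Hypothetical subcommand built from the helpers above: expects(0, ...)
    // rejects stray arguments up front, dumpOutput GETs the endpoint and
    // pretty-prints the JSON body via prettyPrintResponse.
    var pingCommand = cli.Command{
        Name:   "ping",
        Usage:  "Ping the running instance (illustrative only)",
        Action: expects(0, dumpOutput("system/ping")),
    }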


@@ -88,11 +88,6 @@ var (
)
func main() {
const (
cleanIntv = 1 * time.Hour
statsIntv = 5 * time.Minute
)
var listen string
var dir string
var metricsListen string


@@ -82,7 +82,7 @@ func generateOneFile(fd io.ReadSeeker, p1 string, s int64) error {
return err
}
_ = os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
os.Chmod(p1, os.FileMode(rand.Intn(0777)|0400))
t := time.Now().Add(-time.Duration(rand.Intn(30*86400)) * time.Second)
return os.Chtimes(p1, t, t)


@@ -167,8 +167,8 @@ func idxck(ldb *db.Lowlevel) (success bool) {
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
dev, _ := deviceToIDs[string(vl.Versions[0].Device)]
fi, _ := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
dev := deviceToIDs[string(vl.Versions[0].Device)]
fi := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}


@@ -126,7 +126,7 @@ func refreshStats() {
go func(rel *relay) {
t0 := time.Now()
stats := fetchStats(rel)
duration := time.Now().Sub(t0).Seconds()
duration := time.Since(t0).Seconds()
result := "success"
if stats == nil {
result = "failed"


@@ -22,7 +22,7 @@ import (
var (
sessionMut = sync.RWMutex{}
activeSessions = make([]*session, 0)
pendingSessions = make(map[string]*session, 0)
pendingSessions = make(map[string]*session)
numProxies int64
bytesProxied int64
)


@@ -37,7 +37,7 @@ func TestAuditService(t *testing.T) {
// This event should not be logged, since we have stopped.
events.Default.Log(events.ConfigSaved, "the third event")
result := string(buf.Bytes())
result := buf.String()
t.Log(result)
if strings.Contains(result, "first event") {


@@ -27,13 +27,15 @@ import (
"strings"
"time"
"github.com/rcrowley/go-metrics"
metrics "github.com/rcrowley/go-metrics"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/logger"
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/protocol"
@@ -531,7 +533,6 @@ func corsMiddleware(next http.Handler, allowFrameLoading bool) http.Handler {
// For everything else, pass to the next handler
next.ServeHTTP(w, r)
return
})
}
@@ -568,7 +569,7 @@ func noCacheMiddleware(h http.Handler) http.Handler {
func withDetailsMiddleware(id protocol.DeviceID, h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("X-Syncthing-Version", Version)
w.Header().Set("X-Syncthing-Version", build.Version)
w.Header().Set("X-Syncthing-ID", id.String())
h.ServeHTTP(w, r)
})
@@ -610,14 +611,14 @@ func (s *apiService) getJSMetadata(w http.ResponseWriter, r *http.Request) {
func (s *apiService) getSystemVersion(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]interface{}{
"version": Version,
"codename": Codename,
"longVersion": LongVersion,
"version": build.Version,
"codename": build.Codename,
"longVersion": build.LongVersion,
"os": runtime.GOOS,
"arch": runtime.GOARCH,
"isBeta": IsBeta,
"isCandidate": IsCandidate,
"isRelease": IsRelease,
"isBeta": build.IsBeta,
"isCandidate": build.IsCandidate,
"isRelease": build.IsRelease,
})
}
@@ -1081,7 +1082,7 @@ func (s *apiService) getSupportBundle(w http.ResponseWriter, r *http.Request) {
}
// Panic files
if panicFiles, err := filepath.Glob(filepath.Join(baseDirs["config"], "panic*")); err == nil {
if panicFiles, err := filepath.Glob(filepath.Join(locations.GetBaseDir(locations.ConfigBaseDir), "panic*")); err == nil {
for _, f := range panicFiles {
if panicFile, err := ioutil.ReadFile(f); err != nil {
l.Warnf("Support bundle: failed to load %s: %s", filepath.Base(f), err)
@@ -1092,16 +1093,16 @@ func (s *apiService) getSupportBundle(w http.ResponseWriter, r *http.Request) {
}
// Archived log (default on Windows)
if logFile, err := ioutil.ReadFile(locations[locLogFile]); err == nil {
if logFile, err := ioutil.ReadFile(locations.Get(locations.LogFile)); err == nil {
files = append(files, fileEntry{name: "log-ondisk.txt", data: logFile})
}
// Version and platform information as a JSON
if versionPlatform, err := json.MarshalIndent(map[string]string{
"now": time.Now().Format(time.RFC3339),
"version": Version,
"codename": Codename,
"longVersion": LongVersion,
"version": build.Version,
"codename": build.Codename,
"longVersion": build.LongVersion,
"os": runtime.GOOS,
"arch": runtime.GOARCH,
}, "", " "); err == nil {
@@ -1119,17 +1120,19 @@ func (s *apiService) getSupportBundle(w http.ResponseWriter, r *http.Request) {
// Heap and CPU profiles as a pprof extension
var heapBuffer, cpuBuffer bytes.Buffer
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, build.Version, time.Now().Format("150405")) // hhmmss
runtime.GC()
pprof.WriteHeapProfile(&heapBuffer)
files = append(files, fileEntry{name: filename, data: heapBuffer.Bytes()})
if err := pprof.WriteHeapProfile(&heapBuffer); err == nil {
files = append(files, fileEntry{name: filename, data: heapBuffer.Bytes()})
}
const duration = 4 * time.Second
filename = fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
pprof.StartCPUProfile(&cpuBuffer)
time.Sleep(duration)
pprof.StopCPUProfile()
files = append(files, fileEntry{name: filename, data: cpuBuffer.Bytes()})
filename = fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, build.Version, time.Now().Format("150405")) // hhmmss
if err := pprof.StartCPUProfile(&cpuBuffer); err == nil {
time.Sleep(duration)
pprof.StopCPUProfile()
files = append(files, fileEntry{name: filename, data: cpuBuffer.Bytes()})
}
// Add buffer files to buffer zip
var zipFilesBuffer bytes.Buffer
@@ -1141,7 +1144,7 @@ func (s *apiService) getSupportBundle(w http.ResponseWriter, r *http.Request) {
// Set zip file name and path
zipFileName := fmt.Sprintf("support-bundle-%s-%s.zip", s.id.Short().String(), time.Now().Format("2006-01-02T150405"))
zipFilePath := filepath.Join(baseDirs["config"], zipFileName)
zipFilePath := filepath.Join(locations.GetBaseDir(locations.ConfigBaseDir), zipFileName)
// Write buffer zip to local zip file (back up)
if err := ioutil.WriteFile(zipFilePath, zipFilesBuffer.Bytes(), 0600); err != nil {
@@ -1322,16 +1325,16 @@ func (s *apiService) getSystemUpgrade(w http.ResponseWriter, r *http.Request) {
return
}
opts := s.cfg.Options()
rel, err := upgrade.LatestRelease(opts.ReleasesURL, Version, opts.UpgradeToPreReleases)
rel, err := upgrade.LatestRelease(opts.ReleasesURL, build.Version, opts.UpgradeToPreReleases)
if err != nil {
http.Error(w, err.Error(), 500)
return
}
res := make(map[string]interface{})
res["running"] = Version
res["running"] = build.Version
res["latest"] = rel.Tag
res["newer"] = upgrade.CompareVersions(rel.Tag, Version) == upgrade.Newer
res["majorNewer"] = upgrade.CompareVersions(rel.Tag, Version) == upgrade.MajorNewer
res["newer"] = upgrade.CompareVersions(rel.Tag, build.Version) == upgrade.Newer
res["majorNewer"] = upgrade.CompareVersions(rel.Tag, build.Version) == upgrade.MajorNewer
sendJSON(w, res)
}
@@ -1364,14 +1367,14 @@ func (s *apiService) getLang(w http.ResponseWriter, r *http.Request) {
func (s *apiService) postSystemUpgrade(w http.ResponseWriter, r *http.Request) {
opts := s.cfg.Options()
rel, err := upgrade.LatestRelease(opts.ReleasesURL, Version, opts.UpgradeToPreReleases)
rel, err := upgrade.LatestRelease(opts.ReleasesURL, build.Version, opts.UpgradeToPreReleases)
if err != nil {
l.Warnln("getting latest release:", err)
http.Error(w, err.Error(), 500)
return
}
if upgrade.CompareVersions(rel.Tag, Version) > upgrade.Equal {
if upgrade.CompareVersions(rel.Tag, build.Version) > upgrade.Equal {
err = upgrade.To(rel)
if err != nil {
l.Warnln("upgrading:", err)
@@ -1640,18 +1643,19 @@ func (s *apiService) getCPUProf(w http.ResponseWriter, r *http.Request) {
duration = 30 * time.Second
}
filename := fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
filename := fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, build.Version, time.Now().Format("150405")) // hhmmss
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)
pprof.StartCPUProfile(w)
time.Sleep(duration)
pprof.StopCPUProfile()
if err := pprof.StartCPUProfile(w); err == nil {
time.Sleep(duration)
pprof.StopCPUProfile()
}
}
func (s *apiService) getHeapProf(w http.ResponseWriter, r *http.Request) {
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, build.Version, time.Now().Format("150405")) // hhmmss
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)


@@ -20,7 +20,7 @@ import (
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
"golang.org/x/crypto/bcrypt"
"gopkg.in/ldap.v2"
ldap "gopkg.in/ldap.v2"
)
var (
@@ -80,11 +80,10 @@ func basicAuthAndSessionMiddleware(cookieName string, guiCfg config.GUIConfigura
return
}
authOk := false
username := string(fields[0])
password := string(fields[1])
authOk = auth(username, password, guiCfg, ldapCfg)
authOk := auth(username, password, guiCfg, ldapCfg)
if !authOk {
usernameIso := string(iso88591ToUTF8([]byte(username)))
passwordIso := string(iso88591ToUTF8([]byte(password)))


@@ -14,6 +14,7 @@ import (
"strings"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
@@ -115,7 +116,7 @@ func saveCsrfTokens() {
// We're ignoring errors in here. It's not super critical and there's
// nothing relevant we can do about them anyway...
name := locations[locCsrfTokens]
name := locations.Get(locations.CsrfTokens)
f, err := osutil.CreateAtomic(name)
if err != nil {
return
@@ -129,7 +130,7 @@ func saveCsrfTokens() {
}
func loadCsrfTokens() {
f, err := os.Open(locations[locCsrfTokens])
f, err := os.Open(locations.Get(locations.CsrfTokens))
if err != nil {
return
}


@@ -49,7 +49,7 @@ func saveHeapProfiles(rate int) {
panic(err)
}
_ = os.Remove(name) // Error deliberately ignored
os.Remove(name) // Error deliberately ignored
err = os.Rename(name+".tmp", name)
if err != nil {
panic(err)


@@ -1,125 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"os"
"path/filepath"
"runtime"
"strings"
"time"
"github.com/syncthing/syncthing/lib/fs"
)
type locationEnum string
// Use strings as keys to make printout and serialization of the locations map
// more meaningful.
const (
locConfigFile locationEnum = "config"
locCertFile = "certFile"
locKeyFile = "keyFile"
locHTTPSCertFile = "httpsCertFile"
locHTTPSKeyFile = "httpsKeyFile"
locDatabase = "database"
locLogFile = "logFile"
locCsrfTokens = "csrfTokens"
locPanicLog = "panicLog"
locAuditLog = "auditLog"
locGUIAssets = "GUIAssets"
locDefFolder = "defFolder"
)
// Platform dependent directories
var baseDirs = map[string]string{
"config": defaultConfigDir(), // Overridden by -home flag
"home": homeDir(), // User's home directory, *not* -home flag
}
// Use the variables from baseDirs here
var locations = map[locationEnum]string{
locConfigFile: "${config}/config.xml",
locCertFile: "${config}/cert.pem",
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.14.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",
locAuditLog: "${config}/audit-${timestamp}.log",
locGUIAssets: "${config}/gui",
locDefFolder: "${home}/Sync",
}
// expandLocations replaces the variables in the location map with actual
// directory locations.
func expandLocations() error {
for key, dir := range locations {
for varName, value := range baseDirs {
dir = strings.Replace(dir, "${"+varName+"}", value, -1)
}
var err error
dir, err = fs.ExpandTilde(dir)
if err != nil {
return err
}
locations[key] = dir
}
return nil
}
// defaultConfigDir returns the default configuration directory, as figured
// out by the various environment variables present on each platform, or dies
// trying.
func defaultConfigDir() string {
switch runtime.GOOS {
case "windows":
if p := os.Getenv("LocalAppData"); p != "" {
return filepath.Join(p, "Syncthing")
}
return filepath.Join(os.Getenv("AppData"), "Syncthing")
case "darwin":
dir, err := fs.ExpandTilde("~/Library/Application Support/Syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing")
}
dir, err := fs.ExpandTilde("~/.config/syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
}
}
// homeDir returns the user's home directory, or dies trying.
func homeDir() string {
home, err := fs.ExpandTilde("~")
if err != nil {
l.Fatalln(err)
}
return home
}
func timestampedLoc(key locationEnum) string {
// We take the roundtrip via "${timestamp}" instead of passing the path
// directly through time.Format() to avoid issues when the path we are
// expanding contains numbers; otherwise for example
// /home/user2006/.../panic-20060102-150405.log would get both instances of
// 2006 replaced by 2015...
tpl := locations[key]
now := time.Now().Format("20060102-150405")
return strings.Replace(tpl, "${timestamp}", now, -1)
}
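
This file is removed in favour of the new lib/locations package; judging from the call sites elsewhere in this change, the replacement API is used roughly as in this sketch (constants and paths mirror the map above):

    // Hedged sketch of lib/locations as used in this change.
    if err := locations.SetBaseDir(locations.ConfigBaseDir, options.confDir); err != nil { // -home flag
        l.Fatalln(err)
    }
    cfgFile := locations.Get(locations.ConfigFile)            // ${config}/config.xml
    panicLog := locations.GetTimestamped(locations.PanicLog)  // ${config}/panic-<timestamp>.log, expanded as in timestampedLoc above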


@@ -15,14 +15,13 @@ import (
"io"
"io/ioutil"
"log"
"net"
"net/http"
_ "net/http/pprof" // Need to import this to support STPROFILER.
"net/url"
"os"
"os/signal"
"path"
"path/filepath"
"regexp"
"runtime"
"runtime/pprof"
"sort"
@@ -31,6 +30,7 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections"
"github.com/syncthing/syncthing/lib/db"
@@ -38,6 +38,7 @@ import (
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/logger"
"github.com/syncthing/syncthing/lib/model"
"github.com/syncthing/syncthing/lib/osutil"
@@ -48,23 +49,6 @@ import (
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/thejerf/suture"
_ "net/http/pprof" // Need to import this to support STPROFILER.
)
var (
Version = "unknown-dev"
Codename = "Erbium Earthworm"
BuildStamp = "0"
BuildDate time.Time
BuildHost = "unknown"
BuildUser = "unknown"
IsRelease bool
IsCandidate bool
IsBeta bool
LongVersion string
BuildTags []string
allowedVersionExp = regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z0-9]+)*(\.\d+)*(\+\d+-g[0-9a-f]+)?(-[^\s]+)?$`)
)
const (
@@ -84,50 +68,9 @@ const (
maxSystemLog = 250
)
func init() {
if Version != "unknown-dev" {
// If not a generic dev build, version string should come from git describe
if !allowedVersionExp.MatchString(Version) {
l.Fatalf("Invalid version string %q;\n\tdoes not match regexp %v", Version, allowedVersionExp)
}
}
}
func setBuildMetadata() {
// Check for a clean release build. A release is something like
// "v0.1.2", with an optional suffix of letters and dot separated
// numbers like "-beta3.47". If there's more stuff, like a plus sign and
// a commit hash and so on, then it's not a release. If it has a dash in
// it, it's some sort of beta, release candidate or special build. If it
// has "-rc." in it, like "v0.14.35-rc.42", then it's a candidate build.
//
// So, every build that is not a stable release build has IsBeta = true.
// This is used to enable some extra debugging (the deadlock detector).
//
// Release candidate builds are also "betas" from this point of view and
// will have that debugging enabled. In addition, some features are
// forced for release candidates - auto upgrade, and usage reporting.
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z]+[\d\.]+)?$`)
IsRelease = exp.MatchString(Version)
IsCandidate = strings.Contains(Version, "-rc.")
IsBeta = strings.Contains(Version, "-")
stamp, _ := strconv.Atoi(BuildStamp)
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`syncthing %s "%s" (%s %s-%s) %s@%s %s`, Version, Codename, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
if len(BuildTags) > 0 {
LongVersion = fmt.Sprintf("%s [%s]", LongVersion, strings.Join(BuildTags, ", "))
}
}
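
Worked through the regexp and string checks above (which move to lib/build per the import changes in this diff), a few example version strings classify as follows:

    // Version                  IsRelease  IsCandidate  IsBeta
    // "v0.14.54"               true       false        false   clean release tag
    // "v0.14.54-rc.1"          true       true         true    candidate: extra debugging, forced reporting and auto upgrade
    // "v0.14.54-dirty"         false      false        true    other dash suffix: beta only
    // "v0.14.54+12-gabcdef0"   false      false        true    commit-hash suffix: not a release, still "beta" for debugging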
var (
myID protocol.DeviceID
stop = make(chan int)
lans []*net.IPNet
)
const (
@@ -322,8 +265,6 @@ func parseCommandLineOptions() RuntimeOptions {
}
func main() {
setBuildMetadata()
options := parseCommandLineOptions()
l.SetFlags(options.logFlags)
@@ -357,27 +298,25 @@ func main() {
l.Fatalln(err)
}
}
baseDirs["config"] = options.confDir
}
if err := expandLocations(); err != nil {
l.Fatalln(err)
if err := locations.SetBaseDir(locations.ConfigBaseDir, options.confDir); err != nil {
l.Fatalln(err)
}
}
if options.logFile == "" {
// Blank means use the default logfile location. We must set this
// *after* expandLocations above.
options.logFile = locations[locLogFile]
options.logFile = locations.Get(locations.LogFile)
}
if options.assetDir == "" {
// The asset dir is blank if STGUIASSETS wasn't set, in which case we
// should look for extra assets in the default place.
options.assetDir = locations[locGUIAssets]
options.assetDir = locations.Get(locations.GUIAssets)
}
if options.showVersion {
fmt.Println(LongVersion)
fmt.Println(build.LongVersion)
return
}
@@ -392,7 +331,10 @@ func main() {
}
if options.showDeviceId {
cert, err := tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Fatalln("Error reading device ID:", err)
}
@@ -413,7 +355,7 @@ func main() {
}
// Ensure that our home directory exists.
ensureDir(baseDirs["config"], 0700)
ensureDir(locations.GetBaseDir(locations.ConfigBaseDir), 0700)
if options.upgradeTo != "" {
err := upgrade.ToURL(options.upgradeTo)
@@ -436,7 +378,9 @@ func main() {
}
if options.resetDatabase {
resetDB()
if err := resetDB(); err != nil {
l.Fatalln("Resetting database:", err)
}
return
}
@@ -450,7 +394,9 @@ func main() {
func openGUI() {
cfg, _ := loadOrDefaultConfig()
if cfg.GUI().Enabled {
openURL(cfg.GUI().URL())
if err := openURL(cfg.GUI().URL()); err != nil {
l.Fatalln("Open URL:", err)
}
} else {
l.Warnln("Browser: GUI is currently disabled")
}
@@ -519,24 +465,24 @@ func debugFacilities() string {
func checkUpgrade() upgrade.Release {
cfg, _ := loadOrDefaultConfig()
opts := cfg.Options()
release, err := upgrade.LatestRelease(opts.ReleasesURL, Version, opts.UpgradeToPreReleases)
release, err := upgrade.LatestRelease(opts.ReleasesURL, build.Version, opts.UpgradeToPreReleases)
if err != nil {
l.Fatalln("Upgrade:", err)
}
if upgrade.CompareVersions(release.Tag, Version) <= 0 {
if upgrade.CompareVersions(release.Tag, build.Version) <= 0 {
noUpgradeMessage := "No upgrade available (current %q >= latest %q)."
l.Infof(noUpgradeMessage, Version, release.Tag)
l.Infof(noUpgradeMessage, build.Version, release.Tag)
os.Exit(exitNoUpgradeAvailable)
}
l.Infof("Upgrade available (current %q < latest %q)", Version, release.Tag)
l.Infof("Upgrade available (current %q < latest %q)", build.Version, release.Tag)
return release
}
func performUpgrade(release upgrade.Release) {
// Use leveldb database locks to protect against concurrent upgrades
_, err := db.Open(locations[locDatabase])
_, err := db.Open(locations.Get(locations.Database))
if err == nil {
err = upgrade.To(release)
if err != nil {
@@ -634,10 +580,17 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
osutil.MaximizeOpenFileLimit()
// Ensure that we have a certificate and key.
cert, err := tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
cert, err := tls.LoadX509KeyPair(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
)
if err != nil {
l.Infof("Generating ECDSA key and certificate for %s...", tlsDefaultCommonName)
cert, err = tlsutil.NewCertificate(locations[locCertFile], locations[locKeyFile], tlsDefaultCommonName)
cert, err = tlsutil.NewCertificate(
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
tlsDefaultCommonName,
)
if err != nil {
l.Fatalln(err)
}
@@ -646,7 +599,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
myID = protocol.NewDeviceID(cert.Certificate[0])
l.SetPrefix(fmt.Sprintf("[%s] ", myID.String()[:5]))
l.Infoln(LongVersion)
l.Infoln(build.LongVersion)
l.Infoln("My ID:", myID)
// Select SHA256 implementation and report. Affected by the
@@ -657,7 +610,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
// Emit the Starting event, now that we know who we are.
events.Default.Log(events.Starting, map[string]string{
"home": baseDirs["config"],
"home": locations.GetBaseDir(locations.ConfigBaseDir),
"myID": myID.String(),
})
@@ -681,7 +634,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
perf := cpuBench(3, 150*time.Millisecond, true)
l.Infof("Hashing performance is %.02f MB/s", perf)
dbFile := locations[locDatabase]
dbFile := locations.Get(locations.Database)
ldb, err := db.Open(dbFile)
if err != nil {
l.Fatalln("Error opening database:", err)
@@ -696,10 +649,10 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
}
protectedFiles := []string{
locations[locDatabase],
locations[locConfigFile],
locations[locCertFile],
locations[locKeyFile],
locations.Get(locations.Database),
locations.Get(locations.ConfigFile),
locations.Get(locations.CertFile),
locations.Get(locations.KeyFile),
}
// Remove database entries for folders that no longer exist in the config
@@ -721,10 +674,10 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
// 0.14.45-pineapple is not.
prevParts := strings.Split(prevVersion, "-")
curParts := strings.Split(Version, "-")
curParts := strings.Split(build.Version, "-")
if prevParts[0] != curParts[0] {
if prevVersion != "" {
l.Infoln("Detected upgrade from", prevVersion, "to", Version)
l.Infoln("Detected upgrade from", prevVersion, "to", build.Version)
}
// Drop delta indexes in case we've changed random stuff we
@@ -732,16 +685,16 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
db.DropDeltaIndexIDs(ldb)
// Remember the new version.
miscDB.PutString("prevVersion", Version)
miscDB.PutString("prevVersion", build.Version)
}
m := model.NewModel(cfg, myID, "syncthing", Version, ldb, protectedFiles)
m := model.NewModel(cfg, myID, "syncthing", build.Version, ldb, protectedFiles)
if t := os.Getenv("STDEADLOCKTIMEOUT"); t != "" {
if secs, _ := strconv.Atoi(t); secs > 0 {
m.StartDeadlockDetector(time.Duration(secs) * time.Second)
}
} else if !IsRelease || IsBeta {
} else if !build.IsRelease || build.IsBeta {
m.StartDeadlockDetector(20 * time.Minute)
}
@@ -823,9 +776,11 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
if runtimeOptions.cpuProfile {
f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
if err != nil {
log.Fatal(err)
l.Fatalln("Creating profile:", err)
}
if err := pprof.StartCPUProfile(f); err != nil {
l.Fatalln("Starting profile:", err)
}
pprof.StartCPUProfile(f)
}
myDev, _ := cfg.Device(myID)
@@ -838,7 +793,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
// Candidate builds always run with usage reporting.
if opts := cfg.Options(); IsCandidate {
if opts := cfg.Options(); build.IsCandidate {
l.Infoln("Anonymous usage reporting is always enabled for candidate releases.")
if opts.URAccepted != usageReportVersion {
opts.URAccepted = usageReportVersion
@@ -866,7 +821,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
// unless we are in a build where it's disabled or the STNOUPGRADE
// environment variable is set.
if IsCandidate && !upgrade.DisabledByCompilation && !noUpgradeFromEnv {
if build.IsCandidate && !upgrade.DisabledByCompilation && !noUpgradeFromEnv {
l.Infoln("Automatic upgrade is always enabled for candidate releases.")
if opts := cfg.Options(); opts.AutoUpgradeIntervalH == 0 || opts.AutoUpgradeIntervalH > 24 {
opts.AutoUpgradeIntervalH = 12
@@ -939,7 +894,7 @@ func setupSignalHandling() {
}
func loadOrDefaultConfig() (*config.Wrapper, error) {
cfgFile := locations[locConfigFile]
cfgFile := locations.Get(locations.ConfigFile)
cfg, err := config.Load(cfgFile, myID)
if err != nil {
@@ -950,7 +905,7 @@ func loadOrDefaultConfig() (*config.Wrapper, error) {
}
func loadConfigAtStartup() *config.Wrapper {
cfgFile := locations[locConfigFile]
cfgFile := locations.Get(locations.ConfigFile)
cfg, err := config.Load(cfgFile, myID)
if os.IsNotExist(err) {
cfg = defaultConfig(cfgFile)
@@ -1014,7 +969,7 @@ func startAuditing(mainService *suture.Supervisor, auditFile string) {
auditDest = "stderr"
} else {
if auditFile == "" {
auditFile = timestampedLoc(locAuditLog)
auditFile = locations.GetTimestamped(locations.AuditLog)
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_EXCL
} else {
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_APPEND
@@ -1050,7 +1005,7 @@ func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Mode
cpu := newCPUService()
mainService.Add(cpu)
api := newAPIService(myID, cfg, locations[locHTTPSCertFile], locations[locHTTPSKeyFile], runtimeOptions.assetDir, m, defaultSub, diskSub, discoverer, connectionsService, errors, systemLog, cpu)
api := newAPIService(myID, cfg, locations.Get(locations.HTTPSCertFile), locations.Get(locations.HTTPSKeyFile), runtimeOptions.assetDir, m, defaultSub, diskSub, discoverer, connectionsService, errors, systemLog, cpu)
cfg.Subscribe(api)
mainService.Add(api)
@@ -1058,55 +1013,25 @@ func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Mode
// Can potentially block if the utility we are invoking doesn't
// fork, and just execs, hence keep it in its own routine.
<-api.startedOnce
go openURL(guiCfg.URL())
go func() { _ = openURL(guiCfg.URL()) }()
}
}
func defaultConfig(cfgFile string) *config.Wrapper {
myName, _ := os.Hostname()
newCfg := config.NewWithFreePorts(myID)
var defaultFolder config.FolderConfiguration
if !noDefaultFolder {
l.Infoln("Default folder created and/or linked to new config")
defaultFolder = config.NewFolderConfiguration(myID, "default", "Default Folder", fs.FilesystemTypeBasic, locations[locDefFolder])
} else {
if noDefaultFolder {
l.Infoln("We will skip creation of a default folder on first start since the proper envvar is set")
return config.Wrap(cfgFile, newCfg)
}
thisDevice := config.NewDeviceConfiguration(myID, myName)
thisDevice.Addresses = []string{"dynamic"}
newCfg := config.New(myID)
if !noDefaultFolder {
newCfg.Folders = []config.FolderConfiguration{defaultFolder}
}
newCfg.Devices = []config.DeviceConfiguration{thisDevice}
port, err := getFreePort("127.0.0.1", 8384)
if err != nil {
l.Fatalln("get free port (GUI):", err)
}
newCfg.GUI.RawAddress = fmt.Sprintf("127.0.0.1:%d", port)
port, err = getFreePort("0.0.0.0", 22000)
if err != nil {
l.Fatalln("get free port (BEP):", err)
}
if port == 22000 {
newCfg.Options.ListenAddresses = []string{"default"}
} else {
newCfg.Options.ListenAddresses = []string{
fmt.Sprintf("tcp://%s", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
"dynamic+https://relays.syncthing.net/endpoint",
}
}
newCfg.Folders = append(newCfg.Folders, config.NewFolderConfiguration(myID, "default", "Default Folder", fs.FilesystemTypeBasic, locations.Get(locations.DefFolder)))
l.Infoln("Default folder created and/or linked to new config")
return config.Wrap(cfgFile, newCfg)
}
func resetDB() error {
return os.RemoveAll(locations[locDatabase])
return os.RemoveAll(locations.Get(locations.Database))
}
func restart() {
@@ -1141,27 +1066,6 @@ func ensureDir(dir string, mode fs.FileMode) {
}
}
// getFreePort returns a free TCP port for listening on. The ports given are
// tried in succession and the first to succeed is returned. If none succeed,
// a random high port is returned.
func getFreePort(host string, ports ...int) (int, error) {
for _, port := range ports {
c, err := net.Listen("tcp", fmt.Sprintf("%s:%d", host, port))
if err == nil {
c.Close()
return port, nil
}
}
c, err := net.Listen("tcp", host+":0")
if err != nil {
return 0, err
}
addr := c.Addr().(*net.TCPAddr)
c.Close()
return addr.Port, nil
}
func standbyMonitor() {
restartDelay := 60 * time.Second
now := time.Now()
@@ -1189,10 +1093,10 @@ func autoUpgrade(cfg *config.Wrapper) {
select {
case event := <-sub.C():
data, ok := event.Data.(map[string]string)
if !ok || data["clientName"] != "syncthing" || upgrade.CompareVersions(data["clientVersion"], Version) != upgrade.Newer {
if !ok || data["clientName"] != "syncthing" || upgrade.CompareVersions(data["clientVersion"], build.Version) != upgrade.Newer {
continue
}
l.Infof("Connected to device %s with a newer version (current %q < remote %q). Checking for upgrades.", data["id"], Version, data["clientVersion"])
l.Infof("Connected to device %s with a newer version (current %q < remote %q). Checking for upgrades.", data["id"], build.Version, data["clientVersion"])
case <-timer.C:
}
@@ -1204,7 +1108,7 @@ func autoUpgrade(cfg *config.Wrapper) {
checkInterval = time.Hour
}
rel, err := upgrade.LatestRelease(opts.ReleasesURL, Version, opts.UpgradeToPreReleases)
rel, err := upgrade.LatestRelease(opts.ReleasesURL, build.Version, opts.UpgradeToPreReleases)
if err == upgrade.ErrUpgradeUnsupported {
events.Default.Unsubscribe(sub)
return
@@ -1217,13 +1121,13 @@ func autoUpgrade(cfg *config.Wrapper) {
continue
}
if upgrade.CompareVersions(rel.Tag, Version) != upgrade.Newer {
if upgrade.CompareVersions(rel.Tag, build.Version) != upgrade.Newer {
// Skip equal, older or majorly newer (incompatible) versions
timer.Reset(checkInterval)
continue
}
l.Infof("Automatic upgrade (current %q < latest %q)", Version, rel.Tag)
l.Infof("Automatic upgrade (current %q < latest %q)", build.Version, rel.Tag)
err = upgrade.To(rel)
if err != nil {
l.Warnln("Automatic upgrade:", err)
@@ -1256,7 +1160,7 @@ func cleanConfigDirectory() {
}
for pat, dur := range patterns {
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, baseDirs["config"])
fs := fs.NewFilesystem(fs.FilesystemTypeBasic, locations.GetBaseDir(locations.ConfigBaseDir))
files, err := fs.Glob(pat)
if err != nil {
l.Infoln("Cleaning:", err)
@@ -1297,13 +1201,13 @@ func checkShortIDs(cfg *config.Wrapper) error {
}
func showPaths(options RuntimeOptions) {
fmt.Printf("Configuration file:\n\t%s\n\n", locations[locConfigFile])
fmt.Printf("Database directory:\n\t%s\n\n", locations[locDatabase])
fmt.Printf("Device private key & certificate files:\n\t%s\n\t%s\n\n", locations[locKeyFile], locations[locCertFile])
fmt.Printf("HTTPS private key & certificate files:\n\t%s\n\t%s\n\n", locations[locHTTPSKeyFile], locations[locHTTPSCertFile])
fmt.Printf("Configuration file:\n\t%s\n\n", locations.Get(locations.ConfigFile))
fmt.Printf("Database directory:\n\t%s\n\n", locations.Get(locations.Database))
fmt.Printf("Device private key & certificate files:\n\t%s\n\t%s\n\n", locations.Get(locations.KeyFile), locations.Get(locations.CertFile))
fmt.Printf("HTTPS private key & certificate files:\n\t%s\n\t%s\n\n", locations.Get(locations.HTTPSKeyFile), locations.Get(locations.HTTPSCertFile))
fmt.Printf("Log file:\n\t%s\n\n", options.logFile)
fmt.Printf("GUI override directory:\n\t%s\n\n", options.assetDir)
fmt.Printf("Default sync folder directory:\n\t%s\n\n", locations[locDefFolder])
fmt.Printf("Default sync folder directory:\n\t%s\n\n", locations.Get(locations.DefFolder))
}
func setPauseState(cfg *config.Wrapper, paused bool) {


@@ -36,29 +36,3 @@ func TestShortIDCheck(t *testing.T) {
t.Error("Should have gotten an error")
}
}
func TestAllowedVersions(t *testing.T) {
testcases := []struct {
ver string
allowed bool
}{
{"v0.13.0", true},
{"v0.12.11+22-gabcdef0", true},
{"v0.13.0-beta0", true},
{"v0.13.0-beta47", true},
{"v0.13.0-beta47+1-gabcdef0", true},
{"v0.13.0-beta.0", true},
{"v0.13.0-beta.47", true},
{"v0.13.0-beta.0+1-gabcdef0", true},
{"v0.13.0-beta.47+1-gabcdef0", true},
{"v0.13.0-some-weird-but-allowed-tag", true},
{"v0.13.0-allowed.to.do.this", true},
{"v0.13.0+not.allowed.to.do.this", false},
}
for i, c := range testcases {
if allowed := allowedVersionExp.MatchString(c.ver); allowed != c.allowed {
t.Errorf("%d: incorrect result %v != %v for %q", i, allowed, c.allowed, c.ver)
}
}
}


@@ -17,6 +17,7 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/lib/locations"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/sync"
)
@@ -127,7 +128,7 @@ func monitorMain(runtimeOptions RuntimeOptions) {
select {
case s := <-stopSign:
l.Infof("Signal %d received; exiting", s)
cmd.Process.Kill()
cmd.Process.Signal(sigTerm)
<-exit
return
@@ -198,7 +199,7 @@ func copyStderr(stderr io.Reader, dst io.Writer) {
}
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(timestampedLoc(locPanicLog))
panicFd, err = os.Create(locations.GetTimestamped(locations.PanicLog))
if err != nil {
l.Warnln("Create panic log:", err)
continue
@@ -418,10 +419,10 @@ func (f *autoclosedFile) closerLoop() {
func childEnv() []string {
var env []string
for _, str := range os.Environ() {
if strings.HasPrefix("STNORESTART=", str) {
if strings.HasPrefix(str, "STNORESTART=") {
continue
}
if strings.HasPrefix("STMONITORED=", str) {
if strings.HasPrefix(str, "STMONITORED=") {
continue
}
env = append(env, str)
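
The fix above matters because strings.HasPrefix takes the string first and the prefix second; with the arguments swapped, the check asked whether the short literal started with the full environment entry and therefore never matched:

    strings.HasPrefix("STNORESTART=1", "STNORESTART=") // true  - correct order: HasPrefix(s, prefix)
    strings.HasPrefix("STNORESTART=", "STNORESTART=1") // false - the old, swapped call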


@@ -38,7 +38,10 @@ func savePerfStats(file string) {
t0 := time.Now()
for t := range time.NewTicker(250 * time.Millisecond).C {
syscall.Getrusage(syscall.RUSAGE_SELF, &rusage)
if err := syscall.Getrusage(syscall.RUSAGE_SELF, &rusage); err != nil {
continue
}
curTime := time.Now().UnixNano()
timeDiff := curTime - prevTime
curUsage := rusage.Utime.Nano() + rusage.Stime.Nano()


@@ -20,6 +20,7 @@ import (
"sync"
"time"
"github.com/syncthing/syncthing/lib/build"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections"
"github.com/syncthing/syncthing/lib/dialer"
@@ -41,8 +42,8 @@ func reportData(cfg configIntf, m modelIntf, connectionsService connectionsIntf,
res := make(map[string]interface{})
res["urVersion"] = version
res["uniqueID"] = opts.URUniqueID
res["version"] = Version
res["longVersion"] = LongVersion
res["version"] = build.Version
res["longVersion"] = build.LongVersion
res["platform"] = runtime.GOOS + "-" + runtime.GOARCH
res["numFolders"] = len(cfg.Folders())
res["numDevices"] = len(cfg.Devices())
@@ -190,7 +191,7 @@ func reportData(cfg configIntf, m modelIntf, connectionsService connectionsIntf,
res["upgradeAllowedPre"] = !(upgrade.DisabledByCompilation || noUpgradeFromEnv) && opts.AutoUpgradeIntervalH > 0 && opts.UpgradeToPreReleases
if version >= 3 {
res["uptime"] = int(time.Now().Sub(startTime).Seconds())
res["uptime"] = int(time.Since(startTime).Seconds())
res["natType"] = connectionsService.NATType()
res["alwaysLocalNets"] = len(opts.AlwaysLocalNets) > 0
res["cacheIgnoredFiles"] = opts.CacheIgnoredFiles
@@ -348,7 +349,9 @@ func newUsageReportingService(cfg *config.Wrapper, model *model.Model, connectio
func (s *usageReportingService) sendUsageReport() error {
d := reportData(s.cfg, s.model, s.connectionsService, s.cfg.Options().URAccepted, false)
var b bytes.Buffer
json.NewEncoder(&b).Encode(d)
if err := json.NewEncoder(&b).Encode(d); err != nil {
return err
}
client := &http.Client{
Transport: &http.Transport{
@@ -417,7 +420,7 @@ func (s *usageReportingService) Stop() {
s.stopMut.RUnlock()
}
func (usageReportingService) String() string {
func (*usageReportingService) String() string {
return "usageReportingService"
}


@@ -6,4 +6,5 @@ Exec=/usr/bin/syncthing -no-browser
Icon=syncthing
Terminal=false
Type=Application
Keywords=synchronization;daemon;
Categories=Network;FileTransfer;P2P


@@ -6,4 +6,5 @@ Exec=/usr/bin/syncthing -browser-only
Icon=syncthing
Terminal=false
Type=Application
Keywords=synchronization;interface;
Categories=Network;FileTransfer;P2P

go.mod

@@ -1,47 +1,43 @@
module github.com/syncthing/syncthing
require (
github.com/AudriusButkevicius/cli v0.0.0-20140727204646-7f561c78b5a4
github.com/AudriusButkevicius/go-nat-pmp v0.0.0-20160522074932-452c97607362
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a // indirect
github.com/AudriusButkevicius/recli v0.0.5
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e
github.com/calmh/du v1.0.1
github.com/calmh/xdr v1.1.0
github.com/chmduquesne/rollinghash v0.0.0-20180912150627-a60f8e7142b5
github.com/d4l3k/messagediff v1.2.1
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568
github.com/gobwas/glob v0.0.0-20170212200151-51eb1ee00b6d
github.com/gogo/protobuf v1.2.0
github.com/golang/groupcache v0.0.0-20171101203131-84a468cf14b4
github.com/golang/protobuf v0.0.0-20171113180720-1e59b77b52bf // indirect
github.com/golang/snappy v0.0.0-20170215233205-553a64147049 // indirect
github.com/jackpal/gateway v0.0.0-20161225004348-5795ac81146e
github.com/kballard/go-shellquote v0.0.0-20170619183022-cd60e84ee657
github.com/kr/pretty v0.1.0 // indirect
github.com/lib/pq v1.0.0
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128
github.com/mattn/go-isatty v0.0.4
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3 // indirect
github.com/onsi/gomega v0.0.0-20171227184521-ba3724c94e4d // indirect
github.com/oschwald/geoip2-golang v1.1.0
github.com/oschwald/maxminddb-golang v0.0.0-20170901134056-26fe5ace1c70 // indirect
github.com/petermattis/goid v0.0.0-20170816195418-3db12ebb2a59 // indirect
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e
github.com/pkg/errors v0.8.1
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v0.9.0
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5 // indirect
github.com/prometheus/common v0.0.0-20171117163051-2e54d0b93cba // indirect
github.com/prometheus/procfs v0.0.0-20171226183907-b15cd069a834 // indirect
github.com/prometheus/client_golang v0.9.2
github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9
github.com/sasha-s/go-deadlock v0.2.0
github.com/stretchr/testify v1.2.2 // indirect
github.com/syncthing/notify v0.0.0-20181107104724-4e389ea6c0d8
github.com/syndtr/goleveldb v0.0.0-20171214120811-34011bf325bc
github.com/thejerf/suture v0.0.0-20180907184608-bf6ee6a0b047
github.com/thejerf/suture v3.0.2+incompatible
github.com/urfave/cli v1.20.0
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
golang.org/x/crypto v0.0.0-20171231215028-0fcca4842a8d
golang.org/x/net v0.0.0-20171212005608-d866cfc389ce
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f // indirect
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc
golang.org/x/sys v0.0.0-20181213200352-4d1cda033e06 // indirect
golang.org/x/text v0.0.0-20171227012246-e19ae1496984
golang.org/x/time v0.0.0-20170927054726-6dc17368e09b

go.sum

@@ -1,9 +1,9 @@
github.com/AudriusButkevicius/cli v0.0.0-20140727204646-7f561c78b5a4 h1:Cy4N5BdzSyWRnkNyzkIMKPSuzENT4AGxC+YFo0OOcCI=
github.com/AudriusButkevicius/cli v0.0.0-20140727204646-7f561c78b5a4/go.mod h1:mK5FQv1k6rd64lZeDQ+JgG5hSERyVEYeC3qXrbN+2nw=
github.com/AudriusButkevicius/go-nat-pmp v0.0.0-20160522074932-452c97607362 h1:l4qGIzSY0WhdXdR74XMYAtfc0Ri/RJVM4p6x/E/+WkA=
github.com/AudriusButkevicius/go-nat-pmp v0.0.0-20160522074932-452c97607362/go.mod h1:CEaBhA5lh1spxbPOELh5wNLKGsVQoahjUhVrJViVK8s=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a h1:BtpsbiV638WQZwhA98cEZw2BsbnQJrbd0BI7tsy0W1c=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/AudriusButkevicius/recli v0.0.5 h1:xUa55PvWTHBm17T6RvjElRO3y5tALpdceH86vhzQ5wg=
github.com/AudriusButkevicius/recli v0.0.5/go.mod h1:Q2E26yc6RvWWEz/TJ/goUp6yXvipYdJI096hpoaqsNs=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e h1:2augTYh6E+XoNrrivZJBadpThP/dsvYKj0nzqfQ8tM4=
github.com/bkaradzic/go-lz4 v0.0.0-20160924222819-7224d8d8f27e/go.mod h1:0YdlkowM3VswSROI7qDxhRvJ3sLhlFrRRwjwegp5jy4=
github.com/calmh/du v1.0.1 h1:uDCrDbXVVPrzxSNRkpj6nqSfwrl5uRWH3zvrJgl7RRo=
@@ -16,14 +16,16 @@ github.com/d4l3k/messagediff v1.2.1 h1:ZcAIMYsUg0EAp9X+tt8/enBE/Q8Yd5kzPynLyKptt
github.com/d4l3k/messagediff v1.2.1/go.mod h1:Oozbb1TVXFac9FtSIxHBMnBCq2qeH/2KkEQxENCrlLo=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568 h1:BMXYYRWTLOJKlh+lOBt6nUQgXAfB7oVIQt5cNreqSLI=
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:rZfgFAXFS/z/lEd6LJmf9HVZ1LkgYiHx5pHhV5DR16M=
github.com/gobwas/glob v0.0.0-20170212200151-51eb1ee00b6d h1:IngNQgbqr5ZOU0exk395Szrvkzes9Ilk1fmJfkw7d+M=
github.com/gobwas/glob v0.0.0-20170212200151-51eb1ee00b6d/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/gogo/protobuf v1.2.0 h1:xU6/SpYbvkNYiptHJYEDRseDLvYE7wSqhYYNy0QSUzI=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/groupcache v0.0.0-20171101203131-84a468cf14b4 h1:6o8aP0LGMKzo3NzwhhX6EJsiJ3ejmj+9yA/3p8Fjjlw=
github.com/golang/groupcache v0.0.0-20171101203131-84a468cf14b4/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v0.0.0-20171113180720-1e59b77b52bf h1:pFr/u+m8QUBMW/itAczltF3guNRAL7XDs5tD3f6nSD0=
github.com/golang/protobuf v0.0.0-20171113180720-1e59b77b52bf/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.0-20170215233205-553a64147049 h1:K9KHZbXKpGydfDN0aZrsoHpLJlZsBrGMFWbgLDGnPZk=
github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/jackpal/gateway v0.0.0-20161225004348-5795ac81146e h1:lS8IitpqG4RkZbEDlZg5Z7FvBdWLVjSVfsPGOKafEkI=
@@ -37,10 +39,12 @@ github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.0.0 h1:X5PMW56eZitiTeO7tKzZxFCSpbFZJtkMMooicw2us9A=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128 h1:hEDK0Zao06IGlO1ada0FLT2g3KEot2vCqFp8gdvJqzM=
github.com/minio/sha256-simd v0.0.0-20190104231041-e529fa194128/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338 h1:USW1+zAUkUSvk097CAX/i8KR3r6f+DHNhk6Xe025Oyw=
github.com/minio/sha256-simd v0.0.0-20190117184323-cc1980cb0338/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3 h1:ZN7kHmC0iunA+4UPmERwsuMQan4lUnntO6WX6H1jOO8=
github.com/onsi/ginkgo v0.0.0-20171221013426-6c46eb8334b3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20171227184521-ba3724c94e4d h1:r351oUAFgdsydkt/g+XR/iJWRwyxVpy6nkNdEl/QdAs=
@@ -51,18 +55,18 @@ github.com/oschwald/maxminddb-golang v0.0.0-20170901134056-26fe5ace1c70 h1:XGLYU
github.com/oschwald/maxminddb-golang v0.0.0-20170901134056-26fe5ace1c70/go.mod h1:3jhIUymTJ5VREKyIhWm66LJiQt04F0UCDdodShpjWsY=
github.com/petermattis/goid v0.0.0-20170816195418-3db12ebb2a59 h1:2pHcLyJYXivxVvpoCc29uo3GDU1qFfJ1ggXKGYMrM0E=
github.com/petermattis/goid v0.0.0-20170816195418-3db12ebb2a59/go.mod h1:jvVRKCrJTQWu0XVbaOlby/2lO20uSCHEMzzplHXte1o=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e h1:+RHxT/gm0O3UF7nLJbdNzAmULvCFt4XfXHWzh3XI/zs=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.0 h1:tXuTFVHC03mW0D+Ua1Q2d1EAVqLTuggX50V0VLICCzY=
github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5 h1:cLL6NowurKLMfCeQy4tIeph12XNQWgANCNvdyrOYKV4=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20171117163051-2e54d0b93cba h1:/MUKoJbk4oXV3uxkpfHVkmVfL+wzWW6dttaW26s07Gg=
github.com/prometheus/common v0.0.0-20171117163051-2e54d0b93cba/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20171226183907-b15cd069a834 h1:HRxr4uZnx/S86wVQsfXcKhadpzdceXn2qCzCtagcI6w=
github.com/prometheus/procfs v0.0.0-20171226183907-b15cd069a834/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/client_golang v0.9.2 h1:awm861/B8OKDd2I/6o1dy3ra4BamzKhYOiGItCeZ740=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910 h1:idejC8f05m9MGOsuEi1ATq9shN03HrxNkD/luQvxCv8=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275 h1:PnBWHBf+6L0jOqq0gIVUe6Yk0/QMZ640k6NvkxcBf+8=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a h1:9a8MnZMP0X2nLJdBg+pBmGgkJlSaKC2KaQmTCk1XDtE=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9 h1:jmLW6izPBVlIbk4d+XgK9+sChGbVKxxOPmd9eqRHCjw=
github.com/rcrowley/go-metrics v0.0.0-20171128170426-e181e095bae9/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/sasha-s/go-deadlock v0.2.0 h1:lMqc+fUb7RrFS3gQLtoQsJ7/6TV/pAIFvBsqX73DK8Y=
@@ -73,14 +77,16 @@ github.com/syncthing/notify v0.0.0-20181107104724-4e389ea6c0d8 h1:ewsMW/a4xDpqHy
github.com/syncthing/notify v0.0.0-20181107104724-4e389ea6c0d8/go.mod h1:Sn4ChoS7e4FxjCN1XHPVBT43AgnRLbuaB8pEc1Zcdjg=
github.com/syndtr/goleveldb v0.0.0-20171214120811-34011bf325bc h1:yhWARKbbDg8UBRi/M5bVcVOBg2viFKcNJEAtHMYbRBo=
github.com/syndtr/goleveldb v0.0.0-20171214120811-34011bf325bc/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0=
github.com/thejerf/suture v0.0.0-20180907184608-bf6ee6a0b047 h1:TRlvuQjC13jRLqqJTp8rbb5SjRTYCP/8sCIYRdEaJrg=
github.com/thejerf/suture v0.0.0-20180907184608-bf6ee6a0b047/go.mod h1:ibKwrVj+Uzf3XZdAiNWUouPaAbSoemxOHLmJmwheEMc=
github.com/thejerf/suture v3.0.2+incompatible h1:GtMydYcnK4zBJ0KL6Lx9vLzl6Oozb65wh252FTBxrvM=
github.com/thejerf/suture v3.0.2+incompatible/go.mod h1:ibKwrVj+Uzf3XZdAiNWUouPaAbSoemxOHLmJmwheEMc=
github.com/urfave/cli v1.20.0 h1:fDqGv3UG/4jbVl/QkFwEdddtEDjh/5Ov6X+0B/3bPaw=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0 h1:okhMind4q9H1OxF44gNegWkiP4H/gsTFLalHFa4OOUI=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXceWtbTHq6lqcTvYPBKLNejBEbnUsQJtU=
golang.org/x/crypto v0.0.0-20171231215028-0fcca4842a8d h1:GrqEEc3+MtHKTsZrdIGVoYDgLpbSRzW1EF+nLu0PcHE=
golang.org/x/crypto v0.0.0-20171231215028-0fcca4842a8d/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/net v0.0.0-20171212005608-d866cfc389ce h1:4g3VPcb++AP2cNa6CQ0iACUoH7J/3Jxojq0mmJun9A4=
golang.org/x/net v0.0.0-20171212005608-d866cfc389ce/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc h1:a3CU5tJYVj92DY2LaA1kUkrsqD5/3mLDhx2NcNqyW+0=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180926160741-c2ed4eda69e7/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=


@@ -34,6 +34,7 @@
"Auto Accept": "Автоматично приемане",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Автоматичното обновяване вече предлага избор между стабилни версии и кандидат версии.",
"Automatic upgrades": "Автоматично обновяване",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Автоматично създаване или споделяне на папки, които това устройство предлага в пътя по подразбиране.",
"Available debug logging facilities:": "Дебъгинг функционалност на разположение:",
"Be careful!": "Внимание!",


@@ -34,6 +34,7 @@
"Auto Accept": "Auto Acceptar",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "L'actualització automàtica ara ofereix l'elecció entre les versions estables i les versions candidates.",
"Automatic upgrades": "Actualitzacions automàtiques",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Crear o compartir automàticament les carpetes que aquest dispositiu anuncia en la ruta per defecte.",
"Available debug logging facilities:": "Hi han disponibles les següents utilitats per a depurar el registre:",
"Be careful!": "Tin precaució!",


@@ -34,6 +34,7 @@
"Auto Accept": "Přijmout automaticky",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatická aktualizace nyní nabízí volbu mezi stabilními vydáními a kandidáty na vydání.",
"Automatic upgrades": "Automatické aktualizace",
"Automatic upgrades are always enabled for candidate releases.": "Automatické aktualizace jsou vždy povolené u kandidátů na vydání.",
"Automatically create or share folders that this device advertises at the default path.": "Automaticky vytvářet nebo sdílet adresáře, které toto zařízení odesílá ve výchozí cestě.",
"Available debug logging facilities:": "Dostupná logovací zařízení pro ladění:",
"Be careful!": "Pozor!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Autoacceptér",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Den automatiske opdatering tilbyder nu valget mellem stabile udgivelser og udgivelseskandidater.",
"Automatic upgrades": "Automatisk opdatering",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Opret eller del automatisk mapper på standardstien, som denne enhed tilbyder.",
"Available debug logging facilities:": "Tilgængelige faciliteter for fejlretningslogning:",
"Be careful!": "Vær forsigtig!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Automatische Annahme",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Die automatische Aktualisierung bietet jetzt die Wahl zwischen stabilen Veröffentlichungen und Veröffentlichungskandidaten.",
"Automatic upgrades": "Automatische Aktualisierungen aktivieren",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatisch Ordner erstellen oder freigeben, die dieses Gerät im Standardpfad ankündigt.",
"Available debug logging facilities:": "Verfügbare Debugging-Möglichkeiten:",
"Be careful!": "Vorsicht!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Αυτόματη αποδοχή",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Για τις αυτόματες αναβαθμίσεις μπορείτε πλέον να επιλέξετε μεταξύ σταθερών εκδόσεων και υποψήφιων εκδόσεων.",
"Automatic upgrades": "Αυτόματη αναβάθμιση",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Αυτόματη δημιουργία ή κοινή χρήση φακέλων τους οποίους ανακοινώνει αυτή η συσκευή στην προκαθορισμένη διαδρομή.",
"Available debug logging facilities:": "Διαθέσιμες επιλογές μηνυμάτων αποσφαλμάτωσης:",
"Be careful!": "Με προσοχή!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Auto Accept",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatic upgrade now offers the choice between stable releases and release candidates.",
"Automatic upgrades": "Automatic upgrades",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatically create or share folders that this device advertises at the default path.",
"Available debug logging facilities:": "Available debug logging facilities:",
"Be careful!": "Be careful!",

View File

@@ -349,6 +349,9 @@
"Uptime": "Uptime",
"Usage reporting is always enabled for candidate releases.": "Usage reporting is always enabled for candidate releases.",
"Use HTTPS for GUI": "Use HTTPS for GUI",
"Use notifications from the filesystem to detect changed items.": "Use notifications from the filesystem to detect changed items.",
"Variable Size Blocks": "Variable Size Blocks",
"Variable size blocks (also \"large blocks\") are more efficient for large files.": "Variable size blocks (also \"large blocks\") are more efficient for large files.",
"Version": "Version",
"Versions": "Versions",
"Versions Path": "Versions Path",
@@ -361,6 +364,7 @@
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Warning: If you are using an external watcher like {{syncthingInotify}}, you should make sure it is deactivated.",
"Watch for Changes": "Watch for Changes",
"Watching for Changes": "Watching for Changes",
"Watching for changes discovers most changes without periodic scanning.": "Watching for changes discovers most changes without periodic scanning.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "When adding a new device, keep in mind that this device must be added on the other side too.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.",
"Yes": "Yes",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Auto aceptar",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Ahora la actualización automática permite elegir entre versiones estables o versiones candidatas.",
"Automatic upgrades": "Actualizaciones automáticas",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Crear o compartir automáticamente las carpetas que este dispositivo anuncia en la ruta por defecto.",
"Available debug logging facilities:": "Ayudas disponibles para la depuración del registro:",
"Be careful!": "¡Ten cuidado!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Aceptar automáticamente",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Ahora la actualización automática permite elegir entre versiones estables o versiones candidatas.",
"Automatic upgrades": "Actualizaciones automáticas",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Crear o compartir automáticamente carpetas que este dispositivo anuncia en la ruta por defecto.",
"Available debug logging facilities:": "Funciones de registro de depuración disponibles:",
"Be careful!": "¡Ten cuidado!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Hyväksy automaattisesti",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automaattinen päivitys sallii valita vakaiden- ja kehitysversioiden välillä.",
"Automatic upgrades": "Automaattiset päivitykset",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Hyväksy automaattisesti kansioiden luonti tai jakaminen, jotka tämä laite ehdottaa oletuspolussa.",
"Available debug logging facilities:": "Saatavilla olevat debug-luokat:",
"Be careful!": "Ole varovainen!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Accepter automatiquement",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Le système de mise à jour automatique propose le choix entre versions stables et versions préliminaires.",
"Automatic upgrades": "Mises à jour automatiques",
"Automatic upgrades are always enabled for candidate releases.": "Les mises à jour automatiques sont toujours activées pour les versions préliminaires (-rc.N).",
"Automatically create or share folders that this device advertises at the default path.": "ATTENTION !!! Créer ou partager automatiquement dans le chemin par défaut les partages que cet appareil annonce.",
"Available debug logging facilities:": "Outils de débogage disponibles :",
"Be careful!": "Faites attention !",
@@ -139,12 +140,12 @@
"Global State": "État global",
"Help": "Aide",
"Home page": "Page d'accueil",
"Ignore": "Ignorer",
"Ignore": "Refuser",
"Ignore Patterns": "Exclusions...",
"Ignore Permissions": "Ignorer les permissions",
"Ignored Devices": "Appareils ignorés",
"Ignored Folders": "Partages ignorés",
"Ignored at": "Ignoré le",
"Ignored Devices": "Appareils refusés",
"Ignored Folders": "Partages refusés",
"Ignored at": "Refusé le",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Introduced By": "Introduit par",
@@ -346,7 +347,7 @@
"Upgrading": "Mise à jour de Syncthing",
"Upload Rate": "Débit d'envoi",
"Uptime": "Durée de fonctionnement",
"Usage reporting is always enabled for candidate releases.": "Les statistiques d'utilisation sont toujours envoyées pour les versions préliminaires.",
"Usage reporting is always enabled for candidate releases.": "L'envoi des statistiques d'utilisation est obligatoirement actif pour les versions préliminaires.",
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
"Version": "Version",
"Versions": "Restauration...",
@@ -366,8 +367,8 @@
"You can also select one of these nearby devices:": "Vous pouvez également sélectionner l'un de ces appareils proches :",
"You can change your choice at any time in the Settings dialog.": "Vous pouvez changer votre choix dans la boîte de dialogue \"Configuration\".",
"You can read more about the two release channels at the link below.": "Vous pouvez en savoir plus sur les deux canaux de distribution via le lien ci-dessous.",
"You have no ignored devices.": "Vous n'avez aucun appareil ignoré.",
"You have no ignored folders.": "Vous n'avez aucun partage ignoré.",
"You have no ignored devices.": "Aucun appareil refusé.",
"You have no ignored folders.": "Aucun partage refusé.",
"You have unsaved changes. Do you really want to discard them?": "Vous avez des réglages non enregistrés. Voulez-vous vraiment les ignorer ?",
"You must keep at least one version.": "Vous devez garder au minimum une version.",
"days": "Jours",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Auto-akseptaasje",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatyske fernijing biedt no de kar tusken stabyle ferzjes en ferzje kandidaten",
"Automatic upgrades": "Automatyske fernijings",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Meitsje of diel automatysk mappen dy't dit apparaat advertearret op it standert paad.",
"Available debug logging facilities:": "Beskikbere debug-lochfoarsjennings:",
"Be careful!": "Tink derom!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Automatikus elfogadás",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Az automatikus frissítés most lehetőséget kínál a stabil és az előzetes kiadások közötti választásra.",
"Automatic upgrades": "Automatikus frissítések",
"Automatic upgrades are always enabled for candidate releases.": "Az előzetes kiadásokban az automatikus frissítések mindig engedélyezettek.",
"Automatically create or share folders that this device advertises at the default path.": "Az eszköz alapértelmezett útvonalon hirdetett mappáinak automatikus létrehozása vagy megosztása",
"Available debug logging facilities:": "Elérhető hibakeresésnaplózási képességek:",
"Be careful!": "Óvatosan!",

View File

@@ -32,8 +32,9 @@
"Are you sure you want to remove folder {%label%}?": "Sei sicuro di voler rimuovere la cartella {{label}}?",
"Are you sure you want to restore {%count%} files?": "Sei sicuro di voler ripristinare {{count}} file?",
"Auto Accept": "Accettazione Automatica",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Aggiornamenti automatici offrono la scelta tra rilasci stabili e candidati di rilascio.",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Gli aggiornamenti automatici offrono la scelta tra versioni stabili e versioni candidate al rilascio.",
"Automatic upgrades": "Aggiornamenti automatici",
"Automatic upgrades are always enabled for candidate releases.": "Gli aggiornamenti automatici sono sempre abilitati per le versioni candidate al rilascio.",
"Automatically create or share folders that this device advertises at the default path.": "Crea o condividi automaticamente le cartelle che questo dispositivo presenta sul percorso predefinito.",
"Available debug logging facilities:": "Servizi di debug disponibili:",
"Be careful!": "Fai attenzione!",
@@ -226,7 +227,7 @@
"Recent Changes": "Cambiamenti Recenti",
"Reduced by ignore patterns": "Ridotto da schemi di esclusione",
"Release Notes": "Note di Rilascio",
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Candidati di rilascio contengono le ultime funzionalita e aggiustamenti. Sono simili ai rilasci bisettimanali di Syncthing.",
"Release candidates contain the latest features and fixes. They are similar to the traditional bi-weekly Syncthing releases.": "Le versioni candidate al rilascio contengono le ultime funzionalità e aggiustamenti. Sono simili ai rilasci bisettimanali di Syncthing.",
"Remote Devices": "Dispositivi Remoti",
"Remove": "Rimuovi",
"Remove Device": "Rimuovi Dispositivo",
@@ -280,8 +281,8 @@
"Smallest First": "Prima il più piccolo",
"Some items could not be restored:": "Alcuni elementi non possono essere ripristinati:",
"Source Code": "Codice Sorgente",
"Stable releases and release candidates": "Rilasci stabili e candidati di rilascio",
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "Rilasci stabili sono in ritardo di circa due settimane. Durante questo tempo verranno testati come candidati di rilascio.",
"Stable releases and release candidates": "Versioni stabili e versioni candidate al rilascio",
"Stable releases are delayed by about two weeks. During this time they go through testing as release candidates.": "Le versioni stabili sono in ritardo di circa due settimane. Durante questo tempo verranno testati come candidati di rilascio.",
"Stable releases only": "Solo rilasci stabili",
"Staggered File Versioning": "Controllo Versione Cadenzato",
"Start Browser": "Avvia Browser",
@@ -346,7 +347,7 @@
"Upgrading": "Aggiornamento",
"Upload Rate": "Velocità Upload",
"Uptime": "Tempo di Funzionamento",
"Usage reporting is always enabled for candidate releases.": "Segnalazioni di utilizzo sono sempre abilitati per candidati di rilascio.",
"Usage reporting is always enabled for candidate releases.": "Le segnalazioni di utilizzo sono sempre abilitate le versioni candidate al rilascio.",
"Use HTTPS for GUI": "Utilizza HTTPS per l'interfaccia grafica",
"Version": "Versione",
"Versions": "Versioni",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "自動承諾",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "自動アップグレードは、安定版とリリース候補版のいずれかを選べるようになりました。",
"Automatic upgrades": "自動アップグレード",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatically create or share folders that this device advertises at the default path.",
"Available debug logging facilities:": "Available debug logging facilities:",
"Be careful!": "注意!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "자동 수락",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "자동 업데이트를 이제 안정 버전과 출시 후보 사이에 선택 할 수 있게 됩니다.",
"Automatic upgrades": "자동 업데이트",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "이 장치가 기본 경로에서 알리는 폴더를 자동으로 만들거나 공유합니다.",
"Available debug logging facilities:": "사용 가능한 디버그 로깅 기능:",
"Be careful!": "주의!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Automatiškai priimti",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatiniai atnaujinimai dabar siūlo pasirinkimą tarp stabilių versijų ir kandidatinių versijų.",
"Automatic upgrades": "Automatiniai atnaujinimai",
"Automatic upgrades are always enabled for candidate releases.": "Automatiniai naujinimai kandidatinėms versijoms visada yra įjungti.",
"Automatically create or share folders that this device advertises at the default path.": "Automatiškai sukurti ar dalintis aplankais, kuriuos šis įrenginys skelbia numatytajame kelyje.",
"Available debug logging facilities:": "Prieinamos derinimo registravimo priemonės:",
"Be careful!": "Būkite atsargūs!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Godta automatisk",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatisk oppgradering lar deg nå få valget mellom ferdige utgaver og utgivelseskandidater.",
"Automatic upgrades": "Automatiske oppdateringer",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Opprett eller del mapper automatisk i mapper som denne enheten melder som forvalgt mappe.",
"Available debug logging facilities:": "Tilgjengelige funksjoner for logging i feilrettingsøyemed:",
"Be careful!": "Vær forsiktig!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Automatisch aanvaarden",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatisch bijwerken biedt nu de keuze tussen stabiele releases en release canditates.",
"Automatic upgrades": "Automatische upgrades",
"Automatic upgrades are always enabled for candidate releases.": "Automatische upgrades zijn altijd ingeschakeld voor kandidaat-releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatisch mappen die dit apparaat aankondigt aanmaken of delen op de standaardlocatie.",
"Available debug logging facilities:": "Beschikbare debuglog-mogelijkheden:",
"Be careful!": "Wees voorzichtig!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Autoakceptacja",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatyczne aktualizacje pozwalają teraz wybrać pomiędzy wydaniami stabilnymi a wersjami kandydującymi.",
"Automatic upgrades": "Automatyczne aktualizacje",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatycznie utwórz lub udostępniaj katalogi udostępniane przez te urządzenie w domyślnej ścieżce",
"Available debug logging facilities:": "Dostępne narzędzia logowania debugowego",
"Be careful!": "Uważaj!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Aceitar automaticamente",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "A atualização automática agora oferece a escolha entre versões estáveis e candidatas ao lançamento.",
"Automatic upgrades": "Atualizações automáticas",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Criar ou compartilhar automaticamente pastas que este dispositivo anuncia no caminho padrão.",
"Available debug logging facilities:": "Facilidades de depuração disponíveis:",
"Be careful!": "Tenha cuidado!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Aceitar automaticamente",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "A actualização automática agora oferece a escolha entre versões estáveis e candidatas a lançamento.",
"Automatic upgrades": "Actualizações automáticas",
"Automatic upgrades are always enabled for candidate releases.": "As actualizações automáticas estão sempre activadas nas versões candidatas a lançamento.",
"Automatically create or share folders that this device advertises at the default path.": "Criar ou partilhar, de forma automática e no caminho predefinido, pastas que este dispositivo publicita.",
"Available debug logging facilities:": "Recursos de registo de depuração disponíveis:",
"Be careful!": "Tenha cuidado!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Автопринятие",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Автоматическое обновление теперь предлагает выбор между стабильными выпусками и кандидатами в релизы.",
"Automatic upgrades": "Автообновление",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automatically create or share folders that this device advertises at the default path.",
"Available debug logging facilities:": "Доступные средства отладочного журнала:",
"Be careful!": "Будьте осторожны!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Automatické prijatie",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatická aktualizácia teraz ponúka voľbu medzi stabilnými vydaniami a kandidátmi na vydanie.",
"Automatic upgrades": "Automatické aktualizácie",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Automaticky vytvoriť alebo zdieľať adresáre, ktoré toto zariadenie ohlasuje, v predvolenom adresári.",
"Available debug logging facilities:": "Available debug logging facilities:",
"Be careful!": "Buď opatrný!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Acceptera automatiskt",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Automatisk uppgradering erbjuder nu valet mellan stabila utgåvor och utgåvskandidater.",
"Automatic upgrades": "Automatiska uppgraderingar",
"Automatic upgrades are always enabled for candidate releases.": "Automatiska uppgraderingar är alltid aktiverade för kandidatutgåvor.",
"Automatically create or share folders that this device advertises at the default path.": "Skapa eller dela automatiskt mappar som den här enheten annonserar på standardsökvägen.",
"Available debug logging facilities:": "Tillgängliga felsökningsfunktioner:",
"Be careful!": "Var aktsam!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "Затверджувати автоматично пропоновані віддаленим пристроєм каталоги",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Автоматиче оновлення зараз дозволяє обирати між стабільними випусками та реліз-кандидатами.",
"Automatic upgrades": "Автоматичні оновлення",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "Автоматично створювати або поширювати каталоги, які цей пристрій декларує як створені по замовчанню.",
"Available debug logging facilities:": "Доступні засоби журналу для відладки:",
"Be careful!": "Будьте обережні!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "自动接受",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "自动升级现在提供了稳定版本和发布候选版之间的选择。",
"Automatic upgrades": "自动升级",
"Automatic upgrades are always enabled for candidate releases.": "候选版本会一直启用自动升级。",
"Automatically create or share folders that this device advertises at the default path.": "自动地创建或共享这个设备在默认路径通告的文件夹。",
"Available debug logging facilities:": "可用的调试日志功能:",
"Be careful!": "小心!",

View File

@@ -34,6 +34,7 @@
"Auto Accept": "自動接受",
"Automatic upgrade now offers the choice between stable releases and release candidates.": "自動更新目前有穩定發行版及發行候選版可供選擇。",
"Automatic upgrades": "自動升級",
"Automatic upgrades are always enabled for candidate releases.": "Automatic upgrades are always enabled for candidate releases.",
"Automatically create or share folders that this device advertises at the default path.": "自動在預設資料夾路徑建立或分享該裝置推薦的資料夾。",
"Available debug logging facilities:": "可用的除錯日誌工具:",
"Be careful!": "請小心!",

View File

@@ -376,7 +376,7 @@
<tr ng-if="model[folder.id].needTotalItems > 0">
<th><span class="fas fa-fw fa-cloud-download-alt"></span>&nbsp;<span translate>Out of Sync Items</span></th>
<td class="text-right">
<a href="" ng-click="showNeed(folder.id)">{{model[folder.id].needTotalItems | alwaysNumber}} <span translate>items</span>, ~{{model[folder.id].needBytes | binary}}B</a>
<a href="" ng-click="showNeed(folder.id)">{{model[folder.id].needTotalItems | alwaysNumber | localeNumber}} <span translate>items</span>, ~{{model[folder.id].needBytes | binary}}B</a>
</td>
</tr>
<tr ng-if="folderStatus(folder) === 'scanning' && scanRate(folder.id) > 0">
@@ -395,7 +395,7 @@
<tr ng-if="folder.type == 'receiveonly' && canRevert(folder.id)">
<th><span class="fas fa-fw fa-exclamation-circle"></span>&nbsp;<span translate>Locally Changed Items</span></th>
<td class="text-right">
<a href="" ng-click="showLocalChanged(folder.id)">{{model[folder.id].receiveOnlyTotalItems | alwaysNumber}} <span translate>items</span>, ~{{model[folder.id].receiveOnlyBytes | binary}}B</a>
<a href="" ng-click="showLocalChanged(folder.id)">{{model[folder.id].receiveOnlyTotalItems | alwaysNumber | localeNumber}} <span translate>items</span>, ~{{model[folder.id].receiveOnlyBytes | binary}}B</a>
</td>
</tr>
<tr ng-if="folder.type != 'sendreceive'">

View File

@@ -12,7 +12,7 @@
<p translate>Copyright &copy; 2014-2017 the following Contributors:</p>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, Aaron Bieber, Adam Piggott, Adel Qalieh, Alessandro G., Andrew Dunham, Andrew Rabert, Andrey D, Antoine Lamielle, Aranjedeath, Arthur Axel fREW Schmidt, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benno Fünfstück, Benny Ng, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Chris Tonkinson, Colin Kennedy, Cromefire_, Dale Visser, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Dmitry Saveliev, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Graham Miln, Han Boetes, Harrison Jones, Heiko Zuerker, Hugo Locurcio, Iain Barnett, Ian Johnson, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jaya Chithra, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonathan Cross, Jose Manuel Delicado, Jörg Thalheim, Karol Różycki, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Kurt Fitzner, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mark Pulford, Mateusz Naściszewski, Matic Potočnik, Matt Burke, Matteo Ruina, Maurizio Tomasi, Max Schulze, MaximAL, Maxime Thirouin, Michael Jephcote, Michael Tilli, Mike Boone, MikeLund, Nicholas Rishel, Nico Stapelbroek, Nicolas Braud-Santoni, Niels Peter Roest, Nils Jakobi, NoLooseEnds, Oyebanji Jacob Mayowa, Pascal Jungblut, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Richard Hartmann, Robert Carosi, Roman Zaynetdinov, Ross Smith II, Sacheendra Talluri, Scott Klupfel, Sly_tom_cat, Stefan Kuntz, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Nygren, Tobias Tom, Tomas Cerveny, Tommy Thorn, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, Vladimir Rusinov, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, chucic, derekriemer, desbma, janost, jaseg, klemens, marco-m, perewa, rubenbe, wangguoliang, xjtdy888, 佛跳墙
Jakob Borg, Audrius Butkevicius, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, Aaron Bieber, Adam Piggott, Adel Qalieh, Alessandro G., Andrew Dunham, Andrew Rabert, Andrey D, Antoine Lamielle, Aranjedeath, Arthur Axel fREW Schmidt, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benno Fünfstück, Benny Ng, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Chris Tonkinson, Colin Kennedy, Cromefire_, Dale Visser, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Dmitry Saveliev, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Graham Miln, Han Boetes, Harrison Jones, Heiko Zuerker, Hugo Locurcio, Iain Barnett, Ian Johnson, Iskander (Alex) Sharipov, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jaya Chithra, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonathan Cross, Jose Manuel Delicado, Jörg Thalheim, Karol Różycki, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Kurt Fitzner, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mark Pulford, Mateusz Naściszewski, Matic Potočnik, Matt Burke, Matteo Ruina, Maurizio Tomasi, Max Schulze, MaximAL, Maxime Thirouin, Michael Jephcote, Michael Tilli, Mike Boone, MikeLund, Nicholas Rishel, Nico Stapelbroek, Nicolas Braud-Santoni, Niels Peter Roest, Nils Jakobi, NoLooseEnds, Oyebanji Jacob Mayowa, Pascal Jungblut, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Richard Hartmann, Robert Carosi, Roman Zaynetdinov, Ross Smith II, Sacheendra Talluri, Scott Klupfel, Sly_tom_cat, Stefan Kuntz, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Nygren, Tobias Tom, Tomas Cerveny, Tommy Thorn, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, Vladimir Rusinov, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, chucic, derekriemer, desbma, janost, jaseg, klemens, marco-m, perewa, rubenbe, wangguoliang, xjtdy888, 佛跳墙
</div>
</div>
<hr />

View File

@@ -77,7 +77,8 @@ angular.module('syncthing.core')
staggeredVersionsPath: "",
externalCommand: "",
autoNormalize: true,
path: ""
path: "",
useLargeBlocks: true,
};
$scope.localStateTotal = {

View File

@@ -150,76 +150,78 @@
<div class="col-md-12">
<label translate>Scanning</label>
&nbsp;<a href="https://docs.syncthing.net/users/syncing.html#scanning" target="_blank"><span class="fas fa-question-circle"></span>&nbsp;<span translate>Help</span></a></br>
<div class="row">
<div class="col-md-6">
<input type="checkbox" ng-model="currentFolder.fsWatcherEnabled" ng-change="fsWatcherToggled()" tooltip data-original-title="{{'Use notifications from the filesystem to detect changed items.' | translate }}">&nbsp;<span translate>Watch for Changes</span>
<p translate class="help-block">Watching for changes discovers most changes without periodic scanning.</p>
<input type="checkbox" ng-model="currentFolder.useLargeBlocks"> <span translate>Variable Size Blocks</span>
<p translate class="help-block">Variable size blocks (also "large blocks") are more efficient for large files.</p>
</div>
<div class="col-md-6">
<div class="row">
<span class="col-md-8" translate>Full Rescan Interval (s)</span>
<div class="col-md-4">
<input name="rescanIntervalS" id="rescanIntervalS" class="form-control" type="number" ng-model="currentFolder.rescanIntervalS" required="" aria-required="true" min="0" />
</div>
</div>
<label for="rescanIntervalS" translate>Full Rescan Interval (s)</label>
<input name="rescanIntervalS" id="rescanIntervalS" class="form-control" type="number" ng-model="currentFolder.rescanIntervalS" required="" aria-required="true" min="0" />
<p class="help-block" ng-if="!folderEditor.rescanIntervalS.$valid && folderEditor.rescanIntervalS.$dirty" translate>
The rescan interval must be a non-negative number of seconds.
</p>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-md-6 form-group">
<label translate>Folder Type</label>
&nbsp;<a href="https://docs.syncthing.net/users/foldertypes.html" target="_blank"><span class="fas fa-question-circle"></span>&nbsp;<span translate>Help</span></a>
<select class="form-control" ng-model="currentFolder.type">
<option value="sendreceive" translate>Send &amp; Receive</option>
<option value="sendonly" translate>Send Only</option>
<option value="receiveonly" translate>Receive Only</option>
</select>
<p ng-if="currentFolder.type == 'sendonly'" translate class="help-block">Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.</p>
<p ng-if="currentFolder.type == 'receiveonly'" translate class="help-block">Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.</p>
</div>
<div class="col-md-6 form-group">
<label translate>File Pull Order</label>
<select class="form-control" ng-model="currentFolder.order">
<option value="random" translate>Random</option>
<option value="alphabetic" translate>Alphabetic</option>
<option value="smallestFirst" translate>Smallest First</option>
<option value="largestFirst" translate>Largest First</option>
<option value="oldestFirst" translate>Oldest First</option>
<option value="newestFirst" translate>Newest First</option>
</select>
</div>
</div>
<div class="row">
<div class="col-md-6 form-horizontal form-group" ng-class="{'has-error': folderEditor.minDiskFree.$invalid && folderEditor.minDiskFree.$dirty}">
<label for="minDiskFree" translate>Minimum Free Disk Space</label><br />
<div class="row">
<div class="col-md-9">
<input name="minDiskFree" id="minDiskFree" class="form-control" type="number" ng-model="currentFolder.minDiskFree.value" required="" aria-required="true" min="0" step="0.01" />
<div class="col-md-6 form-group">
<label translate>Folder Type</label>
&nbsp;<a href="https://docs.syncthing.net/users/foldertypes.html" target="_blank"><span class="fas fa-question-circle"></span>&nbsp;<span translate>Help</span></a>
<select class="form-control" ng-model="currentFolder.type">
<option value="sendreceive" translate>Send &amp; Receive</option>
<option value="sendonly" translate>Send Only</option>
<option value="receiveonly" translate>Receive Only</option>
</select>
<p ng-if="currentFolder.type == 'sendonly'" translate class="help-block">Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.</p>
<p ng-if="currentFolder.type == 'receiveonly'" translate class="help-block">Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.</p>
</div>
<div class="col-md-3">
<select class="form-control" ng-model="currentFolder.minDiskFree.unit">
<option value="%">%</option>
<option value="kB">kB</option>
<option value="MB">MB</option>
<option value="GB">GB</option>
<option value="TB">TB</option>
<div class="col-md-6 form-group">
<label translate>File Pull Order</label>
<select class="form-control" ng-model="currentFolder.order">
<option value="random" translate>Random</option>
<option value="alphabetic" translate>Alphabetic</option>
<option value="smallestFirst" translate>Smallest First</option>
<option value="largestFirst" translate>Largest First</option>
<option value="oldestFirst" translate>Oldest First</option>
<option value="newestFirst" translate>Newest First</option>
</select>
</div>
</div>
<p class="help-block" ng-show="folderEditor.minDiskFree.$invalid" translate>
Enter a non-negative number (e.g., "2.35") and select a unit. Percentages are as part of the total disk size.
</p>
</div>
<div class="col-md-6 form-group">
<label translate>Permissions</label><br />
<input type="checkbox" ng-model="currentFolder.ignorePerms" /> <span translate>Ignore</span>
<p translate class="help-block">File permission bits are ignored when looking for changes. Use on FAT file systems.</p>
<p class="col-xs-12 help-block" ng-show="folderEditor.minDiskFree.$invalid">
<span translate>Enter a non-negative number (e.g., "2.35") and select a unit. Percentages are as part of the total disk size.</span>
</p>
<div class="row">
<div class="col-md-6 form-horizontal form-group" ng-class="{'has-error': folderEditor.minDiskFree.$invalid && folderEditor.minDiskFree.$dirty}">
<label for="minDiskFree" translate>Minimum Free Disk Space</label><br />
<div class="row">
<div class="col-md-9">
<input name="minDiskFree" id="minDiskFree" class="form-control" type="number" ng-model="currentFolder.minDiskFree.value" required="" aria-required="true" min="0" step="0.01" />
</div>
<div class="col-md-3">
<select class="form-control" ng-model="currentFolder.minDiskFree.unit">
<option value="%">%</option>
<option value="kB">kB</option>
<option value="MB">MB</option>
<option value="GB">GB</option>
<option value="TB">TB</option>
</select>
</div>
</div>
<p class="help-block" ng-show="folderEditor.minDiskFree.$invalid" translate>
Enter a non-negative number (e.g., "2.35") and select a unit. Percentages are as part of the total disk size.
</p>
</div>
<div class="col-md-6 form-group">
<label translate>Permissions</label><br />
<input type="checkbox" ng-model="currentFolder.ignorePerms" /> <span translate>Ignore</span>
<p translate class="help-block">File permission bits are ignored when looking for changes. Use on FAT file systems.</p>
<p class="col-xs-12 help-block" ng-show="folderEditor.minDiskFree.$invalid">
<span translate>Enter a non-negative number (e.g., "2.35") and select a unit. Percentages are as part of the total disk size.</span>
</p>
</div>
</div>
</div>
</div>
</div>

lib/build/build.go (new file, 81 lines)
View File

@@ -0,0 +1,81 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package build
import (
"fmt"
"log"
"regexp"
"runtime"
"strconv"
"strings"
"time"
)
var (
// Injected by build script
Version = "unknown-dev"
Host = "unknown" // Set by build script
User = "unknown" // Set by build script
Stamp = "0" // Set by build script
// Static
Codename = "Erbium Earthworm"
// Set by init()
Date time.Time
IsRelease bool
IsCandidate bool
IsBeta bool
LongVersion string
// Set by Go build tags
Tags []string
allowedVersionExp = regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z0-9]+)*(\.\d+)*(\+\d+-g[0-9a-f]+)?(-[^\s]+)?$`)
)
func init() {
if Version != "unknown-dev" {
// If not a generic dev build, version string should come from git describe
if !allowedVersionExp.MatchString(Version) {
log.Fatalf("Invalid version string %q;\n\tdoes not match regexp %v", Version, allowedVersionExp)
}
}
setBuildData()
}
func setBuildData() {
// Check for a clean release build. A release is something like
// "v0.1.2", with an optional suffix of letters and dot separated
// numbers like "-beta3.47". If there's more stuff, like a plus sign and
// a commit hash and so on, then it's not a release. If it has a dash in
// it, it's some sort of beta, release candidate or special build. If it
// has "-rc." in it, like "v0.14.35-rc.42", then it's a candidate build.
//
// So, every build that is not a stable release build has IsBeta = true.
// This is used to enable some extra debugging (the deadlock detector).
//
// Release candidate builds are also "betas" from this point of view and
// will have that debugging enabled. In addition, some features are
// forced for release candidates - auto upgrade, and usage reporting.
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z]+[\d\.]+)?$`)
IsRelease = exp.MatchString(Version)
IsCandidate = strings.Contains(Version, "-rc.")
IsBeta = strings.Contains(Version, "-")
stamp, _ := strconv.Atoi(Stamp)
Date = time.Unix(int64(stamp), 0)
date := Date.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`syncthing %s "%s" (%s %s-%s) %s@%s %s`, Version, Codename, runtime.Version(), runtime.GOOS, runtime.GOARCH, User, Host, date)
if len(Tags) > 0 {
LongVersion = fmt.Sprintf("%s [%s]", LongVersion, strings.Join(Tags, ", "))
}
}

lib/build/build_test.go (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package build
import (
"testing"
)
func TestAllowedVersions(t *testing.T) {
testcases := []struct {
ver string
allowed bool
}{
{"v0.13.0", true},
{"v0.12.11+22-gabcdef0", true},
{"v0.13.0-beta0", true},
{"v0.13.0-beta47", true},
{"v0.13.0-beta47+1-gabcdef0", true},
{"v0.13.0-beta.0", true},
{"v0.13.0-beta.47", true},
{"v0.13.0-beta.0+1-gabcdef0", true},
{"v0.13.0-beta.47+1-gabcdef0", true},
{"v0.13.0-some-weird-but-allowed-tag", true},
{"v0.13.0-allowed.to.do.this", true},
{"v0.13.0+not.allowed.to.do.this", false},
}
for i, c := range testcases {
if allowed := allowedVersionExp.MatchString(c.ver); allowed != c.allowed {
t.Errorf("%d: incorrect result %v != %v for %q", i, allowed, c.allowed, c.ver)
}
}
}
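The new lib/build package exposes the version metadata as plain package-level variables (Version, LongVersion, IsRelease, IsCandidate, IsBeta, Tags), so other packages can import it instead of reading globals from package main. A minimal consumer sketch, assuming the usual module import path; the main wiring below is illustrative only:

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/build"
)

func main() {
	// LongVersion is assembled by setBuildData() from Version, Codename,
	// the Go runtime/OS/arch and the build user, host and date.
	fmt.Println(build.LongVersion)

	// Candidate builds ("-rc." in the version) force-enable auto upgrade and
	// usage reporting elsewhere; IsBeta additionally covers any "-" suffix.
	if build.IsCandidate {
		fmt.Println("this is a release candidate build")
	}
}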

View File

@@ -6,8 +6,8 @@
//+build noupgrade
package main
package build
func init() {
BuildTags = append(BuildTags, "noupgrade")
Tags = append(Tags, "noupgrade")
}

View File

@@ -6,8 +6,8 @@
//+build race
package main
package build
func init() {
BuildTags = append(BuildTags, "race")
Tags = append(Tags, "race")
}
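Both files move from package main to the new lib/build package and append to build.Tags, which setBuildData() folds into the "[...]" suffix of LongVersion. A sketch of what an equivalent file for a hypothetical extra tag would look like; the "sometag" constraint is made up for illustration:

//+build sometag

package build

func init() {
	// Registers the tag so it shows up in LongVersion; the real files do the
	// same for "noupgrade" and "race".
	Tags = append(Tags, "sometag")
}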

View File

@@ -47,6 +47,7 @@ var (
util.Address("tcp", net.JoinHostPort("0.0.0.0", strconv.Itoa(DefaultTCPPort))),
"dynamic+https://relays.syncthing.net/endpoint",
}
DefaultGUIPort = 8384
// DefaultDiscoveryServersV4 should be substituted when the configuration
// contains <globalAnnounceServer>default-v4</globalAnnounceServer>.
DefaultDiscoveryServersV4 = []string{
@@ -83,6 +84,31 @@ func New(myID protocol.DeviceID) Configuration {
return cfg
}
func NewWithFreePorts(myID protocol.DeviceID) Configuration {
cfg := New(myID)
port, err := getFreePort("127.0.0.1", DefaultGUIPort)
if err != nil {
l.Fatalln("get free port (GUI):", err)
}
cfg.GUI.RawAddress = fmt.Sprintf("127.0.0.1:%d", port)
port, err = getFreePort("0.0.0.0", DefaultTCPPort)
if err != nil {
l.Fatalln("get free port (BEP):", err)
}
if port == DefaultTCPPort {
cfg.Options.ListenAddresses = []string{"default"}
} else {
cfg.Options.ListenAddresses = []string{
fmt.Sprintf("tcp://%s", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
"dynamic+https://relays.syncthing.net/endpoint",
}
}
return cfg
}
func ReadXML(r io.Reader, myID protocol.DeviceID) (Configuration, error) {
var cfg Configuration
@@ -840,3 +866,23 @@ func filterURLSchemePrefix(addrs []string, prefix string) []string {
}
return addrs
}
// getFreePort returns a free TCP port for listening on. The given ports are
// tried in succession and the first to succeed is returned. If none succeed,
// a random high port is returned.
func getFreePort(host string, ports ...int) (int, error) {
for _, port := range ports {
c, err := net.Listen("tcp", fmt.Sprintf("%s:%d", host, port))
if err == nil {
c.Close()
return port, nil
}
}
c, err := net.Listen("tcp", host+":0")
if err != nil {
return 0, err
}
addr := c.Addr().(*net.TCPAddr)
c.Close()
return addr.Port, nil
}
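NewWithFreePorts is mostly useful for tests and throwaway instances: it probes the default GUI and sync ports via getFreePort and falls back to whatever is available. A rough usage sketch; the zero device ID below is illustrative only:

package config_test

import (
	"testing"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/protocol"
)

func TestFreePortConfig(t *testing.T) {
	// A zero device ID stands in for a real certificate-derived ID.
	var myID protocol.DeviceID

	cfg := config.NewWithFreePorts(myID)

	// The GUI always binds to localhost on whichever port was free.
	t.Log("GUI listens on", cfg.GUI.RawAddress)

	// If the default sync port was taken, ListenAddresses holds an explicit
	// tcp:// address plus the relay endpoint instead of just "default".
	t.Log("sync listen addresses:", cfg.Options.ListenAddresses)
}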

View File

@@ -918,7 +918,7 @@ func TestIssue4219(t *testing.T) {
],
"folders": [
{
"id": "abcd123",
"id": "abcd123",
"devices":[
{"deviceID": "GYRZZQB-IRNPV4Z-T7TC52W-EQYJ3TT-FDQW6MW-DFLMU42-SSSU6EM-FBK2VAY"}
]

View File

@@ -10,12 +10,13 @@ import (
"sort"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
)
type DeviceConfiguration struct {
DeviceID protocol.DeviceID `xml:"id,attr" json:"deviceID"`
Name string `xml:"name,attr,omitempty" json:"name"`
Addresses []string `xml:"address,omitempty" json:"addresses"`
Addresses []string `xml:"address,omitempty" json:"addresses" default:"dynamic"`
Compression protocol.Compression `xml:"compression,attr" json:"compression"`
CertName string `xml:"certName,attr,omitempty" json:"certName"`
Introducer bool `xml:"introducer,attr" json:"introducer"`
@@ -36,6 +37,9 @@ func NewDeviceConfiguration(id protocol.DeviceID, name string) DeviceConfigurati
DeviceID: id,
Name: name,
}
util.SetDefaults(&d)
d.prepare(nil)
return d
}

View File

@@ -26,33 +26,34 @@ var (
const DefaultMarkerName = ".stfolder"
type FolderConfiguration struct {
ID string `xml:"id,attr" json:"id"`
Label string `xml:"label,attr" json:"label" restart:"false"`
FilesystemType fs.FilesystemType `xml:"filesystemType" json:"filesystemType"`
Path string `xml:"path,attr" json:"path"`
Type FolderType `xml:"type,attr" json:"type"`
Devices []FolderDeviceConfiguration `xml:"device" json:"devices"`
RescanIntervalS int `xml:"rescanIntervalS,attr" json:"rescanIntervalS"`
FSWatcherEnabled bool `xml:"fsWatcherEnabled,attr" json:"fsWatcherEnabled"`
FSWatcherDelayS int `xml:"fsWatcherDelayS,attr" json:"fsWatcherDelayS"`
IgnorePerms bool `xml:"ignorePerms,attr" json:"ignorePerms"`
AutoNormalize bool `xml:"autoNormalize,attr" json:"autoNormalize"`
MinDiskFree Size `xml:"minDiskFree" json:"minDiskFree"`
Versioning VersioningConfiguration `xml:"versioning" json:"versioning"`
Copiers int `xml:"copiers" json:"copiers"` // This defines how many files are handled concurrently.
PullerMaxPendingKiB int `xml:"pullerMaxPendingKiB" json:"pullerMaxPendingKiB"`
Hashers int `xml:"hashers" json:"hashers"` // Less than one sets the value to the number of cores. These are CPU bound due to hashing.
Order PullOrder `xml:"order" json:"order"`
IgnoreDelete bool `xml:"ignoreDelete" json:"ignoreDelete"`
ScanProgressIntervalS int `xml:"scanProgressIntervalS" json:"scanProgressIntervalS"` // Set to a negative value to disable. Value of 0 will get replaced with value of 2 (default value)
PullerPauseS int `xml:"pullerPauseS" json:"pullerPauseS"`
MaxConflicts int `xml:"maxConflicts" json:"maxConflicts"`
DisableSparseFiles bool `xml:"disableSparseFiles" json:"disableSparseFiles"`
DisableTempIndexes bool `xml:"disableTempIndexes" json:"disableTempIndexes"`
Paused bool `xml:"paused" json:"paused"`
WeakHashThresholdPct int `xml:"weakHashThresholdPct" json:"weakHashThresholdPct"` // Use weak hash if more than X percent of the file has changed. Set to -1 to always use weak hash.
MarkerName string `xml:"markerName" json:"markerName"`
UseLargeBlocks bool `xml:"useLargeBlocks" json:"useLargeBlocks"`
ID string `xml:"id,attr" json:"id"`
Label string `xml:"label,attr" json:"label" restart:"false"`
FilesystemType fs.FilesystemType `xml:"filesystemType" json:"filesystemType"`
Path string `xml:"path,attr" json:"path"`
Type FolderType `xml:"type,attr" json:"type"`
Devices []FolderDeviceConfiguration `xml:"device" json:"devices"`
RescanIntervalS int `xml:"rescanIntervalS,attr" json:"rescanIntervalS" default:"3600"`
FSWatcherEnabled bool `xml:"fsWatcherEnabled,attr" json:"fsWatcherEnabled" default:"true"`
FSWatcherDelayS int `xml:"fsWatcherDelayS,attr" json:"fsWatcherDelayS" default:"10"`
IgnorePerms bool `xml:"ignorePerms,attr" json:"ignorePerms"`
AutoNormalize bool `xml:"autoNormalize,attr" json:"autoNormalize" default:"true"`
MinDiskFree Size `xml:"minDiskFree" json:"minDiskFree" default:"1%"`
Versioning VersioningConfiguration `xml:"versioning" json:"versioning"`
Copiers int `xml:"copiers" json:"copiers"` // This defines how many files are handled concurrently.
PullerMaxPendingKiB int `xml:"pullerMaxPendingKiB" json:"pullerMaxPendingKiB"`
Hashers int `xml:"hashers" json:"hashers"` // Less than one sets the value to the number of cores. These are CPU bound due to hashing.
Order PullOrder `xml:"order" json:"order"`
IgnoreDelete bool `xml:"ignoreDelete" json:"ignoreDelete"`
ScanProgressIntervalS int `xml:"scanProgressIntervalS" json:"scanProgressIntervalS"` // Set to a negative value to disable. Value of 0 will get replaced with value of 2 (default value)
PullerPauseS int `xml:"pullerPauseS" json:"pullerPauseS"`
MaxConflicts int `xml:"maxConflicts" json:"maxConflicts" default:"-1"`
DisableSparseFiles bool `xml:"disableSparseFiles" json:"disableSparseFiles"`
DisableTempIndexes bool `xml:"disableTempIndexes" json:"disableTempIndexes"`
Paused bool `xml:"paused" json:"paused"`
WeakHashThresholdPct int `xml:"weakHashThresholdPct" json:"weakHashThresholdPct"` // Use weak hash if more than X percent of the file has changed. Set to -1 to always use weak hash.
MarkerName string `xml:"markerName" json:"markerName"`
UseLargeBlocks bool `xml:"useLargeBlocks" json:"useLargeBlocks"`
CopyOwnershipFromParent bool `xml:"copyOwnershipFromParent" json:"copyOwnershipFromParent"`
cachedFilesystem fs.Filesystem
@@ -68,18 +69,15 @@ type FolderDeviceConfiguration struct {
func NewFolderConfiguration(myID protocol.DeviceID, id, label string, fsType fs.FilesystemType, path string) FolderConfiguration {
f := FolderConfiguration{
ID: id,
Label: label,
RescanIntervalS: 3600,
FSWatcherEnabled: true,
FSWatcherDelayS: 10,
MinDiskFree: Size{Value: 1, Unit: "%"},
Devices: []FolderDeviceConfiguration{{DeviceID: myID}},
AutoNormalize: true,
MaxConflicts: -1,
FilesystemType: fsType,
Path: path,
ID: id,
Label: label,
Devices: []FolderDeviceConfiguration{{DeviceID: myID}},
FilesystemType: fsType,
Path: path,
}
util.SetDefaults(&f)
f.prepare()
return f
}
@@ -280,7 +278,7 @@ func (f *FolderConfiguration) CheckAvailableSpace(req int64) error {
}
usage.Free -= req
if usage.Free > 0 {
if err := checkFreeSpace(f.MinDiskFree, usage); err == nil {
if err := CheckFreeSpace(f.MinDiskFree, usage); err == nil {
return nil
}
}
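The literal defaults previously set in NewFolderConfiguration now live in default:"..." struct tags and are applied through util.SetDefaults, so the constructor and the tag metadata cannot drift apart. A minimal sketch of the pattern; the Example struct is made up for illustration:

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/util"
)

// Example mirrors the FolderConfiguration approach: defaults live in tags
// rather than in constructor code.
type Example struct {
	RescanIntervalS  int  `default:"3600"`
	FSWatcherEnabled bool `default:"true"`
	MaxConflicts     int  `default:"-1"`
}

func main() {
	var e Example
	util.SetDefaults(&e) // fills in every field still at its zero value
	fmt.Printf("%+v\n", e)
}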

View File

@@ -14,7 +14,7 @@ import (
type OptionsConfiguration struct {
ListenAddresses []string `xml:"listenAddress" json:"listenAddresses" default:"default"`
GlobalAnnServers []string `xml:"globalAnnounceServer" json:"globalAnnounceServers" json:"globalAnnounceServer" default:"default" restart:"true"`
GlobalAnnServers []string `xml:"globalAnnounceServer" json:"globalAnnounceServers" default:"default" restart:"true"`
GlobalAnnEnabled bool `xml:"globalAnnounceEnabled" json:"globalAnnounceEnabled" default:"true" restart:"true"`
LocalAnnEnabled bool `xml:"localAnnounceEnabled" json:"localAnnounceEnabled" default:"true" restart:"true"`
LocalAnnPort int `xml:"localAnnouncePort" json:"localAnnouncePort" default:"21027" restart:"true"`
@@ -58,7 +58,7 @@ type OptionsConfiguration struct {
DeprecatedUPnPRenewalM int `xml:"upnpRenewalMinutes,omitempty" json:"-"`
DeprecatedUPnPTimeoutS int `xml:"upnpTimeoutSeconds,omitempty" json:"-"`
DeprecatedRelayServers []string `xml:"relayServer,omitempty" json:"-"`
DeprecatedMinHomeDiskFreePct float64 `xml:"minHomeDiskFreePct" json:"-"`
DeprecatedMinHomeDiskFreePct float64 `xml:"minHomeDiskFreePct,omitempty" json:"-"`
}
func (orig OptionsConfiguration) Copy() OptionsConfiguration {

View File

@@ -72,11 +72,13 @@ func (s Size) String() string {
return fmt.Sprintf("%v %s", s.Value, s.Unit)
}
func (Size) ParseDefault(s string) (interface{}, error) {
return ParseSize(s)
func (s *Size) ParseDefault(str string) error {
sz, err := ParseSize(str)
*s = sz
return err
}
func checkFreeSpace(req Size, usage fs.Usage) error {
func CheckFreeSpace(req Size, usage fs.Usage) error {
val := req.BaseValue()
if val <= 0 {
return nil
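ParseDefault changes from returning a freshly parsed value to filling in the receiver, which lets the defaults machinery treat it as a simple interface instead of inspecting return values. A sketch of how a defaults helper could use such an interface; the parseDefaulter name and setDefault helper are assumptions, not the actual util implementation:

package main

import "fmt"

// parseDefaulter is the shape a defaults helper could look for; config.Size
// satisfies it after this change. (The name is illustrative only.)
type parseDefaulter interface {
	ParseDefault(string) error
}

type percent struct{ value float64 }

func (p *percent) ParseDefault(s string) error {
	_, err := fmt.Sscanf(s, "%f%%", &p.value)
	return err
}

// setDefault applies a tag value to a field if it knows how to parse itself.
func setDefault(field interface{}, tag string) error {
	if pd, ok := field.(parseDefaulter); ok {
		return pd.ParseDefault(tag)
	}
	return fmt.Errorf("no ParseDefault on %T", field)
}

func main() {
	var p percent
	if err := setDefault(&p, "10%"); err != nil {
		panic(err)
	}
	fmt.Println(p.value) // 10
}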

View File

@@ -6,7 +6,28 @@
package config
import "testing"
import (
"testing"
"github.com/syncthing/syncthing/lib/util"
)
type TestStruct struct {
Size Size `default:"10%"`
}
func TestSizeDefaults(t *testing.T) {
x := &TestStruct{}
util.SetDefaults(x)
if !x.Size.Percentage() {
t.Error("not percentage")
}
if x.Size.Value != 10 {
t.Error("not ten")
}
}
func TestParseSize(t *testing.T) {
cases := []struct {

View File

@@ -7,14 +7,11 @@
package config
import (
"fmt"
"os"
"path/filepath"
"sync/atomic"
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
@@ -64,7 +61,6 @@ type Wrapper struct {
deviceMap map[protocol.DeviceID]DeviceConfiguration
folderMap map[string]FolderConfiguration
replaces chan Configuration
subs []Committer
mut sync.Mutex
@@ -79,7 +75,6 @@ func Wrap(path string, cfg Configuration) *Wrapper {
path: path,
mut: sync.NewMutex(),
}
w.replaces = make(chan Configuration)
return w
}
@@ -104,12 +99,6 @@ func (w *Wrapper) ConfigPath() string {
return w.path
}
// Stop stops the Serve() loop. Set and Replace operations will panic after a
// Stop.
func (w *Wrapper) Stop() {
close(w.replaces)
}
// Subscribe registers the given handler to be called on any future
// configuration changes.
func (w *Wrapper) Subscribe(c Committer) {
@@ -496,15 +485,3 @@ func (w *Wrapper) AddOrUpdatePendingFolder(id, label string, device protocol.Dev
panic("bug: adding pending folder for non-existing device")
}
// CheckHomeFreeSpace returns nil if the home disk has the required amount of
// free space, or if home disk free space checking is disabled.
func (w *Wrapper) CheckHomeFreeSpace() error {
path := filepath.Dir(w.ConfigPath())
if usage, err := fs.NewFilesystem(fs.FilesystemTypeBasic, path).Usage("."); err == nil {
if err = checkFreeSpace(w.Options().MinHomeDiskFree, usage); err != nil {
return fmt.Errorf("insufficient space on home disk (%v): %v", path, err)
}
}
return nil
}
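With Wrapper.CheckHomeFreeSpace removed and CheckFreeSpace exported, callers can run the same check against any path, for example the database directory, rather than only the directory holding the config file. A rough sketch of the equivalent standalone check, under those assumptions:

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/fs"
)

// checkFreeSpaceAt mirrors the removed Wrapper.CheckHomeFreeSpace, but for an
// arbitrary location instead of the config directory.
func checkFreeSpaceAt(minFree config.Size, path string) error {
	usage, err := fs.NewFilesystem(fs.FilesystemTypeBasic, path).Usage(".")
	if err != nil {
		// If usage cannot be determined the check is skipped, as before.
		return nil
	}
	if err := config.CheckFreeSpace(minFree, usage); err != nil {
		return fmt.Errorf("insufficient space on %v: %v", path, err)
	}
	return nil
}

func main() {
	fmt.Println(checkFreeSpaceAt(config.Size{Value: 1, Unit: "%"}, "."))
}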

View File

@@ -35,8 +35,8 @@ import (
)
var (
dialers = make(map[string]dialerFactory, 0)
listeners = make(map[string]listenerFactory, 0)
dialers = make(map[string]dialerFactory)
listeners = make(map[string]listenerFactory)
)
var (

View File

@@ -8,9 +8,6 @@ package db_test
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/syncthing/syncthing/lib/db"
@@ -39,30 +36,15 @@ func lazyInitBenchFileSet() {
secondHalf = files[middle:]
oneFile = firstHalf[middle-1 : middle]
ldb, _ := tempDB()
ldb := db.OpenMemory()
benchS = db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
replace(benchS, remoteDevice0, files)
replace(benchS, protocol.LocalDeviceID, firstHalf)
}
func tempDB() (*db.Lowlevel, string) {
dir, err := ioutil.TempDir("", "syncthing")
if err != nil {
panic(err)
}
dbi, err := db.Open(filepath.Join(dir, "db"))
if err != nil {
panic(err)
}
return dbi, dir
}
func BenchmarkReplaceAll(b *testing.B) {
ldb, dir := tempDB()
defer func() {
ldb.Close()
os.RemoveAll(dir)
}()
ldb := db.OpenMemory()
defer ldb.Close()
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -78,12 +60,11 @@ func BenchmarkUpdateOneChanged(b *testing.B) {
changed := make([]protocol.FileInfo, 1)
changed[0] = oneFile[0]
changed[0].Version = changed[0].Version.Update(myID)
changed[0].Blocks = genBlocks(len(changed[0].Blocks))
changed[0].Version = changed[0].Version.Copy().Update(myID)
b.ResetTimer()
for i := 0; i < b.N; i++ {
if i%1 == 0 {
if i%2 == 0 {
benchS.Update(protocol.LocalDeviceID, changed)
} else {
benchS.Update(protocol.LocalDeviceID, oneFile)
@@ -93,6 +74,48 @@ func BenchmarkUpdateOneChanged(b *testing.B) {
b.ReportAllocs()
}
func BenchmarkUpdate100Changed(b *testing.B) {
lazyInitBenchFileSet()
unchanged := files[100:200]
changed := append([]protocol.FileInfo{}, unchanged...)
for i := range changed {
changed[i].Version = changed[i].Version.Copy().Update(myID)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if i%2 == 0 {
benchS.Update(protocol.LocalDeviceID, changed)
} else {
benchS.Update(protocol.LocalDeviceID, unchanged)
}
}
b.ReportAllocs()
}
func BenchmarkUpdate100ChangedRemote(b *testing.B) {
lazyInitBenchFileSet()
unchanged := files[100:200]
changed := append([]protocol.FileInfo{}, unchanged...)
for i := range changed {
changed[i].Version = changed[i].Version.Copy().Update(myID)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if i%2 == 0 {
benchS.Update(remoteDevice0, changed)
} else {
benchS.Update(remoteDevice0, unchanged)
}
}
b.ReportAllocs()
}
func BenchmarkUpdateOneUnchanged(b *testing.B) {
lazyInitBenchFileSet()
@@ -122,6 +145,30 @@ func BenchmarkNeedHalf(b *testing.B) {
b.ReportAllocs()
}
func BenchmarkNeedHalfRemote(b *testing.B) {
lazyInitBenchFileSet()
ldb := db.OpenMemory()
defer ldb.Close()
fset := db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
replace(fset, remoteDevice0, firstHalf)
replace(fset, protocol.LocalDeviceID, files)
b.ResetTimer()
for i := 0; i < b.N; i++ {
count := 0
fset.WithNeed(remoteDevice0, func(fi db.FileIntf) bool {
count++
return true
})
if count != len(secondHalf) {
b.Errorf("wrong length %d != %d", count, len(secondHalf))
}
}
b.ReportAllocs()
}
func BenchmarkHave(b *testing.B) {
lazyInitBenchFileSet()

View File

@@ -11,125 +11,14 @@ import (
"fmt"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
)
var blockFinder *BlockFinder
const maxBatchSize = 1000
type BlockMap struct {
db *Lowlevel
folder uint32
}
func NewBlockMap(db *Lowlevel, folder string) *BlockMap {
return &BlockMap{
db: db,
folder: db.folderIdx.ID([]byte(folder)),
}
}
// Add files to the block map, ignoring any deleted or invalid files.
func (m *BlockMap) Add(files []protocol.FileInfo) error {
batch := new(leveldb.Batch)
buf := make([]byte, 4)
var key []byte
for _, file := range files {
m.checkFlush(batch)
if file.IsDirectory() || file.IsDeleted() || file.IsInvalid() {
continue
}
for i, block := range file.Blocks {
binary.BigEndian.PutUint32(buf, uint32(i))
key = m.blockKeyInto(key, block.Hash, file.Name)
batch.Put(key, buf)
}
}
return m.db.Write(batch, nil)
}
// Update block map state, removing any deleted or invalid files.
func (m *BlockMap) Update(files []protocol.FileInfo) error {
batch := new(leveldb.Batch)
buf := make([]byte, 4)
var key []byte
for _, file := range files {
m.checkFlush(batch)
switch {
case file.IsDirectory():
case file.IsDeleted() || file.IsInvalid():
for _, block := range file.Blocks {
key = m.blockKeyInto(key, block.Hash, file.Name)
batch.Delete(key)
}
default:
for i, block := range file.Blocks {
binary.BigEndian.PutUint32(buf, uint32(i))
key = m.blockKeyInto(key, block.Hash, file.Name)
batch.Put(key, buf)
}
}
}
return m.db.Write(batch, nil)
}
// Discard block map state, removing the given files
func (m *BlockMap) Discard(files []protocol.FileInfo) error {
batch := new(leveldb.Batch)
var key []byte
for _, file := range files {
m.checkFlush(batch)
m.discard(file, key, batch)
}
return m.db.Write(batch, nil)
}
func (m *BlockMap) discard(file protocol.FileInfo, key []byte, batch *leveldb.Batch) {
for _, block := range file.Blocks {
key = m.blockKeyInto(key, block.Hash, file.Name)
batch.Delete(key)
}
}
func (m *BlockMap) checkFlush(batch *leveldb.Batch) error {
if batch.Len() > maxBatchSize {
if err := m.db.Write(batch, nil); err != nil {
return err
}
batch.Reset()
}
return nil
}
// Drop block map, removing all entries related to this block map from the db.
func (m *BlockMap) Drop() error {
batch := new(leveldb.Batch)
iter := m.db.NewIterator(util.BytesPrefix(m.blockKeyInto(nil, nil, "")[:keyPrefixLen+keyFolderLen]), nil)
defer iter.Release()
for iter.Next() {
m.checkFlush(batch)
batch.Delete(iter.Key())
}
if iter.Error() != nil {
return iter.Error()
}
return m.db.Write(batch, nil)
}
func (m *BlockMap) blockKeyInto(o, hash []byte, file string) []byte {
return blockKeyInto(o, hash, m.folder, file)
}
type BlockFinder struct {
db *Lowlevel
db *instance
}
func NewBlockFinder(db *Lowlevel) *BlockFinder {
@@ -137,11 +26,9 @@ func NewBlockFinder(db *Lowlevel) *BlockFinder {
return blockFinder
}
f := &BlockFinder{
db: db,
return &BlockFinder{
db: newInstance(db),
}
return f
}
func (f *BlockFinder) String() string {
@@ -154,52 +41,23 @@ func (f *BlockFinder) String() string {
// reason. The iterator finally returns the result, whether or not a
// satisfying block was eventually found.
func (f *BlockFinder) Iterate(folders []string, hash []byte, iterFn func(string, string, int32) bool) bool {
t := f.db.newReadOnlyTransaction()
defer t.close()
var key []byte
for _, folder := range folders {
folderID := f.db.folderIdx.ID([]byte(folder))
key = blockKeyInto(key, hash, folderID, "")
iter := f.db.NewIterator(util.BytesPrefix(key), nil)
defer iter.Release()
key = f.db.keyer.GenerateBlockMapKey(key, []byte(folder), hash, nil)
iter := t.NewIterator(util.BytesPrefix(key), nil)
for iter.Next() && iter.Error() == nil {
file := blockKeyName(iter.Key())
file := string(f.db.keyer.NameFromBlockMapKey(iter.Key()))
index := int32(binary.BigEndian.Uint32(iter.Value()))
if iterFn(folder, osutil.NativeFilename(file), index) {
iter.Release()
return true
}
}
iter.Release()
}
return false
}
// m.blockKey returns a byte slice encoding the following information:
// keyTypeBlock (1 byte)
// folder (4 bytes)
// block hash (32 bytes)
// file name (variable size)
func blockKeyInto(o, hash []byte, folder uint32, file string) []byte {
reqLen := keyPrefixLen + keyFolderLen + keyHashLen + len(file)
if cap(o) < reqLen {
o = make([]byte, reqLen)
} else {
o = o[:reqLen]
}
o[0] = KeyTypeBlock
binary.BigEndian.PutUint32(o[keyPrefixLen:], folder)
copy(o[keyPrefixLen+keyFolderLen:], hash)
copy(o[keyPrefixLen+keyFolderLen+keyHashLen:], []byte(file))
return o
}
// blockKeyName returns the file name from the block key
func blockKeyName(data []byte) string {
if len(data) < keyPrefixLen+keyFolderLen+keyHashLen+1 {
panic("Incorrect key length")
}
if data[0] != KeyTypeBlock {
panic("Incorrect key type")
}
file := string(data[keyPrefixLen+keyFolderLen+keyHashLen:])
return file
}

View File

@@ -7,6 +7,7 @@
package db
import (
"encoding/binary"
"testing"
"github.com/syncthing/syncthing/lib/protocol"
@@ -48,19 +49,53 @@ func init() {
}
}
func setup() (*Lowlevel, *BlockFinder) {
func setup() (*instance, *BlockFinder) {
// Setup
db := OpenMemory()
return db, NewBlockFinder(db)
return newInstance(db), NewBlockFinder(db)
}
func dbEmpty(db *Lowlevel) bool {
func dbEmpty(db *instance) bool {
iter := db.NewIterator(util.BytesPrefix([]byte{KeyTypeBlock}), nil)
defer iter.Release()
return !iter.Next()
}
func addToBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
t := db.newReadWriteTransaction()
defer t.close()
var keyBuf []byte
blockBuf := make([]byte, 4)
for _, f := range fs {
if !f.IsDirectory() && !f.IsDeleted() && !f.IsInvalid() {
name := []byte(f.Name)
for i, block := range f.Blocks {
binary.BigEndian.PutUint32(blockBuf, uint32(i))
keyBuf = t.db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Put(keyBuf, blockBuf)
}
}
}
}
func discardFromBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
t := db.newReadWriteTransaction()
defer t.close()
var keyBuf []byte
for _, ef := range fs {
if !ef.IsDirectory() && !ef.IsDeleted() && !ef.IsInvalid() {
name := []byte(ef.Name)
for _, block := range ef.Blocks {
keyBuf = t.db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Delete(keyBuf)
}
}
}
}
func TestBlockMapAddUpdateWipe(t *testing.T) {
db, f := setup()
@@ -68,14 +103,11 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
t.Fatal("db not empty")
}
m := NewBlockMap(db, "folder1")
folder := []byte("folder1")
f3.Type = protocol.FileInfoTypeDirectory
err := m.Add([]protocol.FileInfo{f1, f2, f3})
if err != nil {
t.Fatal(err)
}
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
if folder != "folder1" || file != "f1" || index != 0 {
@@ -96,14 +128,12 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
return true
})
discardFromBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
f1.Deleted = true
f2.LocalFlags = protocol.FlagLocalMustRescan // one of the invalid markers
// Should remove
err = m.Update([]protocol.FileInfo{f1, f2, f3})
if err != nil {
t.Fatal(err)
}
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
t.Fatal("Unexpected block")
@@ -122,20 +152,14 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
return true
})
err = m.Drop()
if err != nil {
t.Fatal(err)
}
db.dropFolder(folder)
if !dbEmpty(db) {
t.Fatal("db not empty")
}
// Should not add
err = m.Add([]protocol.FileInfo{f1, f2})
if err != nil {
t.Fatal(err)
}
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2})
if !dbEmpty(db) {
t.Fatal("db not empty")
@@ -152,17 +176,11 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
func TestBlockFinderLookup(t *testing.T) {
db, f := setup()
m1 := NewBlockMap(db, "folder1")
m2 := NewBlockMap(db, "folder2")
folder1 := []byte("folder1")
folder2 := []byte("folder2")
err := m1.Add([]protocol.FileInfo{f1})
if err != nil {
t.Fatal(err)
}
err = m2.Add([]protocol.FileInfo{f1})
if err != nil {
t.Fatal(err)
}
addToBlockMap(db, folder1, []protocol.FileInfo{f1})
addToBlockMap(db, folder2, []protocol.FileInfo{f1})
counter := 0
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
@@ -186,12 +204,11 @@ func TestBlockFinderLookup(t *testing.T) {
t.Fatal("Incorrect count", counter)
}
discardFromBlockMap(db, folder1, []protocol.FileInfo{f1})
f1.Deleted = true
err = m1.Update([]protocol.FileInfo{f1})
if err != nil {
t.Fatal(err)
}
addToBlockMap(db, folder1, []protocol.FileInfo{f1})
counter = 0
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {

View File

@@ -8,16 +8,14 @@ package db
import (
"bytes"
"encoding/binary"
"fmt"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/util"
)
type deletionHandler func(t readWriteTransaction, folder, device, name []byte, dbi iterator.Iterator)
type instance struct {
*Lowlevel
keyer keyer
@@ -30,70 +28,93 @@ func newInstance(ll *Lowlevel) *instance {
}
}
func (db *instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, meta *metadataTracker) {
// updateRemoteFiles adds a list of fileinfos to the database and updates the
// global versionlist and metadata.
func (db *instance) updateRemoteFiles(folder, device []byte, fs []protocol.FileInfo, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
var fk []byte
var gk []byte
var dk, gk, keyBuf []byte
devID := protocol.DeviceIDFromBytes(device)
for _, f := range fs {
name := []byte(f.Name)
fk = db.keyer.GenerateDeviceFileKey(fk, folder, device, name)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, device, name)
// Get and unmarshal the file entry. If it doesn't exist or can't be
// unmarshalled we'll add it as a new entry.
bs, err := t.Get(fk, nil)
var ef FileInfoTruncated
if err == nil {
err = ef.Unmarshal(bs)
}
// Local flags or the invalid bit might change without the version
// being bumped. The IsInvalid() method handles both.
if err == nil && ef.Version.Equal(f.Version) && ef.IsInvalid() == f.IsInvalid() {
ef, ok := t.getFileTrunc(dk, true)
if ok && unchanged(f, ef) {
continue
}
devID := protocol.DeviceIDFromBytes(device)
if err == nil {
if ok {
meta.removeFile(devID, ef)
}
meta.addFile(devID, f)
t.insertFile(fk, folder, device, f)
l.Debugf("insert; folder=%q device=%v %v", folder, devID, f)
t.Put(dk, mustMarshal(&f))
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
t.updateGlobal(gk, folder, device, f, meta)
keyBuf, _ = t.updateGlobal(gk, keyBuf, folder, device, f, meta)
// Write out and reuse the batch every few records, to avoid the batch
// growing too large and thus allocating more memory than necessary.
t.checkFlush()
}
}
func (db *instance) addSequences(folder []byte, fs []protocol.FileInfo) {
// updateLocalFiles adds fileinfos to the db, and updates the global versionlist,
// metadata, sequence and blockmap buckets.
func (db *instance) updateLocalFiles(folder []byte, fs []protocol.FileInfo, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
var sk []byte
var dk []byte
var dk, gk, keyBuf []byte
blockBuf := make([]byte, 4)
for _, f := range fs {
sk = db.keyer.GenerateSequenceKey(sk, folder, f.Sequence)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(f.Name))
t.Put(sk, dk)
name := []byte(f.Name)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], name)
ef, ok := t.getFileByKey(dk)
if ok && unchanged(f, ef) {
continue
}
if ok {
if !ef.IsDirectory() && !ef.IsDeleted() && !ef.IsInvalid() {
for _, block := range ef.Blocks {
keyBuf = t.db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Delete(keyBuf)
}
}
keyBuf = db.keyer.GenerateSequenceKey(keyBuf, folder, ef.SequenceNo())
t.Delete(keyBuf)
l.Debugf("removing sequence; folder=%q sequence=%v %v", folder, ef.SequenceNo(), ef.FileName())
}
f.Sequence = meta.nextLocalSeq()
if ok {
meta.removeFile(protocol.LocalDeviceID, ef)
}
meta.addFile(protocol.LocalDeviceID, f)
l.Debugf("insert (local); folder=%q %v", folder, f)
t.Put(dk, mustMarshal(&f))
gk = t.db.keyer.GenerateGlobalVersionKey(gk, folder, []byte(f.Name))
keyBuf, _ = t.updateGlobal(gk, keyBuf, folder, protocol.LocalDeviceID[:], f, meta)
keyBuf = db.keyer.GenerateSequenceKey(keyBuf, folder, f.Sequence)
t.Put(keyBuf, dk)
l.Debugf("adding sequence; folder=%q sequence=%v %v", folder, f.Sequence, f.Name)
t.checkFlush()
}
}
func (db *instance) removeSequences(folder []byte, fs []protocol.FileInfo) {
t := db.newReadWriteTransaction()
defer t.close()
if !f.IsDirectory() && !f.IsDeleted() && !f.IsInvalid() {
for i, block := range f.Blocks {
binary.BigEndian.PutUint32(blockBuf, uint32(i))
keyBuf = t.db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Put(keyBuf, blockBuf)
}
}
var sk []byte
for _, f := range fs {
t.Delete(db.keyer.GenerateSequenceKey(sk, folder, f.Sequence))
l.Debugf("removing sequence; folder=%q sequence=%v %v", folder, f.Sequence, f.Name)
t.checkFlush()
}
}
@@ -167,8 +188,7 @@ func (db *instance) withAllFolderTruncated(folder []byte, fn func(device []byte,
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateDeviceFileKey(nil, folder, nil, nil).WithoutNameAndDevice()), nil)
defer dbi.Release()
var gk []byte
var gk, keyBuf []byte
for dbi.Next() {
device, ok := db.keyer.DeviceFromDeviceFileKey(dbi.Key())
if !ok {
@@ -193,7 +213,7 @@ func (db *instance) withAllFolderTruncated(folder []byte, fn func(device []byte,
l.Infof("Dropping invalid filename %q from database", f.Name)
name := []byte(f.Name)
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
t.removeFromGlobal(gk, folder, device, name, nil)
keyBuf = t.removeFromGlobal(gk, keyBuf, folder, device, name, nil)
t.Delete(dbi.Key())
t.checkFlush()
continue
@@ -206,51 +226,16 @@ func (db *instance) withAllFolderTruncated(folder []byte, fn func(device []byte,
}
func (db *instance) getFileDirty(folder, device, file []byte) (protocol.FileInfo, bool) {
key := db.keyer.GenerateDeviceFileKey(nil, folder, device, file)
bs, err := db.Get(key, nil)
if err == leveldb.ErrNotFound {
return protocol.FileInfo{}, false
}
if err != nil {
l.Debugln("surprise error:", err)
return protocol.FileInfo{}, false
}
var f protocol.FileInfo
if err := f.Unmarshal(bs); err != nil {
l.Debugln("unmarshal error:", err)
return protocol.FileInfo{}, false
}
return f, true
}
func (db *instance) getGlobal(folder, file []byte, truncate bool) (FileIntf, bool) {
t := db.newReadOnlyTransaction()
defer t.close()
_, _, f, ok := db.getGlobalInto(t, nil, nil, folder, file, truncate)
return f, ok
return t.getFile(folder, device, file)
}
func (db *instance) getGlobalInto(t readOnlyTransaction, gk, dk, folder, file []byte, truncate bool) ([]byte, []byte, FileIntf, bool) {
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, file)
bs, err := t.Get(gk, nil)
if err != nil {
return gk, dk, nil, false
}
vl, ok := unmarshalVersionList(bs)
if !ok {
return gk, dk, nil, false
}
dk = db.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[0].Device, file)
if fi, ok := t.getFileTrunc(dk, truncate); ok {
return gk, dk, fi, true
}
return gk, dk, nil, false
func (db *instance) getGlobalDirty(folder, file []byte, truncate bool) (FileIntf, bool) {
t := db.newReadOnlyTransaction()
defer t.close()
_, f, ok := t.getGlobal(nil, folder, file, truncate)
return f, ok
}
func (db *instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator) {
@@ -265,7 +250,7 @@ func (db *instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
prefix = append(prefix, '/')
}
if _, _, f, ok := db.getGlobalInto(t, nil, nil, folder, unslashedPrefix, truncate); ok && !fn(f) {
if _, f, ok := t.getGlobal(nil, folder, unslashedPrefix, truncate); ok && !fn(f) {
return
}
}
@@ -273,7 +258,7 @@ func (db *instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, prefix)), nil)
defer dbi.Release()
var fk []byte
var dk []byte
for dbi.Next() {
name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
if len(prefix) > 0 && !bytes.HasPrefix(name, prefix) {
@@ -285,9 +270,9 @@ func (db *instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
continue
}
fk = db.keyer.GenerateDeviceFileKey(fk, folder, vl.Versions[0].Device, name)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[0].Device, name)
f, ok := t.getFileTrunc(fk, truncate)
f, ok := t.getFileTrunc(dk, truncate)
if !ok {
continue
}
@@ -341,7 +326,8 @@ func (db *instance) withNeed(folder, device []byte, truncate bool, fn Iterator)
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var fk []byte
var dk []byte
devID := protocol.DeviceIDFromBytes(device)
for dbi.Next() {
vl, ok := unmarshalVersionList(dbi.Value())
if !ok {
@@ -371,16 +357,9 @@ func (db *instance) withNeed(folder, device []byte, truncate bool, fn Iterator)
continue
}
fk = db.keyer.GenerateDeviceFileKey(fk, folder, vl.Versions[i].Device, name)
bs, err := t.Get(fk, nil)
if err != nil {
l.Debugln("surprise error:", err)
continue
}
gf, err := unmarshalTrunc(bs, truncate)
if err != nil {
l.Debugln("unmarshal error:", err)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[i].Device, name)
gf, ok := t.getFileTrunc(dk, truncate)
if !ok {
continue
}
@@ -389,7 +368,7 @@ func (db *instance) withNeed(folder, device []byte, truncate bool, fn Iterator)
break
}
l.Debugf("need folder=%q device=%v name=%q have=%v invalid=%v haveV=%v globalV=%v globalDev=%v", folder, protocol.DeviceIDFromBytes(device), name, have, haveFV.Invalid, haveFV.Version, needVersion, needDevice)
l.Debugf("need folder=%q device=%v name=%q have=%v invalid=%v haveV=%v globalV=%v globalDev=%v", folder, devID, name, have, haveFV.Invalid, haveFV.Version, needVersion, needDevice)
if !fn(gf) {
return
@@ -408,12 +387,11 @@ func (db *instance) withNeedLocal(folder []byte, truncate bool, fn Iterator) {
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateNeedFileKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var dk []byte
var gk []byte
var keyBuf []byte
var f FileIntf
var ok bool
for dbi.Next() {
gk, dk, f, ok = db.getGlobalInto(t, gk, dk, folder, db.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate)
keyBuf, f, ok = t.getGlobal(keyBuf, folder, db.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate)
if !ok {
continue
}
@@ -436,6 +414,8 @@ func (db *instance) dropFolder(folder []byte) {
db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName(),
// Remove all needs related to the folder
db.keyer.GenerateNeedFileKey(nil, folder, nil).WithoutName(),
// Remove the blockmap of the folder
db.keyer.GenerateBlockMapKey(nil, folder, nil, nil).WithoutHashAndName(),
} {
t.deleteKeyPrefix(key)
}
@@ -448,16 +428,17 @@ func (db *instance) dropDeviceFolder(device, folder []byte, meta *metadataTracke
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateDeviceFileKey(nil, folder, device, nil)), nil)
defer dbi.Release()
var gk []byte
var gk, keyBuf []byte
for dbi.Next() {
key := dbi.Key()
name := db.keyer.NameFromDeviceFileKey(key)
name := db.keyer.NameFromDeviceFileKey(dbi.Key())
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
t.removeFromGlobal(gk, folder, device, name, meta)
t.Delete(key)
keyBuf = t.removeFromGlobal(gk, keyBuf, folder, device, name, meta)
t.Delete(dbi.Key())
t.checkFlush()
}
if bytes.Equal(device, protocol.LocalDeviceID[:]) {
t.deleteKeyPrefix(db.keyer.GenerateBlockMapKey(nil, folder, nil, nil).WithoutHashAndName())
}
}
func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
@@ -467,7 +448,7 @@ func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var fk []byte
var dk []byte
for dbi.Next() {
vl, ok := unmarshalVersionList(dbi.Value())
if !ok {
@@ -482,8 +463,8 @@ func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
var newVL VersionList
for i, version := range vl.Versions {
fk = db.keyer.GenerateDeviceFileKey(fk, folder, version.Device, name)
_, err := t.Get(fk, nil)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, version.Device, name)
_, err := t.Get(dk, nil)
if err == leveldb.ErrNotFound {
continue
}
@@ -494,7 +475,7 @@ func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
newVL.Versions = append(newVL.Versions, version)
if i == 0 {
if fi, ok := t.getFileByKey(fk); ok {
if fi, ok := t.getFileByKey(dk); ok {
meta.addFile(protocol.GlobalDeviceID, fi)
}
}
@@ -509,8 +490,7 @@ func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
}
func (db *instance) getIndexID(device, folder []byte) protocol.IndexID {
key := db.keyer.GenerateIndexIDKey(nil, device, folder)
cur, err := db.Get(key, nil)
cur, err := db.Get(db.keyer.GenerateIndexIDKey(nil, device, folder), nil)
if err != nil {
return 0
}
@@ -524,9 +504,8 @@ func (db *instance) getIndexID(device, folder []byte) protocol.IndexID {
}
func (db *instance) setIndexID(device, folder []byte, id protocol.IndexID) {
key := db.keyer.GenerateIndexIDKey(nil, device, folder)
bs, _ := id.Marshal() // marshalling can't fail
if err := db.Put(key, bs, nil); err != nil {
if err := db.Put(db.keyer.GenerateIndexIDKey(nil, device, folder), bs, nil); err != nil {
panic("storing index ID: " + err.Error())
}
}
@@ -543,12 +522,7 @@ func (db *instance) dropPrefix(prefix []byte) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(prefix), nil)
defer dbi.Release()
for dbi.Next() {
t.Delete(dbi.Key())
}
t.deleteKeyPrefix(prefix)
}
func unmarshalTrunc(bs []byte, truncate bool) (FileIntf, error) {
@@ -584,3 +558,10 @@ type errorSuggestion struct {
func (e errorSuggestion) Error() string {
return fmt.Sprintf("%s (%s)", e.inner.Error(), e.suggestion)
}
// unchanged checks if two files are the same and thus don't need to be updated.
// Local flags or the invalid bit might change without the version
// being bumped. The IsInvalid() method handles both.
func unchanged(nf, ef FileIntf) bool {
return ef.FileVersion().Equal(nf.FileVersion()) && ef.IsInvalid() == nf.IsInvalid()
}

View File

@@ -73,6 +73,10 @@ type keyer interface {
NameFromGlobalVersionKey(key []byte) []byte
FolderFromGlobalVersionKey(key []byte) ([]byte, bool)
// block map key stuff (former BlockMap)
GenerateBlockMapKey(key, folder, hash, name []byte) blockMapKey
NameFromBlockMapKey(key []byte) []byte
// file need index
GenerateNeedFileKey(key, folder, name []byte) needFileKey
@@ -154,6 +158,25 @@ func (k defaultKeyer) FolderFromGlobalVersionKey(key []byte) ([]byte, bool) {
return k.folderIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen:]))
}
type blockMapKey []byte
func (k defaultKeyer) GenerateBlockMapKey(key, folder, hash, name []byte) blockMapKey {
key = resize(key, keyPrefixLen+keyFolderLen+keyHashLen+len(name))
key[0] = KeyTypeBlock
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
copy(key[keyPrefixLen+keyFolderLen:], hash)
copy(key[keyPrefixLen+keyFolderLen+keyHashLen:], name)
return key
}
func (k defaultKeyer) NameFromBlockMapKey(key []byte) []byte {
return key[keyPrefixLen+keyFolderLen+keyHashLen:]
}
func (k blockMapKey) WithoutHashAndName() []byte {
return k[:keyPrefixLen+keyFolderLen]
}
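GenerateBlockMapKey above lays the key out as the old blockKeyInto comment described it: key type (1 byte), folder index (4 bytes), block hash (32 bytes), then the file name. A standalone sketch of that layout; the length constants and the key type value are assumed here for illustration, while the real code uses keyPrefixLen, keyFolderLen, keyHashLen and KeyTypeBlock.

package main

import (
    "encoding/binary"
    "fmt"
)

const (
    prefixLen    = 1  // key type byte
    folderLen    = 4  // big-endian uint32 folder index
    hashLen      = 32 // SHA-256 block hash
    keyTypeBlock = 2  // placeholder value; the real constant lives in the db package
)

// blockMapKey builds a key following the layout described above.
func blockMapKey(folderIdx uint32, hash []byte, name string) []byte {
    key := make([]byte, prefixLen+folderLen+hashLen+len(name))
    key[0] = keyTypeBlock
    binary.BigEndian.PutUint32(key[prefixLen:], folderIdx)
    copy(key[prefixLen+folderLen:], hash)
    copy(key[prefixLen+folderLen+hashLen:], name)
    return key
}

// nameFromBlockMapKey recovers the file name, mirroring NameFromBlockMapKey.
func nameFromBlockMapKey(key []byte) string {
    return string(key[prefixLen+folderLen+hashLen:])
}

func main() {
    key := blockMapKey(1, make([]byte, hashLen), "some/file")
    fmt.Println(len(key), nameFromBlockMapKey(key)) // 46 some/file
}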
type needFileKey []byte
func (k needFileKey) WithoutName() []byte {

View File

@@ -82,11 +82,6 @@ func OpenMemory() *Lowlevel {
return NewLowlevel(db, "<memory>")
}
// Location returns the filesystem path where the database is stored
func (db *Lowlevel) Location() string {
return db.location
}
// ListFolders returns the list of folders currently in the database
func (db *Lowlevel) ListFolders() []string {
return db.folderIdx.Values()

View File

@@ -272,12 +272,12 @@ func (m *metadataTracker) Counts(dev protocol.DeviceID, flag uint32) Counts {
return m.counts.Counts[idx]
}
// nextSeq allocates a new sequence number for the given device
func (m *metadataTracker) nextSeq(dev protocol.DeviceID) int64 {
// nextLocalSeq allocates a new local sequence number
func (m *metadataTracker) nextLocalSeq() int64 {
m.mut.Lock()
defer m.mut.Unlock()
c := m.countsPtr(dev, 0)
c := m.countsPtr(protocol.LocalDeviceID, 0)
c.Sequence++
return c.Sequence
}

View File

@@ -24,9 +24,14 @@ type NamespacedKV struct {
// NewNamespacedKV returns a new NamespacedKV that lives in the namespace
// specified by the prefix.
func NewNamespacedKV(db *Lowlevel, prefix string) *NamespacedKV {
prefixBs := []byte(prefix)
// After the conversion from string the cap will be larger than the len (in Go 1.11.5,
// 32 bytes cap for small strings). We need to cut it down to ensure append() calls
// on the prefix make a new allocation.
prefixBs = prefixBs[:len(prefixBs):len(prefixBs)]
return &NamespacedKV{
db: db,
prefix: []byte(prefix),
prefix: prefixBs,
}
}
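The comment above is about append aliasing: if the prefix slice has spare capacity, two keys built with append(prefix, ...) share the same backing array and the second write clobbers the first. A minimal sketch of the problem and of the full slice expression fix, using nothing beyond standard Go:

package main

import "fmt"

func main() {
    // A prefix with spare capacity: append reuses the backing array.
    prefix := make([]byte, 0, 8)
    prefix = append(prefix, "ns"...)

    k1 := append(prefix, 'a')
    k2 := append(prefix, 'b')           // writes over the byte k1 points at
    fmt.Println(string(k1), string(k2)) // nsb nsb - k1 was clobbered

    // Full slice expression: cap == len, so every append must allocate.
    safe := prefix[:len(prefix):len(prefix)]
    k3 := append(safe, 'a')
    k4 := append(safe, 'b')
    fmt.Println(string(k3), string(k4)) // nsa nsb
}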
@@ -54,17 +59,15 @@ func (n *NamespacedKV) Reset() {
// PutInt64 stores a new int64. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutInt64(key string, val int64) {
keyBs := append(n.prefix, []byte(key)...)
var valBs [8]byte
binary.BigEndian.PutUint64(valBs[:], uint64(val))
n.db.Put(keyBs, valBs[:], nil)
n.db.Put(n.prefixedKey(key), valBs[:], nil)
}
// Int64 returns the stored value interpreted as an int64 and a boolean that
// is false if no value was stored at the key.
func (n *NamespacedKV) Int64(key string) (int64, bool) {
keyBs := append(n.prefix, []byte(key)...)
valBs, err := n.db.Get(keyBs, nil)
valBs, err := n.db.Get(n.prefixedKey(key), nil)
if err != nil {
return 0, false
}
@@ -75,17 +78,15 @@ func (n *NamespacedKV) Int64(key string) (int64, bool) {
// PutTime stores a new time.Time. Any existing value (even if of another
// type) is overwritten.
func (n *NamespacedKV) PutTime(key string, val time.Time) {
keyBs := append(n.prefix, []byte(key)...)
valBs, _ := val.MarshalBinary() // never returns an error
n.db.Put(keyBs, valBs, nil)
n.db.Put(n.prefixedKey(key), valBs, nil)
}
// Time returns the stored value interpreted as a time.Time and a boolean
// that is false if no value was stored at the key.
func (n NamespacedKV) Time(key string) (time.Time, bool) {
var t time.Time
keyBs := append(n.prefix, []byte(key)...)
valBs, err := n.db.Get(keyBs, nil)
valBs, err := n.db.Get(n.prefixedKey(key), nil)
if err != nil {
return t, false
}
@@ -96,15 +97,13 @@ func (n NamespacedKV) Time(key string) (time.Time, bool) {
// PutString stores a new string. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutString(key, val string) {
keyBs := append(n.prefix, []byte(key)...)
n.db.Put(keyBs, []byte(val), nil)
n.db.Put(n.prefixedKey(key), []byte(val), nil)
}
// String returns the stored value interpreted as a string and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) String(key string) (string, bool) {
keyBs := append(n.prefix, []byte(key)...)
valBs, err := n.db.Get(keyBs, nil)
valBs, err := n.db.Get(n.prefixedKey(key), nil)
if err != nil {
return "", false
}
@@ -114,15 +113,13 @@ func (n NamespacedKV) String(key string) (string, bool) {
// PutBytes stores a new byte slice. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutBytes(key string, val []byte) {
keyBs := append(n.prefix, []byte(key)...)
n.db.Put(keyBs, val, nil)
n.db.Put(n.prefixedKey(key), val, nil)
}
// Bytes returns the stored value as a raw byte slice and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) Bytes(key string) ([]byte, bool) {
keyBs := append(n.prefix, []byte(key)...)
valBs, err := n.db.Get(keyBs, nil)
valBs, err := n.db.Get(n.prefixedKey(key), nil)
if err != nil {
return nil, false
}
@@ -132,19 +129,17 @@ func (n NamespacedKV) Bytes(key string) ([]byte, bool) {
// PutBool stores a new boolean. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutBool(key string, val bool) {
keyBs := append(n.prefix, []byte(key)...)
if val {
n.db.Put(keyBs, []byte{0x0}, nil)
n.db.Put(n.prefixedKey(key), []byte{0x0}, nil)
} else {
n.db.Put(keyBs, []byte{0x1}, nil)
n.db.Put(n.prefixedKey(key), []byte{0x1}, nil)
}
}
// Bool returns the stored value as a boolean and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) Bool(key string) (bool, bool) {
keyBs := append(n.prefix, []byte(key)...)
valBs, err := n.db.Get(keyBs, nil)
valBs, err := n.db.Get(n.prefixedKey(key), nil)
if err != nil {
return false, false
}
@@ -154,8 +149,11 @@ func (n NamespacedKV) Bool(key string) (bool, bool) {
// Delete deletes the specified key. It is allowed to delete a nonexistent
// key.
func (n NamespacedKV) Delete(key string) {
keyBs := append(n.prefix, []byte(key)...)
n.db.Delete(keyBs, nil)
n.db.Delete(n.prefixedKey(key), nil)
}
func (n NamespacedKV) prefixedKey(key string) []byte {
return append(n.prefix, []byte(key)...)
}
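A short usage sketch of the API above, against an in-memory Lowlevel from db.OpenMemory; meant purely as illustration, not as a test from the repository.

package main

import (
    "fmt"

    "github.com/syncthing/syncthing/lib/db"
)

func main() {
    ldb := db.OpenMemory()
    defer ldb.Close()

    kv := db.NewNamespacedKV(ldb, "example/") // every key is stored under this prefix
    kv.PutString("greeting", "hello")
    kv.PutInt64("answer", 42)

    if s, ok := kv.String("greeting"); ok {
        fmt.Println(s) // hello
    }
    if n, ok := kv.Int64("answer"); ok {
        fmt.Println(n) // 42
    }
    kv.Delete("greeting") // deleting a nonexistent key is allowed
}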
// Well known namespaces that can be instantiated without knowing the key

View File

@@ -101,7 +101,7 @@ func (db *schemaUpdater) updateSchema0to1() {
changedFolders := make(map[string]struct{})
ignAdded := 0
meta := newMetadataTracker() // dummy metadata tracker
var gk []byte
var gk, buf []byte
for dbi.Next() {
folder, ok := db.keyer.FolderFromDeviceFileKey(dbi.Key())
@@ -126,7 +126,7 @@ func (db *schemaUpdater) updateSchema0to1() {
changedFolders[string(folder)] = struct{}{}
}
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
t.removeFromGlobal(gk, folder, device, nil, nil)
buf = t.removeFromGlobal(gk, buf, folder, device, nil, nil)
t.Delete(dbi.Key())
t.checkFlush()
continue
@@ -156,8 +156,8 @@ func (db *schemaUpdater) updateSchema0to1() {
// Add invalid files to global list
if f.IsInvalid() {
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if t.updateGlobal(gk, folder, device, f, meta) {
if _, ok := changedFolders[string(folder)]; !ok {
if buf, ok = t.updateGlobal(gk, buf, folder, device, f, meta); ok {
if _, ok = changedFolders[string(folder)]; !ok {
changedFolders[string(folder)] = struct{}{}
}
ignAdded++

View File

@@ -14,7 +14,6 @@ package db
import (
"os"
"sort"
"time"
"github.com/syncthing/syncthing/lib/fs"
@@ -25,11 +24,10 @@ import (
)
type FileSet struct {
folder string
fs fs.Filesystem
db *instance
blockmap *BlockMap
meta *metadataTracker
folder string
fs fs.Filesystem
db *instance
meta *metadataTracker
updateMutex sync.Mutex // protects database updates and the corresponding metadata changes
}
@@ -75,7 +73,6 @@ func NewFileSet(folder string, fs fs.Filesystem, ll *Lowlevel) *FileSet {
folder: folder,
fs: fs,
db: db,
blockmap: NewBlockMap(ll, folder),
meta: newMetadataTracker(),
updateMutex: sync.NewMutex(),
}
@@ -116,7 +113,6 @@ func (s *FileSet) Drop(device protocol.DeviceID) {
s.db.dropDeviceFolder(device[:], []byte(s.folder), s.meta)
if device == protocol.LocalDeviceID {
s.blockmap.Drop()
s.meta.resetCounts(device)
// We deliberately do not reset the sequence number here. Dropping
// all files for the local device ID only happens in testing - which
@@ -147,52 +143,13 @@ func (s *FileSet) Update(device protocol.DeviceID, fs []protocol.FileInfo) {
defer s.meta.toDB(s.db, []byte(s.folder))
if device != protocol.LocalDeviceID {
// Easy case, just update the files and we're done.
s.db.updateFiles([]byte(s.folder), device[:], fs, s.meta)
if device == protocol.LocalDeviceID {
// For the local device we have a bunch of metadata to track.
s.db.updateLocalFiles([]byte(s.folder), fs, s.meta)
return
}
// For the local device we have a bunch of metadata to track however...
discards := make([]protocol.FileInfo, 0, len(fs))
updates := make([]protocol.FileInfo, 0, len(fs))
// db.UpdateFiles will sort unchanged files out -> save one db lookup
// filter slice according to https://github.com/golang/go/wiki/SliceTricks#filtering-without-allocating
oldFs := fs
fs = fs[:0]
folder := []byte(s.folder)
for _, nf := range oldFs {
ef, ok := s.db.getFileDirty(folder, device[:], []byte(osutil.NormalizedFilename(nf.Name)))
if ok && ef.Version.Equal(nf.Version) && ef.IsInvalid() == nf.IsInvalid() {
continue
}
nf.Sequence = s.meta.nextSeq(protocol.LocalDeviceID)
fs = append(fs, nf)
if ok {
discards = append(discards, ef)
}
updates = append(updates, nf)
}
// The ordering here is important. We first remove stuff that points to
// files we are going to update, then update them, then add new index
// pointers etc. In addition, we do the discards in reverse order so
// that a reader traversing the sequence index will get a consistent
// view up until the point they meet the writer.
sort.Slice(discards, func(a, b int) bool {
// n.b. "b < a" instead of the usual "a < b"
return discards[b].Sequence < discards[a].Sequence
})
s.blockmap.Discard(discards)
s.db.removeSequences(folder, discards)
s.db.updateFiles([]byte(s.folder), device[:], fs, s.meta)
s.db.addSequences(folder, updates)
s.blockmap.Update(updates)
// Easy case, just update the files and we're done.
s.db.updateRemoteFiles([]byte(s.folder), device[:], fs, s.meta)
}
func (s *FileSet) WithNeed(device protocol.DeviceID, fn Iterator) {
@@ -250,7 +207,7 @@ func (s *FileSet) Get(device protocol.DeviceID, file string) (protocol.FileInfo,
}
func (s *FileSet) GetGlobal(file string) (protocol.FileInfo, bool) {
fi, ok := s.db.getGlobal([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), false)
fi, ok := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), false)
if !ok {
return protocol.FileInfo{}, false
}
@@ -260,7 +217,7 @@ func (s *FileSet) GetGlobal(file string) (protocol.FileInfo, bool) {
}
func (s *FileSet) GetGlobalTruncated(file string) (FileInfoTruncated, bool) {
fi, ok := s.db.getGlobal([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), true)
fi, ok := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), true)
if !ok {
return FileInfoTruncated{}, false
}
@@ -327,8 +284,6 @@ func DropFolder(ll *Lowlevel, folder string) {
db.dropFolder([]byte(folder))
db.dropMtimes([]byte(folder))
db.dropFolderMeta([]byte(folder))
bm := NewBlockMap(ll, folder)
bm.Drop()
// Also clean out the folder ID mapping.
db.folderIdx.Delete([]byte(folder))

View File

@@ -418,10 +418,7 @@ func TestUpdateToInvalid(t *testing.T) {
})
if !f.Iterate([]string{folder}, localHave[4].Blocks[0].Hash, func(folder, file string, index int32) bool {
if file == localHave[4].Name {
return true
}
return false
return file == localHave[4].Name
}) {
t.Errorf("First block of un-invalidated file is missing from blockmap")
}

View File

@@ -56,7 +56,6 @@ func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (FileIntf, boo
l.Debugln("surprise error:", err)
return nil, false
}
f, err := unmarshalTrunc(bs, trunc)
if err != nil {
l.Debugln("unmarshal error:", err)
@@ -65,6 +64,27 @@ func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (FileIntf, boo
return f, true
}
func (t readOnlyTransaction) getGlobal(keyBuf, folder, file []byte, truncate bool) ([]byte, FileIntf, bool) {
keyBuf = t.db.keyer.GenerateGlobalVersionKey(keyBuf, folder, file)
bs, err := t.Get(keyBuf, nil)
if err != nil {
return keyBuf, nil, false
}
vl, ok := unmarshalVersionList(bs)
if !ok {
return keyBuf, nil, false
}
keyBuf = t.db.keyer.GenerateDeviceFileKey(keyBuf, folder, vl.Versions[0].Device, file)
if fi, ok := t.getFileTrunc(keyBuf, truncate); ok {
return keyBuf, fi, true
}
return keyBuf, nil, false
}
// A readWriteTransaction is a readOnlyTransaction plus a batch for writes.
// The batch will be committed on close() or by checkFlush() if it exceeds the
// batch size.
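A rough sketch of that pattern with goleveldb, independent of the actual types in this package: a snapshot for consistent reads plus a write batch that checkFlush writes out and reuses once it grows past a threshold, and that close commits. The type names are invented for the sketch; the 1000-record threshold matches the maxBatchSize used by the removed BlockMap code.

package main

import (
    "github.com/syndtr/goleveldb/leveldb"
    "github.com/syndtr/goleveldb/leveldb/storage"
)

const maxBatchSize = 1000

// rwTxn is a stand-in for a read/write transaction: reads go to a snapshot,
// writes are buffered in a batch.
type rwTxn struct {
    db    *leveldb.DB
    snap  *leveldb.Snapshot
    batch *leveldb.Batch
}

func newRWTxn(db *leveldb.DB) (*rwTxn, error) {
    snap, err := db.GetSnapshot()
    if err != nil {
        return nil, err
    }
    return &rwTxn{db: db, snap: snap, batch: new(leveldb.Batch)}, nil
}

// checkFlush writes out and reuses the batch every few records so it does
// not grow without bound.
func (t *rwTxn) checkFlush() error {
    if t.batch.Len() > maxBatchSize {
        if err := t.db.Write(t.batch, nil); err != nil {
            return err
        }
        t.batch.Reset()
    }
    return nil
}

// close commits whatever is left in the batch and releases the snapshot.
func (t *rwTxn) close() error {
    defer t.snap.Release()
    return t.db.Write(t.batch, nil)
}

func main() {
    ldb, _ := leveldb.Open(storage.NewMemStorage(), nil)
    defer ldb.Close()

    t, _ := newRWTxn(ldb)
    t.batch.Put([]byte("key"), []byte("value"))
    _ = t.checkFlush()
    _ = t.close()
}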
@@ -99,16 +119,10 @@ func (t readWriteTransaction) flush() {
}
}
func (t readWriteTransaction) insertFile(fk, folder, device []byte, file protocol.FileInfo) {
l.Debugf("insert; folder=%q device=%v %v", folder, protocol.DeviceIDFromBytes(device), file)
t.Put(fk, mustMarshal(&file))
}
// updateGlobal adds this device+version to the version list for the given
// file. If the device is already present in the list, the version is updated.
// If the file does not have an entry in the global list, it is created.
func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file protocol.FileInfo, meta *metadataTracker) bool {
func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, file protocol.FileInfo, meta *metadataTracker) ([]byte, bool) {
l.Debugf("update global; folder=%q device=%v file=%q version=%v invalid=%v", folder, protocol.DeviceIDFromBytes(device), file.Name, file.Version, file.IsInvalid())
var fl VersionList
@@ -118,7 +132,7 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
fl, removedFV, removedAt, insertedAt := fl.update(folder, device, file, t.readOnlyTransaction)
if insertedAt == -1 {
l.Debugln("update global; same version, global unchanged")
return false
return keyBuf, false
}
name := []byte(file.Name)
@@ -127,19 +141,22 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
if insertedAt == 0 {
// Inserted a new newest version
global = file
} else if new, ok := t.getFile(folder, fl.Versions[0].Device, name); ok {
global = new
} else {
panic("This file must exist in the db")
keyBuf = t.db.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, name)
if new, ok := t.getFileByKey(keyBuf); ok {
global = new
} else {
panic("This file must exist in the db")
}
}
// Fixup the list of files we need.
t.updateLocalNeed(folder, name, fl, global)
keyBuf = t.updateLocalNeed(keyBuf, folder, name, fl, global)
if removedAt != 0 && insertedAt != 0 {
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
t.Put(gk, mustMarshal(&fl))
return true
return keyBuf, true
}
// Remove the old global from the global size counter
@@ -150,7 +167,8 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
// The previous newest version is now at index 1
oldGlobalFV = fl.Versions[1]
}
if oldFile, ok := t.getFile(folder, oldGlobalFV.Device, name); ok {
keyBuf = t.db.keyer.GenerateDeviceFileKey(keyBuf, folder, oldGlobalFV.Device, name)
if oldFile, ok := t.getFileByKey(keyBuf); ok {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
meta.removeFile(protocol.GlobalDeviceID, oldFile)
@@ -162,24 +180,25 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
t.Put(gk, mustMarshal(&fl))
return true
return keyBuf, true
}
// updateLocalNeeds checks whether the given file is still needed on the local
// updateLocalNeed checks whether the given file is still needed on the local
// device according to the version list and global FileInfo given and updates
// the db accordingly.
func (t readWriteTransaction) updateLocalNeed(folder, name []byte, fl VersionList, global protocol.FileInfo) {
nk := t.db.keyer.GenerateNeedFileKey(nil, folder, name)
hasNeeded, _ := t.db.Has(nk, nil)
func (t readWriteTransaction) updateLocalNeed(keyBuf, folder, name []byte, fl VersionList, global protocol.FileInfo) []byte {
keyBuf = t.db.keyer.GenerateNeedFileKey(keyBuf, folder, name)
hasNeeded, _ := t.Has(keyBuf, nil)
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); need(global, haveLocalFV, localFV.Version) {
if !hasNeeded {
l.Debugf("local need insert; folder=%q, name=%q", folder, name)
t.Put(nk, nil)
t.Put(keyBuf, nil)
}
} else if hasNeeded {
l.Debugf("local need delete; folder=%q, name=%q", folder, name)
t.Delete(nk)
t.Delete(keyBuf)
}
return keyBuf
}
func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
@@ -201,54 +220,59 @@ func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
// removeFromGlobal removes the device from the global version list for the
// given file. If the version list is empty after this, the file entry is
// removed entirely.
func (t readWriteTransaction) removeFromGlobal(gk, folder, device, file []byte, meta *metadataTracker) {
func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device []byte, file []byte, meta *metadataTracker) []byte {
l.Debugf("remove from global; folder=%q device=%v file=%q", folder, protocol.DeviceIDFromBytes(device), file)
svl, err := t.Get(gk, nil)
if err != nil {
// We might be called to "remove" a global version that doesn't exist
// if the first update for the file is already marked invalid.
return
return keyBuf
}
var fl VersionList
err = fl.Unmarshal(svl)
if err != nil {
l.Debugln("unmarshal error:", err)
return
return keyBuf
}
fl, _, removedAt := fl.pop(device)
if removedAt == -1 {
// There is no version for the given device
return
return keyBuf
}
if removedAt == 0 {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
if f, ok := t.getFile(folder, device, file); ok {
keyBuf = t.db.keyer.GenerateDeviceFileKey(keyBuf, folder, device, file)
if f, ok := t.getFileByKey(keyBuf); ok {
meta.removeFile(protocol.GlobalDeviceID, f)
}
}
if len(fl.Versions) == 0 {
t.Delete(t.db.keyer.GenerateNeedFileKey(nil, folder, file))
keyBuf = t.db.keyer.GenerateNeedFileKey(keyBuf, folder, file)
t.Delete(keyBuf)
t.Delete(gk)
return
return keyBuf
}
if removedAt == 0 {
global, ok := t.getFile(folder, fl.Versions[0].Device, file)
keyBuf = t.db.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, file)
global, ok := t.getFileByKey(keyBuf)
if !ok {
panic("This file must exist in the db")
}
t.updateLocalNeed(folder, file, fl, global)
keyBuf = t.updateLocalNeed(keyBuf, folder, file, fl, global)
meta.addFile(protocol.GlobalDeviceID, global)
}
l.Debugf("new global after remove: %v", fl)
t.Put(gk, mustMarshal(&fl))
return keyBuf
}
func (t readWriteTransaction) deleteKeyPrefix(prefix []byte) {

View File

@@ -30,6 +30,10 @@ func writeJSONS(w io.Writer, db *leveldb.DB) {
}
}
// We know this function isn't generally used; we keep it here regardless and
// don't want the linter to complain about it.
var _ = writeJSONS
// openJSONS reads a JSON stream file into a leveldb.DB
func openJSONS(file string) (*leveldb.DB, error) {
fd, err := os.Open(file)

View File

@@ -77,7 +77,7 @@ func TestGlobalOverHTTP(t *testing.T) {
s := new(fakeDiscoveryServer)
mux := http.NewServeMux()
mux.HandleFunc("/", s.handler)
go http.Serve(list, mux)
go func() { _ = http.Serve(list, mux) }()
// This should succeed
addresses, err := testLookup("http://" + list.Addr().String() + "?insecure&noannounce")
@@ -125,7 +125,7 @@ func TestGlobalOverHTTPS(t *testing.T) {
s := new(fakeDiscoveryServer)
mux := http.NewServeMux()
mux.HandleFunc("/", s.handler)
go http.Serve(list, mux)
go func() { _ = http.Serve(list, mux) }()
// With default options the lookup code expects the server certificate to
// check out according to the usual CA chains etc. That won't be the case
@@ -190,7 +190,7 @@ func TestGlobalAnnounce(t *testing.T) {
s := new(fakeDiscoveryServer)
mux := http.NewServeMux()
mux.HandleFunc("/", s.handler)
go http.Serve(list, mux)
go func() { _ = http.Serve(list, mux) }()
url := "https://" + list.Addr().String() + "?insecure"
disco, err := NewGlobal(url, cert, new(fakeAddressLister))

View File

@@ -164,7 +164,7 @@ func TestGlobalIDs(t *testing.T) {
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
l.Log(DeviceConnected, "foo")
_ = l.Subscribe(AllEvents)
l.Subscribe(AllEvents)
l.Log(DeviceConnected, "bar")
ev, err := s.Poll(timeout)

Some files were not shown because too many files have changed in this diff.