Compare commits

..

32 Commits

Author SHA1 Message Date
Ben S
ba4554f053 gui: Sharing tab for folders (#5313)
Sharing tab for folders (like sharing tab for devices)
2018-11-13 08:57:45 +01:00
Simon Frei
33bed5b1ec lib/model: Don't compare permissions if IgnorePerms is true (fixes #5323) (#5322) 2018-11-13 08:54:49 +01:00
Simon Frei
4f27bdfc27 lib/model, lib/protocol: Handle request concurrency in model (#5216) 2018-11-13 08:53:55 +01:00
Cromefire_
9212303906 gui: Defer jsTree initialisation until next digest cycle (fixes #4738) 2018-11-11 12:42:53 +00:00
Simon Frei
603da2dce2 cmd/syncthing, lib/relay: Fixes regarding stopping of services (#5293) 2018-11-07 11:05:07 +01:00
Simon Frei
d510e3cca3 all: Display errors while scanning in web UI (fixes #4480) (#5215) 2018-11-07 11:04:41 +01:00
BAHADIR YILMAZ
f51514d0e7 gui: Select / Deselect all folders / devices (fixes #4000) (#5307) 2018-11-07 09:44:52 +01:00
Jakob Borg
e67be59c5f gui, man, authors: Update docs, translations, and contributors 2018-11-07 07:45:25 +01:00
Benno Fünfstück
add12b43aa cmd/syncthing: Make directory auto-complete case insensitive (fixes #1347) 2018-11-01 19:13:11 +00:00
Simon Frei
aec91d8f32 cmd/ursrv: Add google maps api key (fixes #5296) (#5303) 2018-11-01 06:21:28 +01:00
Simon Frei
53b0f36be6 lib/fs: Use os.FileMode.String for fs.FileMode (#5302) 2018-10-31 12:49:50 +01:00
Jakob Borg
830bde2c83 gui, man, authors: Update docs, translations, and contributors 2018-10-31 07:45:23 +01:00
Jakob Borg
be1744a481 cmd/strelaypoolsrv: Hardcode a usable maps API key (fixes #5296)
Yeah it's not the most beautiful solution but it works for now.
2018-10-31 07:39:38 +01:00
Simon Frei
01ade9c8ae lib/connections: Don't panic on removed device (fixes #5299) (#5300) 2018-10-30 10:34:19 +01:00
Simon Frei
b1acc37c16 lib/db: Update local need on device removal (fixes #5294) (#5295) 2018-10-30 05:40:51 +01:00
Simon Frei
64a591610b lib/model: Check if files from queue are invalid (fixes #5291) (#5292) 2018-10-26 19:13:35 +01:00
Jakob Borg
9a07b22d4a gui, man, authors: Update docs, translations, and contributors 2018-10-24 07:45:24 +02:00
Simon Frei
406bedf1e3 lib/logger: Disable debug flags without debugging (fixes #4780) (#5278) 2018-10-23 15:17:40 +02:00
Simon Frei
7f55fbbe84 gui: Show usage reporting title regardless of RC (#5284) 2018-10-23 17:30:13 +09:00
Alexandre Viau
75f9ea623c cmd: Update prometheus_client (fixes #5280) (#5282) 2018-10-21 16:11:26 +01:00
Alexandre Viau
9745679c63 lib: chmod -x on progressemitter.go and errors.go (#5281) 2018-10-21 16:08:14 +01:00
Jakob Borg
8519a24ba6 cmd/*, lib/tlsutil: Refactor TLS stuff (fixes #5256) (#5276)
This changes the TLS and certificate handling in a few ways:

- We always use TLS 1.2, both for sync connections (as previously) and
  the GUI/REST/discovery stuff. This is a tightening of the requirements
  on the GUI. As far as I can tell from caniuse.com every browser from
  2013 and forward supports TLS 1.2, so I think we should be fine.

- We always create ECDSA certificates. Previously we'd create
  ECDSA-with-RSA certificates for sync connections and pure RSA
  certificates for the web stuff. The new default is more modern and the
  same everywhere. These certificates are OK in TLS 1.2.

- We use the Go CPU detection stuff to choose the cipher suites to use,
  indirectly. The TLS package uses CPU capabilities probing to select
  either AES-GCM (fast if we have AES-NI) or ChaCha20 (faster if we
  don't). These CPU detection things aren't exported though, so the tlsutil
  package now does a quick TLS handshake with itself as part of init().
  If the chosen cipher suite was AES-GCM we prioritize that, otherwise we
  prefer ChaCha20. Some might call this ugly. I think it's awesome.
2018-10-21 14:17:50 +09:00
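As an illustration of the self-handshake probe described in the commit above, here is a minimal Go sketch (not Syncthing's actual tlsutil code, just the same idea under illustrative names): generate a throwaway ECDSA certificate, handshake with ourselves over an in-memory pipe pinned to TLS 1.2, and report which cipher suite the runtime's CPU-aware ordering picked.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedCert generates a throwaway ECDSA certificate, mirroring the
// "always ECDSA" choice above. Purely illustrative.
func selfSignedCert() (tls.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "probe"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, err
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
}

// preferredSuite does a TLS 1.2 handshake with itself over an in-memory
// pipe and returns the cipher suite the runtime chose. Go's crypto/tls
// orders suites based on CPU capabilities (AES-GCM first with AES-NI,
// ChaCha20 first without), so the result reflects that probing.
func preferredSuite() (uint16, error) {
	cert, err := selfSignedCert()
	if err != nil {
		return 0, err
	}
	c1, c2 := net.Pipe()
	defer c1.Close()
	defer c2.Close()
	server := tls.Server(c1, &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   tls.VersionTLS12,
		MaxVersion:   tls.VersionTLS12,
	})
	client := tls.Client(c2, &tls.Config{
		InsecureSkipVerify: true, // we just generated the cert ourselves
		MinVersion:         tls.VersionTLS12,
		MaxVersion:         tls.VersionTLS12,
	})
	errc := make(chan error, 1)
	go func() { errc <- server.Handshake() }()
	if err := client.Handshake(); err != nil {
		return 0, err
	}
	if err := <-errc; err != nil {
		return 0, err
	}
	return client.ConnectionState().CipherSuite, nil
}

func main() {
	suite, err := preferredSuite()
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime prefers:", tls.CipherSuiteName(suite))
}

On a machine with AES-NI this typically reports an AES-GCM suite; without it, a ChaCha20 suite. That is the same signal used to decide which cipher family to prioritize.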
Simon Frei
c0be9987d0 build: Add desktop files and icons to .deb (fixes #3439) (#5277) 2018-10-20 08:25:59 +02:00
Jakob Borg
c1f1fd71fe cmd/stdiscosrv: Unflake test (fixes #5247) 2018-10-18 20:39:36 +09:00
Jakob Borg
3c657d1749 gui, man, authors: Update docs, translations, and contributors 2018-10-17 07:45:24 +02:00
Nico Stapelbroek
53f80fdf73 Update golint import path (#5274)
Fixes error "code in directory GOPATH/src/github.com/golang/lint/golint expects import golang.org/x/lint/golint" during the
go run build.go setup command.

See https://github.com/golang/lint/issues/415
2018-10-16 19:53:03 +01:00
Simon Frei
6325ae070c cmd/syncthing: Correct type assertion in verboseService (fixes #5270) (#5272) 2018-10-16 01:09:24 +02:00
Simon Frei
089c283ca6 lib/config: Disable folder free disk check when configured (fixes #5267) (#5268) 2018-10-12 12:34:56 +01:00
Simon Frei
8d7ea0424d vendor: Update github.com/syncthing/notify (#5263) 2018-10-12 12:24:25 +02:00
Jakob Borg
f12d5771dc cmd/syncthing: We can use Go 1.8 constants now 2018-10-12 07:55:54 +02:00
Jakob Borg
7b0c49a1b6 cmd/stindex: Add index checking mode ("idxck") (#5262) 2018-10-11 20:48:39 +01:00
Simon Frei
0690fe7585 lib/model: Unnecessary return (#5264) 2018-10-11 15:08:37 +02:00
117 changed files with 4205 additions and 1814 deletions

View File

@@ -38,6 +38,7 @@ Ben Shepherd (benshep) <bjashepherd@gmail.com>
Ben Sidhom (bsidhom) <bsidhom@gmail.com>
Benedikt Heine (bebehei) <bebe@bebehei.de>
Benedikt Morbach <benedikt.morbach@googlemail.com>
Benno Fünfstück <benno.fuenfstueck@gmail.com>
Benny Ng (tpng) <benny.tpng@gmail.com>
Boris Rybalkin <ribalkin@gmail.com>
Brandon Philips (philips) <brandon@ifup.org>
@@ -131,6 +132,7 @@ Mike Boone <mike@boonedocks.net>
MikeLund <MikeLund@users.noreply.github.com>
Nate Morrison (nrm21) <natemorrison@gmail.com>
Nicholas Rishel (PrototypeNM1) <rishel.nick@gmail.com> <PrototypeNM1@users.noreply.github.com>
Nico Stapelbroek <3368018+nstapelbroek@users.noreply.github.com>
Nicolas Braud-Santoni <nicolas@braud-santoni.eu>
Niels Peter Roest (Niller303) <nielsproest@hotmail.com> <seje.niels@hotmail.com>
Nils Jakobi (thunderstorm99) <jakobi.nils@gmail.com>

View File

@@ -111,6 +111,14 @@ var targets = map[string]target{
{src: "etc/linux-systemd/system/syncthing-resume.service", dst: "deb/lib/systemd/system/syncthing-resume.service", perm: 0644},
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
{src: "etc/firewall-ufw/syncthing", dst: "deb/etc/ufw/applications.d/syncthing", perm: 0644},
{src: "etc/linux-desktop/syncthing-start.desktop", dst: "deb/usr/share/applications/syncthing-start.desktop", perm: 0644},
{src: "etc/linux-desktop/syncthing-ui.desktop", dst: "deb/usr/share/applications/syncthing-ui.desktop", perm: 0644},
{src: "assets/logo-32.png", dst: "deb/usr/share/icons/hicolor/32x32/apps/syncthing.png", perm: 0644},
{src: "assets/logo-64.png", dst: "deb/usr/share/icons/hicolor/64x64/apps/syncthing.png", perm: 0644},
{src: "assets/logo-128.png", dst: "deb/usr/share/icons/hicolor/128x128/apps/syncthing.png", perm: 0644},
{src: "assets/logo-256.png", dst: "deb/usr/share/icons/hicolor/256x256/apps/syncthing.png", perm: 0644},
{src: "assets/logo-512.png", dst: "deb/usr/share/icons/hicolor/512x512/apps/syncthing.png", perm: 0644},
{src: "assets/logo-only.svg", dst: "deb/usr/share/icons/hicolor/scalable/apps/syncthing.svg", perm: 0644},
},
},
"stdiscosrv": {
@@ -361,7 +369,7 @@ func setup() {
"github.com/AlekSi/gocov-xml",
"github.com/axw/gocov/gocov",
"github.com/FiloSottile/gvt",
"github.com/golang/lint/golint",
"golang.org/x/lint/golint",
"github.com/gordonklaus/ineffassign",
"github.com/mdempsky/unconvert",
"github.com/mitchellh/go-wordwrap",

View File

@@ -162,10 +162,10 @@ func (s *levelDBStore) Serve() {
// Start the statistics serve routine. It will exit with us when
// statisticsTrigger is closed.
statisticsTrigger := make(chan struct{})
defer close(statisticsTrigger)
statisticsDone := make(chan struct{})
go s.statisticsServe(statisticsTrigger, statisticsDone)
loop:
for {
select {
case fn := <-s.inbox:
@@ -184,12 +184,18 @@ func (s *levelDBStore) Serve() {
case <-s.stop:
// We're done.
return
close(statisticsTrigger)
break loop
}
}
// Also wait for statisticsServe to return
<-statisticsDone
}
func (s *levelDBStore) statisticsServe(trigger <-chan struct{}, done chan<- struct{}) {
defer close(done)
for range trigger {
t0 := time.Now()
nowNanos := t0.UnixNano()

View File

@@ -121,7 +121,7 @@ func main() {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv", 0)
cert, err = tlsutil.NewCertificate(certFile, keyFile, "stdiscosrv")
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}

View File

@@ -109,5 +109,15 @@ func init() {
databaseKeys, databaseStatisticsSeconds,
databaseOperations, databaseOperationSeconds)
prometheus.MustRegister(prometheus.NewProcessCollector(os.Getpid(), "syncthing_discovery"))
processCollectorOpts := prometheus.ProcessCollectorOpts{
Namespace: "syncthing_discovery",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
prometheus.NewProcessCollector(processCollectorOpts),
)
}

cmd/stindex/idxck.go (new file, 242 lines)
View File

@@ -0,0 +1,242 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package main
import (
"bytes"
"encoding/binary"
"fmt"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
)
type fileInfoKey struct {
folder uint32
device uint32
name string
}
type globalKey struct {
folder uint32
name string
}
type sequenceKey struct {
folder uint32
sequence uint64
}
func idxck(ldb *db.Lowlevel) (success bool) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
fileInfos := make(map[fileInfoKey]protocol.FileInfo)
globals := make(map[globalKey]db.VersionList)
sequences := make(map[sequenceKey]string)
needs := make(map[globalKey]struct{})
var localDeviceKey uint32
success = true
it := ldb.NewIterator(nil, nil)
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
var f protocol.FileInfo
err := f.Unmarshal(it.Value())
if err != nil {
fmt.Println("Unable to unmarshal FileInfo:", err)
success = false
continue
}
fileInfos[fileInfoKey{folder, device, name}] = f
case db.KeyTypeGlobal:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv db.VersionList
if err := flv.Unmarshal(it.Value()); err != nil {
fmt.Println("Unable to unmarshal VersionList:", err)
success = false
continue
}
globals[globalKey{folder, name}] = flv
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
folders[key] = string(it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
devices[key] = string(it.Value())
deviceToIDs[string(it.Value())] = key
if bytes.Equal(it.Value(), protocol.LocalDeviceID[:]) {
localDeviceKey = key
}
case db.KeyTypeSequence:
folder := binary.BigEndian.Uint32(key[1:])
seq := binary.BigEndian.Uint64(key[5:])
val := it.Value()
sequences[sequenceKey{folder, seq}] = string(val[9:])
case db.KeyTypeNeed:
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
needs[globalKey{folder, name}] = struct{}{}
}
}
if localDeviceKey == 0 {
fmt.Println("Missing key for local device in device index (bailing out)")
success = false
return
}
for fk, fi := range fileInfos {
if fk.name != fi.Name {
fmt.Printf("Mismatching FileInfo name, %q (key) != %q (actual)\n", fk.name, fi.Name)
success = false
}
folder := folders[fk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for FileInfo %q\n", fk.folder, fk.name)
success = false
continue
}
if devices[fk.device] == "" {
fmt.Printf("Unknown device ID %d for FileInfo %q, folder %q\n", fk.folder, fk.name, folder)
success = false
}
if fk.device == localDeviceKey {
name, ok := sequences[sequenceKey{fk.folder, uint64(fi.Sequence)}]
if !ok {
fmt.Printf("Sequence entry missing for FileInfo %q, folder %q, seq %d\n", fi.Name, folder, fi.Sequence)
success = false
continue
}
if name != fi.Name {
fmt.Printf("Sequence entry refers to wrong name, %q (seq) != %q (FileInfo), folder %q, seq %d\n", name, fi.Name, folder, fi.Sequence)
success = false
}
}
}
for gk, vl := range globals {
folder := folders[gk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for VersionList %q\n", gk.folder, gk.name)
success = false
}
for i, fv := range vl.Versions {
dev, ok := deviceToIDs[string(fv.Device)]
if !ok {
fmt.Printf("VersionList %q, folder %q refers to unknown device %q\n", gk.name, folder, fv.Device)
success = false
}
fi, ok := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !ok {
fmt.Printf("VersionList %q, folder %q, entry %d refers to unknown FileInfo\n", gk.name, folder, i)
success = false
}
if !fi.Version.Equal(fv.Version) {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo version mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, fv.Version, fi.Version)
success = false
}
if fi.IsInvalid() != fv.Invalid {
fmt.Printf("VersionList %q, folder %q, entry %d, FileInfo invalid mismatch, %v (VersionList) != %v (FileInfo)\n", gk.name, folder, i, fv.Invalid, fi.IsInvalid())
success = false
}
}
// If we need this file we should have a need entry for it. needsLocally
// can give false positives for deleted files (where we might legitimately
// lack an entry if we never had the file) and for ignored files.
if needsLocally(vl) {
_, ok := needs[gk]
if !ok {
dev, _ := deviceToIDs[string(vl.Versions[0].Device)]
fi, _ := fileInfos[fileInfoKey{gk.folder, dev, gk.name}]
if !fi.IsDeleted() && !fi.IsIgnored() {
fmt.Printf("Missing need entry for needed file %q, folder %q\n", gk.name, folder)
}
}
}
}
seenSeq := make(map[fileInfoKey]uint64)
for sk, name := range sequences {
folder := folders[sk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for sequence entry %d, %q\n", sk.folder, sk.sequence, name)
success = false
continue
}
if prev, ok := seenSeq[fileInfoKey{folder: sk.folder, name: name}]; ok {
fmt.Printf("Duplicate sequence entry for %q, folder %q, seq %d (prev %d)\n", name, folder, sk.sequence, prev)
success = false
}
seenSeq[fileInfoKey{folder: sk.folder, name: name}] = sk.sequence
fi, ok := fileInfos[fileInfoKey{sk.folder, localDeviceKey, name}]
if !ok {
fmt.Printf("Missing FileInfo for sequence entry %d, folder %q, %q\n", sk.sequence, folder, name)
success = false
continue
}
if fi.Sequence != int64(sk.sequence) {
fmt.Printf("Sequence mismatch for %q, folder %q, %d (key) != %d (FileInfo)\n", name, folder, sk.sequence, fi.Sequence)
success = false
}
}
for nk := range needs {
folder := folders[nk.folder]
if folder == "" {
fmt.Printf("Unknown folder ID %d for need entry %q\n", nk.folder, nk.name)
success = false
continue
}
vl, ok := globals[nk]
if !ok {
fmt.Printf("Missing global for need entry %q, folder %q\n", nk.name, folder)
success = false
continue
}
if !needsLocally(vl) {
fmt.Printf("Need entry for file we don't need, %q, folder %q\n", nk.name, folder)
success = false
}
}
return
}
func needsLocally(vl db.VersionList) bool {
var lv *protocol.Vector
for _, fv := range vl.Versions {
if bytes.Equal(fv.Device, protocol.LocalDeviceID[:]) {
lv = &fv.Version
break
}
}
if lv == nil {
return true // provisionally, it looks like we need the file
}
return !lv.GreaterEqual(vl.Versions[0].Version)
}

View File

@@ -21,7 +21,7 @@ func main() {
log.SetFlags(0)
log.SetOutput(os.Stdout)
flag.StringVar(&mode, "mode", "dump", "Mode of operation: dump, dumpsize")
flag.StringVar(&mode, "mode", "dump", "Mode of operation: dump, dumpsize, idxck")
flag.Parse()
@@ -30,9 +30,7 @@ func main() {
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
fmt.Println("Path:", path)
ldb, err := db.Open(path)
ldb, err := db.OpenRO(path)
if err != nil {
log.Fatal(err)
}
@@ -41,6 +39,10 @@ func main() {
dump(ldb)
} else if mode == "dumpsize" {
dumpsize(ldb)
} else if mode == "idxck" {
if !idxck(ldb) {
os.Exit(1)
}
} else {
fmt.Println("Unknown mode")
}

View File

@@ -187,7 +187,7 @@
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.4.min.js"></script>
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.8/angular.min.js"></script>
<script type="text/javascript" src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script type="text/javascript" src="//maps.googleapis.com/maps/api/js"></script>
<script type="text/javascript" src="//maps.googleapis.com/maps/api/js?key=AIzaSyDk5WJ8s7ueLKb99X5DbQ-vkWtPDAKqYs0"></script>
</body>
<script>

View File

@@ -636,7 +636,7 @@ func createTestCertificate() tls.Certificate {
}
certFile, keyFile := filepath.Join(tmpDir, "cert.pem"), filepath.Join(tmpDir, "key.pem")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv", 3072)
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv")
if err != nil {
log.Fatalln("Failed to create test X509 key pair:", err)
}

View File

@@ -14,7 +14,16 @@ import (
)
func init() {
prometheus.MustRegister(prometheus.NewProcessCollector(os.Getpid(), "syncthing_relaypoolsrv"))
processCollectorOpts := prometheus.ProcessCollectorOpts{
Namespace: "syncthing_relaypoolsrv",
PidFn: func() (int, error) {
return os.Getpid(), nil
},
}
prometheus.MustRegister(
prometheus.NewProcessCollector(processCollectorOpts),
)
}
var (

View File

@@ -166,7 +166,7 @@ func main() {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv", 3072)
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv")
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}

View File

@@ -115,7 +115,7 @@ type modelIntf interface {
RemoteSequence(folder string) (int64, bool)
State(folder string) (string, time.Time, error)
UsageReportingStats(version int, preview bool) map[string]interface{}
PullErrors(folder string) ([]model.FileError, error)
FolderErrors(folder string) ([]model.FileError, error)
WatchError(folder string) error
}
@@ -185,28 +185,13 @@ func (s *apiService) getListener(guiCfg config.GUIConfiguration) (net.Listener,
name = tlsDefaultCommonName
}
cert, err = tlsutil.NewCertificate(s.httpsCertFile, s.httpsKeyFile, name, httpsRSABits)
cert, err = tlsutil.NewCertificate(s.httpsCertFile, s.httpsKeyFile, name)
}
if err != nil {
return nil, err
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS10, // No SSLv3
CipherSuites: []uint16{
// No RC4
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
},
}
tlsCfg := tlsutil.SecureDefault()
tlsCfg.Certificates = []tls.Certificate{cert}
if guiCfg.Network() == "unix" {
// When listening on a UNIX socket we should unlink before bind,
@@ -276,7 +261,8 @@ func (s *apiService) Serve() {
getRestMux.HandleFunc("/rest/db/status", s.getDBStatus) // folder
getRestMux.HandleFunc("/rest/db/browse", s.getDBBrowse) // folder [prefix] [dirsonly] [levels]
getRestMux.HandleFunc("/rest/folder/versions", s.getFolderVersions) // folder
getRestMux.HandleFunc("/rest/folder/pullerrors", s.getPullErrors) // folder
getRestMux.HandleFunc("/rest/folder/errors", s.getFolderErrors) // folder
getRestMux.HandleFunc("/rest/folder/pullerrors", s.getFolderErrors) // folder (deprecated)
getRestMux.HandleFunc("/rest/events", s.getIndexEvents) // [since] [limit] [timeout] [events]
getRestMux.HandleFunc("/rest/events/disk", s.getDiskEvents) // [since] [limit] [timeout]
getRestMux.HandleFunc("/rest/stats/device", s.getDeviceStats) // -
@@ -710,12 +696,13 @@ func (s *apiService) getDBStatus(w http.ResponseWriter, r *http.Request) {
func folderSummary(cfg configIntf, m modelIntf, folder string) (map[string]interface{}, error) {
var res = make(map[string]interface{})
pullErrors, err := m.PullErrors(folder)
errors, err := m.FolderErrors(folder)
if err != nil && err != model.ErrFolderPaused {
// Stats from the db can still be obtained if the folder is just paused
return nil, err
}
res["pullErrors"] = len(pullErrors)
res["errors"] = len(errors)
res["pullErrors"] = len(errors) // deprecated
res["invalid"] = "" // Deprecated, retains external API for now
@@ -1516,12 +1503,12 @@ func (s *apiService) postFolderVersionsRestore(w http.ResponseWriter, r *http.Re
sendJSON(w, ferr)
}
func (s *apiService) getPullErrors(w http.ResponseWriter, r *http.Request) {
func (s *apiService) getFolderErrors(w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
folder := qs.Get("folder")
page, perpage := getPagingParams(qs)
errors, err := s.model.PullErrors(folder)
errors, err := s.model.FolderErrors(folder)
if err != nil {
http.Error(w, err.Error(), http.StatusNotFound)
@@ -1557,6 +1544,24 @@ func (s *apiService) getSystemBrowse(w http.ResponseWriter, r *http.Request) {
sendJSON(w, browseFiles(current, fsType))
}
const (
matchExact int = iota
matchCaseIns
noMatch
)
func checkPrefixMatch(s, prefix string) int {
if strings.HasPrefix(s, prefix) {
return matchExact
}
if strings.HasPrefix(strings.ToLower(s), strings.ToLower(prefix)) {
return matchCaseIns
}
return noMatch
}
func browseFiles(current string, fsType fs.FilesystemType) []string {
if current == "" {
filesystem := fs.NewFilesystem(fsType, "")
@@ -1582,16 +1587,29 @@ func browseFiles(current string, fsType fs.FilesystemType) []string {
fs := fs.NewFilesystem(fsType, searchDir)
subdirectories, _ := fs.Glob(searchFile + "*")
subdirectories, _ := fs.DirNames(".")
exactMatches := make([]string, 0, len(subdirectories))
caseInsMatches := make([]string, 0, len(subdirectories))
ret := make([]string, 0, len(subdirectories))
for _, subdirectory := range subdirectories {
info, err := fs.Stat(subdirectory)
if err == nil && info.IsDir() {
ret = append(ret, filepath.Join(searchDir, subdirectory)+pathSeparator)
if err != nil || !info.IsDir() {
continue
}
switch checkPrefixMatch(subdirectory, searchFile) {
case matchExact:
exactMatches = append(exactMatches, filepath.Join(searchDir, subdirectory)+pathSeparator)
case matchCaseIns:
caseInsMatches = append(caseInsMatches, filepath.Join(searchDir, subdirectory)+pathSeparator)
}
}
return ret
// sort to return matches in deterministic order (don't depend on file system order)
sort.Strings(exactMatches)
sort.Strings(caseInsMatches)
return append(exactMatches, caseInsMatches...)
}
func (s *apiService) getCPUProf(w http.ResponseWriter, r *http.Request) {

View File

@@ -988,10 +988,14 @@ func TestBrowse(t *testing.T) {
if err := ioutil.WriteFile(filepath.Join(tmpDir, "file"), []byte("hello"), 0644); err != nil {
t.Fatal(err)
}
if err := os.Mkdir(filepath.Join(tmpDir, "MiXEDCase"), 0755); err != nil {
t.Fatal(err)
}
// We expect completion to return the full path to the completed
// directory, with an ending slash.
dirPath := filepath.Join(tmpDir, "dir") + pathSep
mixedCaseDirPath := filepath.Join(tmpDir, "MiXEDCase") + pathSep
cases := []struct {
current string
@@ -1002,13 +1006,15 @@ func TestBrowse(t *testing.T) {
// With slash it's completed to its contents.
// Dirs are given pathSeps.
// Files are not returned.
{tmpDir + pathSep, []string{dirPath}},
{tmpDir + pathSep, []string{mixedCaseDirPath, dirPath}},
// Globbing is automatic based on prefix.
{tmpDir + pathSep + "d", []string{dirPath}},
{tmpDir + pathSep + "di", []string{dirPath}},
{tmpDir + pathSep + "dir", []string{dirPath}},
{tmpDir + pathSep + "f", nil},
{tmpDir + pathSep + "q", nil},
// Globbing is case-insensitive
{tmpDir + pathSep + "mixed", []string{mixedCaseDirPath}},
}
for _, tc := range cases {
@@ -1019,6 +1025,26 @@ func TestBrowse(t *testing.T) {
}
}
func TestPrefixMatch(t *testing.T) {
cases := []struct {
s string
prefix string
expected int
}{
{"aaaA", "aaa", matchExact},
{"AAAX", "BBB", noMatch},
{"AAAX", "aAa", matchCaseIns},
{"äÜX", "äü", matchCaseIns},
}
for _, tc := range cases {
ret := checkPrefixMatch(tc.s, tc.prefix)
if ret != tc.expected {
t.Errorf("checkPrefixMatch(%q, %q) => %v, expected %v", tc.s, tc.prefix, ret, tc.expected)
}
}
}
func equalStrings(a, b []string) bool {
if len(a) != len(b) {
return false

View File

@@ -78,8 +78,6 @@ const (
const (
bepProtocolName = "bep/1.0"
tlsDefaultCommonName = "syncthing"
httpsRSABits = 2048
bepRSABits = 0 // 384 bit ECDSA used instead
defaultEventTimeout = time.Minute
maxSystemErrors = 5
initialSystemLog = 10
@@ -471,7 +469,7 @@ func generate(generateDir string) {
l.Warnln("Key exists; will not overwrite.")
l.Infoln("Device ID:", protocol.NewDeviceID(cert.Certificate[0]))
} else {
cert, err = tlsutil.NewCertificate(certFile, keyFile, tlsDefaultCommonName, bepRSABits)
cert, err = tlsutil.NewCertificate(certFile, keyFile, tlsDefaultCommonName)
if err != nil {
l.Fatalln("Create certificate:", err)
}
@@ -639,7 +637,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
cert, err := tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
if err != nil {
l.Infof("Generating ECDSA key and certificate for %s...", tlsDefaultCommonName)
cert, err = tlsutil.NewCertificate(locations[locCertFile], locations[locKeyFile], tlsDefaultCommonName, bepRSABits)
cert, err = tlsutil.NewCertificate(locations[locCertFile], locations[locKeyFile], tlsDefaultCommonName)
if err != nil {
l.Fatalln(err)
}
@@ -680,30 +678,6 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
}()
}
// The TLS configuration is used for both the listening socket and outgoing
// connections.
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
NextProtos: []string{bepProtocolName},
ClientAuth: tls.RequestClientCert,
SessionTicketsDisabled: true,
InsecureSkipVerify: true,
MinVersion: tls.VersionTLS12,
CipherSuites: []uint16{
0xCCA8, // TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, Go 1.8
0xCCA9, // TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, Go 1.8
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
},
}
perf := cpuBench(3, 150*time.Millisecond, true)
l.Infof("Hashing performance is %.02f MB/s", perf)
@@ -794,6 +768,16 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
cachedDiscovery := discover.NewCachingMux()
mainService.Add(cachedDiscovery)
// The TLS configuration is used for both the listening socket and outgoing
// connections.
tlsCfg := tlsutil.SecureDefault()
tlsCfg.Certificates = []tls.Certificate{cert}
tlsCfg.NextProtos = []string{bepProtocolName}
tlsCfg.ClientAuth = tls.RequestClientCert
tlsCfg.SessionTicketsDisabled = true
tlsCfg.InsecureSkipVerify = true
// Start connection management
connectionsService := connections.NewService(cfg, myID, m, tlsCfg, cachedDiscovery, bepProtocolName, tlsDefaultCommonName)

View File

@@ -139,7 +139,7 @@ func (m *mockedModel) UsageReportingStats(version int, preview bool) map[string]
return nil
}
func (m *mockedModel) PullErrors(folder string) ([]model.FileError, error) {
func (m *mockedModel) FolderErrors(folder string) ([]model.FileError, error) {
return nil, nil
}

View File

@@ -17,6 +17,7 @@ import (
"runtime"
"sort"
"strings"
"sync"
"time"
"github.com/syncthing/syncthing/lib/config"
@@ -326,6 +327,8 @@ type usageReportingService struct {
connectionsService *connections.Service
forceRun chan struct{}
stop chan struct{}
stopped chan struct{}
stopMut sync.RWMutex
}
func newUsageReportingService(cfg *config.Wrapper, model *model.Model, connectionsService *connections.Service) *usageReportingService {
@@ -335,7 +338,9 @@ func newUsageReportingService(cfg *config.Wrapper, model *model.Model, connectio
connectionsService: connectionsService,
forceRun: make(chan struct{}),
stop: make(chan struct{}),
stopped: make(chan struct{}),
}
close(svc.stopped) // Not yet running, don't block on Stop()
cfg.Subscribe(svc)
return svc
}
@@ -359,8 +364,16 @@ func (s *usageReportingService) sendUsageReport() error {
}
func (s *usageReportingService) Serve() {
s.stopMut.Lock()
s.stop = make(chan struct{})
s.stopped = make(chan struct{})
s.stopMut.Unlock()
t := time.NewTimer(time.Duration(s.cfg.Options().URInitialDelayS) * time.Second)
s.stopMut.RLock()
defer func() {
close(s.stopped)
s.stopMut.RUnlock()
}()
for {
select {
case <-s.stop:
@@ -387,14 +400,21 @@ func (s *usageReportingService) VerifyConfiguration(from, to config.Configuratio
func (s *usageReportingService) CommitConfiguration(from, to config.Configuration) bool {
if from.Options.URAccepted != to.Options.URAccepted || from.Options.URUniqueID != to.Options.URUniqueID || from.Options.URURL != to.Options.URURL {
s.forceRun <- struct{}{}
s.stopMut.RLock()
select {
case s.forceRun <- struct{}{}:
case <-s.stop:
}
s.stopMut.RUnlock()
}
return true
}
func (s *usageReportingService) Stop() {
s.stopMut.RLock()
close(s.stop)
close(s.forceRun)
<-s.stopped
s.stopMut.RUnlock()
}
func (usageReportingService) String() string {

View File

@@ -105,7 +105,7 @@ func (s *verboseService) formatEvent(ev events.Event) string {
return fmt.Sprintf("Device %v sent an index update for %q with %d items", data["device"], data["folder"], data["items"])
case events.DeviceRejected:
data := ev.Data.(map[string]interface{})
data := ev.Data.(map[string]string)
return fmt.Sprintf("Rejected connection from device %v at %v", data["device"], data["address"])
case events.FolderRejected:

View File

@@ -17,7 +17,7 @@ found in the LICENSE file.
<link href="static/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script type="text/javascript" src="static/bootstrap/js/bootstrap.min.js"></script>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?libraries=visualization"></script>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?libraries=visualization&key=AIzaSyDk5WJ8s7ueLKb99X5DbQ-vkWtPDAKqYs0"></script>
<style type="text/css">
body {
margin: 40px;

View File

@@ -0,0 +1,12 @@
# Desktop Entries
This directory contains files to integrate Syncthing into your desktop environment (DE).
Specifically, this works for DEs that implement the [XDG Desktop Menu Specification][1], which
is virtually every DE.
To add Syncthing to desktop menus for all users, copy the `.desktop` files to
`/usr/local/share/applications` (root required). To add it for just your user, copy them to `~/.local/share/applications`.
To start Syncthing automatically, you have two options: either go to the autostart settings of your DE and choose Syncthing, or copy the `syncthing-start.desktop` file to `~/.config/autostart`.
For more information, refer to the [ArchWiki page on Desktop entries][2].
[1]: https://specifications.freedesktop.org/menu-spec/menu-spec-latest.html
[2]: https://wiki.archlinux.org/index.php/Desktop_entries

View File

@@ -0,0 +1,9 @@
[Desktop Entry]
Name=Start Syncthing
GenericName=File synchronization
Comment=Starts the main syncthing process in the background.
Exec=/usr/bin/syncthing -no-browser
Icon=syncthing
Terminal=false
Type=Application
Categories=Network;FileTransfer;P2P

View File

@@ -0,0 +1,9 @@
[Desktop Entry]
Name=Syncthing Web UI
GenericName=File synchronization UI
Comment="Opens Syncthing's Web UI in the default browser (Syncthing must already be started)."
Exec=/usr/bin/syncthing -browser-only
Icon=syncthing
Terminal=false
Type=Application
Categories=Network;FileTransfer;P2P

View File

@@ -110,7 +110,7 @@
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с добавяне на дата и час.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с набавяне на дата и час.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Защитава файловете от промени направени на други устройства, но промените направени на това устройство ще бъдат синхронизирани с останалите устройства.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Промените направени на на други устройства ще бъдат прилагани локално, но локалните промени няма да бъдат синхронизирани с останалите устройства.\n",
"Filesystem Notifications": "Известия на системата",
"Filesystem Watcher Errors": "Filesystem Watcher Errors",
"Filter by date": "Филтриране по дата",
@@ -252,7 +252,7 @@
"Select oldest version": "Избор на най-старата версия",
"Select the devices to share this folder with.": "Изберете устройствата, с които да споделите папката.",
"Select the folders to share with this device.": "Изберете папките за споделяне с това устройство.",
"Send & Receive": "Изпращане & получаване",
"Send & Receive": "Изпращане и получаване",
"Send Only": "Само изпращане",
"Settings": "Настройки",
"Share": "Сподели",

View File

@@ -110,7 +110,7 @@
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronised from the cluster, but any changes made locally will not be sent to other devices.",
"Filesystem Notifications": "Filesystem Notifications",
"Filesystem Watcher Errors": "Filesystem Watcher Errors",
"Filter by date": "Filter by date",

View File

@@ -12,7 +12,7 @@
"Add Remote Device": "Añadir un dispositivo",
"Add devices from the introducer to our device list, for mutually shared folders.": "Añadir dispositivos desde el introductor a nuestra lista de dispositivos, para las carpetas compartidas mutuamente.",
"Add new folder?": "¿Agregar una carpeta nueva?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Además, el intervalo de reexploración completo se incrementará (60 veces, es decir, nuevo valor predeterminado de 1h). También puedes configurarlo manualmente para cada carpeta más adelante después de seleccionar No",
"Address": "Dirección",
"Addresses": "Direcciones",
"Advanced": "Avanzado",
@@ -23,7 +23,7 @@
"Allowed Networks": "Redes permitidas",
"Alphabetic": "Alfabético",
"An external command handles the versioning. It has to remove the file from the shared folder.": "Un comando externo gestiona las versiones. Tiene que eliminar el fichero de la carpeta compartida.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Un comando externo maneja las versiones. Tienes que eliminar el archivo de la carpeta compartida. Si la ruta a la aplicación contiene espacios, ésta debe estar entre comillas.",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Un comando externo controla la versión. El fichero debe ser eliminado de la carpeta sincronizada.",
"Anonymous Usage Reporting": "Informe anónimo de uso",
"Anonymous usage report format has changed. Would you like to move to the new format?": "El formato del informe de uso anónimo a cambiado. ¿Desearía usar el nuevo formato?",
@@ -35,7 +35,7 @@
"Automatic upgrade now offers the choice between stable releases and release candidates.": "Ahora la actualización automática permite elegir entre versiones estables o versiones candidatas.",
"Automatic upgrades": "Actualizaciones automáticas",
"Automatically create or share folders that this device advertises at the default path.": "Crear o compartir automáticamente carpetas que este dispositivo anuncia en la ruta por defecto.",
"Available debug logging facilities:": "Available debug logging facilities:",
"Available debug logging facilities:": "Funciones de registro de depuración disponibles:",
"Be careful!": "¡Ten cuidado!",
"Bugs": "Errores",
"CPU Utilization": "Uso de CPU",
@@ -50,7 +50,7 @@
"Connection Error": "Error de conexión",
"Connection Type": "Tipo de conexión",
"Connections": "Conexiones",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Ahora está disponible en Syncthing la búsqueda continua de cambios. Se detectarán los cambios en disco y se hará un escaneado sólo en las rutas modificadas. Los beneficios son que los cambios se propagan más rápido y que se requieren menos escaneos completos.",
"Copied from elsewhere": "Copiado de otro sitio",
"Copied from original": "Copiado del original",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 los siguientes Colaboradores:",
@@ -65,21 +65,21 @@
"Device ID": "ID del Dispositivo",
"Device Identification": "Identificación del Dispositivo",
"Device Name": "Nombre del Dispositivo",
"Device rate limits": "Device rate limits",
"Device rate limits": "Límites de velocidad del dispositivo",
"Device that last modified the item": "Dispositivo que modificó por última vez el ítem",
"Devices": "Dispositivos",
"Disabled": "Deshabilitado",
"Disabled periodic scanning and disabled watching for changes": "Disabled periodic scanning and disabled watching for changes",
"Disabled periodic scanning and enabled watching for changes": "Disabled periodic scanning and enabled watching for changes",
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:",
"Discard": "Discard",
"Disabled periodic scanning and disabled watching for changes": "Se desactivó el escaneo periódico y se desactivó el control de cambios",
"Disabled periodic scanning and enabled watching for changes": "Se desactivó el escaneo periódico y se activó el control de cambios",
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Se desactivó el escaneo periódico y falló la configuración para detectar cambios, volviendo a intentarlo cada 1 m:",
"Discard": "Descartar",
"Disconnected": "Desconectado",
"Discovered": "Descubierto",
"Discovery": "Descubrimiento",
"Discovery Failures": "Fallos de Descubrimiento",
"Do not restore": "No restaurar",
"Do not restore all": "No restaurar todos",
"Do you want to enable watching for changes for all your folders?": "Do you want to enable watching for changes for all your folders?",
"Do you want to enable watching for changes for all your folders?": "¿Deseas activar el control de cambios en todas tus carpetas?",
"Documentation": "Documentación",
"Download Rate": "Velocidad de descarga",
"Downloaded": "Descargado",
@@ -91,7 +91,7 @@
"Editing {%path%}.": "Editando {{path}}.",
"Enable NAT traversal": "Permitir NAT transversal",
"Enable Relaying": "Habilitar Retransmisión",
"Enabled": "Enabled",
"Enabled": "Activado",
"Enter a non-negative number (e.g., \"2.35\") and select a unit. Percentages are as part of the total disk size.": "Introduce un número no negativo (por ejemplo, \"2.35\") y selecciona una unidad. Los porcentajes son como parte del tamaño total del disco.",
"Enter a non-privileged port number (1024 - 65535).": "Introduce un puerto sin privilegios (1024 - 65535).",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduzca las direcciones, separadas por comas (\"tcp://ip:port\", \"tcp://host:port\"), o \"dynamic\" para llevar a cabo el descubrimiento automático de la dirección.",
@@ -99,8 +99,8 @@
"Error": "Error",
"External File Versioning": "Versionado externo de fichero",
"Failed Items": "Elementos fallidos",
"Failed to load ignore patterns": "Failed to load ignore patterns",
"Failed to setup, retrying": "Failed to setup, retrying",
"Failed to load ignore patterns": "No se cargaron los patrones de ignorar",
"Failed to setup, retrying": "Fallo en la configuración, reintentando",
"Failure to connect to IPv6 servers is expected if there is no IPv6 connectivity.": "Se espera un fallo al conectar a los servidores IPv6 si no hay conectividad IPv6.",
"File Pull Order": "Orden de obtención de los ficheros",
"File Versioning": "Versionado de ficheros",
@@ -110,9 +110,9 @@
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Los ficheros son movidos a una carpeta .stversions a versiones con control de fecha cuando son reemplazados o borrados por Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Los ficheros son cambiados a versiones con indicación de fecha en una carpeta \".stversions\" cuando son reemplazados o borrados por Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Los ficheros son protegidos por los cambios hechos en otros dispositivos, pero los cambios hechos en este dispositivo serán enviados al resto del grupo (cluster).",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Los archivos se sincronizan desde el clúster, pero los cambios realizados localmente no se enviarán a otros dispositivos.",
"Filesystem Notifications": "Notificaciones del sistema de archivos",
"Filesystem Watcher Errors": "Filesystem Watcher Errors",
"Filesystem Watcher Errors": "Errores del Vigilante del Sistema de Archivos",
"Filter by date": "Filtrar por fecha",
"Filter by name": "Filtrar por nombre",
"Folder": "Carpeta",
@@ -121,8 +121,8 @@
"Folder Path": "Ruta de la carpeta",
"Folder Type": "Tipo de carpeta",
"Folders": "Carpetas",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.",
"Full Rescan Interval (s)": "Full Rescan Interval (s)",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "En las siguientes carpetas se ha producido un error al empezar a buscar cambios. Se volverá a intentar cada minuto, por lo que los errores podrían solucionarse pronto. Si persisten, trata de arreglar el problema subyacente y pide ayuda si no puedes.",
"Full Rescan Interval (s)": "Intervalo de rescaneo completo (s)",
"GUI": "GUI",
"GUI Authentication Password": "Password de la Interfaz Gráfica de Usuario (GUI)",
"GUI Authentication User": "Autentificación de usuario de la Interfaz Gráfica de Usuario (GUI)",
@@ -140,9 +140,9 @@
"Ignore": "Ignorar",
"Ignore Patterns": "Patrones a ignorar",
"Ignore Permissions": "Permisos a ignorar",
"Ignored Devices": "Ignored Devices",
"Ignored Folders": "Ignored Folders",
"Ignored at": "Ignored at",
"Ignored Devices": "Dispositivos ignorados",
"Ignored Folders": "Carpetas ignoradas",
"Ignored at": "Ignorados en",
"Incoming Rate Limit (KiB/s)": "Límite de descarga (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Una configuración incorrecta puede dañar los contenidos de la carpeta y hacer que Syncthing no funcione.",
"Introduced By": "Introducido por",
@@ -164,7 +164,7 @@
"Local State (Total)": "Estado Local (Total)",
"Log": "Registro",
"Log tailing paused. Click here to continue.": "Seguimiento del registro pausado. Haga clic aquí para continuar.",
"Log tailing paused. Scroll to bottom continue.": "Log tailing paused. Scroll to bottom continue.",
"Log tailing paused. Scroll to bottom continue.": "Registro de cola en pausa. Mueve el cursor hasta la parte inferior para continuar.",
"Logs": "Registros",
"Major Upgrade": "Actualización importante",
"Mass actions": "Acción masiva",
@@ -203,11 +203,11 @@
"Pause": "Pausar",
"Pause All": "Pausar todo",
"Paused": "Pausado",
"Pending changes": "Pending changes",
"Periodic scanning at given interval and disabled watching for changes": "Periodic scanning at given interval and disabled watching for changes",
"Periodic scanning at given interval and enabled watching for changes": "Periodic scanning at given interval and enabled watching for changes",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:",
"Permissions": "Permissions",
"Pending changes": "Cambios pendientes",
"Periodic scanning at given interval and disabled watching for changes": "Escaneando periódicamente a un intervalo dado y detección de cambios desactivada",
"Periodic scanning at given interval and enabled watching for changes": "Escaneando periódicamente a un intervalo dado y detección de cambios activada",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Escaneando periódicamente a un intervalo dado y falló la configuración para detectar cambios, volviendo a intentarlo cada 1 m:",
"Permissions": "Permisos",
"Please consult the release notes before performing a major upgrade.": "Por favor, consultar las notas de la versión antes de realizar una actualización importante.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Por favor, introduzca un Usuario y Contraseña para la Autenticación de la Interfaz de Usuario en el panel de Ajustes.",
"Please wait": "Por favor, espere",
@@ -218,7 +218,7 @@
"Quick guide to supported patterns": "Guía rápida de patrones soportados",
"RAM Utilization": "Uso de RAM",
"Random": "Aleatorio",
"Receive Only": "Receive Only",
"Receive Only": "Solo Recibir",
"Recent Changes": "Cambios recientes",
"Reduced by ignore patterns": "Reducido por patrones de ignorar",
"Release Notes": "Notas de la versión",
@@ -231,7 +231,7 @@
"Rescan": "Volver a analizar",
"Rescan All": "Volver a analizar Todo",
"Rescan Interval": "Intervalo de análisis",
"Rescans": "Rescans",
"Rescans": "Reescaneos",
"Restart": "Reiniciar",
"Restart Needed": "Reinicio necesario",
"Restarting": "Reiniciando",
@@ -240,8 +240,8 @@
"Resume": "Continuar",
"Resume All": "Continuar todo",
"Reused": "Reutilizado",
"Revert Local Changes": "Revert Local Changes",
"Running": "Running",
"Revert Local Changes": "Revertir Cambios Locales",
"Running": "Ejecutando",
"Save": "Guardar",
"Scan Time Remaining": "Tiempo Restante de Escaneo",
"Scanning": "Analizando",
@@ -261,7 +261,7 @@
"Share With Devices": "Compartir con dispositivos",
"Share this folder?": "¿Deseas compartir esta carpeta?",
"Shared With": "Compartir con",
"Sharing": "Sharing",
"Sharing": "Compartiendo",
"Show ID": "Mostrar ID",
"Show QR": "Mostrar QR",
"Show diff with previous version": "Mostrar la diferencia con la versión anterior",
@@ -283,7 +283,7 @@
"Statistics": "Estadísticas",
"Stopped": "Detenido",
"Support": "Forum",
"Support Bundle": "Support Bundle",
"Support Bundle": "Paquete de Soporte",
"Sync Protocol Listen Addresses": "Direcciones de escucha del protocolo de sincronización",
"Syncing": "Sincronizando",
"Syncthing has been shut down.": "Syncthing se ha detenido.",
@@ -292,8 +292,8 @@
"Syncthing is upgrading.": "Syncthing se está actualizando.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing parece no estar activo o hay un problema con tu conexión de internet. Reintentando...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing tiene problemas para procesar tu solicitud. Por favor, actualiza la página o reinicia Syncthing si el problema persiste.",
"Take me back": "Take me back",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.",
"Take me back": "Llévame de vuelta",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "La dirección de la Interfaz Gráfica de Ususario (GUI) está sobreescrita por las opciones de inicio. Los cambios aquí no tendrán efecto mientras la sobreescritura esté activa.",
"The Syncthing admin interface is configured to allow remote access without a password.": "El panel de administración de Syncthing está configurado para permitir el acceso remoto sin contraseña.",
"The aggregated statistics are publicly available at the URL below.": "Las estadísticas agragadas están disponibles públicamente en la URL de abajo.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuración ha sido grabada pero no activada. Syncthing debe reiniciarse para activar la nueva configuración.",
@@ -329,7 +329,7 @@
"Unavailable": "No disponible",
"Unavailable/Disabled by administrator or maintainer": "No disponible/Deshabilitado por el administrador o mantenedor",
"Undecided (will prompt)": "No decidido (se preguntará)",
"Unignore": "Unignore",
"Unignore": "Dejar de ignorar",
"Unknown": "Desconocido",
"Unshared": "No compartido",
"Unused": "No usado",
@@ -350,18 +350,18 @@
"Warning, this path is a parent directory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "'Peligro! Esta ruta es un subdirectorio de la carpeta ya existente \"{{otherFolderLabel}}\" ({{otherFolder}}).",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Peligro! Esta ruta es un subdirectorio de una carpeta ya existente llamada \"{{otherFolder}}\".",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Peligro, esta ruta es un subdirectorio de una carpeta ya existente llamada \"{{otherFolderLabel}}\" ({{otherFolder}}).",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Warning: If you are using an external watcher like {{syncthingInotify}}, you should make sure it is deactivated.",
"Watch for Changes": "Watch for Changes",
"Watching for Changes": "Watching for Changes",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Advertencia: Si estás utilizando un observador externo como {{syncthingInotify}}, debes asegurarte de que está desactivado.",
"Watch for Changes": "Vigila los cambios",
"Watching for Changes": "Vigilando los cambios",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Cuando añada un nuevo dispositivo, tenga en cuenta que este debe añadirse también en el otro lado.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Cuando añada una nueva carpeta, tenga en cuenta que su ID se usa para unir carpetas entre dispositivos. Son sensibles a las mayúsculas y deben coincidir exactamente entre todos los dispositivos.",
"Yes": "Si",
"You can also select one of these nearby devices:": "También puede seleccionar uno de estos dispositivos cercanos:",
"You can change your choice at any time in the Settings dialog.": "Puedes cambiar tu elección en cualquier momento en el panel de Ajustes.",
"You can read more about the two release channels at the link below.": "Puedes leer más sobre los dos método de publicación de versiones en el siguiente enlace.",
"You have no ignored devices.": "You have no ignored devices.",
"You have no ignored folders.": "You have no ignored folders.",
"You have unsaved changes. Do you really want to discard them?": "You have unsaved changes. Do you really want to discard them?",
"You have no ignored devices.": "No tienes dispositivos ignorados",
"You have no ignored folders.": "No tienes carpetas ignoradas",
"You have unsaved changes. Do you really want to discard them?": "Tienes cambios sin guardar. ¿Quieres descartarlos realmente?",
"You must keep at least one version.": "Debes mantener al menos una versión.",
"days": "días",
"directories": "directorios",

View File

@@ -283,7 +283,7 @@
"Statistics": "Statistiken",
"Stopped": "Stoppe",
"Support": "Help (Forum)",
"Support Bundle": "Support Bundle",
"Support Bundle": "Helpbundel",
"Sync Protocol Listen Addresses": "Sync-protokolharkadressen",
"Syncing": "Oan it Syncen",
"Syncthing has been shut down.": "Syncthing is útsetten",
@@ -293,7 +293,7 @@
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "It liket dêrop dat Syncthing op dit stuit net rint, of der is in swierrichheid mei jo ynternetferbining. Wurd no opnij besocht...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "It liket dêrop dat Syncthing swierrichheden ûnderfynt mei it ferwurkjen fan jo fersyk. Graach de stee ferfarskje of Syncthing werstarte as it probleem der bliuwt.",
"Take me back": "Bring my werom",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "It ynterfaasje-adres waard oerskreaun troch opstart-opsjes. Feroarings wurde hjir net ynstelt wylst dizze oerskriuw-ynstelling aktyf is.",
"The Syncthing admin interface is configured to allow remote access without a password.": "De Syncthing haadbrûker-ynterfaasje is sa ynstelt dat tagong fan ôfstân sûnder wachtwurd tastean is.",
"The aggregated statistics are publicly available at the URL below.": "De fersammele statistiken binnen yn it publyk beskikber fia ûndersteande keppeling.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "De konfiguraasje is bewarre mar noch net aktivearre. Syncthing moat werstarte om de nije konfiguraasje te aktivearren.",

View File

@@ -201,7 +201,7 @@
"Path where versions should be stored (leave empty for the default .stversions directory in the shared folder).": "Caminho do diretório onde as versões são salvas (deixe em branco para que seja o diretório padrão .stversions dentro da pasta compartilhada). ",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "O caminho onde as versões serão salvas (deixe vazio para usar a pasta padrão .stversions dentro desta pasta).",
"Pause": "Pausar",
"Pause All": "Pausar Todas",
"Pause All": "Pausar todas",
"Paused": "Em pausa",
"Pending changes": "Pending changes",
"Periodic scanning at given interval and disabled watching for changes": "Verificação periódica habilitada no intervalo escolhido. Verificação automática de mudanças desabilitada",

View File

@@ -293,7 +293,7 @@
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing verkar avstängd eller så är det problem med din Internetanslutning. Försöker igen...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing verkar ha drabbats av ett problem med behandlingen av din begäran. Uppdatera sidan eller starta om Syncthing om problemet kvarstår.",
"Take me back": "Ta mig tillbaka",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.",
"The GUI address is overridden by startup options. Changes here will not take effect while the override is in place.": "Det grafiska gränssnittets adressen åsidosätts av startalternativ. Ändringar här träder inte i kraft så länge åsidosättandet är på plats.",
"The Syncthing admin interface is configured to allow remote access without a password.": "Syncthing administratör gränssnittet är konfigurerat för att tillåta fjärrtillträde utan ett lösenord.",
"The aggregated statistics are publicly available at the URL below.": "Den aggregerade statistiken är offentligt tillgänglig på webbadressen nedan.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfigurationen har sparats men inte aktiverats. Syncthing måste startas om för att aktivera den nya konfigurationen.",

View File

@@ -12,7 +12,7 @@
"Add Remote Device": "Додати віддалений пристрій",
"Add devices from the introducer to our device list, for mutually shared folders.": "Додати пристрої від пристрою-рекомендувача до нашого списку пристроїв для спільно розділених директорій.",
"Add new folder?": "Додати нову директорію?",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.",
"Additionally the full rescan interval will be increased (times 60, i.e. new default of 1h). You can also configure it manually for every folder later after choosing No.": "Крім того, буде збільшений інтервал повного сканування (у 60 разів, тобто нове значення за замовчанням - 1 година). Ви також можете налаштувати його вручну для кожної папки пізніше після вибору \"Ні\".",
"Address": "Адреса",
"Addresses": "Адреси",
"Advanced": "Розширені",
@@ -23,7 +23,7 @@
"Allowed Networks": "Дозволені мережі",
"Alphabetic": "За алфавітом",
"An external command handles the versioning. It has to remove the file from the shared folder.": "Зовнішня команда керування версіями. Вона має видалити файл із спільної директорії.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.",
"An external command handles the versioning. It has to remove the file from the shared folder. If the path to the application contains spaces, it should be quoted.": "Зовнішня команда керування версіями. Вона має видалити файл із спільної директорії. Якщо шлях до програми містить пробіли, він буде взятий у лапки.",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Зовнішня команда керування версіями. Вона має видалити файл із директорії, що синхронізується.",
"Anonymous Usage Reporting": "Анонімна статистика використання",
"Anonymous usage report format has changed. Would you like to move to the new format?": "Змінився формат анонімного звіту про користування. Бажаєте перейти на новий формат?",
@@ -50,7 +50,7 @@
"Connection Error": "Помилка з’єднання",
"Connection Type": "Тип з*єднання",
"Connections": "З'єднання",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.",
"Continuously watching for changes is now available within Syncthing. This will detect changes on disk and issue a scan on only the modified paths. The benefits are that changes are propagated quicker and that less full scans are required.": "Постійне стеження за змінами наразі доступне у Syncthing. Це дозволить виявити зміни на диску та сканувати тільки модифіковані шляхи. Переваги полягають у тому, що зміни поширюються швидше і зменшується кількість повних пересканувань.",
"Copied from elsewhere": "Скопійовано з іншого місця",
"Copied from original": "Скопійовано з оригіналу",
"Copyright © 2014-2016 the following Contributors:": "© 2014-2016 Всі права застережено, вклад внесли:",
@@ -65,21 +65,21 @@
"Device ID": "ID пристрою",
"Device Identification": "Ідентифікатор пристрою",
"Device Name": "Назва пристрою",
"Device rate limits": "Device rate limits",
"Device rate limits": "Обмеження пристрою",
"Device that last modified the item": "Пристрій, що останнім змінив елемент",
"Devices": "Пристрої",
"Disabled": "Вимкнено",
"Disabled periodic scanning and disabled watching for changes": "Disabled periodic scanning and disabled watching for changes",
"Disabled periodic scanning and enabled watching for changes": "Disabled periodic scanning and enabled watching for changes",
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:",
"Discard": "Discard",
"Disabled periodic scanning and disabled watching for changes": "Відключено періодичне сканування та відключено відстеження змін",
"Disabled periodic scanning and enabled watching for changes": "Відключено періодичне сканування та увімкнене стеження за змінами",
"Disabled periodic scanning and failed setting up watching for changes, retrying every 1m:": "Відключено періодичне сканування та не вдається налаштувати перегляд змін, повторення кожну 1 хв:",
"Discard": "Відхилити",
"Disconnected": "З’єднання відсутнє",
"Discovered": "Виявлено",
"Discovery": "Сервери координації NAT",
"Discovery Failures": "Помилки виявлення",
"Do not restore": "Не відновлювати",
"Do not restore all": "Не відновлювати все",
"Do you want to enable watching for changes for all your folders?": "Do you want to enable watching for changes for all your folders?",
"Do you want to enable watching for changes for all your folders?": "Бажаєте увімкнути стеження за змінами у всіх ваших папках?",
"Documentation": "Документація",
"Download Rate": "Швидкість завантаження",
"Downloaded": "Завантажено",
@@ -110,9 +110,9 @@
"Files are moved to date stamped versions in a .stversions directory when replaced or deleted by Syncthing.": "Файли будуть поміщатися у директорію .stversions із відповідною позначкою часу, коли вони будуть замінятися або видалятися програмою.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Файли будуть поміщатися у директорію .stversions із відповідною позначкою часу, коли вони будуть замінятися або видалятися програмою.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Вміст папки захищено від змін, зроблених на інших пристроях, але зміни зроблені на цьому пристрої можна розіслати решті пристроїв кластеру.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Файли синхронізуються з кластера, але будь-які внесені локально зміни не надсилатимуться на інші пристрої.",
"Filesystem Notifications": "Повідомлення файлової системи",
"Filesystem Watcher Errors": "Filesystem Watcher Errors",
"Filesystem Watcher Errors": "Помилки спостерігача файлової системи",
"Filter by date": "Фільтрувати по даті",
"Filter by name": "Фільтрувати по імені",
"Folder": "Директорія",
@@ -121,7 +121,7 @@
"Folder Path": "Шлях до директорії",
"Folder Type": "Тип директорії",
"Folders": "Директорії",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.",
"For the following folders an error occurred while starting to watch for changes. It will be retried every minute, so the errors might go away soon. If they persist, try to fix the underlying issue and ask for help if you can't.": "Сталася помилка при спробі відслідковувати зміни у вищенаведених папках. Їх доступність перевірятиметься щохвилини, доки помилка не зникне. Якщо помилки не зникають, спробуйте виправити права доступу або попросіть допомоги.",
"Full Rescan Interval (s)": "Інтервал повного пересканування (секунди)",
"GUI": "Графічний інтерфейс",
"GUI Authentication Password": "Пароль для доступу до панелі управління",
@@ -140,9 +140,9 @@
"Ignore": "Ігнорувати",
"Ignore Patterns": "Шаблони винятків",
"Ignore Permissions": "Ігнорувати права доступу до файлів",
"Ignored Devices": "Ignored Devices",
"Ignored Folders": "Ignored Folders",
"Ignored at": "Ignored at",
"Ignored Devices": "Ігноровані пристрох",
"Ignored Folders": "Ігноровані папки",
"Ignored at": "Ігноруються в",
"Incoming Rate Limit (KiB/s)": "Ліміт швидкості завантаження (КіБ/с)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Невірна конфігурація може пошкодити вміст вашої директорії та зробити Syncthing недієздатним.",
"Introduced By": "Введено",
@@ -164,7 +164,7 @@
"Local State (Total)": "Локальний статус (загалом)",
"Log": "Журнал",
"Log tailing paused. Click here to continue.": "Перемотка журналу призупинена. Натиснути для продовження.",
"Log tailing paused. Scroll to bottom continue.": "Log tailing paused. Scroll to bottom continue.",
"Log tailing paused. Scroll to bottom continue.": "Висвітлення журналу призупинене. Прокрутіть нижче, щоби продовжити.",
"Logs": "Журнали",
"Major Upgrade": "Мажорне оновлення",
"Mass actions": "Масові операції",
@@ -203,7 +203,7 @@
"Pause": "Пауза",
"Pause All": "Призупинити все",
"Paused": "Призупинено",
"Pending changes": "Pending changes",
"Pending changes": "Запит на зміни поставлено в чергу",
"Periodic scanning at given interval and disabled watching for changes": "Periodic scanning at given interval and disabled watching for changes",
"Periodic scanning at given interval and enabled watching for changes": "Periodic scanning at given interval and enabled watching for changes",
"Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:": "Periodic scanning at given interval and failed setting up watching for changes, retrying every 1m:",
@@ -218,7 +218,7 @@
"Quick guide to supported patterns": "Швидкий посібник по шаблонам, що підтримуються",
"RAM Utilization": "Використання RAM",
"Random": "Випадково",
"Receive Only": "Receive Only",
"Receive Only": "Тільки отримувати",
"Recent Changes": "Останні зміни",
"Reduced by ignore patterns": "Зменшено шаблонами ігнорування",
"Release Notes": "Примітки до випуску",
@@ -231,7 +231,7 @@
"Rescan": "Пересканувати",
"Rescan All": "Пересканувати усе",
"Rescan Interval": "Інтервал для повторного сканування",
"Rescans": "Rescans",
"Rescans": "Пересканування",
"Restart": "Перезапуск",
"Restart Needed": "Необхідний перезапуск",
"Restarting": "Відбувається перезапуск",
@@ -240,7 +240,7 @@
"Resume": "Продовжити",
"Resume All": "Продовжити всі",
"Reused": "Використано вдруге",
"Revert Local Changes": "Revert Local Changes",
"Revert Local Changes": "Інвертувати локальні зміни",
"Running": "Running",
"Save": "Зберегти",
"Scan Time Remaining": "Час до кінця сканування",
@@ -351,17 +351,17 @@
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Увага, цей шлях є підпапкою директорії \"{{otherFolder}}\", що й так синхронізується .",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolderLabel%}\" ({%otherFolder%}).": "Увага, цей шлях є підпапкою директорії \"{{otherFolderLabel}}\", що й так синхронізується ({{otherFolder}}).",
"Warning: If you are using an external watcher like {%syncthingInotify%}, you should make sure it is deactivated.": "Warning: If you are using an external watcher like {{syncthingInotify}}, you should make sure it is deactivated.",
"Watch for Changes": "Watch for Changes",
"Watching for Changes": "Watching for Changes",
"Watch for Changes": "Моніторити зміни",
"Watching for Changes": "Моніторинг щмін",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Коли додаєте новий вузол, пам’ятайте, що цей вузол повинен бути доданий і на іншій стороні.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Коли додаєте нову директорію, пам’ятайте, що ID цієї директорії використовується для того, щоб зв’язувати директорії разом між пристроями. Назви повинні точно співпадати між усіма пристроями, регістр символів має значення.",
"Yes": "Так",
"You can also select one of these nearby devices:": "Ви також можете обрати один із сусідніх пристроїв:",
"You can change your choice at any time in the Settings dialog.": "Ви завжди можете змінити свій вибір у вікні Налаштувань.",
"You can read more about the two release channels at the link below.": "Ви можете прочитати більше про два канали випусків за посиланням нижче.",
"You have no ignored devices.": "You have no ignored devices.",
"You have no ignored folders.": "You have no ignored folders.",
"You have unsaved changes. Do you really want to discard them?": "You have unsaved changes. Do you really want to discard them?",
"You have no ignored devices.": "Немає ігнорованих пристроїв",
"You have no ignored folders.": "Немає ігнорованих папок",
"You have unsaved changes. Do you really want to discard them?": "Внесені зміни не збережено, чи дійсно відмовитись від змін?",
"You must keep at least one version.": "Ви повинні зберігати щонайменше одну версію.",
"days": "днів",
"directories": "директорії",

View File

@@ -65,7 +65,7 @@
"Device ID": "裝置識別碼",
"Device Identification": "裝置識別",
"Device Name": "裝置名稱",
"Device rate limits": "Device rate limits",
"Device rate limits": "裝置速率限制",
"Device that last modified the item": "前次修改裝置",
"Devices": "裝置",
"Disabled": "停用",
@@ -112,7 +112,7 @@
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "其他裝置做的改變不會影響此裝置上的檔案,但在此裝置上的變動將被發送到叢集的其他部分。",
"Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.": "Files are synchronized from the cluster, but any changes made locally will not be sent to other devices.",
"Filesystem Notifications": "檔案系統通知",
"Filesystem Watcher Errors": "Filesystem Watcher Errors",
"Filesystem Watcher Errors": "檔案系統監視器錯誤\n",
"Filter by date": "以日期篩選",
"Filter by name": "以名稱篩選",
"Folder": "資料夾",
@@ -218,7 +218,7 @@
"Quick guide to supported patterns": "可支援樣式的快速指南",
"RAM Utilization": "記憶體使用量",
"Random": "隨機",
"Receive Only": "Receive Only",
"Receive Only": "僅接收\n",
"Recent Changes": "最近變動",
"Reduced by ignore patterns": "已由忽略樣式縮減",
"Release Notes": "版本資訊",
@@ -359,8 +359,8 @@
"You can also select one of these nearby devices:": "您亦可從這些附近裝置中擇一:",
"You can change your choice at any time in the Settings dialog.": "您可以在設定對話框中隨時更改您的選擇。",
"You can read more about the two release channels at the link below.": "您可於下方連結閱讀更多關於發行頻道的說明。",
"You have no ignored devices.": "You have no ignored devices.",
"You have no ignored folders.": "You have no ignored folders.",
"You have no ignored devices.": "您沒有已忽略的裝置。\n",
"You have no ignored folders.": "您沒有已忽略的資料夾。\n",
"You have unsaved changes. Do you really want to discard them?": "You have unsaved changes. Do you really want to discard them?",
"You must keep at least one version.": "您必須保留至少一個版本。",
"days": "日",

View File

@@ -12,7 +12,7 @@
<p translate>Copyright &copy; 2014-2017 the following Contributors:</p>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, Aaron Bieber, Adam Piggott, Adel Qalieh, Alessandro G., Andrew Dunham, Andrew Rabert, Andrey D, Antoine Lamielle, Aranjedeath, Arthur Axel fREW Schmidt, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benny Ng, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Chris Tonkinson, Colin Kennedy, Dale Visser, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Dmitry Saveliev, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Graham Miln, Han Boetes, Harrison Jones, Heiko Zuerker, Iain Barnett, Ian Johnson, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jaya Chithra, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonathan Cross, Jose Manuel Delicado, Karol Różycki, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Kurt Fitzner, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mark Pulford, Mateusz Naściszewski, Matic Potočnik, Matt Burke, Matteo Ruina, Max Schulze, MaximAL, Maxime Thirouin, Michael Jephcote, Michael Tilli, Mike Boone, MikeLund, Nicholas Rishel, Nicolas Braud-Santoni, Niels Peter Roest, Nils Jakobi, NoLooseEnds, Oyebanji Jacob Mayowa, Pascal Jungblut, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Richard Hartmann, Robert Carosi, Roman Zaynetdinov, Ross Smith II, Sacheendra Talluri, Scott Klupfel, Sly_tom_cat, Stefan Kuntz, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Nygren, Tobias Tom, Tomas Cerveny, Tommy Thorn, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, Vladimir Rusinov, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, chucic, derekriemer, janost, jaseg, klemens, marco-m, perewa, rubenbe, wangguoliang, xjtdy888, 佛跳墙
Jakob Borg, Audrius Butkevicius, Simon Frei, Alexander Graf, Alexandre Viau, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Nate Morrison, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Wulf Weich, Aaron Bieber, Adam Piggott, Adel Qalieh, Alessandro G., Andrew Dunham, Andrew Rabert, Andrey D, Antoine Lamielle, Aranjedeath, Arthur Axel fREW Schmidt, BAHADIR YILMAZ, Bart De Vries, Ben Curthoys, Ben Shepherd, Ben Sidhom, Benedikt Heine, Benedikt Morbach, Benno Fünfstück, Benny Ng, Boris Rybalkin, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Chris Tonkinson, Colin Kennedy, Dale Visser, Daniel Bergmann, Daniel Martí, Darshil Chanpura, David Rimmer, Denis A., Dennis Wilson, Dmitry Saveliev, Dominik Heidler, Elias Jarlebring, Elliot Huffman, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Graham Miln, Han Boetes, Harrison Jones, Heiko Zuerker, Iain Barnett, Ian Johnson, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jaya Chithra, Jens Diemer, Jerry Jacobs, Jochen Voss, Johan Andersson, Johan Vromans, John Rinehart, Jonathan Cross, Jose Manuel Delicado, Karol Różycki, Keith Turner, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Kurt Fitzner, Laurent Arnoud, Laurent Etiemble, Leo Arias, Liu Siyuan, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mark Pulford, Mateusz Naściszewski, Matic Potočnik, Matt Burke, Matteo Ruina, Max Schulze, MaximAL, Maxime Thirouin, Michael Jephcote, Michael Tilli, Mike Boone, MikeLund, Nicholas Rishel, Nico Stapelbroek, Nicolas Braud-Santoni, Niels Peter Roest, Nils Jakobi, NoLooseEnds, Oyebanji Jacob Mayowa, Pascal Jungblut, Pawel Palenica, Paweł Rozlach, Peter Badida, Peter Dave Hello, Peter Hoeg, Peter Marquardt, Phil Davis, Phill Luby, Pier Paolo Ramon, Piotr Bejda, Pramodh KP, Richard Hartmann, Robert Carosi, Roman Zaynetdinov, Ross Smith II, Sacheendra Talluri, Scott Klupfel, Sly_tom_cat, Stefan Kuntz, Suhas Gundimeda, Taylor Khan, Thomas Hipp, Tim Abell, Tim Howes, Tobias Nygren, Tobias Tom, Tomas Cerveny, Tommy Thorn, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, Vladimir Rusinov, William A. Kennington III, Xavier O., Yannic A., andresvia, andyleap, chucic, derekriemer, janost, jaseg, klemens, marco-m, perewa, rubenbe, wangguoliang, xjtdy888, 佛跳墙
</div>
</div>
<hr/>

View File

@@ -1371,6 +1371,20 @@ angular.module('syncthing.core')
$('#editDevice').modal();
};
$scope.selectAllFolders = function() {
Object.entries($scope.folders).forEach(entry =>{
let id = entry[1].id;
$scope.currentDevice.selectedFolders[id] = true;
});
};
$scope.deSelectAllFolders = function() {
Object.entries($scope.folders).forEach(entry =>{
let id = entry[1].id;
$scope.currentDevice.selectedFolders[id] = false;
});
};
$scope.addDevice = function (deviceID, name) {
return $http.get(urlbase + '/system/discovery')
.success(function (registry) {
@@ -1694,6 +1708,20 @@ angular.module('syncthing.core')
$scope.editFolderModal();
};
$scope.selectAllDevices = function() {
var devices = $scope.otherDevices();
for (var i = 0; i < devices.length; i++){
$scope.currentFolder.selectedDevices[devices[i].deviceID] = true;
}
};
$scope.deSelectAllDevices = function() {
var devices = $scope.otherDevices();
for (var i = 0; i < devices.length; i++){
$scope.currentFolder.selectedDevices[devices[i].deviceID] = false;
}
};
$scope.addFolder = function () {
$http.get(urlbase + '/svc/random/string?length=10').success(function (data) {
$scope.editingExisting = false;
@@ -1968,114 +1996,116 @@ angular.module('syncthing.core')
});
$q.all([dataReceived, modalShown.promise]).then(function() {
if (closed) {
resetRestoreVersions();
return;
}
$timeout(function(){
if (closed) {
resetRestoreVersions();
return;
}
$scope.restoreVersions.tree = $("#restoreTree").fancytree({
extensions: ["table", "filter"],
quicksearch: true,
filter: {
autoApply: true,
counter: true,
hideExpandedCounter: true,
hideExpanders: true,
highlight: true,
leavesOnly: false,
nodata: true,
mode: "hide"
},
table: {
indentation: 20,
nodeColumnIdx: 0,
},
debugLevel: 2,
source: buildTree($scope.restoreVersions.versions),
renderColumns: function(event, data) {
var node = data.node,
$tdList = $(node.tr).find(">td"),
template;
if (node.folder) {
template = '<div ng-include="\'syncthing/folder/restoreVersionsMassActions.html\'" class="pull-right"/>';
} else {
template = '<div ng-include="\'syncthing/folder/restoreVersionsVersionSelector.html\'" class="pull-right"/>';
$scope.restoreVersions.tree = $("#restoreTree").fancytree({
extensions: ["table", "filter"],
quicksearch: true,
filter: {
autoApply: true,
counter: true,
hideExpandedCounter: true,
hideExpanders: true,
highlight: true,
leavesOnly: false,
nodata: true,
mode: "hide"
},
table: {
indentation: 20,
nodeColumnIdx: 0,
},
debugLevel: 2,
source: buildTree($scope.restoreVersions.versions),
renderColumns: function(event, data) {
var node = data.node,
$tdList = $(node.tr).find(">td"),
template;
if (node.folder) {
template = '<div ng-include="\'syncthing/folder/restoreVersionsMassActions.html\'" class="pull-right"/>';
} else {
template = '<div ng-include="\'syncthing/folder/restoreVersionsVersionSelector.html\'" class="pull-right"/>';
}
var scope = $rootScope.$new(true);
scope.key = node.key;
scope.restoreVersions = $scope.restoreVersions;
$tdList.eq(1).html(
$compile(template)(scope)
);
// Force angular to redraw.
$timeout(function() {
$scope.$apply();
});
}
}).fancytree("getTree");
var scope = $rootScope.$new(true);
scope.key = node.key;
scope.restoreVersions = $scope.restoreVersions;
var minDate = moment(),
maxDate = moment(0, 'X'),
date;
$tdList.eq(1).html(
$compile(template)(scope)
);
// Find version window.
$.each($scope.restoreVersions.versions, function(key) {
$.each($scope.restoreVersions.versions[key], function(idx, version) {
date = moment(version.versionTime);
if (date.isBefore(minDate)) {
minDate = date;
}
if (date.isAfter(maxDate)) {
maxDate = date;
}
});
});
// Force angular to redraw.
$scope.restoreVersions.filters['start'] = minDate;
$scope.restoreVersions.filters['end'] = maxDate;
var ranges = {
'All time': [minDate, maxDate],
'Today': [moment(), moment()],
'Yesterday': [moment().subtract(1, 'days'), moment().subtract(1, 'days')],
'Last 7 Days': [moment().subtract(6, 'days'), moment()],
'Last 30 Days': [moment().subtract(29, 'days'), moment()],
'This Month': [moment().startOf('month'), moment().endOf('month')],
'Last Month': [moment().subtract(1, 'month').startOf('month'), moment().subtract(1, 'month').endOf('month')]
};
// Filter out invalid ranges.
$.each(ranges, function(key, range) {
if (!range[0].isBetween(minDate, maxDate, null, '[]') && !range[1].isBetween(minDate, maxDate, null, '[]')) {
delete ranges[key];
}
});
$("#restoreVersionDateRange").daterangepicker({
timePicker: true,
timePicker24Hour: true,
timePickerSeconds: true,
autoUpdateInput: true,
opens: "left",
drops: "up",
startDate: minDate,
endDate: maxDate,
minDate: minDate,
maxDate: maxDate,
ranges: ranges,
locale: {
format: 'YYYY/MM/DD HH:mm:ss',
}
}).on('apply.daterangepicker', function(ev, picker) {
$scope.restoreVersions.filters['start'] = picker.startDate;
$scope.restoreVersions.filters['end'] = picker.endDate;
// Events for this UI element are not managed by angular.
// Force angular to wake up.
$timeout(function() {
$scope.$apply();
});
}
}).fancytree("getTree");
var minDate = moment(),
maxDate = moment(0, 'X'),
date;
// Find version window.
$.each($scope.restoreVersions.versions, function(key) {
$.each($scope.restoreVersions.versions[key], function(idx, version) {
date = moment(version.versionTime);
if (date.isBefore(minDate)) {
minDate = date;
}
if (date.isAfter(maxDate)) {
maxDate = date;
}
});
});
$scope.restoreVersions.filters['start'] = minDate;
$scope.restoreVersions.filters['end'] = maxDate;
var ranges = {
'All time': [minDate, maxDate],
'Today': [moment(), moment()],
'Yesterday': [moment().subtract(1, 'days'), moment().subtract(1, 'days')],
'Last 7 Days': [moment().subtract(6, 'days'), moment()],
'Last 30 Days': [moment().subtract(29, 'days'), moment()],
'This Month': [moment().startOf('month'), moment().endOf('month')],
'Last Month': [moment().subtract(1, 'month').startOf('month'), moment().subtract(1, 'month').endOf('month')]
};
// Filter out invalid ranges.
$.each(ranges, function(key, range) {
if (!range[0].isBetween(minDate, maxDate, null, '[]') && !range[1].isBetween(minDate, maxDate, null, '[]')) {
delete ranges[key];
}
});
$("#restoreVersionDateRange").daterangepicker({
timePicker: true,
timePicker24Hour: true,
timePickerSeconds: true,
autoUpdateInput: true,
opens: "left",
drops: "up",
startDate: minDate,
endDate: maxDate,
minDate: minDate,
maxDate: maxDate,
ranges: ranges,
locale: {
format: 'YYYY/MM/DD HH:mm:ss',
}
}).on('apply.daterangepicker', function(ev, picker) {
$scope.restoreVersions.filters['start'] = picker.startDate;
$scope.restoreVersions.filters['end'] = picker.endDate;
// Events for this UI element are not managed by angular.
// Force angular to wake up.
$timeout(function() {
$scope.$apply();
});
});
});

View File

@@ -38,7 +38,7 @@
<p translate ng-if="currentDevice.deviceID != myID" class="help-block">Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.</p>
</div>
</div>
<div class="tab-pane" id="device-sharing">
<div id="device-sharing" class="tab-pane">
<div class="row">
<div class="col-md-6">
<div class="form-group">
@@ -67,7 +67,11 @@
<div class="col-md-12">
<div class="form-group">
<label translate for="folders">Share Folders With Device</label>
<p translate class="help-block">Select the folders to share with this device.</p>
<p class="help-block">
<span translate>Select the folders to share with this device.</span>&emsp;
<small><a href="#" ng-click="selectAllFolders()" translate>Select All</a>&emsp;
<a href="#" ng-click="deSelectAllFolders()" translate>Deselect All</a></small>
</p>
<div class="row">
<div class="col-md-4" ng-repeat="folder in folderList()">
<div class="checkbox">
@@ -84,7 +88,7 @@
</div>
</div>
</div>
<div class="tab-pane" id="device-advanced">
<div id="device-advanced" class="tab-pane">
<div class="row form-group">
<div class="col-md-6">
<div class="form-group">

View File

@@ -3,6 +3,7 @@
<form role="form" name="folderEditor">
<ul class="nav nav-tabs" ng-init="loadFormIntoScope(folderEditor)">
<li class="active"><a data-toggle="tab" href="#folder-general"><span class="fas fa-cog"></span> <span translate>General</span></a></li>
<li><a data-toggle="tab" href="#folder-sharing"><span class="fas fa-share-alt"></span> <span translate>Sharing</span></a></li>
<li><a data-toggle="tab" href="#folder-versioning"><span class="fas fa-copy"></span> <span translate>File Versioning</span></a></li>
<li><a data-toggle="tab" href="#folder-ignores"><span class="fas fa-filter"></span> <span translate>Ignore Patterns</span></a></li>
<li><a data-toggle="tab" href="#folder-advanced"><span class="fas fa-cogs"></span> <span translate>Advanced</span></a></li>
@@ -23,6 +24,7 @@
<span translate ng-if="folderEditor.folderID.$valid || folderEditor.folderID.$pristine">Required identifier for the folder. Must be the same on all cluster devices.</span>
<span translate ng-if="folderEditor.folderID.$error.uniqueFolder">The folder ID must be unique.</span>
<span translate ng-if="folderEditor.folderID.$error.required && folderEditor.folderID.$dirty">The folder ID cannot be blank.</span>
<span translate ng-show="!editingExisting">When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.</span>
</p>
</div>
<div class="form-group" ng-class="{'has-error': folderEditor.folderPath.$invalid && folderEditor.folderPath.$dirty}">
@@ -40,9 +42,15 @@
<span class="text-danger" translate translate-value-other-folder="{{folderPathErrors.otherID}}" translate-value-other-folder-label="{{folderPathErrors.otherLabel}}" ng-if="folderPathErrors.isParent && folderPathErrors.otherLabel.length != 0">Warning, this path is a parent directory of an existing folder "{%otherFolderLabel%}" ({%otherFolder%}).</span>
</p>
</div>
</div>
<div id="folder-sharing" class="tab-pane">
<div class="form-group">
<label translate for="devices">Share With Devices</label>
<p translate class="help-block">Select the devices to share this folder with.</p>
<p class="help-block">
<span translate>Select the devices to share this folder with.</span>&emsp;
<small><a href="#" ng-click="selectAllDevices()" translate>Select All</a>&emsp;
<a href="#" ng-click="deSelectAllDevices()" translate>Deselect All</a></small>
</p>
<div class="row">
<div class="col-md-4" ng-repeat="device in otherDevices()">
<div class="checkbox">
@@ -53,7 +61,6 @@
</div>
</div>
</div>
<div translate ng-show="!editingExisting" class="help-block">When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.</div>
</div>
<div id="folder-versioning" class="tab-pane">
<div class="form-group">

View File

@@ -69,8 +69,8 @@
<div class="row">
<div class="col-md-6">
<div class="form-group">
<label translate for="urVersion">Anonymous Usage Reporting</label> (<a href="" translate data-toggle="modal" data-target="#urPreview">Preview</a>)
<div ng-if="tmpOptions.upgrades != 'candidate'">
<label translate for="urVersion">Anonymous Usage Reporting</label> (<a href="" translate data-toggle="modal" data-target="#urPreview">Preview</a>)
<select class="form-control" id="urVersion" ng-model="tmpOptions._urAcceptedStr">
<option ng-repeat="n in urVersions()" value="{{n}}">{{'Version' | translate}} {{n}}</option>
<!-- 1 does not exist, as we did not support incremental formats back then. -->
@@ -79,7 +79,7 @@
</select>
</div>
<p class="help-block" ng-if="tmpOptions.upgrades == 'candidate'">
<span translate>Usage reporting is always enabled for candidate releases.</span> (<a href="" translate data-toggle="modal" data-target="#urPreview">Preview</a>)
<span translate>Usage reporting is always enabled for candidate releases.</span>
</p>
</div>
</div>

View File

@@ -28,6 +28,7 @@ type DeviceConfiguration struct {
MaxRecvKbps int `xml:"maxRecvKbps" json:"maxRecvKbps"`
IgnoredFolders []ObservedFolder `xml:"ignoredFolder" json:"ignoredFolders"`
PendingFolders []ObservedFolder `xml:"pendingFolder" json:"pendingFolders"`
MaxRequestKiB int `xml:"maxRequestKiB" json:"maxRequestKiB"`
}
func NewDeviceConfiguration(id protocol.DeviceID, name string) DeviceConfiguration {
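The new MaxRequestKiB field plausibly bounds how much request data a single remote device may have outstanding at once (cf. the request-concurrency change in lib/model). A hypothetical helper sketch, in package config, for turning the setting into a byte budget; the fallback value and the helper itself are assumptions for illustration, not part of this changeset:
// Hypothetical helper: convert the per-device MaxRequestKiB setting
// into a byte budget. defaultKiB is an assumed fallback, not defined
// in this changeset.
func requestBudgetBytes(cfg DeviceConfiguration) int {
	const defaultKiB = 1024 // assumed default when the field is unset
	kib := cfg.MaxRequestKiB
	if kib <= 0 {
		kib = defaultKiB
	}
	return kib * 1024
}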

View File

@@ -269,6 +269,10 @@ func (f *FolderConfiguration) SharedWith(device protocol.DeviceID) bool {
}
func (f *FolderConfiguration) CheckAvailableSpace(req int64) error {
val := f.MinDiskFree.BaseValue()
if val <= 0 {
return nil
}
fs := f.Filesystem()
usage, err := fs.Usage(".")
if err != nil {

View File

@@ -245,7 +245,9 @@ next:
deviceCfg, ok := s.cfg.Device(remoteID)
if !ok {
panic("bug: unknown device should already have been rejected")
l.Infof("Device %s removed from config during connection attempt at %s", remoteID, c)
c.Close()
continue
}
// Verify the name on the certificate. By default we set it to

View File

@@ -43,7 +43,19 @@ func Open(location string) (*Lowlevel, error) {
OpenFilesCacheCapacity: dbMaxOpenFiles,
WriteBuffer: dbWriteBuffer,
}
return open(location, opts)
}
// OpenRO attempts to open the database at the given location, read only.
func OpenRO(location string) (*Lowlevel, error) {
opts := &opt.Options{
OpenFilesCacheCapacity: dbMaxOpenFiles,
ReadOnly: true,
}
return open(location, opts)
}
func open(location string, opts *opt.Options) (*Lowlevel, error) {
db, err := leveldb.OpenFile(location, opts)
if leveldbIsCorrupted(err) {
db, err = leveldb.RecoverFile(location, opts)
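A minimal usage sketch for the new read-only opener, with a hypothetical database path; handy for inspection tooling that must not write to the index:
package main

import (
	"log"

	"github.com/syncthing/syncthing/lib/db"
)

func main() {
	// Hypothetical path; point this at a real Syncthing index database.
	ll, err := db.OpenRO("/home/user/.config/syncthing/index-v0.14.0.db")
	if err != nil {
		log.Fatal(err)
	}
	defer ll.Close() // assuming the embedded leveldb handle provides Close
	// ... read-only iteration over the index goes here ...
}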

View File

@@ -22,9 +22,10 @@ import (
// 4: v0.14.49
// 5: v0.14.49
// 6: v0.14.50
// 7: v0.14.53
const (
dbVersion = 6
dbMinSyncthingVersion = "v0.14.50"
dbVersion = 7
dbMinSyncthingVersion = "v0.14.53"
)
type databaseDowngradeError struct {
@@ -79,6 +80,9 @@ func (db *schemaUpdater) updateSchema() error {
if prevVersion < 6 {
db.updateSchema5to6()
}
if prevVersion < 7 {
db.updateSchema6to7()
}
miscDB.PutInt64("dbVersion", dbVersion)
miscDB.PutString("dbMinSyncthingVersion", dbMinSyncthingVersion)
@@ -259,3 +263,39 @@ func (db *schemaUpdater) updateSchema5to6() {
})
}
}
// updateSchema6to7 checks whether all currently locally needed files are really
// needed and removes them if not.
func (db *schemaUpdater) updateSchema6to7() {
t := db.newReadWriteTransaction()
defer t.close()
var gk []byte
var nk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
db.withNeedLocal(folder, false, func(f FileIntf) bool {
name := []byte(f.FileName())
global := f.(protocol.FileInfo)
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
svl, err := t.Get(gk, nil)
if err != nil {
// If there is no global list, we hardly need it.
t.Delete(t.db.keyer.GenerateNeedFileKey(nk, folder, name))
return true
}
var fl VersionList
err = fl.Unmarshal(svl)
if err != nil {
// This can't happen, but it's ignored everywhere else too,
// so lets not act on it.
return true
}
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); !need(global, haveLocalFV, localFV.Version) {
t.Delete(t.db.keyer.GenerateNeedFileKey(nk, folder, name))
}
return true
})
}
}

View File

@@ -1309,6 +1309,31 @@ func TestNeedWithNewerInvalid(t *testing.T) {
}
}
func TestNeedAfterDeviceRemove(t *testing.T) {
ldb := db.OpenMemory()
file := "foo"
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
fs := fileList{{Name: file, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1}}}}}
s.Update(protocol.LocalDeviceID, fs)
fs[0].Version = fs[0].Version.Update(myID)
s.Update(remoteDevice0, fs)
if need := needList(s, protocol.LocalDeviceID); len(need) != 1 {
t.Fatal("Expected one local need, got", need)
}
s.Drop(remoteDevice0)
if need := needList(s, protocol.LocalDeviceID); len(need) != 0 {
t.Fatal("Expected no local need, got", need)
}
}
func replace(fs *db.FileSet, device protocol.DeviceID, files []protocol.FileInfo) {
fs.Drop(device)
fs.Update(device, files)

View File

@@ -161,15 +161,7 @@ func (vl VersionList) String() string {
// VersionList, a potentially removed old FileVersion and its index, as well as
// the index where the new FileVersion was inserted.
func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, db *instance) (_ VersionList, removedFV FileVersion, removedAt int, insertedAt int) {
removedAt, insertedAt = -1, -1
for i, v := range vl.Versions {
if bytes.Equal(v.Device, device) {
removedAt = i
removedFV = v
vl.Versions = append(vl.Versions[:i], vl.Versions[i+1:]...)
break
}
}
vl, removedFV, removedAt = vl.pop(device)
nv := FileVersion{
Device: device,
@@ -222,6 +214,20 @@ func (vl VersionList) insertAt(i int, v FileVersion) VersionList {
return vl
}
// pop returns the VersionList without the entry for the given device, as well
// as the removed FileVersion and the position, where that FileVersion was.
// If there is no FileVersion for the given device, the position is -1.
func (vl VersionList) pop(device []byte) (VersionList, FileVersion, int) {
removedAt := -1
for i, v := range vl.Versions {
if bytes.Equal(v.Device, device) {
vl.Versions = append(vl.Versions[:i], vl.Versions[i+1:]...)
return vl, v, i
}
}
return vl, FileVersion{}, removedAt
}
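A small hypothetical example of the pop contract, assuming it sits inside package db next to the type (with fmt imported); other FileVersion fields are left zero for brevity:
func ExampleVersionListPop() { // hypothetical, inside package db
	a, b := []byte("A"), []byte("B")
	vl := VersionList{Versions: []FileVersion{{Device: a}, {Device: b}}}

	vl, fv, at := vl.pop(b)
	fmt.Println(at, string(fv.Device), len(vl.Versions)) // 1 B 1

	vl, fv, at = vl.pop(b)
	fmt.Println(at, len(vl.Versions)) // absent device: -1 1
}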
func (vl VersionList) Get(device []byte) (FileVersion, bool) {
for _, v := range vl.Versions {
if bytes.Equal(v.Device, device) {

View File

@@ -7,8 +7,6 @@
package db
import (
"bytes"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
@@ -100,29 +98,18 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
name := []byte(file.Name)
var newGlobal protocol.FileInfo
var global protocol.FileInfo
if insertedAt == 0 {
// Inserted a new newest version
newGlobal = file
global = file
} else if new, ok := t.getFile(folder, fl.Versions[0].Device, name); ok {
// The previous second version is now the first
newGlobal = new
global = new
} else {
panic("This file must exist in the db")
}
// Fixup the list of files we need.
nk := t.db.keyer.GenerateNeedFileKey(nil, folder, name)
hasNeeded, _ := t.db.Has(nk, nil)
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); need(newGlobal, haveLocalFV, localFV.Version) {
if !hasNeeded {
l.Debugf("local need insert; folder=%q, name=%q", folder, name)
t.Put(nk, nil)
}
} else if hasNeeded {
l.Debugf("local need delete; folder=%q, name=%q", folder, name)
t.Delete(nk)
}
t.updateLocalNeed(folder, name, fl, global)
if removedAt != 0 && insertedAt != 0 {
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
@@ -145,7 +132,7 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
}
// Add the new global to the global size counter
meta.addFile(protocol.GlobalDeviceID, newGlobal)
meta.addFile(protocol.GlobalDeviceID, global)
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
t.Put(gk, mustMarshal(&fl))
@@ -153,6 +140,23 @@ func (t readWriteTransaction) updateGlobal(gk, folder, device []byte, file proto
return true
}
// updateLocalNeeds checks whether the given file is still needed on the local
// device according to the version list and global FileInfo given and updates
// the db accordingly.
func (t readWriteTransaction) updateLocalNeed(folder, name []byte, fl VersionList, global protocol.FileInfo) {
nk := t.db.keyer.GenerateNeedFileKey(nil, folder, name)
hasNeeded, _ := t.db.Has(nk, nil)
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); need(global, haveLocalFV, localFV.Version) {
if !hasNeeded {
l.Debugf("local need insert; folder=%q, name=%q", folder, name)
t.Put(nk, nil)
}
} else if hasNeeded {
l.Debugf("local need delete; folder=%q, name=%q", folder, name)
t.Delete(nk)
}
}
func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
// We never need an invalid file.
if global.IsInvalid() {
@@ -189,36 +193,37 @@ func (t readWriteTransaction) removeFromGlobal(gk, folder, device, file []byte,
return
}
removed := false
for i := range fl.Versions {
if bytes.Equal(fl.Versions[i].Device, device) {
if i == 0 && meta != nil {
f, ok := t.getFile(folder, device, file)
if !ok {
// didn't exist anyway, apparently
continue
}
meta.removeFile(protocol.GlobalDeviceID, f)
removed = true
}
fl.Versions = append(fl.Versions[:i], fl.Versions[i+1:]...)
break
fl, _, removedAt := fl.pop(device)
if removedAt == -1 {
// There is no version for the given device
return
}
if removedAt == 0 {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
if f, ok := t.getFile(folder, device, file); ok {
meta.removeFile(protocol.GlobalDeviceID, f)
}
}
if len(fl.Versions) == 0 {
t.Delete(t.db.keyer.GenerateNeedFileKey(nil, folder, file))
t.Delete(gk)
return
}
if removedAt == 0 {
global, ok := t.getFile(folder, fl.Versions[0].Device, file)
if !ok {
panic("This file must exist in the db")
}
t.updateLocalNeed(folder, file, fl, global)
meta.addFile(protocol.GlobalDeviceID, global)
}
l.Debugf("new global after remove: %v", fl)
t.Put(gk, mustMarshal(&fl))
if removed {
if f, ok := t.getFile(folder, fl.Versions[0].Device, file); ok {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
meta.addFile(protocol.GlobalDeviceID, f)
}
}
}
func (t readWriteTransaction) deleteKeyPrefix(prefix []byte) {

View File

@@ -110,9 +110,8 @@ func TestGlobalOverHTTPS(t *testing.T) {
t.Fatal(err)
}
// Generate a server certificate, using fewer bits than usual to hurry the
// process along a bit.
cert, err := tlsutil.NewCertificate(dir+"/cert.pem", dir+"/key.pem", "syncthing", 1024)
// Generate a server certificate.
cert, err := tlsutil.NewCertificate(dir+"/cert.pem", dir+"/key.pem", "syncthing")
if err != nil {
t.Fatal(err)
}
@@ -176,9 +175,8 @@ func TestGlobalAnnounce(t *testing.T) {
t.Fatal(err)
}
// Generate a server certificate, using fewer bits than usual to hurry the
// process along a bit.
cert, err := tlsutil.NewCertificate(dir+"/cert.pem", dir+"/key.pem", "syncthing", 1024)
// Generate a server certificate.
cert, err := tlsutil.NewCertificate(dir+"/cert.pem", dir+"/key.pem", "syncthing")
if err != nil {
t.Fatal(err)
}

View File

@@ -79,6 +79,10 @@ type FileInfo interface {
// FileMode is similar to os.FileMode
type FileMode uint32
func (fm FileMode) String() string {
return os.FileMode(fm).String()
}
// Usage represents filesystem space usage
type Usage struct {
Free int64

View File

@@ -98,3 +98,11 @@ func TestCanonicalize(t *testing.T) {
}
}
}
func TestFileModeString(t *testing.T) {
var fm FileMode = 0777
exp := "-rwxrwxrwx"
if fm.String() != exp {
t.Fatalf("Got %v, expected %v", fm.String(), exp)
}
}

View File

@@ -28,7 +28,10 @@ const (
NumLevels
)
const DebugFlags = log.Ltime | log.Ldate | log.Lmicroseconds | log.Lshortfile
const (
DefaultFlags = log.Ltime
DebugFlags = log.Ltime | log.Ldate | log.Lmicroseconds | log.Lshortfile
)
// A MessageHandler is called with the log level and message text.
type MessageHandler func(l LogLevel, msg string)
@@ -57,8 +60,8 @@ type Logger interface {
type logger struct {
logger *log.Logger
handlers [NumLevels][]MessageHandler
facilities map[string]string // facility name => description
debug map[string]bool // facility name => debugging enabled
facilities map[string]string // facility name => description
debug map[string]struct{} // only facility names with debugging enabled
mut sync.Mutex
}
@@ -66,16 +69,17 @@ type logger struct {
var DefaultLogger = New()
func New() Logger {
res := &logger{
facilities: make(map[string]string),
debug: make(map[string]struct{}),
}
if os.Getenv("LOGGER_DISCARD") != "" {
// Hack to completely disable logging, for example when running benchmarks.
return &logger{
logger: log.New(ioutil.Discard, "", 0),
}
}
return &logger{
logger: log.New(os.Stdout, "", log.Ltime),
res.logger = log.New(ioutil.Discard, "", 0)
return res
}
res.logger = log.New(os.Stdout, "", DefaultFlags)
return res
}
// AddHandler registers a new MessageHandler to receive messages with the
@@ -207,7 +211,7 @@ func (l *logger) Fatalf(format string, vals ...interface{}) {
// ShouldDebug returns true if the given facility has debugging enabled.
func (l *logger) ShouldDebug(facility string) bool {
l.mut.Lock()
res := l.debug[facility]
_, res := l.debug[facility]
l.mut.Unlock()
return res
}
@@ -215,20 +219,25 @@ func (l *logger) ShouldDebug(facility string) bool {
// SetDebug enabled or disables debugging for the given facility name.
func (l *logger) SetDebug(facility string, enabled bool) {
l.mut.Lock()
l.debug[facility] = enabled
l.mut.Unlock()
l.SetFlags(DebugFlags)
defer l.mut.Unlock()
if _, ok := l.debug[facility]; enabled && !ok {
l.SetFlags(DebugFlags)
l.debug[facility] = struct{}{}
} else if !enabled && ok {
delete(l.debug, facility)
if len(l.debug) == 0 {
l.SetFlags(DefaultFlags)
}
}
}
// FacilityDebugging returns the set of facilities that have debugging
// enabled.
func (l *logger) FacilityDebugging() []string {
var enabled []string
enabled := make([]string, 0, len(l.debug))
l.mut.Lock()
for facility, isEnabled := range l.debug {
if isEnabled {
enabled = append(enabled, facility)
}
for facility := range l.debug {
enabled = append(enabled, facility)
}
l.mut.Unlock()
return enabled
@@ -249,17 +258,7 @@ func (l *logger) Facilities() map[string]string {
// NewFacility returns a new logger bound to the named facility.
func (l *logger) NewFacility(facility, description string) Logger {
l.mut.Lock()
if l.facilities == nil {
l.facilities = make(map[string]string)
}
if description != "" {
l.facilities[facility] = description
}
if l.debug == nil {
l.debug = make(map[string]bool)
}
l.debug[facility] = false
l.facilities[facility] = description
l.mut.Unlock()
return &facilityLogger{

View File

@@ -0,0 +1,50 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import "sync"
type byteSemaphore struct {
max int
available int
mut sync.Mutex
cond *sync.Cond
}
func newByteSemaphore(max int) *byteSemaphore {
s := byteSemaphore{
max: max,
available: max,
}
s.cond = sync.NewCond(&s.mut)
return &s
}
func (s *byteSemaphore) take(bytes int) {
if bytes > s.max {
bytes = s.max
}
s.mut.Lock()
for bytes > s.available {
s.cond.Wait()
}
s.available -= bytes
s.mut.Unlock()
}
func (s *byteSemaphore) give(bytes int) {
if bytes > s.max {
bytes = s.max
}
s.mut.Lock()
if s.available+bytes > s.max {
panic("bug: can never give more than max")
}
s.available += bytes
s.cond.Broadcast()
s.mut.Unlock()
}
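A short usage sketch for the semaphore: take blocks until the requested byte budget is free, give returns it, and oversized requests are clamped to max so they can still make progress. A hypothetical example function in the same package:
func exampleByteSemaphore() {
	// Bound the total bytes of concurrently served requests.
	limiter := newByteSemaphore(1 << 20) // ~1 MiB budget

	serve := func(req []byte) {
		limiter.take(len(req)) // blocks until len(req) bytes are free
		defer limiter.give(len(req))
		// ... handle the request ...
	}
	serve(make([]byte, 128<<10)) // a 128 KiB request fits the budget
}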

View File

@@ -11,14 +11,20 @@ import (
"errors"
"fmt"
"math/rand"
"path/filepath"
"sort"
"strings"
"sync/atomic"
"time"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/watchaggregator"
)
@@ -41,6 +47,8 @@ type folder struct {
scanDelay chan time.Duration
initialScanFinished chan struct{}
stopped chan struct{}
scanErrors []FileError
scanErrorsMut sync.Mutex
pullScheduled chan struct{}
@@ -80,6 +88,7 @@ func newFolder(model *Model, cfg config.FolderConfiguration) folder {
scanDelay: make(chan time.Duration),
initialScanFinished: make(chan struct{}),
stopped: make(chan struct{}),
scanErrorsMut: sync.NewMutex(),
pullScheduled: make(chan struct{}, 1), // This needs to be 1-buffered so that we queue a pull if we're busy when it comes.
@@ -266,14 +275,255 @@ func (f *folder) getHealthError() error {
}
func (f *folder) scanSubdirs(subDirs []string) error {
if err := f.model.internalScanFolderSubdirs(f.ctx, f.folderID, subDirs, f.localFlags); err != nil {
// Potentially sets the error twice, once in the scanner just
// by doing a check, and once here, if the error returned is
// the same one as returned by CheckHealth, though
// duplicate set is handled by setError.
if err := f.CheckHealth(); err != nil {
return err
}
f.model.fmut.RLock()
fset := f.model.folderFiles[f.ID]
ignores := f.model.folderIgnores[f.ID]
f.model.fmut.RUnlock()
mtimefs := fset.MtimeFS()
for i := range subDirs {
sub := osutil.NativeFilename(subDirs[i])
if sub == "" {
// A blank subdirs means to scan the entire folder. We can trim
// the subDirs list and go on our way.
subDirs = nil
break
}
subDirs[i] = sub
}
// Check if the ignore patterns changed as part of scanning this folder.
// If they did we should schedule a pull of the folder so that we
// request things we might have suddenly become unignored and so on.
oldHash := ignores.Hash()
defer func() {
if ignores.Hash() != oldHash {
l.Debugln("Folder", f.ID, "ignore patterns changed; triggering puller")
f.IgnoresUpdated()
}
}()
if err := ignores.Load(".stignore"); err != nil && !fs.IsNotExist(err) {
err = fmt.Errorf("loading ignores: %v", err)
f.setError(err)
return err
}
// Clean the list of subitems to ensure that we start at a known
// directory, and don't scan subdirectories of things we've already
// scanned.
subDirs = unifySubs(subDirs, func(f string) bool {
_, ok := fset.Get(protocol.LocalDeviceID, f)
return ok
})
f.setState(FolderScanning)
fchan := scanner.Walk(f.ctx, scanner.Config{
Folder: f.ID,
Subs: subDirs,
Matcher: ignores,
TempLifetime: time.Duration(f.model.cfg.Options().KeepTemporariesH) * time.Hour,
CurrentFiler: cFiler{f.model, f.ID},
Filesystem: mtimefs,
IgnorePerms: f.IgnorePerms,
AutoNormalize: f.AutoNormalize,
Hashers: f.model.numHashers(f.ID),
ShortID: f.model.shortID,
ProgressTickIntervalS: f.ScanProgressIntervalS,
UseLargeBlocks: f.UseLargeBlocks,
LocalFlags: f.localFlags,
})
batchFn := func(fs []protocol.FileInfo) error {
if err := f.CheckHealth(); err != nil {
l.Debugf("Stopping scan of folder %s due to: %s", f.Description(), err)
return err
}
f.model.updateLocalsFromScanning(f.ID, fs)
return nil
}
// Resolve items which are identical with the global state.
if f.localFlags&protocol.FlagLocalReceiveOnly != 0 {
oldBatchFn := batchFn // can't reference batchFn directly (recursion)
batchFn = func(fs []protocol.FileInfo) error {
for i := range fs {
switch gf, ok := fset.GetGlobal(fs[i].Name); {
case !ok:
continue
case gf.IsEquivalentOptional(fs[i], false, false, protocol.FlagLocalReceiveOnly):
// What we have locally is equivalent to the global file.
fs[i].Version = fs[i].Version.Merge(gf.Version)
fallthrough
case fs[i].IsDeleted() && gf.IsReceiveOnlyChanged():
// Our item is deleted and the global item is our own
// receive only file. We can't delete file infos, so
// we just pretend it is a normal deleted file (nobody
// cares about that).
fs[i].LocalFlags &^= protocol.FlagLocalReceiveOnly
}
}
return oldBatchFn(fs)
}
}
batch := newFileInfoBatch(batchFn)
// Schedule a pull after scanning, but only if we actually detected any
// changes.
changes := 0
defer func() {
if changes > 0 {
f.SchedulePull()
}
}()
f.clearScanErrors(subDirs)
for res := range fchan {
if res.Err != nil {
f.newScanError(res.Path, res.Err)
continue
}
if err := batch.flushIfFull(); err != nil {
return err
}
batch.append(res.File)
changes++
}
if err := batch.flush(); err != nil {
return err
}
if len(subDirs) == 0 {
// If we have no specific subdirectories to traverse, set it to one
// empty prefix so we traverse the entire folder contents once.
subDirs = []string{""}
}
// Do a scan of the database for each prefix, to check for deleted and
// ignored files.
var toIgnore []db.FileInfoTruncated
ignoredParent := ""
pathSep := string(fs.PathSeparator)
for _, sub := range subDirs {
var iterError error
fset.WithPrefixedHaveTruncated(protocol.LocalDeviceID, sub, func(fi db.FileIntf) bool {
file := fi.(db.FileInfoTruncated)
if err := batch.flushIfFull(); err != nil {
iterError = err
return false
}
if ignoredParent != "" && !strings.HasPrefix(file.Name, ignoredParent+pathSep) {
for _, file := range toIgnore {
l.Debugln("marking file as ignored", file)
nf := file.ConvertToIgnoredFileInfo(f.model.id.Short())
batch.append(nf)
changes++
if err := batch.flushIfFull(); err != nil {
iterError = err
return false
}
}
toIgnore = toIgnore[:0]
ignoredParent = ""
}
switch ignored := ignores.Match(file.Name).IsIgnored(); {
case !file.IsIgnored() && ignored:
// File was not ignored at last pass but has been ignored.
if file.IsDirectory() {
// Delay ignoring as a child might be unignored.
toIgnore = append(toIgnore, file)
if ignoredParent == "" {
// If the parent wasn't ignored already, set
// this path as the "highest" ignored parent
ignoredParent = file.Name
}
return true
}
l.Debugln("marking file as ignored", f)
nf := file.ConvertToIgnoredFileInfo(f.model.id.Short())
batch.append(nf)
changes++
case file.IsIgnored() && !ignored:
// Successfully scanned items are already un-ignored during
// the scan, so check whether the file is deleted.
fallthrough
case !file.IsIgnored() && !file.IsDeleted() && !file.IsUnsupported():
// The file is not ignored, deleted or unsupported. Let's check
// if it's still here. Simply stat()ing it won't do, as there are
// tons of corner cases (e.g. parent dir -> symlink, missing
// permissions)
if !osutil.IsDeleted(mtimefs, file.Name) {
if ignoredParent != "" {
// Don't ignore parents of this not ignored item
toIgnore = toIgnore[:0]
ignoredParent = ""
}
return true
}
nf := protocol.FileInfo{
Name: file.Name,
Type: file.Type,
Size: 0,
ModifiedS: file.ModifiedS,
ModifiedNs: file.ModifiedNs,
ModifiedBy: f.model.id.Short(),
Deleted: true,
Version: file.Version.Update(f.model.shortID),
LocalFlags: f.localFlags,
}
// We do not want to override the global version
// with the deleted file. Keeping only our local
// counter makes sure we are in conflict with any
// other existing versions, which will be resolved
// by the normal pulling mechanisms.
if file.ShouldConflict() {
nf.Version = nf.Version.DropOthers(f.model.shortID)
}
batch.append(nf)
changes++
}
return true
})
if iterError == nil && len(toIgnore) > 0 {
for _, file := range toIgnore {
l.Debugln("marking file as ignored", f)
nf := file.ConvertToIgnoredFileInfo(f.model.id.Short())
batch.append(nf)
changes++
if iterError = batch.flushIfFull(); iterError != nil {
break
}
}
toIgnore = toIgnore[:0]
}
if iterError != nil {
return iterError
}
}
if err := batch.flush(); err != nil {
return err
}
f.model.folderStatRef(f.ID).ScanCompleted()
f.setState(FolderIdle)
return nil
}
@@ -430,3 +680,73 @@ func (f *folder) basePause() time.Duration {
func (f *folder) String() string {
return fmt.Sprintf("%s/%s@%p", f.Type, f.folderID, f)
}
func (f *folder) newScanError(path string, err error) {
f.scanErrorsMut.Lock()
f.scanErrors = append(f.scanErrors, FileError{
Err: err.Error(),
Path: path,
})
f.scanErrorsMut.Unlock()
}
func (f *folder) clearScanErrors(subDirs []string) {
f.scanErrorsMut.Lock()
defer f.scanErrorsMut.Unlock()
if len(subDirs) == 0 {
f.scanErrors = nil
return
}
filtered := f.scanErrors[:0]
pathSep := string(fs.PathSeparator)
outer:
for _, fe := range f.scanErrors {
for _, sub := range subDirs {
if strings.HasPrefix(fe.Path, sub+pathSep) {
continue outer
}
}
filtered = append(filtered, fe)
}
f.scanErrors = filtered
}
func (f *folder) Errors() []FileError {
f.scanErrorsMut.Lock()
defer f.scanErrorsMut.Unlock()
return append([]FileError{}, f.scanErrors...)
}
// The exists function is expected to return true for all known paths
// (excluding "" and ".")
func unifySubs(dirs []string, exists func(dir string) bool) []string {
if len(dirs) == 0 {
return nil
}
sort.Strings(dirs)
if dirs[0] == "" || dirs[0] == "." || dirs[0] == string(fs.PathSeparator) {
return nil
}
prev := "./" // Anything that can't be parent of a clean path
for i := 0; i < len(dirs); {
dir, err := fs.Canonicalize(dirs[i])
if err != nil {
l.Debugf("Skipping %v for scan: %s", dirs[i], err)
dirs = append(dirs[:i], dirs[i+1:]...)
continue
}
if dir == prev || strings.HasPrefix(dir, prev+string(fs.PathSeparator)) {
dirs = append(dirs[:i], dirs[i+1:]...)
continue
}
parent := filepath.Dir(dir)
for parent != "." && parent != string(fs.PathSeparator) && !exists(parent) {
dir = parent
parent = filepath.Dir(dir)
}
dirs[i] = dir
prev = dir
i++
}
return dirs
}
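For intuition, a hypothetical call assuming a Unix path separator and a database that only knows "photos": entries covered by an earlier entry are dropped, and unknown paths are walked up to their closest known ancestor.

subs := unifySubs(
	[]string{"photos/2018/new", "photos/2018/new/raw"},
	func(dir string) bool { return dir == "photos" },
)
// subs == []string{"photos/2018"}: "photos/2018/new" is walked up
// until its parent ("photos") is known, and the second entry is then
// covered by the first.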

@@ -15,7 +15,6 @@ import (
"runtime"
"sort"
"strings"
stdsync "sync"
"time"
"github.com/syncthing/syncthing/lib/config"
@@ -102,17 +101,17 @@ type sendReceiveFolder struct {
queue *jobQueue
errors map[string]string // path -> error string
errorsMut sync.Mutex
pullErrors map[string]string // path -> error string
pullErrorsMut sync.Mutex
}
func newSendReceiveFolder(model *Model, cfg config.FolderConfiguration, ver versioner.Versioner, fs fs.Filesystem) service {
f := &sendReceiveFolder{
folder: newFolder(model, cfg),
fs: fs,
versioner: ver,
queue: newJobQueue(),
errorsMut: sync.NewMutex(),
folder: newFolder(model, cfg),
fs: fs,
versioner: ver,
queue: newJobQueue(),
pullErrorsMut: sync.NewMutex(),
}
f.folder.puller = f
@@ -167,7 +166,7 @@ func (f *sendReceiveFolder) pull() bool {
l.Debugf("%v pulling (ignoresChanged=%v)", f, ignoresChanged)
f.setState(FolderSyncing)
f.clearErrors()
f.clearPullErrors()
scanChan := make(chan string)
go f.pullScannerRoutine(scanChan)
@@ -204,10 +203,10 @@ func (f *sendReceiveFolder) pull() bool {
// we're not making it. Probably there are write
// errors preventing us. Flag this with a warning and
// wait a bit longer before retrying.
if folderErrors := f.PullErrors(); len(folderErrors) > 0 {
if errors := f.Errors(); len(errors) > 0 {
events.Default.Log(events.FolderErrors, map[string]interface{}{
"folder": f.folderID,
"errors": folderErrors,
"errors": errors,
})
}
break
@@ -327,7 +326,7 @@ func (f *sendReceiveFolder) processNeeded(ignores *ignore.Matcher, folderFiles *
changed++
case runtime.GOOS == "windows" && fs.WindowsInvalidFilename(file.Name):
f.newError("pull", file.Name, fs.ErrInvalidFilename)
f.newPullError("pull", file.Name, fs.ErrInvalidFilename)
case file.IsDeleted():
if file.IsDirectory() {
@@ -448,7 +447,7 @@ nextFile:
continue
}
if fi.IsDeleted() || fi.Type != protocol.FileInfoTypeFile {
if fi.IsDeleted() || fi.IsInvalid() || fi.Type != protocol.FileInfoTypeFile {
// The item has changed type or status in the index while we
// were processing directories above.
f.queue.Done(fileName)
@@ -497,7 +496,7 @@ nextFile:
continue nextFile
}
}
f.newError("pull", fileName, errNotAvailable)
f.newPullError("pull", fileName, errNotAvailable)
}
return changed, fileDeletions, dirDeletions, nil
@@ -513,7 +512,7 @@ func (f *sendReceiveFolder) processDeletions(ignores *ignore.Matcher, fileDeleti
l.Debugln(f, "Deleting file", file.Name)
if update, err := f.deleteFile(file, scanChan); err != nil {
f.newError("delete file", file.Name, err)
f.newPullError("delete file", file.Name, err)
} else {
dbUpdateChan <- update
}
@@ -573,7 +572,7 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
case err == nil && (!info.IsDir() || info.IsSymlink()):
err = osutil.InWritableDir(f.fs.Remove, f.fs, file.Name)
if err != nil {
f.newError("dir replace", file.Name, err)
f.newPullError("dir replace", file.Name, err)
return
}
fallthrough
@@ -603,13 +602,13 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
if err = osutil.InWritableDir(mkdir, f.fs, file.Name); err == nil {
dbUpdateChan <- dbUpdateJob{file, dbUpdateHandleDir}
} else {
f.newError("dir mkdir", file.Name, err)
f.newPullError("dir mkdir", file.Name, err)
}
return
// Weird error when stat()'ing the dir. Probably won't work to do
// anything else with it if we can't even stat() it.
case err != nil:
f.newError("dir stat", file.Name, err)
f.newPullError("dir stat", file.Name, err)
return
}
@@ -621,7 +620,7 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
} else if err := f.fs.Chmod(file.Name, mode|(fs.FileMode(info.Mode())&retainBits)); err == nil {
dbUpdateChan <- dbUpdateJob{file, dbUpdateHandleDir}
} else {
f.newError("dir chmod", file.Name, err)
f.newPullError("dir chmod", file.Name, err)
}
}
@@ -631,7 +630,7 @@ func (f *sendReceiveFolder) checkParent(file string, scanChan chan<- string) boo
parent := filepath.Dir(file)
if err := osutil.TraversesSymlink(f.fs, parent); err != nil {
f.newError("traverses q", file, err)
f.newPullError("traverses q", file, err)
return false
}
@@ -652,7 +651,7 @@ func (f *sendReceiveFolder) checkParent(file string, scanChan chan<- string) boo
}
l.Debugf("%v resurrecting parent directory of %v", f, file)
if err := f.fs.MkdirAll(parent, 0755); err != nil {
f.newError("resurrecting parent dir", file, err)
f.newPullError("resurrecting parent dir", file, err)
return false
}
scanChan <- parent
@@ -691,7 +690,7 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
// Index entry from a Syncthing predating the support for including
// the link target in the index entry. We log this as an error.
err = errors.New("incompatible symlink entry; rescan with newer Syncthing on source")
f.newError("symlink", file.Name, err)
f.newPullError("symlink", file.Name, err)
return
}
@@ -701,7 +700,7 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
// path.
err = osutil.InWritableDir(f.fs.Remove, f.fs, file.Name)
if err != nil {
f.newError("symlink remove", file.Name, err)
f.newPullError("symlink remove", file.Name, err)
return
}
}
@@ -715,7 +714,7 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
if err = osutil.InWritableDir(createLink, f.fs, file.Name); err == nil {
dbUpdateChan <- dbUpdateJob{file, dbUpdateHandleSymlink}
} else {
f.newError("symlink create", file.Name, err)
f.newPullError("symlink create", file.Name, err)
}
}
@@ -743,7 +742,7 @@ func (f *sendReceiveFolder) handleDeleteDir(file protocol.FileInfo, ignores *ign
}()
if err = f.deleteDir(file.Name, ignores, scanChan); err != nil {
f.newError("delete dir", file.Name, err)
f.newPullError("delete dir", file.Name, err)
return
}
@@ -876,7 +875,7 @@ func (f *sendReceiveFolder) renameFile(cur, source, target protocol.FileInfo, db
err = errModified
default:
if fi, err := scanner.CreateFileInfo(stat, target.Name, f.fs); err == nil {
if !fi.IsEquivalentOptional(curTarget, false, true, protocol.LocalAllFlags) {
if !fi.IsEquivalentOptional(curTarget, f.IgnorePerms, true, protocol.LocalAllFlags) {
// Target changed
scanChan <- target.Name
err = errModified
@@ -1018,7 +1017,7 @@ func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, copyChan chan<- c
}
if err := f.CheckAvailableSpace(blocksSize); err != nil {
f.newError("pulling file", file.Name, err)
f.newPullError("pulling file", file.Name, err)
f.queue.Done(file.Name)
return
}
@@ -1130,7 +1129,7 @@ func (f *sendReceiveFolder) shortcutFile(file, curFile protocol.FileInfo, dbUpda
if !f.IgnorePerms && !file.NoPermissions {
if err = f.fs.Chmod(file.Name, fs.FileMode(file.Permissions&0777)); err != nil {
f.newError("shortcut", file.Name, err)
f.newPullError("shortcut", file.Name, err)
return
}
}
@@ -1142,14 +1141,15 @@ func (f *sendReceiveFolder) shortcutFile(file, curFile protocol.FileInfo, dbUpda
file.Version = file.Version.Merge(curFile.Version)
dbUpdateChan <- dbUpdateJob{file, dbUpdateShortcutFile}
return
}
// copierRoutine reads copierStates until the in channel closes and
// performs the relevant copies when possible, or passes the block on
// to the puller routine.
func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan chan<- pullBlockState, out chan<- *sharedPullerState) {
buf := make([]byte, protocol.MinBlockSize)
buf := protocol.BufferPool.Get(protocol.MinBlockSize)
defer func() {
protocol.BufferPool.Put(buf)
}()
for state := range in {
dstFd, err := state.tempFile()
@@ -1225,11 +1225,7 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
continue
}
if s := int(block.Size); s > cap(buf) {
buf = make([]byte, s)
} else {
buf = buf[:s]
}
buf = protocol.BufferPool.Upgrade(buf, int(block.Size))
found, err := weakHashFinder.Iterate(block.WeakHash, buf, func(offset int64) bool {
if verifyBuffer(buf, block) != nil {
@@ -1545,7 +1541,7 @@ func (f *sendReceiveFolder) finisherRoutine(ignores *ignore.Matcher, in <-chan *
}
if err != nil {
f.newError("finisher", state.file.Name, err)
f.newPullError("finisher", state.file.Name, err)
} else {
blockStatsMut.Lock()
blockStats["total"] += state.reused + state.copyTotal + state.pullTotal
@@ -1770,34 +1766,36 @@ func (f *sendReceiveFolder) moveForConflict(name string, lastModBy string) error
return err
}
func (f *sendReceiveFolder) newError(context, path string, err error) {
f.errorsMut.Lock()
defer f.errorsMut.Unlock()
func (f *sendReceiveFolder) newPullError(context, path string, err error) {
f.pullErrorsMut.Lock()
defer f.pullErrorsMut.Unlock()
// We might get more than one error report for a file (e.g. an error
// on Write() followed by one on Close()); we keep the first error, as
// that is probably closest to the root cause.
if _, ok := f.errors[path]; ok {
if _, ok := f.pullErrors[path]; ok {
return
}
l.Infof("Puller (folder %s, file %q): %s: %v", f.Description(), path, context, err)
f.errors[path] = fmt.Sprintf("%s: %s", context, err.Error())
f.pullErrors[path] = fmt.Sprintf("%s: %s", context, err.Error())
}
func (f *sendReceiveFolder) clearErrors() {
f.errorsMut.Lock()
f.errors = make(map[string]string)
f.errorsMut.Unlock()
func (f *sendReceiveFolder) clearPullErrors() {
f.pullErrorsMut.Lock()
f.pullErrors = make(map[string]string)
f.pullErrorsMut.Unlock()
}
func (f *sendReceiveFolder) PullErrors() []FileError {
f.errorsMut.Lock()
errors := make([]FileError, 0, len(f.errors))
for path, err := range f.errors {
func (f *sendReceiveFolder) Errors() []FileError {
scanErrors := f.folder.Errors()
f.pullErrorsMut.Lock()
errors := make([]FileError, 0, len(f.pullErrors)+len(scanErrors))
for path, err := range f.pullErrors {
errors = append(errors, FileError{path, err})
}
f.pullErrorsMut.Unlock()
errors = append(errors, scanErrors...)
sort.Sort(fileErrorList(errors))
f.errorsMut.Unlock()
return errors
}
@@ -1882,7 +1880,7 @@ func (f *sendReceiveFolder) checkToBeDeleted(cur protocol.FileInfo, scanChan cha
if err != nil {
return err
}
if !fi.IsEquivalentOptional(cur, false, true, protocol.LocalAllFlags) {
if !fi.IsEquivalentOptional(cur, f.IgnorePerms, true, protocol.LocalAllFlags) {
// File changed
scanChan <- cur.Name
return errModified
@@ -1935,41 +1933,3 @@ func componentCount(name string) int {
}
return count
}
type byteSemaphore struct {
max int
available int
mut stdsync.Mutex
cond *stdsync.Cond
}
func newByteSemaphore(max int) *byteSemaphore {
s := byteSemaphore{
max: max,
available: max,
}
s.cond = stdsync.NewCond(&s.mut)
return &s
}
func (s *byteSemaphore) take(bytes int) {
if bytes > s.max {
panic("bug: more than max bytes will never be available")
}
s.mut.Lock()
for bytes > s.available {
s.cond.Wait()
}
s.available -= bytes
s.mut.Unlock()
}
func (s *byteSemaphore) give(bytes int) {
s.mut.Lock()
if s.available+bytes > s.max {
panic("bug: can never give more than max")
}
s.available += bytes
s.cond.Broadcast()
s.mut.Unlock()
}
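A usage sketch for the semaphore above (hypothetical request handler): take blocks until the requested bytes fit within the budget, and give must return exactly what was taken.

func handle(sem *byteSemaphore, size int) {
	sem.take(size)       // blocks while the budget is exhausted
	defer sem.give(size) // release once the buffer is no longer needed
	// ... allocate and use a size-byte buffer ...
}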

@@ -97,9 +97,9 @@ func setUpSendReceiveFolder(model *Model) *sendReceiveFolder {
},
},
queue: newJobQueue(),
errors: make(map[string]string),
errorsMut: sync.NewMutex(),
queue: newJobQueue(),
pullErrors: make(map[string]string),
pullErrorsMut: sync.NewMutex(),
}
f.fs = fs.NewMtimeFS(f.Filesystem(), db.NewNamespacedKV(model.db, "mtime"))

lib/model/folder_test.go Normal file

@@ -0,0 +1,150 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package model
import (
"path/filepath"
"runtime"
"testing"
"github.com/d4l3k/messagediff"
"github.com/syncthing/syncthing/lib/config"
)
type unifySubsCase struct {
in []string // input to unifySubs
exists []string // paths that exist in the database
out []string // expected output
}
func unifySubsCases() []unifySubsCase {
cases := []unifySubsCase{
{
// 0. trailing slashes are cleaned, known paths are just passed on
[]string{"foo/", "bar//"},
[]string{"foo", "bar"},
[]string{"bar", "foo"}, // the output is sorted
},
{
// 1. "foo/bar" gets trimmed as it's covered by foo
[]string{"foo", "bar/", "foo/bar/"},
[]string{"foo", "bar"},
[]string{"bar", "foo"},
},
{
// 2. "" gets simplified to the empty list; ie scan all
[]string{"foo", ""},
[]string{"foo"},
nil,
},
{
// 3. "foo/bar" is unknown, but it's kept
// because its parent is known
[]string{"foo/bar"},
[]string{"foo"},
[]string{"foo/bar"},
},
{
// 4. two independent known paths, both are kept
// "usr/lib" is not a prefix of "usr/libexec"
[]string{"usr/lib", "usr/libexec"},
[]string{"usr", "usr/lib", "usr/libexec"},
[]string{"usr/lib", "usr/libexec"},
},
{
// 5. "usr/lib" is a prefix of "usr/lib/exec"
[]string{"usr/lib", "usr/lib/exec"},
[]string{"usr", "usr/lib", "usr/libexec"},
[]string{"usr/lib"},
},
{
// 6. .stignore and .stfolder are special and are passed on
// verbatim even though they are unknown
[]string{config.DefaultMarkerName, ".stignore"},
[]string{},
[]string{config.DefaultMarkerName, ".stignore"},
},
{
// 7. but the presence of something else unknown forces an actual
// scan
[]string{config.DefaultMarkerName, ".stignore", "foo/bar"},
[]string{},
[]string{config.DefaultMarkerName, ".stignore", "foo"},
},
{
// 8. explicit request to scan all
nil,
[]string{"foo"},
nil,
},
{
// 9. empty list of subs
[]string{},
[]string{"foo"},
nil,
},
{
// 10. absolute path
[]string{"/foo"},
[]string{"foo"},
[]string{"foo"},
},
}
if runtime.GOOS == "windows" {
// Fixup path separators
for i := range cases {
for j, p := range cases[i].in {
cases[i].in[j] = filepath.FromSlash(p)
}
for j, p := range cases[i].exists {
cases[i].exists[j] = filepath.FromSlash(p)
}
for j, p := range cases[i].out {
cases[i].out[j] = filepath.FromSlash(p)
}
}
}
return cases
}
func unifyExists(f string, tc unifySubsCase) bool {
for _, e := range tc.exists {
if f == e {
return true
}
}
return false
}
func TestUnifySubs(t *testing.T) {
cases := unifySubsCases()
for i, tc := range cases {
exists := func(f string) bool {
return unifyExists(f, tc)
}
out := unifySubs(tc.in, exists)
if diff, equal := messagediff.PrettyDiff(tc.out, out); !equal {
t.Errorf("Case %d failed; got %v, expected %v, diff:\n%s", i, out, tc.out, diff)
}
}
}
func BenchmarkUnifySubs(b *testing.B) {
cases := unifySubsCases()
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, tc := range cases {
exists := func(f string) bool {
return unifyExists(f, tc)
}
unifySubs(tc.in, exists)
}
}
}

@@ -8,7 +8,6 @@ package model
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"errors"
@@ -18,7 +17,6 @@ import (
"path/filepath"
"reflect"
"runtime"
"sort"
"strings"
stdsync "sync"
"time"
@@ -67,7 +65,7 @@ type service interface {
Serve()
Stop()
CheckHealth() error
PullErrors() []FileError
Errors() []FileError
WatchError() error
getState() (folderState, time.Time, error)
@@ -107,6 +105,7 @@ type Model struct {
pmut sync.RWMutex // protects the below
conn map[protocol.DeviceID]connections.Connection
connRequestLimiters map[protocol.DeviceID]*byteSemaphore
closed map[protocol.DeviceID]chan struct{}
helloMessages map[protocol.DeviceID]protocol.HelloResult
deviceDownloads map[protocol.DeviceID]*deviceDownloadState
@@ -160,6 +159,7 @@ func NewModel(cfg *config.Wrapper, id protocol.DeviceID, clientName, clientVersi
folderRunnerTokens: make(map[string][]suture.ServiceToken),
folderStatRefs: make(map[string]*stats.FolderStatisticsReference),
conn: make(map[protocol.DeviceID]connections.Connection),
connRequestLimiters: make(map[protocol.DeviceID]*byteSemaphore),
closed: make(map[protocol.DeviceID]chan struct{}),
helloMessages: make(map[protocol.DeviceID]protocol.HelloResult),
deviceDownloads: make(map[protocol.DeviceID]*deviceDownloadState),
@@ -1283,6 +1283,7 @@ func (m *Model) Closed(conn protocol.Connection, err error) {
m.progressEmitter.temporaryIndexUnsubscribe(conn)
}
delete(m.conn, device)
delete(m.connRequestLimiters, device)
delete(m.helloMessages, device)
delete(m.deviceDownloads, device)
delete(m.remotePausedFolders, device)
@@ -1316,19 +1317,40 @@ func (m *Model) closeLocked(device protocol.DeviceID) {
closeRawConn(conn)
}
// Implements protocol.RequestResponse
type requestResponse struct {
data []byte
closed chan struct{}
once stdsync.Once
}
func newRequestResponse(size int) *requestResponse {
return &requestResponse{
data: protocol.BufferPool.Get(size),
closed: make(chan struct{}),
}
}
func (r *requestResponse) Data() []byte {
return r.data
}
func (r *requestResponse) Close() {
r.once.Do(func() {
protocol.BufferPool.Put(r.data)
close(r.closed)
})
}
func (r *requestResponse) Wait() {
<-r.closed
}
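Taken together, the intended lifecycle looks like this; a sketch of the pattern wired up in Request below, with the limiter part only applying when a per-device limiter exists:

res := newRequestResponse(int(size)) // buffer comes from BufferPool
go func() {
	res.Wait()              // blocks until Close is called...
	limiter.give(int(size)) // ...then the bytes return to the semaphore
}()
// Fill res.Data() and hand res to the consumer; whoever finishes with
// the data calls res.Close(), which returns the buffer to the pool
// exactly once and unblocks Wait.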
// Request returns the specified data segment by reading it from local disk.
// Implements the protocol.Model interface.
func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, offset int64, hash []byte, weakHash uint32, fromTemporary bool, buf []byte) error {
if offset < 0 {
return protocol.ErrInvalid
}
if cfg, ok := m.cfg.Folder(folder); !ok || !cfg.SharedWith(deviceID) {
l.Warnf("Request from %s for file %s in unshared folder %q", deviceID, name, folder)
return protocol.ErrNoSuchFile
} else if cfg.Paused {
l.Debugf("Request from %s for file %s in paused folder %q", deviceID, name, folder)
return protocol.ErrInvalid
func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (out protocol.RequestResponse, err error) {
if size < 0 || offset < 0 {
return nil, protocol.ErrInvalid
}
m.fmut.RLock()
@@ -1339,35 +1361,69 @@ func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, offset
// The folder might be already unpaused in the config, but not yet
// in the model.
l.Debugf("Request from %s for file %s in unstarted folder %q", deviceID, name, folder)
return protocol.ErrInvalid
return nil, protocol.ErrInvalid
}
if !folderCfg.SharedWith(deviceID) {
l.Warnf("Request from %s for file %s in unshared folder %q", deviceID, name, folder)
return nil, protocol.ErrNoSuchFile
}
if folderCfg.Paused {
l.Debugf("Request from %s for file %s in paused folder %q", deviceID, name, folder)
return nil, protocol.ErrInvalid
}
// Make sure the path is valid and in canonical form
var err error
if name, err = fs.Canonicalize(name); err != nil {
l.Debugf("Request from %s in folder %q for invalid filename %s", deviceID, folder, name)
return protocol.ErrInvalid
return nil, protocol.ErrInvalid
}
if deviceID != protocol.LocalDeviceID {
l.Debugf("%v REQ(in): %s: %q / %q o=%d s=%d t=%v", m, deviceID, folder, name, offset, len(buf), fromTemporary)
l.Debugf("%v REQ(in): %s: %q / %q o=%d s=%d t=%v", m, deviceID, folder, name, offset, size, fromTemporary)
}
if fs.IsInternal(name) {
l.Debugf("%v REQ(in) for internal file: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
if folderIgnores.Match(name).IsIgnored() {
l.Debugf("%v REQ(in) for ignored file: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
folderFs := folderCfg.Filesystem()
if fs.IsInternal(name) {
l.Debugf("%v REQ(in) for internal file: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, len(buf))
return protocol.ErrNoSuchFile
}
if folderIgnores.Match(name).IsIgnored() {
l.Debugf("%v REQ(in) for ignored file: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, len(buf))
return protocol.ErrNoSuchFile
}
if err := osutil.TraversesSymlink(folderFs, filepath.Dir(name)); err != nil {
l.Debugf("%v REQ(in) traversal check: %s - %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, len(buf))
return protocol.ErrNoSuchFile
l.Debugf("%v REQ(in) traversal check: %s - %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
// Restrict parallel requests by connection/device
m.pmut.RLock()
limiter := m.connRequestLimiters[deviceID]
m.pmut.RUnlock()
if limiter != nil {
limiter.take(int(size))
}
// The requestResponse releases the bytes to the limiter when its Close method is called.
res := newRequestResponse(int(size))
defer func() {
// Close it ourselves if it isn't returned due to an error
if err != nil {
res.Close()
}
}()
if limiter != nil {
go func() {
res.Wait()
limiter.give(int(size))
}()
}
// Only check temp files if the flag is set, and if we are set to advertise
@@ -1378,11 +1434,12 @@ func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, offset
if info, err := folderFs.Lstat(tempFn); err != nil || !info.IsRegular() {
// Reject reads for anything that doesn't exist or is something
// other than a regular file.
return protocol.ErrNoSuchFile
l.Debugf("%v REQ(in) failed stating temp file (%v): %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
err := readOffsetIntoBuf(folderFs, tempFn, offset, buf)
if err == nil && scanner.Validate(buf, hash, weakHash) {
return nil
err := readOffsetIntoBuf(folderFs, tempFn, offset, res.data)
if err == nil && scanner.Validate(res.data, hash, weakHash) {
return res, nil
}
// Fall through to reading from a non-temp file, just in case the
// temp file has finished downloading.
@@ -1391,21 +1448,25 @@ func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, offset
if info, err := folderFs.Lstat(name); err != nil || !info.IsRegular() {
// Reject reads for anything that doesn't exist or is something
// other than a regular file.
return protocol.ErrNoSuchFile
l.Debugf("%v REQ(in) failed stating file (%v): %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
if err = readOffsetIntoBuf(folderFs, name, offset, buf); fs.IsNotExist(err) {
return protocol.ErrNoSuchFile
if err := readOffsetIntoBuf(folderFs, name, offset, res.data); fs.IsNotExist(err) {
l.Debugf("%v REQ(in) file doesn't exist: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
} else if err != nil {
return protocol.ErrGeneric
l.Debugf("%v REQ(in) failed reading file (%v): %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, size)
return nil, protocol.ErrGeneric
}
if !scanner.Validate(buf, hash, weakHash) {
m.recheckFile(deviceID, folderFs, folder, name, int(offset)/len(buf), hash)
return protocol.ErrNoSuchFile
if !scanner.Validate(res.data, hash, weakHash) {
m.recheckFile(deviceID, folderFs, folder, name, int(offset)/int(size), hash)
l.Debugf("%v REQ(in) failed validating data (%v): %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, size)
return nil, protocol.ErrNoSuchFile
}
return nil
return res, nil
}
func (m *Model) recheckFile(deviceID protocol.DeviceID, folderFs fs.Filesystem, folder, name string, blockIndex int, hash []byte) {
@@ -1600,6 +1661,11 @@ func (m *Model) GetHello(id protocol.DeviceID) protocol.HelloIntf {
// folder changes.
func (m *Model) AddConnection(conn connections.Connection, hello protocol.HelloResult) {
deviceID := conn.ID()
device, ok := m.cfg.Device(deviceID)
if !ok {
l.Infoln("Trying to add connection to unknown device")
return
}
m.pmut.Lock()
if oldConn, ok := m.conn[deviceID]; ok {
@@ -1619,6 +1685,13 @@ func (m *Model) AddConnection(conn connections.Connection, hello protocol.HelloR
m.conn[deviceID] = conn
m.closed[deviceID] = make(chan struct{})
m.deviceDownloads[deviceID] = newDeviceDownloadState()
// 0: default, <0: no limiting
switch {
case device.MaxRequestKiB > 0:
m.connRequestLimiters[deviceID] = newByteSemaphore(1024 * device.MaxRequestKiB)
case device.MaxRequestKiB == 0:
m.connRequestLimiters[deviceID] = newByteSemaphore(1024 * defaultPullerPendingKiB)
}
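Spelled out, the per-device setting maps to a limiter as follows; a hypothetical helper restating the switch above (defaultPullerPendingKiB is the built-in default budget):

// MaxRequestKiB > 0  -> semaphore of MaxRequestKiB KiB
// MaxRequestKiB == 0 -> semaphore of defaultPullerPendingKiB KiB
// MaxRequestKiB < 0  -> no map entry, i.e. requests are not limited
func limiterFor(maxRequestKiB int) *byteSemaphore {
	switch {
	case maxRequestKiB > 0:
		return newByteSemaphore(1024 * maxRequestKiB)
	case maxRequestKiB == 0:
		return newByteSemaphore(1024 * defaultPullerPendingKiB)
	default:
		return nil // a nil limiter makes Request skip take/give entirely
	}
}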
m.helloMessages[deviceID] = hello
@@ -1646,8 +1719,7 @@ func (m *Model) AddConnection(conn connections.Connection, hello protocol.HelloR
cm := m.generateClusterConfig(deviceID)
conn.ClusterConfig(cm)
device, ok := m.cfg.Devices()[deviceID]
if ok && (device.Name == "" || m.cfg.Options().OverwriteRemoteDevNames) && hello.DeviceName != "" {
if (device.Name == "" || m.cfg.Options().OverwriteRemoteDevNames) && hello.DeviceName != "" {
device.Name = hello.DeviceName
m.cfg.SetDevice(device)
m.cfg.Save()
@@ -1981,264 +2053,6 @@ func (m *Model) ScanFolderSubdirs(folder string, subs []string) error {
return runner.Scan(subs)
}
func (m *Model) internalScanFolderSubdirs(ctx context.Context, folder string, subDirs []string, localFlags uint32) error {
m.fmut.RLock()
if err := m.checkFolderRunningLocked(folder); err != nil {
m.fmut.RUnlock()
return err
}
fset := m.folderFiles[folder]
folderCfg := m.folderCfgs[folder]
ignores := m.folderIgnores[folder]
runner := m.folderRunners[folder]
m.fmut.RUnlock()
mtimefs := fset.MtimeFS()
for i := range subDirs {
sub := osutil.NativeFilename(subDirs[i])
if sub == "" {
// A blank subdir means to scan the entire folder. We can trim
// the subDirs list and go on our way.
subDirs = nil
break
}
subDirs[i] = sub
}
// Check if the ignore patterns changed as part of scanning this folder.
// If they did we should schedule a pull of the folder so that we
// request things that might have suddenly become unignored, and so on.
oldHash := ignores.Hash()
defer func() {
if ignores.Hash() != oldHash {
l.Debugln("Folder", folder, "ignore patterns changed; triggering puller")
runner.IgnoresUpdated()
}
}()
if err := runner.CheckHealth(); err != nil {
return err
}
if err := ignores.Load(".stignore"); err != nil && !fs.IsNotExist(err) {
err = fmt.Errorf("loading ignores: %v", err)
runner.setError(err)
return err
}
// Clean the list of subitems to ensure that we start at a known
// directory, and don't scan subdirectories of things we've already
// scanned.
subDirs = unifySubs(subDirs, func(f string) bool {
_, ok := fset.Get(protocol.LocalDeviceID, f)
return ok
})
runner.setState(FolderScanning)
fchan := scanner.Walk(ctx, scanner.Config{
Folder: folderCfg.ID,
Subs: subDirs,
Matcher: ignores,
TempLifetime: time.Duration(m.cfg.Options().KeepTemporariesH) * time.Hour,
CurrentFiler: cFiler{m, folder},
Filesystem: mtimefs,
IgnorePerms: folderCfg.IgnorePerms,
AutoNormalize: folderCfg.AutoNormalize,
Hashers: m.numHashers(folder),
ShortID: m.shortID,
ProgressTickIntervalS: folderCfg.ScanProgressIntervalS,
UseLargeBlocks: folderCfg.UseLargeBlocks,
LocalFlags: localFlags,
})
if err := runner.CheckHealth(); err != nil {
return err
}
batchFn := func(fs []protocol.FileInfo) error {
if err := runner.CheckHealth(); err != nil {
l.Debugf("Stopping scan of folder %s due to: %s", folderCfg.Description(), err)
return err
}
m.updateLocalsFromScanning(folder, fs)
return nil
}
// Resolve items which are identical with the global state.
if localFlags&protocol.FlagLocalReceiveOnly != 0 {
oldBatchFn := batchFn // can't reference batchFn directly (recursion)
batchFn = func(fs []protocol.FileInfo) error {
for i := range fs {
switch gf, ok := fset.GetGlobal(fs[i].Name); {
case !ok:
continue
case gf.IsEquivalentOptional(fs[i], false, false, protocol.FlagLocalReceiveOnly):
// What we have locally is equivalent to the global file.
fs[i].Version = fs[i].Version.Merge(gf.Version)
fallthrough
case fs[i].IsDeleted() && gf.IsReceiveOnlyChanged():
// Our item is deleted and the global item is our own
// receive only file. We can't delete file infos, so
// we just pretend it is a normal deleted file (nobody
// cares about that).
fs[i].LocalFlags &^= protocol.FlagLocalReceiveOnly
}
}
return oldBatchFn(fs)
}
}
batch := newFileInfoBatch(batchFn)
// Schedule a pull after scanning, but only if we actually detected any
// changes.
changes := 0
defer func() {
if changes > 0 {
runner.SchedulePull()
}
}()
for f := range fchan {
if err := batch.flushIfFull(); err != nil {
return err
}
batch.append(f)
changes++
}
if err := batch.flush(); err != nil {
return err
}
if len(subDirs) == 0 {
// If we have no specific subdirectories to traverse, set it to one
// empty prefix so we traverse the entire folder contents once.
subDirs = []string{""}
}
// Do a scan of the database for each prefix, to check for deleted and
// ignored files.
var toIgnore []db.FileInfoTruncated
ignoredParent := ""
pathSep := string(fs.PathSeparator)
for _, sub := range subDirs {
var iterError error
fset.WithPrefixedHaveTruncated(protocol.LocalDeviceID, sub, func(fi db.FileIntf) bool {
f := fi.(db.FileInfoTruncated)
if err := batch.flushIfFull(); err != nil {
iterError = err
return false
}
if ignoredParent != "" && !strings.HasPrefix(f.Name, ignoredParent+pathSep) {
for _, f := range toIgnore {
l.Debugln("marking file as ignored", f)
nf := f.ConvertToIgnoredFileInfo(m.id.Short())
batch.append(nf)
changes++
if err := batch.flushIfFull(); err != nil {
iterError = err
return false
}
}
toIgnore = toIgnore[:0]
ignoredParent = ""
}
switch ignored := ignores.Match(f.Name).IsIgnored(); {
case !f.IsIgnored() && ignored:
// File was not ignored at last pass but has been ignored.
if f.IsDirectory() {
// Delay ignoring as a child might be unignored.
toIgnore = append(toIgnore, f)
if ignoredParent == "" {
// If the parent wasn't ignored already, set
// this path as the "highest" ignored parent
ignoredParent = f.Name
}
return true
}
l.Debugln("marking file as ignored", f)
nf := f.ConvertToIgnoredFileInfo(m.id.Short())
batch.append(nf)
changes++
case f.IsIgnored() && !ignored:
// Successfully scanned items are already un-ignored during
// the scan, so check whether the file is deleted.
fallthrough
case !f.IsIgnored() && !f.IsDeleted() && !f.IsUnsupported():
// The file is not ignored, deleted or unsupported. Let's check
// if it's still here. Simply stat()ing it won't do, as there are
// tons of corner cases (e.g. parent dir -> symlink, missing
// permissions)
if !osutil.IsDeleted(mtimefs, f.Name) {
if ignoredParent != "" {
// Don't ignore parents of this not ignored item
toIgnore = toIgnore[:0]
ignoredParent = ""
}
return true
}
nf := protocol.FileInfo{
Name: f.Name,
Type: f.Type,
Size: 0,
ModifiedS: f.ModifiedS,
ModifiedNs: f.ModifiedNs,
ModifiedBy: m.id.Short(),
Deleted: true,
Version: f.Version.Update(m.shortID),
LocalFlags: localFlags,
}
// We do not want to override the global version
// with the deleted file. Keeping only our local
// counter makes sure we are in conflict with any
// other existing versions, which will be resolved
// by the normal pulling mechanisms.
if f.ShouldConflict() {
nf.Version = nf.Version.DropOthers(m.shortID)
}
batch.append(nf)
changes++
}
return true
})
if iterError == nil && len(toIgnore) > 0 {
for _, f := range toIgnore {
l.Debugln("marking file as ignored", f)
nf := f.ConvertToIgnoredFileInfo(m.id.Short())
batch.append(nf)
changes++
if iterError = batch.flushIfFull(); iterError != nil {
break
}
}
toIgnore = toIgnore[:0]
}
if iterError != nil {
return iterError
}
}
if err := batch.flush(); err != nil {
return err
}
m.folderStatRef(folder).ScanCompleted()
runner.setState(FolderIdle)
return nil
}
func (m *Model) DelayScan(folder string, next time.Duration) {
m.fmut.Lock()
runner, ok := m.folderRunners[folder]
@@ -2351,13 +2165,13 @@ func (m *Model) State(folder string) (string, time.Time, error) {
return state.String(), changed, err
}
func (m *Model) PullErrors(folder string) ([]FileError, error) {
func (m *Model) FolderErrors(folder string) ([]FileError, error) {
m.fmut.RLock()
defer m.fmut.RUnlock()
if err := m.checkFolderRunningLocked(folder); err != nil {
return nil, err
}
return m.folderRunners[folder].PullErrors(), nil
return m.folderRunners[folder].Errors(), nil
}
func (m *Model) WatchError(folder string) error {
@@ -2896,40 +2710,6 @@ func readOffsetIntoBuf(fs fs.Filesystem, file string, offset int64, buf []byte)
return err
}
// The exists function is expected to return true for all known paths
// (excluding "" and ".")
func unifySubs(dirs []string, exists func(dir string) bool) []string {
if len(dirs) == 0 {
return nil
}
sort.Strings(dirs)
if dirs[0] == "" || dirs[0] == "." || dirs[0] == string(fs.PathSeparator) {
return nil
}
prev := "./" // Anything that can't be parent of a clean path
for i := 0; i < len(dirs); {
dir, err := fs.Canonicalize(dirs[i])
if err != nil {
l.Debugf("Skipping %v for scan: %s", dirs[i], err)
dirs = append(dirs[:i], dirs[i+1:]...)
continue
}
if dir == prev || strings.HasPrefix(dir, prev+string(fs.PathSeparator)) {
dirs = append(dirs[:i], dirs[i+1:]...)
continue
}
parent := filepath.Dir(dir)
for parent != "." && parent != string(fs.PathSeparator) && !exists(parent) {
dir = parent
parent = filepath.Dir(dir)
}
dirs[i] = dir
prev = dir
i++
}
return dirs
}
// makeForgetUpdate takes an index update and constructs a download progress update
// causing to forget any progress for files which we've just been sent.
func makeForgetUpdate(files []protocol.FileInfo) []protocol.FileDownloadProgressUpdate {

@@ -24,7 +24,6 @@ import (
"testing"
"time"
"github.com/d4l3k/messagediff"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/fs"
@@ -184,45 +183,42 @@ func TestRequest(t *testing.T) {
defer m.Stop()
m.ScanFolder("default")
bs := make([]byte, protocol.MinBlockSize)
// Existing, shared file
bs = bs[:6]
err := m.Request(device1, "default", "foo", 0, nil, 0, false, bs)
res, err := m.Request(device1, "default", "foo", 6, 0, nil, 0, false)
if err != nil {
t.Error(err)
}
bs := res.Data()
if !bytes.Equal(bs, []byte("foobar")) {
t.Errorf("Incorrect data from request: %q", string(bs))
}
// Existing, nonshared file
err = m.Request(device2, "default", "foo", 0, nil, 0, false, bs)
_, err = m.Request(device2, "default", "foo", 6, 0, nil, 0, false)
if err == nil {
t.Error("Unexpected nil error on insecure file read")
}
// Nonexistent file
err = m.Request(device1, "default", "nonexistent", 0, nil, 0, false, bs)
_, err = m.Request(device1, "default", "nonexistent", 6, 0, nil, 0, false)
if err == nil {
t.Error("Unexpected nil error on insecure file read")
}
// Shared folder, but disallowed file name
err = m.Request(device1, "default", "../walk.go", 0, nil, 0, false, bs)
_, err = m.Request(device1, "default", "../walk.go", 6, 0, nil, 0, false)
if err == nil {
t.Error("Unexpected nil error on insecure file read")
}
// Negative offset
err = m.Request(device1, "default", "foo", -4, nil, 0, false, bs[:0])
_, err = m.Request(device1, "default", "foo", -4, 0, nil, 0, false)
if err == nil {
t.Error("Unexpected nil error on insecure file read")
}
// Larger block than available
bs = bs[:42]
err = m.Request(device1, "default", "foo", 0, nil, 0, false, bs)
_, err = m.Request(device1, "default", "foo", 42, 0, nil, 0, false)
if err == nil {
t.Error("Unexpected nil error on insecure file read")
}
@@ -537,7 +533,7 @@ func BenchmarkRequestInSingleFile(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
if err := m.Request(device1, "default", "request/for/a/file/in/a/couple/of/dirs/128k", 0, nil, 0, false, buf); err != nil {
if _, err := m.Request(device1, "default", "request/for/a/file/in/a/couple/of/dirs/128k", 128<<10, 0, nil, 0, false); err != nil {
b.Error(err)
}
}
@@ -2352,140 +2348,6 @@ func benchmarkTree(b *testing.B, n1, n2 int) {
b.ReportAllocs()
}
type unifySubsCase struct {
in []string // input to unifySubs
exists []string // paths that exist in the database
out []string // expected output
}
func unifySubsCases() []unifySubsCase {
cases := []unifySubsCase{
{
// 0. trailing slashes are cleaned, known paths are just passed on
[]string{"foo/", "bar//"},
[]string{"foo", "bar"},
[]string{"bar", "foo"}, // the output is sorted
},
{
// 1. "foo/bar" gets trimmed as it's covered by foo
[]string{"foo", "bar/", "foo/bar/"},
[]string{"foo", "bar"},
[]string{"bar", "foo"},
},
{
// 2. "" gets simplified to the empty list; ie scan all
[]string{"foo", ""},
[]string{"foo"},
nil,
},
{
// 3. "foo/bar" is unknown, but it's kept
// because its parent is known
[]string{"foo/bar"},
[]string{"foo"},
[]string{"foo/bar"},
},
{
// 4. two independent known paths, both are kept
// "usr/lib" is not a prefix of "usr/libexec"
[]string{"usr/lib", "usr/libexec"},
[]string{"usr", "usr/lib", "usr/libexec"},
[]string{"usr/lib", "usr/libexec"},
},
{
// 5. "usr/lib" is a prefix of "usr/lib/exec"
[]string{"usr/lib", "usr/lib/exec"},
[]string{"usr", "usr/lib", "usr/libexec"},
[]string{"usr/lib"},
},
{
// 6. .stignore and .stfolder are special and are passed on
// verbatim even though they are unknown
[]string{config.DefaultMarkerName, ".stignore"},
[]string{},
[]string{config.DefaultMarkerName, ".stignore"},
},
{
// 7. but the presence of something else unknown forces an actual
// scan
[]string{config.DefaultMarkerName, ".stignore", "foo/bar"},
[]string{},
[]string{config.DefaultMarkerName, ".stignore", "foo"},
},
{
// 8. explicit request to scan all
nil,
[]string{"foo"},
nil,
},
{
// 9. empty list of subs
[]string{},
[]string{"foo"},
nil,
},
{
// 10. absolute path
[]string{"/foo"},
[]string{"foo"},
[]string{"foo"},
},
}
if runtime.GOOS == "windows" {
// Fixup path separators
for i := range cases {
for j, p := range cases[i].in {
cases[i].in[j] = filepath.FromSlash(p)
}
for j, p := range cases[i].exists {
cases[i].exists[j] = filepath.FromSlash(p)
}
for j, p := range cases[i].out {
cases[i].out[j] = filepath.FromSlash(p)
}
}
}
return cases
}
func unifyExists(f string, tc unifySubsCase) bool {
for _, e := range tc.exists {
if f == e {
return true
}
}
return false
}
func TestUnifySubs(t *testing.T) {
cases := unifySubsCases()
for i, tc := range cases {
exists := func(f string) bool {
return unifyExists(f, tc)
}
out := unifySubs(tc.in, exists)
if diff, equal := messagediff.PrettyDiff(tc.out, out); !equal {
t.Errorf("Case %d failed; got %v, expected %v, diff:\n%s", i, out, tc.out, diff)
}
}
}
func BenchmarkUnifySubs(b *testing.B) {
cases := unifySubsCases()
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, tc := range cases {
exists := func(f string) bool {
return unifyExists(f, tc)
}
unifySubs(tc.in, exists)
}
}
}
func TestIssue3028(t *testing.T) {
// Create two files that we'll delete, one with a name that is a prefix of the other.
@@ -3802,6 +3664,7 @@ func TestFolderRestartZombies(t *testing.T) {
// would leave more than one folder runner alive.
wrapper := createTmpWrapper(defaultCfg.Copy())
defer os.Remove(wrapper.ConfigPath())
folderCfg, _ := wrapper.Folder("default")
folderCfg.FilesystemType = fs.FilesystemTypeFake
wrapper.SetFolder(folderCfg)
@@ -3894,3 +3757,45 @@ func (c *alwaysChanged) Seen(fs fs.Filesystem, name string) bool {
func (c *alwaysChanged) Changed() bool {
return true
}
func TestRequestLimit(t *testing.T) {
cfg := defaultCfg.Copy()
cfg.Devices = append(cfg.Devices, config.NewDeviceConfiguration(device2, "device2"))
cfg.Devices[1].MaxRequestKiB = 1
cfg.Folders[0].Devices = []config.FolderDeviceConfiguration{
{DeviceID: device1},
{DeviceID: device2},
}
m, _, wrapper := setupModelWithConnectionManual(cfg)
defer m.Stop()
defer os.Remove(wrapper.ConfigPath())
file := "tmpfile"
befReq := time.Now()
first, err := m.Request(device2, "default", file, 2000, 0, nil, 0, false)
if err != nil {
t.Fatalf("First request failed: %v", err)
}
reqDur := time.Since(befReq)
returned := make(chan struct{})
go func() {
second, err := m.Request(device2, "default", file, 2000, 0, nil, 0, false)
if err != nil {
t.Fatalf("Second request failed: %v", err)
}
close(returned)
second.Close()
}()
time.Sleep(10 * reqDur)
select {
case <-returned:
t.Fatalf("Second request returned before first was done")
default:
}
first.Close()
select {
case <-returned:
case <-time.After(time.Second):
t.Fatalf("Second request did not return after first was done")
}
}

lib/model/progressemitter.go Executable file → Normal file

@@ -98,9 +98,8 @@ func TestSymlinkTraversalRead(t *testing.T) {
<-done
// Request a file by traversing the symlink
buf := make([]byte, 10)
err := m.Request(device1, "default", "symlink/requests_test.go", 0, nil, 0, false, buf)
if err == nil || !bytes.Equal(buf, make([]byte, 10)) {
res, err := m.Request(device1, "default", "symlink/requests_test.go", 10, 0, nil, 0, false)
if err == nil || res != nil {
t.Error("Managed to traverse symlink")
}
}
@@ -225,6 +224,7 @@ func TestRequestVersioningSymlinkAttack(t *testing.T) {
defer os.RemoveAll(tmpDir)
cfg := defaultCfgWrapper.RawCopy()
cfg.Devices = append(cfg.Devices, config.NewDeviceConfiguration(device2, "device2"))
cfg.Folders[0] = config.NewFolderConfiguration(protocol.LocalDeviceID, "default", "default", fs.FilesystemTypeBasic, tmpDir)
cfg.Folders[0].Devices = []config.FolderDeviceConfiguration{
{DeviceID: device1},
@@ -519,12 +519,11 @@ func TestRescanIfHaveInvalidContent(t *testing.T) {
t.Fatalf("unexpected weak hash: %d != 103547413", f.Blocks[0].WeakHash)
}
buf := make([]byte, len(payload))
err := m.Request(device2, "default", "foo", 0, f.Blocks[0].Hash, f.Blocks[0].WeakHash, false, buf)
res, err := m.Request(device2, "default", "foo", int32(len(payload)), 0, f.Blocks[0].Hash, f.Blocks[0].WeakHash, false)
if err != nil {
t.Fatal(err)
}
buf := res.Data()
if !bytes.Equal(buf, payload) {
t.Errorf("%s != %s", buf, payload)
}
@@ -536,7 +535,7 @@ func TestRescanIfHaveInvalidContent(t *testing.T) {
t.Fatal(err)
}
err = m.Request(device2, "default", "foo", 0, f.Blocks[0].Hash, f.Blocks[0].WeakHash, false, buf)
res, err = m.Request(device2, "default", "foo", int32(len(payload)), 0, f.Blocks[0].Hash, f.Blocks[0].WeakHash, false)
if err == nil {
t.Fatalf("expected failure")
}

@@ -171,12 +171,13 @@ func (m *fakeModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
func (m *fakeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
}
func (m *fakeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, weakHAsh uint32, fromTemporary bool, buf []byte) error {
func (m *fakeModel) Request(deviceID DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error) {
// We write the offset to the end of the buffer, so the receiver
// can verify that it did in fact get some data back over the
// connection.
buf := make([]byte, size)
binary.BigEndian.PutUint64(buf[len(buf)-8:], uint64(offset))
return nil
return &fakeRequestResponse{buf}, nil
}
func (m *fakeModel) ClusterConfig(deviceID DeviceID, config ClusterConfig) {

@@ -4,32 +4,59 @@ package protocol
import "sync"
// Global pool to get buffers from. Requires BlockSizes to be
// initialised, therefore it is initialised in the same init() as
// BlockSizes.
var BufferPool bufferPool
type bufferPool struct {
minSize int
pool sync.Pool
pools []sync.Pool
}
// get returns a new buffer of the requested size
func (p *bufferPool) get(size int) []byte {
intf := p.pool.Get()
if intf == nil {
// Pool is empty, must allocate.
return p.new(size)
}
bs := *intf.(*[]byte)
if cap(bs) < size {
// Buffer was too small, leave it for someone else and allocate.
p.pool.Put(intf)
return p.new(size)
}
return bs[:size]
func newBufferPool() bufferPool {
return bufferPool{make([]sync.Pool, len(BlockSizes))}
}
// upgrade grows the buffer to the requested size, while attempting to reuse
func (p *bufferPool) Get(size int) []byte {
// Too big, isn't pooled
if size > MaxBlockSize {
return make([]byte, size)
}
var i int
for i = range BlockSizes {
if size <= BlockSizes[i] {
break
}
}
var bs []byte
// Try the fitting and all bigger pools
for j := i; j < len(BlockSizes); j++ {
if intf := p.pools[j].Get(); intf != nil {
bs = *intf.(*[]byte)
return bs[:size]
}
}
// All pools are empty, must allocate.
return make([]byte, BlockSizes[i])[:size]
}
// Put makes the given byte slice available again in the global pool
func (p *bufferPool) Put(bs []byte) {
c := cap(bs)
// Don't buffer huge byte slices
if c > 2*MaxBlockSize {
return
}
for i := range BlockSizes {
if c >= BlockSizes[i] {
p.pools[i].Put(&bs)
return
}
}
}
// Upgrade grows the buffer to the requested size, while attempting to reuse
// it if possible.
func (p *bufferPool) upgrade(bs []byte, size int) []byte {
func (p *bufferPool) Upgrade(bs []byte, size int) []byte {
if cap(bs) >= size {
// Reslicing is enough, let's go!
return bs[:size]
@@ -37,23 +64,6 @@ func (p *bufferPool) upgrade(bs []byte, size int) []byte {
// It was too small. Put it back into the pool and try to get
// another buffer.
p.put(bs)
return p.get(size)
}
// put returns the buffer to the pool
func (p *bufferPool) put(bs []byte) {
p.pool.Put(&bs)
}
// new creates a new buffer of the requested size, taking the minimum
// allocation count into account. For internal use only.
func (p *bufferPool) new(size int) []byte {
allocSize := size
if allocSize < p.minSize {
// Avoid allocating tiny buffers that we won't be able to reuse for
// anything useful.
allocSize = p.minSize
}
return make([]byte, allocSize)[:size]
p.Put(bs)
return p.Get(size)
}
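A typical round trip through the pool, mirroring the copierRoutine change earlier in this diff (sizes are illustrative):

buf := protocol.BufferPool.Get(protocol.MinBlockSize)
defer func() {
	// Put via a closure so the deferred call sees the latest buf,
	// not the slice header captured when defer was evaluated.
	protocol.BufferPool.Put(buf)
}()

// Upgrade reslices in place when capacity allows; otherwise it puts
// buf back in the pool and fetches a larger buffer.
buf = protocol.BufferPool.Upgrade(buf, 256<<10)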

@@ -9,7 +9,7 @@ type TestModel struct {
folder string
name string
offset int64
size int
size int32
hash []byte
weakHash uint32
fromTemporary bool
@@ -29,16 +29,17 @@ func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
}
func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, hash []byte, weakHash uint32, fromTemporary bool, buf []byte) error {
func (t *TestModel) Request(deviceID DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error) {
t.folder = folder
t.name = name
t.offset = offset
t.size = len(buf)
t.size = size
t.hash = hash
t.weakHash = weakHash
t.fromTemporary = fromTemporary
buf := make([]byte, len(t.data))
copy(buf, t.data)
return nil
return &fakeRequestResponse{buf}, nil
}
func (t *TestModel) Closed(conn Connection, err error) {
@@ -60,3 +61,15 @@ func (t *TestModel) closedError() error {
return nil // Timeout
}
}
type fakeRequestResponse struct {
data []byte
}
func (r *fakeRequestResponse) Data() []byte {
return r.data
}
func (r *fakeRequestResponse) Close() {}
func (r *fakeRequestResponse) Wait() {}

lib/protocol/errors.go Executable file → Normal file

@@ -26,7 +26,7 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.Model.IndexUpdate(deviceID, folder, files)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, weakHash uint32, fromTemporary bool, buf []byte) error {
func (m nativeModel) Request(deviceID DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error) {
name = norm.NFD.String(name)
return m.Model.Request(deviceID, folder, name, offset, hash, weakHash, fromTemporary, buf)
return m.Model.Request(deviceID, folder, name, size, offset, hash, weakHash, fromTemporary)
}

@@ -25,14 +25,14 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
m.Model.IndexUpdate(deviceID, folder, files)
}
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, weakHash uint32, fromTemporary bool, buf []byte) error {
func (m nativeModel) Request(deviceID DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error) {
if strings.Contains(name, `\`) {
l.Warnf("Dropping request for %s, contains invalid path separator", name)
return ErrNoSuchFile
return nil, ErrNoSuchFile
}
name = filepath.FromSlash(name)
return m.Model.Request(deviceID, folder, name, offset, hash, weakHash, fromTemporary, buf)
return m.Model.Request(deviceID, folder, name, size, offset, hash, weakHash, fromTemporary)
}
func fixupFiles(files []FileInfo) []FileInfo {

@@ -2,8 +2,10 @@
package protocol
import "testing"
import "reflect"
import (
"reflect"
"testing"
)
func TestFixupFiles(t *testing.T) {
files := []FileInfo{

@@ -48,6 +48,7 @@ func init() {
BlockSizes = append(BlockSizes, blockSize)
sha256OfEmptyBlock[blockSize] = sha256.Sum256(make([]byte, blockSize))
}
BufferPool = newBufferPool()
}
// BlockSize returns the block size to use for the given file size
@@ -125,7 +126,7 @@ type Model interface {
// An index update was received from the peer device
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo)
// A request was made by the peer device
Request(deviceID DeviceID, folder string, name string, offset int64, hash []byte, weakHash uint32, fromTemporary bool, buf []byte) error
Request(deviceID DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (RequestResponse, error)
// A cluster configuration message was received
ClusterConfig(deviceID DeviceID, config ClusterConfig)
// The peer device closed the connection
@@ -134,6 +135,12 @@ type Model interface {
DownloadProgress(deviceID DeviceID, folder string, updates []FileDownloadProgressUpdate)
}
type RequestResponse interface {
Data() []byte
Close() // Must always be called once the byte slice is no longer in use
Wait() // Blocks until Close is called
}
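From the caller's side the contract reads like this (hypothetical consumer; only the error path returns without a response to release):

res, err := receiver.Request(deviceID, folder, name, size, offset, hash, weakHash, false)
if err != nil {
	return err // no RequestResponse to release on error
}
data := res.Data()
// ... use data ...
res.Close() // mandatory once the data is no longer in use; unblocks Wait()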
type Connection interface {
Start()
ID() DeviceID
@@ -166,7 +173,6 @@ type rawConnection struct {
outbox chan asyncMessage
closed chan struct{}
once sync.Once
pool bufferPool
compression Compression
}
@@ -184,7 +190,7 @@ type message interface {
type asyncMessage struct {
msg message
done chan struct{} // done closes when we're done marshalling the message and its contents can be reused
done chan struct{} // done closes when we're done sending the message
}
const (
@@ -196,12 +202,6 @@ const (
ReceiveTimeout = 300 * time.Second
)
// A buffer pool for global use. We don't allocate smaller buffers than 64k,
// in the hope of being able to reuse them later.
var buffers = bufferPool{
minSize: 64 << 10,
}
func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiver Model, name string, compress Compression) Connection {
cr := &countingReader{Reader: reader}
cw := &countingWriter{Writer: writer}
@@ -215,7 +215,6 @@ func NewConnection(deviceID DeviceID, reader io.Reader, writer io.Writer, receiv
awaiting: make(map[int32]chan asyncResult),
outbox: make(chan asyncMessage),
closed: make(chan struct{}),
pool: bufferPool{minSize: MinBlockSize},
compression: compress,
}
@@ -338,6 +337,7 @@ func (c *rawConnection) readerLoop() (err error) {
c.close(err)
}()
fourByteBuf := make([]byte, 4)
state := stateInitial
for {
select {
@@ -346,7 +346,7 @@ func (c *rawConnection) readerLoop() (err error) {
default:
}
msg, err := c.readMessage()
msg, err := c.readMessage(fourByteBuf)
if err == errUnknownMessage {
// Unknown message types are skipped, for future extensibility.
continue
@@ -394,7 +394,6 @@ func (c *rawConnection) readerLoop() (err error) {
if err := checkFilename(msg.Name); err != nil {
return fmt.Errorf("protocol error: request: %q: %v", msg.Name, err)
}
// Requests are handled asynchronously
go c.handleRequest(*msg)
case *Response:
@@ -429,30 +428,29 @@ func (c *rawConnection) readerLoop() (err error) {
}
}
func (c *rawConnection) readMessage() (message, error) {
hdr, err := c.readHeader()
func (c *rawConnection) readMessage(fourByteBuf []byte) (message, error) {
hdr, err := c.readHeader(fourByteBuf)
if err != nil {
return nil, err
}
return c.readMessageAfterHeader(hdr)
return c.readMessageAfterHeader(hdr, fourByteBuf)
}
func (c *rawConnection) readMessageAfterHeader(hdr Header) (message, error) {
func (c *rawConnection) readMessageAfterHeader(hdr Header, fourByteBuf []byte) (message, error) {
// First comes a 4 byte message length
buf := buffers.get(4)
if _, err := io.ReadFull(c.cr, buf); err != nil {
if _, err := io.ReadFull(c.cr, fourByteBuf[:4]); err != nil {
return nil, fmt.Errorf("reading message length: %v", err)
}
msgLen := int32(binary.BigEndian.Uint32(buf))
msgLen := int32(binary.BigEndian.Uint32(fourByteBuf))
if msgLen < 0 {
return nil, fmt.Errorf("negative message length %d", msgLen)
}
// Then comes the message
buf = buffers.upgrade(buf, int(msgLen))
buf := BufferPool.Get(int(msgLen))
if _, err := io.ReadFull(c.cr, buf); err != nil {
return nil, fmt.Errorf("reading message: %v", err)
}
@@ -465,7 +463,7 @@ func (c *rawConnection) readMessageAfterHeader(hdr Header) (message, error) {
case MessageCompressionLZ4:
decomp, err := c.lz4Decompress(buf)
buffers.put(buf)
BufferPool.Put(buf)
if err != nil {
return nil, fmt.Errorf("decompressing message: %v", err)
}
@@ -484,26 +482,25 @@ func (c *rawConnection) readMessageAfterHeader(hdr Header) (message, error) {
if err := msg.Unmarshal(buf); err != nil {
return nil, fmt.Errorf("unmarshalling message: %v", err)
}
buffers.put(buf)
BufferPool.Put(buf)
return msg, nil
}
func (c *rawConnection) readHeader() (Header, error) {
func (c *rawConnection) readHeader(fourByteBuf []byte) (Header, error) {
// First comes a 2 byte header length
buf := buffers.get(2)
if _, err := io.ReadFull(c.cr, buf); err != nil {
if _, err := io.ReadFull(c.cr, fourByteBuf[:2]); err != nil {
return Header{}, fmt.Errorf("reading length: %v", err)
}
hdrLen := int16(binary.BigEndian.Uint16(buf))
hdrLen := int16(binary.BigEndian.Uint16(fourByteBuf))
if hdrLen < 0 {
return Header{}, fmt.Errorf("negative header length %d", hdrLen)
}
// Then comes the header
buf = buffers.upgrade(buf, int(hdrLen))
buf := BufferPool.Get(int(hdrLen))
if _, err := io.ReadFull(c.cr, buf); err != nil {
return Header{}, fmt.Errorf("reading header: %v", err)
}
@@ -513,7 +510,7 @@ func (c *rawConnection) readHeader() (Header, error) {
return Header{}, fmt.Errorf("unmarshalling header: %v", err)
}
buffers.put(buf)
BufferPool.Put(buf)
return hdr, nil
}
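
The `buffers` helpers above are replaced by a shared `BufferPool` with `Get`, `Upgrade` and `Put`. A rough sketch of what such a pool can look like — the method set mirrors the calls in the diff, but the internals here are assumptions, not the actual lib/protocol implementation:

```go
package protocol

import "sync"

// bufPool is an illustrative stand-in for the BufferPool used above.
type bufPool struct{ pool sync.Pool }

// Get returns a buffer of length size, reusing pooled storage when possible.
func (p *bufPool) Get(size int) []byte {
	if b, ok := p.pool.Get().([]byte); ok && cap(b) >= size {
		return b[:size]
	}
	return make([]byte, size)
}

// Upgrade grows a buffer to length size; contents are not preserved, which
// matches the usage above (the callers read fresh data into it).
func (p *bufPool) Upgrade(b []byte, size int) []byte {
	if cap(b) >= size {
		return b[:size]
	}
	p.Put(b)
	return p.Get(size)
}

// Put hands a buffer back for reuse, restoring its full capacity.
func (p *bufPool) Put(b []byte) {
	p.pool.Put(b[:cap(b)])
}
```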
@@ -590,38 +587,22 @@ func checkFilename(name string) error {
}
func (c *rawConnection) handleRequest(req Request) {
size := int(req.Size)
usePool := size <= MaxBlockSize
var buf []byte
var done chan struct{}
if usePool {
buf = c.pool.get(size)
done = make(chan struct{})
} else {
buf = make([]byte, size)
}
err := c.receiver.Request(c.id, req.Folder, req.Name, req.Offset, req.Hash, req.WeakHash, req.FromTemporary, buf)
res, err := c.receiver.Request(c.id, req.Folder, req.Name, req.Size, req.Offset, req.Hash, req.WeakHash, req.FromTemporary)
if err != nil {
c.send(&Response{
ID: req.ID,
Data: nil,
Code: errorToCode(err),
}, done)
} else {
c.send(&Response{
ID: req.ID,
Data: buf,
Code: errorToCode(err),
}, done)
}
if usePool {
<-done
c.pool.put(buf)
}, nil)
return
}
done := make(chan struct{})
c.send(&Response{
ID: req.ID,
Data: res.Data(),
Code: errorToCode(nil),
}, done)
<-done
res.Close()
}
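
The rewritten handleRequest hands buffer ownership to the model: the response object's Data is written to the wire, and only once the writer signals completion via the done channel is Close called to release the buffer. Inferred from those calls, the returned value plausibly satisfies an interface along these lines (an assumption, not the definitive lib/protocol type):

```go
// RequestResponse is the assumed shape of the value Model.Request now
// returns: Data exposes the filled buffer, and Close releases it back to
// its pool — which is why handleRequest waits on done before closing.
type RequestResponse interface {
	Data() []byte
	Close()
}
```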
func (c *rawConnection) handleResponse(resp Response) {
@@ -639,6 +620,9 @@ func (c *rawConnection) send(msg message, done chan struct{}) bool {
case c.outbox <- asyncMessage{msg, done}:
return true
case <-c.closed:
if done != nil {
close(done)
}
return false
}
}
@@ -647,7 +631,11 @@ func (c *rawConnection) writerLoop() {
for {
select {
case hm := <-c.outbox:
if err := c.writeMessage(hm); err != nil {
err := c.writeMessage(hm)
if hm.done != nil {
close(hm.done)
}
if err != nil {
c.close(err)
return
}
@@ -667,13 +655,10 @@ func (c *rawConnection) writeMessage(hm asyncMessage) error {
func (c *rawConnection) writeCompressedMessage(hm asyncMessage) error {
size := hm.msg.ProtoSize()
buf := buffers.get(size)
buf := BufferPool.Get(size)
if _, err := hm.msg.MarshalTo(buf); err != nil {
return fmt.Errorf("marshalling message: %v", err)
}
if hm.done != nil {
close(hm.done)
}
compressed, err := c.lz4Compress(buf)
if err != nil {
@@ -690,7 +675,7 @@ func (c *rawConnection) writeCompressedMessage(hm asyncMessage) error {
}
totSize := 2 + hdrSize + 4 + len(compressed)
buf = buffers.upgrade(buf, totSize)
buf = BufferPool.Upgrade(buf, totSize)
// Header length
binary.BigEndian.PutUint16(buf, uint16(hdrSize))
@@ -702,10 +687,10 @@ func (c *rawConnection) writeCompressedMessage(hm asyncMessage) error {
binary.BigEndian.PutUint32(buf[2+hdrSize:], uint32(len(compressed)))
// Message
copy(buf[2+hdrSize+4:], compressed)
buffers.put(compressed)
BufferPool.Put(compressed)
n, err := c.cw.Write(buf)
buffers.put(buf)
BufferPool.Put(buf)
l.Debugf("wrote %d bytes on the wire (2 bytes length, %d bytes header, 4 bytes message length, %d bytes message (%d uncompressed)), err=%v", n, hdrSize, len(compressed), size, err)
if err != nil {
@@ -726,7 +711,7 @@ func (c *rawConnection) writeUncompressedMessage(hm asyncMessage) error {
}
totSize := 2 + hdrSize + 4 + size
buf := buffers.get(totSize)
buf := BufferPool.Get(totSize)
// Header length
binary.BigEndian.PutUint16(buf, uint16(hdrSize))
@@ -740,12 +725,9 @@ func (c *rawConnection) writeUncompressedMessage(hm asyncMessage) error {
if _, err := hm.msg.MarshalTo(buf[2+hdrSize+4:]); err != nil {
return fmt.Errorf("marshalling message: %v", err)
}
if hm.done != nil {
close(hm.done)
}
n, err := c.cw.Write(buf[:totSize])
buffers.put(buf)
BufferPool.Put(buf)
l.Debugf("wrote %d bytes on the wire (2 bytes length, %d bytes header, 4 bytes message length, %d bytes message), err=%v", n, hdrSize, size, err)
if err != nil {
@@ -904,7 +886,7 @@ func (c *rawConnection) Statistics() Statistics {
func (c *rawConnection) lz4Compress(src []byte) ([]byte, error) {
var err error
buf := buffers.get(len(src))
buf := BufferPool.Get(len(src))
buf, err = lz4.Encode(buf, src)
if err != nil {
return nil, err
@@ -918,7 +900,7 @@ func (c *rawConnection) lz4Decompress(src []byte) ([]byte, error) {
size := binary.BigEndian.Uint32(src)
binary.LittleEndian.PutUint32(src, size)
var err error
buf := buffers.get(int(size))
buf := BufferPool.Get(int(size))
buf, err = lz4.Decode(buf, src)
if err != nil {
return nil, err

View File

@@ -28,6 +28,7 @@ type staticClient struct {
stop chan struct{}
stopped chan struct{}
stopMut sync.RWMutex
conn *tls.Conn
@@ -44,6 +45,8 @@ func newStaticClient(uri *url.URL, certs []tls.Certificate, invitations chan pro
invitations = make(chan protocol.SessionInvitation)
}
stopped := make(chan struct{})
close(stopped) // not yet started, don't block on Stop()
return &staticClient{
uri: uri,
invitations: invitations,
@@ -56,7 +59,8 @@ func newStaticClient(uri *url.URL, certs []tls.Certificate, invitations chan pro
connectTimeout: timeout,
stop: make(chan struct{}),
stopped: make(chan struct{}),
stopped: stopped,
stopMut: sync.NewRWMutex(),
mut: sync.NewRWMutex(),
}
@@ -64,8 +68,10 @@ func newStaticClient(uri *url.URL, certs []tls.Certificate, invitations chan pro
func (c *staticClient) Serve() {
defer c.cleanup()
c.stopMut.Lock()
c.stop = make(chan struct{})
c.stopped = make(chan struct{})
c.stopMut.Unlock()
defer close(c.stopped)
if err := c.connect(); err != nil {
@@ -104,6 +110,8 @@ func (c *staticClient) Serve() {
timeout := time.NewTimer(c.messageTimeout)
c.stopMut.RLock()
defer c.stopMut.RUnlock()
for {
select {
case message := <-messages:
@@ -169,12 +177,10 @@ func (c *staticClient) Serve() {
}
func (c *staticClient) Stop() {
if c.stop == nil {
return
}
c.stopMut.RLock()
close(c.stop)
<-c.stopped
c.stopMut.RUnlock()
}
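
The pre-closed stopped channel in newStaticClient is what makes Stop safe to call before Serve has ever run. A minimal, self-contained sketch of the pattern (illustrative only, not the relay client itself):

```go
package main

import (
	"log"
	"sync"
	"time"
)

type service struct {
	mut     sync.RWMutex
	stop    chan struct{}
	stopped chan struct{}
}

func newService() *service {
	stopped := make(chan struct{})
	close(stopped) // not yet started; Stop() must not block waiting for Serve
	return &service{stop: make(chan struct{}), stopped: stopped}
}

func (s *service) Serve() {
	s.mut.Lock()
	s.stop = make(chan struct{})
	s.stopped = make(chan struct{})
	s.mut.Unlock()
	defer close(s.stopped)
	<-s.stop // stand-in for the actual connect/serve loop
}

func (s *service) Stop() {
	s.mut.RLock()
	close(s.stop)
	<-s.stopped // returns immediately if Serve never ran
	s.mut.RUnlock()
}

func main() {
	s := newService()
	s.Stop() // safe even though Serve never started

	s = newService()
	go s.Serve()
	time.Sleep(10 * time.Millisecond) // crude: let Serve install fresh channels
	s.Stop()
	log.Println("stopped cleanly")
}
```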
func (c *staticClient) StatusOK() bool {

View File

@@ -64,14 +64,14 @@ func HashFile(ctx context.Context, fs fs.Filesystem, path string, blockSize int,
type parallelHasher struct {
fs fs.Filesystem
workers int
outbox chan<- protocol.FileInfo
outbox chan<- ScanResult
inbox <-chan protocol.FileInfo
counter Counter
done chan<- struct{}
wg sync.WaitGroup
}
func newParallelHasher(ctx context.Context, fs fs.Filesystem, workers int, outbox chan<- protocol.FileInfo, inbox <-chan protocol.FileInfo, counter Counter, done chan<- struct{}) {
func newParallelHasher(ctx context.Context, fs fs.Filesystem, workers int, outbox chan<- ScanResult, inbox <-chan protocol.FileInfo, counter Counter, done chan<- struct{}) {
ph := &parallelHasher{
fs: fs,
workers: workers,
@@ -122,7 +122,7 @@ func (ph *parallelHasher) hashFiles(ctx context.Context) {
}
select {
case ph.outbox <- f:
case ph.outbox <- ScanResult{File: f}:
case <-ctx.Done():
return
}

View File

@@ -8,6 +8,8 @@ package scanner
import (
"context"
"errors"
"fmt"
"path/filepath"
"runtime"
"strings"
@@ -61,7 +63,13 @@ type CurrentFiler interface {
CurrentFile(name string) (protocol.FileInfo, bool)
}
func Walk(ctx context.Context, cfg Config) chan protocol.FileInfo {
type ScanResult struct {
File protocol.FileInfo
Err error
Path string // to be set in case Err != nil and File == nil
}
func Walk(ctx context.Context, cfg Config) chan ScanResult {
w := walker{cfg}
if w.CurrentFiler == nil {
@@ -77,17 +85,23 @@ func Walk(ctx context.Context, cfg Config) chan protocol.FileInfo {
return w.walk(ctx)
}
var (
errUTF8Invalid = errors.New("item is not in UTF8 encoding")
errUTF8Normalization = errors.New("item is not in the correct UTF8 normalization form")
errUTF8Conflict = errors.New("item has UTF8 encoding conflict with another item")
)
type walker struct {
Config
}
// Walk returns the list of files found in the local folder by scanning the
// file system. Files are blockwise hashed.
func (w *walker) walk(ctx context.Context) chan protocol.FileInfo {
func (w *walker) walk(ctx context.Context) chan ScanResult {
l.Debugln("Walk", w.Subs, w.Matcher)
toHashChan := make(chan protocol.FileInfo)
finishedChan := make(chan protocol.FileInfo)
finishedChan := make(chan ScanResult)
// A routine which walks the filesystem tree, and sends files which have
// been modified to the counter routine.
@@ -182,7 +196,7 @@ func (w *walker) walk(ctx context.Context) chan protocol.FileInfo {
return finishedChan
}
func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protocol.FileInfo) fs.WalkFunc {
func (w *walker) walkAndHashFiles(ctx context.Context, toHashChan chan<- protocol.FileInfo, finishedChan chan<- ScanResult) fs.WalkFunc {
now := time.Now()
ignoredParent := ""
@@ -202,7 +216,7 @@ func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protoco
}
if err != nil {
l.Debugln("error:", path, info, err)
w.handleError(ctx, "scan", path, err, finishedChan)
return skip
}
@@ -225,7 +239,7 @@ func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protoco
}
if !utf8.ValidString(path) {
l.Warnf("File name %q is not in UTF8 encoding; skipping.", path)
w.handleError(ctx, "scan", path, errUTF8Invalid, finishedChan)
return skip
}
@@ -244,7 +258,7 @@ func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protoco
if ignoredParent == "" {
// parent isn't ignored, nothing special
return w.handleItem(ctx, path, fchan, dchan, skip)
return w.handleItem(ctx, path, toHashChan, finishedChan, skip)
}
// Part of current path below the ignored (potential) parent
@@ -253,17 +267,17 @@ func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protoco
// ignored path isn't actually a parent of the current path
if rel == path {
ignoredParent = ""
return w.handleItem(ctx, path, fchan, dchan, skip)
return w.handleItem(ctx, path, toHashChan, finishedChan, skip)
}
// The previously ignored parent directories of the current, not
// ignored path need to be handled as well.
if err = w.handleItem(ctx, ignoredParent, fchan, dchan, skip); err != nil {
if err = w.handleItem(ctx, ignoredParent, toHashChan, finishedChan, skip); err != nil {
return err
}
for _, name := range strings.Split(rel, string(fs.PathSeparator)) {
ignoredParent = filepath.Join(ignoredParent, name)
if err = w.handleItem(ctx, ignoredParent, fchan, dchan, skip); err != nil {
if err = w.handleItem(ctx, ignoredParent, toHashChan, finishedChan, skip); err != nil {
return err
}
}
@@ -273,21 +287,23 @@ func (w *walker) walkAndHashFiles(ctx context.Context, fchan, dchan chan protoco
}
}
func (w *walker) handleItem(ctx context.Context, path string, fchan, dchan chan protocol.FileInfo, skip error) error {
func (w *walker) handleItem(ctx context.Context, path string, toHashChan chan<- protocol.FileInfo, finishedChan chan<- ScanResult, skip error) error {
info, err := w.Filesystem.Lstat(path)
// An error here would be weird as we've already gotten to this point, but act on it nonetheless
if err != nil {
w.handleError(ctx, "scan", path, err, finishedChan)
return skip
}
path, shouldSkip := w.normalizePath(path, info)
if shouldSkip {
path, err = w.normalizePath(path, info)
if err != nil {
w.handleError(ctx, "normalizing path", path, err, finishedChan)
return skip
}
switch {
case info.IsSymlink():
if err := w.walkSymlink(ctx, path, dchan); err != nil {
if err := w.walkSymlink(ctx, path, finishedChan); err != nil {
return err
}
if info.IsDir() {
@@ -297,16 +313,16 @@ func (w *walker) handleItem(ctx context.Context, path string, fchan, dchan chan
return nil
case info.IsDir():
err = w.walkDir(ctx, path, info, dchan)
err = w.walkDir(ctx, path, info, finishedChan)
case info.IsRegular():
err = w.walkRegular(ctx, path, info, fchan)
err = w.walkRegular(ctx, path, info, toHashChan)
}
return err
}
func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileInfo, fchan chan protocol.FileInfo) error {
func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileInfo, toHashChan chan<- protocol.FileInfo) error {
curFile, hasCurFile := w.CurrentFiler.CurrentFile(relPath)
blockSize := protocol.MinBlockSize
@@ -352,7 +368,7 @@ func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileIn
l.Debugln("to hash:", relPath, f)
select {
case fchan <- f:
case toHashChan <- f:
case <-ctx.Done():
return ctx.Err()
}
@@ -360,7 +376,7 @@ func (w *walker) walkRegular(ctx context.Context, relPath string, info fs.FileIn
return nil
}
func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo, dchan chan protocol.FileInfo) error {
func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo, finishedChan chan<- ScanResult) error {
curFile, hasCurFile := w.CurrentFiler.CurrentFile(relPath)
f, _ := CreateFileInfo(info, relPath, nil)
@@ -384,7 +400,7 @@ func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo,
l.Debugln("dir:", relPath, f)
select {
case dchan <- f:
case finishedChan <- ScanResult{File: f}:
case <-ctx.Done():
return ctx.Err()
}
@@ -394,7 +410,7 @@ func (w *walker) walkDir(ctx context.Context, relPath string, info fs.FileInfo,
// walkSymlink returns nil or an error, if the error is of the nature that
// it should stop the entire walk.
func (w *walker) walkSymlink(ctx context.Context, relPath string, dchan chan protocol.FileInfo) error {
func (w *walker) walkSymlink(ctx context.Context, relPath string, finishedChan chan<- ScanResult) error {
// Symlinks are not supported on Windows. We ignore instead of returning
// an error.
if runtime.GOOS == "windows" {
@@ -408,7 +424,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, dchan chan pro
target, err := w.Filesystem.ReadSymlink(relPath)
if err != nil {
l.Debugln("readlink error:", relPath, err)
w.handleError(ctx, "reading link:", relPath, err, finishedChan)
return nil
}
@@ -439,7 +455,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, dchan chan pro
l.Debugln("symlink changedb:", relPath, f)
select {
case dchan <- f:
case finishedChan <- ScanResult{File: f}:
case <-ctx.Done():
return ctx.Err()
}
@@ -449,7 +465,7 @@ func (w *walker) walkSymlink(ctx context.Context, relPath string, dchan chan pro
// normalizePath returns the normalized relative path (possibly after fixing
// it on disk), or skip is true.
func (w *walker) normalizePath(path string, info fs.FileInfo) (normPath string, skip bool) {
func (w *walker) normalizePath(path string, info fs.FileInfo) (normPath string, err error) {
if runtime.GOOS == "darwin" {
// Mac OS X file names should always be NFD normalized.
normPath = norm.NFD.String(path)
@@ -462,14 +478,13 @@ func (w *walker) normalizePath(path string, info fs.FileInfo) (normPath string,
if path == normPath {
// The file name is already normalized: nothing to do
return path, false
return path, nil
}
if !w.AutoNormalize {
// We're not authorized to do anything about it, so complain and skip.
l.Warnf("File name %q is not in the correct UTF8 normalization form; skipping.", path)
return "", true
return "", errUTF8Normalization
}
// We will attempt to normalize it.
@@ -477,11 +492,12 @@ func (w *walker) normalizePath(path string, info fs.FileInfo) (normPath string,
if fs.IsNotExist(err) {
// Nothing exists with the normalized filename. Good.
if err = w.Filesystem.Rename(path, normPath); err != nil {
l.Infof(`Error normalizing UTF8 encoding of file "%s": %v`, path, err)
return "", true
return "", err
}
l.Infof(`Normalized UTF8 encoding of file name "%s".`, path)
} else if w.Filesystem.SameFile(info, normInfo) {
return normPath, nil
}
if w.Filesystem.SameFile(info, normInfo) {
// With some filesystems (ZFS), if there is an un-normalized path and you ask whether the normalized
// version exists, it responds with true. Therefore we need to check fs.SameFile as well.
// In this case, a call to Rename won't do anything, so we have to rename via a temp file.
@@ -491,23 +507,19 @@ func (w *walker) normalizePath(path string, info fs.FileInfo) (normPath string,
tempPath := fs.TempNameWithPrefix(normPath, "")
if err = w.Filesystem.Rename(path, tempPath); err != nil {
l.Infof(`Error during normalizing UTF8 encoding of file "%s" (renamed to "%s"): %v`, path, tempPath, err)
return "", true
return "", err
}
if err = w.Filesystem.Rename(tempPath, normPath); err != nil {
// I don't ever expect this to happen, but if it does, we should probably tell our caller that the normalized
// path is the temp path: that way at least the user's data still gets synced.
l.Warnf(`Error renaming "%s" to "%s" while normalizing UTF8 encoding: %v. You will want to rename this file back manually`, tempPath, normPath, err)
return tempPath, false
return tempPath, nil
}
} else {
// There is something already in the way at the normalized
// file name.
l.Infof(`File "%s" path has UTF8 encoding conflict with another file; ignoring.`, path)
return "", true
return normPath, nil
}
return normPath, false
// There is something already in the way at the normalized
// file name.
return "", errUTF8Conflict
}
// updateFileInfo updates walker specific members of protocol.FileInfo that do not depend on type
@@ -522,6 +534,16 @@ func (w *walker) updateFileInfo(file, curFile protocol.FileInfo) protocol.FileIn
file.LocalFlags = w.LocalFlags
return file
}
func (w *walker) handleError(ctx context.Context, context, path string, err error, finishedChan chan<- ScanResult) {
l.Infof("Scanner (folder %s, file %q): %s: %v", w.Folder, path, context, err)
select {
case finishedChan <- ScanResult{
Err: fmt.Errorf("%s: %s", context, err.Error()),
Path: path,
}:
case <-ctx.Done():
}
}
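
Since Walk now emits ScanResult values, callers separate hashed files from scan errors on the same channel, as the updated tests below also do. A short consumer sketch, assuming the repository's lib/scanner and lib/protocol import paths:

```go
package scandemo

import (
	"context"
	"fmt"

	"github.com/syncthing/syncthing/lib/protocol"
	"github.com/syncthing/syncthing/lib/scanner"
)

// collect drains the Walk channel, separating hashed files from scan errors.
func collect(ctx context.Context, cfg scanner.Config) ([]protocol.FileInfo, []error) {
	var files []protocol.FileInfo
	var errs []error
	for res := range scanner.Walk(ctx, cfg) {
		if res.Err != nil {
			errs = append(errs, fmt.Errorf("%s: %v", res.Path, res.Err))
			continue
		}
		files = append(files, res.File)
	}
	return files, errs
}
```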
// A byteCounter gets bytes added to it via Update() and then provides the
// Total() and one minute moving average Rate() in bytes per second.

View File

@@ -74,7 +74,10 @@ func TestWalkSub(t *testing.T) {
})
var files []protocol.FileInfo
for f := range fchan {
files = append(files, f)
if f.Err != nil {
t.Errorf("Error while scanning %v: %v", f.Err, f.Path)
}
files = append(files, f.File)
}
// The directory contains two files, where one is ignored from a higher
@@ -107,7 +110,10 @@ func TestWalk(t *testing.T) {
var tmp []protocol.FileInfo
for f := range fchan {
tmp = append(tmp, f)
if f.Err != nil {
t.Errorf("Error while scanning %v: %v", f.Err, f.Path)
}
tmp = append(tmp, f.File)
}
sort.Sort(fileList(tmp))
files := fileList(tmp).testfiles()
@@ -246,8 +252,9 @@ func TestNormalization(t *testing.T) {
func TestIssue1507(t *testing.T) {
w := &walker{}
c := make(chan protocol.FileInfo, 100)
fn := w.walkAndHashFiles(context.TODO(), c, c)
h := make(chan protocol.FileInfo, 100)
f := make(chan ScanResult, 100)
fn := w.walkAndHashFiles(context.TODO(), h, f)
fn("", nil, protocol.ErrClosed)
}
@@ -471,7 +478,9 @@ func walkDir(fs fs.Filesystem, dir string, cfiler CurrentFiler, matcher *ignore.
var tmp []protocol.FileInfo
for f := range fchan {
tmp = append(tmp, f)
if f.Err == nil {
tmp = append(tmp, f.File)
}
}
sort.Sort(fileList(tmp))
@@ -580,7 +589,11 @@ func TestStopWalk(t *testing.T) {
dirs := 0
files := 0
for {
f := <-fchan
res := <-fchan
if res.Err != nil {
t.Errorf("Error while scanning %v: %v", res.Err, res.Path)
}
f := res.File
t.Log("Scanned", f)
if f.IsDirectory() {
if len(f.Name) == 0 || f.Permissions == 0 {
@@ -710,7 +723,10 @@ func TestIssue4841(t *testing.T) {
var files []protocol.FileInfo
for f := range fchan {
files = append(files, f)
if f.Err != nil {
t.Errorf("Error while scanning %v: %v", f.Err, f.Path)
}
files = append(files, f.File)
}
sort.Sort(fileList(files))

View File

@@ -27,17 +27,74 @@ var (
ErrIdentificationFailed = fmt.Errorf("failed to identify socket type")
)
// NewCertificate generates and returns a new TLS certificate. If tlsRSABits
// is greater than zero we generate an RSA certificate with the specified
// number of bits. Otherwise we create a 384 bit ECDSA certificate.
func NewCertificate(certFile, keyFile, tlsDefaultCommonName string, tlsRSABits int) (tls.Certificate, error) {
var priv interface{}
var err error
if tlsRSABits > 0 {
priv, err = rsa.GenerateKey(rand.Reader, tlsRSABits)
} else {
priv, err = ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
var (
// The list of cipher suites we will use / suggest for TLS connections.
// This is built based on the component slices below, depending on what
// the hardware prefers.
cipherSuites []uint16
// Suites that are good and fast on hardware with AES-NI. These are
// reordered from the Go default to put the 256 bit ciphers above the
// 128 bit ones - because that looks cooler, even though there is
// probably no relevant difference in strength yet.
gcmSuites = []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
}
// Suites that are good and fast on hardware *without* AES-NI.
chaChaSuites = []uint16{
tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
}
// The rest of the suites, minus DES stuff.
otherSuites = []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
}
)
func init() {
// Creates the list of cipher suites that SecureDefault uses.
cipherSuites = buildCipherSuites()
}
// SecureDefault returns a tls.Config with reasonable, secure defaults set.
func SecureDefault() *tls.Config {
// paranoia
cs := make([]uint16, len(cipherSuites))
copy(cs, cipherSuites)
return &tls.Config{
// TLS 1.2 is the minimum we accept
MinVersion: tls.VersionTLS12,
// We want the longer curves at the front, because that's more
// secure (so the web tells me, don't ask me to explain the
// details).
CurvePreferences: []tls.CurveID{tls.CurveP521, tls.CurveP384, tls.CurveP256},
// The cipher suite lists built above.
CipherSuites: cs,
// We've put some thought into this choice and would like it to
// matter.
PreferServerCipherSuites: true,
}
}
// NewCertificate generates and returns a new TLS certificate.
func NewCertificate(certFile, keyFile, commonName string) (tls.Certificate, error) {
priv, err := ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
if err != nil {
return tls.Certificate{}, fmt.Errorf("generate key: %s", err)
}
@@ -48,11 +105,11 @@ func NewCertificate(certFile, keyFile, tlsDefaultCommonName string, tlsRSABits i
template := x509.Certificate{
SerialNumber: new(big.Int).SetInt64(rand.Int63()),
Subject: pkix.Name{
CommonName: tlsDefaultCommonName,
CommonName: commonName,
},
NotBefore: notBefore,
NotAfter: notAfter,
NotBefore: notBefore,
NotAfter: notAfter,
SignatureAlgorithm: x509.ECDSAWithSHA256,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
BasicConstraintsValid: true,
@@ -185,3 +242,79 @@ func pemBlockForKey(priv interface{}) (*pem.Block, error) {
return nil, fmt.Errorf("unknown key type")
}
}
// buildCipherSuites returns a list of cipher suites with either AES-GCM or
// ChaCha20 at the top. This takes advantage of the CPU detection that the
// TLS package does to create an optimal cipher suite list for the current
// hardware.
func buildCipherSuites() []uint16 {
pref := preferredCipherSuite()
for _, suite := range gcmSuites {
if suite == pref {
// Go preferred an AES-GCM suite. Use those first.
return append(gcmSuites, append(chaChaSuites, otherSuites...)...)
}
}
// Use ChaCha20 at the top, then AES-GCM etc.
return append(chaChaSuites, append(gcmSuites, otherSuites...)...)
}
// preferredCipherSuite returns the cipher suite that is selected for a TLS
// connection made with the Go defaults to ourselves. This is (currently,
// probably) either a ChaCha20 suite or an AES-GCM suite, depending on what
// the CPU detection has decided is fastest on this hardware.
//
// The function will return zero if something odd happens, and there's no
// guarantee what cipher suite would be chosen anyway, so the return value
// should be taken with a grain of salt.
func preferredCipherSuite() uint16 {
// This is one of our certs from NewCertificate above, to avoid having
// to generate one at init time just for this function.
crtBs := []byte(`-----BEGIN CERTIFICATE-----
MIIBXDCCAQOgAwIBAgIIQUODl2/bE4owCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ
c3luY3RoaW5nMB4XDTE4MTAxNDA2MjU0M1oXDTQ5MTIzMTIzNTk1OVowFDESMBAG
A1UEAxMJc3luY3RoaW5nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEMqP+1lL4
0s/xtI3ygExzYc/GvLHr0qetpBrUVHaDwS/cR1yXDsYaJpJcUNtrf1XK49IlpWW1
Ds8seQsSg7/9BaM/MD0wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUF
BwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMAoGCCqGSM49BAMCA0cAMEQCIFxY
MDBA92FKqZYSZjmfdIbT1OI6S9CnAFvL/pJZJwNuAiAV7osre2NiCHtXABOvsGrH
vKWqDvXcHr6Tlo+LmTAdyg==
-----END CERTIFICATE-----
`)
keyBs := []byte(`-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIHtPxVHlj6Bhi9RgSR2/lAtIQ7APM9wmpaJAcds6TD2CoAoGCCqGSM49
AwEHoUQDQgAEMqP+1lL40s/xtI3ygExzYc/GvLHr0qetpBrUVHaDwS/cR1yXDsYa
JpJcUNtrf1XK49IlpWW1Ds8seQsSg7/9BQ==
-----END EC PRIVATE KEY-----
`)
cert, err := tls.X509KeyPair(crtBs, keyBs)
if err != nil {
return 0
}
serverCfg := &tls.Config{
MinVersion: tls.VersionTLS12,
PreferServerCipherSuites: true,
Certificates: []tls.Certificate{cert},
}
clientCfg := &tls.Config{
MinVersion: tls.VersionTLS12,
InsecureSkipVerify: true,
}
c0, c1 := net.Pipe()
c := tls.Client(c0, clientCfg)
go func() {
c.Handshake()
}()
s := tls.Server(c1, serverCfg)
if err := s.Handshake(); err != nil {
return 0
}
return c.ConnectionState().CipherSuite
}
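
Putting the pieces together, a caller would generate an ECDSA certificate and listen with the hardened defaults roughly like this (a sketch assuming the lib/tlsutil import path; error handling kept minimal):

```go
package main

import (
	"crypto/tls"
	"log"

	"github.com/syncthing/syncthing/lib/tlsutil"
)

func main() {
	// Generate (and persist) a fresh ECDSA certificate, then listen with
	// the SecureDefault TLS 1.2 configuration from the diff above.
	cert, err := tlsutil.NewCertificate("cert.pem", "key.pem", "syncthing")
	if err != nil {
		log.Fatal(err)
	}
	cfg := tlsutil.SecureDefault()
	cfg.Certificates = []tls.Certificate{cert}
	l, err := tls.Listen("tcp", "127.0.0.1:0", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close()
	log.Println("listening on", l.Addr())
}
```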

View File

@@ -11,6 +11,7 @@ package tlsutil
import (
"bytes"
"crypto/tls"
"io"
"net"
"testing"
@@ -74,6 +75,53 @@ func TestUnionedConnection(t *testing.T) {
}
}
func TestCheckCipherSuites(t *testing.T) {
// This is the set of cipher suites we expect - only the order should
// differ.
allSuites := []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
}
suites := buildCipherSuites()
if len(suites) != len(allSuites) {
t.Fatal("should get a list representing all suites")
}
// Check that the returned list of suites doesn't contain anything
// unexpected and is free from duplicates.
seen := make(map[uint16]struct{})
nextSuite:
for _, s0 := range suites {
if _, ok := seen[s0]; ok {
t.Fatal("duplicate suite", s0)
}
for _, s1 := range allSuites {
if s0 == s1 {
seen[s0] = struct{}{}
continue nextSuite
}
}
t.Fatal("got unknown suite", s0)
}
}
type fakeAccepter struct {
data []byte
}

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "STDISCOSRV" "1" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "STDISCOSRV" "1" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "STRELAYSRV" "1" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "STRELAYSRV" "1" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-BEP" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-BEP" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-CONFIG" "5" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.
@@ -592,6 +592,10 @@ square brackets.
.B Wildcard and port (\fB0.0.0.0:12345\fP, \fB[::]:12345\fP, \fB:12345\fP)
These are equivalent and will result in Syncthing listening on all
interfaces via both IPv4 and IPv6.
.TP
.B UNIX socket location (\fB/var/run/st.sock\fP)
If the address is an absolute path it is interpreted as the path to a UNIX socket.
(Added in v0.14.52.)
.UNINDENT
.TP
.B user

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-DEVICE-IDS" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-EVENT-API" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-FAQ" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-GLOBALDISCO" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.
@@ -81,7 +81,7 @@ Many Requests).
.sp
Queries are performed as HTTPS GET requests to the announce server URL. The
requested device ID is passed as the query parameter “device”, in canonical
string form, i.e. \fBhttps://announce.syncthing.net/v2/?device=ABC12345\-....\fP
string form, i.e. \fBhttps://discovery.syncthing.net/?device=ABC12345\-....\fP
.sp
Successful responses will have status code \fB200\fP (OK) and carry a JSON payload
of the same format as the announcement above. The response will not contain
@@ -95,6 +95,29 @@ Found) is returned.
.sp
If the client has exceeded a rate limit, the server may respond with 429 (Too
Many Requests).
.SH AUTHENTICATION
.sp
Global discovery is spoken over HTTPS and is protected against attackers in
the same manner as other HTTPS traffic. However, there are a few Syncthing
specific considerations on top of this. As mentioned above, for
announcements the client must provide a certificate to prove ownership of
the announced device ID.
.sp
In addition, Syncthing has a mechanism to verify the identity of the
discovery server. While this would normally be accomplished by using a CA
signed certificate, Syncthing often runs in environments with outdated or
simply nonexistent root CA bundles. Instead, Syncthing can verify the
discovery server certificate fingerprint using the device ID mechanism. This
is certificate pinning, and it is conveyed in the Syncthing configuration as a
synthetic “id” parameter on the discovery server URL:
\fBhttps://discovery.syncthing.net/?id=...\fP\&. The “id” parameter is not, in
fact, sent to the discovery server \- it's used by Syncthing itself to know
which certificate to expect on the server side.
.sp
The public discovery network uses this authentication mechanism instead of
CA signed certificates.
.sp
The discovery server prints its certificate ID in this manner on startup.
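
A hedged sketch of the pinning check described above: instead of chain validation, the client compares a fingerprint of the presented leaf certificate against the pinned value from the ?id=... parameter. Real Syncthing derives a device ID (base32 plus check digits) from the certificate; a raw SHA-256 fingerprint stands in for that here, and all names are illustrative:

```go
package discoclient

import (
	"crypto/sha256"
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// verifyPinned is an illustrative VerifyPeerCertificate callback: instead of
// chasing a CA chain, it compares a fingerprint of the leaf certificate
// against the pinned value carried in the discovery URL.
func verifyPinned(pinned [sha256.Size]byte) func([][]byte, [][]*x509.Certificate) error {
	return func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
		if len(rawCerts) == 0 {
			return fmt.Errorf("no certificate presented")
		}
		if sha256.Sum256(rawCerts[0]) != pinned {
			return fmt.Errorf("certificate does not match pinned id")
		}
		return nil
	}
}

func clientConfig(pinned [sha256.Size]byte) *tls.Config {
	return &tls.Config{
		// Skip CA verification; trust is established by the pin instead.
		InsecureSkipVerify:    true,
		VerifyPeerCertificate: verifyPinned(pinned),
	}
}
```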
.SH AUTHOR
The Syncthing Authors
.SH COPYRIGHT

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-LOCALDISCO" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-NETWORKING" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-RELAY" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-REST-API" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-SECURITY" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-STIGNORE" "5" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-VERSIONING" "7" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING" "1" "Sep 17, 2018" "v0.14" "Syncthing"
.TH "SYNCTHING" "1" "Nov 05, 2018" "v0.14" "Syncthing"
.SH NAME
syncthing \- Syncthing
.
@@ -83,7 +83,8 @@ Generate key and config in specified dir, then exit.
.INDENT 0.0
.TP
.B \-gui\-address=<address>
Override GUI listen address.
Override GUI listen address. Set this to an address (\fB0.0.0.0:8384\fP)
or file path (\fB/var/run/st.sock\fP, for UNIX sockets).
.UNINDENT
.INDENT 0.0
.TP

View File

@@ -29,27 +29,72 @@ type Collector interface {
// collected by this Collector to the provided channel and returns once
// the last descriptor has been sent. The sent descriptors fulfill the
// consistency and uniqueness requirements described in the Desc
// documentation. (It is valid if one and the same Collector sends
// duplicate descriptors. Those duplicates are simply ignored. However,
// two different Collectors must not send duplicate descriptors.) This
// method idempotently sends the same descriptors throughout the
// lifetime of the Collector. If a Collector encounters an error while
// executing this method, it must send an invalid descriptor (created
// with NewInvalidDesc) to signal the error to the registry.
// documentation.
//
// It is valid if one and the same Collector sends duplicate
// descriptors. Those duplicates are simply ignored. However, two
// different Collectors must not send duplicate descriptors.
//
// Sending no descriptor at all marks the Collector as “unchecked”,
// i.e. no checks will be performed at registration time, and the
// Collector may yield any Metric it sees fit in its Collect method.
//
// This method idempotently sends the same descriptors throughout the
// lifetime of the Collector. It may be called concurrently and
// therefore must be implemented in a concurrency safe way.
//
// If a Collector encounters an error while executing this method, it
// must send an invalid descriptor (created with NewInvalidDesc) to
// signal the error to the registry.
Describe(chan<- *Desc)
// Collect is called by the Prometheus registry when collecting
// metrics. The implementation sends each collected metric via the
// provided channel and returns once the last metric has been sent. The
// descriptor of each sent metric is one of those returned by
// Describe. Returned metrics that share the same descriptor must differ
// in their variable label values. This method may be called
// concurrently and must therefore be implemented in a concurrency safe
// way. Blocking occurs at the expense of total performance of rendering
// all registered metrics. Ideally, Collector implementations support
// concurrent readers.
// descriptor of each sent metric is one of those returned by Describe
// (unless the Collector is unchecked, see above). Returned metrics that
// share the same descriptor must differ in their variable label
// values.
//
// This method may be called concurrently and must therefore be
// implemented in a concurrency safe way. Blocking occurs at the expense
// of total performance of rendering all registered metrics. Ideally,
// Collector implementations support concurrent readers.
Collect(chan<- Metric)
}
// DescribeByCollect is a helper to implement the Describe method of a custom
// Collector. It collects the metrics from the provided Collector and sends
// their descriptors to the provided channel.
//
// If a Collector collects the same metrics throughout its lifetime, its
// Describe method can simply be implemented as:
//
// func (c customCollector) Describe(ch chan<- *Desc) {
// DescribeByCollect(c, ch)
// }
//
// However, this will not work if the metrics collected change dynamically over
// the lifetime of the Collector in a way that their combined set of descriptors
// changes as well. The shortcut implementation will then violate the contract
// of the Describe method. If a Collector sometimes collects no metrics at all
// (for example vectors like CounterVec, GaugeVec, etc., which only collect
// metrics after a metric with a fully specified label set has been accessed),
// it might even get registered as an unchecked Collector (cf. the Register
// method of the Registerer interface). Hence, only use this shortcut
// implementation of Describe if you are certain to fulfill the contract.
//
// The Collector example demonstrates a use of DescribeByCollect.
func DescribeByCollect(c Collector, descs chan<- *Desc) {
metrics := make(chan Metric)
go func() {
c.Collect(metrics)
close(metrics)
}()
for m := range metrics {
descs <- m.Desc()
}
}
// selfCollector implements Collector for a single Metric so that the Metric
// collects itself. Add it as an anonymous field to a struct that implements
// Metric, and call init with the Metric itself as an argument.

View File

@@ -15,6 +15,10 @@ package prometheus
import (
"errors"
"math"
"sync/atomic"
dto "github.com/prometheus/client_model/go"
)
// Counter is a Metric that represents a single numerical value that only ever
@@ -42,6 +46,14 @@ type Counter interface {
type CounterOpts Opts
// NewCounter creates a new Counter based on the provided CounterOpts.
//
// The returned implementation tracks the counter value in two separate
// variables, a float64 and a uint64. The latter is used to track calls of the
// Inc method and calls of the Add method with a value that can be represented
// as a uint64. This allows atomic increments of the counter with optimal
// performance. (It is common to have an Inc call in very hot execution paths.)
// Both internal tracking values are added up in the Write method. This has to
// be taken into account when it comes to precision and overflow behavior.
func NewCounter(opts CounterOpts) Counter {
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
@@ -49,20 +61,58 @@ func NewCounter(opts CounterOpts) Counter {
nil,
opts.ConstLabels,
)
result := &counter{value: value{desc: desc, valType: CounterValue, labelPairs: desc.constLabelPairs}}
result := &counter{desc: desc, labelPairs: desc.constLabelPairs}
result.init(result) // Init self-collection.
return result
}
type counter struct {
value
// valBits contains the bits of the represented float64 value, while
// valInt stores values that are exact integers. Both have to go first
// in the struct to guarantee alignment for atomic operations.
// http://golang.org/pkg/sync/atomic/#pkg-note-BUG
valBits uint64
valInt uint64
selfCollector
desc *Desc
labelPairs []*dto.LabelPair
}
func (c *counter) Desc() *Desc {
return c.desc
}
func (c *counter) Add(v float64) {
if v < 0 {
panic(errors.New("counter cannot decrease in value"))
}
c.value.Add(v)
ival := uint64(v)
if float64(ival) == v {
atomic.AddUint64(&c.valInt, ival)
return
}
for {
oldBits := atomic.LoadUint64(&c.valBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
if atomic.CompareAndSwapUint64(&c.valBits, oldBits, newBits) {
return
}
}
}
func (c *counter) Inc() {
atomic.AddUint64(&c.valInt, 1)
}
func (c *counter) Write(out *dto.Metric) error {
fval := math.Float64frombits(atomic.LoadUint64(&c.valBits))
ival := atomic.LoadUint64(&c.valInt)
val := fval + float64(ival)
return populateMetric(CounterValue, val, c.labelPairs, out)
}
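
The practical effect of the split is that integral and fractional additions take different paths; a small illustration using the public API (the metric name is a hypothetical demo value):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	// Help may now be empty: the empty-help check was dropped from NewDesc.
	c := prometheus.NewCounter(prometheus.CounterOpts{Name: "demo_total"})
	c.Inc()    // hot path: plain atomic add on the uint64 part (valInt)
	c.Add(2)   // representable as uint64, also handled by valInt
	c.Add(0.5) // fractional: CAS loop on the float64 bits (valBits)

	var m dto.Metric
	if err := c.Write(&m); err != nil { // Write sums both parts
		panic(err)
	}
	fmt.Println(m.Counter.GetValue()) // 3.5
}
```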
// CounterVec is a Collector that bundles a set of Counters that all share the
@@ -85,11 +135,10 @@ func NewCounterVec(opts CounterOpts, labelNames []string) *CounterVec {
)
return &CounterVec{
metricVec: newMetricVec(desc, func(lvs ...string) Metric {
result := &counter{value: value{
desc: desc,
valType: CounterValue,
labelPairs: makeLabelPairs(desc, lvs),
}}
if len(lvs) != len(desc.variableLabels) {
panic(errInconsistentCardinality)
}
result := &counter{desc: desc, labelPairs: makeLabelPairs(desc, lvs)}
result.init(result) // Init self-collection.
return result
}),

View File

@@ -67,7 +67,7 @@ type Desc struct {
// NewDesc allocates and initializes a new Desc. Errors are recorded in the Desc
// and will be reported on registration time. variableLabels and constLabels can
// be nil if no such labels should be set. fqName and help must not be empty.
// be nil if no such labels should be set. fqName must not be empty.
//
// variableLabels only contain the label names. Their label values are variable
// and therefore not part of the Desc. (They are managed within the Metric.)
@@ -80,10 +80,6 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
help: help,
variableLabels: variableLabels,
}
if help == "" {
d.err = errors.New("empty help string")
return d
}
if !model.IsValidMetricName(model.LabelValue(fqName)) {
d.err = fmt.Errorf("%q is not a valid metric name", fqName)
return d
@@ -156,7 +152,7 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
Value: proto.String(v),
})
}
sort.Sort(LabelPairSorter(d.constLabelPairs))
sort.Sort(labelPairSorter(d.constLabelPairs))
return d
}

View File

@@ -11,10 +11,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// Package prometheus provides metrics primitives to instrument code for
// monitoring. It also offers a registry for metrics. Sub-packages allow to
// expose the registered metrics via HTTP (package promhttp) or push them to a
// Pushgateway (package push).
// Package prometheus is the core instrumentation package. It provides metrics
// primitives to instrument code for monitoring. It also offers a registry for
// metrics. Sub-packages allow exposing the registered metrics via HTTP
// (package promhttp) or push them to a Pushgateway (package push). There is
// also a sub-package promauto, which provides metrics constructors with
// automatic registration.
//
// All exported functions and methods are safe to be used concurrently unless
// specified otherwise.
@@ -72,7 +74,10 @@
// The number of exported identifiers in this package might appear a bit
// overwhelming. However, in addition to the basic plumbing shown in the example
// above, you only need to understand the different metric types and their
// vector versions for basic usage.
// vector versions for basic usage. Furthermore, if you are not concerned with
// fine-grained control of when and how to register metrics with the registry,
// have a look at the promauto package, which will effectively allow you to
// ignore registration altogether in simple cases.
//
// Above, you have already touched the Counter and the Gauge. There are two more
// advanced metric types: the Summary and Histogram. A more thorough description
@@ -116,7 +121,17 @@
// NewConstSummary (and their respective Must… versions). That will happen in
// the Collect method. The Describe method has to return separate Desc
// instances, representative of the “throw-away” metrics to be created later.
// NewDesc comes in handy to create those Desc instances.
// NewDesc comes in handy to create those Desc instances. Alternatively, you
// could return no Desc at all, which will mark the Collector “unchecked”. No
// checks are performed at registration time, but metric consistency will still
// be ensured at scrape time, i.e. any inconsistencies will lead to scrape
// errors. Thus, with unchecked Collectors, the responsibility to not collect
// metrics that lead to inconsistencies in the total scrape result lies with the
// implementer of the Collector. While this is not a desirable state, it is
// sometimes necessary. The typical use case is a situation where the exact
// metrics to be returned by a Collector cannot be predicted at registration
// time, but the implementer has sufficient knowledge of the whole system to
// guarantee metric consistency.
//
// The Collector example illustrates the use case. You can also look at the
// source code of the processCollector (mirroring process metrics), the

View File

@@ -1,3 +1,16 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
// Inline and byte-free variant of hash/fnv's fnv64a.

View File

@@ -13,6 +13,14 @@
package prometheus
import (
"math"
"sync/atomic"
"time"
dto "github.com/prometheus/client_model/go"
)
// Gauge is a Metric that represents a single numerical value that can
// arbitrarily go up and down.
//
@@ -48,13 +56,74 @@ type Gauge interface {
type GaugeOpts Opts
// NewGauge creates a new Gauge based on the provided GaugeOpts.
//
// The returned implementation is optimized for a fast Set method. If you have a
// choice for managing the value of a Gauge via Set vs. Inc/Dec/Add/Sub, pick
// the former. For example, the Inc method of the returned Gauge is slower than
// the Inc method of a Counter returned by NewCounter. This matches the typical
// scenarios for Gauges and Counters, where the former tends to be Set-heavy and
// the latter Inc-heavy.
func NewGauge(opts GaugeOpts) Gauge {
return newValue(NewDesc(
desc := NewDesc(
BuildFQName(opts.Namespace, opts.Subsystem, opts.Name),
opts.Help,
nil,
opts.ConstLabels,
), GaugeValue, 0)
)
result := &gauge{desc: desc, labelPairs: desc.constLabelPairs}
result.init(result) // Init self-collection.
return result
}
type gauge struct {
// valBits contains the bits of the represented float64 value. It has
// to go first in the struct to guarantee alignment for atomic
// operations. http://golang.org/pkg/sync/atomic/#pkg-note-BUG
valBits uint64
selfCollector
desc *Desc
labelPairs []*dto.LabelPair
}
func (g *gauge) Desc() *Desc {
return g.desc
}
func (g *gauge) Set(val float64) {
atomic.StoreUint64(&g.valBits, math.Float64bits(val))
}
func (g *gauge) SetToCurrentTime() {
g.Set(float64(time.Now().UnixNano()) / 1e9)
}
func (g *gauge) Inc() {
g.Add(1)
}
func (g *gauge) Dec() {
g.Add(-1)
}
func (g *gauge) Add(val float64) {
for {
oldBits := atomic.LoadUint64(&g.valBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + val)
if atomic.CompareAndSwapUint64(&g.valBits, oldBits, newBits) {
return
}
}
}
func (g *gauge) Sub(val float64) {
g.Add(val * -1)
}
func (g *gauge) Write(out *dto.Metric) error {
val := math.Float64frombits(atomic.LoadUint64(&g.valBits))
return populateMetric(GaugeValue, val, g.labelPairs, out)
}
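
As the NewGauge comment above notes, Set is a single atomic store while Inc/Dec/Add pay a CAS loop, so gauges tracking an absolute quantity are best updated via Set; for instance:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

func main() {
	g := prometheus.NewGauge(prometheus.GaugeOpts{Name: "demo_queue_length"})
	queue := make([]int, 42)
	g.Set(float64(len(queue))) // one atomic store: the fast, preferred path
	g.Inc()                    // also correct, but pays a CAS loop per call
}
```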
// GaugeVec is a Collector that bundles a set of Gauges that all share the same
@@ -77,7 +146,12 @@ func NewGaugeVec(opts GaugeOpts, labelNames []string) *GaugeVec {
)
return &GaugeVec{
metricVec: newMetricVec(desc, func(lvs ...string) Metric {
return newValue(desc, GaugeValue, 0, lvs...)
if len(lvs) != len(desc.variableLabels) {
panic(errInconsistentCardinality)
}
result := &gauge{desc: desc, labelPairs: makeLabelPairs(desc, lvs)}
result.init(result) // Init self-collection.
return result
}),
}
}

View File

@@ -1,3 +1,16 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
@@ -17,8 +30,12 @@ type goCollector struct {
metrics memStatsMetrics
}
// NewGoCollector returns a collector which exports metrics about the current
// go process.
// NewGoCollector returns a collector which exports metrics about the current Go
// process. This includes memory stats. To collect those, runtime.ReadMemStats
// is called. This causes a stop-the-world, which is very short with Go1.9+
// (~25µs). However, with older Go versions, the stop-the-world duration depends
// on the heap size and can be quite significant (~1.7 ms/GiB as per
// https://go-review.googlesource.com/c/go/+/34937).
func NewGoCollector() Collector {
return &goCollector{
goroutinesDesc: NewDesc(
@@ -265,7 +282,7 @@ func (c *goCollector) Collect(ch chan<- Metric) {
quantiles[float64(idx+1)/float64(len(stats.PauseQuantiles)-1)] = pq.Seconds()
}
quantiles[0.0] = stats.PauseQuantiles[0].Seconds()
ch <- MustNewConstSummary(c.gcDesc, uint64(stats.NumGC), float64(stats.PauseTotal.Seconds()), quantiles)
ch <- MustNewConstSummary(c.gcDesc, uint64(stats.NumGC), stats.PauseTotal.Seconds(), quantiles)
ch <- MustNewConstMetric(c.goInfoDesc, GaugeValue, 1)

View File

@@ -191,8 +191,10 @@ func writeMetrics(w io.Writer, mfs []*dto.MetricFamily, prefix string, now model
buf := bufio.NewWriter(w)
for _, s := range vec {
if err := writeSanitized(buf, prefix); err != nil {
return err
for _, c := range prefix {
if _, err := buf.WriteRune(c); err != nil {
return err
}
}
if err := buf.WriteByte('.'); err != nil {
return err
@@ -273,7 +275,7 @@ func replaceInvalidRune(c rune) rune {
if c == ' ' {
return '.'
}
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_' || c == ':' || (c >= '0' && c <= '9')) {
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_' || c == ':' || c == '-' || (c >= '0' && c <= '9')) {
return '_'
}
return c

View File

@@ -16,7 +16,9 @@ package prometheus
import (
"fmt"
"math"
"runtime"
"sort"
"sync"
"sync/atomic"
"github.com/golang/protobuf/proto"
@@ -108,8 +110,9 @@ func ExponentialBuckets(start, factor float64, count int) []float64 {
}
// HistogramOpts bundles the options for creating a Histogram metric. It is
// mandatory to set Name and Help to a non-empty string. All other fields are
// optional and can safely be left at their zero value.
// mandatory to set Name to a non-empty string. All other fields are optional
// and can safely be left at their zero value, although it is strongly
// encouraged to set a Help string.
type HistogramOpts struct {
// Namespace, Subsystem, and Name are components of the fully-qualified
// name of the Histogram (created by joining these components with
@@ -120,7 +123,7 @@ type HistogramOpts struct {
Subsystem string
Name string
// Help provides information about this Histogram. Mandatory!
// Help provides information about this Histogram.
//
// Metrics with the same fully-qualified name must have the same Help
// string.
@@ -184,6 +187,7 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr
desc: desc,
upperBounds: opts.Buckets,
labelPairs: makeLabelPairs(desc, labelValues),
counts: [2]*histogramCounts{&histogramCounts{}, &histogramCounts{}},
}
for i, upperBound := range h.upperBounds {
if i < len(h.upperBounds)-1 {
@@ -200,28 +204,53 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr
}
}
}
// Finally we know the final length of h.upperBounds and can make counts.
h.counts = make([]uint64, len(h.upperBounds))
// Finally we know the final length of h.upperBounds and can make counts
// for both states:
h.counts[0].buckets = make([]uint64, len(h.upperBounds))
h.counts[1].buckets = make([]uint64, len(h.upperBounds))
h.init(h) // Init self-collection.
return h
}
type histogram struct {
type histogramCounts struct {
// sumBits contains the bits of the float64 representing the sum of all
// observations. sumBits and count have to go first in the struct to
// guarantee alignment for atomic operations.
// http://golang.org/pkg/sync/atomic/#pkg-note-BUG
sumBits uint64
count uint64
buckets []uint64
}
type histogram struct {
// countAndHotIdx is a complicated one. For lock-free yet atomic
// observations, we need to save the total count of observations again,
// combined with the index of the currently-hot counts struct, so that
// we can perform the operation on both values atomically. The least
// significant bit defines the hot counts struct. The remaining 63 bits
// represent the total count of observations. This happens under the
// assumption that the 63bit count will never overflow. Rationale: An
// observation takes about 30ns. Let's assume it could happen in
// 10ns. Overflowing the counter will then take at least (2^63)*10ns,
// which is about 3000 years.
//
// This has to be first in the struct for 64bit alignment. See
// http://golang.org/pkg/sync/atomic/#pkg-note-BUG
countAndHotIdx uint64
selfCollector
// Note that there is no mutex required.
desc *Desc
desc *Desc
writeMtx sync.Mutex // Only used in the Write method.
upperBounds []float64
counts []uint64
// Two counts, one is "hot" for lock-free observations, the other is
// "cold" for writing out a dto.Metric. It has to be an array of
// pointers to guarantee 64bit alignment of the histogramCounts, see
// http://golang.org/pkg/sync/atomic/#pkg-note-BUG.
counts [2]*histogramCounts
hotIdx int // Index of currently-hot counts. Only used within Write.
labelPairs []*dto.LabelPair
}
@@ -241,36 +270,113 @@ func (h *histogram) Observe(v float64) {
// 100 buckets: 78.1 ns/op linear - binary 54.9 ns/op
// 300 buckets: 154 ns/op linear - binary 61.6 ns/op
i := sort.SearchFloat64s(h.upperBounds, v)
if i < len(h.counts) {
atomic.AddUint64(&h.counts[i], 1)
// We increment h.countAndHotIdx by 2 so that the counter in the upper
// 63 bits gets incremented by 1. At the same time, we get the new value
// back, which we can use to find the currently-hot counts.
n := atomic.AddUint64(&h.countAndHotIdx, 2)
hotCounts := h.counts[n%2]
if i < len(h.upperBounds) {
atomic.AddUint64(&hotCounts.buckets[i], 1)
}
atomic.AddUint64(&h.count, 1)
for {
oldBits := atomic.LoadUint64(&h.sumBits)
oldBits := atomic.LoadUint64(&hotCounts.sumBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
if atomic.CompareAndSwapUint64(&h.sumBits, oldBits, newBits) {
if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) {
break
}
}
// Increment count last as we take it as a signal that the observation
// is complete.
atomic.AddUint64(&hotCounts.count, 1)
}
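
The packed countAndHotIdx can be decoded by hand to see what Observe and Write do to it; a tiny arithmetic illustration, independent of the library:

```go
package main

import "fmt"

func main() {
	// countAndHotIdx packs the observation count into the upper 63 bits
	// and the index of the hot counts struct into the least significant bit.
	var countAndHotIdx uint64

	countAndHotIdx += 2 // what Observe's atomic.AddUint64(..., 2) does
	countAndHotIdx += 2
	countAndHotIdx += 2

	fmt.Println(countAndHotIdx >> 1) // 3 observations so far
	fmt.Println(countAndHotIdx % 2)  // hot index still 0

	countAndHotIdx++                 // Write's atomic +1 flips the hot index
	fmt.Println(countAndHotIdx % 2)  // hot index now 1
	fmt.Println(countAndHotIdx >> 1) // count unchanged at 3
}
```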
func (h *histogram) Write(out *dto.Metric) error {
his := &dto.Histogram{}
buckets := make([]*dto.Bucket, len(h.upperBounds))
var (
his = &dto.Histogram{}
buckets = make([]*dto.Bucket, len(h.upperBounds))
hotCounts, coldCounts *histogramCounts
count uint64
)
his.SampleSum = proto.Float64(math.Float64frombits(atomic.LoadUint64(&h.sumBits)))
his.SampleCount = proto.Uint64(atomic.LoadUint64(&h.count))
var count uint64
// For simplicity, we mutex the rest of this method. It is not in the
// hot path, i.e. Observe is called much more often than Write. The
// complication of making Write lock-free isn't worth it.
h.writeMtx.Lock()
defer h.writeMtx.Unlock()
// This is a bit arcane, which is why the following spells out this if
// clause in English:
//
// If the currently-hot counts struct is #0, we atomically increment
// h.countAndHotIdx by 1 so that from now on Observe will use the counts
// struct #1. Furthermore, the atomic increment gives us the new value,
// which, in its most significant 63 bits, tells us the count of
// observations done so far up to and including currently ongoing
// observations still using the counts struct just changed from hot to
// cold. To have a normal uint64 for the count, we bitshift by 1 and
// save the result in count. We also set h.hotIdx to 1 for the next
// Write call, and we will refer to counts #1 as hotCounts and to counts
// #0 as coldCounts.
//
// If the currently-hot counts struct is #1, we do the corresponding
// things the other way round. We have to _decrement_ h.countAndHotIdx
// (which is a bit arcane in itself, as we have to express -1 with an
// unsigned int...).
if h.hotIdx == 0 {
count = atomic.AddUint64(&h.countAndHotIdx, 1) >> 1
h.hotIdx = 1
hotCounts = h.counts[1]
coldCounts = h.counts[0]
} else {
count = atomic.AddUint64(&h.countAndHotIdx, ^uint64(0)) >> 1 // Decrement.
h.hotIdx = 0
hotCounts = h.counts[0]
coldCounts = h.counts[1]
}
// Now we have to wait for the now-declared-cold counts to actually cool
// down, i.e. wait for all observations still using it to finish. That's
// the case once the count in the cold counts struct is the same as the
// one atomically retrieved from the upper 63bits of h.countAndHotIdx.
for {
if count == atomic.LoadUint64(&coldCounts.count) {
break
}
runtime.Gosched() // Let observations get work done.
}
his.SampleCount = proto.Uint64(count)
his.SampleSum = proto.Float64(math.Float64frombits(atomic.LoadUint64(&coldCounts.sumBits)))
var cumCount uint64
for i, upperBound := range h.upperBounds {
count += atomic.LoadUint64(&h.counts[i])
cumCount += atomic.LoadUint64(&coldCounts.buckets[i])
buckets[i] = &dto.Bucket{
CumulativeCount: proto.Uint64(count),
CumulativeCount: proto.Uint64(cumCount),
UpperBound: proto.Float64(upperBound),
}
}
his.Bucket = buckets
out.Histogram = his
out.Label = h.labelPairs
// Finally add all the cold counts to the new hot counts and reset the cold counts.
atomic.AddUint64(&hotCounts.count, count)
atomic.StoreUint64(&coldCounts.count, 0)
for {
oldBits := atomic.LoadUint64(&hotCounts.sumBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + his.GetSampleSum())
if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) {
atomic.StoreUint64(&coldCounts.sumBits, 0)
break
}
}
for i := range h.upperBounds {
atomic.AddUint64(&hotCounts.buckets[i], atomic.LoadUint64(&coldCounts.buckets[i]))
atomic.StoreUint64(&coldCounts.buckets[i], 0)
}
return nil
}
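
The cool-down handshake at the heart of Write condenses to a few lines. A sketch under assumed names, with histogramCounts reduced to the single field involved: Write bumps the hot index, takes the definitive count from the upper 63 bits, then spins until the retired counts struct has absorbed exactly that many observations.

package main

import (
	"runtime"
	"sync/atomic"
)

// histogramCounts is reduced here to the one field the handshake needs.
type histogramCounts struct{ count uint64 }

// waitCold spins until the retired ("cold") counts struct has absorbed
// exactly count observations, i.e. all Observe calls still holding it
// have finished.
func waitCold(cold *histogramCounts, count uint64) {
	for atomic.LoadUint64(&cold.count) != count {
		runtime.Gosched() // let the remaining observers finish
	}
}

func main() {
	cold := &histogramCounts{}
	atomic.AddUint64(&cold.count, 3)
	waitCold(cold, 3) // returns immediately; nothing is in flight
}
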
@@ -454,7 +560,7 @@ func (h *constHistogram) Write(out *dto.Metric) error {
// bucket.
//
// NewConstHistogram returns an error if the length of labelValues is not
// consistent with the variable labels in Desc.
// consistent with the variable labels in Desc or if Desc is invalid.
func NewConstHistogram(
desc *Desc,
count uint64,
@@ -462,6 +568,9 @@ func NewConstHistogram(
buckets map[float64]uint64,
labelValues ...string,
) (Metric, error) {
if desc.err != nil {
return nil, desc.err
}
if err := validateLabelValues(labelValues, len(desc.variableLabels)); err != nil {
return nil, err
}
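
For reference, a typical call of NewConstHistogram, e.g. from a custom collector's Collect method; the metric name and bucket figures here are invented:

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	desc := prometheus.NewDesc(
		"external_request_duration_seconds", // hypothetical mirrored metric
		"Request durations mirrored from an external system.",
		nil, nil,
	)
	h, err := prometheus.NewConstHistogram(
		desc,
		4711,   // total observation count
		403.42, // sum of all observations
		map[float64]uint64{0.05: 121, 0.1: 2403, 0.5: 4500}, // cumulative buckets
	)
	if err != nil {
		log.Fatal(err) // inconsistent label values or an invalid desc
	}
	_ = h // typically sent on the channel in a custom Collect method
}
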

@@ -61,16 +61,15 @@ func giveBuf(buf *bytes.Buffer) {
// name).
//
// Deprecated: Please note the issues described in the doc comment of
// InstrumentHandler. You might want to consider using promhttp.Handler instead
// (which is not instrumented, but can be instrumented with the tooling provided
// in package promhttp).
// InstrumentHandler. You might want to consider using promhttp.Handler instead.
func Handler() http.Handler {
return InstrumentHandler("prometheus", UninstrumentedHandler())
}
// UninstrumentedHandler returns an HTTP handler for the DefaultGatherer.
//
// Deprecated: Use promhttp.Handler instead. See there for further documentation.
// Deprecated: Use promhttp.HandlerFor(DefaultGatherer, promhttp.HandlerOpts{})
// instead. See there for further documentation.
func UninstrumentedHandler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
mfs, err := DefaultGatherer.Gather()
@@ -116,7 +115,7 @@ func decorateWriter(request *http.Request, writer io.Writer) (io.Writer, string)
header := request.Header.Get(acceptEncodingHeader)
parts := strings.Split(header, ",")
for _, part := range parts {
part := strings.TrimSpace(part)
part = strings.TrimSpace(part)
if part == "gzip" || strings.HasPrefix(part, "gzip;") {
return gzip.NewWriter(writer), "gzip"
}
@@ -140,16 +139,6 @@ var now nower = nowFunc(func() time.Time {
return time.Now()
})
func nowSeries(t ...time.Time) nower {
return nowFunc(func() time.Time {
defer func() {
t = t[1:]
}()
return t[0]
})
}
// InstrumentHandler wraps the given HTTP handler for instrumentation. It
// registers four metric collectors (if not already done) and reports HTTP
// metrics to the (newly or already) registered collectors: http_requests_total
@@ -160,21 +149,14 @@ func nowSeries(t ...time.Time) nower {
// (label name "method") and HTTP status code (label name "code").
//
// Deprecated: InstrumentHandler has several issues. Use the tooling provided in
// package promhttp instead. The issues are the following:
//
// - It uses Summaries rather than Histograms. Summaries are not useful if
// aggregation across multiple instances is required.
//
// - It uses microseconds as unit, which is deprecated and should be replaced by
// seconds.
//
// - The size of the request is calculated in a separate goroutine. Since this
// calculator requires access to the request header, it creates a race with
// any writes to the header performed during request handling.
// httputil.ReverseProxy is a prominent example for a handler
// performing such writes.
//
// - It has additional issues with HTTP/2, cf.
// package promhttp instead. The issues are the following: (1) It uses Summaries
// rather than Histograms. Summaries are not useful if aggregation across
// multiple instances is required. (2) It uses microseconds as unit, which is
// deprecated and should be replaced by seconds. (3) The size of the request is
// calculated in a separate goroutine. Since this calculator requires access to
// the request header, it creates a race with any writes to the header performed
// during request handling. httputil.ReverseProxy is a prominent example for a
// handler performing such writes. (4) It has additional issues with HTTP/2, cf.
// https://github.com/prometheus/client_golang/issues/272.
func InstrumentHandler(handlerName string, handler http.Handler) http.HandlerFunc {
return InstrumentHandlerFunc(handlerName, handler.ServeHTTP)
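
For comparison, the promhttp tooling the deprecation note points to might be wired up roughly like this; the metric name and buckets are illustrative, and InstrumentHandlerDuration partitions by status code because the histogram declares a "code" label:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A histogram in seconds, addressing issues (1) and (2) above.
	duration := prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "Request duration by HTTP status code.",
		Buckets: prometheus.DefBuckets,
	}, []string{"code"})
	prometheus.MustRegister(duration)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.Handle("/", promhttp.InstrumentHandlerDuration(duration, handler))
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
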
@@ -318,7 +300,7 @@ func InstrumentHandlerFuncWithOpts(opts SummaryOpts, handlerFunc func(http.Respo
}
func computeApproximateRequestSize(r *http.Request) <-chan int {
// Get URL length in current go routine for avoiding a race condition.
// Get URL length in current goroutine for avoiding a race condition.
// HandlerFunc that runs in parallel may modify the URL.
s := 0
if r.URL != nil {
@@ -353,10 +335,9 @@ func computeApproximateRequestSize(r *http.Request) <-chan int {
type responseWriterDelegator struct {
http.ResponseWriter
handler, method string
status int
written int64
wroteHeader bool
}
func (r *responseWriterDelegator) WriteHeader(code int) {

@@ -0,0 +1,85 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package internal
import (
"sort"
dto "github.com/prometheus/client_model/go"
)
// metricSorter is a sortable slice of *dto.Metric.
type metricSorter []*dto.Metric
func (s metricSorter) Len() int {
return len(s)
}
func (s metricSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s metricSorter) Less(i, j int) bool {
if len(s[i].Label) != len(s[j].Label) {
// This should not happen. The metrics are
// inconsistent. However, we have to deal with it, as
// people might use custom collectors or metric family injection
// to create inconsistent metrics. So let's simply compare the
// number of labels in this case. That will still yield
// reproducible sorting.
return len(s[i].Label) < len(s[j].Label)
}
for n, lp := range s[i].Label {
vi := lp.GetValue()
vj := s[j].Label[n].GetValue()
if vi != vj {
return vi < vj
}
}
// We should never arrive here. Multiple metrics with the same
// label set in the same scrape will lead to undefined ingestion
// behavior. However, as above, we have to provide stable sorting
// here, even for inconsistent metrics. So sort equal metrics
// by their timestamp, with missing timestamps (implying "now")
// coming last.
if s[i].TimestampMs == nil {
return false
}
if s[j].TimestampMs == nil {
return true
}
return s[i].GetTimestampMs() < s[j].GetTimestampMs()
}
// NormalizeMetricFamilies returns a MetricFamily slice with empty
// MetricFamilies pruned and the remaining MetricFamilies sorted by name within
// the slice, with the contained Metrics sorted within each MetricFamily.
func NormalizeMetricFamilies(metricFamiliesByName map[string]*dto.MetricFamily) []*dto.MetricFamily {
for _, mf := range metricFamiliesByName {
sort.Sort(metricSorter(mf.Metric))
}
names := make([]string, 0, len(metricFamiliesByName))
for name, mf := range metricFamiliesByName {
if len(mf.Metric) > 0 {
names = append(names, name)
}
}
sort.Strings(names)
result := make([]*dto.MetricFamily, 0, len(names))
for _, name := range names {
result = append(result, metricFamiliesByName[name])
}
return result
}
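
The internal package cannot be imported from outside client_golang, but the ordering contract is easy to mirror with sort.Slice. A sketch with invented label values:

package main

import (
	"fmt"
	"sort"

	"github.com/golang/protobuf/proto"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	metrics := []*dto.Metric{
		{Label: []*dto.LabelPair{{Name: proto.String("code"), Value: proto.String("500")}}},
		{Label: []*dto.LabelPair{{Name: proto.String("code"), Value: proto.String("200")}}},
	}
	// Same ordering rule as metricSorter: compare label values pairwise.
	sort.Slice(metrics, func(i, j int) bool {
		return metrics[i].Label[0].GetValue() < metrics[j].Label[0].GetValue()
	})
	fmt.Println(metrics[0].Label[0].GetValue(), metrics[1].Label[0].GetValue()) // 200 500
}
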

@@ -1,3 +1,16 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (

@@ -15,6 +15,9 @@ package prometheus
import (
"strings"
"time"
"github.com/golang/protobuf/proto"
dto "github.com/prometheus/client_model/go"
)
@@ -43,9 +46,8 @@ type Metric interface {
// While populating dto.Metric, it is the responsibility of the
// implementation to ensure validity of the Metric protobuf (like valid
// UTF-8 strings or syntactically valid metric and label names). It is
// recommended to sort labels lexicographically. (Implementers may find
// LabelPairSorter useful for that.) Callers of Write should still make
// sure of sorting if they depend on it.
// recommended to sort labels lexicographically. Callers of Write should
// still make sure of sorting if they depend on it.
Write(*dto.Metric) error
// TODO(beorn7): The original rationale of passing in a pre-allocated
// dto.Metric protobuf to save allocations has disappeared. The
@@ -57,8 +59,9 @@ type Metric interface {
// implementation XXX has its own XXXOpts type, but in most cases, it is just
// an alias of this type (which might change when the requirement arises).
//
// It is mandatory to set Name and Help to a non-empty string. All other fields
// are optional and can safely be left at their zero value.
// It is mandatory to set Name to a non-empty string. All other fields are
// optional and can safely be left at their zero value, although it is strongly
// encouraged to set a Help string.
type Opts struct {
// Namespace, Subsystem, and Name are components of the fully-qualified
// name of the Metric (created by joining these components with
@@ -69,7 +72,7 @@ type Opts struct {
Subsystem string
Name string
// Help provides information about this metric. Mandatory!
// Help provides information about this metric.
//
// Metrics with the same fully-qualified name must have the same Help
// string.
@@ -110,37 +113,22 @@ func BuildFQName(namespace, subsystem, name string) string {
return name
}
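
BuildFQName itself is public API; its joining rule (components joined by "_", empty ones skipped) in a few lines:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	fmt.Println(prometheus.BuildFQName("ns", "sub", "requests_total")) // ns_sub_requests_total
	fmt.Println(prometheus.BuildFQName("", "", "up"))                  // up
}
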
// LabelPairSorter implements sort.Interface. It is used to sort a slice of
// dto.LabelPair pointers. This is useful for implementing the Write method of
// custom metrics.
type LabelPairSorter []*dto.LabelPair
// labelPairSorter implements sort.Interface. It is used to sort a slice of
// dto.LabelPair pointers.
type labelPairSorter []*dto.LabelPair
func (s LabelPairSorter) Len() int {
func (s labelPairSorter) Len() int {
return len(s)
}
func (s LabelPairSorter) Swap(i, j int) {
func (s labelPairSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s LabelPairSorter) Less(i, j int) bool {
func (s labelPairSorter) Less(i, j int) bool {
return s[i].GetName() < s[j].GetName()
}
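
With LabelPairSorter now unexported, a custom Metric implementation can sort its label pairs via sort.Slice instead. A sketch with invented labels:

package main

import (
	"fmt"
	"sort"

	"github.com/golang/protobuf/proto"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	pairs := []*dto.LabelPair{
		{Name: proto.String("method"), Value: proto.String("GET")},
		{Name: proto.String("code"), Value: proto.String("200")},
	}
	// Lexicographic sort by label name, as recommended for Write.
	sort.Slice(pairs, func(i, j int) bool { return pairs[i].GetName() < pairs[j].GetName() })
	fmt.Println(pairs[0].GetName(), pairs[1].GetName()) // code method
}
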
type hashSorter []uint64
func (s hashSorter) Len() int {
return len(s)
}
func (s hashSorter) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s hashSorter) Less(i, j int) bool {
return s[i] < s[j]
}
type invalidMetric struct {
desc *Desc
err error
@@ -156,3 +144,31 @@ func NewInvalidMetric(desc *Desc, err error) Metric {
func (m *invalidMetric) Desc() *Desc { return m.desc }
func (m *invalidMetric) Write(*dto.Metric) error { return m.err }
type timestampedMetric struct {
Metric
t time.Time
}
func (m timestampedMetric) Write(pb *dto.Metric) error {
e := m.Metric.Write(pb)
pb.TimestampMs = proto.Int64(m.t.Unix()*1000 + int64(m.t.Nanosecond()/1000000))
return e
}
// NewMetricWithTimestamp returns a new Metric wrapping the provided Metric in a
// way that it has an explicit timestamp set to the provided Time. This is only
// useful in rare cases as the timestamp of a Prometheus metric should usually
// be set by the Prometheus server during scraping. Exceptions include mirroring
// metrics with given timestamps from other metric sources.
//
// NewMetricWithTimestamp works best with MustNewConstMetric,
// MustNewConstHistogram, and MustNewConstSummary, see example.
//
// Currently, the exposition formats used by Prometheus are limited to
// millisecond resolution. Thus, the provided time will be rounded down to the
// next full millisecond value.
func NewMetricWithTimestamp(t time.Time, m Metric) Metric {
return timestampedMetric{Metric: m, t: t}
}
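
A usage sketch pairing NewMetricWithTimestamp with MustNewConstMetric, as the doc comment suggests; desc, value, and timestamp are invented:

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	desc := prometheus.NewDesc(
		"mirrored_temperature_celsius", // hypothetical mirrored metric
		"Temperature as reported by an external source.",
		nil, nil,
	)
	// Timestamp from the external source; truncated to milliseconds on write.
	ts := time.Date(2018, time.October, 1, 12, 0, 0, 0, time.UTC)
	m := prometheus.NewMetricWithTimestamp(ts,
		prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, 21.5))
	fmt.Println(m.Desc())
}
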

@@ -13,46 +13,74 @@
package prometheus
import "github.com/prometheus/procfs"
import (
"errors"
"os"
"github.com/prometheus/procfs"
)
type processCollector struct {
pid int
collectFn func(chan<- Metric)
pidFn func() (int, error)
reportErrors bool
cpuTotal *Desc
openFDs, maxFDs *Desc
vsize, rss *Desc
vsize, maxVsize *Desc
rss *Desc
startTime *Desc
}
// NewProcessCollector returns a collector which exports the current state of
// process metrics including cpu, memory and file descriptor usage as well as
// the process start time for the given process id under the given namespace.
func NewProcessCollector(pid int, namespace string) Collector {
return NewProcessCollectorPIDFn(
func() (int, error) { return pid, nil },
namespace,
)
// ProcessCollectorOpts defines the behavior of a process metrics collector
// created with NewProcessCollector.
type ProcessCollectorOpts struct {
// PidFn returns the PID of the process the collector collects metrics
// for. It is called upon each collection. By default, the PID of the
// current process is used, as determined at construction time by
// calling os.Getpid().
PidFn func() (int, error)
// If non-empty, each of the collected metrics is prefixed by the
// provided string and an underscore ("_").
Namespace string
// If true, any error encountered during collection is reported as an
// invalid metric (see NewInvalidMetric). Otherwise, errors are ignored
// and the collected metrics will be incomplete. (Possibly, no metrics
// will be collected at all.) While that's usually not desired, it is
// appropriate for the common "mix-in" of process metrics, where process
// metrics are nice to have, but failing to collect them should not
// disrupt the collection of the remaining metrics.
ReportErrors bool
}
// NewProcessCollectorPIDFn returns a collector which exports the current state
// of process metrics including cpu, memory and file descriptor usage as well
// as the process start time under the given namespace. The given pidFn is
// called on each collect and is used to determine the process to export
// metrics for.
func NewProcessCollectorPIDFn(
pidFn func() (int, error),
namespace string,
) Collector {
// NewProcessCollector returns a collector which exports the current state of
// process metrics including CPU, memory and file descriptor usage as well as
// the process start time. The detailed behavior is defined by the provided
// ProcessCollectorOpts. The zero value of ProcessCollectorOpts creates a
// collector for the current process with an empty namespace string and no error
// reporting.
//
// Currently, the collector depends on a Linux-style proc filesystem and
// therefore only exports metrics for Linux.
//
// Note: An older version of this function had the following signature:
//
// NewProcessCollector(pid int, namespace string) Collector
//
// Most commonly, it was called as
//
// NewProcessCollector(os.Getpid(), "")
//
// The following call of the current version is equivalent to the above:
//
// NewProcessCollector(ProcessCollectorOpts{})
func NewProcessCollector(opts ProcessCollectorOpts) Collector {
ns := ""
if len(namespace) > 0 {
ns = namespace + "_"
if len(opts.Namespace) > 0 {
ns = opts.Namespace + "_"
}
c := processCollector{
pidFn: pidFn,
collectFn: func(chan<- Metric) {},
c := &processCollector{
reportErrors: opts.ReportErrors,
cpuTotal: NewDesc(
ns+"process_cpu_seconds_total",
"Total user and system CPU time spent in seconds.",
@@ -73,6 +101,11 @@ func NewProcessCollectorPIDFn(
"Virtual memory size in bytes.",
nil, nil,
),
maxVsize: NewDesc(
ns+"process_virtual_memory_max_bytes",
"Maximum amount of virtual memory available in bytes.",
nil, nil,
),
rss: NewDesc(
ns+"process_resident_memory_bytes",
"Resident memory size in bytes.",
@@ -85,12 +118,23 @@ func NewProcessCollectorPIDFn(
),
}
if opts.PidFn == nil {
pid := os.Getpid()
c.pidFn = func() (int, error) { return pid, nil }
} else {
c.pidFn = opts.PidFn
}
// Set up process metric collection if supported by the runtime.
if _, err := procfs.NewStat(); err == nil {
c.collectFn = c.processCollect
} else {
c.collectFn = func(ch chan<- Metric) {
c.reportError(ch, nil, errors.New("process metrics not supported on this platform"))
}
}
return &c
return c
}
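
Registering a process collector under the new API, both the zero-value form mentioned above and a hypothetical custom-PID variant:

package main

import (
	"os"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Zero value: current process, no namespace, errors ignored.
	prometheus.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))

	// Hypothetical: watch a managed child process instead, reporting
	// collection errors as invalid metrics.
	childPid := os.Getpid() // stand-in for a real child PID
	prometheus.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{
		PidFn:        func() (int, error) { return childPid, nil },
		Namespace:    "child",
		ReportErrors: true,
	}))
}
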
// Describe returns all descriptions of the collector.
@@ -99,6 +143,7 @@ func (c *processCollector) Describe(ch chan<- *Desc) {
ch <- c.openFDs
ch <- c.maxFDs
ch <- c.vsize
ch <- c.maxVsize
ch <- c.rss
ch <- c.startTime
}
@@ -108,16 +153,16 @@ func (c *processCollector) Collect(ch chan<- Metric) {
c.collectFn(ch)
}
// TODO(ts): Bring back error reporting by reverting 7faf9e7 as soon as the
// client allows users to configure the error behavior.
func (c *processCollector) processCollect(ch chan<- Metric) {
pid, err := c.pidFn()
if err != nil {
c.reportError(ch, nil, err)
return
}
p, err := procfs.NewProc(pid)
if err != nil {
c.reportError(ch, nil, err)
return
}
@@ -127,14 +172,33 @@ func (c *processCollector) processCollect(ch chan<- Metric) {
ch <- MustNewConstMetric(c.rss, GaugeValue, float64(stat.ResidentMemory()))
if startTime, err := stat.StartTime(); err == nil {
ch <- MustNewConstMetric(c.startTime, GaugeValue, startTime)
} else {
c.reportError(ch, c.startTime, err)
}
} else {
c.reportError(ch, nil, err)
}
if fds, err := p.FileDescriptorsLen(); err == nil {
ch <- MustNewConstMetric(c.openFDs, GaugeValue, float64(fds))
} else {
c.reportError(ch, c.openFDs, err)
}
if limits, err := p.NewLimits(); err == nil {
ch <- MustNewConstMetric(c.maxFDs, GaugeValue, float64(limits.OpenFiles))
ch <- MustNewConstMetric(c.maxVsize, GaugeValue, float64(limits.AddressSpace))
} else {
c.reportError(ch, nil, err)
}
}
func (c *processCollector) reportError(ch chan<- Metric, desc *Desc, err error) {
if !c.reportErrors {
return
}
if desc == nil {
desc = NewInvalidDesc(err)
}
ch <- NewInvalidMetric(desc, err)
}

@@ -0,0 +1,223 @@
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package promauto provides constructors for the usual Prometheus metrics that
// return them already registered with the global registry
// (prometheus.DefaultRegisterer). This allows very compact code, avoiding any
// references to the registry altogether, but all the constructors in this
// package will panic if the registration fails.
//
// The following example is a complete program to create a histogram of normally
// distributed random numbers from the math/rand package:
//
// package main
//
// import (
// "math/rand"
// "net/http"
//
// "github.com/prometheus/client_golang/prometheus"
// "github.com/prometheus/client_golang/prometheus/promauto"
// "github.com/prometheus/client_golang/prometheus/promhttp"
// )
//
// var histogram = promauto.NewHistogram(prometheus.HistogramOpts{
// Name: "random_numbers",
// Help: "A histogram of normally distributed random numbers.",
// Buckets: prometheus.LinearBuckets(-3, .1, 61),
// })
//
// func Random() {
// for {
// histogram.Observe(rand.NormFloat64())
// }
// }
//
// func main() {
// go Random()
// http.Handle("/metrics", promhttp.Handler())
// http.ListenAndServe(":1971", nil)
// }
//
// Prometheus's version of a minimal hello-world program:
//
// package main
//
// import (
// "fmt"
// "net/http"
//
// "github.com/prometheus/client_golang/prometheus"
// "github.com/prometheus/client_golang/prometheus/promauto"
// "github.com/prometheus/client_golang/prometheus/promhttp"
// )
//
// func main() {
// http.Handle("/", promhttp.InstrumentHandlerCounter(
// promauto.NewCounterVec(
// prometheus.CounterOpts{
// Name: "hello_requests_total",
// Help: "Total number of hello-world requests by HTTP code.",
// },
// []string{"code"},
// ),
// http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// fmt.Fprint(w, "Hello, world!")
// }),
// ))
// http.Handle("/metrics", promhttp.Handler())
// http.ListenAndServe(":1971", nil)
// }
//
// This appears very handy. So why are these constructors locked away in a
// separate package? There are two caveats:
//
// First, in more complex programs, global state is often quite problematic.
// That's the reason why the metrics constructors in the prometheus package do
// not interact with the global prometheus.DefaultRegisterer on their own. You
// are free to use the Register or MustRegister functions to register them with
// the global prometheus.DefaultRegisterer, but you could as well choose a local
// Registerer (usually created with prometheus.NewRegistry, but there are other
// scenarios, e.g. testing).
//
// The second issue is that registration may fail, e.g. if a metric inconsistent
// with the one about to be registered is already registered. But how should a
// registration failure be signaled during automatic registration with the
// default registry? The only way is panicking. While panicking on invalid input provided by the
// programmer is certainly fine, things are a bit more subtle in this case: You
// might just add another package to the program, and that package (in its init
// function) happens to register a metric with the same name as your code. Now,
// all of a sudden, either your code or the code of the newly imported package
// panics, depending on initialization order, without any opportunity to handle
// the case gracefully. Even worse is a scenario where registration happens
// later during the runtime (e.g. upon loading some kind of plugin), where the
// panic could be triggered long after the code has been deployed to
// production. A possibility to panic should be explicitly called out by the
// Must… idiom, cf. prometheus.MustRegister. But adding a separate set of
// constructors in the prometheus package called MustRegisterNewCounterVec or
// similar would be quite unwieldy. Adding an extra MustRegister method to each
// metric, returning the registered metric, would result in nice code for those
// using the method, but would pollute every single metric interface for
// everybody avoiding the global registry.
//
// To address both issues, the problematic auto-registering and possibly
// panicking constructors are all in this package with a clear warning
// ahead. And whoever cares about avoiding global state and possibly panicking
// function calls can simply ignore the existence of the promauto package
// altogether.
//
// A final note: There is a similar case in the net/http package of the standard
// library. It has DefaultServeMux as a global instance of ServeMux, and the
// Handle function acts on it, panicking if a handler for the same pattern has
// already been registered. However, one might argue that the whole HTTP routing
// is usually set up closely together in the same package or file, while
// Prometheus metrics tend to be spread widely over the codebase, increasing the
// chance of surprising registration failures. Furthermore, the use of global
// state in net/http has been criticized widely, and some avoid it altogether.
package promauto
import "github.com/prometheus/client_golang/prometheus"
// NewCounter works like the function of the same name in the prometheus package
// but it automatically registers the Counter with the
// prometheus.DefaultRegisterer. If the registration fails, NewCounter panics.
func NewCounter(opts prometheus.CounterOpts) prometheus.Counter {
c := prometheus.NewCounter(opts)
prometheus.MustRegister(c)
return c
}
// NewCounterVec works like the function of the same name in the prometheus
// package but it automatically registers the CounterVec with the
// prometheus.DefaultRegisterer. If the registration fails, NewCounterVec
// panics.
func NewCounterVec(opts prometheus.CounterOpts, labelNames []string) *prometheus.CounterVec {
c := prometheus.NewCounterVec(opts, labelNames)
prometheus.MustRegister(c)
return c
}
// NewCounterFunc works like the function of the same name in the prometheus
// package but it automatically registers the CounterFunc with the
// prometheus.DefaultRegisterer. If the registration fails, NewCounterFunc
// panics.
func NewCounterFunc(opts prometheus.CounterOpts, function func() float64) prometheus.CounterFunc {
g := prometheus.NewCounterFunc(opts, function)
prometheus.MustRegister(g)
return g
}
// NewGauge works like the function of the same name in the prometheus package
// but it automatically registers the Gauge with the
// prometheus.DefaultRegisterer. If the registration fails, NewGauge panics.
func NewGauge(opts prometheus.GaugeOpts) prometheus.Gauge {
g := prometheus.NewGauge(opts)
prometheus.MustRegister(g)
return g
}
// NewGaugeVec works like the function of the same name in the prometheus
// package but it automatically registers the GaugeVec with the
// prometheus.DefaultRegisterer. If the registration fails, NewGaugeVec panics.
func NewGaugeVec(opts prometheus.GaugeOpts, labelNames []string) *prometheus.GaugeVec {
g := prometheus.NewGaugeVec(opts, labelNames)
prometheus.MustRegister(g)
return g
}
// NewGaugeFunc works like the function of the same name in the prometheus
// package but it automatically registers the GaugeFunc with the
// prometheus.DefaultRegisterer. If the registration fails, NewGaugeFunc panics.
func NewGaugeFunc(opts prometheus.GaugeOpts, function func() float64) prometheus.GaugeFunc {
g := prometheus.NewGaugeFunc(opts, function)
prometheus.MustRegister(g)
return g
}
// NewSummary works like the function of the same name in the prometheus package
// but it automatically registers the Summary with the
// prometheus.DefaultRegisterer. If the registration fails, NewSummary panics.
func NewSummary(opts prometheus.SummaryOpts) prometheus.Summary {
s := prometheus.NewSummary(opts)
prometheus.MustRegister(s)
return s
}
// NewSummaryVec works like the function of the same name in the prometheus
// package but it automatically registers the SummaryVec with the
// prometheus.DefaultRegisterer. If the registration fails, NewSummaryVec
// panics.
func NewSummaryVec(opts prometheus.SummaryOpts, labelNames []string) *prometheus.SummaryVec {
s := prometheus.NewSummaryVec(opts, labelNames)
prometheus.MustRegister(s)
return s
}
// NewHistogram works like the function of the same name in the prometheus
// package but it automatically registers the Histogram with the
// prometheus.DefaultRegisterer. If the registration fails, NewHistogram panics.
func NewHistogram(opts prometheus.HistogramOpts) prometheus.Histogram {
h := prometheus.NewHistogram(opts)
prometheus.MustRegister(h)
return h
}
// NewHistogramVec works like the function of the same name in the prometheus
// package but it automatically registers the HistogramVec with the
// prometheus.DefaultRegisterer. If the registration fails, NewHistogramVec
// panics.
func NewHistogramVec(opts prometheus.HistogramOpts, labelNames []string) *prometheus.HistogramVec {
h := prometheus.NewHistogramVec(opts, labelNames)
prometheus.MustRegister(h)
return h
}

Some files were not shown because too many files have changed in this diff.