Compare commits


44 Commits

Author SHA1 Message Date
Jakob Borg
325c3c1fa7 lib/db, lib/protocol: Compact FileInfo and BlockInfo alignment (#6215)
* lib/db, lib/protocol: Compact FileInfo and BlockInfo alignment

This fixes the following two lint warnings

    FileInfo: struct of size 160 bytes could be of size 136 bytes
    BlockInfo: struct of size 48 bytes could be of size 40 bytes

by reordering fields in alignment order (64 bit fields, then 32 bit
fields, then 16 bit fields (if any), then small ones). The end result is
a slightly less aesthetically pleasing struct field order, but since
these are the objects we often juggle in bulk and keep large queues
of, I think it's worth it.

It's a micro optimization, but a cheap one.
2019-12-08 13:31:26 +01:00
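A minimal, hypothetical Go sketch of the technique (these are not Syncthing's actual structs): ordering fields largest-first removes padding the compiler would otherwise insert.

package main

import (
	"fmt"
	"unsafe"
)

// naive mixes small and large fields, so the compiler pads: 1 (bool)
// + 7 (padding) + 8 (int64) + 4 (int32) + 1 (bool) + 3 (tail padding
// up to the struct's 8-byte alignment) = 24 bytes.
type naive struct {
	a bool
	b int64
	c int32
	d bool
}

// packed orders fields in alignment order, as the commit describes:
// 8 (int64) + 4 (int32) + 1 + 1 (bools) + 2 (tail padding) = 16 bytes.
type packed struct {
	b int64
	c int32
	a bool
	d bool
}

func main() {
	fmt.Println(unsafe.Sizeof(naive{}))  // 24 on 64-bit platforms
	fmt.Println(unsafe.Sizeof(packed{})) // 16
}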
Jakob Borg
be0508cf26 lib/model, lib/protocol: Use error handling to avoid panic on non-started folder (fixes #6174) (#6212)
This adds error returns to model methods called by the protocol layer.
Returning an error will cause the connection to be torn down as the
message couldn't be handled. Using this to signal that a folder isn't
currently available will then cause a reconnection a few moments later,
when it'll hopefully work better.

Tested manually by running with STRECHECKDBEVERY=0 on a nontrivially
sized setup. This panics reliably before this patch, but just causes a
disconnect/reconnect now.
2019-12-04 10:46:55 +01:00
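A self-contained sketch of the pattern with hypothetical names (the real signatures live in lib/model and lib/protocol): the handler reports a missing folder instead of panicking, and the protocol layer treats any error as grounds for dropping the connection.

package main

import (
	"errors"
	"fmt"
)

var errFolderMissing = errors.New("folder not running")

type model struct {
	running map[string]bool // hypothetical: folder ID -> started
}

// handleIndex returns an error instead of panicking when the folder
// isn't available yet; after the resulting disconnect, the peer
// reconnects a few moments later, when the folder may have started.
func (m *model) handleIndex(folder string) error {
	if !m.running[folder] {
		return errFolderMissing
	}
	// ... apply the index message ...
	return nil
}

func main() {
	m := &model{running: map[string]bool{"default": true}}
	fmt.Println(m.handleIndex("default")) // <nil>
	fmt.Println(m.handleIndex("photos"))  // folder not running
}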
Simon Frei
6fd5e78740 lib: Consistently unsubscribe from config-wrapper (fixes #6133) (#6205) 2019-12-04 07:15:00 +01:00
Jakob Borg
a9e490adfa cmd/ursrv: Show more architectures (fixes #6211) 2019-12-03 21:34:32 +01:00
Jakob Borg
d8e7e92512 build: Generalize code signing
Should, at the least, also codesign when building zip for mac.
2019-12-03 08:37:43 +01:00
Paul Brit
eca156fd7f lib/osutil: Increase maxfiles on macOS properly (fixes #6206) (#6207) 2019-12-03 07:26:22 +01:00
Marcus Legendre
b3fd9a8d53 lib/ignore: Don't create empty ".stignore" files (fixes #6190) (#6197)
This will:

1. prevent creation of a new .stignore if there are no ignore patterns
2. delete an existing .stignore if all ignore patterns are removed
2019-12-02 08:19:02 +01:00
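Both rules fit in one small save routine; an illustrative sketch using the standard library rather than Syncthing's fs abstraction:

package ignore

import (
	"io/ioutil"
	"os"
	"strings"
)

// SaveIgnores writes .stignore only when there are patterns (rule 1)
// and removes an existing file when the last pattern is gone (rule 2).
func SaveIgnores(path string, patterns []string) error {
	if len(patterns) == 0 {
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil
	}
	data := strings.Join(patterns, "\n") + "\n"
	return ioutil.WriteFile(path, []byte(data), 0644)
}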
Simon Frei
0bec01b827 lib/db: Remove *instance by making everything *Lowlevel (#6204) 2019-12-02 08:18:04 +01:00
Jakob Borg
e82a7e3dfa all: Propagate errors from NamespacedKV (#6203)
As foretold by the prophecy, "once the database refactor is merged, then
shall appear a request to propagate errors from the store known
throughout the land as the NamespacedKV, and it shall be good".
2019-11-30 13:03:24 +01:00
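A sketch of what propagating those errors means in practice, with illustrative names and an assumed 8-byte encoding: the typed getters now return the backend error instead of swallowing it.

package db

import "encoding/binary"

// store is a stand-in for the database backend interface.
type store interface {
	Get(key []byte) ([]byte, error)
}

type namespacedKV struct {
	db     store
	prefix []byte
}

// Int64 surfaces the backend error; before the change a failed read
// was indistinguishable from a missing or zero value.
func (n *namespacedKV) Int64(key string) (int64, error) {
	k := make([]byte, 0, len(n.prefix)+len(key))
	k = append(append(k, n.prefix...), key...)
	bs, err := n.db.Get(k)
	if err != nil {
		return 0, err
	}
	return int64(binary.BigEndian.Uint64(bs)), nil // assumes an 8-byte value
}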
Jakob Borg
928767e316 lib/model: gofmt lol :( 2019-11-29 09:29:59 +01:00
Evgeny Kuznetsov
1c277fc096 lib/fs: Add case-insensitive fakefs (#6074) 2019-11-29 09:17:42 +01:00
Jakob Borg
c71116ee94 Implement database abstraction, error checking (ref #5907) (#6107)
This PR does two things, because one led to the other:

- Move the leveldb specific stuff into a small "backend" package that
defines a backend interface and the leveldb implementation. This allows,
potentially, in the future, switching the db implementation to another
KV store should we wish to do so.

- Add proper error handling all along the way. The db and backend
packages are now errcheck clean. However, I drew the line at modifying
the FileSet API in order to keep this manageable and not continue
refactoring all of the rest of Syncthing. As such, the FileSet methods
still panic on database errors, except for the "database is closed"
error which is instead handled by silently returning as quickly as
possible, with the assumption that we're anyway "on the way out".
2019-11-29 09:11:52 +01:00
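A minimal sketch of the resulting shape, with illustrative names (the real interface lives in lib/db/backend, and NewPrefixIterator appears in the diffs below): the db package programs against a small KV interface, and leveldb is just one implementation of it.

package backend

// Backend is roughly the surface the db layer needs from a KV store.
type Backend interface {
	Get(key []byte) ([]byte, error)
	Put(key, val []byte) error
	Delete(key []byte) error
	NewPrefixIterator(prefix []byte) (Iterator, error)
	Close() error
}

// Iterator walks keys under a prefix; any step can fail, so callers
// check Error after the loop.
type Iterator interface {
	Next() bool
	Key() []byte
	Value() []byte
	Error() error
	Release()
}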
Robin Schoonover
a5bbc12625 gui: Sort versions by date in restore dropdown (#6201) 2019-11-29 08:32:25 +01:00
Simon Frei
606154b183 lib/model: Also send folder summary from sync-preparing (ref #6028) (#6202) 2019-11-29 08:30:17 +01:00
Jakob Borg
f9c380d45b cmd/syncthing: Implement log rotation (fixes #6104) (#6198)
Since we've taken it upon ourselves to create a log file by default on
Windows, this adds proper management of that log file. There are two new
options:

  -log-max-old-files="3"    Number of old files to keep (zero to keep only current).
  -log-max-size="10485760"  Maximum size of any file (zero to disable log rotation).

The default values result in four files (syncthing.log, syncthing.0.log,
..., syncthing.2.log) each up to 10 MiB in size. To not use log rotation
at all, the user can say -log-max-size=0.
2019-11-28 12:26:14 +01:00
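The two options are plain flags; a compilable sketch of their definitions, mirroring the flag.IntVar calls in the cmd/syncthing diff further down:

package main

import (
	"flag"
	"fmt"
)

func main() {
	logMaxSize := flag.Int("log-max-size", 10<<20, "Maximum size of any file (zero to disable log rotation).")
	logMaxFiles := flag.Int("log-max-old-files", 3, "Number of old files to keep (zero to keep only current).")
	flag.Parse()
	fmt.Println(*logMaxSize, *logMaxFiles) // defaults: 10485760 3
}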
Aman Gupta
a04f54a16a lib/upnp: Use simple continue in loop (#6192) 2019-11-26 22:55:34 +00:00
Aman Gupta
509d123251 lib/upnp: Ensure UPnP HTTP requests have trailing \r\n (#6193) 2019-11-26 22:54:46 +00:00
Simon Frei
b32821a586 lib/config, lib/connections: Remove ListenAddresses hack (#6188) 2019-11-26 17:07:25 +01:00
Otiel
8ced8ad562 github: bump Syncthing version to v1... (#6191) 2019-11-26 08:25:50 +00:00
Simon Frei
1bae4b7f50 all: Use context in lib/dialer (#6177)
* all: Use context in lib/dialer

* a bit slimmer

* https://github.com/syncthing/syncthing/pull/5753

* bot

* missed adding debug.go

* errors.Cause

* simultaneous dialing

* anti-leak
2019-11-26 07:39:51 +00:00
Jakob Borg
4e151d380c lib/versioner: Reduce surface area (#6186)
* lib/versioner: Reduce surface area

This is a refactor while I was anyway rooting around in the versioner.
Instead of exporting every possible implementation and the factory and
letting the caller do whatever, this now encapsulates all that and
exposes a New() that takes a config.VersioningConfiguration.

Given that and that we don't know (from the outside) how a versioner
works or what state it keeps, we now just construct it once per folder
and keep it around. Previously it was recreated for each restore
request.

* unparam

* wip
2019-11-26 07:39:31 +00:00
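The narrowed API sketched with assumed details (the exact New signature is a guess from the commit message; the Type and Params fields match the removed FolderConfiguration.Versioner method shown in the diffs below):

package versioner

import (
	"fmt"

	"github.com/syncthing/syncthing/lib/config"
	"github.com/syncthing/syncthing/lib/fs"
)

type Versioner interface {
	Archive(filePath string) error
}

type factory func(folderID string, filesystem fs.Filesystem, params map[string]string) Versioner

// factories is filled in by each implementation's init(); the
// implementations themselves are no longer exported.
var factories = map[string]factory{}

// New is the single exported entry point, constructed once per folder
// and kept around rather than recreated per restore request.
func New(folderID string, filesystem fs.Filesystem, cfg config.VersioningConfiguration) (Versioner, error) {
	f, ok := factories[cfg.Type]
	if !ok {
		return nil, fmt.Errorf("requested versioning type %q does not exist", cfg.Type)
	}
	return f(folderID, filesystem, cfg.Params), nil
}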
Simon Frei
f747ba6d69 lib/ignore: Keep skipping ignored dirs for rooted patterns (#6151)
* lib/ignore: Keep skipping ignored dirs for rooted patterns

* review

* clarify comment and lint

* glob.QuoteMeta

* review
2019-11-26 07:37:41 +00:00
Simon Frei
33258b06f4 lib/connections: Dialer code deduplication (#6187) 2019-11-26 07:36:58 +00:00
Jakob Borg
4340589501 Merge branch 'release'
Discarding the commit on that branch...
2019-11-25 11:10:09 +01:00
Simon Frei
4d368a37e2 lib/model, lib/protocol: Add contexts sending indexes and download-progress (#6176) 2019-11-25 11:07:36 +01:00
dependabot-preview[bot]
999647b7d6 build(deps): bump github.com/urfave/cli from 1.22.1 to 1.22.2 (#6183)
Bumps [github.com/urfave/cli](https://github.com/urfave/cli) from 1.22.1 to 1.22.2.
- [Release notes](https://github.com/urfave/cli/releases)
- [Changelog](https://github.com/urfave/cli/blob/master/docs/CHANGELOG.md)
- [Commits](https://github.com/urfave/cli/compare/v1.22.1...v1.22.2)

Signed-off-by: dependabot-preview[bot] <support@dependabot.com>
2019-11-25 11:05:55 +01:00
Simon Frei
cf312abc72 lib: Wrap errors with errors.Wrap instead of fmt.Errorf (#6181) 2019-11-23 15:20:54 +00:00
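The practical difference in a self-contained example (errors.Cause also shows up in the dialer commit above): wrapping keeps the original error recoverable, where fmt.Errorf's %v flattens it to text.

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var errRefused = errors.New("connection refused")

func dial() error {
	return errRefused // pretend the dial failed
}

func main() {
	wrapped := errors.Wrap(dial(), "dial")
	fmt.Println(wrapped)                             // dial: connection refused
	fmt.Println(errors.Cause(wrapped) == errRefused) // true

	flattened := fmt.Errorf("dial: %v", dial())
	fmt.Println(flattened == errRefused) // false: the cause is gone
}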
Mateusz Ż
e2f6d0d6c4 UI enhancements on mobile (#6180)
* Set fallback font for log viewer

* Enable logo scaling in About view

* Don't split "dependency list" into 2 columns on mobile
2019-11-23 12:25:25 +00:00
Simon Frei
65d4dd32cb lib/model: Also handle ServeBackground (#6173) 2019-11-22 21:30:16 +01:00
Simon Frei
de886b3f22 lib/relay: Prevent lock nil deref when creating dynamic client (#6175) 2019-11-21 17:45:06 +00:00
Jakob Borg
8c91e012c7 Merge branch 'release'
* release:
  gui: Prioritize non-idle folder states (fixes #6169) (#6170)
2019-11-21 09:36:03 +01:00
Simon Frei
57d668ed1d lib/config: Do introductions in a single config change (#6162) 2019-11-21 08:41:41 +01:00
Simon Frei
90d85fd0a2 lib: Replace done channel with contexts in util services and add names to them (#6166) 2019-11-21 08:41:15 +01:00
Simon Frei
552ea68672 gui: Prioritize non-idle folder states (fixes #6169) (#6170) 2019-11-20 19:06:03 +01:00
Artur Zubilewicz
80eac473d9 gui: Make 'Nearby devices' look like links (fixes #6057) (#6165)
- The 'help-box' with nearby devices will appear only when there are
  any nearby devices
- IDs of nearby devices will look like links (i.e. underlined when
  hovered over)
2019-11-19 22:15:27 +01:00
Ruslan Yevdokymov
c1db8b2680 gui: Add upgrade confirmation dialog (fixes #5887) (#6167) 2019-11-19 22:05:41 +01:00
Jakob Borg
df866e10c8 gui: Increase padding a bit again (ref #6153)
I changed my mind on this; the modals need *some* padding to not look weird.
2019-11-19 22:03:31 +01:00
Simon Frei
0d14ee4142 lib/model: Don't info log repeat pull errors (#6149) 2019-11-19 09:56:53 +01:00
Simon Frei
28edf2f5bb lib/model: Keep fmut locked while adding/starting/restarting folders (#6156) 2019-11-18 21:15:26 +01:00
Jakob Borg
e7100bc573 golang-ci: Skip "cognitive complexity" check for now 2019-11-17 08:54:59 +01:00
Simon Frei
5edf4660e2 lib/model: Prevent cleanup-race in testing (ref #6152) (#6155) 2019-11-14 23:08:40 +01:00
Domenic Horner
a5699d40a8 gui: Decrease padding on the panel and modal bodies (#6153)
This allows better viewing on a condensed screen, and slightly reduces the screen real estate used.
2019-11-13 15:14:00 +01:00
Simon Frei
f80ce17497 lib/model: In tests prevent goroutine leaks and increase timeouts (#6152) 2019-11-13 10:21:54 +01:00
Simon Frei
ce72bee576 lib/model: Simplify pull error/retry logic (fixes #6139) (#6141) 2019-11-11 15:50:28 +01:00
143 changed files with 5454 additions and 3188 deletions


@@ -36,7 +36,7 @@ its entirety.
### Version Information
Syncthing Version: v0.x.y
Syncthing Version: v1.x.y
OS Version: Windows 7 / Ubuntu 14.04 / ...
Browser Version: (if applicable, for GUI issues)


@@ -15,6 +15,7 @@ linters:
- gocyclo
- funlen
- wsl
- gocognit
service:
golangci-lint-version: 1.21.x


@@ -489,10 +489,7 @@ func buildTar(target target) {
}
build(target, tags)
if goos == "darwin" {
macosCodesign(target.BinaryName())
}
codesign(target)
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -515,10 +512,7 @@ func buildZip(target target) {
}
build(target, tags)
if goos == "windows" {
windowsCodesign(target.BinaryName())
}
codesign(target)
for i := range target.archiveFiles {
target.archiveFiles[i].src = strings.Replace(target.archiveFiles[i].src, "{{binary}}", target.BinaryName(), 1)
@@ -1179,6 +1173,15 @@ func zipFile(out string, files []archiveFile) {
}
}
func codesign(target target) {
switch goos {
case "windows":
windowsCodesign(target.BinaryName())
case "darwin":
macosCodesign(target.BinaryName())
}
}
func macosCodesign(file string) {
if pass := os.Getenv("CODESIGN_KEYCHAIN_PASS"); pass != "" {
bs, err := runError("security", "unlock-keychain", "-p", pass)


@@ -13,11 +13,15 @@ import (
"time"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
func dump(ldb *db.Lowlevel) {
it := ldb.NewIterator(nil, nil)
func dump(ldb backend.Backend) {
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
for it.Next() {
key := it.Key()
switch key[0] {


@@ -10,8 +10,10 @@ import (
"container/heap"
"encoding/binary"
"fmt"
"log"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
type SizedElement struct {
@@ -37,11 +39,14 @@ func (h *ElementHeap) Pop() interface{} {
return x
}
func dumpsize(ldb *db.Lowlevel) {
func dumpsize(ldb backend.Backend) {
h := &ElementHeap{}
heap.Init(h)
it := ldb.NewIterator(nil, nil)
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
var ele SizedElement
for it.Next() {
key := it.Key()


@@ -10,8 +10,10 @@ import (
"bytes"
"encoding/binary"
"fmt"
"log"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -31,7 +33,7 @@ type sequenceKey struct {
sequence uint64
}
func idxck(ldb *db.Lowlevel) (success bool) {
func idxck(ldb backend.Backend) (success bool) {
folders := make(map[uint32]string)
devices := make(map[uint32]string)
deviceToIDs := make(map[string]uint32)
@@ -42,7 +44,10 @@ func idxck(ldb *db.Lowlevel) (success bool) {
var localDeviceKey uint32
success = true
it := ldb.NewIterator(nil, nil)
it, err := ldb.NewPrefixIterator(nil)
if err != nil {
log.Fatal(err)
}
for it.Next() {
key := it.Key()
switch key[0] {


@@ -13,7 +13,7 @@ import (
"os"
"path/filepath"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
func main() {
@@ -30,7 +30,7 @@ func main() {
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
ldb, err := db.OpenRO(path)
ldb, err := backend.OpenLevelDBRO(path)
if err != nil {
log.Fatal(err)
}


@@ -7,6 +7,7 @@ package main
import (
"bytes"
"compress/gzip"
"context"
"crypto/tls"
"encoding/json"
"flag"
@@ -480,7 +481,7 @@ func handleRelayTest(request request) {
if debug {
log.Println("Request for", request.relay)
}
if !client.TestRelay(request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if !client.TestRelay(context.TODO(), request.relay.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if debug {
log.Println("Test for relay", request.relay, "failed")
}


@@ -4,6 +4,7 @@ package main
import (
"bufio"
"context"
"crypto/tls"
"flag"
"log"
@@ -19,6 +20,8 @@ import (
)
func main() {
ctx := context.Background()
log.SetOutput(os.Stdout)
log.SetFlags(log.LstdFlags | log.Lshortfile)
@@ -76,7 +79,7 @@ func main() {
}()
for {
conn, err := client.JoinSession(<-recv)
conn, err := client.JoinSession(ctx, <-recv)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -90,13 +93,13 @@ func main() {
log.Fatal(err)
}
invite, err := client.GetInvitationFromRelay(uri, id, []tls.Certificate{cert}, 10*time.Second)
invite, err := client.GetInvitationFromRelay(ctx, uri, id, []tls.Certificate{cert}, 10*time.Second)
if err != nil {
log.Fatal(err)
}
log.Println("Received invitation", invite)
conn, err := client.JoinSession(invite)
conn, err := client.JoinSession(ctx, invite)
if err != nil {
log.Fatalln("Failed to join", err)
}
@@ -104,7 +107,7 @@ func main() {
connectToStdio(stdin, conn)
log.Println("Finished", conn.RemoteAddr(), conn.LocalAddr())
} else if test {
if client.TestRelay(uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
if client.TestRelay(ctx, uri, []tls.Certificate{cert}, time.Second, 2*time.Second, 4) {
log.Println("OK")
} else {
log.Println("FAIL")


@@ -154,6 +154,8 @@ type RuntimeOptions struct {
browserOnly bool
hideConsole bool
logFile string
logMaxSize int
logMaxFiles int
auditEnabled bool
auditFile string
paused bool
@@ -180,6 +182,8 @@ func defaultRuntimeOptions() RuntimeOptions {
cpuProfile: os.Getenv("STCPUPROFILE") != "",
stRestarting: os.Getenv("STRESTART") != "",
logFlags: log.Ltime,
logMaxSize: 10 << 20, // 10 MiB
logMaxFiles: 3, // plus the current one
}
if os.Getenv("STTRACE") != "" {
@@ -222,6 +226,8 @@ func parseCommandLineOptions() RuntimeOptions {
flag.BoolVar(&options.paused, "paused", false, "Start with all devices and folders paused")
flag.BoolVar(&options.unpaused, "unpaused", false, "Start with all devices and folders unpaused")
flag.StringVar(&options.logFile, "logfile", options.logFile, "Log file name (still always logs to stdout). Cannot be used together with -no-restart/STNORESTART environment variable.")
flag.IntVar(&options.logMaxSize, "log-max-size", options.logMaxSize, "Maximum size of any file (zero to disable log rotation).")
flag.IntVar(&options.logMaxFiles, "log-max-old-files", options.logMaxFiles, "Number of old files to keep (zero to keep only current).")
flag.StringVar(&options.auditFile, "auditfile", options.auditFile, "Specify audit file (use \"-\" for stdout, \"--\" for stderr)")
flag.BoolVar(&options.allowNewerConfig, "allow-newer-config", false, "Allow loading newer than current config version")
if runtime.GOOS == "windows" {
@@ -512,7 +518,7 @@ func upgradeViaRest() error {
r.Header.Set("X-API-Key", cfg.GUI().APIKey)
tr := &http.Transport{
Dial: dialer.Dial,
DialContext: dialer.DialContext,
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}


@@ -9,10 +9,12 @@ package main
import (
"bufio"
"context"
"fmt"
"io"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
@@ -48,7 +50,15 @@ func monitorMain(runtimeOptions RuntimeOptions) {
logFile := runtimeOptions.logFile
if logFile != "-" {
var fileDst io.Writer = newAutoclosedFile(logFile, logFileAutoCloseDelay, logFileMaxOpenTime)
var fileDst io.Writer
if runtimeOptions.logMaxSize > 0 {
open := func(name string) (io.WriteCloser, error) {
return newAutoclosedFile(name, logFileAutoCloseDelay, logFileMaxOpenTime), nil
}
fileDst = newRotatedFile(logFile, open, int64(runtimeOptions.logMaxSize), runtimeOptions.logMaxFiles)
} else {
fileDst = newAutoclosedFile(logFile, logFileAutoCloseDelay, logFileMaxOpenTime)
}
if runtime.GOOS == "windows" {
// Translate line breaks to Windows standard
@@ -317,6 +327,81 @@ func restartMonitorWindows(args []string) error {
return cmd.Start()
}
// rotatedFile keeps a set of rotating logs. There will be the base file plus up
// to maxFiles rotated ones, each ~ maxSize bytes large.
type rotatedFile struct {
name string
create createFn
maxSize int64 // bytes
maxFiles int
currentFile io.WriteCloser
currentSize int64
}
// the createFn should act equivalently to os.Create
type createFn func(name string) (io.WriteCloser, error)
func newRotatedFile(name string, create createFn, maxSize int64, maxFiles int) *rotatedFile {
return &rotatedFile{
name: name,
create: create,
maxSize: maxSize,
maxFiles: maxFiles,
}
}
func (r *rotatedFile) Write(bs []byte) (int, error) {
// Check if we're about to exceed the max size, and if so close this
// file so we'll start on a new one.
if r.currentSize+int64(len(bs)) > r.maxSize {
r.currentFile.Close()
r.currentFile = nil
r.currentSize = 0
}
// If we have no current log, rotate old files out of the way and create
// a new one.
if r.currentFile == nil {
r.rotate()
fd, err := r.create(r.name)
if err != nil {
return 0, err
}
r.currentFile = fd
}
n, err := r.currentFile.Write(bs)
r.currentSize += int64(n)
return n, err
}
func (r *rotatedFile) rotate() {
// The files are named "name", "name.0", "name.1", ...
// "name.(r.maxFiles-1)". Increase the numbers on the
// suffixed ones.
for i := r.maxFiles - 1; i > 0; i-- {
from := numberedFile(r.name, i-1)
to := numberedFile(r.name, i)
err := os.Rename(from, to)
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// Rename the base to base.0
err := os.Rename(r.name, numberedFile(r.name, 0))
if err != nil && !os.IsNotExist(err) {
fmt.Println("LOG: Rotating logs:", err)
}
}
// numberedFile adds the number between the file name and the extension.
func numberedFile(name string, num int) string {
ext := filepath.Ext(name) // contains the dot
withoutExt := name[:len(name)-len(ext)]
return fmt.Sprintf("%s.%d%s", withoutExt, num, ext)
}
// An autoclosedFile is an io.WriteCloser that opens itself for appending on
// Write() and closes itself after an interval of no writes (closeDelay) or
// when the file has been open for too long (maxOpenTime). A call to Write()


@@ -7,6 +7,7 @@
package main
import (
"io"
"io/ioutil"
"os"
"path/filepath"
@@ -14,6 +15,123 @@ import (
"time"
)
func TestRotatedFile(t *testing.T) {
// Verify that log rotation happens.
dir, err := ioutil.TempDir("", "syncthing")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(dir)
open := func(name string) (io.WriteCloser, error) {
return os.Create(name)
}
logName := filepath.Join(dir, "log.txt")
testData := []byte("12345678\n")
maxSize := int64(len(testData) + len(testData)/2)
// We allow the log file plus two rotated copies.
rf := newRotatedFile(logName, open, maxSize, 2)
// Write some bytes.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
// They should be in the log.
checkSize(t, logName, len(testData))
checkNotExist(t, logName+".0")
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, logName+".1")
// Write another byte. That should fit without causing an extra rotate.
_, _ = rf.Write([]byte{42})
checkSize(t, logName, len(testData)+1)
checkSize(t, numberedFile(logName, 0), len(testData))
checkNotExist(t, numberedFile(logName, 1))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData)+1) // the one we wrote extra to, now rotated
checkSize(t, numberedFile(logName, 1), len(testData))
checkNotExist(t, numberedFile(logName, 2))
// Write some more bytes. We should rotate and write into a new file as the
// new bytes don't fit.
if _, err := rf.Write(testData); err != nil {
t.Fatal(err)
}
checkSize(t, logName, len(testData))
checkSize(t, numberedFile(logName, 0), len(testData))
checkSize(t, numberedFile(logName, 1), len(testData)+1)
checkNotExist(t, numberedFile(logName, 2)) // exceeds maxFiles so deleted
}
func TestNumberedFile(t *testing.T) {
// Mostly just illustrates where the number ends up and makes sure it
// doesn't crash without an extension.
cases := []struct {
in string
num int
out string
}{
{
in: "syncthing.log",
num: 42,
out: "syncthing.42.log",
},
{
in: filepath.Join("asdfasdf", "syncthing.log.txt"),
num: 42,
out: filepath.Join("asdfasdf", "syncthing.log.42.txt"),
},
{
in: "syncthing-log",
num: 42,
out: "syncthing-log.42",
},
}
for _, tc := range cases {
res := numberedFile(tc.in, tc.num)
if res != tc.out {
t.Errorf("numberedFile(%q, %d) => %q, expected %q", tc.in, tc.num, res, tc.out)
}
}
}
func checkSize(t *testing.T, name string, size int) {
t.Helper()
info, err := os.Lstat(name)
if err != nil {
t.Fatal(err)
}
if info.Size() != int64(size) {
t.Errorf("%s wrong size: %d != expected %d", name, info.Size(), size)
}
}
func checkNotExist(t *testing.T, name string) {
t.Helper()
_, err := os.Lstat(name)
if !os.IsNotExist(err) {
t.Errorf("%s should not exist", name)
}
}
func TestAutoClosedFile(t *testing.T) {
os.RemoveAll("_autoclose")
defer os.RemoveAll("_autoclose")


@@ -1416,7 +1416,7 @@ func getReport(db *sql.DB) map[string]interface{} {
r["categories"] = categories
r["versions"] = group(byVersion, analyticsFor(versions, 2000), 10)
r["versionPenetrations"] = penetrationLevels(analyticsFor(versions, 2000), []float64{50, 75, 90, 95})
r["platforms"] = group(byPlatform, analyticsFor(platforms, 2000), 5)
r["platforms"] = group(byPlatform, analyticsFor(platforms, 2000), 10)
r["compilers"] = group(byCompiler, analyticsFor(compilers, 2000), 5)
r["builders"] = analyticsFor(builders, 12)
r["distributions"] = analyticsFor(distributions, 10)

go.mod

@@ -39,7 +39,7 @@ require (
github.com/syncthing/notify v0.0.0-20190709140112-69c7a957d3e2
github.com/syndtr/goleveldb v1.0.1-0.20190923125748-758128399b1d
github.com/thejerf/suture v3.0.2+incompatible
github.com/urfave/cli v1.22.1
github.com/urfave/cli v1.22.2
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297

go.sum

@@ -207,6 +207,8 @@ github.com/urfave/cli v1.20.0 h1:fDqGv3UG/4jbVl/QkFwEdddtEDjh/5Ov6X+0B/3bPaw=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1 h1:+mkCCcOFKPnCmVYVcURKps1Xe+3zP90gSYGNfRkjoIY=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.2 h1:gsqYFH8bb9ekPA12kRo0hfjngWQjkJPlN9R0N78BoUo=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0 h1:okhMind4q9H1OxF44gNegWkiP4H/gsTFLalHFa4OOUI=
github.com/vitrun/qart v0.0.0-20160531060029-bf64b92db6b0/go.mod h1:TTbGUfE+cXXceWtbTHq6lqcTvYPBKLNejBEbnUsQJtU=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=


@@ -246,6 +246,14 @@ a.toggler:hover {
text-decoration: none;
}
/**
* Panel padding decrease
*/
.panel-collapse .panel-body {
padding: 5px;
}
/**
* Progress bars with centered text
*/
@@ -348,6 +356,12 @@ ul.three-columns li, ul.two-columns li {
* columns. */
white-space: normal;
}
.two-columns {
-webkit-column-count: 1;
-moz-column-count: 1;
column-count: 1;
}
}
@media (max-width:479px) {
@@ -392,7 +406,7 @@ ul.three-columns li, ul.two-columns li {
max-width: 100%;
width: 100%;
}
/* all buttons, except panel headings, get bottom margin, as they won't fit
beside each other anymore */
.btn:not(.panel-heading),
@@ -400,4 +414,4 @@ ul.three-columns li, ul.two-columns li {
.btn:not(.panel-heading) + .btn:not(.panel-heading) {
margin-bottom: 1rem;
}
}
}


@@ -42,7 +42,7 @@
<p class="navbar-text hidden-xs" ng-class="{'hidden-sm':upgradeInfo && upgradeInfo.newer}">{{thisDeviceName()}}</p>
<ul class="nav navbar-nav navbar-right">
<li ng-if="upgradeInfo && upgradeInfo.newer" class="upgrade-newer">
<button type="button" class="btn navbar-btn btn-primary btn-sm" ng-click="upgrade()">
<button type="button" class="btn navbar-btn btn-primary btn-sm" data-toggle="modal" data-target="#upgrade">
<span class="fas fa-arrow-circle-up"></span>
<span class="hidden-xs" translate translate-value-version="{{upgradeInfo.latest}}">Upgrade To {%version%}</span>
</button>
@@ -841,6 +841,7 @@
<ng-include src="'syncthing/transfer/failedFilesModalView.html'"></ng-include>
<ng-include src="'syncthing/transfer/remoteNeededFilesModalView.html'"></ng-include>
<ng-include src="'syncthing/transfer/localChangedFilesModalView.html'"></ng-include>
<ng-include src="'syncthing/core/upgradeModalView.html'"></ng-include>
<ng-include src="'syncthing/core/majorUpgradeModalView.html'"></ng-include>
<ng-include src="'syncthing/core/aboutModalView.html'"></ng-include>
<ng-include src="'syncthing/core/discoveryFailuresModalView.html'"></ng-include>


@@ -1,7 +1,7 @@
<modal id="about" status="info" icon="far fa-heart" heading="{{'About' | translate}}" large="yes" closeable="yes">
<div class="modal-body">
<h1 class="text-center">
<img alt="Syncthing" src="assets/img/logo-horizontal.svg" style="vertical-align: -16px" height="100" width="366" />
<img alt="Syncthing" src="assets/img/logo-horizontal.svg" style="max-width: 366px; vertical-align: -16px" />
<br />
<small>{{versionString()}}</small>
<br />


@@ -8,7 +8,7 @@
<div id="log-viewer-log" class="tab-pane in active">
<label translate ng-if="logging.logEntries.length == 0">Loading...</label>
<textarea id="logViewerText" class="form-control" rows="20" ng-if="logging.logEntries.length != 0" readonly style="font-family: Consolas; font-size: 11px; overflow: auto;">{{ logging.content() }}</textarea>
<textarea id="logViewerText" class="form-control" rows="20" ng-if="logging.logEntries.length != 0" readonly style="font-family: Consolas, monospace; font-size: 11px; overflow: auto;">{{ logging.content() }}</textarea>
<p translate class="help-block" ng-style="{'visibility': logging.paused ? 'visible' : 'hidden'}">Log tailing paused. Scroll to the bottom to continue.</p>
</div>


@@ -1387,6 +1387,7 @@ angular.module('syncthing.core')
$scope.upgrade = function () {
restarting = true;
$('#upgrade').modal('hide');
$('#majorUpgrade').modal('hide');
$('#upgrading').modal();
$http.post(urlbase + '/system/upgrade').success(function () {
@@ -2055,6 +2056,9 @@ angular.module('syncthing.core')
value.modTime = new Date(value.modTime);
value.versionTime = new Date(value.versionTime);
});
values.sort(function (a, b) {
return b.versionTime - a.versionTime;
});
});
if (closed) return;
$scope.restoreVersions.versions = data;


@@ -0,0 +1,18 @@
<modal id="upgrade" status="warning" icon="fas fa-arrow-circle-up" heading="{{'Upgrade' | translate}}" large="no" closeable="yes">
<div class="modal-body">
<p>
<span translate>Are you sure you want to upgrade?</span>
</p>
<p>
<a ng-href="https://github.com/syncthing/syncthing/releases/tag/{{upgradeInfo.latest}}" target="_blank" translate>Release Notes</a>
</p>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-primary btn-sm" ng-click="upgrade()">
<span class="fas fa-check"></span>&nbsp;<span translate>Upgrade</span>
</button>
<button type="button" class="btn btn-default btn-sm" data-dismiss="modal">
<span class="fas fa-times"></span>&nbsp;<span translate>Close</span>
</button>
</div>
</modal>


@@ -15,10 +15,10 @@
<datalist id="discovery-list">
<option ng-repeat="id in discovery" value="{{id}}" />
</datalist>
<p class="help-block" ng-if="discovery">
<p class="help-block" ng-if="discovery && discovery.length !== 0">
<span translate>You can also select one of these nearby devices:</span>
<ul>
<li ng-repeat="id in discovery"><a ng-click="currentDevice.deviceID = id">{{id}}</a></li>
<li ng-repeat="id in discovery"><a href="#" ng-click="currentDevice.deviceID = id">{{id}}</a></li>
</ul>
</p>
<p class="help-block">


@@ -8,6 +8,7 @@ package api
import (
"bytes"
"context"
"crypto/tls"
"crypto/x509"
"encoding/json"
@@ -136,7 +137,7 @@ func New(id protocol.DeviceID, cfg config.Wrapper, assetDir, tlsDefaultCommonNam
configChanged: make(chan struct{}),
startedOnce: make(chan struct{}),
}
s.Service = util.AsService(s.serve)
s.Service = util.AsService(s.serve, s.String())
return s
}
@@ -207,7 +208,7 @@ func sendJSON(w http.ResponseWriter, jsonObject interface{}) {
fmt.Fprintf(w, "%s\n", bs)
}
func (s *service) serve(stop chan struct{}) {
func (s *service) serve(ctx context.Context) {
listener, err := s.getListener(s.cfg.GUI())
if err != nil {
select {
@@ -381,7 +382,7 @@ func (s *service) serve(stop chan struct{}) {
// Wait for stop, restart or error signals
select {
case <-stop:
case <-ctx.Done():
// Shutting down permanently
l.Debugln("shutting down (stop)")
case <-s.configChanged:
@@ -768,11 +769,21 @@ func (s *service) getSystemConnections(w http.ResponseWriter, r *http.Request) {
}
func (s *service) getDeviceStats(w http.ResponseWriter, r *http.Request) {
sendJSON(w, s.model.DeviceStatistics())
stats, err := s.model.DeviceStatistics()
if err != nil {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
sendJSON(w, stats)
}
func (s *service) getFolderStats(w http.ResponseWriter, r *http.Request) {
sendJSON(w, s.model.FolderStatistics())
stats, err := s.model.FolderStatistics()
if err != nil {
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
sendJSON(w, stats)
}
func (s *service) getDBFile(w http.ResponseWriter, r *http.Request) {


@@ -10,7 +10,6 @@ import (
"net"
"time"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/model"
@@ -49,12 +48,12 @@ func (m *mockedModel) ConnectionStats() map[string]interface{} {
return nil
}
func (m *mockedModel) DeviceStatistics() map[string]stats.DeviceStatistics {
return nil
func (m *mockedModel) DeviceStatistics() (map[string]stats.DeviceStatistics, error) {
return nil, nil
}
func (m *mockedModel) FolderStatistics() map[string]stats.FolderStatistics {
return nil
func (m *mockedModel) FolderStatistics() (map[string]stats.FolderStatistics, error) {
return nil, nil
}
func (m *mockedModel) CurrentFolderFile(folder string, file string) (protocol.FileInfo, bool) {
@@ -153,21 +152,29 @@ func (m *mockedModel) LocalChangedFiles(folder string, page, perpage int) []db.F
return nil
}
func (m *mockedModel) Serve() {}
func (m *mockedModel) Stop() {}
func (m *mockedModel) Index(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) {}
func (m *mockedModel) IndexUpdate(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) {
func (m *mockedModel) Serve() {}
func (m *mockedModel) Stop() {}
func (m *mockedModel) Index(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) error {
return nil
}
func (m *mockedModel) IndexUpdate(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo) error {
return nil
}
func (m *mockedModel) Request(deviceID protocol.DeviceID, folder, name string, size int32, offset int64, hash []byte, weakHash uint32, fromTemporary bool) (protocol.RequestResponse, error) {
return nil, nil
}
func (m *mockedModel) ClusterConfig(deviceID protocol.DeviceID, config protocol.ClusterConfig) {}
func (m *mockedModel) ClusterConfig(deviceID protocol.DeviceID, config protocol.ClusterConfig) error {
return nil
}
func (m *mockedModel) Closed(conn protocol.Connection, err error) {}
func (m *mockedModel) DownloadProgress(deviceID protocol.DeviceID, folder string, updates []protocol.FileDownloadProgressUpdate) {
func (m *mockedModel) DownloadProgress(deviceID protocol.DeviceID, folder string, updates []protocol.FileDownloadProgressUpdate) error {
return nil
}
func (m *mockedModel) AddConnection(conn connections.Connection, hello protocol.HelloResult) {}
@@ -180,10 +187,4 @@ func (m *mockedModel) GetHello(protocol.DeviceID) protocol.HelloIntf {
return nil
}
func (m *mockedModel) AddFolder(cfg config.FolderConfiguration) {}
func (m *mockedModel) RestartFolder(from, to config.FolderConfiguration) {}
func (m *mockedModel) StartFolder(folder string) {}
func (m *mockedModel) StartDeadlockDetector(timeout time.Duration) {}


@@ -7,6 +7,7 @@
package beacon
import (
"context"
"fmt"
"net"
"time"
@@ -63,23 +64,23 @@ func newCast(name string) *cast {
}
}
func (c *cast) addReader(svc func(chan struct{}) error) {
func (c *cast) addReader(svc func(context.Context) error) {
c.reader = c.createService(svc, "reader")
c.Add(c.reader)
}
func (c *cast) addWriter(svc func(stop chan struct{}) error) {
func (c *cast) addWriter(svc func(ctx context.Context) error) {
c.writer = c.createService(svc, "writer")
c.Add(c.writer)
}
func (c *cast) createService(svc func(chan struct{}) error, suffix string) util.ServiceWithError {
return util.AsServiceWithError(func(stop chan struct{}) error {
func (c *cast) createService(svc func(context.Context) error, suffix string) util.ServiceWithError {
return util.AsServiceWithError(func(ctx context.Context) error {
l.Debugln("Starting", c.name, suffix)
err := svc(stop)
err := svc(ctx)
l.Debugf("Stopped %v %v: %v", c.name, suffix, err)
return err
})
}, fmt.Sprintf("%s/%s", c, suffix))
}
func (c *cast) Stop() {


@@ -7,34 +7,32 @@
package beacon
import (
"context"
"net"
"time"
)
func NewBroadcast(port int) Interface {
c := newCast("broadcastBeacon")
c.addReader(func(stop chan struct{}) error {
return readBroadcasts(c.outbox, port, stop)
c.addReader(func(ctx context.Context) error {
return readBroadcasts(ctx, c.outbox, port)
})
c.addWriter(func(stop chan struct{}) error {
return writeBroadcasts(c.inbox, port, stop)
c.addWriter(func(ctx context.Context) error {
return writeBroadcasts(ctx, c.inbox, port)
})
return c
}
func writeBroadcasts(inbox <-chan []byte, port int, stop chan struct{}) error {
func writeBroadcasts(ctx context.Context, inbox <-chan []byte, port int) error {
conn, err := net.ListenUDP("udp4", nil)
if err != nil {
l.Debugln(err)
return err
}
done := make(chan struct{})
defer close(done)
doneCtx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
select {
case <-stop:
case <-done:
}
<-doneCtx.Done()
conn.Close()
}()
@@ -42,7 +40,7 @@ func writeBroadcasts(inbox <-chan []byte, port int, stop chan struct{}) error {
var bs []byte
select {
case bs = <-inbox:
case <-stop:
case <-doneCtx.Done():
return nil
}
@@ -99,19 +97,17 @@ func writeBroadcasts(inbox <-chan []byte, port int, stop chan struct{}) error {
}
}
func readBroadcasts(outbox chan<- recv, port int, stop chan struct{}) error {
func readBroadcasts(ctx context.Context, outbox chan<- recv, port int) error {
conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: port})
if err != nil {
l.Debugln(err)
return err
}
done := make(chan struct{})
defer close(done)
doneCtx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
select {
case <-stop:
case <-done:
}
<-doneCtx.Done()
conn.Close()
}()
@@ -129,7 +125,7 @@ func readBroadcasts(outbox chan<- recv, port int, stop chan struct{}) error {
copy(c, bs)
select {
case outbox <- recv{c, addr}:
case <-stop:
case <-doneCtx.Done():
return nil
default:
l.Debugln("dropping message")
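This cancellation pattern repeats across the broadcast and multicast hunks; isolated into a self-contained sketch: derive a child context, let one goroutine close the blocking socket when it fires, and cancel on every return path so that goroutine cannot leak.

package main

import (
	"context"
	"net"
	"time"
)

func readLoop(ctx context.Context, conn net.PacketConn) error {
	doneCtx, cancel := context.WithCancel(ctx)
	defer cancel() // fires on any return, releasing the goroutine below

	go func() {
		<-doneCtx.Done()
		conn.Close() // unblocks a pending ReadFrom
	}()

	buf := make([]byte, 65536)
	for {
		_, _, err := conn.ReadFrom(buf)
		if err != nil {
			if doneCtx.Err() != nil {
				return nil // closed because we were told to stop
			}
			return err
		}
		// ... handle the packet ...
	}
}

func main() {
	conn, err := net.ListenPacket("udp4", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithCancel(context.Background())
	go readLoop(ctx, conn)
	cancel()
	time.Sleep(100 * time.Millisecond) // let the loop observe the cancel
}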


@@ -7,6 +7,7 @@
package beacon
import (
"context"
"errors"
"net"
"time"
@@ -16,16 +17,16 @@ import (
func NewMulticast(addr string) Interface {
c := newCast("multicastBeacon")
c.addReader(func(stop chan struct{}) error {
return readMulticasts(c.outbox, addr, stop)
c.addReader(func(ctx context.Context) error {
return readMulticasts(ctx, c.outbox, addr)
})
c.addWriter(func(stop chan struct{}) error {
return writeMulticasts(c.inbox, addr, stop)
c.addWriter(func(ctx context.Context) error {
return writeMulticasts(ctx, c.inbox, addr)
})
return c
}
func writeMulticasts(inbox <-chan []byte, addr string, stop chan struct{}) error {
func writeMulticasts(ctx context.Context, inbox <-chan []byte, addr string) error {
gaddr, err := net.ResolveUDPAddr("udp6", addr)
if err != nil {
l.Debugln(err)
@@ -37,13 +38,10 @@ func writeMulticasts(inbox <-chan []byte, addr string, stop chan struct{}) error
l.Debugln(err)
return err
}
done := make(chan struct{})
defer close(done)
doneCtx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
select {
case <-stop:
case <-done:
}
<-doneCtx.Done()
conn.Close()
}()
@@ -57,7 +55,7 @@ func writeMulticasts(inbox <-chan []byte, addr string, stop chan struct{}) error
var bs []byte
select {
case bs = <-inbox:
case <-stop:
case <-doneCtx.Done():
return nil
}
@@ -84,7 +82,7 @@ func writeMulticasts(inbox <-chan []byte, addr string, stop chan struct{}) error
success++
select {
case <-stop:
case <-doneCtx.Done():
return nil
default:
}
@@ -96,7 +94,7 @@ func writeMulticasts(inbox <-chan []byte, addr string, stop chan struct{}) error
}
}
func readMulticasts(outbox chan<- recv, addr string, stop chan struct{}) error {
func readMulticasts(ctx context.Context, outbox chan<- recv, addr string) error {
gaddr, err := net.ResolveUDPAddr("udp6", addr)
if err != nil {
l.Debugln(err)
@@ -108,13 +106,10 @@ func readMulticasts(outbox chan<- recv, addr string, stop chan struct{}) error {
l.Debugln(err)
return err
}
done := make(chan struct{})
defer close(done)
doneCtx, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
select {
case <-stop:
case <-done:
}
<-doneCtx.Done()
conn.Close()
}()
@@ -144,7 +139,7 @@ func readMulticasts(outbox chan<- recv, addr string, stop chan struct{}) error {
bs := make([]byte, 65536)
for {
select {
case <-stop:
case <-doneCtx.Done():
return nil
default:
}


@@ -10,7 +10,6 @@ package config
import (
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io"
"io/ioutil"
@@ -22,6 +21,8 @@ import (
"strconv"
"strings"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
@@ -120,18 +121,18 @@ func NewWithFreePorts(myID protocol.DeviceID) (Configuration, error) {
port, err := getFreePort("127.0.0.1", DefaultGUIPort)
if err != nil {
return Configuration{}, fmt.Errorf("get free port (GUI): %v", err)
return Configuration{}, errors.Wrap(err, "get free port (GUI)")
}
cfg.GUI.RawAddress = fmt.Sprintf("127.0.0.1:%d", port)
port, err = getFreePort("0.0.0.0", DefaultTCPPort)
if err != nil {
return Configuration{}, fmt.Errorf("get free port (BEP): %v", err)
return Configuration{}, errors.Wrap(err, "get free port (BEP)")
}
if port == DefaultTCPPort {
cfg.Options.ListenAddresses = []string{"default"}
cfg.Options.RawListenAddresses = []string{"default"}
} else {
cfg.Options.ListenAddresses = []string{
cfg.Options.RawListenAddresses = []string{
fmt.Sprintf("tcp://%s", net.JoinHostPort("0.0.0.0", strconv.Itoa(port))),
"dynamic+https://relays.syncthing.net/endpoint",
}
@@ -304,8 +305,8 @@ func (cfg *Configuration) clean() error {
existingFolders[folder.ID] = folder
}
cfg.Options.ListenAddresses = util.UniqueTrimmedStrings(cfg.Options.ListenAddresses)
cfg.Options.GlobalAnnServers = util.UniqueTrimmedStrings(cfg.Options.GlobalAnnServers)
cfg.Options.RawListenAddresses = util.UniqueTrimmedStrings(cfg.Options.RawListenAddresses)
cfg.Options.RawGlobalAnnServers = util.UniqueTrimmedStrings(cfg.Options.RawGlobalAnnServers)
if cfg.Version > 0 && cfg.Version < OldestHandledVersion {
l.Warnf("Configuration version %d is deprecated. Attempting best effort conversion, but please verify manually.", cfg.Version)
@@ -395,7 +396,7 @@ nextPendingDevice:
// Deprecated protocols are removed from the list of listeners and
// device addresses. So far just kcp*.
for _, prefix := range []string{"kcp"} {
cfg.Options.ListenAddresses = filterURLSchemePrefix(cfg.Options.ListenAddresses, prefix)
cfg.Options.RawListenAddresses = filterURLSchemePrefix(cfg.Options.RawListenAddresses, prefix)
for i := range cfg.Devices {
dev := &cfg.Devices[i]
dev.Addresses = filterURLSchemePrefix(dev.Addresses, prefix)


@@ -37,8 +37,8 @@ func init() {
func TestDefaultValues(t *testing.T) {
expected := OptionsConfiguration{
ListenAddresses: []string{"default"},
GlobalAnnServers: []string{"default"},
RawListenAddresses: []string{"default"},
RawGlobalAnnServers: []string{"default"},
GlobalAnnEnabled: true,
LocalAnnEnabled: true,
LocalAnnPort: 21027,
@@ -74,7 +74,7 @@ func TestDefaultValues(t *testing.T) {
CREnabled: true,
StunKeepaliveStartS: 180,
StunKeepaliveMinS: 20,
StunServers: []string{"default"},
RawStunServers: []string{"default"},
}
cfg := New(device1)
@@ -175,16 +175,16 @@ func TestNoListenAddresses(t *testing.T) {
}
expected := []string{""}
actual := cfg.Options().ListenAddresses
actual := cfg.Options().RawListenAddresses
if diff, equal := messagediff.PrettyDiff(expected, actual); !equal {
t.Errorf("Unexpected ListenAddresses. Diff:\n%s", diff)
t.Errorf("Unexpected RawListenAddresses. Diff:\n%s", diff)
}
}
func TestOverriddenValues(t *testing.T) {
expected := OptionsConfiguration{
ListenAddresses: []string{"tcp://:23000"},
GlobalAnnServers: []string{"udp4://syncthing.nym.se:22026"},
RawListenAddresses: []string{"tcp://:23000"},
RawGlobalAnnServers: []string{"udp4://syncthing.nym.se:22026"},
GlobalAnnEnabled: false,
LocalAnnEnabled: false,
LocalAnnPort: 42123,
@@ -222,7 +222,7 @@ func TestOverriddenValues(t *testing.T) {
CREnabled: false,
StunKeepaliveStartS: 9000,
StunKeepaliveMinS: 900,
StunServers: []string{"foo"},
RawStunServers: []string{"foo"},
}
os.Unsetenv("STNOUPGRADE")
@@ -424,20 +424,20 @@ func TestIssue1750(t *testing.T) {
t.Fatal(err)
}
if cfg.Options().ListenAddresses[0] != "tcp://:23000" {
t.Errorf("%q != %q", cfg.Options().ListenAddresses[0], "tcp://:23000")
if cfg.Options().RawListenAddresses[0] != "tcp://:23000" {
t.Errorf("%q != %q", cfg.Options().RawListenAddresses[0], "tcp://:23000")
}
if cfg.Options().ListenAddresses[1] != "tcp://:23001" {
t.Errorf("%q != %q", cfg.Options().ListenAddresses[1], "tcp://:23001")
if cfg.Options().RawListenAddresses[1] != "tcp://:23001" {
t.Errorf("%q != %q", cfg.Options().RawListenAddresses[1], "tcp://:23001")
}
if cfg.Options().GlobalAnnServers[0] != "udp4://syncthing.nym.se:22026" {
t.Errorf("%q != %q", cfg.Options().GlobalAnnServers[0], "udp4://syncthing.nym.se:22026")
if cfg.Options().RawGlobalAnnServers[0] != "udp4://syncthing.nym.se:22026" {
t.Errorf("%q != %q", cfg.Options().RawGlobalAnnServers[0], "udp4://syncthing.nym.se:22026")
}
if cfg.Options().GlobalAnnServers[1] != "udp4://syncthing.nym.se:22027" {
t.Errorf("%q != %q", cfg.Options().GlobalAnnServers[1], "udp4://syncthing.nym.se:22027")
if cfg.Options().RawGlobalAnnServers[1] != "udp4://syncthing.nym.se:22027" {
t.Errorf("%q != %q", cfg.Options().RawGlobalAnnServers[1], "udp4://syncthing.nym.se:22027")
}
}
@@ -553,13 +553,13 @@ func TestNewSaveLoad(t *testing.T) {
func TestPrepare(t *testing.T) {
var cfg Configuration
if cfg.Folders != nil || cfg.Devices != nil || cfg.Options.ListenAddresses != nil {
if cfg.Folders != nil || cfg.Devices != nil || cfg.Options.RawListenAddresses != nil {
t.Error("Expected nil")
}
cfg.prepare(device1)
if cfg.Folders == nil || cfg.Devices == nil || cfg.Options.ListenAddresses == nil {
if cfg.Folders == nil || cfg.Devices == nil || cfg.Options.RawListenAddresses == nil {
t.Error("Unexpected nil")
}
}
@@ -580,7 +580,7 @@ func TestCopy(t *testing.T) {
cfg.Devices[0].Addresses[0] = "wrong"
cfg.Folders[0].Devices[0].DeviceID = protocol.DeviceID{0, 1, 2, 3}
cfg.Options.ListenAddresses[0] = "wrong"
cfg.Options.RawListenAddresses[0] = "wrong"
cfg.GUI.APIKey = "wrong"
bsChanged, err := json.MarshalIndent(cfg, "", " ")
@@ -771,7 +771,7 @@ func TestV14ListenAddressesMigration(t *testing.T) {
cfg := Configuration{
Version: 13,
Options: OptionsConfiguration{
ListenAddresses: tc[0],
RawListenAddresses: tc[0],
DeprecatedRelayServers: tc[1],
},
}
@@ -781,8 +781,8 @@ func TestV14ListenAddressesMigration(t *testing.T) {
}
sort.Strings(tc[2])
if !reflect.DeepEqual(cfg.Options.ListenAddresses, tc[2]) {
t.Errorf("Migration error; actual %#v != expected %#v", cfg.Options.ListenAddresses, tc[2])
if !reflect.DeepEqual(cfg.Options.RawListenAddresses, tc[2]) {
t.Errorf("Migration error; actual %#v != expected %#v", cfg.Options.RawListenAddresses, tc[2])
}
}
}


@@ -18,7 +18,6 @@ import (
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/versioner"
)
var (
@@ -105,18 +104,6 @@ func (f FolderConfiguration) Filesystem() fs.Filesystem {
return f.cachedFilesystem
}
func (f FolderConfiguration) Versioner() versioner.Versioner {
if f.Versioning.Type == "" {
return nil
}
versionerFactory, ok := versioner.Factories[f.Versioning.Type]
if !ok {
panic(fmt.Sprintf("Requested versioning type %q that does not exist", f.Versioning.Type))
}
return versionerFactory(f.ID, f.Filesystem(), f.Versioning.Params)
}
func (f FolderConfiguration) ModTimeWindow() time.Duration {
return f.cachedModTimeWindow
}


@@ -216,12 +216,12 @@ func migrateToConfigV18(cfg *Configuration) {
func migrateToConfigV15(cfg *Configuration) {
// Undo v0.13.0 broken migration
for i, addr := range cfg.Options.GlobalAnnServers {
for i, addr := range cfg.Options.RawGlobalAnnServers {
switch addr {
case "default-v4v2/":
cfg.Options.GlobalAnnServers[i] = "default-v4"
cfg.Options.RawGlobalAnnServers[i] = "default-v4"
case "default-v6v2/":
cfg.Options.GlobalAnnServers[i] = "default-v6"
cfg.Options.RawGlobalAnnServers[i] = "default-v6"
}
}
}
@@ -248,9 +248,9 @@ func migrateToConfigV14(cfg *Configuration) {
hasDefault := false
for _, raddr := range cfg.Options.DeprecatedRelayServers {
if raddr == "dynamic+https://relays.syncthing.net/endpoint" {
for i, addr := range cfg.Options.ListenAddresses {
for i, addr := range cfg.Options.RawListenAddresses {
if addr == "tcp://0.0.0.0:22000" {
cfg.Options.ListenAddresses[i] = "default"
cfg.Options.RawListenAddresses[i] = "default"
hasDefault = true
break
}
@@ -269,16 +269,16 @@ func migrateToConfigV14(cfg *Configuration) {
if addr == "" {
continue
}
cfg.Options.ListenAddresses = append(cfg.Options.ListenAddresses, addr)
cfg.Options.RawListenAddresses = append(cfg.Options.RawListenAddresses, addr)
}
cfg.Options.DeprecatedRelayServers = nil
// For consistency
sort.Strings(cfg.Options.ListenAddresses)
sort.Strings(cfg.Options.RawListenAddresses)
var newAddrs []string
for _, addr := range cfg.Options.GlobalAnnServers {
for _, addr := range cfg.Options.RawGlobalAnnServers {
uri, err := url.Parse(addr)
if err != nil {
// That's odd. Skip the broken address.
@@ -291,7 +291,7 @@ func migrateToConfigV14(cfg *Configuration) {
newAddrs = append(newAddrs, addr)
}
cfg.Options.GlobalAnnServers = newAddrs
cfg.Options.RawGlobalAnnServers = newAddrs
for i, fcfg := range cfg.Folders {
if fcfg.DeprecatedReadOnly {
@@ -315,9 +315,9 @@ func migrateToConfigV13(cfg *Configuration) {
func migrateToConfigV12(cfg *Configuration) {
// Change listen address schema
for i, addr := range cfg.Options.ListenAddresses {
for i, addr := range cfg.Options.RawListenAddresses {
if len(addr) > 0 && !strings.HasPrefix(addr, "tcp://") {
cfg.Options.ListenAddresses[i] = util.Address("tcp", addr)
cfg.Options.RawListenAddresses[i] = util.Address("tcp", addr)
}
}
@@ -332,7 +332,7 @@ func migrateToConfigV12(cfg *Configuration) {
// Use new discovery server
var newDiscoServers []string
var useDefault bool
for _, addr := range cfg.Options.GlobalAnnServers {
for _, addr := range cfg.Options.RawGlobalAnnServers {
if addr == "udp4://announce.syncthing.net:22026" {
useDefault = true
} else if addr == "udp6://announce-v6.syncthing.net:22026" {
@@ -344,7 +344,7 @@ func migrateToConfigV12(cfg *Configuration) {
if useDefault {
newDiscoServers = append(newDiscoServers, "default")
}
cfg.Options.GlobalAnnServers = newDiscoServers
cfg.Options.RawGlobalAnnServers = newDiscoServers
// Use new multicast group
if cfg.Options.LocalAnnMCAddr == "[ff32::5222]:21026" {


@@ -9,12 +9,13 @@ package config
import (
"fmt"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/util"
)
type OptionsConfiguration struct {
ListenAddresses []string `xml:"listenAddress" json:"listenAddresses" default:"default"`
GlobalAnnServers []string `xml:"globalAnnounceServer" json:"globalAnnounceServers" default:"default" restart:"true"`
RawListenAddresses []string `xml:"listenAddress" json:"listenAddresses" default:"default"`
RawGlobalAnnServers []string `xml:"globalAnnounceServer" json:"globalAnnounceServers" default:"default" restart:"true"`
GlobalAnnEnabled bool `xml:"globalAnnounceEnabled" json:"globalAnnounceEnabled" default:"true" restart:"true"`
LocalAnnEnabled bool `xml:"localAnnounceEnabled" json:"localAnnounceEnabled" default:"true" restart:"true"`
LocalAnnPort int `xml:"localAnnouncePort" json:"localAnnouncePort" default:"21027" restart:"true"`
@@ -56,7 +57,7 @@ type OptionsConfiguration struct {
CREnabled bool `xml:"crashReportingEnabled" json:"crashReportingEnabled" default:"true" restart:"true"`
StunKeepaliveStartS int `xml:"stunKeepaliveStartS" json:"stunKeepaliveStartS" default:"180"` // 0 for off
StunKeepaliveMinS int `xml:"stunKeepaliveMinS" json:"stunKeepaliveMinS" default:"20"` // 0 for off
StunServers []string `xml:"stunServer" json:"stunServers" default:"default"`
RawStunServers []string `xml:"stunServer" json:"stunServers" default:"default"`
DatabaseTuning Tuning `xml:"databaseTuning" json:"databaseTuning" restart:"true"`
DeprecatedUPnPEnabled bool `xml:"upnpEnabled,omitempty" json:"-"`
@@ -69,10 +70,10 @@ type OptionsConfiguration struct {
func (opts OptionsConfiguration) Copy() OptionsConfiguration {
optsCopy := opts
optsCopy.ListenAddresses = make([]string, len(opts.ListenAddresses))
copy(optsCopy.ListenAddresses, opts.ListenAddresses)
optsCopy.GlobalAnnServers = make([]string, len(opts.GlobalAnnServers))
copy(optsCopy.GlobalAnnServers, opts.GlobalAnnServers)
optsCopy.RawListenAddresses = make([]string, len(opts.RawListenAddresses))
copy(optsCopy.RawListenAddresses, opts.RawListenAddresses)
optsCopy.RawGlobalAnnServers = make([]string, len(opts.RawGlobalAnnServers))
copy(optsCopy.RawGlobalAnnServers, opts.RawGlobalAnnServers)
optsCopy.AlwaysLocalNets = make([]string, len(opts.AlwaysLocalNets))
copy(optsCopy.AlwaysLocalNets, opts.AlwaysLocalNets)
optsCopy.UnackedNotificationIDs = make([]string, len(opts.UnackedNotificationIDs))
@@ -97,3 +98,57 @@ func (opts OptionsConfiguration) RequiresRestartOnly() OptionsConfiguration {
func (opts OptionsConfiguration) IsStunDisabled() bool {
return opts.StunKeepaliveMinS < 1 || opts.StunKeepaliveStartS < 1 || !opts.NATEnabled
}
func (opts OptionsConfiguration) ListenAddresses() []string {
var addresses []string
for _, addr := range opts.RawListenAddresses {
switch addr {
case "default":
addresses = append(addresses, DefaultListenAddresses...)
default:
addresses = append(addresses, addr)
}
}
return util.UniqueTrimmedStrings(addresses)
}
func (opts OptionsConfiguration) StunServers() []string {
var addresses []string
for _, addr := range opts.RawStunServers {
switch addr {
case "default":
defaultPrimaryAddresses := make([]string, len(DefaultPrimaryStunServers))
copy(defaultPrimaryAddresses, DefaultPrimaryStunServers)
rand.Shuffle(defaultPrimaryAddresses)
addresses = append(addresses, defaultPrimaryAddresses...)
defaultSecondaryAddresses := make([]string, len(DefaultSecondaryStunServers))
copy(defaultSecondaryAddresses, DefaultSecondaryStunServers)
rand.Shuffle(defaultSecondaryAddresses)
addresses = append(addresses, defaultSecondaryAddresses...)
default:
addresses = append(addresses, addr)
}
}
addresses = util.UniqueTrimmedStrings(addresses)
return addresses
}
func (opts OptionsConfiguration) GlobalDiscoveryServers() []string {
var servers []string
for _, srv := range opts.RawGlobalAnnServers {
switch srv {
case "default":
servers = append(servers, DefaultDiscoveryServers...)
case "default-v4":
servers = append(servers, DefaultDiscoveryServersV4...)
case "default-v6":
servers = append(servers, DefaultDiscoveryServersV6...)
default:
servers = append(servers, srv)
}
}
return util.UniqueTrimmedStrings(servers)
}


@@ -10,17 +10,17 @@ import (
"testing"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
)
func TestTuningMatches(t *testing.T) {
if int(config.TuningAuto) != int(db.TuningAuto) {
if int(config.TuningAuto) != int(backend.TuningAuto) {
t.Error("mismatch for TuningAuto")
}
if int(config.TuningSmall) != int(db.TuningSmall) {
if int(config.TuningSmall) != int(backend.TuningSmall) {
t.Error("mismatch for TuningSmall")
}
if int(config.TuningLarge) != int(db.TuningLarge) {
if int(config.TuningLarge) != int(backend.TuningLarge) {
t.Error("mismatch for TuningLarge")
}
}


@@ -14,9 +14,7 @@ import (
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/util"
)
// The Committer interface is implemented by objects that need to know about
@@ -87,10 +85,6 @@ type Wrapper interface {
IgnoredDevice(id protocol.DeviceID) bool
IgnoredFolder(device protocol.DeviceID, folder string) bool
ListenAddresses() []string
GlobalDiscoveryServers() []string
StunServers() []string
Subscribe(c Committer)
Unsubscribe(c Committer)
}
@@ -100,6 +94,7 @@ type wrapper struct {
path string
evLogger events.Logger
waiter Waiter // Latest ongoing config change
deviceMap map[protocol.DeviceID]DeviceConfiguration
folderMap map[string]FolderConfiguration
subs []Committer
@@ -108,30 +103,6 @@ type wrapper struct {
requiresRestart uint32 // an atomic bool
}
func (w *wrapper) StunServers() []string {
var addresses []string
for _, addr := range w.cfg.Options.StunServers {
switch addr {
case "default":
defaultPrimaryAddresses := make([]string, len(DefaultPrimaryStunServers))
copy(defaultPrimaryAddresses, DefaultPrimaryStunServers)
rand.Shuffle(defaultPrimaryAddresses)
addresses = append(addresses, defaultPrimaryAddresses...)
defaultSecondaryAddresses := make([]string, len(DefaultSecondaryStunServers))
copy(defaultSecondaryAddresses, DefaultSecondaryStunServers)
rand.Shuffle(defaultSecondaryAddresses)
addresses = append(addresses, defaultSecondaryAddresses...)
default:
addresses = append(addresses, addr)
}
}
addresses = util.UniqueTrimmedStrings(addresses)
return addresses
}
// Wrap wraps an existing Configuration structure and ties it to a file on
// disk.
func Wrap(path string, cfg Configuration, evLogger events.Logger) Wrapper {
@@ -139,6 +110,7 @@ func Wrap(path string, cfg Configuration, evLogger events.Logger) Wrapper {
cfg: cfg,
path: path,
evLogger: evLogger,
waiter: noopWaiter{}, // Noop until first config change
mut: sync.NewMutex(),
}
return w
@@ -174,7 +146,8 @@ func (w *wrapper) Subscribe(c Committer) {
}
// Unsubscribe de-registers the given handler from any future calls to
// configuration changes
// configuration changes and only returns after a potential ongoing config
// change is done.
func (w *wrapper) Unsubscribe(c Committer) {
w.mut.Lock()
for i := range w.subs {
@@ -185,7 +158,11 @@ func (w *wrapper) Unsubscribe(c Committer) {
break
}
}
waiter := w.waiter
w.mut.Unlock()
// Waiting mustn't be done under lock, as the goroutines in notifyListener
// may dead-lock when trying to access lock on config read operations.
waiter.Wait()
}
// RawCopy returns a copy of the currently wrapped Configuration object.
@@ -221,7 +198,9 @@ func (w *wrapper) replaceLocked(to Configuration) (Waiter, error) {
w.deviceMap = nil
w.folderMap = nil
return w.notifyListeners(from.Copy(), to.Copy()), nil
w.waiter = w.notifyListeners(from.Copy(), to.Copy())
return w.waiter, nil
}
func (w *wrapper) notifyListeners(from, to Configuration) Waiter {
@@ -456,36 +435,6 @@ func (w *wrapper) Save() error {
return nil
}
func (w *wrapper) GlobalDiscoveryServers() []string {
var servers []string
for _, srv := range w.Options().GlobalAnnServers {
switch srv {
case "default":
servers = append(servers, DefaultDiscoveryServers...)
case "default-v4":
servers = append(servers, DefaultDiscoveryServersV4...)
case "default-v6":
servers = append(servers, DefaultDiscoveryServersV6...)
default:
servers = append(servers, srv)
}
}
return util.UniqueTrimmedStrings(servers)
}
func (w *wrapper) ListenAddresses() []string {
var addresses []string
for _, addr := range w.Options().ListenAddresses {
switch addr {
case "default":
addresses = append(addresses, DefaultListenAddresses...)
default:
addresses = append(addresses, addr)
}
}
return util.UniqueTrimmedStrings(addresses)
}
func (w *wrapper) RequiresRestart() bool {
return atomic.LoadUint32(&w.requiresRestart) != 0
}


@@ -11,12 +11,12 @@ package connections
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/url"
"time"
"github.com/lucas-clemente/quic-go"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections/registry"
@@ -39,11 +39,10 @@ func init() {
}
type quicDialer struct {
cfg config.Wrapper
tlsCfg *tls.Config
commonDialer
}
func (d *quicDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error) {
func (d *quicDialer) Dial(ctx context.Context, _ protocol.DeviceID, uri *url.URL) (internalConn, error) {
uri = fixupPort(uri, config.DefaultQUICPort)
addr, err := net.ResolveUDPAddr("udp", uri.Host)
@@ -67,7 +66,7 @@ func (d *quicDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, erro
}
}
ctx, cancel := context.WithTimeout(context.Background(), quicOperationTimeout)
ctx, cancel := context.WithTimeout(ctx, quicOperationTimeout)
defer cancel()
session, err := quic.DialContext(ctx, conn, addr, uri.Host, d.tlsCfg, quicConfig)
@@ -75,7 +74,7 @@ func (d *quicDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, erro
if createdConn != nil {
_ = createdConn.Close()
}
return internalConn{}, fmt.Errorf("dial: %v", err)
return internalConn{}, errors.Wrap(err, "dial")
}
stream, err := session.OpenStreamSync(ctx)
@@ -85,26 +84,22 @@ func (d *quicDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, erro
if createdConn != nil {
_ = createdConn.Close()
}
return internalConn{}, fmt.Errorf("open stream: %v", err)
return internalConn{}, errors.Wrap(err, "open stream")
}
return internalConn{&quicTlsConn{session, stream, createdConn}, connTypeQUICClient, quicPriority}, nil
}
func (d *quicDialer) RedialFrequency() time.Duration {
return time.Duration(d.cfg.Options().ReconnectIntervalS) * time.Second
}
type quicDialerFactory struct {
cfg config.Wrapper
tlsCfg *tls.Config
}
func (quicDialerFactory) New(cfg config.Wrapper, tlsCfg *tls.Config) genericDialer {
return &quicDialer{
cfg: cfg,
tlsCfg: tlsCfg,
}
func (quicDialerFactory) New(opts config.OptionsConfiguration, tlsCfg *tls.Config) genericDialer {
return &quicDialer{commonDialer{
reconnectInterval: time.Duration(opts.ReconnectIntervalS) * time.Second,
tlsCfg: tlsCfg,
}}
}
func (quicDialerFactory) Priority() int {


@@ -78,13 +78,9 @@ func (t *quicListener) OnExternalAddressChanged(address *stun.Host, via string)
}
}
func (t *quicListener) serve(stop chan struct{}) error {
func (t *quicListener) serve(ctx context.Context) error {
network := strings.Replace(t.uri.Scheme, "quic", "udp", -1)
// Convert the stop channel into a context
ctx, cancel := context.WithCancel(context.Background())
go func() { <-stop; cancel() }()
packetConn, err := net.ListenPacket(network, t.uri.Host)
if err != nil {
l.Infoln("Listen (BEP/quic):", err)
@@ -205,7 +201,7 @@ func (f *quicListenerFactory) New(uri *url.URL, cfg config.Wrapper, tlsCfg *tls.
conns: conns,
factory: f,
}
l.ServiceWithError = util.AsServiceWithError(l.serve)
l.ServiceWithError = util.AsServiceWithError(l.serve, l.String())
l.nat.Store(stun.NATUnknown)
return l
}
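The serve loops in this changeset switch from a stop channel to a context. A minimal standalone sketch of the function shape util.AsServiceWithError now appears to expect (a func(context.Context) error plus a name string, judging by the call sites here; the work loop below is illustrative):

package main

import (
    "context"
    "time"
)

// serve returns when its context is cancelled, instead of watching a
// dedicated stop channel as before.
func serve(ctx context.Context) error {
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return nil // clean shutdown
        case <-ticker.C:
            // one unit of work per tick
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    _ = serve(ctx)
}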


@@ -7,6 +7,7 @@
package connections
import (
"context"
"crypto/tls"
"net/url"
"time"
@@ -24,17 +25,16 @@ func init() {
}
type relayDialer struct {
cfg config.Wrapper
tlsCfg *tls.Config
commonDialer
}
func (d *relayDialer) Dial(id protocol.DeviceID, uri *url.URL) (internalConn, error) {
inv, err := client.GetInvitationFromRelay(uri, id, d.tlsCfg.Certificates, 10*time.Second)
func (d *relayDialer) Dial(ctx context.Context, id protocol.DeviceID, uri *url.URL) (internalConn, error) {
inv, err := client.GetInvitationFromRelay(ctx, uri, id, d.tlsCfg.Certificates, 10*time.Second)
if err != nil {
return internalConn{}, err
}
conn, err := client.JoinSession(inv)
conn, err := client.JoinSession(ctx, inv)
if err != nil {
return internalConn{}, err
}
@@ -45,7 +45,7 @@ func (d *relayDialer) Dial(id protocol.DeviceID, uri *url.URL) (internalConn, er
return internalConn{}, err
}
err = dialer.SetTrafficClass(conn, d.cfg.Options().TrafficClass)
err = dialer.SetTrafficClass(conn, d.trafficClass)
if err != nil {
l.Debugln("Dial (BEP/relay): setting traffic class:", err)
}
@@ -66,17 +66,14 @@ func (d *relayDialer) Dial(id protocol.DeviceID, uri *url.URL) (internalConn, er
return internalConn{tc, connTypeRelayClient, relayPriority}, nil
}
func (d *relayDialer) RedialFrequency() time.Duration {
return time.Duration(d.cfg.Options().RelayReconnectIntervalM) * time.Minute
}
type relayDialerFactory struct{}
func (relayDialerFactory) New(cfg config.Wrapper, tlsCfg *tls.Config) genericDialer {
return &relayDialer{
cfg: cfg,
tlsCfg: tlsCfg,
}
func (relayDialerFactory) New(opts config.OptionsConfiguration, tlsCfg *tls.Config) genericDialer {
return &relayDialer{commonDialer{
trafficClass: opts.TrafficClass,
reconnectInterval: time.Duration(opts.RelayReconnectIntervalM) * time.Minute,
tlsCfg: tlsCfg,
}}
}
func (relayDialerFactory) Priority() int {


@@ -7,11 +7,14 @@
package connections
import (
"context"
"crypto/tls"
"net/url"
"sync"
"time"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/dialer"
"github.com/syncthing/syncthing/lib/nat"
@@ -40,7 +43,7 @@ type relayListener struct {
mut sync.RWMutex
}
func (t *relayListener) serve(stop chan struct{}) error {
func (t *relayListener) serve(ctx context.Context) error {
clnt, err := client.NewClient(t.uri, t.tlsCfg.Certificates, nil, 10*time.Second)
if err != nil {
l.Infoln("Listen (BEP/relay):", err)
@@ -69,9 +72,11 @@ func (t *relayListener) serve(stop chan struct{}) error {
return err
}
conn, err := client.JoinSession(inv)
conn, err := client.JoinSession(ctx, inv)
if err != nil {
l.Infoln("Listen (BEP/relay): joining session:", err)
if errors.Cause(err) != context.Canceled {
l.Infoln("Listen (BEP/relay): joining session:", err)
}
continue
}
@@ -112,7 +117,7 @@ func (t *relayListener) serve(stop chan struct{}) error {
t.notifyAddressesChanged(t)
}
case <-stop:
case <-ctx.Done():
return nil
}
}
@@ -178,7 +183,7 @@ func (f *relayListenerFactory) New(uri *url.URL, cfg config.Wrapper, tlsCfg *tls
conns: conns,
factory: f,
}
t.ServiceWithError = util.AsServiceWithError(t.serve)
t.ServiceWithError = util.AsServiceWithError(t.serve, t.String())
return t
}


@@ -7,8 +7,8 @@
package connections
import (
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/url"
@@ -30,6 +30,7 @@ import (
_ "github.com/syncthing/syncthing/lib/pmp"
_ "github.com/syncthing/syncthing/lib/upnp"
"github.com/pkg/errors"
"github.com/thejerf/suture"
"golang.org/x/time/rate"
)
@@ -185,18 +186,24 @@ func NewService(cfg config.Wrapper, myID protocol.DeviceID, mdl Model, tlsCfg *t
// the common handling regardless of whether the connection was
// incoming or outgoing.
service.Add(util.AsService(service.connect))
service.Add(util.AsService(service.handle))
service.Add(util.AsService(service.connect, fmt.Sprintf("%s/connect", service)))
service.Add(util.AsService(service.handle, fmt.Sprintf("%s/handle", service)))
service.Add(service.listenerSupervisor)
return service
}
func (s *service) handle(stop chan struct{}) {
func (s *service) Stop() {
s.cfg.Unsubscribe(s.limiter)
s.cfg.Unsubscribe(s)
s.Supervisor.Stop()
}
func (s *service) handle(ctx context.Context) {
var c internalConn
for {
select {
case <-stop:
case <-ctx.Done():
return
case c = <-s.conns:
}
@@ -324,7 +331,7 @@ func (s *service) handle(stop chan struct{}) {
}
}
func (s *service) connect(stop chan struct{}) {
func (s *service) connect(ctx context.Context) {
nextDial := make(map[string]time.Time)
// Used as delay for the first few connection attempts, increases
@@ -441,7 +448,7 @@ func (s *service) connect(stop chan struct{}) {
continue
}
dialer := dialerFactory.New(s.cfg, s.tlsCfg)
dialer := dialerFactory.New(s.cfg.Options(), s.tlsCfg)
nextDial[nextDialKey] = now.Add(dialer.RedialFrequency())
// For LAN addresses, increase the priority so that we
@@ -462,7 +469,7 @@ func (s *service) connect(stop chan struct{}) {
})
}
conn, ok := s.dialParallel(deviceCfg.DeviceID, dialTargets)
conn, ok := s.dialParallel(ctx, deviceCfg.DeviceID, dialTargets)
if ok {
s.conns <- conn
}
@@ -480,7 +487,7 @@ func (s *service) connect(stop chan struct{}) {
select {
case <-time.After(sleep):
case <-stop:
case <-ctx.Done():
return
}
}
@@ -581,7 +588,7 @@ func (s *service) CommitConfiguration(from, to config.Configuration) bool {
s.listenersMut.Lock()
seen := make(map[string]struct{})
for _, addr := range config.Wrap("", to, s.evLogger).ListenAddresses() {
for _, addr := range to.Options.ListenAddresses() {
if addr == "" {
// We can get an empty address if there is an empty listener
// element in the config, indicating no listeners should be
@@ -700,6 +707,10 @@ func (s *service) ConnectionStatus() map[string]ConnectionStatusEntry {
}
func (s *service) setConnectionStatus(address string, err error) {
if errors.Cause(err) == context.Canceled {
// A dial aborted by context cancellation is not an interesting status to record.
return
}
status := ConnectionStatusEntry{When: time.Now().UTC().Truncate(time.Second)}
if err != nil {
errStr := err.Error()
@@ -827,7 +838,7 @@ func IsAllowedNetwork(host string, allowed []string) bool {
return false
}
func (s *service) dialParallel(deviceID protocol.DeviceID, dialTargets []dialTarget) (internalConn, bool) {
func (s *service) dialParallel(ctx context.Context, deviceID protocol.DeviceID, dialTargets []dialTarget) (internalConn, bool) {
// Group targets into buckets by priority
dialTargetBuckets := make(map[int][]dialTarget, len(dialTargets))
for _, tgt := range dialTargets {
@@ -850,7 +861,7 @@ func (s *service) dialParallel(deviceID protocol.DeviceID, dialTargets []dialTar
for _, tgt := range tgts {
wg.Add(1)
go func(tgt dialTarget) {
conn, err := tgt.Dial()
conn, err := tgt.Dial(ctx)
if err == nil {
// Closes the connection on error
err = s.validateIdentity(conn, deviceID)


@@ -7,6 +7,7 @@
package connections
import (
"context"
"crypto/tls"
"fmt"
"io"
@@ -146,15 +147,25 @@ func (c internalConn) String() string {
}
type dialerFactory interface {
New(config.Wrapper, *tls.Config) genericDialer
New(config.OptionsConfiguration, *tls.Config) genericDialer
Priority() int
AlwaysWAN() bool
Valid(config.Configuration) error
String() string
}
type commonDialer struct {
trafficClass int
reconnectInterval time.Duration
tlsCfg *tls.Config
}
func (d *commonDialer) RedialFrequency() time.Duration {
return d.reconnectInterval
}
type genericDialer interface {
Dial(protocol.DeviceID, *url.URL) (internalConn, error)
Dial(context.Context, protocol.DeviceID, *url.URL) (internalConn, error)
RedialFrequency() time.Duration
}
@@ -213,7 +224,7 @@ type dialTarget struct {
deviceID protocol.DeviceID
}
func (t dialTarget) Dial() (internalConn, error) {
func (t dialTarget) Dial(ctx context.Context) (internalConn, error) {
l.Debugln("dialing", t.deviceID, t.uri, "prio", t.priority)
return t.dialer.Dial(t.deviceID, t.uri)
return t.dialer.Dial(ctx, t.deviceID, t.uri)
}
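Threading the context through dialTarget.Dial is what lets dialParallel abort every in-flight attempt with a single cancel. A standalone sketch of that effect (address and timeout are illustrative, not from the patch):

package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

// dial derives a per-attempt timeout from the caller's context, as the
// reworked dialers in this changeset do.
func dial(ctx context.Context, addr string) (net.Conn, error) {
    ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel()
    var d net.Dialer
    return d.DialContext(ctx, "tcp", addr)
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    cancel() // parent already cancelled: the attempt aborts immediately
    _, err := dial(ctx, "192.0.2.1:22000")
    fmt.Println(err) // a cancellation error, not a 10 s timeout
}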


@@ -7,6 +7,7 @@
package connections
import (
"context"
"crypto/tls"
"net/url"
"time"
@@ -26,14 +27,15 @@ func init() {
}
type tcpDialer struct {
cfg config.Wrapper
tlsCfg *tls.Config
commonDialer
}
func (d *tcpDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error) {
func (d *tcpDialer) Dial(ctx context.Context, _ protocol.DeviceID, uri *url.URL) (internalConn, error) {
uri = fixupPort(uri, config.DefaultTCPPort)
conn, err := dialer.DialTimeout(uri.Scheme, uri.Host, 10*time.Second)
timeoutCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
conn, err := dialer.DialContext(timeoutCtx, uri.Scheme, uri.Host)
if err != nil {
return internalConn{}, err
}
@@ -43,7 +45,7 @@ func (d *tcpDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error
l.Debugln("Dial (BEP/tcp): setting tcp options:", err)
}
err = dialer.SetTrafficClass(conn, d.cfg.Options().TrafficClass)
err = dialer.SetTrafficClass(conn, d.trafficClass)
if err != nil {
l.Debugln("Dial (BEP/tcp): setting traffic class:", err)
}
@@ -58,17 +60,14 @@ func (d *tcpDialer) Dial(_ protocol.DeviceID, uri *url.URL) (internalConn, error
return internalConn{tc, connTypeTCPClient, tcpPriority}, nil
}
func (d *tcpDialer) RedialFrequency() time.Duration {
return time.Duration(d.cfg.Options().ReconnectIntervalS) * time.Second
}
type tcpDialerFactory struct{}
func (tcpDialerFactory) New(cfg config.Wrapper, tlsCfg *tls.Config) genericDialer {
return &tcpDialer{
cfg: cfg,
tlsCfg: tlsCfg,
}
func (tcpDialerFactory) New(opts config.OptionsConfiguration, tlsCfg *tls.Config) genericDialer {
return &tcpDialer{commonDialer{
trafficClass: opts.TrafficClass,
reconnectInterval: time.Duration(opts.ReconnectIntervalS) * time.Second,
tlsCfg: tlsCfg,
}}
}
func (tcpDialerFactory) Priority() int {


@@ -7,6 +7,7 @@
package connections
import (
"context"
"crypto/tls"
"net"
"net/url"
@@ -42,7 +43,7 @@ type tcpListener struct {
mut sync.RWMutex
}
func (t *tcpListener) serve(stop chan struct{}) error {
func (t *tcpListener) serve(ctx context.Context) error {
tcaddr, err := net.ResolveTCPAddr(t.uri.Scheme, t.uri.Host)
if err != nil {
l.Infoln("Listen (BEP/tcp):", err)
@@ -76,7 +77,7 @@ func (t *tcpListener) serve(stop chan struct{}) error {
listener.SetDeadline(time.Now().Add(time.Second))
conn, err := listener.Accept()
select {
case <-stop:
case <-ctx.Done():
if err == nil {
conn.Close()
}
@@ -183,7 +184,7 @@ func (f *tcpListenerFactory) New(uri *url.URL, cfg config.Wrapper, tlsCfg *tls.C
natService: natService,
factory: f,
}
l.ServiceWithError = util.AsServiceWithError(l.serve)
l.ServiceWithError = util.AsServiceWithError(l.serve, l.String())
return l
}

lib/db/backend/backend.go (new file, 170 lines)

@@ -0,0 +1,170 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"sync"
)
// The Reader interface specifies the read-only operations available on the
// main database and on read-only transactions (snapshots). Note that when
// called directly on the database handle these operations may take implicit
// transactions and performance may suffer.
type Reader interface {
Get(key []byte) ([]byte, error)
NewPrefixIterator(prefix []byte) (Iterator, error)
NewRangeIterator(first, last []byte) (Iterator, error)
}
// The Writer interface specifies the mutating operations available on the
// main database and on writable transactions. Note that when called
// directly on the database handle these operations may take implicit
// transactions and performance may suffer.
type Writer interface {
Put(key, val []byte) error
Delete(key []byte) error
}
// The ReadTransaction interface specifies the operations on read-only
// transactions. Every ReadTransaction must be released when no longer
// required.
type ReadTransaction interface {
Reader
Release()
}
// The WriteTransaction interface specifies the operations on writable
// transactions. Every WriteTransaction must be either committed or released
// (i.e., discarded) when no longer required. No further operations must be
// performed after release or commit (regardless of whether commit succeeded),
// with one exception -- it's fine to release an already committed or released
// transaction.
//
// A Checkpoint is a potential partial commit of the transaction so far, for
// purposes of saving memory when transactions are in-RAM. Note that
// transactions may be checkpointed *anyway* even if this is not called, due to
// resource constraints, but this gives you a chance to decide when.
type WriteTransaction interface {
ReadTransaction
Writer
Checkpoint() error
Commit() error
}
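A hedged usage sketch of this contract, written against the interfaces in this file (keys and error handling are illustrative): Release is deferred unconditionally, which is safe even after a successful Commit, and Checkpoint offers the backend a flush point during a long batch.

func bulkInsert(db Backend, items map[string]string) error {
    txn, err := db.NewWriteTransaction()
    if err != nil {
        return err
    }
    // Releasing an already committed transaction is explicitly allowed,
    // so one deferred Release covers both exit paths.
    defer txn.Release()
    for k, v := range items {
        if err := txn.Put([]byte(k), []byte(v)); err != nil {
            return err
        }
        // Offer a flush point; the backend decides whether to take it.
        if err := txn.Checkpoint(); err != nil {
            return err
        }
    }
    return txn.Commit()
}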
// The Iterator interface specifies the operations available on iterators
// returned by NewPrefixIterator and NewRangeIterator. The iterator pattern
// is to loop while Next returns true, then check Error after the loop. Next
// will return false when iteration is complete (Error() == nil) or when
// there is an error preventing iteration, which is then returned by
// Error(). For example:
//
// it, err := db.NewPrefixIterator(nil)
// if err != nil {
// // problem preventing iteration
// }
// defer it.Release()
// for it.Next() {
// // ...
// }
// if err := it.Error(); err != nil {
// // there was a database problem while iterating
// }
//
// An iterator must be Released when no longer required. The Error method
// can be called either before or after Release with the same results. If an
// iterator was created in a transaction (whether read-only or write) it
// must be released before the transaction is released (or committed).
type Iterator interface {
Next() bool
Key() []byte
Value() []byte
Error() error
Release()
}
// The Backend interface represents the main database handle. It supports
// both read/write operations and opening read-only or writable
// transactions. Depending on the actual implementation, individual
// read/write operations may be implicitly wrapped in transactions, making
// them perform quite badly when used repeatedly. For bulk operations,
// consider always using a transaction of the appropriate type. The
// transaction isolation level is "read committed" - there are no dirty
// reads.
type Backend interface {
Reader
Writer
NewReadTransaction() (ReadTransaction, error)
NewWriteTransaction() (WriteTransaction, error)
Close() error
}
type Tuning int
const (
// N.b. these constants must match those in lib/config.Tuning!
TuningAuto Tuning = iota
TuningSmall
TuningLarge
)
func Open(path string, tuning Tuning) (Backend, error) {
return OpenLevelDB(path, tuning)
}
func OpenMemory() Backend {
return OpenLevelDBMemory()
}
type errClosed struct{}
func (errClosed) Error() string { return "database is closed" }
type errNotFound struct{}
func (errNotFound) Error() string { return "key not found" }
func IsClosed(err error) bool {
if _, ok := err.(errClosed); ok {
return true
}
if _, ok := err.(*errClosed); ok {
return true
}
return false
}
func IsNotFound(err error) bool {
if _, ok := err.(errNotFound); ok {
return true
}
if _, ok := err.(*errNotFound); ok {
return true
}
return false
}
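A sketch of the intended call-site pattern for these helpers, using only the interfaces in this file (the three-way split is illustrative, not prescribed by the changeset):

func lookup(db Backend, key []byte) ([]byte, bool, error) {
    val, err := db.Get(key)
    switch {
    case err == nil:
        return val, true, nil
    case IsNotFound(err):
        return nil, false, nil // absence is expected, not an error
    case IsClosed(err):
        return nil, false, nil // shutting down; give up quietly
    default:
        return nil, false, err // a real database problem
    }
}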
// releaser manages counting on top of a waitgroup
type releaser struct {
wg *sync.WaitGroup
once *sync.Once
}
func newReleaser(wg *sync.WaitGroup) *releaser {
wg.Add(1)
return &releaser{
wg: wg,
once: new(sync.Once),
}
}
func (r releaser) Release() {
// We use the Once because we may get called multiple times from
// Commit() and deferred Release().
r.once.Do(func() {
r.wg.Done()
})
}


@@ -0,0 +1,53 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import "testing"
// testBackendBehavior is the generic test suite that must be fulfilled by
// every backend implementation. It should be called by each implementation
// as (part of) its test suite.
func testBackendBehavior(t *testing.T, open func() Backend) {
t.Run("WriteIsolation", func(t *testing.T) { testWriteIsolation(t, open) })
t.Run("DeleteNonexisten", func(t *testing.T) { testDeleteNonexistent(t, open) })
}
func testWriteIsolation(t *testing.T, open func() Backend) {
// Values written during a transaction should not be read back; our
// updateGlobal depends on this.
db := open()
defer db.Close()
// Sanity check
_ = db.Put([]byte("a"), []byte("a"))
v, _ := db.Get([]byte("a"))
if string(v) != "a" {
t.Fatal("read back should work")
}
// Now in a transaction we should still see the old value
tx, _ := db.NewWriteTransaction()
defer tx.Release()
_ = tx.Put([]byte("a"), []byte("b"))
v, _ = tx.Get([]byte("a"))
if string(v) != "a" {
t.Fatal("read in transaction should read the old value")
}
}
func testDeleteNonexistent(t *testing.T, open func() Backend) {
// Deleting a non-existent key is not an error
db := open()
defer db.Close()
err := db.Delete([]byte("a"))
if err != nil {
t.Error(err)
}
}

lib/db/backend/debug.go (new file, 15 lines)

@@ -0,0 +1,15 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("backend", "The database backend")
)


@@ -0,0 +1,173 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"sync"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
)
const (
// Never flush transactions smaller than this, even on Checkpoint()
dbFlushBatchMin = 1 << MiB
// Once a transaction reaches this size, flush it unconditionally.
dbFlushBatchMax = 128 << MiB
)
// leveldbBackend implements Backend on top of a leveldb
type leveldbBackend struct {
ldb *leveldb.DB
closeWG sync.WaitGroup
}
func (b *leveldbBackend) NewReadTransaction() (ReadTransaction, error) {
return b.newSnapshot()
}
func (b *leveldbBackend) newSnapshot() (leveldbSnapshot, error) {
snap, err := b.ldb.GetSnapshot()
if err != nil {
return leveldbSnapshot{}, wrapLeveldbErr(err)
}
return leveldbSnapshot{
snap: snap,
rel: newReleaser(&b.closeWG),
}, nil
}
func (b *leveldbBackend) NewWriteTransaction() (WriteTransaction, error) {
snap, err := b.newSnapshot()
if err != nil {
return nil, err // already wrapped
}
return &leveldbTransaction{
leveldbSnapshot: snap,
ldb: b.ldb,
batch: new(leveldb.Batch),
rel: newReleaser(&b.closeWG),
}, nil
}
func (b *leveldbBackend) Close() error {
b.closeWG.Wait()
return wrapLeveldbErr(b.ldb.Close())
}
func (b *leveldbBackend) Get(key []byte) ([]byte, error) {
val, err := b.ldb.Get(key, nil)
return val, wrapLeveldbErr(err)
}
func (b *leveldbBackend) NewPrefixIterator(prefix []byte) (Iterator, error) {
return b.ldb.NewIterator(util.BytesPrefix(prefix), nil), nil
}
func (b *leveldbBackend) NewRangeIterator(first, last []byte) (Iterator, error) {
return b.ldb.NewIterator(&util.Range{Start: first, Limit: last}, nil), nil
}
func (b *leveldbBackend) Put(key, val []byte) error {
return wrapLeveldbErr(b.ldb.Put(key, val, nil))
}
func (b *leveldbBackend) Delete(key []byte) error {
return wrapLeveldbErr(b.ldb.Delete(key, nil))
}
// leveldbSnapshot implements backend.ReadTransaction
type leveldbSnapshot struct {
snap *leveldb.Snapshot
rel *releaser
}
func (l leveldbSnapshot) Get(key []byte) ([]byte, error) {
val, err := l.snap.Get(key, nil)
return val, wrapLeveldbErr(err)
}
func (l leveldbSnapshot) NewPrefixIterator(prefix []byte) (Iterator, error) {
return l.snap.NewIterator(util.BytesPrefix(prefix), nil), nil
}
func (l leveldbSnapshot) NewRangeIterator(first, last []byte) (Iterator, error) {
return l.snap.NewIterator(&util.Range{Start: first, Limit: last}, nil), nil
}
func (l leveldbSnapshot) Release() {
l.snap.Release()
l.rel.Release()
}
// leveldbTransaction implements backend.WriteTransaction using a batch (not
// an actual leveldb transaction)
type leveldbTransaction struct {
leveldbSnapshot
ldb *leveldb.DB
batch *leveldb.Batch
rel *releaser
}
func (t *leveldbTransaction) Delete(key []byte) error {
t.batch.Delete(key)
return t.checkFlush(dbFlushBatchMax)
}
func (t *leveldbTransaction) Put(key, val []byte) error {
t.batch.Put(key, val)
return t.checkFlush(dbFlushBatchMax)
}
func (t *leveldbTransaction) Checkpoint() error {
return t.checkFlush(dbFlushBatchMin)
}
func (t *leveldbTransaction) Commit() error {
err := wrapLeveldbErr(t.flush())
t.leveldbSnapshot.Release()
t.rel.Release()
return err
}
func (t *leveldbTransaction) Release() {
t.leveldbSnapshot.Release()
t.rel.Release()
}
// checkFlush flushes and resets the batch if its size exceeds the given size.
func (t *leveldbTransaction) checkFlush(size int) error {
if len(t.batch.Dump()) < size {
return nil
}
return t.flush()
}
func (t *leveldbTransaction) flush() error {
if t.batch.Len() == 0 {
return nil
}
if err := t.ldb.Write(t.batch, nil); err != nil {
return wrapLeveldbErr(err)
}
t.batch.Reset()
return nil
}
// wrapLeveldbErr wraps errors so that the backend package can recognize them
func wrapLeveldbErr(err error) error {
if err == nil {
return nil
}
if err == leveldb.ErrClosed {
return errClosed{}
}
if err == leveldb.ErrNotFound {
return errNotFound{}
}
return err
}


@@ -0,0 +1,226 @@
// Copyright (C) 2018 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import (
"fmt"
"os"
"strconv"
"strings"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/errors"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
)
const (
dbMaxOpenFiles = 100
// A large database is > 200 MiB. It's a mostly arbitrary value, but
// it's also the case that each file is 2 MiB by default and when we
// have dbMaxOpenFiles of them we will start thrashing file descriptors.
// Switching to large database settings causes larger files to be used
// when compacting, reducing the number.
dbLargeThreshold = dbMaxOpenFiles * (2 << MiB)
// Shift amounts for binary prefixes: 1<<KiB == 1024, 1<<MiB == 1048576.
KiB = 10
MiB = 20
)
// OpenLevelDB attempts to open the database at the given location, and runs
// recovery on it if opening fails. Worst case, if recovery is not possible,
// the database is erased and created from scratch.
func OpenLevelDB(location string, tuning Tuning) (Backend, error) {
opts := optsFor(location, tuning)
ldb, err := open(location, opts)
if err != nil {
return nil, err
}
return &leveldbBackend{ldb: ldb}, nil
}
// OpenLevelDBRO attempts to open the database at the given location, read only.
func OpenLevelDBRO(location string) (Backend, error) {
opts := &opt.Options{
OpenFilesCacheCapacity: dbMaxOpenFiles,
ReadOnly: true,
}
ldb, err := open(location, opts)
if err != nil {
return nil, err
}
return &leveldbBackend{ldb: ldb}, nil
}
// OpenLevelDBMemory returns a new Backend referencing an in-memory database.
func OpenLevelDBMemory() Backend {
ldb, _ := leveldb.Open(storage.NewMemStorage(), nil)
return &leveldbBackend{ldb: ldb}
}
// optsFor returns the database options to use when opening a database with
// the given location and tuning. Settings can be overridden by debug
// environment variables.
func optsFor(location string, tuning Tuning) *opt.Options {
large := false
switch tuning {
case TuningLarge:
large = true
case TuningAuto:
large = dbIsLarge(location)
}
var (
// Set defaults used for small databases.
defaultBlockCacheCapacity = 0 // 0 means let leveldb use default
defaultBlockSize = 0
defaultCompactionTableSize = 0
defaultCompactionTableSizeMultiplier = 0
defaultWriteBuffer = 16 << MiB // increased from leveldb default of 4 MiB
defaultCompactionL0Trigger = opt.DefaultCompactionL0Trigger // explicit because we use it as base for other stuff
)
if large {
// Change the parameters for better throughput at the price of some
// RAM and larger files. This results in larger batches of writes
// and compaction at a lower frequency.
l.Infoln("Using large-database tuning")
defaultBlockCacheCapacity = 64 << MiB
defaultBlockSize = 64 << KiB
defaultCompactionTableSize = 16 << MiB
defaultCompactionTableSizeMultiplier = 20 // 2.0 after division by ten
defaultWriteBuffer = 64 << MiB
defaultCompactionL0Trigger = 8 // number of l0 files
}
opts := &opt.Options{
BlockCacheCapacity: debugEnvValue("BlockCacheCapacity", defaultBlockCacheCapacity),
BlockCacheEvictRemoved: debugEnvValue("BlockCacheEvictRemoved", 0) != 0,
BlockRestartInterval: debugEnvValue("BlockRestartInterval", 0),
BlockSize: debugEnvValue("BlockSize", defaultBlockSize),
CompactionExpandLimitFactor: debugEnvValue("CompactionExpandLimitFactor", 0),
CompactionGPOverlapsFactor: debugEnvValue("CompactionGPOverlapsFactor", 0),
CompactionL0Trigger: debugEnvValue("CompactionL0Trigger", defaultCompactionL0Trigger),
CompactionSourceLimitFactor: debugEnvValue("CompactionSourceLimitFactor", 0),
CompactionTableSize: debugEnvValue("CompactionTableSize", defaultCompactionTableSize),
CompactionTableSizeMultiplier: float64(debugEnvValue("CompactionTableSizeMultiplier", defaultCompactionTableSizeMultiplier)) / 10.0,
CompactionTotalSize: debugEnvValue("CompactionTotalSize", 0),
CompactionTotalSizeMultiplier: float64(debugEnvValue("CompactionTotalSizeMultiplier", 0)) / 10.0,
DisableBufferPool: debugEnvValue("DisableBufferPool", 0) != 0,
DisableBlockCache: debugEnvValue("DisableBlockCache", 0) != 0,
DisableCompactionBackoff: debugEnvValue("DisableCompactionBackoff", 0) != 0,
DisableLargeBatchTransaction: debugEnvValue("DisableLargeBatchTransaction", 0) != 0,
NoSync: debugEnvValue("NoSync", 0) != 0,
NoWriteMerge: debugEnvValue("NoWriteMerge", 0) != 0,
OpenFilesCacheCapacity: debugEnvValue("OpenFilesCacheCapacity", dbMaxOpenFiles),
WriteBuffer: debugEnvValue("WriteBuffer", defaultWriteBuffer),
// The write slowdown and pause can be overridden, but even if they
// are not and the compaction trigger is overridden we need to
// adjust so that we don't pause writes for L0 compaction before we
// even *start* L0 compaction...
WriteL0SlowdownTrigger: debugEnvValue("WriteL0SlowdownTrigger", 2*debugEnvValue("CompactionL0Trigger", defaultCompactionL0Trigger)),
WriteL0PauseTrigger: debugEnvValue("WriteL0PauseTrigger", 3*debugEnvValue("CompactionL0Trigger", defaultCompactionL0Trigger)),
}
return opts
}
func open(location string, opts *opt.Options) (*leveldb.DB, error) {
db, err := leveldb.OpenFile(location, opts)
if leveldbIsCorrupted(err) {
db, err = leveldb.RecoverFile(location, opts)
}
if leveldbIsCorrupted(err) {
// The database is corrupted, and we've tried to recover it but it
// didn't work. At this point there isn't much to do beyond dropping
// the database and reindexing...
l.Infoln("Database corruption detected, unable to recover. Reinitializing...")
if err := os.RemoveAll(location); err != nil {
return nil, errorSuggestion{err, "failed to delete corrupted database"}
}
db, err = leveldb.OpenFile(location, opts)
}
if err != nil {
return nil, errorSuggestion{err, "is another instance of Syncthing running?"}
}
if debugEnvValue("CompactEverything", 0) != 0 {
if err := db.CompactRange(util.Range{}); err != nil {
l.Warnln("Compacting database:", err)
}
}
return db, nil
}
func debugEnvValue(key string, def int) int {
v, err := strconv.ParseInt(os.Getenv("STDEBUG_"+key), 10, 63)
if err != nil {
return def
}
return int(v)
}
// A "better" version of leveldb's errors.IsCorrupted.
func leveldbIsCorrupted(err error) bool {
switch {
case err == nil:
return false
case errors.IsCorrupted(err):
return true
case strings.Contains(err.Error(), "corrupted"):
return true
}
return false
}
// dbIsLarge returns whether the estimated size of the database at location
// is large enough to warrant optimization for large databases.
func dbIsLarge(location string) bool {
if ^uint(0)>>63 == 0 {
// We're compiled for a 32 bit architecture. We've seen trouble with
// large settings there.
// (https://forum.syncthing.net/t/many-small-ldb-files-with-database-tuning/13842)
return false
}
dir, err := os.Open(location)
if err != nil {
return false
}
fis, err := dir.Readdir(-1)
if err != nil {
return false
}
var size int64
for _, fi := range fis {
if fi.Name() == "LOG" {
// don't count the size
continue
}
size += fi.Size()
}
return size > dbLargeThreshold
}
type errorSuggestion struct {
inner error
suggestion string
}
func (e errorSuggestion) Error() string {
return fmt.Sprintf("%s (%s)", e.inner.Error(), e.suggestion)
}


@@ -0,0 +1,13 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package backend
import "testing"
func TestLevelDBBackendBehavior(t *testing.T) {
testBackendBehavior(t, OpenLevelDBMemory)
}


@@ -11,6 +11,7 @@ import (
"testing"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -40,7 +41,7 @@ func lazyInitBenchFiles() {
func getBenchFileSet() (*db.Lowlevel, *db.FileSet) {
lazyInitBenchFiles()
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
benchS := db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
replace(benchS, remoteDevice0, files)
replace(benchS, protocol.LocalDeviceID, firstHalf)
@@ -49,7 +50,7 @@ func getBenchFileSet() (*db.Lowlevel, *db.FileSet) {
}
func BenchmarkReplaceAll(b *testing.B) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
defer ldb.Close()
b.ResetTimer()
@@ -157,7 +158,7 @@ func BenchmarkNeedHalf(b *testing.B) {
}
func BenchmarkNeedHalfRemote(b *testing.B) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
defer ldb.Close()
fset := db.NewFileSet("test)", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
replace(fset, remoteDevice0, firstHalf)


@@ -11,14 +11,12 @@ import (
"fmt"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syndtr/goleveldb/leveldb/util"
)
var blockFinder *BlockFinder
type BlockFinder struct {
db *instance
db *Lowlevel
}
func NewBlockFinder(db *Lowlevel) *BlockFinder {
@@ -27,7 +25,7 @@ func NewBlockFinder(db *Lowlevel) *BlockFinder {
}
return &BlockFinder{
db: newInstance(db),
db: db,
}
}
@@ -41,13 +39,22 @@ func (f *BlockFinder) String() string {
// reason. The iterator finally returns the result, whether or not a
// satisfying block was eventually found.
func (f *BlockFinder) Iterate(folders []string, hash []byte, iterFn func(string, string, int32) bool) bool {
t := f.db.newReadOnlyTransaction()
t, err := f.db.newReadOnlyTransaction()
if err != nil {
return false
}
defer t.close()
var key []byte
for _, folder := range folders {
key = f.db.keyer.GenerateBlockMapKey(key, []byte(folder), hash, nil)
iter := t.NewIterator(util.BytesPrefix(key), nil)
key, err = f.db.keyer.GenerateBlockMapKey(key, []byte(folder), hash, nil)
if err != nil {
return false
}
iter, err := t.NewPrefixIterator(key)
if err != nil {
return false
}
for iter.Next() && iter.Error() == nil {
file := string(f.db.keyer.NameFromBlockMapKey(iter.Key()))


@@ -10,23 +10,10 @@ import (
"encoding/binary"
"testing"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb/util"
)
func genBlocks(n int) []protocol.BlockInfo {
b := make([]protocol.BlockInfo, n)
for i := range b {
h := make([]byte, 32)
for j := range h {
h[j] = byte(i + j)
}
b[i].Size = int32(i)
b[i].Hash = h
}
return b
}
var f1, f2, f3 protocol.FileInfo
var folders = []string{"folder1", "folder2"}
@@ -49,21 +36,27 @@ func init() {
}
}
func setup() (*instance, *BlockFinder) {
func setup() (*Lowlevel, *BlockFinder) {
// Setup
db := OpenMemory()
return newInstance(db), NewBlockFinder(db)
db := NewLowlevel(backend.OpenMemory())
return db, NewBlockFinder(db)
}
func dbEmpty(db *instance) bool {
iter := db.NewIterator(util.BytesPrefix([]byte{KeyTypeBlock}), nil)
func dbEmpty(db *Lowlevel) bool {
iter, err := db.NewPrefixIterator([]byte{KeyTypeBlock})
if err != nil {
panic(err)
}
defer iter.Release()
return !iter.Next()
}
func addToBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
t := db.newReadWriteTransaction()
func addToBlockMap(db *Lowlevel, folder []byte, fs []protocol.FileInfo) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var keyBuf []byte
@@ -73,15 +66,24 @@ func addToBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
name := []byte(f.Name)
for i, block := range f.Blocks {
binary.BigEndian.PutUint32(blockBuf, uint32(i))
keyBuf = t.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Put(keyBuf, blockBuf)
keyBuf, err = t.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
if err != nil {
return err
}
if err := t.Put(keyBuf, blockBuf); err != nil {
return err
}
}
}
}
return t.commit()
}
func discardFromBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
t := db.newReadWriteTransaction()
func discardFromBlockMap(db *Lowlevel, folder []byte, fs []protocol.FileInfo) error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var keyBuf []byte
@@ -89,11 +91,17 @@ func discardFromBlockMap(db *instance, folder []byte, fs []protocol.FileInfo) {
if !ef.IsDirectory() && !ef.IsDeleted() && !ef.IsInvalid() {
name := []byte(ef.Name)
for _, block := range ef.Blocks {
keyBuf = t.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Delete(keyBuf)
keyBuf, err = t.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
if err != nil {
return err
}
if err := t.Delete(keyBuf); err != nil {
return err
}
}
}
}
return t.commit()
}
func TestBlockMapAddUpdateWipe(t *testing.T) {
@@ -107,7 +115,9 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
f3.Type = protocol.FileInfoTypeDirectory
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
if err := addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3}); err != nil {
t.Fatal(err)
}
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
if folder != "folder1" || file != "f1" || index != 0 {
@@ -128,12 +138,16 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
return true
})
discardFromBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
if err := discardFromBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3}); err != nil {
t.Fatal(err)
}
f1.Deleted = true
f2.LocalFlags = protocol.FlagLocalMustRescan // one of the invalid markers
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3})
if err := addToBlockMap(db, folder, []protocol.FileInfo{f1, f2, f3}); err != nil {
t.Fatal(err)
}
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
t.Fatal("Unexpected block")
@@ -152,14 +166,18 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
return true
})
db.dropFolder(folder)
if err := db.dropFolder(folder); err != nil {
t.Fatal(err)
}
if !dbEmpty(db) {
t.Fatal("db not empty")
}
// Should not add
addToBlockMap(db, folder, []protocol.FileInfo{f1, f2})
if err := addToBlockMap(db, folder, []protocol.FileInfo{f1, f2}); err != nil {
t.Fatal(err)
}
if !dbEmpty(db) {
t.Fatal("db not empty")
@@ -179,8 +197,12 @@ func TestBlockFinderLookup(t *testing.T) {
folder1 := []byte("folder1")
folder2 := []byte("folder2")
addToBlockMap(db, folder1, []protocol.FileInfo{f1})
addToBlockMap(db, folder2, []protocol.FileInfo{f1})
if err := addToBlockMap(db, folder1, []protocol.FileInfo{f1}); err != nil {
t.Fatal(err)
}
if err := addToBlockMap(db, folder2, []protocol.FileInfo{f1}); err != nil {
t.Fatal(err)
}
counter := 0
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {
@@ -204,11 +226,15 @@ func TestBlockFinderLookup(t *testing.T) {
t.Fatal("Incorrect count", counter)
}
discardFromBlockMap(db, folder1, []protocol.FileInfo{f1})
if err := discardFromBlockMap(db, folder1, []protocol.FileInfo{f1}); err != nil {
t.Fatal(err)
}
f1.Deleted = true
addToBlockMap(db, folder1, []protocol.FileInfo{f1})
if err := addToBlockMap(db, folder1, []protocol.FileInfo{f1}); err != nil {
t.Fatal(err)
}
counter = 0
f.Iterate(folders, f1.Blocks[0].Hash, func(folder, file string, index int32) bool {


@@ -1,237 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
// this is a really tedious test for an old issue
// +build ignore
package db_test
import (
"crypto/rand"
"log"
"os"
"testing"
"time"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/util"
)
var keys [][]byte
func init() {
for i := 0; i < nItems; i++ {
keys = append(keys, randomData(1))
}
}
const nItems = 10000
func randomData(prefix byte) []byte {
data := make([]byte, 1+32+64+32)
_, err := rand.Reader.Read(data)
if err != nil {
panic(err)
}
return append([]byte{prefix}, data...)
}
func setItems(db *leveldb.DB) error {
batch := new(leveldb.Batch)
for _, k1 := range keys {
k2 := randomData(2)
// k2 -> data
batch.Put(k2, randomData(42))
// k1 -> k2
batch.Put(k1, k2)
}
if testing.Verbose() {
log.Printf("batch write (set) %p", batch)
}
return db.Write(batch, nil)
}
func clearItems(db *leveldb.DB) error {
snap, err := db.GetSnapshot()
if err != nil {
return err
}
defer snap.Release()
// Iterate over k2
it := snap.NewIterator(util.BytesPrefix([]byte{1}), nil)
defer it.Release()
batch := new(leveldb.Batch)
for it.Next() {
k1 := it.Key()
k2 := it.Value()
// k2 should exist
_, err := snap.Get(k2, nil)
if err != nil {
return err
}
// Delete the k1 => k2 mapping first
batch.Delete(k1)
// Then the k2 => data mapping
batch.Delete(k2)
}
if testing.Verbose() {
log.Printf("batch write (clear) %p", batch)
}
return db.Write(batch, nil)
}
func scanItems(db *leveldb.DB) error {
snap, err := db.GetSnapshot()
if testing.Verbose() {
log.Printf("snap create %p", snap)
}
if err != nil {
return err
}
defer func() {
if testing.Verbose() {
log.Printf("snap release %p", snap)
}
snap.Release()
}()
// Iterate from the start of k2 space to the end
it := snap.NewIterator(util.BytesPrefix([]byte{1}), nil)
defer it.Release()
i := 0
for it.Next() {
// k2 => k1 => data
k1 := it.Key()
k2 := it.Value()
_, err := snap.Get(k2, nil)
if err != nil {
log.Printf("k1: %x", k1)
log.Printf("k2: %x (missing)", k2)
return err
}
i++
}
if testing.Verbose() {
log.Println("scanned", i)
}
return nil
}
func TestConcurrentSetClear(t *testing.T) {
if testing.Short() {
return
}
dur := 30 * time.Second
t0 := time.Now()
wg := sync.NewWaitGroup()
os.RemoveAll("testdata/concurrent-set-clear.db")
db, err := leveldb.OpenFile("testdata/concurrent-set-clear.db", &opt.Options{OpenFilesCacheCapacity: 10})
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll("testdata/concurrent-set-clear.db")
errChan := make(chan error, 3)
wg.Add(1)
go func() {
defer wg.Done()
for time.Since(t0) < dur {
if err := setItems(db); err != nil {
errChan <- err
return
}
if err := clearItems(db); err != nil {
errChan <- err
return
}
}
}()
wg.Add(1)
go func() {
defer wg.Done()
for time.Since(t0) < dur {
if err := scanItems(db); err != nil {
errChan <- err
return
}
}
}()
go func() {
wg.Wait()
errChan <- nil
}()
err = <-errChan
if err != nil {
t.Error(err)
}
db.Close()
}
func TestConcurrentSetOnly(t *testing.T) {
if testing.Short() {
return
}
dur := 30 * time.Second
t0 := time.Now()
wg := sync.NewWaitGroup()
os.RemoveAll("testdata/concurrent-set-only.db")
db, err := leveldb.OpenFile("testdata/concurrent-set-only.db", &opt.Options{OpenFilesCacheCapacity: 10})
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll("testdata/concurrent-set-only.db")
errChan := make(chan error, 3)
wg.Add(1)
go func() {
defer wg.Done()
for time.Since(t0) < dur {
if err := setItems(db); err != nil {
errChan <- err
return
}
}
}()
wg.Add(1)
go func() {
defer wg.Done()
for time.Since(t0) < dur {
if err := scanItems(db); err != nil {
errChan <- err
return
}
}
}()
go func() {
wg.Wait()
errChan <- nil
}()
err = <-errChan
if err != nil {
t.Error(err)
}
}


@@ -9,17 +9,33 @@ package db
import (
"testing"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
)
func genBlocks(n int) []protocol.BlockInfo {
b := make([]protocol.BlockInfo, n)
for i := range b {
h := make([]byte, 32)
for j := range h {
h[j] = byte(i + j)
}
b[i].Size = int32(i)
b[i].Hash = h
}
return b
}
func TestIgnoredFiles(t *testing.T) {
ldb, err := openJSONS("testdata/v0.14.48-ignoredfiles.db.jsons")
if err != nil {
t.Fatal(err)
}
db := NewLowlevel(ldb, "<memory>")
UpdateSchema(db)
db := NewLowlevel(ldb)
if err := UpdateSchema(db); err != nil {
t.Fatal(err)
}
fs := NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), db)
@@ -142,25 +158,35 @@ func TestUpdate0to3(t *testing.T) {
t.Fatal(err)
}
db := newInstance(NewLowlevel(ldb, "<memory>"))
db := NewLowlevel(ldb)
updater := schemaUpdater{db}
folder := []byte(update0to3Folder)
updater.updateSchema0to1()
if err := updater.updateSchema0to1(); err != nil {
t.Fatal(err)
}
if _, ok := db.getFileDirty(folder, protocol.LocalDeviceID[:], []byte(slashPrefixed)); ok {
if _, ok, err := db.getFileDirty(folder, protocol.LocalDeviceID[:], []byte(slashPrefixed)); err != nil {
t.Fatal(err)
} else if ok {
t.Error("File prefixed by '/' was not removed during transition to schema 1")
}
if _, err := db.Get(db.keyer.GenerateGlobalVersionKey(nil, folder, []byte(invalid)), nil); err != nil {
key, err := db.keyer.GenerateGlobalVersionKey(nil, folder, []byte(invalid))
if err != nil {
t.Fatal(err)
}
if _, err := db.Get(key); err != nil {
t.Error("Invalid file wasn't added to global list")
}
updater.updateSchema1to2()
if err := updater.updateSchema1to2(); err != nil {
t.Fatal(err)
}
found := false
db.withHaveSequence(folder, 0, func(fi FileIntf) bool {
_ = db.withHaveSequence(folder, 0, func(fi FileIntf) bool {
f := fi.(protocol.FileInfo)
l.Infoln(f)
if found {
@@ -178,14 +204,16 @@ func TestUpdate0to3(t *testing.T) {
t.Error("Local file wasn't added to sequence bucket", err)
}
updater.updateSchema2to3()
if err := updater.updateSchema2to3(); err != nil {
t.Fatal(err)
}
need := map[string]protocol.FileInfo{
haveUpdate0to3[remoteDevice0][0].Name: haveUpdate0to3[remoteDevice0][0],
haveUpdate0to3[remoteDevice1][0].Name: haveUpdate0to3[remoteDevice1][0],
haveUpdate0to3[remoteDevice0][2].Name: haveUpdate0to3[remoteDevice0][2],
}
db.withNeed(folder, protocol.LocalDeviceID[:], false, func(fi FileIntf) bool {
_ = db.withNeed(folder, protocol.LocalDeviceID[:], false, func(fi FileIntf) bool {
e, ok := need[fi.FileName()]
if !ok {
t.Error("Got unexpected needed file:", fi.FileName())
@@ -203,12 +231,17 @@ func TestUpdate0to3(t *testing.T) {
}
func TestDowngrade(t *testing.T) {
db := OpenMemory()
UpdateSchema(db) // sets the min version etc
db := NewLowlevel(backend.OpenMemory())
// sets the min version etc
if err := UpdateSchema(db); err != nil {
t.Fatal(err)
}
// Bump the database version to something newer than we actually support
miscDB := NewMiscDataNamespace(db)
miscDB.PutInt64("dbVersion", dbVersion+1)
if err := miscDB.PutInt64("dbVersion", dbVersion+1); err != nil {
t.Fatal(err)
}
l.Infoln(dbVersion)
// Pretend we just opened the DB and attempt to update it again


@@ -1,568 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package db
import (
"bytes"
"encoding/binary"
"fmt"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
)
type instance struct {
*Lowlevel
keyer keyer
}
func newInstance(ll *Lowlevel) *instance {
return &instance{
Lowlevel: ll,
keyer: newDefaultKeyer(ll.folderIdx, ll.deviceIdx),
}
}
// updateRemoteFiles adds a list of fileinfos to the database and updates the
// global versionlist and metadata.
func (db *instance) updateRemoteFiles(folder, device []byte, fs []protocol.FileInfo, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
var dk, gk, keyBuf []byte
devID := protocol.DeviceIDFromBytes(device)
for _, f := range fs {
name := []byte(f.Name)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, device, name)
ef, ok := t.getFileTrunc(dk, true)
if ok && unchanged(f, ef) {
continue
}
if ok {
meta.removeFile(devID, ef)
}
meta.addFile(devID, f)
l.Debugf("insert; folder=%q device=%v %v", folder, devID, f)
t.Put(dk, mustMarshal(&f))
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
keyBuf, _ = t.updateGlobal(gk, keyBuf, folder, device, f, meta)
t.checkFlush()
}
}
// updateLocalFiles adds fileinfos to the db, and updates the global versionlist,
// metadata, sequence and blockmap buckets.
func (db *instance) updateLocalFiles(folder []byte, fs []protocol.FileInfo, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
var dk, gk, keyBuf []byte
blockBuf := make([]byte, 4)
for _, f := range fs {
name := []byte(f.Name)
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], name)
ef, ok := t.getFileByKey(dk)
if ok && unchanged(f, ef) {
continue
}
if ok {
if !ef.IsDirectory() && !ef.IsDeleted() && !ef.IsInvalid() {
for _, block := range ef.Blocks {
keyBuf = db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Delete(keyBuf)
}
}
keyBuf = db.keyer.GenerateSequenceKey(keyBuf, folder, ef.SequenceNo())
t.Delete(keyBuf)
l.Debugf("removing sequence; folder=%q sequence=%v %v", folder, ef.SequenceNo(), ef.FileName())
}
f.Sequence = meta.nextLocalSeq()
if ok {
meta.removeFile(protocol.LocalDeviceID, ef)
}
meta.addFile(protocol.LocalDeviceID, f)
l.Debugf("insert (local); folder=%q %v", folder, f)
t.Put(dk, mustMarshal(&f))
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, []byte(f.Name))
keyBuf, _ = t.updateGlobal(gk, keyBuf, folder, protocol.LocalDeviceID[:], f, meta)
keyBuf = db.keyer.GenerateSequenceKey(keyBuf, folder, f.Sequence)
t.Put(keyBuf, dk)
l.Debugf("adding sequence; folder=%q sequence=%v %v", folder, f.Sequence, f.Name)
if !f.IsDirectory() && !f.IsDeleted() && !f.IsInvalid() {
for i, block := range f.Blocks {
binary.BigEndian.PutUint32(blockBuf, uint32(i))
keyBuf = db.keyer.GenerateBlockMapKey(keyBuf, folder, block.Hash, name)
t.Put(keyBuf, blockBuf)
}
}
t.checkFlush()
}
}
func (db *instance) withHave(folder, device, prefix []byte, truncate bool, fn Iterator) {
t := db.newReadOnlyTransaction()
defer t.close()
if len(prefix) > 0 {
unslashedPrefix := prefix
if bytes.HasSuffix(prefix, []byte{'/'}) {
unslashedPrefix = unslashedPrefix[:len(unslashedPrefix)-1]
} else {
prefix = append(prefix, '/')
}
if f, ok := t.getFileTrunc(db.keyer.GenerateDeviceFileKey(nil, folder, device, unslashedPrefix), true); ok && !fn(f) {
return
}
}
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateDeviceFileKey(nil, folder, device, prefix)), nil)
defer dbi.Release()
for dbi.Next() {
name := db.keyer.NameFromDeviceFileKey(dbi.Key())
if len(prefix) > 0 && !bytes.HasPrefix(name, prefix) {
return
}
f, err := unmarshalTrunc(dbi.Value(), truncate)
if err != nil {
l.Debugln("unmarshal error:", err)
continue
}
if !fn(f) {
return
}
}
}
func (db *instance) withHaveSequence(folder []byte, startSeq int64, fn Iterator) {
t := db.newReadOnlyTransaction()
defer t.close()
dbi := t.NewIterator(&util.Range{Start: db.keyer.GenerateSequenceKey(nil, folder, startSeq), Limit: db.keyer.GenerateSequenceKey(nil, folder, maxInt64)}, nil)
defer dbi.Release()
for dbi.Next() {
f, ok := t.getFileByKey(dbi.Value())
if !ok {
l.Debugln("missing file for sequence number", db.keyer.SequenceFromSequenceKey(dbi.Key()))
continue
}
if shouldDebug() {
if seq := db.keyer.SequenceFromSequenceKey(dbi.Key()); f.Sequence != seq {
l.Warnf("Sequence index corruption (folder %v, file %v): sequence %d != expected %d", string(folder), f.Name, f.Sequence, seq)
panic("sequence index corruption")
}
}
if !fn(f) {
return
}
}
}
func (db *instance) withAllFolderTruncated(folder []byte, fn func(device []byte, f FileInfoTruncated) bool) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateDeviceFileKey(nil, folder, nil, nil).WithoutNameAndDevice()), nil)
defer dbi.Release()
var gk, keyBuf []byte
for dbi.Next() {
device, ok := db.keyer.DeviceFromDeviceFileKey(dbi.Key())
if !ok {
// Not having the device in the index is bad. Clear it.
t.Delete(dbi.Key())
t.checkFlush()
continue
}
var f FileInfoTruncated
// The iterator function may keep a reference to the unmarshalled
// struct, which in turn references the buffer it was unmarshalled
// from. dbi.Value() just returns an internal slice that it reuses, so
// we need to copy it.
err := f.Unmarshal(append([]byte{}, dbi.Value()...))
if err != nil {
l.Debugln("unmarshal error:", err)
continue
}
switch f.Name {
case "", ".", "..", "/": // A few obviously invalid filenames
l.Infof("Dropping invalid filename %q from database", f.Name)
name := []byte(f.Name)
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
keyBuf = t.removeFromGlobal(gk, keyBuf, folder, device, name, nil)
t.Delete(dbi.Key())
t.checkFlush()
continue
}
if !fn(device, f) {
return
}
}
}
func (db *instance) getFileDirty(folder, device, file []byte) (protocol.FileInfo, bool) {
t := db.newReadOnlyTransaction()
defer t.close()
return t.getFile(folder, device, file)
}
func (db *instance) getGlobalDirty(folder, file []byte, truncate bool) (FileIntf, bool) {
t := db.newReadOnlyTransaction()
defer t.close()
_, f, ok := t.getGlobal(nil, folder, file, truncate)
return f, ok
}
func (db *instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator) {
t := db.newReadOnlyTransaction()
defer t.close()
if len(prefix) > 0 {
unslashedPrefix := prefix
if bytes.HasSuffix(prefix, []byte{'/'}) {
unslashedPrefix = unslashedPrefix[:len(unslashedPrefix)-1]
} else {
prefix = append(prefix, '/')
}
if _, f, ok := t.getGlobal(nil, folder, unslashedPrefix, truncate); ok && !fn(f) {
return
}
}
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, prefix)), nil)
defer dbi.Release()
var dk []byte
for dbi.Next() {
name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
if len(prefix) > 0 && !bytes.HasPrefix(name, prefix) {
return
}
vl, ok := unmarshalVersionList(dbi.Value())
if !ok {
continue
}
dk = db.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[0].Device, name)
f, ok := t.getFileTrunc(dk, truncate)
if !ok {
continue
}
if !fn(f) {
return
}
}
}
func (db *instance) availability(folder, file []byte) []protocol.DeviceID {
k := db.keyer.GenerateGlobalVersionKey(nil, folder, file)
bs, err := db.Get(k, nil)
if err == leveldb.ErrNotFound {
return nil
}
if err != nil {
l.Debugln("surprise error:", err)
return nil
}
vl, ok := unmarshalVersionList(bs)
if !ok {
return nil
}
var devices []protocol.DeviceID
for _, v := range vl.Versions {
if !v.Version.Equal(vl.Versions[0].Version) {
break
}
if v.Invalid {
continue
}
n := protocol.DeviceIDFromBytes(v.Device)
devices = append(devices, n)
}
return devices
}
func (db *instance) withNeed(folder, device []byte, truncate bool, fn Iterator) {
if bytes.Equal(device, protocol.LocalDeviceID[:]) {
db.withNeedLocal(folder, truncate, fn)
return
}
t := db.newReadOnlyTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var dk []byte
devID := protocol.DeviceIDFromBytes(device)
for dbi.Next() {
vl, ok := unmarshalVersionList(dbi.Value())
if !ok {
continue
}
haveFV, have := vl.Get(device)
// XXX: This marks Concurrent (i.e. conflicting) changes as
// needs. Maybe we should do that, but it needs special
// handling in the puller.
if have && haveFV.Version.GreaterEqual(vl.Versions[0].Version) {
continue
}
name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
needVersion := vl.Versions[0].Version
needDevice := protocol.DeviceIDFromBytes(vl.Versions[0].Device)
for i := range vl.Versions {
if !vl.Versions[i].Version.Equal(needVersion) {
// We haven't found a valid copy of the file with the needed version.
break
}
if vl.Versions[i].Invalid {
// The file is marked invalid, don't use it.
continue
}
dk = db.keyer.GenerateDeviceFileKey(dk, folder, vl.Versions[i].Device, name)
gf, ok := t.getFileTrunc(dk, truncate)
if !ok {
continue
}
if gf.IsDeleted() && !have {
// We don't need deleted files that we don't have
break
}
l.Debugf("need folder=%q device=%v name=%q have=%v invalid=%v haveV=%v globalV=%v globalDev=%v", folder, devID, name, have, haveFV.Invalid, haveFV.Version, needVersion, needDevice)
if !fn(gf) {
return
}
// This file is handled, no need to look further in the version list
break
}
}
}
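The schema updates further down call a need(...) helper for the same decision this loop makes inline. A sketch of that predicate, reconstructed from the checks above with the signature inferred from its call sites below:

// need reports whether the global file should be fetched, given whether we
// have a local version and which version that is.
func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
	// We never need an invalid file.
	if global.IsInvalid() {
		return false
	}
	// We don't need a deleted file that we don't have.
	if global.IsDeleted() && !haveLocal {
		return false
	}
	// We don't need it if we already have the same version or newer.
	if haveLocal && localVersion.GreaterEqual(global.FileVersion()) {
		return false
	}
	return true
}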
func (db *instance) withNeedLocal(folder []byte, truncate bool, fn Iterator) {
t := db.newReadOnlyTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateNeedFileKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var keyBuf []byte
var f FileIntf
var ok bool
for dbi.Next() {
keyBuf, f, ok = t.getGlobal(keyBuf, folder, db.keyer.NameFromGlobalVersionKey(dbi.Key()), truncate)
if !ok {
continue
}
if !fn(f) {
return
}
}
}
func (db *instance) dropFolder(folder []byte) {
t := db.newReadWriteTransaction()
defer t.close()
for _, key := range [][]byte{
// Remove all items related to the given folder from the device->file bucket
db.keyer.GenerateDeviceFileKey(nil, folder, nil, nil).WithoutNameAndDevice(),
// Remove all sequences related to the folder
db.keyer.GenerateSequenceKey(nil, []byte(folder), 0).WithoutSequence(),
// Remove all items related to the given folder from the global bucket
db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName(),
// Remove all needs related to the folder
db.keyer.GenerateNeedFileKey(nil, folder, nil).WithoutName(),
// Remove the blockmap of the folder
db.keyer.GenerateBlockMapKey(nil, folder, nil, nil).WithoutHashAndName(),
} {
t.deleteKeyPrefix(key)
}
}
func (db *instance) dropDeviceFolder(device, folder []byte, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateDeviceFileKey(nil, folder, device, nil)), nil)
defer dbi.Release()
var gk, keyBuf []byte
for dbi.Next() {
name := db.keyer.NameFromDeviceFileKey(dbi.Key())
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
keyBuf = t.removeFromGlobal(gk, keyBuf, folder, device, name, meta)
t.Delete(dbi.Key())
t.checkFlush()
}
if bytes.Equal(device, protocol.LocalDeviceID[:]) {
t.deleteKeyPrefix(db.keyer.GenerateBlockMapKey(nil, folder, nil, nil).WithoutHashAndName())
}
}
func (db *instance) checkGlobals(folder []byte, meta *metadataTracker) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(db.keyer.GenerateGlobalVersionKey(nil, folder, nil).WithoutName()), nil)
defer dbi.Release()
var dk []byte
for dbi.Next() {
vl, ok := unmarshalVersionList(dbi.Value())
if !ok {
continue
}
// Check the global version list for consistency. An issue in previous
// versions of goleveldb could reorder writes, leaving global entries
// that point to files which no longer exist. Here we find those and
// clear them out.
name := db.keyer.NameFromGlobalVersionKey(dbi.Key())
var newVL VersionList
for i, version := range vl.Versions {
dk = db.keyer.GenerateDeviceFileKey(dk, folder, version.Device, name)
_, err := t.Get(dk, nil)
if err == leveldb.ErrNotFound {
continue
}
if err != nil {
l.Debugln("surprise error:", err)
return
}
newVL.Versions = append(newVL.Versions, version)
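// The first surviving entry is the global version; only it counts
// toward the recalculated folder metadata.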
if i == 0 {
if fi, ok := t.getFileByKey(dk); ok {
meta.addFile(protocol.GlobalDeviceID, fi)
}
}
}
if len(newVL.Versions) != len(vl.Versions) {
t.Put(dbi.Key(), mustMarshal(&newVL))
t.checkFlush()
}
}
l.Debugf("db check completed for %q", folder)
}
func (db *instance) getIndexID(device, folder []byte) protocol.IndexID {
cur, err := db.Get(db.keyer.GenerateIndexIDKey(nil, device, folder), nil)
if err != nil {
return 0
}
var id protocol.IndexID
if err := id.Unmarshal(cur); err != nil {
return 0
}
return id
}
func (db *instance) setIndexID(device, folder []byte, id protocol.IndexID) {
bs, _ := id.Marshal() // marshalling can't fail
if err := db.Put(db.keyer.GenerateIndexIDKey(nil, device, folder), bs, nil); err != nil && err != leveldb.ErrClosed {
panic("storing index ID: " + err.Error())
}
}
func (db *instance) dropMtimes(folder []byte) {
db.dropPrefix(db.keyer.GenerateMtimesKey(nil, folder))
}
func (db *instance) dropFolderMeta(folder []byte) {
db.dropPrefix(db.keyer.GenerateFolderMetaKey(nil, folder))
}
func (db *instance) dropPrefix(prefix []byte) {
t := db.newReadWriteTransaction()
defer t.close()
t.deleteKeyPrefix(prefix)
}
func unmarshalTrunc(bs []byte, truncate bool) (FileIntf, error) {
if truncate {
var tf FileInfoTruncated
err := tf.Unmarshal(bs)
return tf, err
}
var tf protocol.FileInfo
err := tf.Unmarshal(bs)
return tf, err
}
func unmarshalVersionList(data []byte) (VersionList, bool) {
var vl VersionList
if err := vl.Unmarshal(data); err != nil {
l.Debugln("unmarshal error:", err)
return VersionList{}, false
}
if len(vl.Versions) == 0 {
l.Debugln("empty version list")
return VersionList{}, false
}
return vl, true
}
type errorSuggestion struct {
inner error
suggestion string
}
func (e errorSuggestion) Error() string {
return fmt.Sprintf("%s (%s)", e.inner.Error(), e.suggestion)
}
// unchanged checks whether two files are the same and thus don't need to
// be updated. Note that local flags or the invalid bit may change without
// the version being bumped, so they are compared explicitly.
func unchanged(nf, ef FileIntf) bool {
return ef.FileVersion().Equal(nf.FileVersion()) && ef.IsInvalid() == nf.IsInvalid() && ef.FileLocalFlags() == nf.FileLocalFlags()
}
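A hypothetical in-package test sketch (scenario assumed; uses the package's existing testing and protocol imports) of why the extra comparisons matter: ignoring a file flips a local flag without bumping the version, so a version-only check would wrongly report the file as unchanged.

func TestUnchangedSeesLocalFlags(t *testing.T) {
	ef := protocol.FileInfo{Name: "a"}        // existing database entry
	nf := ef                                  // identical version vector...
	nf.LocalFlags = protocol.FlagLocalIgnored // ...but now locally ignored
	if unchanged(nf, ef) {
		t.Error("files with differing local flags must not compare as unchanged")
	}
}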

View File

@@ -63,36 +63,36 @@ const (
type keyer interface {
// device file key stuff
GenerateDeviceFileKey(key, folder, device, name []byte) deviceFileKey
GenerateDeviceFileKey(key, folder, device, name []byte) (deviceFileKey, error)
NameFromDeviceFileKey(key []byte) []byte
DeviceFromDeviceFileKey(key []byte) ([]byte, bool)
FolderFromDeviceFileKey(key []byte) ([]byte, bool)
// global version key stuff
GenerateGlobalVersionKey(key, folder, name []byte) globalVersionKey
GenerateGlobalVersionKey(key, folder, name []byte) (globalVersionKey, error)
NameFromGlobalVersionKey(key []byte) []byte
FolderFromGlobalVersionKey(key []byte) ([]byte, bool)
// block map key stuff (former BlockMap)
GenerateBlockMapKey(key, folder, hash, name []byte) blockMapKey
GenerateBlockMapKey(key, folder, hash, name []byte) (blockMapKey, error)
NameFromBlockMapKey(key []byte) []byte
// file need index
GenerateNeedFileKey(key, folder, name []byte) needFileKey
GenerateNeedFileKey(key, folder, name []byte) (needFileKey, error)
// file sequence index
GenerateSequenceKey(key, folder []byte, seq int64) sequenceKey
GenerateSequenceKey(key, folder []byte, seq int64) (sequenceKey, error)
SequenceFromSequenceKey(key []byte) int64
// index IDs
GenerateIndexIDKey(key, device, folder []byte) indexIDKey
GenerateIndexIDKey(key, device, folder []byte) (indexIDKey, error)
DeviceFromIndexIDKey(key []byte) ([]byte, bool)
// Mtimes
GenerateMtimesKey(key, folder []byte) mtimesKey
GenerateMtimesKey(key, folder []byte) (mtimesKey, error)
// Folder metadata
GenerateFolderMetaKey(key, folder []byte) folderMetaKey
GenerateFolderMetaKey(key, folder []byte) (folderMetaKey, error)
}
// defaultKeyer implements our key scheme. It needs folder and device
@@ -115,13 +115,21 @@ func (k deviceFileKey) WithoutNameAndDevice() []byte {
return k[:keyPrefixLen+keyFolderLen]
}
func (k defaultKeyer) GenerateDeviceFileKey(key, folder, device, name []byte) deviceFileKey {
func (k defaultKeyer) GenerateDeviceFileKey(key, folder, device, name []byte) (deviceFileKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
deviceID, err := k.deviceIdx.ID(device)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen+keyDeviceLen+len(name))
key[0] = KeyTypeDevice
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
binary.BigEndian.PutUint32(key[keyPrefixLen+keyFolderLen:], k.deviceIdx.ID(device))
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
binary.BigEndian.PutUint32(key[keyPrefixLen+keyFolderLen:], deviceID)
copy(key[keyPrefixLen+keyFolderLen+keyDeviceLen:], name)
return key
return key, nil
}
func (k defaultKeyer) NameFromDeviceFileKey(key []byte) []byte {
@@ -142,12 +150,16 @@ func (k globalVersionKey) WithoutName() []byte {
return k[:keyPrefixLen+keyFolderLen]
}
func (k defaultKeyer) GenerateGlobalVersionKey(key, folder, name []byte) globalVersionKey {
func (k defaultKeyer) GenerateGlobalVersionKey(key, folder, name []byte) (globalVersionKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen+len(name))
key[0] = KeyTypeGlobal
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
copy(key[keyPrefixLen+keyFolderLen:], name)
return key
return key, nil
}
func (k defaultKeyer) NameFromGlobalVersionKey(key []byte) []byte {
@@ -160,13 +172,17 @@ func (k defaultKeyer) FolderFromGlobalVersionKey(key []byte) ([]byte, bool) {
type blockMapKey []byte
func (k defaultKeyer) GenerateBlockMapKey(key, folder, hash, name []byte) blockMapKey {
func (k defaultKeyer) GenerateBlockMapKey(key, folder, hash, name []byte) (blockMapKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen+keyHashLen+len(name))
key[0] = KeyTypeBlock
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
copy(key[keyPrefixLen+keyFolderLen:], hash)
copy(key[keyPrefixLen+keyFolderLen+keyHashLen:], name)
return key
return key, nil
}
func (k defaultKeyer) NameFromBlockMapKey(key []byte) []byte {
@@ -183,12 +199,16 @@ func (k needFileKey) WithoutName() []byte {
return k[:keyPrefixLen+keyFolderLen]
}
func (k defaultKeyer) GenerateNeedFileKey(key, folder, name []byte) needFileKey {
func (k defaultKeyer) GenerateNeedFileKey(key, folder, name []byte) (needFileKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen+len(name))
key[0] = KeyTypeNeed
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
copy(key[keyPrefixLen+keyFolderLen:], name)
return key
return key, nil
}
type sequenceKey []byte
@@ -197,12 +217,16 @@ func (k sequenceKey) WithoutSequence() []byte {
return k[:keyPrefixLen+keyFolderLen]
}
func (k defaultKeyer) GenerateSequenceKey(key, folder []byte, seq int64) sequenceKey {
func (k defaultKeyer) GenerateSequenceKey(key, folder []byte, seq int64) (sequenceKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen+keySequenceLen)
key[0] = KeyTypeSequence
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
binary.BigEndian.PutUint64(key[keyPrefixLen+keyFolderLen:], uint64(seq))
return key
return key, nil
}
func (k defaultKeyer) SequenceFromSequenceKey(key []byte) int64 {
@@ -211,12 +235,20 @@ func (k defaultKeyer) SequenceFromSequenceKey(key []byte) int64 {
type indexIDKey []byte
func (k defaultKeyer) GenerateIndexIDKey(key, device, folder []byte) indexIDKey {
func (k defaultKeyer) GenerateIndexIDKey(key, device, folder []byte) (indexIDKey, error) {
deviceID, err := k.deviceIdx.ID(device)
if err != nil {
return nil, err
}
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyDeviceLen+keyFolderLen)
key[0] = KeyTypeIndexID
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.deviceIdx.ID(device))
binary.BigEndian.PutUint32(key[keyPrefixLen+keyDeviceLen:], k.folderIdx.ID(folder))
return key
binary.BigEndian.PutUint32(key[keyPrefixLen:], deviceID)
binary.BigEndian.PutUint32(key[keyPrefixLen+keyDeviceLen:], folderID)
return key, nil
}
func (k defaultKeyer) DeviceFromIndexIDKey(key []byte) ([]byte, bool) {
@@ -225,20 +257,28 @@ func (k defaultKeyer) DeviceFromIndexIDKey(key []byte) ([]byte, bool) {
type mtimesKey []byte
func (k defaultKeyer) GenerateMtimesKey(key, folder []byte) mtimesKey {
func (k defaultKeyer) GenerateMtimesKey(key, folder []byte) (mtimesKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen)
key[0] = KeyTypeVirtualMtime
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
return key
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
return key, nil
}
type folderMetaKey []byte
func (k defaultKeyer) GenerateFolderMetaKey(key, folder []byte) folderMetaKey {
func (k defaultKeyer) GenerateFolderMetaKey(key, folder []byte) (folderMetaKey, error) {
folderID, err := k.folderIdx.ID(folder)
if err != nil {
return nil, err
}
key = resize(key, keyPrefixLen+keyFolderLen)
key[0] = KeyTypeFolderMeta
binary.BigEndian.PutUint32(key[keyPrefixLen:], k.folderIdx.ID(folder))
return key
binary.BigEndian.PutUint32(key[keyPrefixLen:], folderID)
return key, nil
}
// resize returns a byte slice of the specified size, reusing bs if possible
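Given that doc comment, a plausible implementation (a sketch, not quoted from the patch):

func resize(bs []byte, size int) []byte {
	if cap(bs) < size {
		return make([]byte, size)
	}
	return bs[:size]
}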

View File

@@ -9,6 +9,8 @@ package db
import (
"bytes"
"testing"
"github.com/syncthing/syncthing/lib/db/backend"
)
func TestDeviceKey(t *testing.T) {
@@ -16,9 +18,12 @@ func TestDeviceKey(t *testing.T) {
dev := []byte("device67890123456789012345678901")
name := []byte("name")
db := newInstance(OpenMemory())
db := NewLowlevel(backend.OpenMemory())
key := db.keyer.GenerateDeviceFileKey(nil, fld, dev, name)
key, err := db.keyer.GenerateDeviceFileKey(nil, fld, dev, name)
if err != nil {
t.Fatal(err)
}
fld2, ok := db.keyer.FolderFromDeviceFileKey(key)
if !ok {
@@ -44,9 +49,12 @@ func TestGlobalKey(t *testing.T) {
fld := []byte("folder6789012345678901234567890123456789012345678901234567890123")
name := []byte("name")
db := newInstance(OpenMemory())
db := NewLowlevel(backend.OpenMemory())
key := db.keyer.GenerateGlobalVersionKey(nil, fld, name)
key, err := db.keyer.GenerateGlobalVersionKey(nil, fld, name)
if err != nil {
t.Fatal(err)
}
fld2, ok := db.keyer.FolderFromGlobalVersionKey(key)
if !ok {
@@ -69,10 +77,13 @@ func TestGlobalKey(t *testing.T) {
func TestSequenceKey(t *testing.T) {
fld := []byte("folder6789012345678901234567890123456789012345678901234567890123")
db := newInstance(OpenMemory())
db := NewLowlevel(backend.OpenMemory())
const seq = 1234567890
key := db.keyer.GenerateSequenceKey(nil, fld, seq)
key, err := db.keyer.GenerateSequenceKey(nil, fld, seq)
if err != nil {
t.Fatal(err)
}
outSeq := db.keyer.SequenceFromSequenceKey(key)
if outSeq != seq {
t.Errorf("sequence number mangled, %d != %d", outSeq, seq)

View File

File diff suppressed because it is too large.

View File

@@ -56,8 +56,11 @@ func (m *metadataTracker) Marshal() ([]byte, error) {
// toDB saves the marshalled metadataTracker to the given db, under the key
// corresponding to the given folder
func (m *metadataTracker) toDB(db *instance, folder []byte) error {
key := db.keyer.GenerateFolderMetaKey(nil, folder)
func (m *metadataTracker) toDB(db *Lowlevel, folder []byte) error {
key, err := db.keyer.GenerateFolderMetaKey(nil, folder)
if err != nil {
return err
}
m.mut.RLock()
defer m.mut.RUnlock()
@@ -70,7 +73,7 @@ func (m *metadataTracker) toDB(db *instance, folder []byte) error {
if err != nil {
return err
}
err = db.Put(key, bs, nil)
err = db.Put(key, bs)
if err == nil {
m.dirty = false
}
@@ -80,9 +83,12 @@ func (m *metadataTracker) toDB(db *instance, folder []byte) error {
// fromDB initializes the metadataTracker from the marshalled data found in
// the database under the key corresponding to the given folder
func (m *metadataTracker) fromDB(db *instance, folder []byte) error {
key := db.keyer.GenerateFolderMetaKey(nil, folder)
bs, err := db.Get(key, nil)
func (m *metadataTracker) fromDB(db *Lowlevel, folder []byte) error {
key, err := db.keyer.GenerateFolderMetaKey(nil, folder)
if err != nil {
return err
}
bs, err := db.Get(key)
if err != nil {
return err
}

View File

@@ -10,7 +10,7 @@ import (
"encoding/binary"
"time"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/syncthing/syncthing/lib/db/backend"
)
// NamespacedKV is a simple key-value store using a specific namespace within
@@ -34,112 +34,99 @@ func NewNamespacedKV(db *Lowlevel, prefix string) *NamespacedKV {
}
}
// Reset removes all entries in this namespace.
func (n *NamespacedKV) Reset() {
it := n.db.NewIterator(util.BytesPrefix(n.prefix), nil)
defer it.Release()
batch := n.db.newBatch()
for it.Next() {
batch.Delete(it.Key())
batch.checkFlush()
}
batch.flush()
}
// PutInt64 stores a new int64. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutInt64(key string, val int64) {
func (n *NamespacedKV) PutInt64(key string, val int64) error {
var valBs [8]byte
binary.BigEndian.PutUint64(valBs[:], uint64(val))
n.db.Put(n.prefixedKey(key), valBs[:], nil)
return n.db.Put(n.prefixedKey(key), valBs[:])
}
// Int64 returns the stored value interpreted as an int64 and a boolean that
// is false if no value was stored at the key.
func (n *NamespacedKV) Int64(key string) (int64, bool) {
valBs, err := n.db.Get(n.prefixedKey(key), nil)
func (n *NamespacedKV) Int64(key string) (int64, bool, error) {
valBs, err := n.db.Get(n.prefixedKey(key))
if err != nil {
return 0, false
return 0, false, filterNotFound(err)
}
val := binary.BigEndian.Uint64(valBs)
return int64(val), true
return int64(val), true, nil
}
// PutTime stores a new time.Time. Any existing value (even if of another
// type) is overwritten.
func (n *NamespacedKV) PutTime(key string, val time.Time) {
func (n *NamespacedKV) PutTime(key string, val time.Time) error {
valBs, _ := val.MarshalBinary() // never returns an error
n.db.Put(n.prefixedKey(key), valBs, nil)
return n.db.Put(n.prefixedKey(key), valBs)
}
// Time returns the stored value interpreted as a time.Time and a boolean
// that is false if no value was stored at the key.
func (n NamespacedKV) Time(key string) (time.Time, bool) {
func (n NamespacedKV) Time(key string) (time.Time, bool, error) {
var t time.Time
valBs, err := n.db.Get(n.prefixedKey(key), nil)
valBs, err := n.db.Get(n.prefixedKey(key))
if err != nil {
return t, false
return t, false, filterNotFound(err)
}
err = t.UnmarshalBinary(valBs)
return t, err == nil
return t, err == nil, err
}
// PutString stores a new string. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutString(key, val string) {
n.db.Put(n.prefixedKey(key), []byte(val), nil)
func (n *NamespacedKV) PutString(key, val string) error {
return n.db.Put(n.prefixedKey(key), []byte(val))
}
// String returns the stored value interpreted as a string and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) String(key string) (string, bool) {
valBs, err := n.db.Get(n.prefixedKey(key), nil)
func (n NamespacedKV) String(key string) (string, bool, error) {
valBs, err := n.db.Get(n.prefixedKey(key))
if err != nil {
return "", false
return "", false, filterNotFound(err)
}
return string(valBs), true
return string(valBs), true, nil
}
// PutBytes stores a new byte slice. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutBytes(key string, val []byte) {
n.db.Put(n.prefixedKey(key), val, nil)
func (n *NamespacedKV) PutBytes(key string, val []byte) error {
return n.db.Put(n.prefixedKey(key), val)
}
// Bytes returns the stored value as a raw byte slice and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) Bytes(key string) ([]byte, bool) {
valBs, err := n.db.Get(n.prefixedKey(key), nil)
func (n NamespacedKV) Bytes(key string) ([]byte, bool, error) {
valBs, err := n.db.Get(n.prefixedKey(key))
if err != nil {
return nil, false
return nil, false, filterNotFound(err)
}
return valBs, true
return valBs, true, nil
}
// PutBool stores a new boolean. Any existing value (even if of another type)
// is overwritten.
func (n *NamespacedKV) PutBool(key string, val bool) {
func (n *NamespacedKV) PutBool(key string, val bool) error {
if val {
n.db.Put(n.prefixedKey(key), []byte{0x0}, nil)
} else {
n.db.Put(n.prefixedKey(key), []byte{0x1}, nil)
return n.db.Put(n.prefixedKey(key), []byte{0x0})
}
return n.db.Put(n.prefixedKey(key), []byte{0x1})
}
// Bool returns the stored value as a boolean and a boolean that
// is false if no value was stored at the key.
func (n NamespacedKV) Bool(key string) (bool, bool) {
valBs, err := n.db.Get(n.prefixedKey(key), nil)
func (n NamespacedKV) Bool(key string) (bool, bool, error) {
valBs, err := n.db.Get(n.prefixedKey(key))
if err != nil {
return false, false
return false, false, filterNotFound(err)
}
return valBs[0] == 0x0, true
return valBs[0] == 0x0, true, nil
}
// Delete deletes the specified key. It is allowed to delete a nonexistent
// key.
func (n NamespacedKV) Delete(key string) {
n.db.Delete(n.prefixedKey(key), nil)
func (n NamespacedKV) Delete(key string) error {
return n.db.Delete(n.prefixedKey(key))
}
func (n NamespacedKV) prefixedKey(key string) []byte {
@@ -165,3 +152,10 @@ func NewFolderStatisticsNamespace(db *Lowlevel, folder string) *NamespacedKV {
func NewMiscDataNamespace(db *Lowlevel) *NamespacedKV {
return NewNamespacedKV(db, string(KeyTypeMiscData))
}
func filterNotFound(err error) error {
if backend.IsNotFound(err) {
return nil
}
return err
}
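Taken together, the three-value returns distinguish "absent" from "failed". A usage sketch, assuming an already opened *Lowlevel named ll and an fmt import:

func example(ll *Lowlevel) error {
	misc := NewMiscDataNamespace(ll)
	if err := misc.PutString("greeting", "hello"); err != nil {
		return err // the write failed; previously this was silently ignored
	}
	v, ok, err := misc.String("greeting")
	if err != nil {
		return err // a real database error
	}
	if ok {
		fmt.Println(v) // "hello"
	}
	// !ok means the key is simply absent; filterNotFound keeps that
	// case from surfacing as an error.
	return nil
}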

View File

@@ -9,104 +9,167 @@ package db
import (
"testing"
"time"
"github.com/syncthing/syncthing/lib/db/backend"
)
func TestNamespacedInt(t *testing.T) {
ldb := OpenMemory()
ldb := NewLowlevel(backend.OpenMemory())
n1 := NewNamespacedKV(ldb, "foo")
n2 := NewNamespacedKV(ldb, "bar")
// Key is missing to start with
if v, ok := n1.Int64("test"); v != 0 || ok {
if v, ok, err := n1.Int64("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != 0 || ok {
t.Errorf("Incorrect return v %v != 0 || ok %v != false", v, ok)
}
n1.PutInt64("test", 42)
if err := n1.PutInt64("test", 42); err != nil {
t.Fatal(err)
}
// It should now exist in n1
if v, ok := n1.Int64("test"); v != 42 || !ok {
if v, ok, err := n1.Int64("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != 42 || !ok {
t.Errorf("Incorrect return v %v != 42 || ok %v != true", v, ok)
}
// ... but not in n2, which is in a different namespace
if v, ok := n2.Int64("test"); v != 0 || ok {
if v, ok, err := n2.Int64("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != 0 || ok {
t.Errorf("Incorrect return v %v != 0 || ok %v != false", v, ok)
}
n1.Delete("test")
if err := n1.Delete("test"); err != nil {
t.Fatal(err)
}
// It should no longer exist
if v, ok := n1.Int64("test"); v != 0 || ok {
if v, ok, err := n1.Int64("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != 0 || ok {
t.Errorf("Incorrect return v %v != 0 || ok %v != false", v, ok)
}
}
func TestNamespacedTime(t *testing.T) {
ldb := OpenMemory()
ldb := NewLowlevel(backend.OpenMemory())
n1 := NewNamespacedKV(ldb, "foo")
if v, ok := n1.Time("test"); !v.IsZero() || ok {
if v, ok, err := n1.Time("test"); err != nil {
t.Error("Unexpected error:", err)
} else if !v.IsZero() || ok {
t.Errorf("Incorrect return v %v != %v || ok %v != false", v, time.Time{}, ok)
}
now := time.Now()
n1.PutTime("test", now)
if err := n1.PutTime("test", now); err != nil {
t.Fatal(err)
}
if v, ok := n1.Time("test"); !v.Equal(now) || !ok {
if v, ok, err := n1.Time("test"); err != nil {
t.Error("Unexpected error:", err)
} else if !v.Equal(now) || !ok {
t.Errorf("Incorrect return v %v != %v || ok %v != true", v, now, ok)
}
}
func TestNamespacedString(t *testing.T) {
ldb := OpenMemory()
ldb := NewLowlevel(backend.OpenMemory())
n1 := NewNamespacedKV(ldb, "foo")
if v, ok := n1.String("test"); v != "" || ok {
if v, ok, err := n1.String("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "" || ok {
t.Errorf("Incorrect return v %q != \"\" || ok %v != false", v, ok)
}
n1.PutString("test", "yo")
if err := n1.PutString("test", "yo"); err != nil {
t.Fatal(err)
}
if v, ok := n1.String("test"); v != "yo" || !ok {
if v, ok, err := n1.String("test"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "yo" || !ok {
t.Errorf("Incorrect return v %q != \"yo\" || ok %v != true", v, ok)
}
}
func TestNamespacedReset(t *testing.T) {
ldb := OpenMemory()
ldb := NewLowlevel(backend.OpenMemory())
n1 := NewNamespacedKV(ldb, "foo")
n1.PutString("test1", "yo1")
n1.PutString("test2", "yo2")
n1.PutString("test3", "yo3")
if err := n1.PutString("test1", "yo1"); err != nil {
t.Fatal(err)
}
if err := n1.PutString("test2", "yo2"); err != nil {
t.Fatal(err)
}
if err := n1.PutString("test3", "yo3"); err != nil {
t.Fatal(err)
}
if v, ok := n1.String("test1"); v != "yo1" || !ok {
if v, ok, err := n1.String("test1"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "yo1" || !ok {
t.Errorf("Incorrect return v %q != \"yo1\" || ok %v != true", v, ok)
}
if v, ok := n1.String("test2"); v != "yo2" || !ok {
if v, ok, err := n1.String("test2"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "yo2" || !ok {
t.Errorf("Incorrect return v %q != \"yo2\" || ok %v != true", v, ok)
}
if v, ok := n1.String("test3"); v != "yo3" || !ok {
if v, ok, err := n1.String("test3"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "yo3" || !ok {
t.Errorf("Incorrect return v %q != \"yo3\" || ok %v != true", v, ok)
}
n1.Reset()
reset(n1)
if v, ok := n1.String("test1"); v != "" || ok {
if v, ok, err := n1.String("test1"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "" || ok {
t.Errorf("Incorrect return v %q != \"\" || ok %v != false", v, ok)
}
if v, ok := n1.String("test2"); v != "" || ok {
if v, ok, err := n1.String("test2"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "" || ok {
t.Errorf("Incorrect return v %q != \"\" || ok %v != false", v, ok)
}
if v, ok := n1.String("test3"); v != "" || ok {
if v, ok, err := n1.String("test3"); err != nil {
t.Error("Unexpected error:", err)
} else if v != "" || ok {
t.Errorf("Incorrect return v %q != \"\" || ok %v != false", v, ok)
}
}
// reset removes all entries in this namespace.
func reset(n *NamespacedKV) {
tr, err := n.db.NewWriteTransaction()
if err != nil {
return
}
defer tr.Release()
it, err := tr.NewPrefixIterator(n.prefix)
if err != nil {
return
}
for it.Next() {
_ = tr.Delete(it.Key())
}
it.Release()
_ = tr.Commit()
}

View File

@@ -11,7 +11,6 @@ import (
"strings"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb/util"
)
// List of all dbVersion to dbMinSyncthingVersion pairs for convenience
@@ -39,22 +38,27 @@ func (e databaseDowngradeError) Error() string {
return fmt.Sprintf("Syncthing %s required", e.minSyncthingVersion)
}
func UpdateSchema(ll *Lowlevel) error {
updater := &schemaUpdater{newInstance(ll)}
func UpdateSchema(db *Lowlevel) error {
updater := &schemaUpdater{db}
return updater.updateSchema()
}
type schemaUpdater struct {
*instance
*Lowlevel
}
func (db *schemaUpdater) updateSchema() error {
miscDB := NewMiscDataNamespace(db.Lowlevel)
prevVersion, _ := miscDB.Int64("dbVersion")
prevVersion, _, err := miscDB.Int64("dbVersion")
if err != nil {
return err
}
if prevVersion > dbVersion {
err := databaseDowngradeError{}
if minSyncthingVersion, ok := miscDB.String("dbMinSyncthingVersion"); ok {
if minSyncthingVersion, ok, dbErr := miscDB.String("dbMinSyncthingVersion"); dbErr != nil {
return dbErr
} else if ok {
err.minSyncthingVersion = minSyncthingVersion
}
return err
@@ -65,36 +69,58 @@ func (db *schemaUpdater) updateSchema() error {
}
if prevVersion < 1 {
db.updateSchema0to1()
if err := db.updateSchema0to1(); err != nil {
return err
}
}
if prevVersion < 2 {
db.updateSchema1to2()
if err := db.updateSchema1to2(); err != nil {
return err
}
}
if prevVersion < 3 {
db.updateSchema2to3()
if err := db.updateSchema2to3(); err != nil {
return err
}
}
// This update fixes problems existing in versions 3 and 4
if prevVersion == 3 || prevVersion == 4 {
db.updateSchemaTo5()
if err := db.updateSchemaTo5(); err != nil {
return err
}
}
if prevVersion < 6 {
db.updateSchema5to6()
if err := db.updateSchema5to6(); err != nil {
return err
}
}
if prevVersion < 7 {
db.updateSchema6to7()
if err := db.updateSchema6to7(); err != nil {
return err
}
}
miscDB.PutInt64("dbVersion", dbVersion)
miscDB.PutString("dbMinSyncthingVersion", dbMinSyncthingVersion)
if err := miscDB.PutInt64("dbVersion", dbVersion); err != nil {
return err
}
if err := miscDB.PutString("dbMinSyncthingVersion", dbMinSyncthingVersion); err != nil {
return err
}
return nil
}
func (db *schemaUpdater) updateSchema0to1() {
t := db.newReadWriteTransaction()
func (db *schemaUpdater) updateSchema0to1() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
dbi := t.NewIterator(util.BytesPrefix([]byte{KeyTypeDevice}), nil)
dbi, err := t.NewPrefixIterator([]byte{KeyTypeDevice})
if err != nil {
return err
}
defer dbi.Release()
symlinkConv := 0
@@ -104,18 +130,20 @@ func (db *schemaUpdater) updateSchema0to1() {
var gk, buf []byte
for dbi.Next() {
t.checkFlush()
folder, ok := db.keyer.FolderFromDeviceFileKey(dbi.Key())
if !ok {
// not having the folder in the index is bad; delete and continue
t.Delete(dbi.Key())
if err := t.Delete(dbi.Key()); err != nil {
return err
}
continue
}
device, ok := db.keyer.DeviceFromDeviceFileKey(dbi.Key())
if !ok {
// not having the device in the index is bad; delete and continue
t.Delete(dbi.Key())
if err := t.Delete(dbi.Key()); err != nil {
return err
}
continue
}
name := db.keyer.NameFromDeviceFileKey(dbi.Key())
@@ -125,9 +153,17 @@ func (db *schemaUpdater) updateSchema0to1() {
if _, ok := changedFolders[string(folder)]; !ok {
changedFolders[string(folder)] = struct{}{}
}
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
buf = t.removeFromGlobal(gk, buf, folder, device, nil, nil)
t.Delete(dbi.Key())
gk, err = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if err != nil {
return err
}
buf, err = t.removeFromGlobal(gk, buf, folder, device, nil, nil)
if err != nil {
return err
}
if err := t.Delete(dbi.Key()); err != nil {
return err
}
continue
}
@@ -147,14 +183,21 @@ func (db *schemaUpdater) updateSchema0to1() {
if err != nil {
panic("can't happen: " + err.Error())
}
t.Put(dbi.Key(), bs)
if err := t.Put(dbi.Key(), bs); err != nil {
return err
}
symlinkConv++
}
// Add invalid files to global list
if f.IsInvalid() {
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if buf, ok = t.updateGlobal(gk, buf, folder, device, f, meta); ok {
gk, err = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if err != nil {
return err
}
if buf, ok, err = t.updateGlobal(gk, buf, folder, device, f, meta); err != nil {
return err
} else if ok {
if _, ok = changedFolders[string(folder)]; !ok {
changedFolders[string(folder)] = struct{}{}
}
@@ -164,86 +207,139 @@ func (db *schemaUpdater) updateSchema0to1() {
}
for folder := range changedFolders {
db.dropFolderMeta([]byte(folder))
if err := db.dropFolderMeta([]byte(folder)); err != nil {
return err
}
}
return t.commit()
}
// updateSchema1to2 introduces a sequenceKey->deviceKey bucket for local items
// to allow iteration in sequence order (simplifies sending indexes).
func (db *schemaUpdater) updateSchema1to2() {
t := db.newReadWriteTransaction()
func (db *schemaUpdater) updateSchema1to2() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var sk []byte
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
db.withHave(folder, protocol.LocalDeviceID[:], nil, true, func(f FileIntf) bool {
sk = db.keyer.GenerateSequenceKey(sk, folder, f.SequenceNo())
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(f.FileName()))
t.Put(sk, dk)
t.checkFlush()
return true
var putErr error
err := db.withHave(folder, protocol.LocalDeviceID[:], nil, true, func(f FileIntf) bool {
sk, putErr = db.keyer.GenerateSequenceKey(sk, folder, f.SequenceNo())
if putErr != nil {
return false
}
dk, putErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(f.FileName()))
if putErr != nil {
return false
}
putErr = t.Put(sk, dk)
return putErr == nil
})
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.commit()
}
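The bucket written above is read back by walking sequence keys in order and resolving each stored device-file key. A sketch of that consumer (the helper name walkSequence is assumed; the calls match the transaction API used elsewhere in this patch):

func walkSequence(db *schemaUpdater, t readWriteTransaction, folder []byte, fn Iterator) error {
	sk, err := db.keyer.GenerateSequenceKey(nil, folder, 0)
	if err != nil {
		return err
	}
	dbi, err := t.NewPrefixIterator(sk.WithoutSequence())
	if err != nil {
		return err
	}
	defer dbi.Release()
	for dbi.Next() {
		// The value stored under a sequence key is the device file key.
		f, ok, err := t.getFileTrunc(dbi.Value(), true)
		if err != nil {
			return err
		}
		if ok && !fn(f) {
			break
		}
	}
	return dbi.Error()
}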
// updateSchema2to3 introduces a needKey->nil bucket for locally needed files.
func (db *schemaUpdater) updateSchema2to3() {
t := db.newReadWriteTransaction()
func (db *schemaUpdater) updateSchema2to3() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var nk []byte
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
db.withGlobal(folder, nil, true, func(f FileIntf) bool {
var putErr error
err := db.withGlobal(folder, nil, true, func(f FileIntf) bool {
name := []byte(f.FileName())
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], name)
dk, putErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], name)
if putErr != nil {
return false
}
var v protocol.Vector
haveFile, ok := t.getFileTrunc(dk, true)
haveFile, ok, err := t.getFileTrunc(dk, true)
if err != nil {
putErr = err
return false
}
if ok {
v = haveFile.FileVersion()
}
if !need(f, ok, v) {
return true
}
nk = t.keyer.GenerateNeedFileKey(nk, folder, []byte(f.FileName()))
t.Put(nk, nil)
t.checkFlush()
return true
nk, putErr = t.keyer.GenerateNeedFileKey(nk, folder, []byte(f.FileName()))
if putErr != nil {
return false
}
putErr = t.Put(nk, nil)
return putErr == nil
})
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.commit()
}
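The putErr dance above is how an error escapes a bool-returning iterator callback: capture it in an outer variable, return false to stop iterating, and check the captured error first. A generic, self-contained sketch of the pattern (all names illustrative):

package main

import (
	"errors"
	"fmt"
)

func forEach(items []int, fn func(int) bool) error {
	for _, it := range items {
		if !fn(it) {
			return nil // callback asked to stop; not an iteration error
		}
	}
	return nil
}

func process(items []int) error {
	var cbErr error
	err := forEach(items, func(v int) bool {
		if v < 0 {
			cbErr = errors.New("negative value")
			return false // abort iteration
		}
		return true
	})
	if cbErr != nil {
		return cbErr // the callback's error takes precedence
	}
	return err
}

func main() {
	fmt.Println(process([]int{1, 2, -3}))
}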
// updateSchemaTo5 resets the need bucket due to bugs existing in the v0.14.49
// release candidates (dbVersion 3 and 4)
// https://github.com/syncthing/syncthing/issues/5007
// https://github.com/syncthing/syncthing/issues/5053
func (db *schemaUpdater) updateSchemaTo5() {
t := db.newReadWriteTransaction()
func (db *schemaUpdater) updateSchemaTo5() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
var nk []byte
for _, folderStr := range db.ListFolders() {
nk = db.keyer.GenerateNeedFileKey(nk, []byte(folderStr), nil)
t.deleteKeyPrefix(nk[:keyPrefixLen+keyFolderLen])
nk, err = db.keyer.GenerateNeedFileKey(nk, []byte(folderStr), nil)
if err != nil {
return err
}
if err := t.deleteKeyPrefix(nk[:keyPrefixLen+keyFolderLen]); err != nil {
return err
}
}
if err := t.commit(); err != nil {
return err
}
t.close()
db.updateSchema2to3()
return db.updateSchema2to3()
}
func (db *schemaUpdater) updateSchema5to6() {
func (db *schemaUpdater) updateSchema5to6() error {
// For every local file with the Invalid bit set, clear the Invalid bit and
// set LocalFlags = FlagLocalIgnored.
t := db.newReadWriteTransaction()
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var dk []byte
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
db.withHave(folder, protocol.LocalDeviceID[:], nil, false, func(f FileIntf) bool {
var putErr error
err := db.withHave(folder, protocol.LocalDeviceID[:], nil, false, func(f FileIntf) bool {
if !f.IsInvalid() {
return true
}
@@ -253,19 +349,31 @@ func (db *schemaUpdater) updateSchema5to6() {
fi.LocalFlags = protocol.FlagLocalIgnored
bs, _ := fi.Marshal()
dk = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(fi.Name))
t.Put(dk, bs)
dk, putErr = db.keyer.GenerateDeviceFileKey(dk, folder, protocol.LocalDeviceID[:], []byte(fi.Name))
if putErr != nil {
return false
}
putErr = t.Put(dk, bs)
t.checkFlush()
return true
return putErr == nil
})
if putErr != nil {
return putErr
}
if err != nil {
return err
}
}
return t.commit()
}
// updateSchema6to7 checks whether all currently locally needed files are really
// needed and removes them if not.
func (db *schemaUpdater) updateSchema6to7() {
t := db.newReadWriteTransaction()
func (db *schemaUpdater) updateSchema6to7() error {
t, err := db.newReadWriteTransaction()
if err != nil {
return err
}
defer t.close()
var gk []byte
@@ -273,15 +381,24 @@ func (db *schemaUpdater) updateSchema6to7() {
for _, folderStr := range db.ListFolders() {
folder := []byte(folderStr)
db.withNeedLocal(folder, false, func(f FileIntf) bool {
var delErr error
err := db.withNeedLocal(folder, false, func(f FileIntf) bool {
name := []byte(f.FileName())
global := f.(protocol.FileInfo)
gk = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
svl, err := t.Get(gk, nil)
gk, delErr = db.keyer.GenerateGlobalVersionKey(gk, folder, name)
if delErr != nil {
return false
}
svl, err := t.Get(gk)
if err != nil {
// If there is no global list, we hardly need it.
t.Delete(t.keyer.GenerateNeedFileKey(nk, folder, name))
return true
key, err := t.keyer.GenerateNeedFileKey(nk, folder, name)
if err != nil {
delErr = err
return false
}
delErr = t.Delete(key)
return delErr == nil
}
var fl VersionList
err = fl.Unmarshal(svl)
@@ -291,9 +408,18 @@ func (db *schemaUpdater) updateSchema6to7() {
return true
}
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); !need(global, haveLocalFV, localFV.Version) {
t.Delete(t.keyer.GenerateNeedFileKey(nk, folder, name))
key, err := t.keyer.GenerateNeedFileKey(nk, folder, name)
if err != nil {
delErr = err
return false
}
delErr = t.Delete(key)
}
return true
return delErr == nil
})
if err != nil {
return err
}
}
return t.commit()
}

View File

@@ -16,17 +16,17 @@ import (
"os"
"time"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb/util"
)
type FileSet struct {
folder string
fs fs.Filesystem
db *instance
db *Lowlevel
meta *metadataTracker
updateMutex sync.Mutex // protects database updates and the corresponding metadata changes
@@ -70,9 +70,7 @@ func init() {
}
}
func NewFileSet(folder string, fs fs.Filesystem, ll *Lowlevel) *FileSet {
db := newInstance(ll)
func NewFileSet(folder string, fs fs.Filesystem, db *Lowlevel) *FileSet {
var s = FileSet{
folder: folder,
fs: fs,
@@ -83,29 +81,42 @@ func NewFileSet(folder string, fs fs.Filesystem, ll *Lowlevel) *FileSet {
if err := s.meta.fromDB(db, []byte(folder)); err != nil {
l.Infof("No stored folder metadata for %q: recalculating", folder)
s.recalcCounts()
if err := s.recalcCounts(); backend.IsClosed(err) {
return nil
} else if err != nil {
panic(err)
}
} else if age := time.Since(s.meta.Created()); age > databaseRecheckInterval {
l.Infof("Stored folder metadata for %q is %v old; recalculating", folder, age)
s.recalcCounts()
if err := s.recalcCounts(); backend.IsClosed(err) {
return nil
} else if err != nil {
panic(err)
}
}
return &s
}
func (s *FileSet) recalcCounts() {
func (s *FileSet) recalcCounts() error {
s.meta = newMetadataTracker()
s.db.checkGlobals([]byte(s.folder), s.meta)
if err := s.db.checkGlobals([]byte(s.folder), s.meta); err != nil {
return err
}
var deviceID protocol.DeviceID
s.db.withAllFolderTruncated([]byte(s.folder), func(device []byte, f FileInfoTruncated) bool {
err := s.db.withAllFolderTruncated([]byte(s.folder), func(device []byte, f FileInfoTruncated) bool {
copy(deviceID[:], device)
s.meta.addFile(deviceID, f)
return true
})
if err != nil {
return err
}
s.meta.SetCreated()
s.meta.toDB(s.db, []byte(s.folder))
return s.meta.toDB(s.db, []byte(s.folder))
}
func (s *FileSet) Drop(device protocol.DeviceID) {
@@ -114,7 +125,11 @@ func (s *FileSet) Drop(device protocol.DeviceID) {
s.updateMutex.Lock()
defer s.updateMutex.Unlock()
s.db.dropDeviceFolder(device[:], []byte(s.folder), s.meta)
if err := s.db.dropDeviceFolder(device[:], []byte(s.folder), s.meta); backend.IsClosed(err) {
return
} else if err != nil {
panic(err)
}
if device == protocol.LocalDeviceID {
s.meta.resetCounts(device)
@@ -131,7 +146,11 @@ func (s *FileSet) Drop(device protocol.DeviceID) {
s.meta.resetAll(device)
}
s.meta.toDB(s.db, []byte(s.folder))
if err := s.meta.toDB(s.db, []byte(s.folder)); backend.IsClosed(err) {
return
} else if err != nil {
panic(err)
}
}
func (s *FileSet) Update(device protocol.DeviceID, fs []protocol.FileInfo) {
@@ -145,73 +164,110 @@ func (s *FileSet) Update(device protocol.DeviceID, fs []protocol.FileInfo) {
s.updateMutex.Lock()
defer s.updateMutex.Unlock()
defer s.meta.toDB(s.db, []byte(s.folder))
defer func() {
if err := s.meta.toDB(s.db, []byte(s.folder)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}()
if device == protocol.LocalDeviceID {
// For the local device we have a bunch of metadata to track.
s.db.updateLocalFiles([]byte(s.folder), fs, s.meta)
if err := s.db.updateLocalFiles([]byte(s.folder), fs, s.meta); err != nil && !backend.IsClosed(err) {
panic(err)
}
return
}
// Easy case, just update the files and we're done.
s.db.updateRemoteFiles([]byte(s.folder), device[:], fs, s.meta)
if err := s.db.updateRemoteFiles([]byte(s.folder), device[:], fs, s.meta); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithNeed(device protocol.DeviceID, fn Iterator) {
l.Debugf("%s WithNeed(%v)", s.folder, device)
s.db.withNeed([]byte(s.folder), device[:], false, nativeFileIterator(fn))
if err := s.db.withNeed([]byte(s.folder), device[:], false, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithNeedTruncated(device protocol.DeviceID, fn Iterator) {
l.Debugf("%s WithNeedTruncated(%v)", s.folder, device)
s.db.withNeed([]byte(s.folder), device[:], true, nativeFileIterator(fn))
if err := s.db.withNeed([]byte(s.folder), device[:], true, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithHave(device protocol.DeviceID, fn Iterator) {
l.Debugf("%s WithHave(%v)", s.folder, device)
s.db.withHave([]byte(s.folder), device[:], nil, false, nativeFileIterator(fn))
if err := s.db.withHave([]byte(s.folder), device[:], nil, false, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithHaveTruncated(device protocol.DeviceID, fn Iterator) {
l.Debugf("%s WithHaveTruncated(%v)", s.folder, device)
s.db.withHave([]byte(s.folder), device[:], nil, true, nativeFileIterator(fn))
if err := s.db.withHave([]byte(s.folder), device[:], nil, true, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithHaveSequence(startSeq int64, fn Iterator) {
l.Debugf("%s WithHaveSequence(%v)", s.folder, startSeq)
s.db.withHaveSequence([]byte(s.folder), startSeq, nativeFileIterator(fn))
if err := s.db.withHaveSequence([]byte(s.folder), startSeq, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
// Except for an item with a path equal to prefix, only children of prefix are iterated.
// E.g. for prefix "dir", "dir/file" is iterated, but "dir.file" is not.
func (s *FileSet) WithPrefixedHaveTruncated(device protocol.DeviceID, prefix string, fn Iterator) {
l.Debugf(`%s WithPrefixedHaveTruncated(%v, "%v")`, s.folder, device, prefix)
s.db.withHave([]byte(s.folder), device[:], []byte(osutil.NormalizedFilename(prefix)), true, nativeFileIterator(fn))
if err := s.db.withHave([]byte(s.folder), device[:], []byte(osutil.NormalizedFilename(prefix)), true, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithGlobal(fn Iterator) {
l.Debugf("%s WithGlobal()", s.folder)
s.db.withGlobal([]byte(s.folder), nil, false, nativeFileIterator(fn))
if err := s.db.withGlobal([]byte(s.folder), nil, false, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) WithGlobalTruncated(fn Iterator) {
l.Debugf("%s WithGlobalTruncated()", s.folder)
s.db.withGlobal([]byte(s.folder), nil, true, nativeFileIterator(fn))
if err := s.db.withGlobal([]byte(s.folder), nil, true, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
// Except for an item with a path equal to prefix, only children of prefix are iterated.
// E.g. for prefix "dir", "dir/file" is iterated, but "dir.file" is not.
func (s *FileSet) WithPrefixedGlobalTruncated(prefix string, fn Iterator) {
l.Debugf(`%s WithPrefixedGlobalTruncated("%v")`, s.folder, prefix)
s.db.withGlobal([]byte(s.folder), []byte(osutil.NormalizedFilename(prefix)), true, nativeFileIterator(fn))
if err := s.db.withGlobal([]byte(s.folder), []byte(osutil.NormalizedFilename(prefix)), true, nativeFileIterator(fn)); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) Get(device protocol.DeviceID, file string) (protocol.FileInfo, bool) {
f, ok := s.db.getFileDirty([]byte(s.folder), device[:], []byte(osutil.NormalizedFilename(file)))
f, ok, err := s.db.getFileDirty([]byte(s.folder), device[:], []byte(osutil.NormalizedFilename(file)))
if backend.IsClosed(err) {
return protocol.FileInfo{}, false
} else if err != nil {
panic(err)
}
f.Name = osutil.NativeFilename(f.Name)
return f, ok
}
func (s *FileSet) GetGlobal(file string) (protocol.FileInfo, bool) {
fi, ok := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), false)
fi, ok, err := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), false)
if backend.IsClosed(err) {
return protocol.FileInfo{}, false
} else if err != nil {
panic(err)
}
if !ok {
return protocol.FileInfo{}, false
}
@@ -221,7 +277,12 @@ func (s *FileSet) GetGlobal(file string) (protocol.FileInfo, bool) {
}
func (s *FileSet) GetGlobalTruncated(file string) (FileInfoTruncated, bool) {
fi, ok := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), true)
fi, ok, err := s.db.getGlobalDirty([]byte(s.folder), []byte(osutil.NormalizedFilename(file)), true)
if backend.IsClosed(err) {
return FileInfoTruncated{}, false
} else if err != nil {
panic(err)
}
if !ok {
return FileInfoTruncated{}, false
}
@@ -231,7 +292,13 @@ func (s *FileSet) GetGlobalTruncated(file string) (FileInfoTruncated, bool) {
}
func (s *FileSet) Availability(file string) []protocol.DeviceID {
return s.db.availability([]byte(s.folder), []byte(osutil.NormalizedFilename(file)))
av, err := s.db.availability([]byte(s.folder), []byte(osutil.NormalizedFilename(file)))
if backend.IsClosed(err) {
return nil
} else if err != nil {
panic(err)
}
return av
}
func (s *FileSet) Sequence(device protocol.DeviceID) int64 {
@@ -255,11 +322,21 @@ func (s *FileSet) GlobalSize() Counts {
}
func (s *FileSet) IndexID(device protocol.DeviceID) protocol.IndexID {
id := s.db.getIndexID(device[:], []byte(s.folder))
id, err := s.db.getIndexID(device[:], []byte(s.folder))
if backend.IsClosed(err) {
return 0
} else if err != nil {
panic(err)
}
if id == 0 && device == protocol.LocalDeviceID {
// No index ID set yet. We create one now.
id = protocol.NewIndexID()
s.db.setIndexID(device[:], []byte(s.folder), id)
err := s.db.setIndexID(device[:], []byte(s.folder), id)
if backend.IsClosed(err) {
return 0
} else if err != nil {
panic(err)
}
}
return id
}
@@ -268,12 +345,19 @@ func (s *FileSet) SetIndexID(device protocol.DeviceID, id protocol.IndexID) {
if device == protocol.LocalDeviceID {
panic("do not explicitly set index ID for local device")
}
s.db.setIndexID(device[:], []byte(s.folder), id)
if err := s.db.setIndexID(device[:], []byte(s.folder), id); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
func (s *FileSet) MtimeFS() *fs.MtimeFS {
prefix := s.db.keyer.GenerateMtimesKey(nil, []byte(s.folder))
kv := NewNamespacedKV(s.db.Lowlevel, string(prefix))
prefix, err := s.db.keyer.GenerateMtimesKey(nil, []byte(s.folder))
if backend.IsClosed(err) {
return nil
} else if err != nil {
panic(err)
}
kv := NewNamespacedKV(s.db, string(prefix))
return fs.NewMtimeFS(s.fs, kv)
}
@@ -283,23 +367,39 @@ func (s *FileSet) ListDevices() []protocol.DeviceID {
// DropFolder clears out all information related to the given folder from the
// database.
func DropFolder(ll *Lowlevel, folder string) {
db := newInstance(ll)
db.dropFolder([]byte(folder))
db.dropMtimes([]byte(folder))
db.dropFolderMeta([]byte(folder))
// Also clean out the folder ID mapping.
db.folderIdx.Delete([]byte(folder))
func DropFolder(db *Lowlevel, folder string) {
droppers := []func([]byte) error{
db.dropFolder,
db.dropMtimes,
db.dropFolderMeta,
db.folderIdx.Delete,
}
for _, drop := range droppers {
if err := drop([]byte(folder)); backend.IsClosed(err) {
return
} else if err != nil {
panic(err)
}
}
}
// DropDeltaIndexIDs removes all delta index IDs from the database.
// This will cause a full index transmission on the next connection.
func DropDeltaIndexIDs(db *Lowlevel) {
dbi := db.NewIterator(util.BytesPrefix([]byte{KeyTypeIndexID}), nil)
dbi, err := db.NewPrefixIterator([]byte{KeyTypeIndexID})
if backend.IsClosed(err) {
return
} else if err != nil {
panic(err)
}
defer dbi.Release()
for dbi.Next() {
db.Delete(dbi.Key(), nil)
if err := db.Delete(dbi.Key()); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
if err := dbi.Error(); err != nil && !backend.IsClosed(err) {
panic(err)
}
}
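The same three-way handling (ignore "closed", panic on anything else) repeats across every FileSet method above; the patch inlines it each time. A hypothetical helper expressing the policy in one place:

// mustSucceed panics on real database errors but treats "database is
// closed" as a benign signal that we are shutting down. (Illustrative
// only; the change under review inlines this check at each call site.)
func mustSucceed(err error) {
	if err != nil && !backend.IsClosed(err) {
		panic(err)
	}
}

With it, a method like WithGlobal would collapse to mustSucceed(s.db.withGlobal(...)).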

View File

@@ -17,6 +17,7 @@ import (
"github.com/d4l3k/messagediff"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
)
@@ -117,7 +118,7 @@ func (l fileList) String() string {
}
func TestGlobalSet(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
m := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -332,7 +333,7 @@ func TestGlobalSet(t *testing.T) {
}
func TestNeedWithInvalid(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -369,7 +370,7 @@ func TestNeedWithInvalid(t *testing.T) {
}
func TestUpdateToInvalid(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
s := db.NewFileSet(folder, fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -425,7 +426,7 @@ func TestUpdateToInvalid(t *testing.T) {
}
func TestInvalidAvailability(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -463,7 +464,7 @@ func TestInvalidAvailability(t *testing.T) {
}
func TestGlobalReset(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
m := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -501,7 +502,7 @@ func TestGlobalReset(t *testing.T) {
}
func TestNeed(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
m := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -539,7 +540,7 @@ func TestNeed(t *testing.T) {
}
func TestSequence(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
m := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -569,7 +570,7 @@ func TestSequence(t *testing.T) {
}
func TestListDropFolder(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s0 := db.NewFileSet("test0", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
local1 := []protocol.FileInfo{
@@ -619,7 +620,7 @@ func TestListDropFolder(t *testing.T) {
}
func TestGlobalNeedWithInvalid(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test1", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -660,7 +661,7 @@ func TestGlobalNeedWithInvalid(t *testing.T) {
}
func TestLongPath(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -671,7 +672,7 @@ func TestLongPath(t *testing.T) {
name := b.String() // 5000 characters
local := []protocol.FileInfo{
{Name: string(name), Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: name, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
replace(s, protocol.LocalDeviceID, local)
@@ -686,39 +687,6 @@ func TestLongPath(t *testing.T) {
}
}
func TestCommitted(t *testing.T) {
// Verify that the Committed counter increases when we change things and
// doesn't increase when we don't.
ldb := db.OpenMemory()
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
local := []protocol.FileInfo{
{Name: string("file"), Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
// Adding a file should increase the counter
c0 := ldb.Committed()
replace(s, protocol.LocalDeviceID, local)
c1 := ldb.Committed()
if c1 <= c0 {
t.Errorf("committed data didn't increase; %d <= %d", c1, c0)
}
// Updating with something identical should not do anything
s.Update(protocol.LocalDeviceID, local)
c2 := ldb.Committed()
if c2 > c1 {
t.Errorf("replace with same contents should do nothing but %d > %d", c2, c1)
}
}
func BenchmarkUpdateOneFile(b *testing.B) {
local0 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
@@ -729,10 +697,11 @@ func BenchmarkUpdateOneFile(b *testing.B) {
protocol.FileInfo{Name: "zajksdhaskjdh/askjdhaskjdashkajshd/kasjdhaskjdhaskdjhaskdjash/dkjashdaksjdhaskdjahskdjh", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(8)},
}
ldb, err := db.Open("testdata/benchmarkupdate.db", db.TuningAuto)
be, err := backend.Open("testdata/benchmarkupdate.db", backend.TuningAuto)
if err != nil {
b.Fatal(err)
}
ldb := db.NewLowlevel(be)
defer func() {
ldb.Close()
os.RemoveAll("testdata/benchmarkupdate.db")
@@ -751,7 +720,7 @@ func BenchmarkUpdateOneFile(b *testing.B) {
}
func TestIndexID(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -783,7 +752,7 @@ func TestIndexID(t *testing.T) {
}
func TestDropFiles(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
m := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -846,7 +815,7 @@ func TestDropFiles(t *testing.T) {
}
func TestIssue4701(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -887,7 +856,7 @@ func TestIssue4701(t *testing.T) {
}
func TestWithHaveSequence(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
s := db.NewFileSet(folder, fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -915,14 +884,14 @@ func TestWithHaveSequence(t *testing.T) {
func TestStressWithHaveSequence(t *testing.T) {
// This races two loops against each other: one that continuously does
// updates, and one that continously does sequence walks. The test fails
// updates, and one that continuously does sequence walks. The test fails
// if the sequence walker sees a discontinuity.
if testing.Short() {
t.Skip("Takes a long time")
}
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
s := db.NewFileSet(folder, fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -945,7 +914,7 @@ func TestStressWithHaveSequence(t *testing.T) {
close(done)
}()
var prevSeq int64 = 0
var prevSeq int64
loop:
for {
select {
@@ -964,7 +933,7 @@ loop:
}
func TestIssue4925(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
s := db.NewFileSet(folder, fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -990,7 +959,7 @@ func TestIssue4925(t *testing.T) {
}
func TestMoveGlobalBack(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
file := "foo"
@@ -1054,7 +1023,7 @@ func TestMoveGlobalBack(t *testing.T) {
// needed files.
// https://github.com/syncthing/syncthing/issues/5007
func TestIssue5007(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
file := "foo"
@@ -1081,7 +1050,7 @@ func TestIssue5007(t *testing.T) {
// TestNeedDeleted checks that a file that doesn't exist locally isn't needed
// when the global file is deleted.
func TestNeedDeleted(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
file := "foo"
@@ -1115,7 +1084,7 @@ func TestNeedDeleted(t *testing.T) {
}
func TestReceiveOnlyAccounting(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
s := db.NewFileSet(folder, fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -1219,7 +1188,7 @@ func TestReceiveOnlyAccounting(t *testing.T) {
}
func TestNeedAfterUnignore(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
folder := "test"
file := "foo"
@@ -1251,7 +1220,7 @@ func TestNeedAfterUnignore(t *testing.T) {
func TestRemoteInvalidNotAccounted(t *testing.T) {
// Remote files with the invalid bit should not count.
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
files := []protocol.FileInfo{
@@ -1270,7 +1239,7 @@ func TestRemoteInvalidNotAccounted(t *testing.T) {
}
func TestNeedWithNewerInvalid(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("default", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -1308,7 +1277,7 @@ func TestNeedWithNewerInvalid(t *testing.T) {
}
func TestNeedAfterDeviceRemove(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
file := "foo"
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
@@ -1335,7 +1304,7 @@ func TestNeedAfterDeviceRemove(t *testing.T) {
func TestCaseSensitive(t *testing.T) {
// Normal case sensitive lookup should work
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
local := []protocol.FileInfo{
@@ -1372,7 +1341,7 @@ func TestSequenceIndex(t *testing.T) {
// Set up a db and a few files that we will manipulate.
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
local := []protocol.FileInfo{
@@ -1463,7 +1432,7 @@ func TestSequenceIndex(t *testing.T) {
}
func TestIgnoreAfterReceiveOnly(t *testing.T) {
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
file := "foo"
s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), ldb)
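
The change repeated through all of these tests is mechanical: the old one-shot constructors (db.Open, db.OpenMemory) are replaced by a two-step construction that opens a backend and wraps it in a Lowlevel. A usage sketch, assuming the packages as imported by these tests (the database path is hypothetical):

	// On disk, replacing db.Open(path, db.TuningAuto):
	be, err := backend.Open("testdata/example.db", backend.TuningAuto)
	if err != nil {
		t.Fatal(err)
	}
	ldb := db.NewLowlevel(be)
	defer ldb.Close()

	// In memory, replacing db.OpenMemory():
	mdb := db.NewLowlevel(backend.OpenMemory())
	s := db.NewFileSet("test", fs.NewFilesystem(fs.FilesystemTypeBasic, "."), mdb)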


@@ -10,16 +10,15 @@ import (
"encoding/binary"
"sort"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
)
// A smallIndex is an in memory bidirectional []byte to uint32 map. It gives
// fast lookups in both directions and persists to the database. Don't use for
// storing more items than fit comfortably in RAM.
type smallIndex struct {
db *leveldb.DB
db backend.Backend
prefix []byte
id2val map[uint32]string
val2id map[string]uint32
@@ -27,7 +26,7 @@ type smallIndex struct {
mut sync.Mutex
}
func newSmallIndex(db *leveldb.DB, prefix []byte) *smallIndex {
func newSmallIndex(db backend.Backend, prefix []byte) *smallIndex {
idx := &smallIndex{
db: db,
prefix: prefix,
@@ -42,7 +41,10 @@ func newSmallIndex(db *leveldb.DB, prefix []byte) *smallIndex {
// load iterates over the prefix space in the database and populates the in
// memory maps.
func (i *smallIndex) load() {
it := i.db.NewIterator(util.BytesPrefix(i.prefix), nil)
it, err := i.db.NewPrefixIterator(i.prefix)
if err != nil {
panic("loading small index: " + err.Error())
}
defer it.Release()
for it.Next() {
val := string(it.Value())
@@ -60,7 +62,7 @@ func (i *smallIndex) load() {
// ID returns the index number for the given byte slice, allocating a new one
// and persisting this to the database if necessary.
func (i *smallIndex) ID(val []byte) uint32 {
func (i *smallIndex) ID(val []byte) (uint32, error) {
i.mut.Lock()
// intentionally avoiding defer here as we want this call to be as fast as
// possible in the general case (folder ID already exists). The map lookup
@@ -69,7 +71,7 @@ func (i *smallIndex) ID(val []byte) uint32 {
// here.
if id, ok := i.val2id[string(val)]; ok {
i.mut.Unlock()
return id
return id, nil
}
id := i.nextID
@@ -82,10 +84,13 @@ func (i *smallIndex) ID(val []byte) uint32 {
key := make([]byte, len(i.prefix)+8) // prefix plus uint32 id
copy(key, i.prefix)
binary.BigEndian.PutUint32(key[len(i.prefix):], id)
i.db.Put(key, val, nil)
if err := i.db.Put(key, val); err != nil {
i.mut.Unlock()
return 0, err
}
i.mut.Unlock()
return id
return id, nil
}
// Val returns the value for the given index number, or (nil, false) if there
@@ -101,7 +106,7 @@ func (i *smallIndex) Val(id uint32) ([]byte, bool) {
return []byte(val), true
}
func (i *smallIndex) Delete(val []byte) {
func (i *smallIndex) Delete(val []byte) error {
i.mut.Lock()
defer i.mut.Unlock()
@@ -115,7 +120,9 @@ func (i *smallIndex) Delete(val []byte) {
// Put an empty value into the database. This indicates that the
// entry does not exist any more and prevents the ID from being
// reused in the future.
i.db.Put(key, []byte{}, nil)
if err := i.db.Put(key, []byte{}); err != nil {
return err
}
// Delete reverse mapping.
delete(i.id2val, id)
@@ -123,6 +130,7 @@ func (i *smallIndex) Delete(val []byte) {
// Delete forward mapping.
delete(i.val2id, string(val))
return nil
}
// Values returns the set of values in the index
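
The shape of this API change is worth isolating: lookups that hit the in-memory maps stay cheap, while the persistence step now returns backend errors instead of panicking (only load, which runs at startup, still panics). A self-contained sketch of the ID path; the kv interface below is a stand-in for the real backend, not its actual API:

	package main

	import (
		"encoding/binary"
		"fmt"
		"sync"
	)

	// kv stands in for the backend; only Put is needed here.
	type kv interface {
		Put(key, val []byte) error
	}

	type memKV struct{ m map[string][]byte }

	func (s *memKV) Put(k, v []byte) error {
		s.m[string(k)] = append([]byte(nil), v...)
		return nil
	}

	type smallIndexSketch struct {
		db     kv
		prefix []byte
		val2id map[string]uint32
		nextID uint32
		mut    sync.Mutex
	}

	// ID returns the existing ID for val, or allocates one, persists the
	// id->val mapping and returns it. Store errors propagate to the caller.
	func (i *smallIndexSketch) ID(val []byte) (uint32, error) {
		i.mut.Lock()
		defer i.mut.Unlock()
		if id, ok := i.val2id[string(val)]; ok {
			return id, nil
		}
		key := make([]byte, len(i.prefix)+4) // prefix plus uint32 id
		copy(key, i.prefix)
		binary.BigEndian.PutUint32(key[len(i.prefix):], i.nextID)
		if err := i.db.Put(key, val); err != nil {
			return 0, err // nextID is not consumed on failure
		}
		id := i.nextID
		i.nextID++
		i.val2id[string(val)] = id
		return id, nil
	}

	func main() {
		idx := &smallIndexSketch{
			db:     &memKV{m: make(map[string][]byte)},
			prefix: []byte{12, 34},
			val2id: make(map[string]uint32),
		}
		fmt.Println(idx.ID([]byte("hello"))) // 0 <nil>
	}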


@@ -6,11 +6,15 @@
package db
import "testing"
import (
"testing"
"github.com/syncthing/syncthing/lib/db/backend"
)
func TestSmallIndex(t *testing.T) {
db := OpenMemory()
idx := newSmallIndex(db.DB, []byte{12, 34})
db := NewLowlevel(backend.OpenMemory())
idx := newSmallIndex(db, []byte{12, 34})
// ID zero should be unallocated
if val, ok := idx.Val(0); ok || val != nil {
@@ -18,7 +22,9 @@ func TestSmallIndex(t *testing.T) {
}
// A new key should get ID zero
if id := idx.ID([]byte("hello")); id != 0 {
if id, err := idx.ID([]byte("hello")); err != nil {
t.Fatal(err)
} else if id != 0 {
t.Fatal("Expected 0, not", id)
}
// Looking up ID zero should work
@@ -30,23 +36,29 @@ func TestSmallIndex(t *testing.T) {
idx.Delete([]byte("hello"))
// Next ID should be one
if id := idx.ID([]byte("key2")); id != 1 {
if id, err := idx.ID([]byte("key2")); err != nil {
t.Fatal(err)
} else if id != 1 {
t.Fatal("Expected 1, not", id)
}
// Now lets create a new index instance based on what's actually serialized to the database.
idx = newSmallIndex(db.DB, []byte{12, 34})
idx = newSmallIndex(db, []byte{12, 34})
// Status should be about the same as before.
if val, ok := idx.Val(0); ok || val != nil {
t.Fatal("Unexpected return for deleted ID 0")
}
if id := idx.ID([]byte("key2")); id != 1 {
if id, err := idx.ID([]byte("key2")); err != nil {
t.Fatal(err)
} else if id != 1 {
t.Fatal("Expected 1, not", id)
}
// Setting "hello" again should get us ID 2, not 0 as it was originally.
if id := idx.ID([]byte("hello")); id != 2 {
if id, err := idx.ID([]byte("hello")); err != nil {
t.Fatal(err)
} else if id != 2 {
t.Fatal("Expected 2, not", id)
}
}


@@ -175,7 +175,7 @@ func (vl VersionList) String() string {
// update brings the VersionList up to date with file. It returns the updated
// VersionList, a potentially removed old FileVersion and its index, as well as
// the index where the new FileVersion was inserted.
func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, t readOnlyTransaction) (_ VersionList, removedFV FileVersion, removedAt int, insertedAt int) {
func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, t readOnlyTransaction) (_ VersionList, removedFV FileVersion, removedAt int, insertedAt int, err error) {
vl, removedFV, removedAt = vl.pop(device)
nv := FileVersion{
@@ -198,7 +198,7 @@ func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, t re
// The version at this point in the list is equal to or lesser
// ("older") than us. We insert ourselves in front of it.
vl = vl.insertAt(i, nv)
return vl, removedFV, removedAt, i
return vl, removedFV, removedAt, i, nil
case protocol.ConcurrentLesser, protocol.ConcurrentGreater:
// The version at this point is in conflict with us. We must pull
@@ -209,9 +209,11 @@ func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, t re
// to determine the winner.)
//
// A surprise missing file entry here is counted as a win for us.
if of, ok := t.getFile(folder, vl.Versions[i].Device, []byte(file.Name)); !ok || file.WinsConflict(of) {
if of, ok, err := t.getFile(folder, vl.Versions[i].Device, []byte(file.Name)); err != nil {
return vl, removedFV, removedAt, i, err
} else if !ok || file.WinsConflict(of) {
vl = vl.insertAt(i, nv)
return vl, removedFV, removedAt, i
return vl, removedFV, removedAt, i, nil
}
}
}
@@ -219,7 +221,7 @@ func (vl VersionList) update(folder, device []byte, file protocol.FileInfo, t re
// We didn't find a position for an insert above, so append to the end.
vl.Versions = append(vl.Versions, nv)
return vl, removedFV, removedAt, len(vl.Versions) - 1
return vl, removedFV, removedAt, len(vl.Versions) - 1, nil
}
func (vl VersionList) insertAt(i int, v FileVersion) VersionList {


@@ -102,23 +102,23 @@ var xxx_messageInfo_VersionList proto.InternalMessageInfo
// Must be the same as FileInfo but without the blocks field
type FileInfoTruncated struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Type protocol.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=protocol.FileInfoType" json:"type,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
ModifiedBy github_com_syncthing_syncthing_lib_protocol.ShortID `protobuf:"varint,12,opt,name=modified_by,json=modifiedBy,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.ShortID" json:"modified_by"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
RawInvalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
Version protocol.Vector `protobuf:"bytes,9,opt,name=version,proto3" json:"version"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
RawBlockSize int32 `protobuf:"varint,13,opt,name=block_size,json=blockSize,proto3" json:"block_size,omitempty"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedBy github_com_syncthing_syncthing_lib_protocol.ShortID `protobuf:"varint,12,opt,name=modified_by,json=modifiedBy,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.ShortID" json:"modified_by"`
Version protocol.Vector `protobuf:"bytes,9,opt,name=version,proto3" json:"version"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
// repeated BlockInfo Blocks = 16
SymlinkTarget string `protobuf:"bytes,17,opt,name=symlink_target,json=symlinkTarget,proto3" json:"symlink_target,omitempty"`
SymlinkTarget string `protobuf:"bytes,17,opt,name=symlink_target,json=symlinkTarget,proto3" json:"symlink_target,omitempty"`
Type protocol.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=protocol.FileInfoType" json:"type,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
RawBlockSize int32 `protobuf:"varint,13,opt,name=block_size,json=blockSize,proto3" json:"block_size,omitempty"`
// see bep.proto
LocalFlags uint32 `protobuf:"varint,1000,opt,name=local_flags,json=localFlags,proto3" json:"local_flags,omitempty"`
LocalFlags uint32 `protobuf:"varint,1000,opt,name=local_flags,json=localFlags,proto3" json:"local_flags,omitempty"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
RawInvalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
}
func (m *FileInfoTruncated) Reset() { *m = FileInfoTruncated{} }
@@ -249,49 +249,49 @@ func init() { proto.RegisterFile("structs.proto", fileDescriptor_e774e8f5f348d14
var fileDescriptor_e774e8f5f348d14d = []byte{
// 683 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x53, 0x4f, 0x6b, 0xdb, 0x4e,
0x10, 0xb5, 0x62, 0xf9, 0xdf, 0xd8, 0xce, 0x2f, 0x59, 0x42, 0x10, 0x86, 0x9f, 0x2c, 0x5c, 0x0a,
0xa2, 0x07, 0xbb, 0x4d, 0x6e, 0xed, 0xcd, 0x0d, 0x01, 0x43, 0x69, 0xcb, 0x3a, 0xe4, 0x54, 0x30,
0xfa, 0xb3, 0x76, 0x96, 0xc8, 0x5a, 0x47, 0xbb, 0x4e, 0x70, 0x3e, 0x45, 0x8f, 0x3d, 0xe6, 0xe3,
0xe4, 0x98, 0x63, 0xe9, 0xc1, 0xa4, 0x76, 0x0f, 0xfd, 0x18, 0x65, 0x77, 0x25, 0x45, 0xcd, 0xa9,
0xb7, 0x79, 0x6f, 0x66, 0x77, 0x67, 0xe6, 0xbd, 0x85, 0x36, 0x17, 0xc9, 0x32, 0x10, 0xbc, 0xbf,
0x48, 0x98, 0x60, 0x68, 0x27, 0xf4, 0x3b, 0x2f, 0x12, 0xb2, 0x60, 0x7c, 0xa0, 0x08, 0x7f, 0x39,
0x1d, 0xcc, 0xd8, 0x8c, 0x29, 0xa0, 0x22, 0x5d, 0xd8, 0x39, 0x8c, 0xa8, 0xaf, 0x4b, 0x02, 0x16,
0x0d, 0x7c, 0xb2, 0xd0, 0x7c, 0xef, 0x0a, 0x9a, 0xa7, 0x34, 0x22, 0xe7, 0x24, 0xe1, 0x94, 0xc5,
0xe8, 0x35, 0xd4, 0xae, 0x75, 0x68, 0x19, 0x8e, 0xe1, 0x36, 0x8f, 0xf6, 0xfa, 0xd9, 0xa1, 0xfe,
0x39, 0x09, 0x04, 0x4b, 0x86, 0xe6, 0xfd, 0xba, 0x5b, 0xc2, 0x59, 0x19, 0x3a, 0x84, 0x6a, 0x48,
0xae, 0x69, 0x40, 0xac, 0x1d, 0xc7, 0x70, 0x5b, 0x38, 0x45, 0xc8, 0x82, 0x1a, 0x8d, 0xaf, 0xbd,
0x88, 0x86, 0x56, 0xd9, 0x31, 0xdc, 0x3a, 0xce, 0x60, 0xef, 0x14, 0x9a, 0xe9, 0x73, 0x1f, 0x28,
0x17, 0xe8, 0x0d, 0xd4, 0xd3, 0xbb, 0xb8, 0x65, 0x38, 0x65, 0xb7, 0x79, 0xf4, 0x5f, 0x3f, 0xf4,
0xfb, 0x85, 0xae, 0xd2, 0x27, 0xf3, 0xb2, 0xb7, 0xe6, 0xb7, 0xbb, 0x6e, 0xa9, 0xf7, 0x68, 0xc2,
0xbe, 0xac, 0x1a, 0xc5, 0x53, 0x76, 0x96, 0x2c, 0xe3, 0xc0, 0x13, 0x24, 0x44, 0x08, 0xcc, 0xd8,
0x9b, 0x13, 0xd5, 0x7e, 0x03, 0xab, 0x18, 0xbd, 0x02, 0x53, 0xac, 0x16, 0xba, 0xc3, 0xdd, 0xa3,
0xc3, 0xa7, 0x91, 0xf2, 0xe3, 0xab, 0x05, 0xc1, 0xaa, 0x46, 0x9e, 0xe7, 0xf4, 0x96, 0xa8, 0xa6,
0xcb, 0x58, 0xc5, 0xc8, 0x81, 0xe6, 0x82, 0x24, 0x73, 0xca, 0x75, 0x97, 0xa6, 0x63, 0xb8, 0x6d,
0x5c, 0xa4, 0xd0, 0xff, 0x00, 0x73, 0x16, 0xd2, 0x29, 0x25, 0xe1, 0x84, 0x5b, 0x15, 0x75, 0xb6,
0x91, 0x31, 0x63, 0xd4, 0x85, 0x66, 0x9e, 0x8e, 0xb9, 0xd5, 0x74, 0x0c, 0xb7, 0x82, 0xf3, 0x13,
0x1f, 0x39, 0xfa, 0x52, 0x28, 0xf0, 0x57, 0x56, 0xcb, 0x31, 0x5c, 0x73, 0xf8, 0x4e, 0x8e, 0xfd,
0x63, 0xdd, 0x3d, 0x9e, 0x51, 0x71, 0xb1, 0xf4, 0xfb, 0x01, 0x9b, 0x0f, 0xf8, 0x2a, 0x0e, 0xc4,
0x05, 0x8d, 0x67, 0x85, 0xa8, 0x28, 0x6d, 0x7f, 0x7c, 0xc1, 0x12, 0x31, 0x3a, 0x79, 0xba, 0x7d,
0xb8, 0x92, 0x5a, 0x84, 0x24, 0x22, 0x82, 0x84, 0x56, 0x55, 0x6b, 0x91, 0x42, 0xe4, 0x3e, 0xa9,
0x54, 0x93, 0x99, 0xe1, 0xee, 0x66, 0xdd, 0x05, 0xec, 0xdd, 0x8c, 0x34, 0x9b, 0xab, 0x86, 0x5e,
0xc2, 0x6e, 0xcc, 0x26, 0xc5, 0x35, 0xd4, 0xd5, 0x55, 0xed, 0x98, 0x7d, 0x2e, 0x2c, 0xa2, 0x60,
0xa0, 0xc6, 0xbf, 0x19, 0xa8, 0x03, 0x75, 0x4e, 0xae, 0x96, 0x24, 0x0e, 0x88, 0x05, 0x6a, 0x71,
0x39, 0x46, 0x03, 0x00, 0x3f, 0x62, 0xc1, 0xe5, 0x44, 0x49, 0xd2, 0x96, 0x6b, 0x1b, 0xee, 0x6d,
0xd6, 0xdd, 0x16, 0xf6, 0x6e, 0x86, 0x32, 0x31, 0xa6, 0xb7, 0x04, 0x37, 0xfc, 0x2c, 0x94, 0x5d,
0xf2, 0xd5, 0x3c, 0xa2, 0xf1, 0xe5, 0x44, 0x78, 0xc9, 0x8c, 0x08, 0x6b, 0x5f, 0xf9, 0xa0, 0x9d,
0xb2, 0x67, 0x8a, 0x94, 0x82, 0x46, 0x2c, 0xf0, 0xa2, 0xc9, 0x34, 0xf2, 0x66, 0xdc, 0xfa, 0x5d,
0x53, 0x8a, 0x82, 0xe2, 0x4e, 0x25, 0x95, 0x5a, 0xec, 0x97, 0x01, 0xd5, 0xf7, 0x6c, 0x19, 0x0b,
0x8e, 0x0e, 0xa0, 0x32, 0xa5, 0x11, 0xe1, 0xca, 0x58, 0x15, 0xac, 0x81, 0xbc, 0x28, 0xa4, 0x89,
0x9a, 0x8b, 0x12, 0xae, 0x0c, 0x56, 0xc1, 0x45, 0x4a, 0x8d, 0xa7, 0xdf, 0xe6, 0xca, 0x53, 0x15,
0x9c, 0xe3, 0xa2, 0x2e, 0xa6, 0x4a, 0xe5, 0xba, 0x1c, 0x40, 0xc5, 0x5f, 0x09, 0x92, 0x59, 0x49,
0x83, 0xbf, 0x56, 0x55, 0x7d, 0xb6, 0xaa, 0x0e, 0xd4, 0xf5, 0xcf, 0x1b, 0x9d, 0xa8, 0x99, 0x5b,
0x38, 0xc7, 0xc8, 0x86, 0xc2, 0x68, 0x16, 0x7a, 0x3e, 0x6c, 0xef, 0x13, 0x34, 0xf4, 0x94, 0x63,
0x22, 0x90, 0x0b, 0xd5, 0x40, 0x81, 0xf4, 0x37, 0x82, 0xfc, 0x8d, 0x3a, 0x9d, 0x4a, 0x97, 0xe6,
0x65, 0xfb, 0x41, 0x42, 0xe4, 0xaf, 0x53, 0x83, 0x97, 0x71, 0x06, 0x87, 0xce, 0xfd, 0x4f, 0xbb,
0x74, 0xbf, 0xb1, 0x8d, 0x87, 0x8d, 0x6d, 0x3c, 0x6e, 0xec, 0xd2, 0xd7, 0xad, 0x5d, 0xba, 0xdb,
0xda, 0xc6, 0xc3, 0xd6, 0x2e, 0x7d, 0xdf, 0xda, 0x25, 0xbf, 0xaa, 0x5c, 0x71, 0xfc, 0x27, 0x00,
0x00, 0xff, 0xff, 0x38, 0xe1, 0x29, 0xbf, 0xd0, 0x04, 0x00, 0x00,
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x53, 0x4f, 0x6f, 0xda, 0x4e,
0x10, 0xc5, 0xc1, 0x10, 0x18, 0x20, 0xbf, 0x64, 0x15, 0x45, 0x16, 0xd2, 0xcf, 0x58, 0x54, 0x95,
0xac, 0x1e, 0xa0, 0x4d, 0x6e, 0xed, 0x8d, 0x46, 0x91, 0x90, 0xaa, 0xb6, 0x5a, 0xa2, 0x9c, 0x2a,
0x21, 0xff, 0x59, 0xc8, 0x2a, 0xc6, 0x4b, 0xbc, 0x4b, 0x22, 0xf2, 0x29, 0x7a, 0xec, 0x31, 0x1f,
0x27, 0xc7, 0x1c, 0xab, 0x1e, 0x50, 0x0a, 0x3d, 0xf4, 0x63, 0x54, 0xbb, 0x6b, 0x1b, 0x37, 0xa7,
0xde, 0xe6, 0xbd, 0x19, 0x7b, 0x66, 0xde, 0xbc, 0x85, 0x16, 0x17, 0xc9, 0x22, 0x10, 0xbc, 0x37,
0x4f, 0x98, 0x60, 0x68, 0x27, 0xf4, 0xdb, 0x2f, 0x12, 0x32, 0x67, 0xbc, 0xaf, 0x08, 0x7f, 0x31,
0xe9, 0x4f, 0xd9, 0x94, 0x29, 0xa0, 0x22, 0x5d, 0xd8, 0x3e, 0x8a, 0xa8, 0xaf, 0x4b, 0x02, 0x16,
0xf5, 0x7d, 0x32, 0xd7, 0x7c, 0xf7, 0x1a, 0x1a, 0x67, 0x34, 0x22, 0x17, 0x24, 0xe1, 0x94, 0xc5,
0xe8, 0x35, 0xec, 0xde, 0xe8, 0xd0, 0x32, 0x1c, 0xc3, 0x6d, 0x1c, 0xef, 0xf7, 0xb2, 0x8f, 0x7a,
0x17, 0x24, 0x10, 0x2c, 0x19, 0x98, 0x0f, 0xab, 0x4e, 0x09, 0x67, 0x65, 0xe8, 0x08, 0xaa, 0x21,
0xb9, 0xa1, 0x01, 0xb1, 0x76, 0x1c, 0xc3, 0x6d, 0xe2, 0x14, 0x21, 0x0b, 0x76, 0x69, 0x7c, 0xe3,
0x45, 0x34, 0xb4, 0xca, 0x8e, 0xe1, 0xd6, 0x70, 0x06, 0xbb, 0x67, 0xd0, 0x48, 0xdb, 0x7d, 0xa0,
0x5c, 0xa0, 0x37, 0x50, 0x4b, 0xff, 0xc5, 0x2d, 0xc3, 0x29, 0xbb, 0x8d, 0xe3, 0xff, 0x7a, 0xa1,
0xdf, 0x2b, 0x4c, 0x95, 0xb6, 0xcc, 0xcb, 0xde, 0x9a, 0xdf, 0xee, 0x3b, 0xa5, 0xee, 0x93, 0x09,
0x07, 0xb2, 0x6a, 0x18, 0x4f, 0xd8, 0x79, 0xb2, 0x88, 0x03, 0x4f, 0x90, 0x10, 0x21, 0x30, 0x63,
0x6f, 0x46, 0xd4, 0xf8, 0x75, 0xac, 0x62, 0xc9, 0x71, 0x7a, 0x47, 0xd4, 0x20, 0x65, 0xac, 0x62,
0xf4, 0x3f, 0xc0, 0x8c, 0x85, 0x74, 0x42, 0x49, 0x38, 0xe6, 0x56, 0x45, 0x65, 0xea, 0x19, 0x33,
0x42, 0x5f, 0xa0, 0x91, 0xa7, 0xfd, 0xa5, 0xd5, 0x74, 0x0c, 0xd7, 0x1c, 0xbc, 0x93, 0x73, 0xfc,
0x58, 0x75, 0x4e, 0xa6, 0x54, 0x5c, 0x2e, 0xfc, 0x5e, 0xc0, 0x66, 0x7d, 0xbe, 0x8c, 0x03, 0x71,
0x49, 0xe3, 0x69, 0x21, 0x2a, 0x6a, 0xdd, 0x1b, 0x5d, 0xb2, 0x44, 0x0c, 0x4f, 0x71, 0xde, 0x6e,
0xb0, 0x2c, 0xca, 0x5c, 0xff, 0x37, 0x99, 0xdb, 0x50, 0xe3, 0xe4, 0x7a, 0x41, 0xe2, 0x80, 0x58,
0xa0, 0x86, 0xcd, 0x31, 0x7a, 0x09, 0x7b, 0x7c, 0x39, 0x8b, 0x68, 0x7c, 0x35, 0x16, 0x5e, 0x32,
0x25, 0xc2, 0x3a, 0x50, 0xcb, 0xb7, 0x52, 0xf6, 0x5c, 0x91, 0xe8, 0x15, 0x98, 0x62, 0x39, 0xd7,
0x77, 0xda, 0x3b, 0x3e, 0xda, 0x76, 0xcc, 0x45, 0x5c, 0xce, 0x09, 0x56, 0x35, 0xc8, 0x81, 0xc6,
0x9c, 0x24, 0x33, 0xca, 0xf5, 0x5d, 0x4c, 0xc7, 0x70, 0x5b, 0xb8, 0x48, 0xa1, 0x4e, 0x41, 0xa0,
0x98, 0x5b, 0x0d, 0xc7, 0x70, 0x2b, 0xdb, 0x1d, 0x3f, 0x72, 0xd4, 0x07, 0xf0, 0x23, 0x16, 0x5c,
0x8d, 0x95, 0xf4, 0x2d, 0x99, 0x1f, 0xec, 0xaf, 0x57, 0x9d, 0x26, 0xf6, 0x6e, 0x07, 0x32, 0x31,
0xa2, 0x77, 0x04, 0xd7, 0xfd, 0x2c, 0x94, 0x3d, 0x23, 0x16, 0x78, 0xd1, 0x78, 0x12, 0x79, 0x53,
0x6e, 0xfd, 0xde, 0x55, 0x4d, 0x41, 0x71, 0x67, 0x92, 0x92, 0x9e, 0x0a, 0x49, 0x44, 0x04, 0x09,
0xad, 0xaa, 0xf6, 0x54, 0x0a, 0x91, 0xbb, 0x75, 0x9b, 0xfc, 0xac, 0x36, 0xd8, 0x5b, 0xaf, 0x3a,
0x80, 0xbd, 0xdb, 0xa1, 0x66, 0x73, 0xf7, 0x49, 0xb1, 0x62, 0x36, 0x2e, 0x2e, 0x57, 0x53, 0xbf,
0x6a, 0xc5, 0xec, 0xf3, 0x96, 0x4c, 0x2d, 0xf6, 0xcb, 0x80, 0xea, 0x7b, 0xb6, 0x88, 0x05, 0x47,
0x87, 0x50, 0x99, 0xd0, 0x88, 0x70, 0x65, 0xac, 0x0a, 0xd6, 0x40, 0xce, 0x1c, 0xd2, 0x44, 0x5d,
0x8c, 0x12, 0xae, 0xa4, 0xad, 0xe0, 0x22, 0xa5, 0x0e, 0xa7, 0xcf, 0xc0, 0x95, 0xff, 0x2a, 0x38,
0xc7, 0xc5, 0x7d, 0x4c, 0x95, 0xca, 0xf7, 0x39, 0x84, 0x8a, 0xbf, 0x14, 0x24, 0x33, 0xa6, 0x06,
0x7f, 0x99, 0xa0, 0xfa, 0xcc, 0x04, 0x6d, 0xa8, 0xe9, 0x97, 0x37, 0x3c, 0x55, 0xe7, 0x6f, 0xe2,
0x1c, 0x23, 0x1b, 0x0a, 0x2a, 0x5a, 0xe8, 0xb9, 0xae, 0xdd, 0x4f, 0x50, 0xd7, 0x5b, 0x8e, 0x88,
0x40, 0x2e, 0x54, 0x03, 0x05, 0xd2, 0xd7, 0x08, 0xf2, 0x35, 0xea, 0x74, 0x6a, 0xca, 0x34, 0x2f,
0xc7, 0x0f, 0x12, 0x22, 0x5f, 0x9d, 0x5a, 0xbc, 0x8c, 0x33, 0x38, 0x70, 0x1e, 0x7e, 0xda, 0xa5,
0x87, 0xb5, 0x6d, 0x3c, 0xae, 0x6d, 0xe3, 0x69, 0x6d, 0x97, 0xbe, 0x6e, 0xec, 0xd2, 0xfd, 0xc6,
0x36, 0x1e, 0x37, 0x76, 0xe9, 0xfb, 0xc6, 0x2e, 0xf9, 0x55, 0xe5, 0xbe, 0x93, 0x3f, 0x01, 0x00,
0x00, 0xff, 0xff, 0x7e, 0x87, 0x05, 0xb2, 0xd0, 0x04, 0x00, 0x00,
}
func (m *FileVersion) Marshal() (dAtA []byte, err error) {


@@ -27,23 +27,24 @@ message VersionList {
message FileInfoTruncated {
option (gogoproto.goproto_stringer) = false;
string name = 1;
protocol.FileInfoType type = 2;
int64 size = 3;
uint32 permissions = 4;
int64 modified_s = 5;
int32 modified_ns = 11;
uint64 modified_by = 12 [(gogoproto.customtype) = "github.com/syncthing/syncthing/lib/protocol.ShortID", (gogoproto.nullable) = false];
bool deleted = 6;
bool invalid = 7 [(gogoproto.customname) = "RawInvalid"];
bool no_permissions = 8;
protocol.Vector version = 9 [(gogoproto.nullable) = false];
int64 sequence = 10;
int32 block_size = 13 [(gogoproto.customname) = "RawBlockSize"];
// repeated BlockInfo Blocks = 16
string symlink_target = 17;
protocol.FileInfoType type = 2;
uint32 permissions = 4;
int32 modified_ns = 11;
int32 block_size = 13 [(gogoproto.customname) = "RawBlockSize"];
// see bep.proto
uint32 local_flags = 1000;
bool deleted = 6;
bool invalid = 7 [(gogoproto.customname) = "RawInvalid"];
bool no_permissions = 8;
}
// For each folder and device we keep one of these to track the current


@@ -7,111 +7,146 @@
package db
import (
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/util"
)
// A readOnlyTransaction represents a database snapshot.
type readOnlyTransaction struct {
snapshot
backend.ReadTransaction
keyer keyer
}
func (db *instance) newReadOnlyTransaction() readOnlyTransaction {
return readOnlyTransaction{
snapshot: db.GetSnapshot(),
keyer: db.keyer,
func (db *Lowlevel) newReadOnlyTransaction() (readOnlyTransaction, error) {
tran, err := db.NewReadTransaction()
if err != nil {
return readOnlyTransaction{}, err
}
return readOnlyTransaction{
ReadTransaction: tran,
keyer: db.keyer,
}, nil
}
func (t readOnlyTransaction) close() {
t.Release()
}
func (t readOnlyTransaction) getFile(folder, device, file []byte) (protocol.FileInfo, bool) {
return t.getFileByKey(t.keyer.GenerateDeviceFileKey(nil, folder, device, file))
}
func (t readOnlyTransaction) getFileByKey(key []byte) (protocol.FileInfo, bool) {
if f, ok := t.getFileTrunc(key, false); ok {
return f.(protocol.FileInfo), true
func (t readOnlyTransaction) getFile(folder, device, file []byte) (protocol.FileInfo, bool, error) {
key, err := t.keyer.GenerateDeviceFileKey(nil, folder, device, file)
if err != nil {
return protocol.FileInfo{}, false, err
}
return protocol.FileInfo{}, false
return t.getFileByKey(key)
}
func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (FileIntf, bool) {
bs, err := t.Get(key, nil)
if err == leveldb.ErrNotFound {
return nil, false
func (t readOnlyTransaction) getFileByKey(key []byte) (protocol.FileInfo, bool, error) {
f, ok, err := t.getFileTrunc(key, false)
if err != nil || !ok {
return protocol.FileInfo{}, false, err
}
return f.(protocol.FileInfo), true, nil
}
func (t readOnlyTransaction) getFileTrunc(key []byte, trunc bool) (FileIntf, bool, error) {
bs, err := t.Get(key)
if backend.IsNotFound(err) {
return nil, false, nil
}
if err != nil {
l.Debugln("surprise error:", err)
return nil, false
return nil, false, err
}
f, err := unmarshalTrunc(bs, trunc)
if err != nil {
l.Debugln("unmarshal error:", err)
return nil, false
return nil, false, err
}
return f, true
return f, true, nil
}
func (t readOnlyTransaction) getGlobal(keyBuf, folder, file []byte, truncate bool) ([]byte, FileIntf, bool) {
keyBuf = t.keyer.GenerateGlobalVersionKey(keyBuf, folder, file)
bs, err := t.Get(keyBuf, nil)
func (t readOnlyTransaction) getGlobal(keyBuf, folder, file []byte, truncate bool) ([]byte, FileIntf, bool, error) {
var err error
keyBuf, err = t.keyer.GenerateGlobalVersionKey(keyBuf, folder, file)
if err != nil {
return keyBuf, nil, false
return nil, nil, false, err
}
bs, err := t.Get(keyBuf)
if backend.IsNotFound(err) {
return keyBuf, nil, false, nil
}
if err != nil {
return nil, nil, false, err
}
vl, ok := unmarshalVersionList(bs)
if !ok {
return keyBuf, nil, false
return keyBuf, nil, false, nil
}
keyBuf = t.keyer.GenerateDeviceFileKey(keyBuf, folder, vl.Versions[0].Device, file)
if fi, ok := t.getFileTrunc(keyBuf, truncate); ok {
return keyBuf, fi, true
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, vl.Versions[0].Device, file)
if err != nil {
return nil, nil, false, err
}
return keyBuf, nil, false
fi, ok, err := t.getFileTrunc(keyBuf, truncate)
if err != nil || !ok {
return keyBuf, nil, false, err
}
return keyBuf, fi, true, nil
}
// A readWriteTransaction is a readOnlyTransaction plus a batch for writes.
// The batch will be committed on close() or by checkFlush() if it exceeds the
// batch size.
type readWriteTransaction struct {
backend.WriteTransaction
readOnlyTransaction
*batch
}
func (db *instance) newReadWriteTransaction() readWriteTransaction {
return readWriteTransaction{
readOnlyTransaction: db.newReadOnlyTransaction(),
batch: db.newBatch(),
func (db *Lowlevel) newReadWriteTransaction() (readWriteTransaction, error) {
tran, err := db.NewWriteTransaction()
if err != nil {
return readWriteTransaction{}, err
}
return readWriteTransaction{
WriteTransaction: tran,
readOnlyTransaction: readOnlyTransaction{
ReadTransaction: tran,
keyer: db.keyer,
},
}, nil
}
func (t readWriteTransaction) commit() error {
t.readOnlyTransaction.close()
return t.WriteTransaction.Commit()
}
func (t readWriteTransaction) close() {
t.flush()
t.readOnlyTransaction.close()
t.WriteTransaction.Release()
}
// updateGlobal adds this device+version to the version list for the given
// file. If the device is already present in the list, the version is updated.
// If the file does not have an entry in the global list, it is created.
func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, file protocol.FileInfo, meta *metadataTracker) ([]byte, bool) {
func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, file protocol.FileInfo, meta *metadataTracker) ([]byte, bool, error) {
l.Debugf("update global; folder=%q device=%v file=%q version=%v invalid=%v", folder, protocol.DeviceIDFromBytes(device), file.Name, file.Version, file.IsInvalid())
var fl VersionList
if svl, err := t.Get(gk, nil); err == nil {
fl.Unmarshal(svl) // Ignore error, continue with empty fl
svl, err := t.Get(gk)
if err == nil {
_ = fl.Unmarshal(svl) // Ignore error, continue with empty fl
} else if !backend.IsNotFound(err) {
return nil, false, err
}
fl, removedFV, removedAt, insertedAt, err := fl.update(folder, device, file, t.readOnlyTransaction)
if err != nil {
return nil, false, err
}
fl, removedFV, removedAt, insertedAt := fl.update(folder, device, file, t.readOnlyTransaction)
if insertedAt == -1 {
l.Debugln("update global; same version, global unchanged")
return keyBuf, false
return keyBuf, false, nil
}
name := []byte(file.Name)
@@ -121,24 +156,29 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
// Inserted a new newest version
global = file
} else {
keyBuf = t.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, name)
if new, ok := t.getFileByKey(keyBuf); ok {
global = new
} else {
// This file must exist in the db, so this must be caused
// by the db being closed - bail out.
l.Debugln("File should exist:", name)
return keyBuf, false
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, name)
if err != nil {
return nil, false, err
}
new, ok, err := t.getFileByKey(keyBuf)
if err != nil || !ok {
return keyBuf, false, err
}
global = new
}
// Fixup the list of files we need.
keyBuf = t.updateLocalNeed(keyBuf, folder, name, fl, global)
keyBuf, err = t.updateLocalNeed(keyBuf, folder, name, fl, global)
if err != nil {
return nil, false, err
}
if removedAt != 0 && insertedAt != 0 {
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
t.Put(gk, mustMarshal(&fl))
return keyBuf, true
if err := t.Put(gk, mustMarshal(&fl)); err != nil {
return nil, false, err
}
return keyBuf, true, nil
}
// Remove the old global from the global size counter
@@ -149,8 +189,15 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
// The previous newest version is now at index 1
oldGlobalFV = fl.Versions[1]
}
keyBuf = t.keyer.GenerateDeviceFileKey(keyBuf, folder, oldGlobalFV.Device, name)
if oldFile, ok := t.getFileByKey(keyBuf); ok {
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, oldGlobalFV.Device, name)
if err != nil {
return nil, false, err
}
oldFile, ok, err := t.getFileByKey(keyBuf)
if err != nil {
return nil, false, err
}
if ok {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
meta.removeFile(protocol.GlobalDeviceID, oldFile)
@@ -160,27 +207,41 @@ func (t readWriteTransaction) updateGlobal(gk, keyBuf, folder, device []byte, fi
meta.addFile(protocol.GlobalDeviceID, global)
l.Debugf(`new global for "%v" after update: %v`, file.Name, fl)
t.Put(gk, mustMarshal(&fl))
if err := t.Put(gk, mustMarshal(&fl)); err != nil {
return nil, false, err
}
return keyBuf, true
return keyBuf, true, nil
}
// updateLocalNeed checks whether the given file is still needed on the local
// device according to the version list and global FileInfo given and updates
// the db accordingly.
func (t readWriteTransaction) updateLocalNeed(keyBuf, folder, name []byte, fl VersionList, global protocol.FileInfo) []byte {
keyBuf = t.keyer.GenerateNeedFileKey(keyBuf, folder, name)
hasNeeded, _ := t.Has(keyBuf, nil)
func (t readWriteTransaction) updateLocalNeed(keyBuf, folder, name []byte, fl VersionList, global protocol.FileInfo) ([]byte, error) {
var err error
keyBuf, err = t.keyer.GenerateNeedFileKey(keyBuf, folder, name)
if err != nil {
return nil, err
}
_, err = t.Get(keyBuf)
if err != nil && !backend.IsNotFound(err) {
return nil, err
}
hasNeeded := err == nil
if localFV, haveLocalFV := fl.Get(protocol.LocalDeviceID[:]); need(global, haveLocalFV, localFV.Version) {
if !hasNeeded {
l.Debugf("local need insert; folder=%q, name=%q", folder, name)
t.Put(keyBuf, nil)
if err := t.Put(keyBuf, nil); err != nil {
return nil, err
}
}
} else if hasNeeded {
l.Debugf("local need delete; folder=%q, name=%q", folder, name)
t.Delete(keyBuf)
if err := t.Delete(keyBuf); err != nil {
return nil, err
}
}
return keyBuf
return keyBuf, nil
}
func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
@@ -202,71 +263,94 @@ func need(global FileIntf, haveLocal bool, localVersion protocol.Vector) bool {
// removeFromGlobal removes the device from the global version list for the
// given file. If the version list is empty after this, the file entry is
// removed entirely.
func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device []byte, file []byte, meta *metadataTracker) []byte {
func (t readWriteTransaction) removeFromGlobal(gk, keyBuf, folder, device []byte, file []byte, meta *metadataTracker) ([]byte, error) {
l.Debugf("remove from global; folder=%q device=%v file=%q", folder, protocol.DeviceIDFromBytes(device), file)
svl, err := t.Get(gk, nil)
if err != nil {
svl, err := t.Get(gk)
if backend.IsNotFound(err) {
// We might be called to "remove" a global version that doesn't exist
// if the first update for the file is already marked invalid.
return keyBuf
return keyBuf, nil
} else if err != nil {
return nil, err
}
var fl VersionList
err = fl.Unmarshal(svl)
if err != nil {
l.Debugln("unmarshal error:", err)
return keyBuf
return nil, err
}
fl, _, removedAt := fl.pop(device)
if removedAt == -1 {
// There is no version for the given device
return keyBuf
return keyBuf, nil
}
if removedAt == 0 {
// A failure to get the file here is surprising and our
// global size data will be incorrect until a restart...
keyBuf = t.keyer.GenerateDeviceFileKey(keyBuf, folder, device, file)
if f, ok := t.getFileByKey(keyBuf); ok {
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, device, file)
if err != nil {
return nil, err
}
if f, ok, err := t.getFileByKey(keyBuf); err != nil {
return keyBuf, nil
} else if ok {
meta.removeFile(protocol.GlobalDeviceID, f)
}
}
if len(fl.Versions) == 0 {
keyBuf = t.keyer.GenerateNeedFileKey(keyBuf, folder, file)
t.Delete(keyBuf)
t.Delete(gk)
return keyBuf
keyBuf, err = t.keyer.GenerateNeedFileKey(keyBuf, folder, file)
if err != nil {
return nil, err
}
if err := t.Delete(keyBuf); err != nil {
return nil, err
}
if err := t.Delete(gk); err != nil {
return nil, err
}
return keyBuf, nil
}
if removedAt == 0 {
keyBuf = t.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, file)
global, ok := t.getFileByKey(keyBuf)
if !ok {
// This file must exist in the db, so this must be caused
// by the db being closed - bail out.
l.Debugln("File should exist:", file)
return keyBuf
keyBuf, err = t.keyer.GenerateDeviceFileKey(keyBuf, folder, fl.Versions[0].Device, file)
if err != nil {
return nil, err
}
global, ok, err := t.getFileByKey(keyBuf)
if err != nil || !ok {
return keyBuf, err
}
keyBuf, err = t.updateLocalNeed(keyBuf, folder, file, fl, global)
if err != nil {
return nil, err
}
keyBuf = t.updateLocalNeed(keyBuf, folder, file, fl, global)
meta.addFile(protocol.GlobalDeviceID, global)
}
l.Debugf("new global after remove: %v", fl)
t.Put(gk, mustMarshal(&fl))
if err := t.Put(gk, mustMarshal(&fl)); err != nil {
return nil, err
}
return keyBuf
return keyBuf, nil
}
func (t readWriteTransaction) deleteKeyPrefix(prefix []byte) {
dbi := t.NewIterator(util.BytesPrefix(prefix), nil)
for dbi.Next() {
t.Delete(dbi.Key())
t.checkFlush()
func (t readWriteTransaction) deleteKeyPrefix(prefix []byte) error {
dbi, err := t.NewPrefixIterator(prefix)
if err != nil {
return err
}
dbi.Release()
defer dbi.Release()
for dbi.Next() {
if err := t.Delete(dbi.Key()); err != nil {
return err
}
}
return dbi.Error()
}
type marshaller interface {
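
All of the getters above share one contract: a key that is simply absent yields (zero value, false, nil), and only real backend failures surface as errors, via backend.IsNotFound. A sketch of that contract in isolation; store and isNotFound below are stand-ins, not the real backend API:

	package main

	import (
		"errors"
		"fmt"
	)

	var errNotFound = errors.New("not found")

	func isNotFound(err error) bool { return errors.Is(err, errNotFound) }

	// store stands in for a transaction's Get.
	type store map[string][]byte

	func (s store) Get(key []byte) ([]byte, error) {
		if v, ok := s[string(key)]; ok {
			return v, nil
		}
		return nil, errNotFound
	}

	// get maps "not found" to (nil, false, nil): a missing key is not an
	// error, it is simply "not ok"; anything else propagates.
	func get(s store, key []byte) ([]byte, bool, error) {
		bs, err := s.Get(key)
		if isNotFound(err) {
			return nil, false, nil
		}
		if err != nil {
			return nil, false, err
		}
		return bs, true, nil
	}

	func main() {
		s := store{"a": []byte("x")}
		fmt.Println(get(s, []byte("a"))) // [120] true <nil>
		fmt.Println(get(s, []byte("b"))) // [] false <nil>
	}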


@@ -11,22 +11,26 @@ import (
"io"
"os"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/storage"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/syncthing/syncthing/lib/db/backend"
)
// writeJSONS serializes the database to a JSON stream that can be checked
// in to the repo and used for tests.
func writeJSONS(w io.Writer, db *leveldb.DB) {
it := db.NewIterator(&util.Range{}, nil)
func writeJSONS(w io.Writer, db backend.Backend) {
it, err := db.NewPrefixIterator(nil)
if err != nil {
panic(err)
}
defer it.Release()
enc := json.NewEncoder(w)
for it.Next() {
enc.Encode(map[string][]byte{
err := enc.Encode(map[string][]byte{
"k": it.Key(),
"v": it.Value(),
})
if err != nil {
panic(err)
}
}
}
@@ -34,15 +38,15 @@ func writeJSONS(w io.Writer, db *leveldb.DB) {
// here and the linter to not complain.
var _ = writeJSONS
// openJSONS reads a JSON stream file into a leveldb.DB
func openJSONS(file string) (*leveldb.DB, error) {
// openJSONS reads a JSON stream file into a backend DB
func openJSONS(file string) (backend.Backend, error) {
fd, err := os.Open(file)
if err != nil {
return nil, err
}
dec := json.NewDecoder(fd)
db, _ := leveldb.Open(storage.NewMemStorage(), nil)
db := backend.OpenMemory()
for {
var row map[string][]byte
@@ -54,7 +58,9 @@ func openJSONS(file string) (*leveldb.DB, error) {
return nil, err
}
db.Put(row["k"], row["v"], nil)
if err := db.Put(row["k"], row["v"]); err != nil {
return nil, err
}
}
return db, nil
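
A hypothetical round trip through these helpers, from inside the db package (the dump file name and someBackend are made up):

	var buf bytes.Buffer
	writeJSONS(&buf, someBackend) // one {"k": ..., "v": ...} JSON object per key
	// ...commit the stream under testdata/, then in a test:
	loaded, err := openJSONS("testdata/dump.json")
	if err != nil {
		t.Fatal(err)
	}
	_ = loaded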

lib/dialer/debug.go (new file, 23 lines)

@@ -0,0 +1,23 @@
// Copyright (C) 2019 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at https://mozilla.org/MPL/2.0/.
package dialer
import (
"os"
"strings"
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("dialer", "Dialing connections")
// To run before init() of other files that log on init.
_ = func() error {
l.SetDebug("dialer", strings.Contains(os.Getenv("STTRACE"), "dialer") || os.Getenv("STTRACE") == "all")
return nil
}()
)
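
The blank-identifier assignment here is load-bearing: the Go spec runs every package-level variable initializer before any init() in the package, so the debug flag is set before init() code in other files of the package can log. In isolation:

	package main

	import "fmt"

	// Package-level variable initializers run before any init() in the
	// package, regardless of which file they live in.
	var _ = func() int {
		fmt.Println("1: package var initializer")
		return 0
	}()

	func init() {
		fmt.Println("2: init()")
	}

	func main() {
		fmt.Println("3: main()")
	}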


@@ -7,34 +7,24 @@
package dialer
import (
"net"
"net/http"
"net/url"
"os"
"time"
"golang.org/x/net/proxy"
"github.com/syncthing/syncthing/lib/logger"
)
var (
l = logger.DefaultLogger.NewFacility("dialer", "Dialing connections")
proxyDialer proxy.Dialer
usingProxy bool
noFallback = os.Getenv("ALL_PROXY_NO_FALLBACK") != ""
noFallback = os.Getenv("ALL_PROXY_NO_FALLBACK") != ""
)
type dialFunc func(network, addr string) (net.Conn, error)
func init() {
proxy.RegisterDialerType("socks", socksDialerFunction)
proxyDialer = getDialer(proxy.Direct)
usingProxy = proxyDialer != proxy.Direct
if usingProxy {
if proxyDialer := proxy.FromEnvironment(); proxyDialer != proxy.Direct {
http.DefaultTransport = &http.Transport{
Dial: Dial,
DialContext: DialContext,
Proxy: http.ProxyFromEnvironment,
TLSHandshakeTimeout: 10 * time.Second,
}
@@ -55,31 +45,6 @@ func init() {
}
}
func dialWithFallback(proxyDialFunc dialFunc, fallbackDialFunc dialFunc, network, addr string) (net.Conn, error) {
conn, err := proxyDialFunc(network, addr)
if err == nil {
l.Debugf("Dialing %s address %s via proxy - success, %s -> %s", network, addr, conn.LocalAddr(), conn.RemoteAddr())
SetTCPOptions(conn)
return dialerConn{
conn, newDialerAddr(network, addr),
}, nil
}
l.Debugf("Dialing %s address %s via proxy - error %s", network, addr, err)
if noFallback {
return conn, err
}
conn, err = fallbackDialFunc(network, addr)
if err == nil {
l.Debugf("Dialing %s address %s via fallback - success, %s -> %s", network, addr, conn.LocalAddr(), conn.RemoteAddr())
SetTCPOptions(conn)
} else {
l.Debugf("Dialing %s address %s via fallback - error %s", network, addr, err)
}
return conn, err
}
// This is a rip off of proxy.FromURL for "socks" URL scheme
func socksDialerFunction(u *url.URL, forward proxy.Dialer) (proxy.Dialer, error) {
var auth *proxy.Auth
@@ -93,67 +58,3 @@ func socksDialerFunction(u *url.URL, forward proxy.Dialer) (proxy.Dialer, error)
return proxy.SOCKS5("tcp", u.Host, auth, forward)
}
// This is a rip off of proxy.FromEnvironment with a custom forward dialer
func getDialer(forward proxy.Dialer) proxy.Dialer {
allProxy := os.Getenv("all_proxy")
if len(allProxy) == 0 {
return forward
}
proxyURL, err := url.Parse(allProxy)
if err != nil {
return forward
}
prxy, err := proxy.FromURL(proxyURL, forward)
if err != nil {
return forward
}
noProxy := os.Getenv("no_proxy")
if len(noProxy) == 0 {
return prxy
}
perHost := proxy.NewPerHost(prxy, forward)
perHost.AddFromString(noProxy)
return perHost
}
type timeoutDirectDialer struct {
timeout time.Duration
}
func (d *timeoutDirectDialer) Dial(network, addr string) (net.Conn, error) {
return net.DialTimeout(network, addr, d.timeout)
}
type dialerConn struct {
net.Conn
addr net.Addr
}
func (c dialerConn) RemoteAddr() net.Addr {
return c.addr
}
func newDialerAddr(network, addr string) net.Addr {
netaddr, err := net.ResolveIPAddr(network, addr)
if err == nil {
return netaddr
}
return fallbackAddr{network, addr}
}
type fallbackAddr struct {
network string
addr string
}
func (a fallbackAddr) Network() string {
return a.network
}
func (a fallbackAddr) String() string {
return a.addr
}
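
The deleted helpers re-implemented what golang.org/x/net/proxy already provides; the replacement just asks the library, which reads all_proxy and no_proxy itself and hands back proxy.Direct when nothing is configured. As a standalone sketch (the address is an example):

	package main

	import (
		"fmt"

		"golang.org/x/net/proxy"
	)

	func main() {
		d := proxy.FromEnvironment()
		if d == proxy.Direct {
			fmt.Println("no proxy configured; dialing directly")
		}
		conn, err := d.Dial("tcp", "203.0.113.1:22000")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected via", conn.RemoteAddr())
	}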


@@ -7,49 +7,18 @@
package dialer
import (
"context"
"errors"
"fmt"
"net"
"time"
"golang.org/x/net/ipv4"
"golang.org/x/net/ipv6"
"golang.org/x/net/proxy"
)
// Dial tries dialing via proxy if a proxy is configured, and falls back to
// a direct connection if no proxy is defined, or connecting via proxy fails.
func Dial(network, addr string) (net.Conn, error) {
if usingProxy {
return dialWithFallback(proxyDialer.Dial, net.Dial, network, addr)
}
return net.Dial(network, addr)
}
// DialTimeout tries dialing via proxy with a timeout if a proxy is configured,
// and falls back to a direct connection if no proxy is defined, or connecting
// via proxy fails. The timeout can potentially be applied twice, once trying
// to connect via the proxy connection, and second time trying to connect
// directly.
func DialTimeout(network, addr string, timeout time.Duration) (net.Conn, error) {
if usingProxy {
// Because the proxy package is poorly structured, we have to
// construct a struct that matches proxy.Dialer but has a timeout
// and reconstrcut the proxy dialer using that, in order to be able to
// set a timeout.
dd := &timeoutDirectDialer{
timeout: timeout,
}
// Check if the dialer we are getting is not timeoutDirectDialer we just
// created. It could happen that usingProxy is true, but getDialer
// returns timeoutDirectDialer due to env vars changing.
if timeoutProxyDialer := getDialer(dd); timeoutProxyDialer != dd {
directDialFunc := func(inetwork, iaddr string) (net.Conn, error) {
return net.DialTimeout(inetwork, iaddr, timeout)
}
return dialWithFallback(timeoutProxyDialer.Dial, directDialFunc, network, addr)
}
}
return net.DialTimeout(network, addr, timeout)
}
var errUnexpectedInterfaceType = errors.New("unexpected interface type")
// SetTCPOptions sets our default TCP options on a TCP connection, possibly
// digging through dialerConn to extract the *net.TCPConn
@@ -70,10 +39,6 @@ func SetTCPOptions(conn net.Conn) error {
return err
}
return nil
case dialerConn:
return SetTCPOptions(conn.Conn)
default:
return fmt.Errorf("unknown connection type %T", conn)
}
@@ -89,11 +54,54 @@ func SetTrafficClass(conn net.Conn, class int) error {
return e1
}
return e2
case dialerConn:
return SetTrafficClass(conn.Conn, class)
default:
return fmt.Errorf("unknown connection type %T", conn)
}
}
func dialContextWithFallback(ctx context.Context, fallback proxy.ContextDialer, network, addr string) (net.Conn, error) {
dialer, ok := proxy.FromEnvironment().(proxy.ContextDialer)
if !ok {
return nil, errUnexpectedInterfaceType
}
if dialer == proxy.Direct {
return fallback.DialContext(ctx, network, addr)
}
if noFallback {
return dialer.DialContext(ctx, network, addr)
}
ctx, cancel := context.WithCancel(ctx)
defer cancel()
var proxyConn, fallbackConn net.Conn
var proxyErr, fallbackErr error
proxyDone := make(chan struct{})
fallbackDone := make(chan struct{})
go func() {
proxyConn, proxyErr = dialer.DialContext(ctx, network, addr)
close(proxyDone)
}()
go func() {
fallbackConn, fallbackErr = fallback.DialContext(ctx, network, addr)
close(fallbackDone)
}()
<-proxyDone
if proxyErr == nil {
go func() {
<-fallbackDone
if fallbackErr == nil {
fallbackConn.Close()
}
}()
return proxyConn, nil
}
<-fallbackDone
return fallbackConn, fallbackErr
}
// DialContext dials via context and/or directly, depending on how it is configured.
// If dialing via proxy and allowing fallback, dialing for both happens simultaneously
// and the proxy connection is returned if successful.
func DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
return dialContextWithFallback(ctx, proxy.Direct, network, addr)
}
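
With DialTimeout gone, timeouts are expressed through the context, which covers the proxy attempt and the direct fallback at once since both goroutines dial with the same ctx. A hypothetical caller (the address is an example):

	package main

	import (
		"context"
		"fmt"
		"time"

		"github.com/syncthing/syncthing/lib/dialer"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		conn, err := dialer.DialContext(ctx, "tcp", "203.0.113.1:22000")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}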


@@ -8,6 +8,7 @@ package discover
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"errors"
@@ -91,8 +92,8 @@ func NewGlobal(server string, cert tls.Certificate, addrList AddressLister, evLo
var announceClient httpClient = &http.Client{
Timeout: requestTimeout,
Transport: &http.Transport{
Dial: dialer.Dial,
Proxy: http.ProxyFromEnvironment,
DialContext: dialer.DialContext,
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: opts.insecure,
Certificates: []tls.Certificate{cert},
@@ -108,8 +109,8 @@ func NewGlobal(server string, cert tls.Certificate, addrList AddressLister, evLo
var queryClient httpClient = &http.Client{
Timeout: requestTimeout,
Transport: &http.Transport{
Dial: dialer.Dial,
Proxy: http.ProxyFromEnvironment,
DialContext: dialer.DialContext,
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: opts.insecure,
},
@@ -128,7 +129,7 @@ func NewGlobal(server string, cert tls.Certificate, addrList AddressLister, evLo
noLookup: opts.noLookup,
evLogger: evLogger,
}
cl.Service = util.AsService(cl.serve)
cl.Service = util.AsService(cl.serve, cl.String())
if !opts.noAnnounce {
// If we are supposed to announce, it's an error until we've done so.
cl.setError(errors.New("not announced"))
@@ -188,11 +189,11 @@ func (c *globalClient) String() string {
return "global@" + c.server
}
func (c *globalClient) serve(stop chan struct{}) {
func (c *globalClient) serve(ctx context.Context) {
if c.noAnnounce {
// We're configured to not do announcements, only lookups. To maintain
// the same interface, we just pause here if Serve() is run.
<-stop
<-ctx.Done()
return
}
@@ -212,7 +213,7 @@ func (c *globalClient) serve(stop chan struct{}) {
case <-timer.C:
c.sendAnnouncement(timer)
case <-stop:
case <-ctx.Done():
return
}
}
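
The stop-channel-to-context conversion is the same in every service: the select's stop case becomes <-ctx.Done(). A generic sketch of the resulting loop shape, using nothing beyond the standard library:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	func serve(ctx context.Context) {
		timer := time.NewTimer(100 * time.Millisecond)
		defer timer.Stop()
		for {
			select {
			case <-timer.C:
				fmt.Println("periodic work") // e.g. send an announcement
				timer.Reset(100 * time.Millisecond)
			case <-ctx.Done():
				return
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
		defer cancel()
		serve(ctx)
	}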


@@ -10,8 +10,10 @@
package discover
import (
"context"
"encoding/binary"
"encoding/hex"
"fmt"
"io"
"net"
"net/url"
@@ -81,9 +83,9 @@ func NewLocal(id protocol.DeviceID, addr string, addrList AddressLister, evLogge
c.beacon = beacon.NewMulticast(addr)
}
c.Add(c.beacon)
c.Add(util.AsService(c.recvAnnouncements))
c.Add(util.AsService(c.recvAnnouncements, fmt.Sprintf("%s/recv", c)))
c.Add(util.AsService(c.sendLocalAnnouncements))
c.Add(util.AsService(c.sendLocalAnnouncements, fmt.Sprintf("%s/sendLocal", c)))
return c, nil
}
@@ -135,7 +137,7 @@ func (c *localClient) announcementPkt(instanceID int64, msg []byte) ([]byte, boo
return msg, true
}
func (c *localClient) sendLocalAnnouncements(stop chan struct{}) {
func (c *localClient) sendLocalAnnouncements(ctx context.Context) {
var msg []byte
var ok bool
instanceID := rand.Int63()
@@ -147,18 +149,18 @@ func (c *localClient) sendLocalAnnouncements(stop chan struct{}) {
select {
case <-c.localBcastTick:
case <-c.forcedBcastTick:
case <-stop:
case <-ctx.Done():
return
}
}
}
func (c *localClient) recvAnnouncements(stop chan struct{}) {
func (c *localClient) recvAnnouncements(ctx context.Context) {
b := c.beacon
warnedAbout := make(map[string]bool)
for {
select {
case <-stop:
case <-ctx.Done():
return
default:
}


@@ -8,8 +8,10 @@
package events
import (
"context"
"encoding/json"
"errors"
"fmt"
"runtime"
"time"
@@ -258,7 +260,7 @@ func NewLogger() Logger {
funcs: make(chan func()),
toUnsubscribe: make(chan *subscription),
}
l.Service = util.AsService(l.serve)
l.Service = util.AsService(l.serve, l.String())
// Make sure the timer is in the stopped state and hasn't fired anything
// into the channel.
if !l.timeout.Stop() {
@@ -267,7 +269,7 @@ func NewLogger() Logger {
return l
}
func (l *logger) serve(stop chan struct{}) {
func (l *logger) serve(ctx context.Context) {
loop:
for {
select {
@@ -282,7 +284,7 @@ loop:
case s := <-l.toUnsubscribe:
l.unsubscribe(s)
case <-stop:
case <-ctx.Done():
break loop
}
}
@@ -388,6 +390,10 @@ func (l *logger) unsubscribe(s *subscription) {
close(s.events)
}
func (l *logger) String() string {
return fmt.Sprintf("events.Logger/@%p", l)
}
// Poll returns an event from the subscription or an error if the poll times
// out or the event channel is closed. Poll should not be called concurrently
// from multiple goroutines for a single subscription.
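
The Stop-then-drain sequence in NewLogger above is the standard idiom for putting a timer into a known-stopped, known-empty state. In isolation:

	package main

	import "time"

	func main() {
		t := time.NewTimer(time.Minute)
		if !t.Stop() {
			// Stop returns false if the timer already fired; drain the
			// channel so a later receive doesn't pick up a stale tick.
			<-t.C
		}
		// t is now reliably stopped and its channel is empty.
	}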


@@ -55,12 +55,12 @@ func (f *BasicFilesystem) Watch(name string, ignore Matcher, ctx context.Context
}
errChan := make(chan error)
go f.watchLoop(name, roots, backendChan, outChan, errChan, ignore, ctx)
go f.watchLoop(ctx, name, roots, backendChan, outChan, errChan, ignore)
return outChan, errChan, nil
}
func (f *BasicFilesystem) watchLoop(name string, roots []string, backendChan chan notify.EventInfo, outChan chan<- Event, errChan chan<- error, ignore Matcher, ctx context.Context) {
func (f *BasicFilesystem) watchLoop(ctx context.Context, name string, roots []string, backendChan chan notify.EventInfo, outChan chan<- Event, errChan chan<- error, ignore Matcher) {
for {
// Detect channel overflow
if len(backendChan) == backendBuffer {


@@ -178,7 +178,7 @@ func TestWatchWinRoot(t *testing.T) {
}
cancel()
}()
fs.watchLoop(".", roots, backendChan, outChan, errChan, fakeMatcher{}, ctx)
fs.watchLoop(ctx, ".", roots, backendChan, outChan, errChan, fakeMatcher{})
}()
// filepath.Dir as watch has a /... suffix
@@ -219,7 +219,7 @@ func expectErrorForPath(t *testing.T, path string) {
// testFs is Filesystem, but we need BasicFilesystem here
fs := newBasicFilesystem(testDirAbs)
go fs.watchLoop(".", []string{testDirAbs}, backendChan, outChan, errChan, fakeMatcher{}, ctx)
go fs.watchLoop(ctx, ".", []string{testDirAbs}, backendChan, outChan, errChan, fakeMatcher{})
backendChan <- fakeEventInfo(path)
@@ -244,7 +244,7 @@ func TestWatchSubpath(t *testing.T) {
fs := newBasicFilesystem(testDirAbs)
abs, _ := fs.rooted("sub")
go fs.watchLoop("sub", []string{testDirAbs}, backendChan, outChan, errChan, fakeMatcher{}, ctx)
go fs.watchLoop(ctx, "sub", []string{testDirAbs}, backendChan, outChan, errChan, fakeMatcher{})
backendChan <- fakeEventInfo(filepath.Join(abs, "file"))


@@ -47,12 +47,14 @@ const randomBlockShift = 14 // 128k
// maxsize=n to generate files up to a total of n MiB (default 0)
// sizeavg=n to set the average size of random files, in bytes (default 1<<20)
// seed=n to set the initial random seed (default 0)
// insens=b "true" makes filesystem case-insensitive Windows- or OSX-style (default false)
//
// - Two fakefs:s pointing at the same root path see the same files.
//
type fakefs struct {
mut sync.Mutex
root *fakeEntry
mut sync.Mutex
root *fakeEntry
insens bool
}
var (
@@ -90,6 +92,10 @@ func newFakeFilesystem(root string) *fakefs {
maxsize, _ := strconv.Atoi(params.Get("maxsize"))
sizeavg, _ := strconv.Atoi(params.Get("sizeavg"))
seed, _ := strconv.Atoi(params.Get("seed"))
if params.Get("insens") == "true" {
fs.insens = true
}
if sizeavg == 0 {
sizeavg = 1 << 20
}
@@ -149,6 +155,9 @@ type fakeEntry struct {
func (fs *fakefs) entryForName(name string) *fakeEntry {
// bug: lookup doesn't work through symlinks.
if fs.insens {
name = UnicodeLowercase(name)
}
name = filepath.ToSlash(name)
if name == "." || name == "/" {
@@ -232,6 +241,11 @@ func (fs *fakefs) create(name string) (*fakeEntry, error) {
mode: 0666,
mtime: time.Now(),
}
if fs.insens {
base = UnicodeLowercase(base)
}
entry.children[base] = new
return new, nil
}
@@ -241,6 +255,9 @@ func (fs *fakefs) Create(name string) (File, error) {
if err != nil {
return nil, err
}
if fs.insens {
return &fakeFile{fakeEntry: entry, presentedName: filepath.Base(name)}, nil
}
return &fakeFile{fakeEntry: entry}, nil
}
@@ -264,8 +281,8 @@ func (fs *fakefs) DirNames(name string) ([]string, error) {
}
names := make([]string, 0, len(entry.children))
for name := range entry.children {
names = append(names, name)
for _, child := range entry.children {
names = append(names, child.name)
}
return names, nil
@@ -279,7 +296,13 @@ func (fs *fakefs) Lstat(name string) (FileInfo, error) {
if entry == nil {
return nil, os.ErrNotExist
}
return &fakeFileInfo{*entry}, nil
info := &fakeFileInfo{*entry}
if fs.insens {
info.name = filepath.Base(name)
}
return info, nil
}
func (fs *fakefs) Mkdir(name string, perm FileMode) error {
@@ -289,17 +312,22 @@ func (fs *fakefs) Mkdir(name string, perm FileMode) error {
dir := filepath.Dir(name)
base := filepath.Base(name)
entry := fs.entryForName(dir)
key := base
if entry == nil {
return os.ErrNotExist
}
if entry.entryType != fakeEntryTypeDir {
return os.ErrExist
}
if _, ok := entry.children[base]; ok {
if fs.insens {
key = UnicodeLowercase(key)
}
if _, ok := entry.children[key]; ok {
return os.ErrExist
}
entry.children[base] = &fakeEntry{
entry.children[key] = &fakeEntry{
name: base,
entryType: fakeEntryTypeDir,
mode: perm,
@@ -315,7 +343,12 @@ func (fs *fakefs) MkdirAll(name string, perm FileMode) error {
comps := strings.Split(name, "/")
entry := fs.root
for _, comp := range comps {
next, ok := entry.children[comp]
key := comp
if fs.insens {
key = UnicodeLowercase(key)
}
next, ok := entry.children[key]
if !ok {
new := &fakeEntry{
@@ -325,7 +358,7 @@ func (fs *fakefs) MkdirAll(name string, perm FileMode) error {
mtime: time.Now(),
children: make(map[string]*fakeEntry),
}
entry.children[comp] = new
entry.children[key] = new
next = new
} else if next.entryType != fakeEntryTypeDir {
return errors.New("not a directory")
@@ -344,28 +377,37 @@ func (fs *fakefs) Open(name string) (File, error) {
if entry == nil || entry.entryType != fakeEntryTypeFile {
return nil, os.ErrNotExist
}
if fs.insens {
return &fakeFile{fakeEntry: entry, presentedName: filepath.Base(name)}, nil
}
return &fakeFile{fakeEntry: entry}, nil
}
func (fs *fakefs) OpenFile(name string, flags int, mode FileMode) (File, error) {
fs.mut.Lock()
defer fs.mut.Unlock()
if flags&os.O_CREATE == 0 {
return fs.Open(name)
}
fs.mut.Lock()
defer fs.mut.Unlock()
dir := filepath.Dir(name)
base := filepath.Base(name)
entry := fs.entryForName(dir)
key := base
if entry == nil {
return nil, os.ErrNotExist
} else if entry.entryType != fakeEntryTypeDir {
return nil, errors.New("not a directory")
}
if fs.insens {
key = UnicodeLowercase(key)
}
if flags&os.O_EXCL != 0 {
if _, ok := entry.children[base]; ok {
if _, ok := entry.children[key]; ok {
return nil, os.ErrExist
}
}
@@ -376,7 +418,7 @@ func (fs *fakefs) OpenFile(name string, flags int, mode FileMode) (File, error)
mtime: time.Now(),
}
entry.children[base] = newEntry
entry.children[key] = newEntry
return &fakeFile{fakeEntry: newEntry}, nil
}
@@ -397,6 +439,10 @@ func (fs *fakefs) Remove(name string) error {
fs.mut.Lock()
defer fs.mut.Unlock()
if fs.insens {
name = UnicodeLowercase(name)
}
entry := fs.entryForName(name)
if entry == nil {
return os.ErrNotExist
@@ -414,9 +460,13 @@ func (fs *fakefs) RemoveAll(name string) error {
fs.mut.Lock()
defer fs.mut.Unlock()
if fs.insens {
name = UnicodeLowercase(name)
}
entry := fs.entryForName(filepath.Dir(name))
if entry == nil {
return os.ErrNotExist
return nil // all tested real systems exhibit this behaviour
}
// RemoveAll is easy when the file system uses garbage collection under
@@ -429,12 +479,20 @@ func (fs *fakefs) Rename(oldname, newname string) error {
fs.mut.Lock()
defer fs.mut.Unlock()
oldKey := filepath.Base(oldname)
newKey := filepath.Base(newname)
if fs.insens {
oldKey = UnicodeLowercase(oldKey)
newKey = UnicodeLowercase(newKey)
}
p0 := fs.entryForName(filepath.Dir(oldname))
if p0 == nil {
return os.ErrNotExist
}
entry := p0.children[filepath.Base(oldname)]
entry := p0.children[oldKey]
if entry == nil {
return os.ErrNotExist
}
@@ -444,13 +502,24 @@ func (fs *fakefs) Rename(oldname, newname string) error {
return os.ErrNotExist
}
dst, ok := p1.children[filepath.Base(newname)]
if ok && dst.entryType == fakeEntryTypeDir {
return errors.New("is a directory")
dst, ok := p1.children[newKey]
if ok {
if fs.insens && newKey == oldKey {
// case-only in-place rename
entry.name = filepath.Base(newname)
return nil
}
if dst.entryType == fakeEntryTypeDir {
return errors.New("is a directory")
}
}
p1.children[filepath.Base(newname)] = entry
delete(p0.children, filepath.Base(oldname))
p1.children[newKey] = entry
entry.name = filepath.Base(newname)
delete(p0.children, oldKey)
return nil
}
@@ -500,18 +569,30 @@ func (fs *fakefs) URI() string {
}
func (fs *fakefs) SameFile(fi1, fi2 FileInfo) bool {
return fi1.Name() == fi2.Name()
// BUG: real systems base file sameness on path, inodes, etc
// we try our best, but FileInfo just doesn't have enough data
// so there be false positives, especially on Windows
// where ModTime is not that precise
var ok bool
if fs.insens {
ok = UnicodeLowercase(fi1.Name()) == UnicodeLowercase(fi2.Name())
} else {
ok = fi1.Name() == fi2.Name()
}
return ok && fi1.ModTime().Equal(fi2.ModTime()) && fi1.Mode() == fi2.Mode() && fi1.IsDir() == fi2.IsDir() && fi1.IsRegular() == fi2.IsRegular() && fi1.IsSymlink() == fi2.IsSymlink() && fi1.Owner() == fi2.Owner() && fi1.Group() == fi2.Group()
}
// fakeFile is the representation of an open file. We don't care if it's
// opened for reading or writing, it's all good.
type fakeFile struct {
*fakeEntry
mut sync.Mutex
rng io.Reader
seed int64
offset int64
seedOffs int64
mut sync.Mutex
rng io.Reader
seed int64
offset int64
seedOffs int64
presentedName string // present (i.e. != "") on insensitive fs only
}
func (f *fakeFile) Close() error {
@@ -674,6 +755,9 @@ func (f *fakeFile) WriteAt(p []byte, off int64) (int, error) {
}
func (f *fakeFile) Name() string {
if f.presentedName != "" {
return f.presentedName
}
return f.name
}
@@ -690,7 +774,12 @@ func (f *fakeFile) Truncate(size int64) error {
}
func (f *fakeFile) Stat() (FileInfo, error) {
return &fakeFileInfo{*f.fakeEntry}, nil
info := &fakeFileInfo{*f.fakeEntry}
if f.presentedName != "" {
info.name = f.presentedName
}
return info, nil
}
func (f *fakeFile) Sync() error {

View File

@@ -8,9 +8,16 @@ package fs
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
"runtime"
"sort"
"testing"
"time"
)
func TestFakeFS(t *testing.T) {
@@ -123,13 +130,11 @@ func TestFakeFS(t *testing.T) {
}
}
func TestFakeFSRead(t *testing.T) {
func testFakeFSRead(t *testing.T, fs Filesystem) {
// Test some basic aspects of the fakefs
fs := newFakeFilesystem("/foo/bar/baz")
// Create
fd, _ := fs.Create("test")
defer fd.Close()
fd.Truncate(3 * 1 << randomBlockShift)
// Read
@@ -172,3 +177,734 @@ func TestFakeFSRead(t *testing.T) {
t.Error("data mismatch")
}
}
type testFS struct {
name string
fs Filesystem
}
type test struct {
name string
impl func(t *testing.T, fs Filesystem)
}
func TestFakeFSCaseSensitive(t *testing.T) {
var tests = []test{
{"Read", testFakeFSRead},
{"OpenFile", testFakeFSOpenFile},
{"RemoveAll", testFakeFSRemoveAll},
{"Remove", testFakeFSRemove},
{"Rename", testFakeFSRename},
{"Mkdir", testFakeFSMkdir},
{"SameFile", testFakeFSSameFile},
{"DirNames", testDirNames},
{"FileName", testFakeFSFileName},
}
var filesystems = []testFS{
{"fakefs", newFakeFilesystem("/foo")},
}
testDir, sensitive := createTestDir(t)
defer removeTestDir(t, testDir)
if sensitive {
filesystems = append(filesystems, testFS{runtime.GOOS, newBasicFilesystem(testDir)})
}
runTests(t, tests, filesystems)
}
func TestFakeFSCaseInsensitive(t *testing.T) {
var tests = []test{
{"Read", testFakeFSRead},
{"OpenFile", testFakeFSOpenFile},
{"RemoveAll", testFakeFSRemoveAll},
{"Remove", testFakeFSRemove},
{"Mkdir", testFakeFSMkdir},
{"SameFile", testFakeFSSameFile},
{"DirNames", testDirNames},
{"FileName", testFakeFSFileName},
{"GeneralInsens", testFakeFSCaseInsensitive},
{"MkdirAllInsens", testFakeFSCaseInsensitiveMkdirAll},
{"StatInsens", testFakeFSStatInsens},
{"RenameInsens", testFakeFSRenameInsensitive},
{"MkdirInsens", testFakeFSMkdirInsens},
{"OpenFileInsens", testFakeFSOpenFileInsens},
{"RemoveAllInsens", testFakeFSRemoveAllInsens},
{"RemoveInsens", testFakeFSRemoveInsens},
{"SameFileInsens", testFakeFSSameFileInsens},
{"CreateInsens", testFakeFSCreateInsens},
{"FileNameInsens", testFakeFSFileNameInsens},
}
var filesystems = []testFS{
{"fakefs", newFakeFilesystem("/foobar?insens=true")},
}
testDir, sensitive := createTestDir(t)
defer removeTestDir(t, testDir)
if !sensitive {
filesystems = append(filesystems, testFS{runtime.GOOS, newBasicFilesystem(testDir)})
}
runTests(t, tests, filesystems)
}
func createTestDir(t *testing.T) (string, bool) {
t.Helper()
testDir, err := ioutil.TempDir("", "")
if err != nil {
t.Fatalf("could not create temporary dir for testing: %s", err)
}
if fd, err := os.Create(filepath.Join(testDir, ".stfolder")); err != nil {
t.Fatalf("could not create .stfolder: %s", err)
} else {
fd.Close()
}
var sensitive bool
if f, err := os.Open(filepath.Join(testDir, ".STfolder")); err != nil {
sensitive = true
} else {
defer f.Close()
}
return testDir, sensitive
}
func removeTestDir(t *testing.T, testDir string) {
t.Helper()
if err := os.RemoveAll(testDir); err != nil {
t.Fatalf("could not remove test directory: %s", err)
}
}
func runTests(t *testing.T, tests []test, filesystems []testFS) {
for _, filesystem := range filesystems {
for _, test := range tests {
name := fmt.Sprintf("%s_%s", test.name, filesystem.name)
t.Run(name, func(t *testing.T) {
test.impl(t, filesystem.fs)
if err := cleanup(filesystem.fs); err != nil {
t.Errorf("cleanup failed: %s", err)
}
})
}
}
}
func testFakeFSCaseInsensitive(t *testing.T, fs Filesystem) {
bs1 := []byte("test")
err := fs.Mkdir("/fUbar", 0755)
if err != nil {
t.Fatal(err)
}
fd1, err := fs.Create("fuBAR/SISYPHOS")
if err != nil {
t.Fatalf("could not create file: %s", err)
}
defer fd1.Close()
_, err = fd1.Write(bs1)
if err != nil {
t.Fatal(err)
}
// Try reading from the same file with different filenames
fd2, err := fs.Open("Fubar/Sisyphos")
if err != nil {
t.Fatalf("could not open file by its case-differing filename: %s", err)
}
defer fd2.Close()
if _, err := fd2.Seek(0, io.SeekStart); err != nil {
t.Fatal(err)
}
bs2, err := ioutil.ReadAll(fd2)
if err != nil {
t.Fatal(err)
}
if len(bs1) != len(bs2) {
t.Errorf("wrong number of bytes, expected %d, got %d", len(bs1), len(bs2))
}
}
func testFakeFSCaseInsensitiveMkdirAll(t *testing.T, fs Filesystem) {
err := fs.MkdirAll("/fOO/Bar/bAz", 0755)
if err != nil {
t.Fatal(err)
}
fd, err := fs.OpenFile("/foo/BaR/BaZ/tESt", os.O_CREATE, 0644)
if err != nil {
t.Fatal(err)
}
if err = fd.Close(); err != nil {
t.Fatal(err)
}
if err = fs.Rename("/FOO/BAR/baz/tesT", "/foo/baR/BAZ/Qux"); err != nil {
t.Fatal(err)
}
}
func testDirNames(t *testing.T, fs Filesystem) {
filenames := []string{"fOO", "Bar", "baz"}
for _, filename := range filenames {
if fd, err := fs.Create("/" + filename); err != nil {
t.Errorf("Could not create %s: %s", filename, err)
} else {
fd.Close()
}
}
assertDir(t, fs, "/", filenames)
}
func assertDir(t *testing.T, fs Filesystem, directory string, filenames []string) {
t.Helper()
got, err := fs.DirNames(directory)
if err != nil {
t.Fatal(err)
}
if path.Clean(directory) == "/" {
filenames = append(filenames, ".stfolder")
}
sort.Strings(filenames)
sort.Strings(got)
if len(filenames) != len(got) {
t.Errorf("want %s, got %s", filenames, got)
return
}
for i := range filenames {
if filenames[i] != got[i] {
t.Errorf("want %s, got %s", filenames, got)
return
}
}
}
func testFakeFSStatInsens(t *testing.T, fs Filesystem) {
// this is to test that neither fs.Stat nor fd.Stat changes the filename,
// both in the directory listing and in previous Stat results
fd1, err := fs.Create("aAa")
if err != nil {
t.Fatal(err)
}
defer fd1.Close()
info1, err := fs.Stat("AAA")
if err != nil {
t.Fatal(err)
}
if _, err = fs.Stat("AaA"); err != nil {
t.Fatal(err)
}
info2, err := fd1.Stat()
if err != nil {
t.Fatal(err)
}
fd2, err := fs.Open("aaa")
if err != nil {
t.Fatal(err)
}
defer fd2.Close()
if _, err = fd2.Stat(); err != nil {
t.Fatal(err)
}
if info1.Name() != "AAA" {
t.Errorf("want AAA, got %s", info1.Name())
}
if info2.Name() != "aAa" {
t.Errorf("want aAa, got %s", info2.Name())
}
assertDir(t, fs, "/", []string{"aAa"})
}
func testFakeFSFileName(t *testing.T, fs Filesystem) {
var testCases = []struct {
create string
open string
}{
{"bar", "bar"},
}
for _, testCase := range testCases {
if fd, err := fs.Create(testCase.create); err != nil {
t.Fatal(err)
} else {
fd.Close()
}
fd, err := fs.Open(testCase.open)
if err != nil {
t.Fatal(err)
}
defer fd.Close()
if got := fd.Name(); got != testCase.open {
t.Errorf("want %s, got %s", testCase.open, got)
}
}
}
func testFakeFSFileNameInsens(t *testing.T, fs Filesystem) {
var testCases = []struct {
create string
open string
}{
{"BaZ", "bAz"},
}
for _, testCase := range testCases {
fd, err := fs.Create(testCase.create)
if err != nil {
t.Fatal(err)
}
fd.Close()
fd, err = fs.Open(testCase.open)
if err != nil {
t.Fatal(err)
}
defer fd.Close()
if got := fd.Name(); got != testCase.open {
t.Errorf("want %s, got %s", testCase.open, got)
}
}
}
func testFakeFSRename(t *testing.T, fs Filesystem) {
if err := fs.MkdirAll("/foo/bar/baz", 0755); err != nil {
t.Fatal(err)
}
fd, err := fs.Create("/foo/bar/baz/qux")
if err != nil {
t.Fatal(err)
}
fd.Close()
if err := fs.Rename("/foo/bar/baz/qux", "/foo/notthere/qux"); err == nil {
t.Errorf("rename to non-existent dir gave no error")
}
if err := fs.MkdirAll("/baz/bar/foo", 0755); err != nil {
t.Fatal(err)
}
if err := fs.Rename("/foo/bar/baz/qux", "/baz/bar/foo/qux"); err != nil {
t.Fatal(err)
}
var dirs = []struct {
dir string
files []string
}{
{dir: "/", files: []string{"foo", "baz"}},
{dir: "/foo", files: []string{"bar"}},
{dir: "/foo/bar/baz", files: []string{}},
{dir: "/baz/bar/foo", files: []string{"qux"}},
}
for _, dir := range dirs {
assertDir(t, fs, dir.dir, dir.files)
}
if err := fs.Rename("/baz/bar/foo", "/baz/bar/FOO"); err != nil {
t.Fatal(err)
}
assertDir(t, fs, "/baz/bar", []string{"FOO"})
assertDir(t, fs, "/baz/bar/FOO", []string{"qux"})
}
func testFakeFSRenameInsensitive(t *testing.T, fs Filesystem) {
if err := fs.MkdirAll("/baz/bar/foo", 0755); err != nil {
t.Fatal(err)
}
if err := fs.MkdirAll("/foO/baR/baZ", 0755); err != nil {
t.Fatal(err)
}
fd, err := fs.Create("/BAZ/BAR/FOO/QUX")
if err != nil {
t.Fatal(err)
}
fd.Close()
if err := fs.Rename("/Baz/bAr/foO/QuX", "/Foo/Bar/Baz/qUUx"); err != nil {
t.Fatal(err)
}
var dirs = []struct {
dir string
files []string
}{
{dir: "/", files: []string{"foO", "baz"}},
{dir: "/foo", files: []string{"baR"}},
{dir: "/foo/bar/baz", files: []string{"qUUx"}},
{dir: "/baz/bar/foo", files: []string{}},
}
for _, dir := range dirs {
assertDir(t, fs, dir.dir, dir.files)
}
// not checking on darwin due to https://github.com/golang/go/issues/35222
if runtime.GOOS != "darwin" {
if err := fs.Rename("/foo/bar/BAZ", "/FOO/BAR/bAz"); err != nil {
t.Errorf("Could not perform in-place case-only directory rename: %s", err)
}
assertDir(t, fs, "/foo/bar", []string{"bAz"})
assertDir(t, fs, "/fOO/bAr/baz", []string{"qUUx"})
}
if err := fs.Rename("foo/bar/baz/quux", "foo/bar/BaZ/Quux"); err != nil {
t.Errorf("File rename failed: %s", err)
}
assertDir(t, fs, "/FOO/BAR/BAZ", []string{"Quux"})
}
func testFakeFSMkdir(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/foo", 0755); err != nil {
t.Fatal(err)
}
if _, err := fs.Stat("/foo"); err != nil {
t.Fatal(err)
}
if err := fs.Mkdir("/foo", 0755); err == nil {
t.Errorf("got no error while creating existing directory")
}
}
func testFakeFSMkdirInsens(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/foo", 0755); err != nil {
t.Fatal(err)
}
if _, err := fs.Stat("/Foo"); err != nil {
t.Fatal(err)
}
if err := fs.Mkdir("/FOO", 0755); err == nil {
t.Errorf("got no error while creating existing directory")
}
}
func testFakeFSOpenFile(t *testing.T, fs Filesystem) {
fd, err := fs.OpenFile("foobar", os.O_RDONLY, 0664)
if err == nil {
fd.Close()
t.Fatalf("got no error opening a non-existing file")
}
fd, err = fs.OpenFile("foobar", os.O_RDWR|os.O_CREATE, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
fd, err = fs.OpenFile("foobar", os.O_RDWR|os.O_CREATE|os.O_EXCL, 0664)
if err == nil {
fd.Close()
t.Fatalf("created an existing file while told not to")
}
fd, err = fs.OpenFile("foobar", os.O_RDWR|os.O_CREATE, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
fd, err = fs.OpenFile("foobar", os.O_RDWR, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
}
func testFakeFSOpenFileInsens(t *testing.T, fs Filesystem) {
fd, err := fs.OpenFile("FooBar", os.O_RDONLY, 0664)
if err == nil {
fd.Close()
t.Fatalf("got no error opening a non-existing file")
}
fd, err = fs.OpenFile("fOObar", os.O_RDWR|os.O_CREATE, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
fd, err = fs.OpenFile("fOoBaR", os.O_RDWR|os.O_CREATE|os.O_EXCL, 0664)
if err == nil {
fd.Close()
t.Fatalf("created an existing file while told not to")
}
fd, err = fs.OpenFile("FoObAr", os.O_RDWR|os.O_CREATE, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
fd, err = fs.OpenFile("FOOBAR", os.O_RDWR, 0664)
if err != nil {
t.Fatal(err)
}
fd.Close()
}
func testFakeFSRemoveAll(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/foo", 0755); err != nil {
t.Fatal(err)
}
filenames := []string{"bar", "baz", "qux"}
for _, filename := range filenames {
fd, err := fs.Create("/foo/" + filename)
if err != nil {
t.Fatalf("Could not create %s: %s", filename, err)
} else {
fd.Close()
}
}
if err := fs.RemoveAll("/foo"); err != nil {
t.Fatal(err)
}
if _, err := fs.Stat("/foo"); err == nil {
t.Errorf("this should be an error, as file doesn not exist anymore")
}
if err := fs.RemoveAll("/foo/bar"); err != nil {
t.Errorf("real systems don't return error here")
}
}
func testFakeFSRemoveAllInsens(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/Foo", 0755); err != nil {
t.Fatal(err)
}
filenames := []string{"bar", "baz", "qux"}
for _, filename := range filenames {
fd, err := fs.Create("/FOO/" + filename)
if err != nil {
t.Fatalf("Could not create %s: %s", filename, err)
}
fd.Close()
}
if err := fs.RemoveAll("/fOo"); err != nil {
t.Errorf("Could not remove dir: %s", err)
}
if _, err := fs.Stat("/foo"); err == nil {
t.Errorf("this should be an error, as file doesn not exist anymore")
}
if err := fs.RemoveAll("/foO/bAr"); err != nil {
t.Errorf("real systems don't return error here")
}
}
func testFakeFSRemove(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/Foo", 0755); err != nil {
t.Fatal(err)
}
fd, err := fs.Create("/Foo/Bar")
if err != nil {
t.Fatal(err)
} else {
fd.Close()
}
if err := fs.Remove("/Foo"); err == nil {
t.Errorf("not empty, should give error")
}
if err := fs.Remove("/Foo/Bar"); err != nil {
t.Fatal(err)
}
if err := fs.Remove("/Foo"); err != nil {
t.Fatal(err)
}
}
func testFakeFSRemoveInsens(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/Foo", 0755); err != nil {
t.Fatal(err)
}
fd, err := fs.Create("/Foo/Bar")
if err != nil {
t.Fatal(err)
}
fd.Close()
if err := fs.Remove("/FOO"); err == nil {
t.Errorf("not empty, should give error")
}
if err := fs.Remove("/Foo/BaR"); err != nil {
t.Fatal(err)
}
if err := fs.Remove("/FoO"); err != nil {
t.Fatal(err)
}
}
func testFakeFSSameFile(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/Foo", 0755); err != nil {
t.Fatal(err)
}
filenames := []string{"Bar", "Baz", "/Foo/Bar"}
for _, filename := range filenames {
if fd, err := fs.Create(filename); err != nil {
t.Fatalf("Could not create %s: %s", filename, err)
} else {
fd.Close()
if runtime.GOOS == "windows" {
time.Sleep(1 * time.Millisecond)
}
}
}
testCases := []struct {
f1 string
f2 string
want bool
}{
{"Bar", "Baz", false},
{"Bar", "/Foo/Bar", false},
{"Bar", "Bar", true},
}
for _, test := range testCases {
assertSameFile(t, fs, test.f1, test.f2, test.want)
}
}
func testFakeFSSameFileInsens(t *testing.T, fs Filesystem) {
if err := fs.Mkdir("/Foo", 0755); err != nil {
t.Fatal(err)
}
filenames := []string{"Bar", "Baz"}
for _, filename := range filenames {
fd, err := fs.Create(filename)
if err != nil {
t.Errorf("Could not create %s: %s", filename, err)
}
fd.Close()
}
testCases := []struct {
f1 string
f2 string
want bool
}{
{"bAr", "baZ", false},
{"baz", "BAZ", true},
}
for _, test := range testCases {
assertSameFile(t, fs, test.f1, test.f2, test.want)
}
}
func assertSameFile(t *testing.T, fs Filesystem, f1, f2 string, want bool) {
t.Helper()
fi1, err := fs.Stat(f1)
if err != nil {
t.Fatal(err)
}
fi2, err := fs.Stat(f2)
if err != nil {
t.Fatal(err)
}
got := fs.SameFile(fi1, fi2)
if got != want {
t.Errorf("for \"%s\" and \"%s\" want SameFile %v, got %v", f1, f2, want, got)
}
}
func testFakeFSCreateInsens(t *testing.T, fs Filesystem) {
fd1, err := fs.Create("FOO")
if err != nil {
t.Fatal(err)
}
defer fd1.Close()
fd2, err := fs.Create("fOo")
if err != nil {
t.Fatal(err)
}
defer fd2.Close()
if fd1.Name() != "FOO" {
t.Errorf("name of the file created as \"FOO\" is %s", fd1.Name())
}
if fd2.Name() != "fOo" {
t.Errorf("name of created file \"fOo\" is %s", fd2.Name())
}
// one would expect DirNames to show the last variant, but in fact it shows
// the original one
assertDir(t, fs, "/", []string{"FOO"})
}
func cleanup(fs Filesystem) error {
filenames, _ := fs.DirNames("/")
for _, filename := range filenames {
if filename != ".stfolder" {
if err := fs.RemoveAll(filename); err != nil {
return err
}
}
}
return nil
}

View File

@@ -38,6 +38,8 @@ func TestUnicodeLowercase(t *testing.T) {
{"汉语/漢語 or 中文", "汉语/漢語 or 中文"},
// Neither katakana, as far as I can tell.
{"チャーハン", "チャーハン"},
// Some special unicode characters, however, are folded by OSes
{"\u212A", "k"},
}
for _, tc := range cases {
res := UnicodeLowercase(tc[0])
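U+212A KELVIN SIGN carries a simple lowercase mapping to the ASCII 'k' in the Unicode tables, which Go's unicode package applies; a standalone check:
package main

import (
	"fmt"
	"unicode"
)

func main() {
	fmt.Printf("%c -> %c\n", '\u212A', unicode.ToLower('\u212A')) // K -> k
}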

View File

@@ -12,9 +12,9 @@ import (
// The database is where we store the virtual mtimes
type database interface {
Bytes(key string) (data []byte, ok bool)
PutBytes(key string, data []byte)
Delete(key string)
Bytes(key string) (data []byte, ok bool, err error)
PutBytes(key string, data []byte) error
Delete(key string) error
}
// The MtimeFS is a filesystem with nanosecond mtime precision, regardless
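A plain map can never fail, but the new error returns matter once a store does real I/O. A sketch of a database implementation where every method can genuinely error (hypothetical; assumes flat keys without path separators):
package mtimestore

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

type fileStore struct{ dir string }

func (s fileStore) PutBytes(key string, data []byte) error {
	return ioutil.WriteFile(filepath.Join(s.dir, key), data, 0644)
}

func (s fileStore) Bytes(key string) ([]byte, bool, error) {
	data, err := ioutil.ReadFile(filepath.Join(s.dir, key))
	if os.IsNotExist(err) {
		return nil, false, nil // absence is not an error
	}
	if err != nil {
		return nil, false, err
	}
	return data, true, nil
}

func (s fileStore) Delete(key string) error {
	if err := os.Remove(filepath.Join(s.dir, key)); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}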
@@ -72,7 +72,10 @@ func (f *MtimeFS) Stat(name string) (FileInfo, error) {
return nil, err
}
real, virtual := f.load(name)
real, virtual, err := f.load(name)
if err != nil {
return nil, err
}
if real == info.ModTime() {
info = mtimeFileInfo{
FileInfo: info,
@@ -89,7 +92,10 @@ func (f *MtimeFS) Lstat(name string) (FileInfo, error) {
return nil, err
}
real, virtual := f.load(name)
real, virtual, err := f.load(name)
if err != nil {
return nil, err
}
if real == info.ModTime() {
info = mtimeFileInfo{
FileInfo: info,
@@ -103,7 +109,11 @@ func (f *MtimeFS) Lstat(name string) (FileInfo, error) {
func (f *MtimeFS) Walk(root string, walkFn WalkFunc) error {
return f.Filesystem.Walk(root, func(path string, info FileInfo, err error) error {
if info != nil {
real, virtual := f.load(path)
real, virtual, loadErr := f.load(path)
if loadErr != nil && err == nil {
// The iterator gets to deal with the error
err = loadErr
}
if real == info.ModTime() {
info = mtimeFileInfo{
FileInfo: info,
@@ -162,22 +172,24 @@ func (f *MtimeFS) save(name string, real, virtual time.Time) {
f.db.PutBytes(name, bs)
}
func (f *MtimeFS) load(name string) (real, virtual time.Time) {
func (f *MtimeFS) load(name string) (real, virtual time.Time, err error) {
if f.caseInsensitive {
name = UnicodeLowercase(name)
}
data, exists := f.db.Bytes(name)
if !exists {
return
data, exists, err := f.db.Bytes(name)
if err != nil {
return time.Time{}, time.Time{}, err
} else if !exists {
return time.Time{}, time.Time{}, nil
}
var mtime dbMtime
if err := mtime.Unmarshal(data); err != nil {
return
return time.Time{}, time.Time{}, err
}
return mtime.real, mtime.virtual
return mtime.real, mtime.virtual, nil
}
// The mtimeFileInfo is an os.FileInfo that lies about the ModTime().
@@ -202,7 +214,10 @@ func (f *mtimeFile) Stat() (FileInfo, error) {
return nil, err
}
real, virtual := f.fs.load(f.Name())
real, virtual, err := f.fs.load(f.Name())
if err != nil {
return nil, err
}
if real == info.ModTime() {
info = mtimeFileInfo{
FileInfo: info,

View File

@@ -236,17 +236,19 @@ func TestMtimeFSInsensitive(t *testing.T) {
type mapStore map[string][]byte
func (s mapStore) PutBytes(key string, data []byte) {
func (s mapStore) PutBytes(key string, data []byte) error {
s[key] = data
return nil
}
func (s mapStore) Bytes(key string) (data []byte, ok bool) {
func (s mapStore) Bytes(key string) (data []byte, ok bool, err error) {
data, ok = s[key]
return
}
func (s mapStore) Delete(key string) {
func (s mapStore) Delete(key string) error {
delete(s, key)
return nil
}
// failChtimes does nothing, and fails

View File

@@ -18,6 +18,8 @@ import (
"time"
"github.com/gobwas/glob"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/sync"
@@ -30,6 +32,14 @@ const (
resultFoldCase = 1 << iota
)
var defaultResult Result = resultInclude
func init() {
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
defaultResult |= resultFoldCase
}
}
type Pattern struct {
pattern string
match glob.Glob
@@ -50,6 +60,25 @@ func (p Pattern) String() string {
return ret
}
func (p Pattern) allowsSkippingIgnoredDirs() bool {
if p.result.IsIgnored() {
return true
}
if p.pattern[0] != '/' {
return false
}
// Double asterisk everywhere in the path except at the end is bad
if strings.Contains(strings.TrimSuffix(p.pattern, "**"), "**") {
return false
}
// Any wildcards anywhere except for the last path component are bad
lastSep := strings.LastIndex(p.pattern, "/")
if lastSep == -1 {
return true
}
return p.pattern[:lastSep] == glob.QuoteMeta(p.pattern[:lastSep])
}
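In short: ignore patterns always allow skipping, while include patterns ('!' lines) only do so when rooted and with wildcards confined to the last path component. A few cases, mirroring the test table further down this diff:
// !/t?t         -> true  (rooted, wildcard only in the last component)
// !/parent/t*t  -> true
// !/pa*nt/test  -> false (wildcard in a parent component)
// !/**.mp3      -> false (double asterisk not at the very end)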
type Result uint8
func (r Result) IsIgnored() bool {
@@ -172,11 +201,20 @@ func (m *Matcher) parseLocked(r io.Reader, file string) error {
}
m.skipIgnoredDirs = true
var previous string
for _, p := range patterns {
if !p.result.IsIgnored() {
// We automatically add patterns with a /** suffix, which normally
// means that we cannot skip directories. However if the same
// pattern without the /** already exists (which is true for
// automatically added patterns) we can skip.
if l := len(p.pattern); l > 3 && p.pattern[:len(p.pattern)-3] == previous {
continue
}
if !p.allowsSkippingIgnoredDirs() {
m.skipIgnoredDirs = false
break
}
previous = p.pattern
}
m.curHash = newHash
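As a sketch of what the previous-pattern check catches, assume an include line !/foo in .stignore; parsing also emits the auto-added !/foo/** so the directory's contents stay included:
// previous  = "/foo"    (pattern text after the '!' prefix is stripped)
// p.pattern = "/foo/**" -> p.pattern[:len(p.pattern)-3] == previous -> skipped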
@@ -358,87 +396,90 @@ func loadParseIncludeFile(filesystem fs.Filesystem, file string, cd ChangeDetect
return patterns, err
}
func parseLine(line string) ([]Pattern, error) {
pattern := Pattern{
result: defaultResult,
}
// Allow prefixes to be specified in any order, but only once.
var seenPrefix [3]bool
for {
if strings.HasPrefix(line, "!") && !seenPrefix[0] {
seenPrefix[0] = true
line = line[1:]
pattern.result ^= resultInclude
} else if strings.HasPrefix(line, "(?i)") && !seenPrefix[1] {
seenPrefix[1] = true
pattern.result |= resultFoldCase
line = line[4:]
} else if strings.HasPrefix(line, "(?d)") && !seenPrefix[2] {
seenPrefix[2] = true
pattern.result |= resultDeletable
line = line[4:]
} else {
break
}
}
if pattern.result.IsCaseFolded() {
line = strings.ToLower(line)
}
pattern.pattern = line
var err error
if strings.HasPrefix(line, "/") {
// Pattern is rooted in the current dir only
pattern.match, err = glob.Compile(line[1:], '/')
return []Pattern{pattern}, err
}
patterns := make([]Pattern, 2)
if strings.HasPrefix(line, "**/") {
// Add the pattern as is, and without **/ so it matches in current dir
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return nil, err
}
patterns[0] = pattern
line = line[3:]
pattern.pattern = line
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return nil, err
}
patterns[1] = pattern
return patterns, nil
}
// Path name or pattern, add it so it matches files both in
// current directory and subdirs.
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return nil, err
}
patterns[0] = pattern
line = "**/" + line
pattern.pattern = line
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return nil, err
}
patterns[1] = pattern
return patterns, nil
}
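The upshot for a non-rooted line such as tmp is two compiled globs: the line as-is for the folder root plus a **/ variant for subdirectories. A standalone illustration with the same glob library (hypothetical snippet, not part of the diff):
package main

import (
	"fmt"

	"github.com/gobwas/glob"
)

func main() {
	root := glob.MustCompile("tmp", '/')    // matches in the folder root
	deep := glob.MustCompile("**/tmp", '/') // matches in any subdirectory
	fmt.Println(root.Match("tmp"), deep.Match("a/b/tmp")) // true true
}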
func parseIgnoreFile(fs fs.Filesystem, fd io.Reader, currentFile string, cd ChangeDetector, linesSeen map[string]struct{}) ([]string, []Pattern, error) {
var lines []string
var patterns []Pattern
defaultResult := resultInclude
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
defaultResult |= resultFoldCase
}
addPattern := func(line string) error {
pattern := Pattern{
result: defaultResult,
}
// Allow prefixes to be specified in any order, but only once.
var seenPrefix [3]bool
for {
if strings.HasPrefix(line, "!") && !seenPrefix[0] {
seenPrefix[0] = true
line = line[1:]
pattern.result ^= resultInclude
} else if strings.HasPrefix(line, "(?i)") && !seenPrefix[1] {
seenPrefix[1] = true
pattern.result |= resultFoldCase
line = line[4:]
} else if strings.HasPrefix(line, "(?d)") && !seenPrefix[2] {
seenPrefix[2] = true
pattern.result |= resultDeletable
line = line[4:]
} else {
break
}
}
if pattern.result.IsCaseFolded() {
line = strings.ToLower(line)
}
pattern.pattern = line
var err error
if strings.HasPrefix(line, "/") {
// Pattern is rooted in the current dir only
pattern.match, err = glob.Compile(line[1:], '/')
if err != nil {
return fmt.Errorf("invalid pattern %q in ignore file (%v)", line, err)
}
patterns = append(patterns, pattern)
} else if strings.HasPrefix(line, "**/") {
// Add the pattern as is, and without **/ so it matches in current dir
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return fmt.Errorf("invalid pattern %q in ignore file (%v)", line, err)
}
patterns = append(patterns, pattern)
line = line[3:]
pattern.pattern = line
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return fmt.Errorf("invalid pattern %q in ignore file (%v)", line, err)
}
patterns = append(patterns, pattern)
} else {
// Path name or pattern, add it so it matches files both in
// current directory and subdirs.
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return fmt.Errorf("invalid pattern %q in ignore file (%v)", line, err)
}
patterns = append(patterns, pattern)
line := "**/" + line
pattern.pattern = line
pattern.match, err = glob.Compile(line, '/')
if err != nil {
return fmt.Errorf("invalid pattern %q in ignore file (%v)", line, err)
}
patterns = append(patterns, pattern)
newPatterns, err := parseLine(line)
if err != nil {
return errors.Wrapf(err, "invalid pattern %q in ignore file", line)
}
patterns = append(patterns, newPatterns...)
return nil
}
@@ -504,6 +545,14 @@ func parseIgnoreFile(fs fs.Filesystem, fd io.Reader, currentFile string, cd Chan
// WriteIgnores is a convenience function to avoid code duplication
func WriteIgnores(filesystem fs.Filesystem, path string, content []string) error {
if len(content) == 0 {
err := filesystem.Remove(path)
if fs.IsNotExist(err) {
return nil
}
return err
}
fd, err := osutil.CreateAtomicFilesystem(filesystem, path)
if err != nil {
return err
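With this change, saving an empty pattern list removes .stignore instead of leaving an empty file behind; a missing file is not an error. Hypothetical caller:
// Passing no content deletes the ignore file if it exists.
if err := ignore.WriteIgnores(filesystem, ".stignore", nil); err != nil {
	l.Warnln("Failed to save .stignore:", err)
}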

View File

@@ -1098,3 +1098,55 @@ func TestPartialIncludeLine(t *testing.T) {
}
}
}
func TestSkipIgnoredDirs(t *testing.T) {
tcs := []struct {
pattern string
expected bool
}{
{`!/test`, true},
{`!/t[eih]t`, true},
{`!/t*t`, true},
{`!/t?t`, true},
{`!/**`, true},
{`!/parent/test`, true},
{`!/parent/t[eih]t`, true},
{`!/parent/t*t`, true},
{`!/parent/t?t`, true},
{`!/**.mp3`, false},
{`!/pa*nt/test`, false},
{`!/pa[sdf]nt/t[eih]t`, false},
{`!/lowest/pa[sdf]nt/test`, false},
{`!/lo*st/parent/test`, false},
{`/pa*nt/test`, true},
{`test`, true},
{`*`, true},
}
for _, tc := range tcs {
pats, err := parseLine(tc.pattern)
if err != nil {
t.Error(err)
}
for _, pat := range pats {
if got := pat.allowsSkippingIgnoredDirs(); got != tc.expected {
t.Errorf(`Pattern "%v": got %v, expected %v`, pat, got, tc.expected)
}
}
}
pats := New(fs.NewFilesystem(fs.FilesystemTypeBasic, "testdata"), WithCache(true))
stignore := `
/foo/ign*
!/f*
!/bar
*
`
if err := pats.Parse(bytes.NewBufferString(stignore), ".stignore"); err != nil {
t.Fatal(err)
}
if !pats.SkipIgnoredDirs() {
t.Error("SkipIgnoredDirs should be true")
}
}

View File

@@ -32,8 +32,8 @@ type fakeConnection struct {
fileData map[string][]byte
folder string
model *model
indexFn func(string, []protocol.FileInfo)
requestFn func(folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error)
indexFn func(context.Context, string, []protocol.FileInfo)
requestFn func(ctx context.Context, folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error)
closeFn func(error)
mut sync.Mutex
}
@@ -64,29 +64,29 @@ func (f *fakeConnection) Option(string) string {
return ""
}
func (f *fakeConnection) Index(folder string, fs []protocol.FileInfo) error {
func (f *fakeConnection) Index(ctx context.Context, folder string, fs []protocol.FileInfo) error {
f.mut.Lock()
defer f.mut.Unlock()
if f.indexFn != nil {
f.indexFn(folder, fs)
f.indexFn(ctx, folder, fs)
}
return nil
}
func (f *fakeConnection) IndexUpdate(folder string, fs []protocol.FileInfo) error {
func (f *fakeConnection) IndexUpdate(ctx context.Context, folder string, fs []protocol.FileInfo) error {
f.mut.Lock()
defer f.mut.Unlock()
if f.indexFn != nil {
f.indexFn(folder, fs)
f.indexFn(ctx, folder, fs)
}
return nil
}
func (f *fakeConnection) Request(folder, name string, offset int64, size int, hash []byte, weakHash uint32, fromTemporary bool) ([]byte, error) {
func (f *fakeConnection) Request(ctx context.Context, folder, name string, offset int64, size int, hash []byte, weakHash uint32, fromTemporary bool) ([]byte, error) {
f.mut.Lock()
defer f.mut.Unlock()
if f.requestFn != nil {
return f.requestFn(folder, name, offset, size, hash, fromTemporary)
return f.requestFn(ctx, folder, name, offset, size, hash, fromTemporary)
}
return f.fileData[name], nil
}
@@ -109,7 +109,7 @@ func (f *fakeConnection) Statistics() protocol.Statistics {
return protocol.Statistics{}
}
func (f *fakeConnection) DownloadProgress(folder string, updates []protocol.FileDownloadProgressUpdate) {
func (f *fakeConnection) DownloadProgress(_ context.Context, folder string, updates []protocol.FileDownloadProgressUpdate) {
f.downloadProgressMessages = append(f.downloadProgressMessages, downloadProgressMessage{
folder: folder,
updates: updates,

View File

@@ -15,6 +15,8 @@ import (
"sync/atomic"
"time"
"github.com/pkg/errors"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/events"
@@ -47,7 +49,6 @@ type folder struct {
fset *db.FileSet
ignores *ignore.Matcher
ctx context.Context
cancel context.CancelFunc
scanInterval time.Duration
scanTimer *time.Timer
@@ -78,8 +79,6 @@ type puller interface {
}
func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg config.FolderConfiguration, evLogger events.Logger) folder {
ctx, cancel := context.WithCancel(context.Background())
return folder{
stateTracker: newStateTracker(cfg.ID, evLogger),
FolderConfiguration: cfg,
@@ -89,8 +88,6 @@ func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg conf
shortID: model.shortID,
fset: fset,
ignores: ignores,
ctx: ctx,
cancel: cancel,
scanInterval: time.Duration(cfg.RescanIntervalS) * time.Second,
scanTimer: time.NewTimer(time.Millisecond), // The first scan should be done immediately.
@@ -107,10 +104,12 @@ func newFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher, cfg conf
}
}
func (f *folder) serve(_ chan struct{}) {
func (f *folder) serve(ctx context.Context) {
atomic.AddInt32(&f.model.foldersRunning, 1)
defer atomic.AddInt32(&f.model.foldersRunning, -1)
f.ctx = ctx
l.Debugln(f, "starting")
defer l.Debugln(f, "exiting")
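The folder now receives its context from the service machinery instead of creating its own; cancelling that context replaces the explicit Stop below. A minimal sketch of an adapter in the spirit of util.AsService (assumed shape, details may differ from lib/util):
package svcutil

import "context"

type svc struct {
	fn     func(ctx context.Context)
	name   string
	ctx    context.Context
	cancel context.CancelFunc
}

func asService(fn func(ctx context.Context), name string) *svc {
	ctx, cancel := context.WithCancel(context.Background())
	return &svc{fn: fn, name: name, ctx: ctx, cancel: cancel}
}

func (s *svc) Serve()         { s.fn(s.ctx) } // runs until the context is cancelled
func (s *svc) Stop()          { s.cancel() }
func (s *svc) String() string { return s.name }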
@@ -254,11 +253,6 @@ func (f *folder) Delay(next time.Duration) {
f.scanDelay <- next
}
func (f *folder) Stop() {
f.cancel()
f.Service.Stop()
}
// CheckHealth checks the folder for common errors, updates the folder state
// and returns the current folder error, or nil if the folder is healthy.
func (f *folder) CheckHealth() error {
@@ -278,7 +272,7 @@ func (f *folder) getHealthError() error {
dbPath := locations.Get(locations.Database)
if usage, err := fs.NewFilesystem(fs.FilesystemTypeBasic, dbPath).Usage("."); err == nil {
if err = config.CheckFreeSpace(f.model.cfg.Options().MinHomeDiskFree, usage); err != nil {
return fmt.Errorf("insufficient space on disk for database (%v): %v", dbPath, err)
return errors.Wrapf(err, "insufficient space on disk for database (%v)", dbPath)
}
}
@@ -297,7 +291,7 @@ func (f *folder) scanSubdirs(subDirs []string) error {
oldHash := f.ignores.Hash()
if err := f.ignores.Load(".stignore"); err != nil && !fs.IsNotExist(err) {
err = fmt.Errorf("loading ignores: %v", err)
err = errors.Wrap(err, "loading ignores")
f.setError(err)
return err
}
@@ -641,7 +635,7 @@ func (f *folder) monitorWatch(ctx context.Context) {
failTimer.Reset(time.Minute)
continue
}
watchaggregator.Aggregate(eventChan, f.watchChan, f.FolderConfiguration, f.model.cfg, f.evLogger, aggrCtx)
watchaggregator.Aggregate(aggrCtx, eventChan, f.watchChan, f.FolderConfiguration, f.model.cfg, f.evLogger)
l.Debugln("Started filesystem watcher for folder", f.Description())
case err = <-errChan:
f.setWatchError(err)

View File

@@ -16,6 +16,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/scanner"
@@ -54,7 +55,7 @@ func TestRecvOnlyRevertDeletes(t *testing.T) {
// Start the folder. This will cause a scan, should discover the other stuff in the folder
m.StartFolder("ro")
m.startFolder("ro")
m.ScanFolder("ro")
// We should now have two files and two directories.
@@ -125,7 +126,7 @@ func TestRecvOnlyRevertNeeds(t *testing.T) {
// Start the folder. This will cause a scan.
m.StartFolder("ro")
m.startFolder("ro")
m.ScanFolder("ro")
// Everything should be in sync.
@@ -221,7 +222,7 @@ func TestRecvOnlyUndoChanges(t *testing.T) {
// Start the folder. This will cause a scan.
m.StartFolder("ro")
m.startFolder("ro")
m.ScanFolder("ro")
// Everything should be in sync.
@@ -316,9 +317,15 @@ func setupROFolder() (*model, *sendOnlyFolder) {
fcfg.Type = config.FolderTypeReceiveOnly
w.SetFolder(fcfg)
m := newModel(w, myID, "syncthing", "dev", db.OpenMemory(), nil)
m.AddFolder(fcfg)
m := newModel(w, myID, "syncthing", "dev", db.NewLowlevel(backend.OpenMemory()), nil)
m.ServeBackground()
// Folder should only be added, not started.
m.removeFolder(fcfg)
m.addFolder(fcfg)
m.fmut.RLock()
f := &sendOnlyFolder{
folder: folder{
stateTracker: newStateTracker(fcfg.ID, m.evLogger),
@@ -326,8 +333,7 @@ func setupROFolder() (*model, *sendOnlyFolder) {
FolderConfiguration: fcfg,
},
}
m.ServeBackground()
m.fmut.RUnlock()
return m, f
}

View File

@@ -30,7 +30,7 @@ func newSendOnlyFolder(model *model, fset *db.FileSet, ignores *ignore.Matcher,
folder: newFolder(model, fset, ignores, cfg, evLogger),
}
f.folder.puller = f
f.folder.Service = util.AsService(f.serve)
f.folder.Service = util.AsService(f.serve, f.String())
return f
}

View File

@@ -104,7 +104,8 @@ type sendReceiveFolder struct {
queue *jobQueue
pullErrors map[string]string // path -> error string
pullErrors map[string]string // errors for most recent/current iteration
oldPullErrors map[string]string // errors from previous iterations for log filtering only
pullErrorsMut sync.Mutex
}
@@ -117,7 +118,7 @@ func newSendReceiveFolder(model *model, fset *db.FileSet, ignores *ignore.Matche
pullErrorsMut: sync.NewMutex(),
}
f.folder.puller = f
f.folder.Service = util.AsService(f.serve)
f.folder.Service = util.AsService(f.serve, f.String())
if f.Copiers == 0 {
f.Copiers = defaultCopiers
@@ -169,15 +170,13 @@ func (f *sendReceiveFolder) pull() bool {
}
}()
if err := f.ignores.Load(".stignore"); err != nil && !fs.IsNotExist(err) {
err = fmt.Errorf("loading ignores: %v", err)
err = errors.Wrap(err, "loading ignores")
f.setError(err)
return false
}
l.Debugf("%v pulling", f)
f.clearPullErrors()
scanChan := make(chan string)
go f.pullScannerRoutine(scanChan)
@@ -186,6 +185,8 @@ func (f *sendReceiveFolder) pull() bool {
f.setState(FolderIdle)
}()
changed := 0
for tries := 0; tries < maxPullerIterations; tries++ {
select {
case <-f.ctx.Done():
@@ -197,30 +198,30 @@ func (f *sendReceiveFolder) pull() bool {
// it to FolderSyncing during the last iteration.
f.setState(FolderSyncPreparing)
changed := f.pullerIteration(scanChan)
changed = f.pullerIteration(scanChan)
l.Debugln(f, "changed", changed, "on try", tries+1)
if changed == 0 {
// No files were changed by the puller, so we are in
// sync. Any errors were just transitional.
f.clearPullErrors()
return true
// sync (except for unrecoverable stuff like invalid
// filenames on windows).
break
}
}
// We've tried a bunch of times to get in sync, but
// we're not making it. Probably there are write
// errors preventing us. Flag this with a warning and
// wait a bit longer before retrying.
if errors := f.Errors(); len(errors) > 0 {
f.pullErrorsMut.Lock()
pullErrNum := len(f.pullErrors)
f.pullErrorsMut.Unlock()
if pullErrNum > 0 {
l.Infof("%v: Failed to sync %v items", f.Description(), pullErrNum)
f.evLogger.Log(events.FolderErrors, map[string]interface{}{
"folder": f.folderID,
"errors": errors,
"errors": f.Errors(),
})
}
return false
return changed == 0
}
// pullerIteration runs a single puller iteration for the given folder and
@@ -228,6 +229,11 @@ func (f *sendReceiveFolder) pull() bool {
// might have failed). One puller iteration handles all files currently
// flagged as needed in the folder.
func (f *sendReceiveFolder) pullerIteration(scanChan chan<- string) int {
f.pullErrorsMut.Lock()
f.oldPullErrors = f.pullErrors
f.pullErrors = make(map[string]string)
f.pullErrorsMut.Unlock()
pullChan := make(chan pullBlockState)
copyChan := make(chan copyBlocksState)
finisherChan := make(chan *sharedPullerState)
@@ -292,6 +298,10 @@ func (f *sendReceiveFolder) pullerIteration(scanChan chan<- string) int {
close(dbUpdateChan)
updateWg.Wait()
f.pullErrorsMut.Lock()
f.oldPullErrors = nil
f.pullErrorsMut.Unlock()
return changed
}
@@ -315,20 +325,19 @@ func (f *sendReceiveFolder) processNeeded(dbUpdateChan chan<- dbUpdateJob, copyC
}
if f.IgnoreDelete && intf.IsDeleted() {
f.resetPullError(intf.FileName())
l.Debugln(f, "ignore file deletion (config)", intf.FileName())
return true
}
changed++
file := intf.(protocol.FileInfo)
switch {
case f.ignores.ShouldIgnore(file.Name):
f.resetPullError(file.Name)
file.SetIgnored(f.shortID)
l.Debugln(f, "Handling ignored file", file)
dbUpdateChan <- dbUpdateJob{file, dbUpdateInvalidate}
changed++
case runtime.GOOS == "windows" && fs.WindowsInvalidFilename(file.Name):
if file.IsDeleted() {
@@ -337,10 +346,11 @@ func (f *sendReceiveFolder) processNeeded(dbUpdateChan chan<- dbUpdateJob, copyC
// The reason we need it in the first place is that it was
// ignored at some point.
dbUpdateChan <- dbUpdateJob{file, dbUpdateDeleteFile}
changed++
} else {
// We can't pull an invalid file.
f.newPullError(file.Name, fs.ErrInvalidFilename)
// No reason to retry for this
changed--
}
case file.IsDeleted():
@@ -365,7 +375,6 @@ func (f *sendReceiveFolder) processNeeded(dbUpdateChan chan<- dbUpdateJob, copyC
f.deleteFileWithCurrent(file, df, ok, dbUpdateChan, scanChan)
}
}
changed++
case file.Type == protocol.FileInfoTypeFile:
curFile, hasCurFile := f.fset.Get(protocol.LocalDeviceID, file.Name)
@@ -380,21 +389,17 @@ func (f *sendReceiveFolder) processNeeded(dbUpdateChan chan<- dbUpdateJob, copyC
}
case runtime.GOOS == "windows" && file.IsSymlink():
f.resetPullError(file.Name)
file.SetUnsupported(f.shortID)
l.Debugln(f, "Invalidating symlink (unsupported)", file.Name)
dbUpdateChan <- dbUpdateJob{file, dbUpdateInvalidate}
changed++
case file.IsDirectory() && !file.IsSymlink():
changed++
l.Debugln(f, "Handling directory", file.Name)
if f.checkParent(file.Name, scanChan) {
f.handleDir(file, dbUpdateChan, scanChan)
}
case file.IsSymlink():
changed++
l.Debugln(f, "Handling symlink", file.Name)
if f.checkParent(file.Name, scanChan) {
f.handleSymlink(file, dbUpdateChan, scanChan)
@@ -446,8 +451,6 @@ nextFile:
break
}
f.resetPullError(fileName)
fi, ok := f.fset.GetGlobal(fileName)
if !ok {
// File is no longer in the index. Mark it as done and drop it.
@@ -490,7 +493,6 @@ nextFile:
// Remove the pending deletion (as we performed it by renaming)
delete(fileDeletions, candidate.Name)
changed++
f.queue.Done(fileName)
continue nextFile
}
@@ -499,7 +501,6 @@ nextFile:
devices := f.fset.Availability(fileName)
for _, dev := range devices {
if _, ok := f.model.Connection(dev); ok {
changed++
// Handle the file normally, by copying and pulling, etc.
f.handleFile(fi, copyChan, dbUpdateChan)
continue nextFile
@@ -520,7 +521,6 @@ func (f *sendReceiveFolder) processDeletions(fileDeletions map[string]protocol.F
default:
}
f.resetPullError(file.Name)
f.deleteFile(file, dbUpdateChan, scanChan)
}
@@ -533,7 +533,6 @@ func (f *sendReceiveFolder) processDeletions(fileDeletions map[string]protocol.F
}
dir := dirDeletions[len(dirDeletions)-i-1]
f.resetPullError(dir.Name)
l.Debugln(f, "Deleting dir", dir.Name)
f.deleteDir(dir, dbUpdateChan, scanChan)
}
@@ -545,8 +544,6 @@ func (f *sendReceiveFolder) handleDir(file protocol.FileInfo, dbUpdateChan chan<
// care not to declare another err.
var err error
f.resetPullError(file.Name)
f.evLogger.Log(events.ItemStarted, map[string]string{
"folder": f.folderID,
"item": file.Name,
@@ -701,8 +698,6 @@ func (f *sendReceiveFolder) handleSymlink(file protocol.FileInfo, dbUpdateChan c
// care not to declare another err.
var err error
f.resetPullError(file.Name)
f.evLogger.Log(events.ItemStarted, map[string]string{
"folder": f.folderID,
"item": file.Name,
@@ -823,8 +818,6 @@ func (f *sendReceiveFolder) deleteFileWithCurrent(file, cur protocol.FileInfo, h
l.Debugln(f, "Deleting file", file.Name)
f.resetPullError(file.Name)
f.evLogger.Log(events.ItemStarted, map[string]string{
"folder": f.folderID,
"item": file.Name,
@@ -1178,8 +1171,6 @@ func populateOffsets(blocks []protocol.BlockInfo) {
func (f *sendReceiveFolder) shortcutFile(file, curFile protocol.FileInfo, dbUpdateChan chan<- dbUpdateJob) {
l.Debugln(f, "taking shortcut on", file.Name)
f.resetPullError(file.Name)
f.evLogger.Log(events.ItemStarted, map[string]string{
"folder": f.folderID,
"item": file.Name,
@@ -1452,7 +1443,7 @@ func (f *sendReceiveFolder) pullBlock(state pullBlockState, out chan<- *sharedPu
select {
case <-f.ctx.Done():
state.fail(errors.Wrap(f.ctx.Err(), "folder stopped"))
return
break
default:
}
@@ -1475,7 +1466,7 @@ func (f *sendReceiveFolder) pullBlock(state pullBlockState, out chan<- *sharedPu
// leastBusy can select another device when someone else asks.
activity.using(selected)
var buf []byte
buf, lastError = f.model.requestGlobal(selected.ID, f.folderID, state.file.Name, state.block.Offset, int(state.block.Size), state.block.Hash, state.block.WeakHash, selected.FromTemporary)
buf, lastError = f.model.requestGlobal(f.ctx, selected.ID, f.folderID, state.file.Name, state.block.Offset, int(state.block.Size), state.block.Hash, state.block.WeakHash, selected.FromTemporary)
activity.done(selected)
if lastError != nil {
l.Debugln("request:", f.folderID, state.file.Name, state.block.Offset, state.block.Size, "returned error:", lastError)
@@ -1774,6 +1765,11 @@ func (f *sendReceiveFolder) moveForConflict(name, lastModBy string, scanChan cha
}
func (f *sendReceiveFolder) newPullError(path string, err error) {
if errors.Cause(err) == f.ctx.Err() {
// Error because the folder stopped - no point logging/tracking
return
}
f.pullErrorsMut.Lock()
defer f.pullErrorsMut.Unlock()
@@ -1784,26 +1780,19 @@ func (f *sendReceiveFolder) newPullError(path string, err error) {
return
}
l.Infof("Puller (folder %s, item %q): %v", f.Description(), path, err)
// Establish context to differentiate from errors while scanning.
// Use "syncing" as opposed to "pulling" as the latter might be used
// for errors occurring specifically in the puller routine.
f.pullErrors[path] = fmt.Sprintln("syncing:", err)
}
errStr := fmt.Sprintln("syncing:", err)
f.pullErrors[path] = errStr
// resetPullError removes the error at path in case there was an error on a
// previous pull iteration.
func (f *sendReceiveFolder) resetPullError(path string) {
f.pullErrorsMut.Lock()
delete(f.pullErrors, path)
f.pullErrorsMut.Unlock()
}
if oldErr, ok := f.oldPullErrors[path]; ok && oldErr == errStr {
l.Debugf("Repeat error on puller (folder %s, item %q): %v", f.Description(), path, err)
delete(f.oldPullErrors, path) // Potential repeats are now caught by f.pullErrors itself
return
}
func (f *sendReceiveFolder) clearPullErrors() {
f.pullErrorsMut.Lock()
f.pullErrors = make(map[string]string)
f.pullErrorsMut.Unlock()
l.Infof("Puller (folder %s, item %q): %v", f.Description(), path, err)
}
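Net effect of the oldPullErrors handoff: an item failing with the same error on consecutive iterations is logged at info level once, then demoted to debug. Schematically:
// iteration N:   "syncing: permission denied" -> Infof, stored in pullErrors
// iteration N+1: same item, same error        -> Debugf (matched in oldPullErrors)
// iteration N+1: same item, new error         -> Infof again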
func (f *sendReceiveFolder) Errors() []FileError {

View File

@@ -20,6 +20,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
@@ -91,9 +92,9 @@ func createFile(t *testing.T, name string, fs fs.Filesystem) protocol.FileInfo {
func setupSendReceiveFolder(files ...protocol.FileInfo) (*model, *sendReceiveFolder) {
w := createTmpWrapper(defaultCfg)
model := newModel(w, myID, "syncthing", "dev", db.OpenMemory(), nil)
model := newModel(w, myID, "syncthing", "dev", db.NewLowlevel(backend.OpenMemory()), nil)
fcfg := testFolderConfigTmp()
model.AddFolder(fcfg)
model.addFolder(fcfg)
f := &sendReceiveFolder{
folder: folder{
@@ -250,6 +251,7 @@ func TestCopierFinder(t *testing.T) {
// Run a single fetcher routine
go f.copierRoutine(copyChan, pullChan, finisherChan)
defer close(copyChan)
f.handleFile(requiredFile, copyChan, dbUpdateChan)
@@ -380,17 +382,19 @@ func TestWeakHash(t *testing.T) {
// Run a single fetcher routine
go fo.copierRoutine(copyChan, pullChan, finisherChan)
defer close(copyChan)
// Test 1 - no weak hashing, file gets fully repulled (`expectBlocks` pulls).
fo.WeakHashThresholdPct = 101
fo.handleFile(desiredFile, copyChan, dbUpdateChan)
var pulls []pullBlockState
timeout := time.After(10 * time.Second)
for len(pulls) < expectBlocks {
select {
case pull := <-pullChan:
pulls = append(pulls, pull)
case <-time.After(10 * time.Second):
case <-timeout:
t.Errorf("timed out, got %d pulls expected %d", len(pulls), expectPulls)
}
}
@@ -487,30 +491,37 @@ func TestDeregisterOnFailInCopy(t *testing.T) {
t.Fatal("Expected file in progress")
}
copyChan := make(chan copyBlocksState)
pullChan := make(chan pullBlockState)
finisherBufferChan := make(chan *sharedPullerState)
finisherChan := make(chan *sharedPullerState)
dbUpdateChan := make(chan dbUpdateJob, 1)
go f.copierRoutine(copyChan, pullChan, finisherBufferChan)
copyChan, copyWg := startCopier(f, pullChan, finisherBufferChan)
go f.finisherRoutine(finisherChan, dbUpdateChan, make(chan string))
defer func() {
close(copyChan)
copyWg.Wait()
close(pullChan)
close(finisherBufferChan)
close(finisherChan)
}()
f.handleFile(file, copyChan, dbUpdateChan)
// Receive a block at puller, to indicate that at least a single copier
// loop has been performed.
toPull := <-pullChan
// Close the file, causing errors on further access
toPull.sharedPullerState.fail(os.ErrNotExist)
// Unblock copier
go func() {
for range pullChan {
}
}()
// Close the file, causing errors on further access
toPull.sharedPullerState.fail(os.ErrNotExist)
select {
case state := <-finisherBufferChan:
// At this point the file should still be registered with both the job
@@ -552,7 +563,7 @@ func TestDeregisterOnFailInCopy(t *testing.T) {
t.Fatal("Still registered", f.model.progressEmitter.lenRegistry(), f.queue.lenProgress(), f.queue.lenQueued())
}
case <-time.After(time.Second):
case <-time.After(5 * time.Second):
t.Fatal("Didn't get anything to the finisher")
}
}
@@ -574,63 +585,90 @@ func TestDeregisterOnFailInPull(t *testing.T) {
t.Fatal("Expected file in progress")
}
copyChan := make(chan copyBlocksState)
pullChan := make(chan pullBlockState)
finisherBufferChan := make(chan *sharedPullerState)
finisherChan := make(chan *sharedPullerState)
dbUpdateChan := make(chan dbUpdateJob, 1)
go f.copierRoutine(copyChan, pullChan, finisherBufferChan)
go f.pullerRoutine(pullChan, finisherBufferChan)
copyChan, copyWg := startCopier(f, pullChan, finisherBufferChan)
pullWg := sync.NewWaitGroup()
pullWg.Add(1)
go func() {
f.pullerRoutine(pullChan, finisherBufferChan)
pullWg.Done()
}()
go f.finisherRoutine(finisherChan, dbUpdateChan, make(chan string))
defer func() {
// Unblock copier and puller
go func() {
for range finisherBufferChan {
}
}()
close(copyChan)
copyWg.Wait()
close(pullChan)
pullWg.Wait()
close(finisherBufferChan)
close(finisherChan)
}()
f.handleFile(file, copyChan, dbUpdateChan)
// Receive at finisher, we should error out as puller has nowhere to pull
// from.
timeout = time.Second
select {
case state := <-finisherBufferChan:
// At this point the file should still be registered with both the job
// queue, and the progress emitter. Verify this.
if f.model.progressEmitter.lenRegistry() != 1 || f.queue.lenProgress() != 1 || f.queue.lenQueued() != 0 {
t.Fatal("Could not find file")
// Both the puller and copier may send to the finisherBufferChan.
var state *sharedPullerState
after := time.After(5 * time.Second)
for {
select {
case state = <-finisherBufferChan:
case <-after:
t.Fatal("Didn't get failed state to the finisher")
}
// Pass the file down the real finisher, and give it time to consume
finisherChan <- state
t0 := time.Now()
if ev, err := s.Poll(time.Minute); err != nil {
t.Fatal("Got error waiting for ItemFinished event:", err)
} else if n := ev.Data.(map[string]interface{})["item"]; n != state.file.Name {
t.Fatal("Got ItemFinished event for wrong file:", n)
if state.failed() != nil {
break
}
t.Log("event took", time.Since(t0))
}
state.mut.Lock()
stateWriter := state.writer
state.mut.Unlock()
if stateWriter != nil {
t.Fatal("File not closed?")
}
// At this point the file should still be registered with both the job
// queue, and the progress emitter. Verify this.
if f.model.progressEmitter.lenRegistry() != 1 || f.queue.lenProgress() != 1 || f.queue.lenQueued() != 0 {
t.Fatal("Could not find file")
}
if f.model.progressEmitter.lenRegistry() != 0 || f.queue.lenProgress() != 0 || f.queue.lenQueued() != 0 {
t.Fatal("Still registered", f.model.progressEmitter.lenRegistry(), f.queue.lenProgress(), f.queue.lenQueued())
}
// Pass the file down the real finisher, and give it time to consume
finisherChan <- state
// Doing it again should have no effect
finisherChan <- state
t0 := time.Now()
if ev, err := s.Poll(time.Minute); err != nil {
t.Fatal("Got error waiting for ItemFinished event:", err)
} else if n := ev.Data.(map[string]interface{})["item"]; n != state.file.Name {
t.Fatal("Got ItemFinished event for wrong file:", n)
}
t.Log("event took", time.Since(t0))
if _, err := s.Poll(time.Second); err != events.ErrTimeout {
t.Fatal("Expected timeout, not another event", err)
}
state.mut.Lock()
stateWriter := state.writer
state.mut.Unlock()
if stateWriter != nil {
t.Fatal("File not closed?")
}
if f.model.progressEmitter.lenRegistry() != 0 || f.queue.lenProgress() != 0 || f.queue.lenQueued() != 0 {
t.Fatal("Still registered", f.model.progressEmitter.lenRegistry(), f.queue.lenProgress(), f.queue.lenQueued())
}
case <-time.After(time.Second):
t.Fatal("Didn't get anything to the finisher")
if f.model.progressEmitter.lenRegistry() != 0 || f.queue.lenProgress() != 0 || f.queue.lenQueued() != 0 {
t.Fatal("Still registered", f.model.progressEmitter.lenRegistry(), f.queue.lenProgress(), f.queue.lenQueued())
}
// Doing it again should have no effect
finisherChan <- state
if _, err := s.Poll(time.Second); err != events.ErrTimeout {
t.Fatal("Expected timeout, not another event", err)
}
if f.model.progressEmitter.lenRegistry() != 0 || f.queue.lenProgress() != 0 || f.queue.lenQueued() != 0 {
t.Fatal("Still registered", f.model.progressEmitter.lenRegistry(), f.queue.lenProgress(), f.queue.lenQueued())
}
}
@@ -821,11 +859,14 @@ func TestCopyOwner(t *testing.T) {
// comes the finisher is done.
finisherChan := make(chan *sharedPullerState)
defer close(finisherChan)
copierChan := make(chan copyBlocksState)
defer close(copierChan)
go f.copierRoutine(copierChan, nil, finisherChan)
copierChan, copyWg := startCopier(f, nil, finisherChan)
go f.finisherRoutine(finisherChan, dbUpdateChan, nil)
defer func() {
close(copierChan)
copyWg.Wait()
close(finisherChan)
}()
f.handleFile(file, copierChan, nil)
<-dbUpdateChan
@@ -984,3 +1025,14 @@ func cleanupSharedPullerState(s *sharedPullerState) {
s.writer.fd.Close()
s.writer.mut.Unlock()
}
func startCopier(f *sendReceiveFolder, pullChan chan<- pullBlockState, finisherChan chan<- *sharedPullerState) (chan copyBlocksState, sync.WaitGroup) {
copyChan := make(chan copyBlocksState)
wg := sync.NewWaitGroup()
wg.Add(1)
go func() {
f.copierRoutine(copyChan, pullChan, finisherChan)
wg.Done()
}()
return copyChan, wg
}

View File

@@ -7,6 +7,7 @@
package model
import (
"context"
"fmt"
"strings"
"time"
@@ -63,8 +64,8 @@ func NewFolderSummaryService(cfg config.Wrapper, m Model, id protocol.DeviceID,
lastEventReqMut: sync.NewMutex(),
}
service.Add(util.AsService(service.listenForUpdates))
service.Add(util.AsService(service.calculateSummaries))
service.Add(util.AsService(service.listenForUpdates, fmt.Sprintf("%s/listenForUpdates", service)))
service.Add(util.AsService(service.calculateSummaries, fmt.Sprintf("%s/calculateSummaries", service)))
return service
}
@@ -145,7 +146,7 @@ func (c *folderSummaryService) OnEventRequest() {
// listenForUpdates subscribes to the event bus and makes note of folders that
// need their data recalculated.
func (c *folderSummaryService) listenForUpdates(stop chan struct{}) {
func (c *folderSummaryService) listenForUpdates(ctx context.Context) {
sub := c.evLogger.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress | events.DeviceConnected | events.FolderWatchStateChanged | events.DownloadProgress)
defer sub.Unsubscribe()
@@ -155,7 +156,7 @@ func (c *folderSummaryService) listenForUpdates(stop chan struct{}) {
select {
case ev := <-sub.C():
c.processUpdate(ev)
case <-stop:
case <-ctx.Done():
return
}
}
@@ -197,7 +198,10 @@ func (c *folderSummaryService) processUpdate(ev events.Event) {
case events.StateChanged:
data := ev.Data.(map[string]interface{})
if !(data["to"].(string) == "idle" && data["from"].(string) == "syncing") {
if data["to"].(string) != "idle" {
return
}
if from := data["from"].(string); from != "syncing" && from != "sync-preparing" {
return
}
@@ -234,7 +238,7 @@ func (c *folderSummaryService) processUpdate(ev events.Event) {
// calculateSummaries periodically recalculates folder summaries and
// completion percentage, and sends the results on the event bus.
func (c *folderSummaryService) calculateSummaries(stop chan struct{}) {
func (c *folderSummaryService) calculateSummaries(ctx context.Context) {
const pumpInterval = 2 * time.Second
pump := time.NewTimer(pumpInterval)
@@ -255,7 +259,7 @@ func (c *folderSummaryService) calculateSummaries(stop chan struct{}) {
case folder := <-c.immediate:
c.sendSummary(folder)
case <-stop:
case <-ctx.Done():
return
}
}

View File

@@ -8,8 +8,8 @@ package model
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net"
"path/filepath"
@@ -20,6 +20,9 @@ import (
stdsync "sync"
"time"
"github.com/pkg/errors"
"github.com/thejerf/suture"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/connections"
"github.com/syncthing/syncthing/lib/db"
@@ -34,7 +37,6 @@ import (
"github.com/syncthing/syncthing/lib/upgrade"
"github.com/syncthing/syncthing/lib/util"
"github.com/syncthing/syncthing/lib/versioner"
"github.com/thejerf/suture"
)
// How many files to send in each Index/IndexUpdate message.
@@ -57,7 +59,7 @@ type service interface {
Errors() []FileError
WatchError() error
ForceRescan(file protocol.FileInfo) error
GetStatistics() stats.FolderStatistics
GetStatistics() (stats.FolderStatistics, error)
getState() (folderState, time.Time, error)
}
@@ -72,9 +74,6 @@ type Model interface {
connections.Model
AddFolder(cfg config.FolderConfiguration)
RestartFolder(from, to config.FolderConfiguration)
StartFolder(folder string)
ResetFolder(folder string)
DelayScan(folder string, next time.Duration)
ScanFolder(folder string) error
@@ -109,8 +108,8 @@ type Model interface {
Completion(device protocol.DeviceID, folder string) FolderCompletion
ConnectionStats() map[string]interface{}
DeviceStatistics() map[string]stats.DeviceStatistics
FolderStatistics() map[string]stats.FolderStatistics
DeviceStatistics() (map[string]stats.DeviceStatistics, error)
FolderStatistics() (map[string]stats.FolderStatistics, error)
UsageReportingStats(version int, preview bool) map[string]interface{}
StartDeadlockDetector(timeout time.Duration)
@@ -141,6 +140,7 @@ type model struct {
folderRunners map[string]service // folder -> puller or scanner
folderRunnerTokens map[string][]suture.ServiceToken // folder -> tokens for puller or scanner
folderRestartMuts syncMutexMap // folder -> restart mutex
folderVersioners map[string]versioner.Versioner // folder -> versioner (may be nil)
pmut sync.RWMutex // protects the below
conn map[protocol.DeviceID]connections.Connection
@@ -167,6 +167,7 @@ var (
errFolderNotRunning = errors.New("folder is not running")
errFolderMissing = errors.New("no such folder")
errNetworkNotAllowed = errors.New("network not allowed")
errNoVersioner = errors.New("folder has no versioner")
// errors about why a connection is closed
errIgnoredFolderRemoved = errors.New("folder no longer ignored")
errReplacingConnection = errors.New("replacing connection")
@@ -201,6 +202,7 @@ func NewModel(cfg config.Wrapper, id protocol.DeviceID, clientName, clientVersio
folderIgnores: make(map[string]*ignore.Matcher),
folderRunners: make(map[string]service),
folderRunnerTokens: make(map[string][]suture.ServiceToken),
folderVersioners: make(map[string]versioner.Versioner),
conn: make(map[protocol.DeviceID]connections.Connection),
connRequestLimiters: make(map[protocol.DeviceID]*byteSemaphore),
closed: make(map[protocol.DeviceID]chan struct{}),
@@ -215,12 +217,34 @@ func NewModel(cfg config.Wrapper, id protocol.DeviceID, clientName, clientVersio
}
m.Add(m.progressEmitter)
scanLimiter.setCapacity(cfg.Options().MaxConcurrentScans)
cfg.Subscribe(m)
return m
}
func (m *model) Serve() {
m.onServe()
m.Supervisor.Serve()
}
func (m *model) ServeBackground() {
m.onServe()
m.Supervisor.ServeBackground()
}
func (m *model) onServe() {
// Add and start folders
for _, folderCfg := range m.cfg.Folders() {
if folderCfg.Paused {
folderCfg.CreateRoot()
continue
}
m.newFolder(folderCfg)
}
m.cfg.Subscribe(m)
}
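
Both entry points now funnel through onServe, so folder startup and the config subscription happen exactly once whether the model is served in the foreground or the background, and Stop unsubscribes before stopping the supervisor. A reduced sketch of that shape (the supervisor and the folder-startup work are stand-ins, not the suture API):

package main

import "fmt"

// supervisor is a stand-in for the embedded suture.Supervisor.
type supervisor struct{}

func (supervisor) ServeBackground() { fmt.Println("supervisor running") }
func (supervisor) Stop()            { fmt.Println("supervisor stopped") }

type model struct{ supervisor }

// onServe does the shared startup work: add folders, then subscribe to
// config changes so no commits race with folder creation.
func (m *model) onServe() {
	fmt.Println("adding folders, subscribing to config")
}

func (m *model) ServeBackground() {
	m.onServe()
	m.supervisor.ServeBackground()
}

func (m *model) Stop() {
	fmt.Println("unsubscribing from config") // first, as in model.Stop above
	m.supervisor.Stop()
}

func main() {
	m := &model{}
	m.ServeBackground()
	m.Stop()
}
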
func (m *model) Stop() {
m.cfg.Unsubscribe(m)
m.Supervisor.Stop()
devs := m.cfg.Devices()
ids := make([]protocol.DeviceID, 0, len(devs))
@@ -241,14 +265,18 @@ func (m *model) StartDeadlockDetector(timeout time.Duration) {
detector.Watch("pmut", m.pmut)
}
// StartFolder constructs the folder service and starts it.
func (m *model) StartFolder(folder string) {
// startFolder constructs the folder service and starts it.
func (m *model) startFolder(folder string) {
m.fmut.RLock()
folderCfg := m.folderCfgs[folder]
m.fmut.RUnlock()
// Close connections to affected devices
m.closeConns(folderCfg.DeviceIDs(), fmt.Errorf("started folder %v", folderCfg.Description()))
m.fmut.Lock()
defer m.fmut.Unlock()
folderCfg := m.folderCfgs[folder]
m.startFolderLocked(folderCfg)
l.Infof("Ready to synchronize %s (%s)", folderCfg.Description(), folderCfg.Type)
}
// Need to hold lock on m.fmut when calling this.
@@ -278,11 +306,6 @@ func (m *model) startFolderLocked(cfg config.FolderConfiguration) {
}
}
// Close connections to affected devices
m.fmut.Unlock()
m.closeConns(cfg.DeviceIDs(), fmt.Errorf("started folder %v", cfg.Description()))
m.fmut.Lock()
v, ok := fset.Sequence(protocol.LocalDeviceID), true
indexHasFiles := ok && v > 0
if !indexHasFiles {
@@ -299,44 +322,55 @@ func (m *model) startFolderLocked(cfg config.FolderConfiguration) {
}
}
ver := cfg.Versioner()
if service, ok := ver.(suture.Service); ok {
// The versioner implements the suture.Service interface, so
// expects to be run in the background in addition to being called
// when files are going to be archived.
token := m.Add(service)
m.folderRunnerTokens[folder] = append(m.folderRunnerTokens[folder], token)
}
ffs := fset.MtimeFS()
// These are our metadata files, and they should always be hidden.
ffs.Hide(config.DefaultMarkerName)
ffs.Hide(".stversions")
ffs.Hide(".stignore")
_ = ffs.Hide(config.DefaultMarkerName)
_ = ffs.Hide(".stversions")
_ = ffs.Hide(".stignore")
p := folderFactory(m, fset, m.folderIgnores[folder], cfg, ver, ffs, m.evLogger)
var ver versioner.Versioner
if cfg.Versioning.Type != "" {
var err error
ver, err = versioner.New(ffs, cfg.Versioning)
if err != nil {
panic(fmt.Errorf("creating versioner: %v", err))
}
if service, ok := ver.(suture.Service); ok {
// The versioner implements the suture.Service interface, so
// expects to be run in the background in addition to being called
// when files are going to be archived.
token := m.Add(service)
m.folderRunnerTokens[folder] = append(m.folderRunnerTokens[folder], token)
}
}
m.folderVersioners[folder] = ver
ignores := m.folderIgnores[folder]
p := folderFactory(m, fset, ignores, cfg, ver, ffs, m.evLogger)
m.folderRunners[folder] = p
m.warnAboutOverwritingProtectedFiles(folder)
m.warnAboutOverwritingProtectedFiles(cfg, ignores)
token := m.Add(p)
m.folderRunnerTokens[folder] = append(m.folderRunnerTokens[folder], token)
l.Infof("Ready to synchronize %s (%s)", cfg.Description(), cfg.Type)
}
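
Versioners are now constructed once at folder start (panicking on a broken configuration, as above) and cached in folderVersioners, where a nil entry means the folder has no versioning. A sketch of that branching with stand-in types (the "trashcan" name is just an example value, not the full set of versioner types):

package main

import "fmt"

// versioner is a stand-in for versioner.Versioner.
type versioner interface{ Archive(path string) error }

type trashcan struct{}

func (trashcan) Archive(string) error { return nil }

// newVersioner mirrors the logic above: an empty type means no
// versioner at all, i.e. a nil map entry.
func newVersioner(typ string) (versioner, error) {
	switch typ {
	case "":
		return nil, nil
	case "trashcan":
		return trashcan{}, nil
	default:
		return nil, fmt.Errorf("unknown versioner type %q", typ)
	}
}

func main() {
	folderVersioners := make(map[string]versioner)
	for folder, typ := range map[string]string{"default": "", "photos": "trashcan"} {
		ver, err := newVersioner(typ)
		if err != nil {
			panic(fmt.Errorf("creating versioner: %v", err))
		}
		folderVersioners[folder] = ver // nil is a valid entry: no versioner
	}
	fmt.Println(folderVersioners["default"] == nil) // true: no versioner configured
}
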
func (m *model) warnAboutOverwritingProtectedFiles(folder string) {
if m.folderCfgs[folder].Type == config.FolderTypeSendOnly {
func (m *model) warnAboutOverwritingProtectedFiles(cfg config.FolderConfiguration, ignores *ignore.Matcher) {
if cfg.Type == config.FolderTypeSendOnly {
return
}
// This is a bit of a hack.
ffs := m.folderCfgs[folder].Filesystem()
ffs := cfg.Filesystem()
if ffs.Type() != fs.FilesystemTypeBasic {
return
}
folderLocation := ffs.URI()
ignores := m.folderIgnores[folder]
var filesAtRisk []string
for _, protectedFilePath := range m.protectedFiles {
@@ -359,7 +393,7 @@ func (m *model) warnAboutOverwritingProtectedFiles(folder string) {
}
}
func (m *model) AddFolder(cfg config.FolderConfiguration) {
func (m *model) addFolder(cfg config.FolderConfiguration) {
if len(cfg.ID) == 0 {
panic("cannot add empty folder id")
}
@@ -388,9 +422,10 @@ func (m *model) addFolderLocked(cfg config.FolderConfiguration, fset *db.FileSet
m.folderIgnores[cfg.ID] = ignores
}
func (m *model) RemoveFolder(cfg config.FolderConfiguration) {
func (m *model) removeFolder(cfg config.FolderConfiguration) {
m.stopFolder(cfg, fmt.Errorf("removing folder %v", cfg.Description()))
m.fmut.Lock()
defer m.fmut.Unlock()
isPathUnique := true
for folderID, folderCfg := range m.folderCfgs {
@@ -404,23 +439,20 @@ func (m *model) RemoveFolder(cfg config.FolderConfiguration) {
cfg.Filesystem().RemoveAll(config.DefaultMarkerName)
}
m.tearDownFolderLocked(cfg, fmt.Errorf("removing folder %v", cfg.Description()))
m.removeFolderLocked(cfg)
m.fmut.Unlock()
// Remove it from the database
db.DropFolder(m.db, cfg.ID)
}
// Need to hold lock on m.fmut when calling this.
func (m *model) tearDownFolderLocked(cfg config.FolderConfiguration, err error) {
func (m *model) stopFolder(cfg config.FolderConfiguration, err error) {
// Stop the services running for this folder and wait for them to finish
// stopping to prevent races on restart.
m.fmut.RLock()
tokens := m.folderRunnerTokens[cfg.ID]
m.fmut.Unlock()
// Close connections to affected devices
// Must happen before stopping the folder service to abort ongoing
// transmissions and thus allow timely service termination.
w := m.closeConns(cfg.DeviceIDs(), err)
m.fmut.RUnlock()
for _, id := range tokens {
m.RemoveAndWait(id, 0)
@@ -428,19 +460,21 @@ func (m *model) tearDownFolderLocked(cfg config.FolderConfiguration, err error)
// Wait for connections to stop to ensure that no more calls to methods
// expecting this folder to exist happen (e.g. .IndexUpdate).
w.Wait()
m.fmut.Lock()
m.closeConns(cfg.DeviceIDs(), err).Wait()
}
// Need to hold lock on m.fmut when calling this.
func (m *model) removeFolderLocked(cfg config.FolderConfiguration) {
// Clean up our config maps
delete(m.folderCfgs, cfg.ID)
delete(m.folderFiles, cfg.ID)
delete(m.folderIgnores, cfg.ID)
delete(m.folderRunners, cfg.ID)
delete(m.folderRunnerTokens, cfg.ID)
delete(m.folderVersioners, cfg.ID)
}
func (m *model) RestartFolder(from, to config.FolderConfiguration) {
func (m *model) restartFolder(from, to config.FolderConfiguration) {
if len(to.ID) == 0 {
panic("bug: cannot restart empty folder ID")
}
@@ -473,22 +507,40 @@ func (m *model) RestartFolder(from, to config.FolderConfiguration) {
errMsg = "restarting"
}
m.fmut.Lock()
defer m.fmut.Unlock()
m.tearDownFolderLocked(from, fmt.Errorf("%v folder %v", errMsg, to.Description()))
var fset *db.FileSet
if !to.Paused {
// Creating the fileset can take a long time (metadata calculation)
// so we do it outside of the lock.
m.fmut.Unlock()
fset := db.NewFileSet(to.ID, to.Filesystem(), m.db)
m.fmut.Lock()
fset = db.NewFileSet(to.ID, to.Filesystem(), m.db)
}
m.stopFolder(from, fmt.Errorf("%v folder %v", errMsg, to.Description()))
m.fmut.Lock()
defer m.fmut.Unlock()
m.removeFolderLocked(from)
if !to.Paused {
m.addFolderLocked(to, fset)
m.startFolderLocked(to)
}
l.Infof("%v folder %v (%v)", infoMsg, to.Description(), to.Type)
}
func (m *model) newFolder(cfg config.FolderConfiguration) {
// Creating the fileset can take a long time (metadata calculation) so
// we do it outside of the lock.
fset := db.NewFileSet(cfg.ID, cfg.Filesystem(), m.db)
// Close connections to affected devices
m.closeConns(cfg.DeviceIDs(), fmt.Errorf("started folder %v", cfg.Description()))
m.fmut.Lock()
defer m.fmut.Unlock()
m.addFolderLocked(cfg, fset)
m.startFolderLocked(cfg)
}
func (m *model) UsageReportingStats(version int, preview bool) map[string]interface{} {
stats := make(map[string]interface{})
if version >= 3 {
@@ -655,25 +707,33 @@ func (m *model) ConnectionStats() map[string]interface{} {
}
// DeviceStatistics returns statistics about each device
func (m *model) DeviceStatistics() map[string]stats.DeviceStatistics {
func (m *model) DeviceStatistics() (map[string]stats.DeviceStatistics, error) {
m.fmut.RLock()
defer m.fmut.RUnlock()
res := make(map[string]stats.DeviceStatistics, len(m.deviceStatRefs))
for id, sr := range m.deviceStatRefs {
res[id.String()] = sr.GetStatistics()
stats, err := sr.GetStatistics()
if err != nil {
return nil, err
}
res[id.String()] = stats
}
return res
return res, nil
}
// FolderStatistics returns statistics about each folder
func (m *model) FolderStatistics() map[string]stats.FolderStatistics {
func (m *model) FolderStatistics() (map[string]stats.FolderStatistics, error) {
res := make(map[string]stats.FolderStatistics)
m.fmut.RLock()
defer m.fmut.RUnlock()
for id, runner := range m.folderRunners {
res[id] = runner.GetStatistics()
stats, err := runner.GetStatistics()
if err != nil {
return nil, err
}
res[id] = stats
}
return res
return res, nil
}
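
With GetStatistics returning an error, the aggregate methods bail out on the first failing folder or device instead of silently returning partial data, and callers must check the error. A minimal calling-side sketch (the source function and its values are stand-ins for the model):

package main

import (
	"fmt"
	"log"
)

// folderStats is a stand-in for stats.FolderStatistics.
type folderStats struct{ LastScan string }

// folderStatistics fakes the model method; the real one may now fail
// with a database error instead of panicking.
func folderStatistics() (map[string]folderStats, error) {
	return map[string]folderStats{"default": {LastScan: "2019-12-08"}}, nil
}

func main() {
	stats, err := folderStatistics()
	if err != nil {
		log.Fatalln("statistics unavailable:", err) // e.g. a db backend error
	}
	for id, s := range stats {
		fmt.Printf("%s: %+v\n", id, s)
	}
}
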
type FolderCompletion struct {
@@ -979,17 +1039,17 @@ func (m *model) RemoteNeedFolderFiles(device protocol.DeviceID, folder string, p
// Index is called when a new device is connected and we receive their full index.
// Implements the protocol.Model interface.
func (m *model) Index(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo) {
m.handleIndex(deviceID, folder, fs, false)
func (m *model) Index(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo) error {
return m.handleIndex(deviceID, folder, fs, false)
}
// IndexUpdate is called for incremental updates to connected devices' indexes.
// Implements the protocol.Model interface.
func (m *model) IndexUpdate(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo) {
m.handleIndex(deviceID, folder, fs, true)
func (m *model) IndexUpdate(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo) error {
return m.handleIndex(deviceID, folder, fs, true)
}
func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo, update bool) {
func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []protocol.FileInfo, update bool) error {
op := "Index"
if update {
op += " update"
@@ -999,10 +1059,10 @@ func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []prot
if cfg, ok := m.cfg.Folder(folder); !ok || !cfg.SharedWith(deviceID) {
l.Infof("%v for unexpected folder ID %q sent from device %q; ensure that the folder exists and that this device is selected under \"Share With\" in the folder configuration.", op, folder, deviceID)
return
return errors.Wrap(errFolderMissing, folder)
} else if cfg.Paused {
l.Debugf("%v for paused folder (ID %q) sent from device %q.", op, folder, deviceID)
return
return errors.Wrap(ErrFolderPaused, folder)
}
m.fmut.RLock()
@@ -1011,17 +1071,12 @@ func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []prot
m.fmut.RUnlock()
if !existing {
l.Warnf("%v for nonexistent folder %q", op, folder)
panic("handling index for nonexistent folder")
l.Infof("%v for nonexistent folder %q", op, folder)
return errors.Wrap(errFolderMissing, folder)
}
if running {
defer runner.SchedulePull()
} else if update {
// Runner may legitimately not be set if this is the "cleanup" Index
// message at startup.
l.Warnf("%v for not running folder %q", op, folder)
panic("handling index for not running folder")
}
m.pmut.RLock()
@@ -1045,9 +1100,11 @@ func (m *model) handleIndex(deviceID protocol.DeviceID, folder string, fs []prot
"items": len(fs),
"version": files.Sequence(deviceID),
})
return nil
}
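
The handlers wrap the sentinel errors (errFolderMissing, ErrFolderPaused) with the folder ID via github.com/pkg/errors, so the message carries context while the caller can still match the underlying sentinel. A minimal sketch of that pattern (the handler and its inputs are local stand-ins):

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var errFolderMissing = errors.New("no such folder")

// handleIndex is a stand-in for the handler above: on failure it wraps
// the sentinel with the folder ID so the caller gets a useful message
// but can still recover the original error.
func handleIndex(folder string, known bool) error {
	if !known {
		return errors.Wrap(errFolderMissing, folder)
	}
	return nil
}

func main() {
	err := handleIndex("default", false)
	fmt.Println(err)                                   // "default: no such folder"
	fmt.Println(errors.Cause(err) == errFolderMissing) // true
}
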
func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterConfig) {
func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterConfig) error {
// Check the peer device's announced folders against our own. Emits events
// for folders that we don't expect (unknown or not shared).
// Also, collect a list of folders we do share, and if he's interested in
@@ -1198,7 +1255,7 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
dropSymlinks: dropSymlinks,
evLogger: m.evLogger,
}
is.Service = util.AsService(is.serve)
is.Service = util.AsService(is.serve, is.String())
// The token isn't tracked as the service stops when the connection
// terminates and is automatically removed from supervisor (by
// implementing suture.IsCompletable).
@@ -1223,14 +1280,20 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
}
if deviceCfg.Introducer {
foldersDevices, introduced := m.handleIntroductions(deviceCfg, cm)
if introduced {
changed = true
}
// If permitted, check if the introducer has unshared devices/folders with
// some of the devices/folders that we know were introduced to us by him.
if !deviceCfg.SkipIntroductionRemovals && m.handleDeintroductions(deviceCfg, cm, foldersDevices) {
folders, devices, foldersDevices, introduced := m.handleIntroductions(deviceCfg, cm)
folders, devices, deintroduced := m.handleDeintroductions(deviceCfg, foldersDevices, folders, devices)
if introduced || deintroduced {
changed = true
cfg := m.cfg.RawCopy()
cfg.Folders = make([]config.FolderConfiguration, 0, len(folders))
for _, fcfg := range folders {
cfg.Folders = append(cfg.Folders, fcfg)
}
cfg.Devices = make([]config.DeviceConfiguration, len(devices))
for _, dcfg := range devices {
cfg.Devices = append(cfg.Devices, dcfg)
}
m.cfg.Replace(cfg)
}
}
@@ -1239,14 +1302,15 @@ func (m *model) ClusterConfig(deviceID protocol.DeviceID, cm protocol.ClusterCon
l.Warnln("Failed to save config", err)
}
}
return nil
}
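
Introductions and de-introductions now edit plain maps, and the result is applied with a single Replace of a rebuilt raw config, rather than one SetFolder/SetDevice call (and one commit) per change. The collect-then-replace shape in isolation (types reduced to stand-ins):

package main

import "fmt"

type folderConfiguration struct{ ID string }

type configuration struct{ Folders []folderConfiguration }

func main() {
	// handleIntroductions/handleDeintroductions style: accumulate all
	// edits in a map keyed by folder ID...
	folders := map[string]folderConfiguration{
		"default": {ID: "default"},
		"photos":  {ID: "photos"},
	}

	// ...then rebuild the slice once and apply it in a single Replace,
	// so observers see one consistent config commit.
	var cfg configuration
	cfg.Folders = make([]folderConfiguration, 0, len(folders))
	for _, fcfg := range folders {
		cfg.Folders = append(cfg.Folders, fcfg)
	}
	fmt.Printf("replacing config with %d folders\n", len(cfg.Folders))
}
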
// handleIntroductions handles adding devices/shares that are shared by an introducer device
func (m *model) handleIntroductions(introducerCfg config.DeviceConfiguration, cm protocol.ClusterConfig) (folderDeviceSet, bool) {
// This device is an introducer. Go through the announced lists of folders
// and devices and add what we are missing, remove what we have extra that
// has been introduced by the introducer.
// handleIntroductions handles adding devices/folders that are shared by an introducer device
func (m *model) handleIntroductions(introducerCfg config.DeviceConfiguration, cm protocol.ClusterConfig) (map[string]config.FolderConfiguration, map[protocol.DeviceID]config.DeviceConfiguration, folderDeviceSet, bool) {
changed := false
folders := m.cfg.Folders()
devices := m.cfg.Devices()
foldersDevices := make(folderDeviceSet)
@@ -1256,12 +1320,14 @@ func (m *model) handleIntroductions(introducerCfg config.DeviceConfiguration, cm
// with devices that we have in common, yet are currently not sharing
// the folder.
fcfg, ok := m.cfg.Folder(folder.ID)
fcfg, ok := folders[folder.ID]
if !ok {
// Don't have this folder, carry on.
continue
}
folderChanged := false
for _, device := range folder.Devices {
// No need to share with self.
if device.ID == m.id {
@@ -1272,7 +1338,7 @@ func (m *model) handleIntroductions(introducerCfg config.DeviceConfiguration, cm
if _, ok := m.cfg.Devices()[device.ID]; !ok {
// The device is currently unknown. Add it to the config.
m.introduceDevice(device, introducerCfg)
devices[device.ID] = m.introduceDevice(device, introducerCfg)
} else if fcfg.SharedWith(device.ID) {
// We already share the folder with this device, so
// nothing to do.
@@ -1286,36 +1352,41 @@ func (m *model) handleIntroductions(introducerCfg config.DeviceConfiguration, cm
DeviceID: device.ID,
IntroducedBy: introducerCfg.DeviceID,
})
changed = true
folderChanged = true
}
if changed {
m.cfg.SetFolder(fcfg)
if folderChanged {
folders[fcfg.ID] = fcfg
changed = true
}
}
return foldersDevices, changed
return folders, devices, foldersDevices, changed
}
// handleDeintroductions handles removals of devices/shares that are removed by an introducer device
func (m *model) handleDeintroductions(introducerCfg config.DeviceConfiguration, cm protocol.ClusterConfig, foldersDevices folderDeviceSet) bool {
func (m *model) handleDeintroductions(introducerCfg config.DeviceConfiguration, foldersDevices folderDeviceSet, folders map[string]config.FolderConfiguration, devices map[protocol.DeviceID]config.DeviceConfiguration) (map[string]config.FolderConfiguration, map[protocol.DeviceID]config.DeviceConfiguration, bool) {
if introducerCfg.SkipIntroductionRemovals {
return folders, devices, false
}
changed := false
devicesNotIntroduced := make(map[protocol.DeviceID]struct{})
folders := m.cfg.FolderList()
// Check if we should unshare some folders, if the introducer has unshared them.
for i := range folders {
for k := 0; k < len(folders[i].Devices); k++ {
if folders[i].Devices[k].IntroducedBy != introducerCfg.DeviceID {
devicesNotIntroduced[folders[i].Devices[k].DeviceID] = struct{}{}
for folderID, folderCfg := range folders {
for k := 0; k < len(folderCfg.Devices); k++ {
if folderCfg.Devices[k].IntroducedBy != introducerCfg.DeviceID {
devicesNotIntroduced[folderCfg.Devices[k].DeviceID] = struct{}{}
continue
}
if !foldersDevices.has(folders[i].Devices[k].DeviceID, folders[i].ID) {
if !foldersDevices.has(folderCfg.Devices[k].DeviceID, folderCfg.ID) {
// We could not find that folder shared on the
// introducer with the device that was introduced to us.
// We should follow and unshare as well.
l.Infof("Unsharing folder %s with %v as introducer %v no longer shares the folder with that device", folders[i].Description(), folders[i].Devices[k].DeviceID, folders[i].Devices[k].IntroducedBy)
folders[i].Devices = append(folders[i].Devices[:k], folders[i].Devices[k+1:]...)
l.Infof("Unsharing folder %s with %v as introducer %v no longer shares the folder with that device", folderCfg.Description(), folderCfg.Devices[k].DeviceID, folderCfg.Devices[k].IntroducedBy)
folderCfg.Devices = append(folderCfg.Devices[:k], folderCfg.Devices[k+1:]...)
folders[folderID] = folderCfg
k--
changed = true
}
@@ -1325,9 +1396,7 @@ func (m *model) handleDeintroductions(introducerCfg config.DeviceConfiguration,
// Check if we should remove some devices, if the introducer no longer
// shares any folder with them. Yet do not remove if we share other
// folders that haven't been introduced by the introducer.
devMap := m.cfg.Devices()
devices := make([]config.DeviceConfiguration, 0, len(devMap))
for deviceID, device := range devMap {
for deviceID, device := range devices {
if device.IntroducedBy == introducerCfg.DeviceID {
if !foldersDevices.hasDevice(deviceID) {
if _, ok := devicesNotIntroduced[deviceID]; !ok {
@@ -1335,22 +1404,15 @@ func (m *model) handleDeintroductions(introducerCfg config.DeviceConfiguration,
// device, remove the device.
l.Infof("Removing device %v as introducer %v no longer shares any folders with that device", deviceID, device.IntroducedBy)
changed = true
delete(devices, deviceID)
continue
}
l.Infof("Would have removed %v as %v no longer shares any folders, yet there are other folders that are shared with this device that haven't been introduced by this introducer.", deviceID, device.IntroducedBy)
}
}
devices = append(devices, device)
}
if changed {
cfg := m.cfg.RawCopy()
cfg.Folders = folders
cfg.Devices = devices
m.cfg.Replace(cfg)
}
return changed
return folders, devices, changed
}
// handleAutoAccepts handles adding and sharing folders for devices that have
@@ -1401,7 +1463,7 @@ func (m *model) handleAutoAccepts(deviceCfg config.DeviceConfiguration, folder p
}
}
func (m *model) introduceDevice(device protocol.Device, introducerCfg config.DeviceConfiguration) {
func (m *model) introduceDevice(device protocol.Device, introducerCfg config.DeviceConfiguration) config.DeviceConfiguration {
addresses := []string{"dynamic"}
for _, addr := range device.Addresses {
if addr != "dynamic" {
@@ -1426,7 +1488,7 @@ func (m *model) introduceDevice(device protocol.Device, introducerCfg config.Dev
newDeviceCfg.SkipIntroductionRemovals = device.SkipIntroductionRemovals
}
m.cfg.SetDevice(newDeviceCfg)
return newDeviceCfg
}
// Closed is called when a connection has been closed
@@ -1758,7 +1820,7 @@ func (m *model) SetIgnores(folder string, content []string) error {
err := cfg.CheckPath()
if err == config.ErrPathMissing {
if err = cfg.CreateRoot(); err != nil {
return fmt.Errorf("failed to create folder root: %v", err)
return errors.Wrap(err, "failed to create folder root")
}
err = cfg.CheckPath()
}
@@ -1898,13 +1960,13 @@ func (m *model) AddConnection(conn connections.Connection, hello protocol.HelloR
m.deviceWasSeen(deviceID)
}
func (m *model) DownloadProgress(device protocol.DeviceID, folder string, updates []protocol.FileDownloadProgressUpdate) {
func (m *model) DownloadProgress(device protocol.DeviceID, folder string, updates []protocol.FileDownloadProgressUpdate) error {
m.fmut.RLock()
cfg, ok := m.folderCfgs[folder]
m.fmut.RUnlock()
if !ok || cfg.DisableTempIndexes || !cfg.SharedWith(device) {
return
return nil
}
m.pmut.RLock()
@@ -1918,6 +1980,8 @@ func (m *model) DownloadProgress(device protocol.DeviceID, folder string, update
"folder": folder,
"state": state,
})
return nil
}
func (m *model) deviceWasSeen(deviceID protocol.DeviceID) {
@@ -1941,14 +2005,14 @@ type indexSender struct {
connClosed chan struct{}
}
func (s *indexSender) serve(stop chan struct{}) {
func (s *indexSender) serve(ctx context.Context) {
var err error
l.Debugf("Starting indexSender for %s to %s at %s (slv=%d)", s.folder, s.dev, s.conn, s.prevSequence)
defer l.Debugf("Exiting indexSender for %s to %s at %s: %v", s.folder, s.dev, s.conn, err)
// We need to send one index, regardless of whether there is something to send or not
err = s.sendIndexTo()
err = s.sendIndexTo(ctx)
// Subscribe to LocalIndexUpdated (we have new information to send) and
// DeviceDisconnected (it might be us who disconnected, so we should
@@ -1962,7 +2026,7 @@ func (s *indexSender) serve(stop chan struct{}) {
for err == nil {
select {
case <-stop:
case <-ctx.Done():
return
case <-s.connClosed:
return
@@ -1975,7 +2039,7 @@ func (s *indexSender) serve(stop chan struct{}) {
// sending for.
if s.fset.Sequence(protocol.LocalDeviceID) <= s.prevSequence {
select {
case <-stop:
case <-ctx.Done():
return
case <-s.connClosed:
return
@@ -1986,7 +2050,7 @@ func (s *indexSender) serve(stop chan struct{}) {
continue
}
err = s.sendIndexTo()
err = s.sendIndexTo(ctx)
// Wait a short amount of time before entering the next loop. If there
// are continuous changes happening to the local index, this gives us
@@ -2004,16 +2068,16 @@ func (s *indexSender) Complete() bool { return true }
// sendIndexTo sends file infos with a sequence number higher than prevSequence and
// returns the highest sent sequence number.
func (s *indexSender) sendIndexTo() error {
func (s *indexSender) sendIndexTo(ctx context.Context) error {
initial := s.prevSequence == 0
batch := newFileInfoBatch(nil)
batch.flushFn = func(fs []protocol.FileInfo) error {
l.Debugf("Sending indexes for %s to %s at %s: %d files (<%d bytes)", s.folder, s.dev, s.conn, len(batch.infos), batch.size)
l.Debugf("%v: Sending %d files (<%d bytes)", s, len(batch.infos), batch.size)
if initial {
initial = false
return s.conn.Index(s.folder, fs)
return s.conn.Index(ctx, s.folder, fs)
}
return s.conn.IndexUpdate(s.folder, fs)
return s.conn.IndexUpdate(ctx, s.folder, fs)
}
var err error
@@ -2070,7 +2134,11 @@ func (s *indexSender) sendIndexTo() error {
return err
}
func (m *model) requestGlobal(deviceID protocol.DeviceID, folder, name string, offset int64, size int, hash []byte, weakHash uint32, fromTemporary bool) ([]byte, error) {
func (s *indexSender) String() string {
return fmt.Sprintf("indexSender@%p for %s to %s at %s", s, s.folder, s.dev, s.conn)
}
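
util.AsService now takes a name alongside the serve function, and the services gain String methods to supply it; the %p verb keeps concurrent instances distinguishable in logs. A tiny stringer sketch in the same format (field values are made up):

package main

import "fmt"

type indexSender struct {
	folder, dev, conn string
}

// String matches the format above; %p on the receiver gives each
// sender instance a unique tag in debug output.
func (s *indexSender) String() string {
	return fmt.Sprintf("indexSender@%p for %s to %s at %s", s, s.folder, s.dev, s.conn)
}

func main() {
	s := &indexSender{folder: "default", dev: "DEVICE1", conn: "tcp://10.0.0.2:22000"}
	fmt.Println(s)
}
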
func (m *model) requestGlobal(ctx context.Context, deviceID protocol.DeviceID, folder, name string, offset int64, size int, hash []byte, weakHash uint32, fromTemporary bool) ([]byte, error) {
m.pmut.RLock()
nc, ok := m.conn[deviceID]
m.pmut.RUnlock()
@@ -2081,7 +2149,7 @@ func (m *model) requestGlobal(deviceID protocol.DeviceID, folder, name string, o
l.Debugf("%v REQ(out): %s: %q / %q o=%d s=%d h=%x wh=%x ft=%t", m, deviceID, folder, name, offset, size, hash, weakHash, fromTemporary)
return nc.Request(folder, name, offset, size, hash, weakHash, fromTemporary)
return nc.Request(ctx, folder, name, offset, size, hash, weakHash, fromTemporary)
}
func (m *model) ScanFolders() map[string]error {
@@ -2398,14 +2466,14 @@ func (m *model) GlobalDirectoryTree(folder, prefix string, levels int, dirsonly
}
func (m *model) GetFolderVersions(folder string) (map[string][]versioner.FileVersion, error) {
fcfg, ok := m.cfg.Folder(folder)
m.fmut.RLock()
ver, ok := m.folderVersioners[folder]
m.fmut.RUnlock()
if !ok {
return nil, errFolderMissing
}
ver := fcfg.Versioner()
if ver == nil {
return nil, errors.New("no versioner configured")
return nil, errNoVersioner
}
return ver.GetVersions()
@@ -2417,7 +2485,15 @@ func (m *model) RestoreFolderVersions(folder string, versions map[string]time.Ti
return nil, errFolderMissing
}
ver := fcfg.Versioner()
m.fmut.RLock()
ver := m.folderVersioners[folder]
m.fmut.RUnlock()
if !ok {
return nil, errFolderMissing
}
if ver == nil {
return nil, errNoVersioner
}
restoreErrors := make(map[string]string)
@@ -2513,8 +2589,7 @@ func (m *model) CommitConfiguration(from, to config.Configuration) bool {
l.Infoln("Paused folder", cfg.Description())
} else {
l.Infoln("Adding folder", cfg.Description())
m.AddFolder(cfg)
m.StartFolder(folderID)
m.newFolder(cfg)
}
}
}
@@ -2523,7 +2598,7 @@ func (m *model) CommitConfiguration(from, to config.Configuration) bool {
toCfg, ok := toFolders[folderID]
if !ok {
// The folder was removed.
m.RemoveFolder(fromCfg)
m.removeFolder(fromCfg)
continue
}
@@ -2534,7 +2609,7 @@ func (m *model) CommitConfiguration(from, to config.Configuration) bool {
// This folder exists on both sides. Settings might have changed.
// Check if anything differs that requires a restart.
if !reflect.DeepEqual(fromCfg.RequiresRestartOnly(), toCfg.RequiresRestartOnly()) {
m.RestartFolder(fromCfg, toCfg)
m.restartFolder(fromCfg, toCfg)
}
// Emit the folder pause/resume event


@@ -8,6 +8,7 @@ package model
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -26,6 +27,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/ignore"
@@ -254,7 +256,7 @@ func BenchmarkRequestOut(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
data, err := m.requestGlobal(device1, "default", files[i%n].Name, 0, 32, nil, 0, false)
data, err := m.requestGlobal(context.Background(), device1, "default", files[i%n].Name, 0, 32, nil, 0, false)
if err != nil {
b.Error(err)
}
@@ -305,7 +307,7 @@ func TestDeviceRename(t *testing.T) {
}
cfg := config.Wrap("testdata/tmpconfig.xml", rawCfg, events.NoopLogger)
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
m := newModel(cfg, myID, "syncthing", "dev", db, nil)
if cfg.Devices()[device1].Name != "" {
@@ -401,13 +403,15 @@ func TestClusterConfig(t *testing.T) {
},
}
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
wrapper := createTmpWrapper(cfg)
m := newModel(wrapper, myID, "syncthing", "dev", db, nil)
m.AddFolder(cfg.Folders[0])
m.AddFolder(cfg.Folders[1])
m.ServeBackground()
for _, fcfg := range cfg.Folders {
m.removeFolder(fcfg)
m.addFolder(fcfg)
}
defer cleanupModel(m)
cm := m.generateClusterConfig(device2)
@@ -1454,8 +1458,8 @@ func TestIgnores(t *testing.T) {
m := setupModel(defaultCfgWrapper)
defer cleanupModel(m)
m.RemoveFolder(defaultFolderConfig)
m.AddFolder(defaultFolderConfig)
m.removeFolder(defaultFolderConfig)
m.addFolder(defaultFolderConfig)
// Reach in and update the ignore matcher to one that always does
// reloads when asked to, instead of checking file mtimes. This is
// because we will be changing the files on disk often enough that the
@@ -1463,7 +1467,7 @@ func TestIgnores(t *testing.T) {
m.fmut.Lock()
m.folderIgnores["default"] = ignore.New(defaultFs, ignore.WithCache(true), ignore.WithChangeDetector(newAlwaysChanged()))
m.fmut.Unlock()
m.StartFolder("default")
m.startFolder("default")
// Make sure the initial scan has finished (ScanFolders is blocking)
m.ScanFolders()
@@ -1486,7 +1490,7 @@ func TestIgnores(t *testing.T) {
}
// Invalid path, marker should be missing, hence returns an error.
m.AddFolder(config.FolderConfiguration{ID: "fresh", Path: "XXX"})
m.addFolder(config.FolderConfiguration{ID: "fresh", Path: "XXX"})
_, _, err = m.GetIgnores("fresh")
if err == nil {
t.Error("No error")
@@ -1496,7 +1500,7 @@ func TestIgnores(t *testing.T) {
pausedDefaultFolderConfig := defaultFolderConfig
pausedDefaultFolderConfig.Paused = true
m.RestartFolder(defaultFolderConfig, pausedDefaultFolderConfig)
m.restartFolder(defaultFolderConfig, pausedDefaultFolderConfig)
// Here folder initialization is not an issue as a paused folder isn't
// added to the model and thus there is no initial scan happening.
@@ -1510,6 +1514,41 @@ func TestIgnores(t *testing.T) {
changeIgnores(t, m, []string{})
}
func TestEmptyIgnores(t *testing.T) {
testOs := &fatalOs{t}
// Ensure a clean start state
testOs.RemoveAll(filepath.Join("testdata", config.DefaultMarkerName))
testOs.MkdirAll(filepath.Join("testdata", config.DefaultMarkerName), 0644)
m := setupModel(defaultCfgWrapper)
defer cleanupModel(m)
m.removeFolder(defaultFolderConfig)
m.addFolder(defaultFolderConfig)
if err := m.SetIgnores("default", []string{}); err != nil {
t.Error(err)
}
if _, err := os.Stat("testdata/.stignore"); err == nil {
t.Error(".stignore was created despite being empty")
}
if err := m.SetIgnores("default", []string{".*", "quux"}); err != nil {
t.Error(err)
}
if _, err := os.Stat("testdata/.stignore"); os.IsNotExist(err) {
t.Error(".stignore does not exist")
}
if err := m.SetIgnores("default", []string{}); err != nil {
t.Error(err)
}
if _, err := os.Stat("testdata/.stignore"); err == nil {
t.Error(".stignore should have been deleted because it is empty")
}
}
func waitForState(t *testing.T, m *model, folder, status string) {
t.Helper()
timeout := time.Now().Add(2 * time.Second)
@@ -1530,7 +1569,7 @@ func waitForState(t *testing.T, m *model, folder, status string) {
func TestROScanRecovery(t *testing.T) {
testOs := &fatalOs{t}
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
set := db.NewFileSet("default", defaultFs, ldb)
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
{Name: "dummyfile", Version: protocol.Vector{Counters: []protocol.Counter{{ID: 42, Value: 1}}}},
@@ -1555,8 +1594,6 @@ func TestROScanRecovery(t *testing.T) {
testOs.RemoveAll(fcfg.Path)
m := newModel(cfg, myID, "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
m.StartFolder("default")
m.ServeBackground()
defer cleanupModel(m)
@@ -1583,7 +1620,7 @@ func TestROScanRecovery(t *testing.T) {
func TestRWScanRecovery(t *testing.T) {
testOs := &fatalOs{t}
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
set := db.NewFileSet("default", defaultFs, ldb)
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
{Name: "dummyfile", Version: protocol.Vector{Counters: []protocol.Counter{{ID: 42, Value: 1}}}},
@@ -1608,8 +1645,6 @@ func TestRWScanRecovery(t *testing.T) {
testOs.RemoveAll(fcfg.Path)
m := newModel(cfg, myID, "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
m.StartFolder("default")
m.ServeBackground()
defer cleanupModel(m)
@@ -1634,10 +1669,11 @@ func TestRWScanRecovery(t *testing.T) {
}
func TestGlobalDirectoryTree(t *testing.T) {
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
m := newModel(defaultCfgWrapper, myID, "syncthing", "dev", db, nil)
m.AddFolder(defaultFolderConfig)
m.ServeBackground()
m.removeFolder(defaultFolderConfig)
m.addFolder(defaultFolderConfig)
defer cleanupModel(m)
b := func(isfile bool, path ...string) protocol.FileInfo {
@@ -1886,10 +1922,11 @@ func TestGlobalDirectoryTree(t *testing.T) {
}
func TestGlobalDirectorySelfFixing(t *testing.T) {
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
m := newModel(defaultCfgWrapper, myID, "syncthing", "dev", db, nil)
m.AddFolder(defaultFolderConfig)
m.ServeBackground()
m.removeFolder(defaultFolderConfig)
m.addFolder(defaultFolderConfig)
defer cleanupModel(m)
b := func(isfile bool, path ...string) protocol.FileInfo {
@@ -2062,10 +2099,11 @@ func BenchmarkTree_100_10(b *testing.B) {
}
func benchmarkTree(b *testing.B, n1, n2 int) {
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
m := newModel(defaultCfgWrapper, myID, "syncthing", "dev", db, nil)
m.AddFolder(defaultFolderConfig)
m.ServeBackground()
m.removeFolder(defaultFolderConfig)
m.addFolder(defaultFolderConfig)
defer cleanupModel(m)
m.ScanFolder("default")
@@ -2126,7 +2164,7 @@ func TestIssue3028(t *testing.T) {
}
func TestIssue4357(t *testing.T) {
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
cfg := defaultCfgWrapper.RawCopy()
// Create a separate wrapper not to pollute other tests.
wrapper := createTmpWrapper(config.Configuration{})
@@ -2249,7 +2287,7 @@ func TestIssue2782(t *testing.T) {
}
func TestIndexesForUnknownDevicesDropped(t *testing.T) {
dbi := db.OpenMemory()
dbi := db.NewLowlevel(backend.OpenMemory())
files := db.NewFileSet("default", defaultFs, dbi)
files.Drop(device1)
@@ -2262,15 +2300,15 @@ func TestIndexesForUnknownDevicesDropped(t *testing.T) {
}
m := newModel(defaultCfgWrapper, myID, "syncthing", "dev", dbi, nil)
m.AddFolder(defaultFolderConfig)
m.StartFolder("default")
m.addFolder(defaultFolderConfig)
m.startFolder("default")
defer cleanupModel(m)
// Remote sequence is cached, hence it needs to be recreated.
files = db.NewFileSet("default", defaultFs, dbi)
if len(files.ListDevices()) != 1 {
t.Error("Expected one device")
if l := len(files.ListDevices()); l != 1 {
t.Errorf("Expected one device got %v", l)
}
}
@@ -2675,7 +2713,7 @@ func TestInternalScan(t *testing.T) {
func TestCustomMarkerName(t *testing.T) {
testOs := &fatalOs{t}
ldb := db.OpenMemory()
ldb := db.NewLowlevel(backend.OpenMemory())
set := db.NewFileSet("default", defaultFs, ldb)
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
{Name: "dummyfile"},
@@ -2701,8 +2739,6 @@ func TestCustomMarkerName(t *testing.T) {
defer testOs.RemoveAll(fcfg.Path)
m := newModel(cfg, myID, "syncthing", "dev", ldb, nil)
m.AddFolder(fcfg)
m.StartFolder("default")
m.ServeBackground()
defer cleanupModel(m)
@@ -3052,7 +3088,7 @@ func TestPausedFolders(t *testing.T) {
func TestIssue4094(t *testing.T) {
testOs := &fatalOs{t}
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
// Create a separate wrapper not to pollute other tests.
wrapper := createTmpWrapper(config.Configuration{})
m := newModel(wrapper, myID, "syncthing", "dev", db, nil)
@@ -3088,7 +3124,7 @@ func TestIssue4094(t *testing.T) {
func TestIssue4903(t *testing.T) {
testOs := &fatalOs{t}
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
// Create a separate wrapper not to pollute other tests.
wrapper := createTmpWrapper(config.Configuration{})
m := newModel(wrapper, myID, "syncthing", "dev", db, nil)
@@ -3290,7 +3326,7 @@ func TestConnCloseOnRestart(t *testing.T) {
newFcfg.Paused = true
done := make(chan struct{})
go func() {
m.RestartFolder(fcfg, newFcfg)
m.restartFolder(fcfg, newFcfg)
close(done)
}()
select {
@@ -3391,7 +3427,10 @@ func TestDeviceWasSeen(t *testing.T) {
m.deviceWasSeen(device1)
stats := m.DeviceStatistics()
stats, err := m.DeviceStatistics()
if err != nil {
t.Error("Unexpected error:", err)
}
entry := stats[device1.String()]
if time.Since(entry.LastSeen) > time.Second {
t.Error("device should have been seen now")


@@ -7,6 +7,7 @@
package model
import (
"context"
"fmt"
"time"
@@ -22,6 +23,7 @@ import (
type ProgressEmitter struct {
suture.Service
cfg config.Wrapper
registry map[string]map[string]*sharedPullerState // folder: name: puller
interval time.Duration
minBlocks int
@@ -39,6 +41,7 @@ type ProgressEmitter struct {
// DownloadProgress events every interval.
func NewProgressEmitter(cfg config.Wrapper, evLogger events.Logger) *ProgressEmitter {
t := &ProgressEmitter{
cfg: cfg,
registry: make(map[string]map[string]*sharedPullerState),
timer: time.NewTimer(time.Millisecond),
sentDownloadStates: make(map[protocol.DeviceID]*sentDownloadState),
@@ -47,22 +50,24 @@ func NewProgressEmitter(cfg config.Wrapper, evLogger events.Logger) *ProgressEmi
evLogger: evLogger,
mut: sync.NewMutex(),
}
t.Service = util.AsService(t.serve)
t.Service = util.AsService(t.serve, t.String())
t.CommitConfiguration(config.Configuration{}, cfg.RawCopy())
cfg.Subscribe(t)
return t
}
// serve starts the progress emitter which starts emitting DownloadProgress
// events as the progress happens.
func (t *ProgressEmitter) serve(stop chan struct{}) {
func (t *ProgressEmitter) serve(ctx context.Context) {
t.cfg.Subscribe(t)
defer t.cfg.Unsubscribe(t)
var lastUpdate time.Time
var lastCount, newCount int
for {
select {
case <-stop:
case <-ctx.Done():
l.Debugln("progress emitter: stopping")
return
case <-t.timer.C:
@@ -84,7 +89,7 @@ func (t *ProgressEmitter) serve(stop chan struct{}) {
lastCount = newCount
t.sendDownloadProgressEventLocked()
if len(t.connections) > 0 {
t.sendDownloadProgressMessagesLocked()
t.sendDownloadProgressMessagesLocked(ctx)
}
} else {
l.Debugln("progress emitter: nothing new")
@@ -113,7 +118,7 @@ func (t *ProgressEmitter) sendDownloadProgressEventLocked() {
l.Debugf("progress emitter: emitting %#v", output)
}
func (t *ProgressEmitter) sendDownloadProgressMessagesLocked() {
func (t *ProgressEmitter) sendDownloadProgressMessagesLocked(ctx context.Context) {
for id, conn := range t.connections {
for _, folder := range t.foldersByConns[id] {
pullers, ok := t.registry[folder]
@@ -144,7 +149,7 @@ func (t *ProgressEmitter) sendDownloadProgressMessagesLocked() {
updates := state.update(folder, activePullers)
if len(updates) > 0 {
conn.DownloadProgress(folder, updates)
conn.DownloadProgress(ctx, folder, updates)
}
}
}
@@ -310,7 +315,7 @@ func (t *ProgressEmitter) clearLocked() {
}
for _, folder := range state.folders() {
if updates := state.cleanup(folder); len(updates) > 0 {
conn.DownloadProgress(folder, updates)
conn.DownloadProgress(context.Background(), folder, updates)
}
}
}


@@ -7,6 +7,7 @@
package model
import (
"context"
"fmt"
"os"
"path/filepath"
@@ -66,6 +67,7 @@ func TestProgressEmitter(t *testing.T) {
p := NewProgressEmitter(c, evLogger)
go p.Serve()
defer p.Stop()
p.interval = 0
expectTimeout(w, t)
@@ -460,5 +462,5 @@ func TestSendDownloadProgressMessages(t *testing.T) {
func sendMsgs(p *ProgressEmitter) {
p.mut.Lock()
defer p.mut.Unlock()
p.sendDownloadProgressMessagesLocked()
p.sendDownloadProgressMessagesLocked(context.Background())
}


@@ -8,6 +8,7 @@ package model
import (
"bytes"
"context"
"errors"
"io/ioutil"
"os"
@@ -37,7 +38,7 @@ func TestRequestSimple(t *testing.T) {
// the expected test file.
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Error("More than one index update sent")
@@ -79,7 +80,7 @@ func TestSymlinkTraversalRead(t *testing.T) {
// the expected test file.
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Error("More than one index update sent")
@@ -124,7 +125,7 @@ func TestSymlinkTraversalWrite(t *testing.T) {
badReq := make(chan string, 1)
badIdx := make(chan string, 1)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
for _, f := range fs {
if f.Name == "symlink" {
done <- struct{}{}
@@ -136,7 +137,7 @@ func TestSymlinkTraversalWrite(t *testing.T) {
}
}
}
fc.requestFn = func(folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error) {
fc.requestFn = func(_ context.Context, folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error) {
if name != "symlink" && strings.HasPrefix(name, "symlink") {
badReq <- name
}
@@ -182,7 +183,7 @@ func TestRequestCreateTmpSymlink(t *testing.T) {
goodIdx := make(chan struct{})
name := fs.TempName("testlink")
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
for _, f := range fs {
if f.Name == name {
if f.IsInvalid() {
@@ -239,7 +240,7 @@ func TestRequestVersioningSymlinkAttack(t *testing.T) {
// the expected test file.
idx := make(chan int)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
idx <- len(fs)
}
fc.mut.Unlock()
@@ -301,8 +302,8 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
m := setupModel(w)
defer cleanupModelAndRemoveDir(m, fss.URI())
m.RemoveFolder(fcfg)
m.AddFolder(fcfg)
m.removeFolder(fcfg)
m.addFolder(fcfg)
// Reach in and update the ignore matcher to one that always does
// reloads when asked to, instead of checking file mtimes. This is
// because we might be changing the files on disk often enough that the
@@ -310,7 +311,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
m.fmut.Lock()
m.folderIgnores["default"] = ignore.New(fss, ignore.WithChangeDetector(newAlwaysChanged()))
m.fmut.Unlock()
m.StartFolder(fcfg.ID)
m.startFolder(fcfg.ID)
fc := addFakeConn(m, device1)
fc.folder = "default"
@@ -338,7 +339,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
expected := map[string]struct{}{invIgn: {}, ign: {}, ignExisting: {}}
for _, f := range fs {
if _, ok := expected[f.Name]; !ok {
@@ -374,7 +375,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
// The indexes will normally arrive in one update, but it is possible
// that they arrive in separate ones.
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
for _, f := range fs {
if _, ok := expected[f.Name]; !ok {
t.Errorf("Unexpected file %v was updated in index", f.Name)
@@ -411,7 +412,7 @@ func pullInvalidIgnored(t *testing.T, ft config.FolderType) {
}
// Make sure pulling doesn't interfere, as index updates are racy and
// thus we cannot distinguish between scan and pull results.
fc.requestFn = func(folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error) {
fc.requestFn = func(_ context.Context, folder, name string, offset int64, size int, hash []byte, fromTemporary bool) ([]byte, error) {
return nil, nil
}
fc.mut.Unlock()
@@ -433,7 +434,7 @@ func TestIssue4841(t *testing.T) {
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(_ string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, _ string, fs []protocol.FileInfo) {
received <- fs
}
fc.mut.Unlock()
@@ -482,7 +483,7 @@ func TestRescanIfHaveInvalidContent(t *testing.T) {
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(_ string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, _ string, fs []protocol.FileInfo) {
received <- fs
}
fc.mut.Unlock()
@@ -549,7 +550,7 @@ func TestParentDeletion(t *testing.T) {
fc.addFile(parent, 0777, protocol.FileInfoTypeDirectory, nil)
fc.addFile(child, 0777, protocol.FileInfoTypeDirectory, nil)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
received <- fs
}
fc.mut.Unlock()
@@ -622,7 +623,7 @@ func TestRequestSymlinkWindows(t *testing.T) {
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-received:
t.Error("More than one index update sent")
@@ -692,7 +693,7 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
received := make(chan []protocol.FileInfo)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-received:
t.Error("More than one index update sent")
@@ -732,7 +733,7 @@ func TestRequestRemoteRenameChanged(t *testing.T) {
bFinalVersion := bIntermediateVersion.Copy().Update(fc.id.Short())
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Error("Received more index updates than expected")
@@ -833,7 +834,7 @@ func TestRequestRemoteRenameConflict(t *testing.T) {
recv := make(chan int)
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
recv <- len(fs)
}
fc.mut.Unlock()
@@ -923,7 +924,7 @@ func TestRequestDeleteChanged(t *testing.T) {
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Error("More than one index update sent")
@@ -946,7 +947,7 @@ func TestRequestDeleteChanged(t *testing.T) {
}
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
select {
case <-done:
t.Error("More than one index update sent")
@@ -996,7 +997,7 @@ func TestNeedFolderFiles(t *testing.T) {
errPreventSync := errors.New("you aren't getting any of this")
fc.mut.Lock()
fc.requestFn = func(string, string, int64, int, []byte, bool) ([]byte, error) {
fc.requestFn = func(context.Context, string, string, int64, int, []byte, bool) ([]byte, error) {
return nil, errPreventSync
}
fc.mut.Unlock()
@@ -1038,8 +1039,8 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
tmpDir := fss.URI()
defer cleanupModelAndRemoveDir(m, tmpDir)
m.RemoveFolder(fcfg)
m.AddFolder(fcfg)
m.removeFolder(fcfg)
m.addFolder(fcfg)
// Reach in and update the ignore matcher to one that always does
// reloads when asked to, instead of checking file mtimes. This is
// because we might be changing the files on disk often enough that the
@@ -1047,7 +1048,7 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
m.fmut.Lock()
m.folderIgnores["default"] = ignore.New(fss, ignore.WithChangeDetector(newAlwaysChanged()))
m.fmut.Unlock()
m.StartFolder(fcfg.ID)
m.startFolder(fcfg.ID)
fc := addFakeConn(m, device1)
fc.folder = "default"
@@ -1068,7 +1069,7 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
done := make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
basicCheck(fs)
close(done)
}
@@ -1087,7 +1088,7 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
done = make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
basicCheck(fs)
f := fs[0]
if !f.IsInvalid() {
@@ -1109,7 +1110,7 @@ func TestIgnoreDeleteUnignore(t *testing.T) {
done = make(chan struct{})
fc.mut.Lock()
fc.indexFn = func(folder string, fs []protocol.FileInfo) {
fc.indexFn = func(_ context.Context, folder string, fs []protocol.FileInfo) {
basicCheck(fs)
f := fs[0]
if f.IsInvalid() {


@@ -13,6 +13,7 @@ import (
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/db/backend"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/protocol"
@@ -102,15 +103,9 @@ func setupModelWithConnectionFromWrapper(w config.Wrapper) (*model, *fakeConnect
}
func setupModel(w config.Wrapper) *model {
db := db.OpenMemory()
db := db.NewLowlevel(backend.OpenMemory())
m := newModel(w, myID, "syncthing", "dev", db, nil)
m.ServeBackground()
for id, cfg := range w.Folders() {
if !cfg.Paused {
m.AddFolder(cfg)
m.StartFolder(id)
}
}
m.ScanFolders()


@@ -7,11 +7,12 @@
package nat
import (
"context"
"sync"
"time"
)
type DiscoverFunc func(renewal, timeout time.Duration) []Device
type DiscoverFunc func(ctx context.Context, renewal, timeout time.Duration) []Device
var providers []DiscoverFunc
@@ -19,7 +20,7 @@ func Register(provider DiscoverFunc) {
providers = append(providers, provider)
}
func discoverAll(renewal, timeout time.Duration, stop chan struct{}) map[string]Device {
func discoverAll(ctx context.Context, renewal, timeout time.Duration) map[string]Device {
wg := &sync.WaitGroup{}
wg.Add(len(providers))
@@ -29,10 +30,10 @@ func discoverAll(renewal, timeout time.Duration, stop chan struct{}) map[string]
for _, discoverFunc := range providers {
go func(f DiscoverFunc) {
defer wg.Done()
for _, dev := range f(renewal, timeout) {
for _, dev := range f(ctx, renewal, timeout) {
select {
case c <- dev:
case <-stop:
case <-ctx.Done():
return
}
}
@@ -50,7 +51,7 @@ func discoverAll(renewal, timeout time.Duration, stop chan struct{}) map[string]
return
}
nats[dev.ID()] = dev
case <-stop:
case <-ctx.Done():
return
}
}
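
discoverAll fans out to every registered provider, funnels results through a channel, and aborts both the producers and the collector on ctx.Done(). The same shape in a stand-alone sketch (the device type and provider list are stand-ins, not the nat package API):

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type device string

type discoverFunc func(ctx context.Context) []device

func discoverAll(ctx context.Context, providers []discoverFunc) map[device]bool {
	wg := &sync.WaitGroup{}
	wg.Add(len(providers))
	c := make(chan device)

	for _, f := range providers {
		go func(f discoverFunc) {
			defer wg.Done()
			for _, dev := range f(ctx) {
				select {
				case c <- dev:
				case <-ctx.Done():
					return // stop feeding results once cancelled
				}
			}
		}(f)
	}

	go func() { wg.Wait(); close(c) }() // close when all providers finish

	found := make(map[device]bool)
	for {
		select {
		case dev, ok := <-c:
			if !ok {
				return found
			}
			found[dev] = true
		case <-ctx.Done():
			return found
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	providers := []discoverFunc{
		func(context.Context) []device { return []device{"igd-1"} },
		func(context.Context) []device { return []device{"igd-2"} },
	}
	fmt.Println(discoverAll(ctx, providers))
}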


@@ -7,6 +7,7 @@
package nat
import (
"context"
"fmt"
"hash/fnv"
"math/rand"
@@ -43,11 +44,11 @@ func NewService(id protocol.DeviceID, cfg config.Wrapper) *Service {
timer: time.NewTimer(0),
mut: sync.NewRWMutex(),
}
s.Service = util.AsService(s.serve)
s.Service = util.AsService(s.serve, s.String())
return s
}
func (s *Service) serve(stop chan struct{}) {
func (s *Service) serve(ctx context.Context) {
announce := stdsync.Once{}
s.mut.Lock()
@@ -57,7 +58,7 @@ func (s *Service) serve(stop chan struct{}) {
for {
select {
case <-s.timer.C:
if found := s.process(stop); found != -1 {
if found := s.process(ctx); found != -1 {
announce.Do(func() {
suffix := "s"
if found == 1 {
@@ -66,7 +67,7 @@ func (s *Service) serve(stop chan struct{}) {
l.Infoln("Detected", found, "NAT service"+suffix)
})
}
case <-stop:
case <-ctx.Done():
s.timer.Stop()
s.mut.RLock()
for _, mapping := range s.mappings {
@@ -78,7 +79,7 @@ func (s *Service) serve(stop chan struct{}) {
}
}
func (s *Service) process(stop chan struct{}) int {
func (s *Service) process(ctx context.Context) int {
// toRenew are mappings which are due for renewal
// toUpdate are the remaining mappings, which will only be updated if one of
// the old IGDs has gone away, or a new IGD has appeared, but only if we
@@ -120,14 +121,14 @@ func (s *Service) process(stop chan struct{}) int {
return -1
}
nats := discoverAll(time.Duration(s.cfg.Options().NATRenewalM)*time.Minute, time.Duration(s.cfg.Options().NATTimeoutS)*time.Second, stop)
nats := discoverAll(ctx, time.Duration(s.cfg.Options().NATRenewalM)*time.Minute, time.Duration(s.cfg.Options().NATTimeoutS)*time.Second)
for _, mapping := range toRenew {
s.updateMapping(mapping, nats, true, stop)
s.updateMapping(ctx, mapping, nats, true)
}
for _, mapping := range toUpdate {
s.updateMapping(mapping, nats, false, stop)
s.updateMapping(ctx, mapping, nats, false)
}
return len(nats)
@@ -177,17 +178,17 @@ func (s *Service) RemoveMapping(mapping *Mapping) {
// acquire mappings for natds which the mapping was unaware of before.
// Optionally takes a renew flag which indicates whether or not we should renew
// mappings with existing natds
func (s *Service) updateMapping(mapping *Mapping, nats map[string]Device, renew bool, stop chan struct{}) {
func (s *Service) updateMapping(ctx context.Context, mapping *Mapping, nats map[string]Device, renew bool) {
var added, removed []Address
renewalTime := time.Duration(s.cfg.Options().NATRenewalM) * time.Minute
mapping.expires = time.Now().Add(renewalTime)
newAdded, newRemoved := s.verifyExistingMappings(mapping, nats, renew, stop)
newAdded, newRemoved := s.verifyExistingMappings(ctx, mapping, nats, renew)
added = append(added, newAdded...)
removed = append(removed, newRemoved...)
newAdded, newRemoved = s.acquireNewMappings(mapping, nats, stop)
newAdded, newRemoved = s.acquireNewMappings(ctx, mapping, nats)
added = append(added, newAdded...)
removed = append(removed, newRemoved...)
@@ -196,14 +197,14 @@ func (s *Service) updateMapping(mapping *Mapping, nats map[string]Device, renew
}
}
func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Device, renew bool, stop chan struct{}) ([]Address, []Address) {
func (s *Service) verifyExistingMappings(ctx context.Context, mapping *Mapping, nats map[string]Device, renew bool) ([]Address, []Address) {
var added, removed []Address
leaseTime := time.Duration(s.cfg.Options().NATLeaseM) * time.Minute
for id, address := range mapping.addressMap() {
select {
case <-stop:
case <-ctx.Done():
return nil, nil
default:
}
@@ -225,7 +226,7 @@ func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Devic
l.Debugf("Renewing %s -> %s mapping on %s", mapping, address, id)
addr, err := s.tryNATDevice(nat, mapping.address.Port, address.Port, leaseTime, stop)
addr, err := s.tryNATDevice(ctx, nat, mapping.address.Port, address.Port, leaseTime)
if err != nil {
l.Debugf("Failed to renew %s -> mapping on %s", mapping, address, id)
mapping.removeAddress(id)
@@ -247,7 +248,7 @@ func (s *Service) verifyExistingMappings(mapping *Mapping, nats map[string]Devic
return added, removed
}
func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device, stop chan struct{}) ([]Address, []Address) {
func (s *Service) acquireNewMappings(ctx context.Context, mapping *Mapping, nats map[string]Device) ([]Address, []Address) {
var added, removed []Address
leaseTime := time.Duration(s.cfg.Options().NATLeaseM) * time.Minute
@@ -255,7 +256,7 @@ func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device, s
for id, nat := range nats {
select {
case <-stop:
case <-ctx.Done():
return nil, nil
default:
}
@@ -274,7 +275,7 @@ func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device, s
l.Debugf("Acquiring %s mapping on %s", mapping, id)
addr, err := s.tryNATDevice(nat, mapping.address.Port, 0, leaseTime, stop)
addr, err := s.tryNATDevice(ctx, nat, mapping.address.Port, 0, leaseTime)
if err != nil {
l.Debugf("Failed to acquire %s mapping on %s", mapping, id)
continue
@@ -291,7 +292,7 @@ func (s *Service) acquireNewMappings(mapping *Mapping, nats map[string]Device, s
// tryNATDevice tries to acquire a port mapping for the given internal address to
// the given external port. If external port is 0, picks a pseudo-random port.
func (s *Service) tryNATDevice(natd Device, intPort, extPort int, leaseTime time.Duration, stop chan struct{}) (Address, error) {
func (s *Service) tryNATDevice(ctx context.Context, natd Device, intPort, extPort int, leaseTime time.Duration) (Address, error) {
var err error
var port int
@@ -313,7 +314,7 @@ func (s *Service) tryNATDevice(natd Device, intPort, extPort int, leaseTime time
for i := 0; i < 10; i++ {
select {
case <-stop:
case <-ctx.Done():
return Address{}, nil
default:
}
@@ -343,6 +344,10 @@ findIP:
}, nil
}
func (s *Service) String() string {
return fmt.Sprintf("nat.Service@%p", s)
}
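
tryNATDevice checks for cancellation at the top of each retry attempt with a select that has a default branch, so the loop never blocks waiting on ctx.Done(). The pattern in isolation (the attempt itself is a stand-in):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// tryPorts imitates the retry loop above: a non-blocking cancellation
// check between attempts, then fall through to the attempt itself.
func tryPorts(ctx context.Context) error {
	for i := 0; i < 10; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // cancelled between attempts
		default:
			// not cancelled; proceed with this attempt
		}
		fmt.Println("trying pseudo-random port, attempt", i)
		time.Sleep(10 * time.Millisecond)
	}
	return errors.New("no mapping acquired")
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Millisecond)
	defer cancel()
	fmt.Println(tryPorts(ctx))
}
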
func hash(input string) int64 {
h := fnv.New64a()
h.Write([]byte(input))

Some files were not shown because too many files have changed in this diff.