Compare commits

..

62 Commits

Author SHA1 Message Date
Jakob Borg
84eb729bd4 Don't start the browser on restarts (fixes #636) 2014-09-06 07:35:30 +02:00
Jakob Borg
14aea365c5 Don't stop permanently on exit (fixes #637) 2014-09-06 07:28:57 +02:00
Jakob Borg
97cb3fa5a5 Translation update (add Catalan) 2014-09-05 14:24:20 +02:00
Jakob Borg
b5368db704 Update assets 2014-09-05 13:26:17 +02:00
Jakob Borg
8c442b72f3 Merge remote-tracking branch 'origin/pr/634'
* origin/pr/634:
  Removed unused `optionEditor` directive from app.js
  Removed unused `clean` filter from app.js.
  Removed unused `shortPath` filter from app.js.
  Removed unused `short` filter from app.js.
2014-09-05 13:25:53 +02:00
Jakob Borg
f8f6791d39 Add pyfisch 2014-09-05 13:25:40 +02:00
Pyfisch
0c09f077aa Removed unused optionEditor directive from app.js 2014-09-05 12:42:52 +02:00
Pyfisch
af2831d7b6 Removed unused clean filter from app.js. 2014-09-05 12:40:45 +02:00
Pyfisch
64d5d4aec7 Removed unused shortPath filter from app.js. 2014-09-05 12:39:35 +02:00
Pyfisch
619a6b2adb Removed unused short filter from app.js. 2014-09-05 12:38:21 +02:00
Jakob Borg
33a26bc0cf Merge pull request #631 from AudriusButkevicius/upnp
Check if we had successfully acquired a UPnP mapping before (fixes #627)
2014-09-05 09:09:23 +02:00
Audrius Butkevicius
b445a7c4d3 Check if we had successfully acquired a UPnP mapping before (fixes #627) 2014-09-04 23:02:10 +01:00
Jakob Borg
e6892d0c3e Autogen warning in lang dir 2014-09-04 23:37:23 +02:00
Jakob Borg
33e9a88b56 Proper signal handling in monitor process 2014-09-04 23:31:22 +02:00
Jakob Borg
df00a2251e Pesky copyright is pesky 2014-09-04 22:33:01 +02:00
Jakob Borg
92c44c8abe Rework .stignore functionality (fixes #561) (...)
- Only one .stignore is supported, at the repo root
- Negative patterns (!) are supported
- Ignore patterns affect sent and received indexes, not only scanning
2014-09-04 22:30:42 +02:00
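
For context on the new rules, a hypothetical .stignore placed at the repo root might look like this (patterns invented for illustration; per the notes above, the `!` prefix marks a negative pattern, and this file is now the only one honored):

    !photos/keep-this.txt
    *.tmp
    build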
Jakob Borg
8e4f7bbd3e Merge pull request #626 from alex2108/master
staggered versioner: count directories as files (fixes #607)
2014-09-04 21:59:38 +02:00
Jakob Borg
a40217cf07 Trim dead bits of code 2014-09-04 22:07:59 +02:00
Jakob Borg
e586fda5f2 Woops, close the right fd 2014-09-04 22:03:25 +02:00
Alexander Graf
a58564ff88 count directories as files (fixes #607) 2014-09-04 16:48:24 +02:00
Jakob Borg
89885b9fb9 Clean up GUI directory 2014-09-04 08:53:28 +02:00
Jakob Borg
5c7d977ae0 Use woff instead of ttf font 2014-09-04 08:47:23 +02:00
Jakob Borg
2cd3ee9698 Dead code cleanup 2014-09-04 08:39:39 +02:00
Jakob Borg
dd3080e018 Copyright cleanup 2014-09-04 08:31:38 +02:00
Jakob Borg
5915e8e86a Don't trust mime.TypeByExtension for the easy stuff (fixes #598) 2014-09-04 08:26:12 +02:00
Jakob Borg
3c67c06654 Merge pull request #619 from marcindziadus/sorting-order
Change sorting order (fix #618)
2014-09-03 23:26:20 +02:00
Marcin
76232ca573 change sorting order 2014-09-03 18:41:45 +02:00
Jakob Borg
5235e82bda Limit number of open db files (fixes #587) 2014-09-02 14:47:36 +02:00
Jakob Borg
10f0713257 Use a monitor process to handle panics and restarts (fixes #586) 2014-09-02 13:24:41 +02:00
Jakob Borg
e9c7970ea4 Only create assets map on demand 2014-09-02 13:07:33 +02:00
Jakob Borg
1a6ac4aeb1 Integration tests should use v4 localhost 2014-09-02 12:10:18 +02:00
Jakob Borg
f633bdddf0 Update goleveldb 2014-09-02 09:44:07 +02:00
Jakob Borg
de0b91d157 Show IPv6 GUI URL correctly 2014-09-01 20:04:22 +02:00
Jakob Borg
2e77e498f5 Use more compact base64 encoding for assets 2014-09-01 20:04:22 +02:00
Jakob Borg
4ac67eb1f9 Merge pull request #589 from AudriusButkevicius/include
Add #include directive to .stignore (fixes #424)
2014-09-01 18:08:53 +02:00
Jakob Borg
2b536de37f Don't fake indexes for stopped repos 2014-09-01 17:48:39 +02:00
Jakob Borg
2ffa92ba1b Warn on startup for stopped repositories 2014-09-01 17:47:18 +02:00
Jakob Borg
6ecddd8388 Don't fail build on Solaris 2014-09-01 17:26:28 +02:00
Jakob Borg
bd2772ea4c If all instances of the global version is invalid, the file should not be on the need list 2014-09-01 09:07:51 +02:00
Audrius Butkevicius
92bf79d53b Fix tests 2014-08-31 22:34:13 +01:00
Audrius Butkevicius
eebe0eeb71 Handle recursive includes 2014-08-31 22:33:49 +01:00
Jakob Borg
1068eaa0b9 Translation update 2014-08-31 21:52:29 +02:00
Jakob Borg
faac3e7d7c Don't clobber staggeredMaxAge = 0 (fixes #604) 2014-08-31 21:44:06 +02:00
Jakob Borg
dab4340207 Merge pull request #603 from AudriusButkevicius/restart
Fix GUI breaking during restarts (fixes #577)
2014-08-31 21:30:51 +02:00
Audrius Butkevicius
fd2567748f Fix GUI breaking during restarts (fixes #577) 2014-08-31 15:49:08 +01:00
Jakob Borg
c2daedbd11 Try not to crash the box with failing tests 2014-08-31 15:36:05 +01:00
Jakob Borg
7c604beb73 Test cases for ignore #include 2014-08-31 15:35:48 +01:00
Audrius Butkevicius
8c42aea827 Add #include directive to .stignore (fixes #424)
Though breaks #502 in a way, as .stignore is not the only place where
stuff gets defined anymore.

Though it never was, as .stignore can be placed in each dir, but I think we
should phase that out in favor of globbing which means that we can then
have a single file, which means that we can have a UI for editing that.

Alternative would be as suggested to include a .stglobalignore which is then synced
as a normal file, but gets included by default.

Then when the UI would have two editors, a local ignore, and a global ignore.
2014-08-31 15:32:22 +01:00
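
As an illustration of the directive itself, an .stignore can now pull shared patterns from another file (the included file name here is made up; per commit eebe0eeb71 above, included files may themselves contain #include lines):

    #include extra-ignores.txt
    *.bak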
Jakob Borg
cf1bfdfb61 Hold rmut read lock when looking at nodeStatRefs 2014-08-31 13:48:43 +02:00
Jakob Borg
75b26513e1 Don't crash under suspicious circumstances... (fixes #602) 2014-08-31 13:48:16 +02:00
Jakob Borg
6c09a77a97 Clean out index for nonexistent repositories (fixes #549) 2014-08-31 13:34:17 +02:00
Jakob Borg
67389c39fb For now, don't allow changing repo path (ref #549) 2014-08-31 13:05:08 +02:00
Jakob Borg
c326103e6e Add X-Syncthing-Version header to HTTP responses 2014-08-31 12:59:20 +02:00
Jakob Borg
c2120a16da Try to set some reasonable resource limits when running tests 2014-08-30 10:02:10 +02:00
Jakob Borg
258ad4352e Fix connecting to discovered IPv6 address 2014-08-29 17:18:25 +02:00
Jakob Borg
435d3958f4 Update goleveldb 2014-08-29 12:36:45 +02:00
Jakob Borg
b0408ef5c6 Info line formatting (ref #583) 2014-08-28 21:35:55 +02:00
Jakob Borg
1c41b0bc2f Document GOMAXPROCS instead of (useless) STDEADLOCKTIMEOUT 2014-08-28 15:29:49 +02:00
Jakob Borg
aa827f3042 Fix language detection, never show untranslated strings (fixes #543) 2014-08-28 13:23:23 +02:00
Audrius Butkevicius
f44f5964bb Set rescan interval on default repository (fixes #579) 2014-08-27 23:45:09 +01:00
Audrius Butkevicius
91ba93bd7a Merge pull request #571 from syncthing/recheck
Add routine for checking possible standby (fixes #565)
2014-08-27 22:44:36 +01:00
Audrius Butkevicius
0abe4cefb4 Add routine for checking possible standby (fixes #565) 2014-08-27 22:42:59 +01:00
130 changed files with 2812 additions and 1693 deletions

View File

@@ -9,6 +9,7 @@ Gilli Sigurdsson <gilli@vx.is>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Marcin Dziadus <dziadus.marcin@gmail.com>
Michael Tilli <pyfisch@gmail.com>
Philippe Schommers <philippe@schommers.be>
Ryan Sullivan <kayoticsully@gmail.com>
Tully Robinson <tully@tojr.org>

Godeps/Godeps.json generated
View File

@@ -41,7 +41,7 @@
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "e1714bbe4764b15490fcc8ebd25d4bd9ea50a4b9"
"Rev": "a597b63b87d6140f79084c8aab214b4d533833a1"
},
{
"ImportPath": "github.com/juju/ratelimit",
@@ -49,7 +49,7 @@
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "17fd8940e0f778c27793a25bff8c48ddd7bf53ac"
"Rev": "2b99e8d4757bf06eeab1b0485d80b8ae1c088874"
},
{
"ImportPath": "github.com/vitrun/qart/coding",

View File

@@ -22,7 +22,7 @@ func (r *Reader) ReadUint8() uint8 {
}
if debug {
dl.Printf("rd uint8=%d (0x%08x)", r.b[0], r.b[0])
dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
}
return r.b[0]
}
@@ -43,7 +43,7 @@ func (r *Reader) ReadUint16() uint16 {
v := uint16(r.b[1]) | uint16(r.b[0])<<8
if debug {
dl.Printf("rd uint16=%d (0x%08x)", v, v)
dl.Printf("rd uint16=%d (0x%04x)", v, v)
}
return v
}
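
The two hunks above narrow the debug printf hex width to match the integer's size: two hex digits for a uint8, four for a uint16. A standalone sketch of the difference:

    package main

    import "fmt"

    func main() {
        var b uint8 = 0x0a
        var v uint16 = 0x0a0b
        fmt.Printf("rd uint8=%d (0x%02x)\n", b, b)  // 0x0a rather than 0x0000000a
        fmt.Printf("rd uint16=%d (0x%04x)\n", v, v) // 0x0a0b rather than 0x00000a0b
    }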

View File

@@ -249,7 +249,9 @@ func (p *dbBench) newIter() iterator.Iterator {
}
func (p *dbBench) close() {
p.b.Log(p.db.s.tops.bpool)
if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
p.b.Log("Block pool stats: ", bp)
}
p.db.Close()
p.stor.Close()
os.RemoveAll(benchDB)

View File

@@ -11,84 +11,117 @@ import (
"sync/atomic"
)
// SetFunc used by Namespace.Get method to create a cache object. SetFunc
// may return ok false, in that case the cache object will not be created.
type SetFunc func() (ok bool, value interface{}, charge int, fin SetFin)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object, if charge is less than one than the cache object will
// not be registered to cache tree, if value is nil then the cache object
// will not be created.
type SetFunc func() (charge int, value interface{})
// SetFin will be called when corresponding cache object are released.
type SetFin func()
// DelFin is the function that will be called as the result of a delete operation.
// Exist == true is indication that the object is exist, and pending == true is
// indication of deletion already happen but haven't done yet (wait for all handles
// to be released). And exist == false means the object doesn't exist.
type DelFin func(exist, pending bool)
// DelFin will be called when corresponding cache object are released.
// DelFin will be called after SetFin. The exist is true if the corresponding
// cache object is actually exist in the cache tree.
type DelFin func(exist bool)
// PurgeFin will be called when corresponding cache object are released.
// PurgeFin will be called after SetFin. If PurgeFin present DelFin will
// not be executed but passed to the PurgeFin, it is up to the caller
// to call it or not.
type PurgeFin func(ns, key uint64, delfin DelFin)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)
// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache capacity.
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)
// GetNamespace gets or creates a cache namespace for the given id.
// Capacity returns cache tree capacity.
Capacity() int
// Used returns used cache tree capacity.
Used() int
// Size returns entire alive cache objects size.
Size() int
// NumObjects returns number of alive objects.
NumObjects() int
// GetNamespace gets cache namespace with the given id.
// GetNamespace is never return nil.
GetNamespace(id uint64) Namespace
// Purge purges all cache namespaces, read Namespace.Purge method documentation.
// PurgeNamespace purges cache namespace with the given id from this cache tree.
// Also read Namespace.Purge.
PurgeNamespace(id uint64, fin PurgeFin)
// ZapNamespace detaches cache namespace with the given id from this cache tree.
// Also read Namespace.Zap.
ZapNamespace(id uint64)
// Purge purges all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Purge method on all cache namespace.
Purge(fin PurgeFin)
// Zap zaps all cache namespaces, read Namespace.Zap method documentation.
Zap(closed bool)
// Zap detaches all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Zap method on all cache namespace.
Zap()
}
// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets cache object for the given key. The given SetFunc (if not nil) will
// be called if the given key does not exist.
// If the given key does not exist, SetFunc is nil or SetFunc return ok false, Get
// will return ok false.
Get(key uint64, setf SetFunc) (obj Object, ok bool)
// Get deletes cache object for the given key. If exist the cache object will
// be deleted later when all of its handles have been released (i.e. no one use
// it anymore) and the given DelFin (if not nil) will finally be executed. If
// such cache object does not exist the given DelFin will be executed anyway.
// Get gets cache object with the given key.
// If cache object is not found and setf is not nil, Get will atomically creates
// the cache object by calling setf. Otherwise Get will returns nil.
//
// Delete returns true if such cache object exist.
// The returned cache handle should be released after use by calling Release
// method.
Get(key uint64, setf SetFunc) Handle
// Delete removes cache object with the given key from cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happen once, subsequent delete will consider cache object doesn't
// exist, even if the cache object ins't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when
// finally be released.
//
// Delete returns true if such cache object exist and never been deleted.
Delete(key uint64, fin DelFin) bool
// Purge deletes all cache objects, read Delete method documentation.
// Purge removes all cache objects within this namespace from cache tree.
// This is the same as doing delete on all cache objects.
//
// If not nil, fin will be called on all cache objects when its finally be
// released.
Purge(fin PurgeFin)
// Zap detaches the namespace from the cache tree and delete all its cache
// objects. The cache objects deletion and finalizers execution are happen
// immediately, even if its existing handles haven't yet been released.
// A zapped namespace can't never be filled again.
// If closed is false then the Get function will always call the given SetFunc
// if it is not nil, but resultant of the SetFunc will not be cached.
Zap(closed bool)
// Zap detaches namespace from cache tree and release all its cache objects.
// A zapped namespace can never be filled again.
// Calling Get on zapped namespace will always return nil.
Zap()
}
// Object is a cache object.
type Object interface {
// Release releases the cache object. Other methods should not be called
// after the cache object has been released.
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called mutiple
// times.
Release()
// Value returns value of the cache object.
// Value returns value of this cache handle.
// Value will returns nil after this cache handle have be released.
Value() interface{}
}
const (
DelNotExist = iota
DelExist
DelPendig
)
// Namespace state.
type nsState int
const (
nsEffective nsState = iota
nsZapped
nsClosed
)
// Node state.
@@ -97,29 +130,29 @@ type nodeState int
const (
nodeEffective nodeState = iota
nodeEvicted
nodeRemoved
nodeDeleted
)
// Fake object.
type fakeObject struct {
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
}
func (o *fakeObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.value
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
}
return nil
}
func (o *fakeObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *fakeHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
if o.fin != nil {
o.fin()
o.fin = nil
if h.fin != nil {
h.fin()
h.fin = nil
}
}
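
Pulling the reworked API together, a minimal usage sketch, assuming this is the vendored goleveldb cache package (key and value are invented):

    package main

    import (
        "fmt"

        "github.com/syndtr/goleveldb/leveldb/cache"
    )

    func main() {
        c := cache.NewLRUCache(100) // capacity in charge units
        ns := c.GetNamespace(1)

        // On a miss, Get atomically creates the object via the SetFunc.
        // A nil value skips creation; charge < 1 creates the object but
        // keeps it out of the cache tree.
        h := ns.Get(42, func() (charge int, value interface{}) {
            return 1, "hello"
        })
        if h != nil {
            fmt.Println(h.Value().(string)) // "hello"
            h.Release()                     // safe to call multiple times
        }

        // The DelFin fires once the object is fully released.
        ns.Delete(42, func(exist, pending bool) {
            fmt.Println("deleted, existed:", exist, "pending:", pending)
        })
    }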

View File

@@ -7,15 +7,35 @@
package cache
import (
"fmt"
"math/rand"
"runtime"
"strings"
"sync"
"sync/atomic"
"testing"
"time"
)
func set(ns Namespace, key uint64, value interface{}, charge int, fin func()) Object {
obj, _ := ns.Get(key, func() (bool, interface{}, int, SetFin) {
return true, value, charge, fin
type releaserFunc struct {
fn func()
value interface{}
}
func (r releaserFunc) Release() {
if r.fn != nil {
r.fn()
}
}
func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
return charge, value
}
})
return obj
}
func TestCache_HitMiss(t *testing.T) {
@@ -43,29 +63,31 @@ func TestCache_HitMiss(t *testing.T) {
setfin++
}).Release()
for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j <= i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}
for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist bool) {
ns.Delete(x.key, func(exist, pending bool) {
finalizerOk = true
})
@@ -74,22 +96,24 @@ func TestCache_HitMiss(t *testing.T) {
}
for j, y := range cases {
r, ok := ns.Get(y.key, nil)
h := ns.Get(y.key, nil)
if j > i {
// should hit
if !ok {
if h == nil {
t.Errorf("case '%d' iteration '%d' is miss", i, j)
} else if r.Value().(string) != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
} else {
if x := h.Value().(releaserFunc).value.(string); x != y.value {
t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
}
}
} else {
// should miss
if ok {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, r.Value().(string))
if h != nil {
t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
}
}
if ok {
r.Release()
if h != nil {
h.Release()
}
}
}
@@ -107,42 +131,42 @@ func TestLRUCache_Eviction(t *testing.T) {
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if r, ok := ns.Get(2, nil); ok { // 1,3,4,5,2
r.Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9
for _, x := range []uint64{9, 2, 5, 1} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
for _, x := range []uint64{1, 2, 5} {
r, ok := ns.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
for _, x := range []uint64{3, 4, 9} {
r, ok := ns.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}
@@ -153,16 +177,15 @@ func TestLRUCache_SetGet(t *testing.T) {
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if p, ok := ns.Get(n, nil); ok {
if p.Value() == nil {
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
} else {
got := p.Value().(uint64)
if got != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, got)
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
}
p.Release()
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
@@ -176,31 +199,319 @@ func TestLRUCache_Purge(t *testing.T) {
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, x := range []uint64{1, 2, 3} {
r, ok := ns1.Get(x, nil)
if !ok {
t.Errorf("miss for key '%d'", x)
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
o1.Release()
o2.Release()
for _, x := range []uint64{1, 2} {
r, ok := ns1.Get(x, nil)
if ok {
t.Errorf("hit for key '%d'", x)
if r.Value().(int) != int(x) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
r.Release()
h.Release()
}
}
}
type testingCacheObjectCounter struct {
created uint32
released uint32
}
func (c *testingCacheObjectCounter) createOne() {
atomic.AddUint32(&c.created, 1)
}
func (c *testingCacheObjectCounter) releaseOne() {
atomic.AddUint32(&c.released, 1)
}
type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter
ns, key uint64
releaseCalled uint32
}
func (x *testingCacheObject) Release() {
if atomic.CompareAndSwapUint32(&x.releaseCalled, 0, 1) {
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%s", x.ns, x.key)
}
}
func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)
runtime.GOMAXPROCS(runtime.NumCPU())
defer runtime.GOMAXPROCS(1)
wg := &sync.WaitGroup{}
cnt := &testingCacheObjectCounter{}
c := NewLRUCache(capacity)
type instance struct {
seed int64
rnd *rand.Rand
ns uint64
effective int32
handles []Handle
handlesMap map[uint64]int
delete bool
purge bool
zap bool
wantDel int32
delfinCalledAll int32
delfinCalledEff int32
purgefinCalled int32
}
instanceGet := func(p *instance, ns Namespace, key uint64) {
h := ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.ns,
key: key,
}
atomic.AddInt32(&p.effective, 1)
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
atomic.AddInt32(&p.effective, -1)
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, ns Namespace, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}
seeds := make([]int64, goroutines)
instances := make([]instance, goroutines)
for i := range instances {
p := &instances[i]
p.handlesMap = make(map[uint64]int)
if seeds[i] == 0 {
seeds[i] = time.Now().UnixNano()
}
p.seed = seeds[i]
p.rnd = rand.New(rand.NewSource(p.seed))
p.ns = uint64(i)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
}
seedsStr := make([]string, len(seeds))
for i, seed := range seeds {
seedsStr[i] = fmt.Sprint(seed)
}
t.Logf("seeds := []int64{%s}", strings.Join(seedsStr, ", "))
// Get and release.
for i := range instances {
p := &instances[i]
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
for i := 0; i < iterations; i++ {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, ns, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, ns, p.rnd.Intn(len(p.handles)))
}
}
}(p)
}
wg.Wait()
if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}
// Check effective objects.
for i := range instances {
p := &instances[i]
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Delete and purge.
for i := range instances {
p := &instances[i]
p.wantDel = p.effective
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
if p.delete {
for key := uint64(0); key < keymax; key++ {
_, wantExist := p.handlesMap[key]
gotExist := ns.Delete(key, func(exist, pending bool) {
atomic.AddInt32(&p.delfinCalledAll, 1)
if exist {
atomic.AddInt32(&p.delfinCalledEff, 1)
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.ns, key)
}
}
var delfinCalled int
for key := uint64(0); key < keymax; key++ {
func(key uint64) {
gotExist := ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.ns, key)
}
delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.ns, key)
}
}(key)
}
if delfinCalled != keymax {
t.Errorf("(2) #%d not all delete fin called, diff=%d", p.ns, keymax-delfinCalled)
}
}
if p.purge {
ns.Purge(func(ns, key uint64) {
atomic.AddInt32(&p.purgefinCalled, 1)
})
}
}(p)
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Release.
for i := range instances {
p := &instances[i]
if !p.zap {
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
for i := len(p.handles) - 1; i >= 0; i-- {
instanceRelease(p, ns, i)
}
}(p)
}
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Zap.
for i := range instances {
p := &instances[i]
if p.zap {
wg.Add(1)
go func(p *instance) {
defer wg.Done()
ns := c.GetNamespace(p.ns)
ns.Zap()
p.handles = nil
p.handlesMap = nil
}(p)
}
}
wg.Wait()
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}
c.Purge(nil)
for i := range instances {
p := &instances[i]
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.ns, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.ns, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.ns, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.ns, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}
if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
}
}
func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {

View File

@@ -1,246 +0,0 @@
// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"sync/atomic"
)
type emptyCache struct {
sync.Mutex
table map[uint64]*emptyNS
}
// NewEmptyCache creates a new initialized empty cache.
func NewEmptyCache() Cache {
return &emptyCache{
table: make(map[uint64]*emptyNS),
}
}
func (c *emptyCache) GetNamespace(id uint64) Namespace {
c.Lock()
defer c.Unlock()
if ns, ok := c.table[id]; ok {
return ns
}
ns := &emptyNS{
cache: c,
id: id,
table: make(map[uint64]*emptyNode),
}
c.table[id] = ns
return ns
}
func (c *emptyCache) Purge(fin PurgeFin) {
c.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.Unlock()
}
func (c *emptyCache) Zap(closed bool) {
c.Lock()
for _, ns := range c.table {
ns.zapNB(closed)
}
c.table = make(map[uint64]*emptyNS)
c.Unlock()
}
func (*emptyCache) SetCapacity(capacity int) {}
type emptyNS struct {
cache *emptyCache
id uint64
table map[uint64]*emptyNode
state nsState
}
func (ns *emptyNS) Get(key uint64, setf SetFunc) (o Object, ok bool) {
ns.cache.Lock()
switch ns.state {
case nsZapped:
ns.cache.Unlock()
if setf == nil {
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if ok {
o = &fakeObject{
value: value,
fin: fin,
}
}
return
case nsClosed:
ns.cache.Unlock()
return
}
n, ok := ns.table[key]
if ok {
n.ref++
} else {
if setf == nil {
ns.cache.Unlock()
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if !ok {
ns.cache.Unlock()
return
}
n = &emptyNode{
ns: ns,
key: key,
value: value,
setfin: fin,
ref: 1,
}
ns.table[key] = n
}
ns.cache.Unlock()
o = &emptyObject{node: n}
return
}
func (ns *emptyNS) Delete(key uint64, fin DelFin) bool {
ns.cache.Lock()
if ns.state != nsEffective {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}
n, ok := ns.table[key]
if !ok {
ns.cache.Unlock()
if fin != nil {
fin(false)
}
return false
}
n.delfin = fin
ns.cache.Unlock()
return true
}
func (ns *emptyNS) purgeNB(fin PurgeFin) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.purgefin = fin
}
}
func (ns *emptyNS) Purge(fin PurgeFin) {
ns.cache.Lock()
ns.purgeNB(fin)
ns.cache.Unlock()
}
func (ns *emptyNS) zapNB(closed bool) {
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.execFin()
}
if closed {
ns.state = nsClosed
} else {
ns.state = nsZapped
}
ns.table = nil
}
func (ns *emptyNS) Zap(closed bool) {
ns.cache.Lock()
ns.zapNB(closed)
delete(ns.cache.table, ns.id)
ns.cache.Unlock()
}
type emptyNode struct {
ns *emptyNS
key uint64
value interface{}
ref int
setfin SetFin
delfin DelFin
purgefin PurgeFin
}
func (n *emptyNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.delfin = nil
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin = nil
}
}
func (n *emptyNode) evict() {
n.ns.cache.Lock()
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove elem.
delete(n.ns.table, n.key)
// Execute finalizer.
n.execFin()
}
} else if n.ref < 0 {
panic("leveldb/cache: emptyNode: negative node reference")
}
n.ns.cache.Unlock()
}
type emptyObject struct {
node *emptyNode
once uint32
}
func (o *emptyObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
}
return nil
}
func (o *emptyObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
return
}
o.node.evict()
o.node = nil
}

View File

@@ -9,16 +9,17 @@ package cache
import (
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/util"
)
// lruCache represent a LRU cache state.
type lruCache struct {
sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
size int
mu sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
used, size, alive int
}
// NewLRUCache creates a new initialized LRU cache with the given capacity.
@@ -32,57 +33,98 @@ func NewLRUCache(capacity int) Cache {
return c
}
func (c *lruCache) Capacity() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.capacity
}
func (c *lruCache) Used() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.used
}
func (c *lruCache) Size() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.size
}
func (c *lruCache) NumObjects() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.alive
}
// SetCapacity set cache capacity.
func (c *lruCache) SetCapacity(capacity int) {
c.Lock()
c.mu.Lock()
c.capacity = capacity
c.evict()
c.Unlock()
c.mu.Unlock()
}
// GetNamespace return namespace object for given id.
func (c *lruCache) GetNamespace(id uint64) Namespace {
c.Lock()
defer c.Unlock()
c.mu.Lock()
defer c.mu.Unlock()
if p, ok := c.table[id]; ok {
return p
if ns, ok := c.table[id]; ok {
return ns
}
p := &lruNs{
ns := &lruNs{
lru: c,
id: id,
table: make(map[uint64]*lruNode),
}
c.table[id] = p
return p
c.table[id] = ns
return ns
}
func (c *lruCache) ZapNamespace(id uint64) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.zapNB()
delete(c.table, id)
}
c.mu.Unlock()
}
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
// Purge purge entire cache.
func (c *lruCache) Purge(fin PurgeFin) {
c.Lock()
c.mu.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.Unlock()
c.mu.Unlock()
}
func (c *lruCache) Zap(closed bool) {
c.Lock()
func (c *lruCache) Zap() {
c.mu.Lock()
for _, ns := range c.table {
ns.zapNB(closed)
ns.zapNB()
}
c.table = make(map[uint64]*lruNs)
c.Unlock()
c.mu.Unlock()
}
func (c *lruCache) evict() {
top := &c.recent
for n := c.recent.rPrev; c.size > c.capacity && n != top; {
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
n.state = nodeEvicted
n.rRemove()
n.evictNB()
c.size -= n.charge
n.derefNB()
c.used -= n.charge
n = c.recent.rPrev
}
}
@@ -94,170 +136,158 @@ type lruNs struct {
state nsState
}
func (ns *lruNs) Get(key uint64, setf SetFunc) (o Object, ok bool) {
lru := ns.lru
lru.Lock()
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
ns.lru.mu.Lock()
switch ns.state {
case nsZapped:
lru.Unlock()
if setf == nil {
return
}
var value interface{}
var fin func()
ok, value, _, fin = setf()
if ok {
o = &fakeObject{
value: value,
fin: fin,
}
}
return
case nsClosed:
lru.Unlock()
return
if ns.state != nsEffective {
ns.lru.mu.Unlock()
return nil
}
n, ok := ns.table[key]
node, ok := ns.table[key]
if ok {
switch n.state {
switch node.state {
case nodeEvicted:
// Insert to recent list.
n.state = nodeEffective
n.ref++
lru.size += n.charge
lru.evict()
node.state = nodeEffective
node.ref++
ns.lru.used += node.charge
ns.lru.evict()
fallthrough
case nodeEffective:
// Bump to front
n.rRemove()
n.rInsert(&lru.recent)
// Bump to front.
node.rRemove()
node.rInsert(&ns.lru.recent)
}
n.ref++
node.ref++
} else {
if setf == nil {
lru.Unlock()
return
ns.lru.mu.Unlock()
return nil
}
var value interface{}
var charge int
var fin func()
ok, value, charge, fin = setf()
if !ok {
lru.Unlock()
return
charge, value := setf()
if value == nil {
ns.lru.mu.Unlock()
return nil
}
n = &lruNode{
node = &lruNode{
ns: ns,
key: key,
value: value,
charge: charge,
setfin: fin,
ref: 2,
ref: 1,
}
ns.table[key] = n
n.rInsert(&lru.recent)
ns.table[key] = node
lru.size += charge
lru.evict()
ns.lru.size += charge
ns.lru.alive++
if charge > 0 {
node.ref++
node.rInsert(&ns.lru.recent)
ns.lru.used += charge
ns.lru.evict()
}
}
lru.Unlock()
o = &lruObject{node: n}
return
ns.lru.mu.Unlock()
return &lruHandle{node: node}
}
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
lru := ns.lru
lru.Lock()
ns.lru.mu.Lock()
if ns.state != nsEffective {
lru.Unlock()
if fin != nil {
fin(false)
fin(false, false)
}
ns.lru.mu.Unlock()
return false
}
n, ok := ns.table[key]
if !ok {
lru.Unlock()
node, exist := ns.table[key]
if !exist {
if fin != nil {
fin(false)
fin(false, false)
}
ns.lru.mu.Unlock()
return false
}
n.delfin = fin
switch n.state {
case nodeRemoved:
lru.Unlock()
switch node.state {
case nodeDeleted:
if fin != nil {
fin(true, true)
}
ns.lru.mu.Unlock()
return false
case nodeEffective:
lru.size -= n.charge
n.rRemove()
n.evictNB()
ns.lru.used -= node.charge
node.state = nodeDeleted
node.delfin = fin
node.rRemove()
node.derefNB()
default:
node.state = nodeDeleted
node.delfin = fin
}
n.state = nodeRemoved
lru.Unlock()
ns.lru.mu.Unlock()
return true
}
func (ns *lruNs) purgeNB(fin PurgeFin) {
lru := ns.lru
if ns.state != nsEffective {
return
}
for _, n := range ns.table {
n.purgefin = fin
if n.state == nodeEffective {
lru.size -= n.charge
n.rRemove()
n.evictNB()
for _, node := range ns.table {
switch node.state {
case nodeDeleted:
case nodeEffective:
ns.lru.used -= node.charge
node.state = nodeDeleted
node.purgefin = fin
node.rRemove()
node.derefNB()
default:
node.state = nodeDeleted
node.purgefin = fin
}
n.state = nodeRemoved
}
}
func (ns *lruNs) Purge(fin PurgeFin) {
ns.lru.Lock()
ns.lru.mu.Lock()
ns.purgeNB(fin)
ns.lru.Unlock()
ns.lru.mu.Unlock()
}
func (ns *lruNs) zapNB(closed bool) {
lru := ns.lru
func (ns *lruNs) zapNB() {
if ns.state != nsEffective {
return
}
if closed {
ns.state = nsClosed
} else {
ns.state = nsZapped
}
for _, n := range ns.table {
if n.state == nodeEffective {
lru.size -= n.charge
n.rRemove()
ns.state = nsZapped
for _, node := range ns.table {
if node.state == nodeEffective {
ns.lru.used -= node.charge
node.rRemove()
}
n.state = nodeRemoved
n.execFin()
ns.lru.size -= node.charge
node.state = nodeDeleted
node.fin()
}
ns.table = nil
}
func (ns *lruNs) Zap(closed bool) {
ns.lru.Lock()
ns.zapNB(closed)
func (ns *lruNs) Zap() {
ns.lru.mu.Lock()
ns.zapNB()
delete(ns.lru.table, ns.id)
ns.lru.Unlock()
ns.lru.mu.Unlock()
}
type lruNode struct {
@@ -270,7 +300,6 @@ type lruNode struct {
charge int
ref int
state nodeState
setfin SetFin
delfin DelFin
purgefin PurgeFin
}
@@ -284,7 +313,6 @@ func (n *lruNode) rInsert(at *lruNode) {
}
func (n *lruNode) rRemove() bool {
// only remove if not already removed
if n.rPrev == nil {
return false
}
@@ -297,58 +325,58 @@ func (n *lruNode) rRemove() bool {
return true
}
func (n *lruNode) execFin() {
if n.setfin != nil {
n.setfin()
n.setfin = nil
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
n.purgefin(n.ns.id, n.key, n.delfin)
n.purgefin(n.ns.id, n.key)
n.delfin = nil
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true)
n.delfin(true, false)
n.delfin = nil
}
}
func (n *lruNode) evictNB() {
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// remove elem
// Remove elemement.
delete(n.ns.table, n.key)
// execute finalizer
n.execFin()
n.ns.lru.size -= n.charge
n.ns.lru.alive--
n.fin()
}
n.value = nil
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}
func (n *lruNode) evict() {
n.ns.lru.Lock()
n.evictNB()
n.ns.lru.Unlock()
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}
type lruObject struct {
type lruHandle struct {
node *lruNode
once uint32
}
func (o *lruObject) Value() interface{} {
if atomic.LoadUint32(&o.once) == 0 {
return o.node.value
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}
func (o *lruObject) Release() {
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
o.node.evict()
o.node = nil
h.node.deref()
h.node = nil
}
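
One consequence of the Get hunk above: an object created with charge 0 is counted by Size and NumObjects but never joins the recent list, so Used ignores it and it is finalized when its last handle is released rather than by eviction. A sketch, given a Namespace ns as in the earlier example:

    h := ns.Get(7, func() (charge int, value interface{}) {
        return 0, []byte("uncached") // charge 0: never enters the eviction list
    })
    // ... use h.Value() ...
    h.Release() // the object is dropped here; eviction never sees it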

View File

@@ -14,6 +14,7 @@ import (
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/syndtr/goleveldb/leveldb/iterator"
@@ -35,7 +36,7 @@ type DB struct {
// MemDB.
memMu sync.RWMutex
memPool *util.Pool
memPool chan *memdb.DB
mem, frozenMem *memDB
journal *journal.Writer
journalWriter storage.Writer
@@ -47,6 +48,9 @@ type DB struct {
snapsMu sync.Mutex
snapsRoot snapshotElement
// Stats.
aliveSnaps, aliveIters int32
// Write.
writeC chan *Batch
writeMergedC chan bool
@@ -80,7 +84,7 @@ func openDB(s *session) (*DB, error) {
// Initial sequence
seq: s.stSeq,
// MemDB
memPool: util.NewPool(1),
memPool: make(chan *memdb.DB, 1),
// Write
writeC: make(chan *Batch),
writeMergedC: make(chan bool),
@@ -122,6 +126,7 @@ func openDB(s *session) (*DB, error) {
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
go db.mpoolDrain()
s.logf("db@open done T·%v", time.Since(start))
@@ -568,7 +573,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()
mk, mv, me := m.db.Find(ikey)
mk, mv, me := m.mdb.Find(ikey)
if me == nil {
ukey, _, t, ok := parseIkey(mk)
if ok && db.s.icmp.uCompare(ukey, key) == 0 {
@@ -655,6 +660,16 @@ func (db *DB) GetSnapshot() (*Snapshot, error) {
// Returns statistics of the underlying DB.
// leveldb.sstables
// Returns sstables list for each level.
// leveldb.blockpool
// Returns block pool stats.
// leveldb.cachedblock
// Returns size of cached block.
// leveldb.openedtables
// Returns number of opened tables.
// leveldb.alivesnaps
// Returns number of alive snapshots.
// leveldb.aliveiters
// Returns number of alive iterators.
func (db *DB) GetProperty(name string) (value string, err error) {
err = db.ok()
if err != nil {
@@ -700,6 +715,20 @@ func (db *DB) GetProperty(name string) (value string, err error) {
value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.imin, t.imax)
}
}
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
} else {
value = "<nil>"
}
case p == "openedtables":
value = fmt.Sprintf("%d", db.s.tops.cache.Size())
case p == "alivesnaps":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveSnaps))
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
}
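
A sketch of reading the newly documented properties, assuming an open *leveldb.DB:

    import (
        "fmt"

        "github.com/syndtr/goleveldb/leveldb"
    )

    // dumpProps prints the stats properties added in this change.
    func dumpProps(db *leveldb.DB) {
        for _, p := range []string{
            "leveldb.blockpool",
            "leveldb.cachedblock",
            "leveldb.openedtables",
            "leveldb.alivesnaps",
            "leveldb.aliveiters",
        } {
            if v, err := db.GetProperty(p); err == nil {
                fmt.Printf("%s = %s\n", p, v)
            }
        }
    }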

View File

@@ -221,10 +221,10 @@ func (db *DB) memCompaction() {
c := newCMem(db.s)
stats := new(cStatsStaging)
db.logf("mem@flush N·%d S·%s", mem.db.Len(), shortenb(mem.db.Size()))
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))
// Don't compact empty memdb.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
db.dropFrozenMem()
@@ -242,7 +242,7 @@ func (db *DB) memCompaction() {
db.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.db, -1)
return c.flush(mem.mdb, -1)
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush rollback @%d", r.num)

View File

@@ -10,6 +10,7 @@ import (
"errors"
"runtime"
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -38,11 +39,11 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
emi := em.db.NewIterator(slice)
emi := em.mdb.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
fmi := fm.db.NewIterator(slice)
fmi := fm.mdb.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
@@ -66,6 +67,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
}
rawIter := db.newRawIterator(islice, ro)
iter := &dbIter{
db: db,
icmp: db.s.icmp,
iter: rawIter,
seq: seq,
@@ -73,6 +75,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
key: make([]byte, 0),
value: make([]byte, 0),
}
atomic.AddInt32(&db.aliveIters, 1)
runtime.SetFinalizer(iter, (*dbIter).Release)
return iter
}
@@ -89,6 +92,7 @@ const (
// dbIter represent an interator states over a database session.
type dbIter struct {
db *DB
icmp *iComparer
iter iterator.Iterator
seq uint64
@@ -303,6 +307,7 @@ func (i *dbIter) Release() {
if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
}
i.dir = dirReleased
@@ -310,6 +315,8 @@ func (i *dbIter) Release() {
i.value = nil
i.iter.Release()
i.iter = nil
atomic.AddInt32(&i.db.aliveIters, -1)
i.db = nil
}
}

View File

@@ -9,6 +9,7 @@ package leveldb
import (
"runtime"
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -81,7 +82,7 @@ func (db *DB) minSeq() uint64 {
type Snapshot struct {
db *DB
elem *snapshotElement
mu sync.Mutex
mu sync.RWMutex
released bool
}
@@ -91,6 +92,7 @@ func (db *DB) newSnapshot() *Snapshot {
db: db,
elem: db.acquireSnapshot(),
}
atomic.AddInt32(&db.aliveSnaps, 1)
runtime.SetFinalizer(snap, (*Snapshot).Release)
return snap
}
@@ -105,8 +107,8 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
if err != nil {
return
}
snap.mu.Lock()
defer snap.mu.Unlock()
snap.mu.RLock()
defer snap.mu.RUnlock()
if snap.released {
err = ErrSnapshotReleased
return
@@ -160,6 +162,7 @@ func (snap *Snapshot) Release() {
snap.released = true
snap.db.releaseSnapshot(snap.elem)
atomic.AddInt32(&snap.db.aliveSnaps, -1)
snap.db = nil
snap.elem = nil
}
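
The snapshot hunks swap sync.Mutex for sync.RWMutex so concurrent Get calls contend only on a read lock while still checking the released flag under it. The pattern in isolation (names invented):

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    type snapshot struct {
        mu       sync.RWMutex
        released bool
    }

    func (s *snapshot) get() error {
        s.mu.RLock() // concurrent readers do not serialize
        defer s.mu.RUnlock()
        if s.released {
            return errors.New("snapshot released")
        }
        return nil // the actual read would happen here
    }

    func main() {
        s := &snapshot{}
        fmt.Println(s.get()) // <nil>
        s.released = true
        fmt.Println(s.get()) // snapshot released
    }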

View File

@@ -8,16 +8,16 @@ package leveldb
import (
"sync/atomic"
"time"
"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/util"
)
type memDB struct {
pool *util.Pool
db *memdb.DB
ref int32
db *DB
mdb *memdb.DB
ref int32
}
func (m *memDB) incref() {
@@ -26,7 +26,13 @@ func (m *memDB) incref() {
func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
m.pool.Put(m)
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
}
m.db = nil
m.mdb = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -42,6 +48,41 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}
func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
}()
select {
case db.memPool <- mem:
default:
}
}
func (db *DB) mpoolGet() *memdb.DB {
select {
case mem := <-db.memPool:
return mem
default:
return nil
}
}
func (db *DB) mpoolDrain() {
ticker := time.NewTicker(30 * time.Second)
for {
select {
case <-ticker.C:
select {
case <-db.memPool:
default:
}
case _, _ = <-db.closeC:
close(db.memPool)
return
}
}
}
// Create new memdb and froze the old one; need external synchronization.
// newMem only called synchronously by the writer.
func (db *DB) newMem(n int) (mem *memDB, err error) {
@@ -70,18 +111,15 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
db.journalWriter = w
db.journalFile = file
db.frozenMem = db.mem
mem, ok := db.memPool.Get().(*memDB)
if ok && mem.db.Capacity() >= n {
mem.db.Reset()
mem.incref()
} else {
mem = &memDB{
pool: db.memPool,
db: memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n)),
ref: 1,
}
mdb := db.mpoolGet()
if mdb == nil || mdb.Capacity() < n {
mdb = memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n))
}
mem = &memDB{
db: db,
mdb: mdb,
ref: 2,
}
mem.incref()
db.mem = mem
// The seq only incremented by the writer. And whoever called newMem
// should hold write lock, so no need additional synchronization here.
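
The hunks above replace util.Pool with a one-slot buffered channel plus non-blocking put/get (and a periodic drain); the core pattern in isolation (the buf type is invented):

    package main

    import "fmt"

    type buf struct{ data []byte }

    func main() {
        pool := make(chan *buf, 1) // at most one spare is kept

        put := func(b *buf) {
            select {
            case pool <- b: // stashed for reuse
            default: // a spare already exists; let this one be collected
            }
        }
        get := func() *buf {
            select {
            case b := <-pool:
                return b
            default:
                return nil // caller allocates fresh
            }
        }

        put(&buf{data: make([]byte, 4)})
        fmt.Println(get() != nil) // true
        fmt.Println(get() != nil) // false: the pool is empty again
    }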

View File

@@ -1577,7 +1577,11 @@ func TestDb_BloomFilter(t *testing.T) {
return fmt.Sprintf("key%06d", i)
}
n := 10000
const (
n = 10000
indexOverheat = 19898
filterOverheat = 19799
)
// Populate multiple layers
for i := 0; i < n; i++ {
@@ -1601,7 +1605,7 @@ func TestDb_BloomFilter(t *testing.T) {
cnt := int(h.stor.ReadCounter())
t.Logf("lookup of %d present keys yield %d sstable I/O reads", n, cnt)
if min, max := n, n+2*n/100; cnt < min || cnt > max {
if min, max := n+indexOverheat+filterOverheat, n+indexOverheat+filterOverheat+2*n/100; cnt < min || cnt > max {
t.Errorf("num of sstable I/O reads of present keys not in range of %d - %d, got %d", min, max, cnt)
}
@@ -1612,7 +1616,7 @@ func TestDb_BloomFilter(t *testing.T) {
}
cnt = int(h.stor.ReadCounter())
t.Logf("lookup of %d missing keys yield %d sstable I/O reads", n, cnt)
if max := 3 * n / 100; cnt > max {
if max := 3*n/100 + indexOverheat + filterOverheat; cnt > max {
t.Errorf("num of sstable I/O reads of missing keys was more than %d, got %d", max, cnt)
}

View File

@@ -75,7 +75,7 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
mem = nil
}
}()
nn = mem.db.Free()
nn = mem.mdb.Free()
switch {
case v.tLen(0) >= kL0_SlowdownWritesTrigger && !delayed:
delayed = true
@@ -90,13 +90,13 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
nn = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
if err == nil {
nn = mem.db.Free()
nn = mem.mdb.Free()
} else {
nn = 0
}
@@ -190,7 +190,7 @@ drain:
return
case db.journalC <- b:
// Write into memdb
b.memReplay(mem.db)
b.memReplay(mem.mdb)
}
// Wait for journal writer
select {
@@ -200,7 +200,7 @@ drain:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
b.revertMemReplay(mem.db)
b.revertMemReplay(mem.mdb)
return
}
}
@@ -209,7 +209,7 @@ drain:
if err != nil {
return
}
b.memReplay(mem.db)
b.memReplay(mem.mdb)
}
// Set last seq number.
@@ -271,7 +271,7 @@ func (db *DB) CompactRange(r util.Range) error {
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.db, r.Start, r.Limit) {
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC

View File

@@ -37,6 +37,16 @@
// err = iter.Error()
// ...
//
// Iterate over subset of database content with a particular prefix:
// iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
// for iter.Next() {
// // Use key/value.
// ...
// }
// iter.Release()
// err = iter.Error()
// ...
//
// Seek-then-Iterate:
//
// iter := db.NewIterator(nil, nil)

View File

@@ -30,10 +30,16 @@ const (
type noCache struct{}
func (noCache) SetCapacity(capacity int) {}
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap(closed bool) {}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}
var NoCache cache.Cache = noCache{}

View File

@@ -297,7 +297,7 @@ func (t *tOps) create() (*tWriter, error) {
func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
w, err := t.create()
if err != nil {
return f, n, err
return
}
defer func() {
@@ -322,33 +322,24 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
return
}
// Opens table. It returns a cache object, which should
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (c cache.Object, err error) {
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
num := f.file.Num()
c, ok := t.cacheNS.Get(num, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return
return 0, nil
}
o := t.s.o
var cacheNS cache.Namespace
if bc := o.GetBlockCache(); bc != nil {
cacheNS = bc.GetNamespace(num)
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
}
ok = true
value = table.NewReader(r, int64(f.size), cacheNS, t.bpool, o)
charge = 1
fin = func() {
r.Close()
}
return
return 1, table.NewReader(r, int64(f.size), bcacheNS, t.bpool, t.s.o)
})
if !ok && err == nil {
if ch == nil && err == nil {
err = ErrClosed
}
return
@@ -357,34 +348,33 @@ func (t *tOps) open(f *tFile) (c cache.Object, err error) {
// Finds key/value pair whose key is greater than or equal to the
// given key.
func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return nil, nil, err
}
defer c.Release()
return c.Value().(*table.Reader).Find(key, ro)
defer ch.Release()
return ch.Value().(*table.Reader).Find(key, ro)
}
// Returns approximate offset of the given key.
func (t *tOps) offsetOf(f *tFile, key []byte) (offset uint64, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return
}
_offset, err := c.Value().(*table.Reader).OffsetOf(key)
offset = uint64(_offset)
c.Release()
return
defer ch.Release()
offset_, err := ch.Value().(*table.Reader).OffsetOf(key)
return uint64(offset_), err
}
// Creates an iterator from the given table.
func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := c.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(c)
iter := ch.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(ch)
return iter
}
@@ -392,14 +382,16 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one use the the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist bool) {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.GetNamespace(num).Zap(false)
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
}
})
}
@@ -407,7 +399,8 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regadless still used or not.
func (t *tOps) close() {
t.cache.Zap(true)
t.cache.Zap()
t.bpool.Close()
}
// Creates new initialized table ops instance.

View File

@@ -40,7 +40,7 @@ var _ = testutil.Defer(func() {
data := bw.buf.Bytes()
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
return &block{
cmp: comparer.DefaultComparer,
tr: &Reader{cmp: comparer.DefaultComparer},
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,

View File

@@ -37,7 +37,7 @@ func max(x, y int) int {
}
type block struct {
cmp comparer.BasicComparer
tr *Reader
data []byte
restartsLen int
restartsOffset int
@@ -46,31 +46,25 @@ type block struct {
}
func (b *block) seek(rstart, rlimit int, key []byte) (index, offset int, err error) {
n := b.restartsOffset
data := b.data
cmp := b.cmp
index = sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
offset := int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):]))
offset += 1 // shared always zero, since this is a restart point
v1, n1 := binary.Uvarint(data[offset:]) // key length
_, n2 := binary.Uvarint(data[offset+n1:]) // value length
offset := int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):]))
offset += 1 // shared always zero, since this is a restart point
v1, n1 := binary.Uvarint(b.data[offset:]) // key length
_, n2 := binary.Uvarint(b.data[offset+n1:]) // value length
m := offset + n1 + n2
return cmp.Compare(data[m:m+int(v1)], key) > 0
return b.tr.cmp.Compare(b.data[m:m+int(v1)], key) > 0
}) + rstart - 1
if index < rstart {
// The smallest key is greater-than key sought.
index = rstart
}
offset = int(binary.LittleEndian.Uint32(data[n+4*index:]))
offset = int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*index:]))
return
}
func (b *block) restartIndex(rstart, rlimit, offset int) int {
n := b.restartsOffset
data := b.data
return sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
return int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):])) > offset
return int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):])) > offset
}) + rstart - 1
}
@@ -139,6 +133,14 @@ func (b *block) newIterator(slice *util.Range, inclLimit bool, cache util.Releas
return bi
}
func (b *block) Release() {
if b.tr.bpool != nil {
b.tr.bpool.Put(b.data)
}
b.tr = nil
b.data = nil
}
type dir int
const (
@@ -261,7 +263,7 @@ func (i *blockIter) Seek(key []byte) bool {
i.dir = dirForward
}
for i.Next() {
if i.block.cmp.Compare(i.key, key) >= 0 {
if i.block.tr.cmp.Compare(i.key, key) >= 0 {
return true
}
}
@@ -438,6 +440,7 @@ func (i *blockIter) Value() []byte {
func (i *blockIter) Release() {
if i.dir > dirReleased {
i.block = nil
i.prevNode = nil
i.prevKeys = nil
i.key = nil
@@ -469,7 +472,7 @@ func (i *blockIter) Error() error {
}
type filterBlock struct {
filter filter.Filter
tr *Reader
data []byte
oOffset int
baseLg uint
@@ -483,7 +486,7 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
n := int(binary.LittleEndian.Uint32(o))
m := int(binary.LittleEndian.Uint32(o[4:]))
if n < m && m <= b.oOffset {
return b.filter.Contains(b.data[n:m], key)
return b.tr.filter.Contains(b.data[n:m], key)
} else if n == m {
return false
}
@@ -491,10 +494,17 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
return true
}
func (b *filterBlock) Release() {
if b.tr.bpool != nil {
b.tr.bpool.Put(b.data)
}
b.tr = nil
b.data = nil
}
type indexIter struct {
blockIter
tableReader *Reader
slice *util.Range
*blockIter
slice *util.Range
// Options
checksum bool
fillCache bool
@@ -513,7 +523,7 @@ func (i *indexIter) Get() iterator.Iterator {
if i.slice != nil && (i.blockIter.isFirst() || i.blockIter.isLast()) {
slice = i.slice
}
return i.tableReader.getDataIter(dataBH, slice, i.checksum, i.fillCache)
return i.blockIter.block.tr.getDataIter(dataBH, slice, i.checksum, i.fillCache)
}
// Reader is a table reader.
@@ -528,9 +538,8 @@ type Reader struct {
checksum bool
strictIter bool
dataEnd int64
indexBlock *block
filterBlock *filterBlock
dataEnd int64
indexBH, filterBH blockHandle
}
func verifyChecksum(data []byte) bool {
@@ -547,6 +556,7 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
}
if checksum || r.checksum {
if !verifyChecksum(data) {
r.bpool.Put(data)
return nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
}
}
@@ -565,6 +575,7 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
return nil, err
}
default:
r.bpool.Put(data)
return nil, fmt.Errorf("leveldb/table: Reader: unknown block compression type: %d", data[bh.length])
}
return data, nil
@@ -577,7 +588,7 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
}
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
b := &block{
cmp: r.cmp,
tr: r,
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
@@ -586,7 +597,44 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
return b, nil
}
func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterBlock, error) {
func (r *Reader) readBlockCached(bh blockHandle, checksum, fillCache bool) (*block, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *block
b, err = r.readBlock(bh, checksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
if ch != nil {
b, ok := ch.Value().(*block)
if !ok {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
}
if !b.checksum && (r.checksum || checksum) {
if !verifyChecksum(b.data) {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
}
b.checksum = true
}
return b, ch, err
} else if err != nil {
return nil, nil, err
}
}
b, err := r.readBlock(bh, checksum)
return b, b, err
}
func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
data, err := r.readRawBlock(bh, true)
if err != nil {
return nil, err
@@ -601,7 +649,7 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
return nil, errors.New("leveldb/table: Reader: invalid filter block (invalid offset)")
}
b := &filterBlock{
filter: filter,
tr: r,
data: data,
oOffset: oOffset,
baseLg: uint(data[n-1]),
@@ -610,60 +658,42 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
return b, nil
}
type releaseBlock struct {
r *Reader
b *block
}
func (r releaseBlock) Release() {
if r.b.data != nil {
r.r.bpool.Put(r.b.data)
r.b.data = nil
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
if ch != nil {
b, ok := ch.Value().(*filterBlock)
if !ok {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
}
return b, ch, err
} else if err != nil {
return nil, nil, err
}
}
b, err := r.readFilterBlock(bh)
return b, b, err
}
func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fillCache bool) iterator.Iterator {
if r.cache != nil {
// Get/set block cache.
var err error
cache, ok := r.cache.Get(dataBH.offset, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
if !fillCache {
return
}
var dataBlock *block
dataBlock, err = r.readBlock(dataBH, checksum)
if err == nil {
ok = true
value = dataBlock
charge = int(dataBH.length)
fin = func() {
r.bpool.Put(dataBlock.data)
dataBlock.data = nil
}
}
return
})
if err != nil {
return iterator.NewEmptyIterator(err)
}
if ok {
dataBlock := cache.Value().(*block)
if !dataBlock.checksum && (r.checksum || checksum) {
if !verifyChecksum(dataBlock.data) {
return iterator.NewEmptyIterator(errors.New("leveldb/table: Reader: invalid block (checksum mismatch)"))
}
dataBlock.checksum = true
}
iter := dataBlock.newIterator(slice, false, cache)
return iter
}
}
dataBlock, err := r.readBlock(dataBH, checksum)
b, rel, err := r.readBlockCached(dataBH, checksum, fillCache)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := dataBlock.newIterator(slice, false, releaseBlock{r, dataBlock})
return iter
return b.newIterator(slice, false, rel)
}
// NewIterator creates an iterator from the table.
@@ -677,18 +707,21 @@ func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fi
// when not used.
//
// Also read Iterator documentation of the leveldb/iterator package.
func (r *Reader) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
if r.err != nil {
return iterator.NewEmptyIterator(r.err)
}
fillCache := !ro.GetDontFillCache()
b, rel, err := r.readBlockCached(r.indexBH, true, fillCache)
if err != nil {
return iterator.NewEmptyIterator(err)
}
index := &indexIter{
blockIter: *r.indexBlock.newIterator(slice, true, nil),
tableReader: r,
slice: slice,
checksum: ro.GetStrict(opt.StrictBlockChecksum),
fillCache: !ro.GetDontFillCache(),
blockIter: b.newIterator(slice, true, rel),
slice: slice,
checksum: ro.GetStrict(opt.StrictBlockChecksum),
fillCache: !ro.GetDontFillCache(),
}
return iterator.NewIndexedIterator(index, r.strictIter || ro.GetStrict(opt.StrictIterator), false)
}
@@ -705,7 +738,13 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
return
}
index := r.indexBlock.newIterator(nil, true, nil)
indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
if err != nil {
return
}
defer rel.Release()
index := indexBlock.newIterator(nil, true, nil)
defer index.Release()
if !index.Seek(key) {
err = index.Error()
@@ -719,9 +758,15 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
err = errors.New("leveldb/table: Reader: invalid table (bad data block handle)")
return
}
if r.filterBlock != nil && !r.filterBlock.contains(dataBH.offset, key) {
err = ErrNotFound
return
if r.filter != nil {
filterBlock, rel, ferr := r.readFilterBlockCached(r.filterBH, true)
if ferr == nil {
if !filterBlock.contains(dataBH.offset, key) {
rel.Release()
return nil, nil, ErrNotFound
}
rel.Release()
}
}
data := r.getDataIter(dataBH, nil, ro.GetStrict(opt.StrictBlockChecksum), !ro.GetDontFillCache())
defer data.Release()
@@ -768,7 +813,13 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
return
}
index := r.indexBlock.newIterator(nil, true, nil)
indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
if err != nil {
return
}
defer rel.Release()
index := indexBlock.newIterator(nil, true, nil)
defer index.Release()
if index.Seek(key) {
dataBH, n := decodeBlockHandle(index.Value())
@@ -786,6 +837,17 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
return
}
// Release implements util.Releaser.
// It also closes the file if it is an io.Closer.
func (r *Reader) Release() {
if closer, ok := r.reader.(io.Closer); ok {
closer.Close()
}
r.reader = nil
r.cache = nil
r.bpool = nil
}
// NewReader creates a new initialized table reader for the file.
// The cache and bpool is optional and can be nil.
//
@@ -825,16 +887,11 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.Buf
return r
}
// Decode the index block handle.
indexBH, n := decodeBlockHandle(footer[n:])
r.indexBH, n = decodeBlockHandle(footer[n:])
if n == 0 {
r.err = errors.New("leveldb/table: Reader: invalid table (bad index block handle)")
return r
}
// Read index block.
r.indexBlock, r.err = r.readBlock(indexBH, true)
if r.err != nil {
return r
}
// Read metaindex block.
metaBlock, err := r.readBlock(metaBH, true)
if err != nil {
@@ -850,32 +907,28 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.Buf
continue
}
fn := key[7:]
var filter filter.Filter
if f0 := o.GetFilter(); f0 != nil && f0.Name() == fn {
filter = f0
r.filter = f0
} else {
for _, f0 := range o.GetAltFilters() {
if f0.Name() == fn {
filter = f0
r.filter = f0
break
}
}
}
if filter != nil {
if r.filter != nil {
filterBH, n := decodeBlockHandle(metaIter.Value())
if n == 0 {
continue
}
r.filterBH = filterBH
// Update data end.
r.dataEnd = int64(filterBH.offset)
filterBlock, err := r.readFilterBlock(filterBH, filter)
if err != nil {
continue
}
r.filterBlock = filterBlock
break
}
}
metaIter.Release()
metaBlock.Release()
return r
}
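The readBlockCached and readFilterBlockCached helpers above share a lazy-fill cache shape: Get takes a closure that only runs on a miss, and the returned handle doubles as the block's releaser. A minimal standalone sketch of that shape — the cache type and names here are illustrative, not the goleveldb API:

package main

import "fmt"

// cache is a toy key->value store whose Get fills missing entries lazily.
type cache struct{ m map[uint64][]byte }

// Get invokes fill only when key is absent, mirroring r.cache.Get above.
func (c *cache) Get(key uint64, fill func() []byte) []byte {
    if v, ok := c.m[key]; ok {
        return v
    }
    v := fill()
    if v != nil {
        c.m[key] = v
    }
    return v
}

func main() {
    c := &cache{m: map[uint64][]byte{}}
    loads := 0
    loadBlock := func() []byte { loads++; return []byte("block data") }
    c.Get(42, loadBlock)         // miss: loadBlock runs
    c.Get(42, loadBlock)         // hit: loadBlock skipped
    fmt.Println("loads:", loads) // prints: loads: 1
}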

View File

@@ -111,7 +111,9 @@ var _ = testutil.Defer(func() {
testutil.AllKeyValueTesting(nil, Build)
Describe("with one key per block", Test(testutil.KeyValue_Generate(nil, 9, 1, 10, 512, 512), func(r *Reader) {
It("should have correct blocks number", func() {
Expect(r.indexBlock.restartsLen).Should(Equal(9))
indexBlock, err := r.readBlock(r.indexBH, true)
Expect(err).To(BeNil())
Expect(indexBlock.restartsLen).Should(Equal(9))
})
}))
})

View File

@@ -19,13 +19,21 @@ type buffer struct {
// BufferPool is a 'buffer pool'.
type BufferPool struct {
pool [4]chan []byte
size [3]uint32
sizeMiss [3]uint32
baseline0 int
baseline1 int
baseline2 int
pool [6]chan []byte
size [5]uint32
sizeMiss [5]uint32
sizeHalf [5]uint32
baseline [4]int
baselinex0 int
baselinex1 int
baseline0 int
baseline1 int
baseline2 int
close chan struct{}
get uint32
put uint32
half uint32
less uint32
equal uint32
greater uint32
@@ -33,20 +41,21 @@ type BufferPool struct {
}
func (p *BufferPool) poolNum(n int) int {
switch {
case n <= p.baseline0:
if n <= p.baseline0 && n > p.baseline0/2 {
return 0
case n <= p.baseline1:
return 1
case n <= p.baseline2:
return 2
default:
return 3
}
for i, x := range p.baseline {
if n <= x {
return i + 1
}
}
return len(p.baseline) + 1
}
// Get returns a buffer with length n.
func (p *BufferPool) Get(n int) []byte {
atomic.AddUint32(&p.get, 1)
poolNum := p.poolNum(n)
pool := p.pool[poolNum]
if poolNum == 0 {
@@ -55,13 +64,22 @@ func (p *BufferPool) Get(n int) []byte {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
if cap(b)-n >= n {
atomic.AddUint32(&p.half, 1)
select {
case pool <- b:
default:
}
return make([]byte, n)
} else {
atomic.AddUint32(&p.less, 1)
return b[:n]
}
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
default:
panic("not reached")
atomic.AddUint32(&p.greater, 1)
}
default:
atomic.AddUint32(&p.miss, 1)
@@ -75,8 +93,23 @@ func (p *BufferPool) Get(n int) []byte {
case b := <-pool:
switch {
case cap(b) > n:
atomic.AddUint32(&p.less, 1)
return b[:n]
if cap(b)-n >= n {
atomic.AddUint32(&p.half, 1)
sizeHalfPtr := &p.sizeHalf[poolNum-1]
if atomic.AddUint32(sizeHalfPtr, 1) == 20 {
atomic.StoreUint32(sizePtr, uint32(cap(b)/2))
atomic.StoreUint32(sizeHalfPtr, 0)
} else {
select {
case pool <- b:
default:
}
}
return make([]byte, n)
} else {
atomic.AddUint32(&p.less, 1)
return b[:n]
}
case cap(b) == n:
atomic.AddUint32(&p.equal, 1)
return b[:n]
@@ -112,6 +145,8 @@ func (p *BufferPool) Get(n int) []byte {
// Put adds the given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
atomic.AddUint32(&p.put, 1)
pool := p.pool[p.poolNum(cap(b))]
select {
case pool <- b:
@@ -120,20 +155,34 @@ func (p *BufferPool) Put(b []byte) {
}
func (p *BufferPool) Close() {
select {
case p.close <- struct{}{}:
default:
}
}
func (p *BufferPool) String() string {
return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v L·%d E·%d G·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.less, p.equal, p.greater, p.miss)
return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v Zh·%v G·%d P·%d H·%d <·%d =·%d >·%d M·%d}",
p.baseline0, p.size, p.sizeMiss, p.sizeHalf, p.get, p.put, p.half, p.less, p.equal, p.greater, p.miss)
}
func (p *BufferPool) drain() {
ticker := time.NewTicker(2 * time.Second)
for {
time.Sleep(1 * time.Second)
select {
case <-p.pool[0]:
case <-p.pool[1]:
case <-p.pool[2]:
case <-p.pool[3]:
default:
case <-ticker.C:
for _, ch := range p.pool {
select {
case <-ch:
default:
}
}
case <-p.close:
for _, ch := range p.pool {
close(ch)
}
return
}
}
}
@@ -145,10 +194,10 @@ func NewBufferPool(baseline int) *BufferPool {
}
p := &BufferPool{
baseline0: baseline,
baseline1: baseline * 2,
baseline2: baseline * 4,
baseline: [...]int{baseline / 4, baseline / 2, baseline * 2, baseline * 4},
close: make(chan struct{}, 1),
}
for i, cap := range []int{6, 6, 3, 1} {
for i, cap := range []int{2, 2, 4, 4, 2, 1} {
p.pool[i] = make(chan []byte, cap)
}
go p.drain()
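The reworked poolNum buckets a requested size against a set of baselines, with bucket 0 reserved for sizes in (baseline0/2, baseline0]. A toy reproduction of the bucketing logic, with an assumed 4 KiB primary baseline:

package main

import "fmt"

// poolNum mirrors the selection above: sizes close to the primary baseline
// go to bucket 0; otherwise the first baseline that fits wins.
func poolNum(baseline0 int, baselines []int, n int) int {
    if n <= baseline0 && n > baseline0/2 {
        return 0
    }
    for i, x := range baselines {
        if n <= x {
            return i + 1
        }
    }
    return len(baselines) + 1
}

func main() {
    b0 := 4096
    bl := []int{b0 / 4, b0 / 2, b0 * 2, b0 * 4}
    for _, n := range []int{100, 1500, 3000, 8000, 50000} {
        fmt.Printf("n=%d -> pool %d\n", n, poolNum(b0, bl, n))
    }
}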

View File

@@ -14,3 +14,18 @@ type Range struct {
// Limit of the key range, not included in the range.
Limit []byte
}
// BytesPrefix returns a key range that satisfies the given prefix.
// This is only applicable to the standard 'bytes comparer'.
func BytesPrefix(prefix []byte) *Range {
var limit []byte
for i := len(prefix) - 1; i >= 0; i-- {
c := prefix[i]
if c < 0xff {
limit = make([]byte, i+1)
copy(limit, prefix)
limit[i] = c + 1
break
}
}
return &Range{prefix, limit}
}
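With the break restored (without it the loop would keep shrinking limit and widen the range past the prefix), BytesPrefix computes the half-open range [prefix, limit) covering exactly the keys with that prefix: it increments the last byte that can be incremented and truncates after it. A quick standalone check of the computed bound; the iterator call in the comment is the usual consumer:

package main

import "fmt"

// bytesPrefix mirrors BytesPrefix above: increment the last incrementable
// byte, truncating after it, to get an exclusive upper bound.
func bytesPrefix(prefix []byte) (start, limit []byte) {
    for i := len(prefix) - 1; i >= 0; i-- {
        if c := prefix[i]; c < 0xff {
            limit = make([]byte, i+1)
            copy(limit, prefix)
            limit[i] = c + 1
            break
        }
    }
    return prefix, limit
}

func main() {
    start, limit := bytesPrefix([]byte("repo/"))
    fmt.Printf("[%q, %q)\n", start, limit) // ["repo/", "repo0"): '/'+1 == '0'
    // In goleveldb this range would typically feed an iterator, e.g.
    // db.NewIterator(util.BytesPrefix([]byte("repo/")), nil).
}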

View File

@@ -4,4 +4,20 @@
package auto_test
// Empty test file to generate 0% coverage rather than no coverage
import (
"bytes"
"testing"
"github.com/syncthing/syncthing/auto"
)
func TestAssets(t *testing.T) {
assets := auto.Assets()
idx, ok := assets["index.html"]
if !ok {
t.Fatal("No index.html in compiled in assets")
}
if !bytes.Contains(idx, []byte("<html")) {
t.Fatal("No html in index.html")
}
}

View File

@@ -1,2 +1,6 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// Package auto contains auto generated files for web assets.
package auto

View File

File diff suppressed because one or more lines are too long

View File

@@ -11,11 +11,6 @@ type recv struct {
src net.Addr
}
type dst struct {
intf string
conn *net.UDPConn
}
type Interface interface {
Send(data []byte)
Recv() ([]byte, net.Addr)

View File

@@ -9,7 +9,6 @@ import "net"
type Broadcast struct {
conn *net.UDPConn
port int
conns []dst
inbox chan []byte
outbox chan recv
}

View File

@@ -9,7 +9,6 @@ import "net"
type Multicast struct {
conn *net.UDPConn
addr *net.UDPAddr
conns []dst
inbox chan []byte
outbox chan recv
}

View File

@@ -163,7 +163,7 @@ func setup() {
}
func test(pkg string) {
runPrint("godep", "go", "test", pkg)
runPrint("godep", "go", "test", "-short", "-timeout", "10s", pkg)
}
func install(pkg string) {
@@ -243,20 +243,20 @@ func xdr() {
}
func translate() {
os.Chdir("gui")
runPipe("lang-en-new.json", "go", "run", "../cmd/translate/main.go", "lang-en.json", "index.html")
os.Chdir("gui/lang")
runPipe("lang-en-new.json", "go", "run", "../../cmd/translate/main.go", "lang-en.json", "../index.html")
os.Remove("lang-en.json")
err := os.Rename("lang-en-new.json", "lang-en.json")
if err != nil {
log.Fatal(err)
}
os.Chdir("..")
os.Chdir("../..")
}
func transifex() {
os.Chdir("gui")
runPrint("go", "run", "../cmd/transifexdl/main.go")
os.Chdir("..")
os.Chdir("gui/lang")
runPrint("go", "run", "../../cmd/transifexdl/main.go")
os.Chdir("../..")
assets()
}

View File

@@ -12,6 +12,10 @@ case "${1:-default}" in
;;
test)
ulimit -t 60 || true
ulimit -d 512000 || true
ulimit -m 512000 || true
go run build.go "$1"
;;
@@ -64,6 +68,10 @@ case "${1:-default}" in
;;
test-cov)
ulimit -t 60 || true
ulimit -d 512000 || true
ulimit -m 512000 || true
go get github.com/axw/gocov/gocov
go get github.com/AlekSi/gocov-xml

View File

@@ -14,7 +14,16 @@ no-docs-typos() {
grep -v f1120d7aa936c0658429edef0037792520b46334
}
for email in $(missing-contribs) ; do
git log --author="$email" --format="%H %ae %s" | no-docs-typos
done
print-missing-contribs() {
for email in $(missing-contribs) ; do
git log --author="$email" --format="%H %ae %s" | no-docs-typos
done
}
print-missing-copyright() {
find . -name \*.go | xargs grep -L 'Copyright (C)' | grep -v Godeps
}
print-missing-contribs
print-missing-copyright

View File

@@ -9,8 +9,8 @@ package main
import (
"bytes"
"compress/gzip"
"encoding/base64"
"flag"
"fmt"
"go/format"
"io"
"os"
@@ -23,27 +23,27 @@ var tpl = template.Must(template.New("assets").Parse(`package auto
import (
"bytes"
"compress/gzip"
"encoding/hex"
"encoding/base64"
"io/ioutil"
)
var Assets = make(map[string][]byte)
func init() {
func Assets() map[string][]byte {
var assets = make(map[string][]byte, {{.assets | len}})
var bs []byte
var gr *gzip.Reader
{{range $asset := .assets}}
bs, _ = hex.DecodeString("{{$asset.HexData}}")
bs, _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
gr, _ = gzip.NewReader(bytes.NewBuffer(bs))
bs, _ = ioutil.ReadAll(gr)
Assets["{{$asset.Name}}"] = bs
assets["{{$asset.Name}}"] = bs
{{end}}
return assets
}
`))
type asset struct {
Name string
HexData string
Name string
Data string
}
var assets []asset
@@ -69,8 +69,8 @@ func walkerFor(basePath string) filepath.WalkFunc {
name, _ = filepath.Rel(basePath, name)
assets = append(assets, asset{
Name: filepath.ToSlash(name),
HexData: fmt.Sprintf("%x", buf.Bytes()),
Name: filepath.ToSlash(name),
Data: base64.StdEncoding.EncodeToString(buf.Bytes()),
})
}
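The hex-to-base64 switch above shrinks the generated assets file: hex spends two output bytes per input byte, while standard base64 spends four per three. A small demonstration of the ratio using only the standard library:

package main

import (
    "encoding/base64"
    "encoding/hex"
    "fmt"
)

func main() {
    data := make([]byte, 3000) // stand-in for a gzipped asset
    h := hex.EncodeToString(data)
    b := base64.StdEncoding.EncodeToString(data)
    fmt.Println("hex:", len(h), "base64:", len(b)) // hex: 6000 base64: 4000
}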

View File

@@ -11,7 +11,6 @@ import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"math/rand"
"mime"
"net"
@@ -45,7 +44,6 @@ var (
configInSync = true
guiErrors = []guiError{}
guiErrorsMut sync.Mutex
static func(http.ResponseWriter, *http.Request, *log.Logger)
apiKey string
modt = time.Now().UTC().Format(http.TimeFormat)
eventSub *events.BufferedSubscription
@@ -145,6 +143,9 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
// protected, other requests will grant cookies.
handler := csrfMiddleware("/rest", mux)
// Add our version as a header to responses
handler = withVersionMiddleware(handler)
// Wrap everything in basic auth, if user/password is set.
if len(cfg.User) > 0 {
handler = basicAuthMiddleware(cfg.User, cfg.Password, handler)
@@ -174,6 +175,13 @@ func noCacheMiddleware(h http.Handler) http.Handler {
})
}
func withVersionMiddleware(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("X-Syncthing-Version", Version)
h.ServeHTTP(w, r)
})
}
func withModel(m *model.Model, h func(m *model.Model, w http.ResponseWriter, r *http.Request)) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
h(m, w, r)
@@ -247,7 +255,7 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
func restPostOverride(m *model.Model, w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var repo = qs.Get("repo")
m.Override(repo)
go m.Override(repo)
}
func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
@@ -517,7 +525,7 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
var langs []string
for _, l := range strings.Split(lang, ",") {
parts := strings.SplitN(l, ";", 2)
langs = append(langs, strings.TrimSpace(parts[0]))
langs = append(langs, strings.ToLower(strings.TrimSpace(parts[0])))
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(langs)
@@ -642,6 +650,8 @@ func validAPIKey(k string) bool {
}
func embeddedStatic(assetDir string) http.Handler {
assets := auto.Assets()
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
file := r.URL.Path
@@ -662,13 +672,13 @@ func embeddedStatic(assetDir string) http.Handler {
}
}
bs, ok := auto.Assets[file]
bs, ok := assets[file]
if !ok {
http.NotFound(w, r)
return
}
mtype := mime.TypeByExtension(filepath.Ext(r.URL.Path))
mtype := mimeTypeForFile(file)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
@@ -678,3 +688,28 @@ func embeddedStatic(assetDir string) http.Handler {
w.Write(bs)
})
}
func mimeTypeForFile(file string) string {
// We use a built-in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
default:
return mime.TypeByExtension(ext)
}
}
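A quick sanity check for the table above; this test-style snippet is illustrative and not part of the repository:

package main

import (
    "fmt"
    "mime"
    "path/filepath"
)

// mimeTypeForFile repeats the idea above with an abbreviated table:
// answer the common cases ourselves, delegate the rest to the system.
func mimeTypeForFile(file string) string {
    switch ext := filepath.Ext(file); ext {
    case ".htm", ".html":
        return "text/html"
    case ".js":
        return "application/javascript"
    case ".png":
        return "image/png"
    default:
        return mime.TypeByExtension(ext)
    }
}

func main() {
    fmt.Println(mimeTypeForFile("index.html")) // text/html, regardless of OS mime config
    fmt.Println(mimeTypeForFile("app.js"))     // application/javascript
    fmt.Println(mimeTypeForFile("data.bin"))   // whatever the system says, possibly ""
}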

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (

View File

@@ -14,7 +14,8 @@ import (
)
func init() {
if os.Getenv("STHEAPPROFILE") != "" {
if innerProcess && os.Getenv("STHEAPPROFILE") != "" {
l.Debugln("Starting heap profiling")
go saveHeapProfiles()
}
}

View File

@@ -16,7 +16,6 @@ import (
"net/http"
_ "net/http/pprof"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
@@ -31,6 +30,7 @@ import (
"github.com/syncthing/syncthing/config"
"github.com/syncthing/syncthing/discover"
"github.com/syncthing/syncthing/events"
"github.com/syncthing/syncthing/files"
"github.com/syncthing/syncthing/logger"
"github.com/syncthing/syncthing/model"
"github.com/syncthing/syncthing/osutil"
@@ -38,6 +38,7 @@ import (
"github.com/syncthing/syncthing/upgrade"
"github.com/syncthing/syncthing/upnp"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
var (
@@ -51,7 +52,15 @@ var (
GoArchExtra string // "", "v5", "v6", "v7"
)
const (
exitSuccess = 0
exitError = 1
exitNoUpgradeAvailable = 2
exitRestarting = 3
)
var l = logger.DefaultLogger
var innerProcess = os.Getenv("STNORESTART") != ""
func init() {
if Version != "unknown-dev" {
@@ -79,10 +88,8 @@ var (
confDir string
logFlags int = log.Ltime
rateBucket *ratelimit.Bucket
stop = make(chan bool)
stop = make(chan int)
discoverer *discover.Discoverer
lockConn *net.TCPListener
lockPort int
externalPort int
cert tls.Certificate
)
@@ -106,8 +113,8 @@ The following environment variables are interpreted by syncthing:
STGUIADDRESS Override GUI listen address set in config. Expects protocol type
followed by hostname or an IP address, followed by a port, such
as "https://127.0.0.1:8888".
STGUIAUTH Override GUI authentication credentials set in config. Expects
STGUIAUTH Override GUI authentication credentials set in config. Expects
a colon separated username and password, such as "admin:secret".
STGUIAPIKEY Override GUI API key set in config.
@@ -143,23 +150,28 @@ The following enviroment variables are interpreted by syncthing:
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
STDEADLOCKTIMEOUT Alter deadlock detection timeout (seconds; default 1200).`
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
available CPU cores.`
)
func init() {
rand.Seed(time.Now().UnixNano())
}
// Command line options
var (
reset bool
showVersion bool
doUpgrade bool
doUpgradeCheck bool
noBrowser bool
generateDir string
guiAddress string
guiAuthentication string
guiAPIKey string
)
func main() {
var reset bool
var showVersion bool
var doUpgrade bool
var doUpgradeCheck bool
var noBrowser bool
var generateDir string
var guiAddress string
var guiAuthentication string
var guiAPIKey string
flag.StringVar(&confDir, "home", getDefaultConfDir(), "Set configuration directory")
flag.BoolVar(&reset, "reset", false, "Prepare to resync from cluster")
flag.BoolVar(&showVersion, "version", false, "Show version")
@@ -214,7 +226,7 @@ func main() {
if upgrade.CompareVersions(rel.Tag, Version) <= 0 {
l.Infof("No upgrade available (current %q >= latest %q).", Version, rel.Tag)
os.Exit(2)
os.Exit(exitNoUpgradeAvailable)
}
l.Infof("Upgrade available (current %q < latest %q)", Version, rel.Tag)
@@ -231,12 +243,21 @@ func main() {
}
}
var err error
lockPort, err = getLockPort()
if err != nil {
l.Fatalln("Opening lock port:", err)
if reset {
resetRepositories()
return
}
if os.Getenv("STNORESTART") != "" {
syncthingMain()
} else {
monitorMain()
}
}
func syncthingMain() {
var err error
if len(os.Getenv("GOGC")) == 0 {
debug.SetGCPercent(25)
}
@@ -249,7 +270,7 @@ func main() {
events.Default.Log(events.Starting, map[string]string{"home": confDir})
if _, err := os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
if _, err = os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
// We are supposed to use the default configuration directory. It
// doesn't exist. In the past our default has been ~/.syncthing, so if
// that directory exists we move it to the new default location and
@@ -318,9 +339,10 @@ func main() {
cfg, err = config.Load(nil, myID)
cfg.Repositories = []config.RepositoryConfiguration{
{
ID: "default",
Directory: defaultRepo,
Nodes: []config.RepositoryNodeConfiguration{{NodeID: myID}},
ID: "default",
Directory: defaultRepo,
RescanIntervalS: 60,
Nodes: []config.RepositoryNodeConfiguration{{NodeID: myID}},
},
}
cfg.Nodes = []config.NodeConfiguration{
@@ -343,15 +365,6 @@ func main() {
l.Infof("Edit %s to taste or use the GUI\n", cfgFile)
}
if reset {
resetRepositories()
return
}
if len(os.Getenv("STRESTART")) > 0 {
waitForParentExit()
}
if profiler := os.Getenv("STPROFILER"); len(profiler) > 0 {
go func() {
l.Debugln("Starting profiler on", profiler)
@@ -386,10 +399,20 @@ func main() {
// If this is the first time the user runs v0.9, archive the old indexes and config.
archiveLegacyConfig()
db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), nil)
db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{MaxOpenFiles: 100})
if err != nil {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
}
// Remove database entries for repos that no longer exist in the config
repoMap := cfg.RepoMap()
for _, repo := range files.ListRepos(db) {
if _, ok := repoMap[repo]; !ok {
l.Infof("Cleaning data for dropped repo %q", repo)
files.DropRepo(db, repo)
}
}
m := model.NewModel(confDir, &cfg, myName, "syncthing", Version, db)
nextRepo:
@@ -407,6 +430,7 @@ nextRepo:
// that all files have been deleted which might not be the case,
// so mark it as invalid instead.
if err != nil || !fi.IsDir() {
l.Warnf("Stopping repository %q - directory missing, but has files in index", repo.ID)
cfg.Repositories[i].Invalid = "repo directory missing"
continue nextRepo
}
@@ -419,6 +443,7 @@ nextRepo:
if err != nil {
// If there was another error or we could not create the
// directory, the repository is invalid.
l.Warnf("Stopping repository %q - %v", err)
cfg.Repositories[i].Invalid = err.Error()
continue nextRepo
}
@@ -453,7 +478,7 @@ nextRepo:
proto = "https"
}
l.Infof("Starting web GUI on %s://%s:%d/", proto, hostShow, addr.Port)
l.Infof("Starting web GUI on %s://%s/", proto, net.JoinHostPort(hostShow, strconv.Itoa(addr.Port)))
err := startGUI(guiCfg, os.Getenv("STGUIASSETS"), m)
if err != nil {
l.Fatalln("Cannot start GUI:", err)
@@ -468,7 +493,13 @@ nextRepo:
// start needing a bunch of files which are nowhere to be found. This
// needs to be changed when we correctly do persistent indexes.
for _, repoCfg := range cfg.Repositories {
if repoCfg.Invalid != "" {
continue
}
for _, node := range repoCfg.NodeIDs() {
if node == myID {
continue
}
m.Index(node, repoCfg.ID, nil)
}
}
@@ -559,12 +590,15 @@ nextRepo:
}()
}
go standbyMonitor()
events.Default.Log(events.StartupComplete, nil)
go generateEvents()
<-stop
code := <-stop
l.Okln("Exiting")
os.Exit(code)
}
func generateEvents() {
@@ -574,25 +608,6 @@ func generateEvents() {
}
}
func waitForParentExit() {
l.Infoln("Waiting for parent to exit...")
lockPortStr := os.Getenv("STRESTART")
lockPort, err := strconv.Atoi(lockPortStr)
if err != nil {
l.Warnln("Invalid lock port %q: %v", lockPortStr, err)
}
// Wait for the listen address to become free, indicating that the parent has exited.
for {
ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", lockPort))
if err == nil {
ln.Close()
break
}
time.Sleep(250 * time.Millisecond)
}
l.Infoln("Continuing")
}
func setupUPnP() {
if len(cfg.Options.ListenAddress) == 1 {
_, portStr, err := net.SplitHostPort(cfg.Options.ListenAddress[0])
@@ -648,13 +663,16 @@ func renewUPnP(port int) {
}
// Just renew the same port that we already have
err = igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", cfg.Options.UPnPLease*60)
if err == nil {
l.Infoln("Renewed UPnP port mapping - external port", externalPort)
continue
if externalPort != 0 {
err = igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", cfg.Options.UPnPLease*60)
if err == nil {
l.Infoln("Renewed UPnP port mapping - external port", externalPort)
continue
}
}
// Something strange has happened. Perhaps the gateway has changed?
// Something strange has happened. We didn't have an external port before?
// Or perhaps the gateway has changed?
// Retry the same port sequence from the beginning.
r := setupExternalPort(igd, port)
if r != 0 {
@@ -710,7 +728,7 @@ func archiveLegacyConfig() {
l.Warnf("Cannot archive config:", err)
return
}
defer src.Close()
defer dst.Close()
l.Infoln("Archiving config.xml")
io.Copy(dst, src)
@@ -719,40 +737,12 @@ func archiveLegacyConfig() {
func restart() {
l.Infoln("Restarting")
if os.Getenv("SMF_FMRI") != "" || os.Getenv("STNORESTART") != "" {
// Solaris SMF
l.Infoln("Service manager detected; exit instead of restart")
stop <- true
return
}
env := os.Environ()
newEnv := make([]string, 0, len(env))
for _, s := range env {
if !strings.HasPrefix(s, "STRESTART=") {
newEnv = append(newEnv, s)
}
}
newEnv = append(newEnv, fmt.Sprintf("STRESTART=%d", lockPort))
pgm, err := exec.LookPath(os.Args[0])
if err != nil {
l.Warnln("Cannot restart:", err)
return
}
proc, err := os.StartProcess(pgm, os.Args, &os.ProcAttr{
Env: newEnv,
Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
})
if err != nil {
l.Fatalln(err)
}
proc.Release()
stop <- true
stop <- exitRestarting
}
func shutdown() {
stop <- true
l.Infoln("Shutting down")
stop <- exitSuccess
}
var saveConfigCh = make(chan struct{})
@@ -1106,16 +1096,6 @@ func getFreePort(host string, ports ...int) (int, error) {
return addr.Port, nil
}
func getLockPort() (int, error) {
var err error
lockConn, err = net.ListenTCP("tcp", &net.TCPAddr{IP: net.IP{127, 0, 0, 1}})
if err != nil {
return 0, err
}
addr := lockConn.Addr().(*net.TCPAddr)
return addr.Port, nil
}
func overrideGUIConfig(originalCfg config.GUIConfiguration, address, authentication, apikey string) config.GUIConfiguration {
// Make a copy of the config
cfg := originalCfg
@@ -1164,3 +1144,15 @@ func overrideGUIConfig(originalCfg config.GUIConfiguration, address, authenticat
}
return cfg
}
func standbyMonitor() {
now := time.Now()
for {
time.Sleep(10 * time.Second)
if time.Since(now) > 2*time.Minute {
l.Infoln("Paused state detected, possibly woke up from standby.")
restart()
}
now = time.Now()
}
}

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build solaris
package main

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build freebsd
package main

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (

171
cmd/syncthing/monitor.go Normal file
View File

@@ -0,0 +1,171 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (
"bufio"
"io"
"os"
"os/exec"
"os/signal"
"path/filepath"
"strings"
"sync"
"syscall"
"time"
)
var (
stdoutFirstLines []string // The first 10 lines of stdout
stdoutLastLines []string // The last 50 lines of stdout
stdoutMut sync.Mutex
)
const (
countRestarts = 5
loopThreshold = 15 * time.Second
)
func monitorMain() {
os.Setenv("STNORESTART", "yes")
l.SetPrefix("[monitor] ")
args := os.Args
var restarts [countRestarts]time.Time
sign := make(chan os.Signal, 1)
sigTerm := syscall.Signal(0xf)
signal.Notify(sign, os.Interrupt, sigTerm, os.Kill)
for {
if t := time.Since(restarts[0]); t < loopThreshold {
l.Warnf("%d restarts in %v; not retrying further", countRestarts, t)
os.Exit(exitError)
}
copy(restarts[0:], restarts[1:])
restarts[len(restarts)-1] = time.Now()
cmd := exec.Command(args[0], args[1:]...)
stderr, err := cmd.StderrPipe()
if err != nil {
l.Fatalln(err)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
l.Fatalln(err)
}
l.Infoln("Starting syncthing")
err = cmd.Start()
if err != nil {
l.Fatalln(err)
}
stdoutMut.Lock()
stdoutFirstLines = make([]string, 0, 10)
stdoutLastLines = make([]string, 0, 50)
stdoutMut.Unlock()
go copyStderr(stderr)
go copyStdout(stdout)
exit := make(chan error)
go func() {
exit <- cmd.Wait()
}()
select {
case s := <-sign:
l.Infof("Signal %d received; exiting", s)
cmd.Process.Kill()
<-exit
return
case err = <-exit:
if err == nil {
// Successful exit indicates an intentional shutdown
return
}
}
l.Infoln("Syncthing exited:", err)
time.Sleep(1 * time.Second)
// Let the next child process know that this is not the first time
// it's starting up.
os.Setenv("STRESTART", "yes")
}
}
func copyStderr(stderr io.ReadCloser) {
br := bufio.NewReader(stderr)
var panicFd *os.File
for {
line, err := br.ReadString('\n')
if err != nil {
if err != io.EOF {
l.Warnln("stderr:", err)
}
return
}
if panicFd == nil {
os.Stderr.WriteString(line)
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
panicFd, err = os.Create(filepath.Join(confDir, time.Now().Format("panic-20060102-150405.log")))
if err != nil {
l.Warnln("Create panic log:", err)
continue
}
l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
l.Warnln("Please create an issue at https://github.com/syncting/syncthing/issues/ with the panic log attached")
stdoutMut.Lock()
for _, line := range stdoutFirstLines {
panicFd.WriteString(line)
}
panicFd.WriteString("...\n")
for _, line := range stdoutLastLines {
panicFd.WriteString(line)
}
}
}
if panicFd != nil {
panicFd.WriteString(line)
}
}
}
func copyStdout(stderr io.ReadCloser) {
br := bufio.NewReader(stderr)
for {
line, err := br.ReadString('\n')
if err != nil {
if err != io.EOF {
l.Warnln("stdout:", err)
}
return
}
stdoutMut.Lock()
if len(stdoutFirstLines) < cap(stdoutFirstLines) {
stdoutFirstLines = append(stdoutFirstLines, line)
}
if l := len(stdoutLastLines); l == cap(stdoutLastLines) {
copy(stdoutLastLines, stdoutLastLines[1:])
stdoutLastLines = stdoutLastLines[:l-1]
}
stdoutLastLines = append(stdoutLastLines, line)
stdoutMut.Unlock()
os.Stdout.WriteString(line)
}
}
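The restarts array at the top of monitorMain is a sliding window: the last countRestarts start times are kept, and the monitor gives up when the oldest of them is within loopThreshold, i.e. five starts inside fifteen seconds. A standalone sketch of just the window logic:

package main

import (
    "fmt"
    "time"
)

const (
    countRestarts = 5
    loopThreshold = 15 * time.Second
)

// tooFast reports whether recording another start now would mean
// countRestarts starts within loopThreshold, mirroring the check above.
func tooFast(restarts *[countRestarts]time.Time) bool {
    if time.Since(restarts[0]) < loopThreshold {
        return true
    }
    copy(restarts[0:], restarts[1:])       // drop the oldest start time
    restarts[len(restarts)-1] = time.Now() // record this start
    return false
}

func main() {
    var restarts [countRestarts]time.Time // zero times: far in the past
    for i := 0; ; i++ {
        if tooFast(&restarts) {
            fmt.Printf("giving up after %d rapid starts\n", i)
            return
        }
    }
}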

View File

@@ -2,7 +2,7 @@
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !windows
// +build !solaris,!windows
package main
@@ -15,7 +15,7 @@ import (
)
func init() {
if os.Getenv("STPERFSTATS") != "" {
if innerProcess && os.Getenv("STPERFSTATS") != "" {
go savePerfStats(fmt.Sprintf("perfstats-%d.csv", syscall.Getpid()))
}
}

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package main
import (

View File

@@ -8,9 +8,9 @@ import (
"bytes"
"encoding/hex"
"errors"
"fmt"
"io"
"net"
"strconv"
"sync"
"time"
@@ -38,7 +38,6 @@ type Discoverer struct {
forcedBcastTick chan time.Time
extAnnounceOK bool
extAnnounceOKmut sync.Mutex
globalBcastStop chan bool
}
type cacheEntry struct {
@@ -50,11 +49,6 @@ var (
ErrIncorrectMagic = errors.New("incorrect magic number")
)
// We tolerate a certain amount of errors because we might be running on
// laptops that sleep and wake, have intermittent network connectivity, etc.
// When we hit this many errors in succession, we stop.
const maxErrors = 30
func NewDiscoverer(id protocol.NodeID, addresses []string) *Discoverer {
return &Discoverer{
myID: id,
@@ -345,7 +339,7 @@ func (d *Discoverer) registerNode(addr net.Addr, node Node) bool {
for _, a := range node.Addresses {
var nodeAddr string
if len(a.IP) > 0 {
nodeAddr = fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
nodeAddr = net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
} else if addr != nil {
ua := addr.(*net.UDPAddr)
ua.Port = int(a.Port)
@@ -449,7 +443,7 @@ func (d *Discoverer) externalLookup(node protocol.NodeID) []string {
var addrs []string
for _, a := range pkt.This.Addresses {
nodeAddr := fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
nodeAddr := net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
addrs = append(addrs, nodeAddr)
}
return addrs
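The fmt.Sprintf to net.JoinHostPort changes above matter for IPv6 addresses, which must be bracketed before a port is appended; JoinHostPort does this automatically. For example:

package main

import (
    "fmt"
    "net"
    "strconv"
)

func main() {
    ip := net.ParseIP("fe80::1")
    fmt.Println(fmt.Sprintf("%s:%d", ip, 22000))                    // fe80::1:22000 — ambiguous
    fmt.Println(net.JoinHostPort(ip.String(), strconv.Itoa(22000))) // [fe80::1]:22000
}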

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package files
import "code.google.com/p/go.text/unicode/norm"

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !windows,!darwin
package files

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package files
import (

View File

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package files
import (
@@ -117,6 +121,12 @@ func globalKeyName(key []byte) []byte {
return key[1+64:]
}
func globalKeyRepo(key []byte) []byte {
repo := key[1 : 1+64]
izero := bytes.IndexByte(repo, 0)
return repo[:izero]
}
type deletionHandler func(db dbReader, batch dbWriter, repo, node, name []byte, dbi iterator.Iterator) uint64
type fileIterator func(f protocol.FileIntf) bool
@@ -176,18 +186,28 @@ func ldbGenericReplace(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo
if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
maxLocalVer = lv
}
ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
if fs[fsi].IsInvalid() {
ldbRemoveFromGlobal(snap, batch, repo, node, newName)
} else {
ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
}
fsi++
case moreFs && moreDb && cmp == 0:
// File exists on both sides - compare versions.
// File exists on both sides - compare versions. We might get an
// update with the same version and different flags if a node has
// marked a file as invalid, so handle that too.
var ef protocol.FileInfoTruncated
ef.UnmarshalXDR(dbi.Value())
if fs[fsi].Version > ef.Version {
if fs[fsi].Version != ef.Version || fs[fsi].Flags != ef.Flags {
if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
maxLocalVer = lv
}
ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
if fs[fsi].IsInvalid() {
ldbRemoveFromGlobal(snap, batch, repo, node, newName)
} else {
ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
}
}
// Iterate both sides.
fsi++
@@ -270,7 +290,11 @@ func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64
if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
maxLocalVer = lv
}
ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
if f.IsInvalid() {
ldbRemoveFromGlobal(snap, batch, repo, node, name)
} else {
ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
}
continue
}
@@ -279,11 +303,17 @@ func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64
if err != nil {
panic(err)
}
if ef.Version != f.Version {
// Flags might change without the version being bumped when we set the
// invalid flag on an existing file.
if ef.Version != f.Version || ef.Flags != f.Flags {
if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
maxLocalVer = lv
}
ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
if f.IsInvalid() {
ldbRemoveFromGlobal(snap, batch, repo, node, name)
} else {
ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
}
}
}
@@ -375,7 +405,9 @@ func ldbRemoveFromGlobal(db dbReader, batch dbWriter, repo, node, file []byte) {
gk := globalKey(repo, file)
svl, err := db.Get(gk, nil)
if err != nil {
panic(err)
// We might be called to "remove" a global version that doesn't exist
// if the first update for the file is already marked invalid.
return
}
var fl versionList
@@ -585,6 +617,7 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterat
dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
outer:
for dbi.Next() {
var vl versionList
err := vl.UnmarshalXDR(dbi.Value())
@@ -610,33 +643,110 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterat
if need || !have {
name := globalKeyName(dbi.Key())
fk := nodeKey(repo, vl.versions[0].node, name)
bs, err := snap.Get(fk, nil)
if err != nil {
panic(err)
}
needVersion := vl.versions[0].version
inner:
for i := range vl.versions {
if vl.versions[i].version != needVersion {
// We haven't found a valid copy of the file with the needed version.
continue outer
}
fk := nodeKey(repo, vl.versions[i].node, name)
bs, err := snap.Get(fk, nil)
if err != nil {
panic(err)
}
gf, err := unmarshalTrunc(bs, truncate)
if err != nil {
panic(err)
}
gf, err := unmarshalTrunc(bs, truncate)
if err != nil {
panic(err)
}
if gf.IsDeleted() && !have {
// We don't need deleted files that we don't have
continue
}
if gf.IsInvalid() {
// The file is marked invalid for whatever reason, don't use it.
continue inner
}
if debug {
l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
}
if gf.IsDeleted() && !have {
// We don't need deleted files that we don't have
continue outer
}
if cont := fn(gf); !cont {
return
if debug {
l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
}
if cont := fn(gf); !cont {
return
}
}
}
}
}
func ldbListRepos(db *leveldb.DB) []string {
defer runtime.GC()
start := []byte{keyTypeGlobal}
limit := []byte{keyTypeGlobal + 1}
snap, err := db.GetSnapshot()
if err != nil {
panic(err)
}
defer snap.Release()
dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
defer dbi.Release()
repoExists := make(map[string]bool)
for dbi.Next() {
repo := string(globalKeyRepo(dbi.Key()))
if !repoExists[repo] {
repoExists[repo] = true
}
}
repos := make([]string, 0, len(repoExists))
for k := range repoExists {
repos = append(repos, k)
}
sort.Strings(repos)
return repos
}
func ldbDropRepo(db *leveldb.DB, repo []byte) {
defer runtime.GC()
snap, err := db.GetSnapshot()
if err != nil {
panic(err)
}
defer snap.Release()
// Remove all items related to the given repo from the node->file bucket
start := []byte{keyTypeNode}
limit := []byte{keyTypeNode + 1}
dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
for dbi.Next() {
itemRepo := nodeKeyRepo(dbi.Key())
if bytes.Compare(repo, itemRepo) == 0 {
db.Delete(dbi.Key(), nil)
}
}
dbi.Release()
// Remove all items related to the given repo from the global bucket
start = []byte{keyTypeGlobal}
limit = []byte{keyTypeGlobal + 1}
dbi = snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
for dbi.Next() {
itemRepo := globalKeyRepo(dbi.Key())
if bytes.Compare(repo, itemRepo) == 0 {
db.Delete(dbi.Key(), nil)
}
}
dbi.Release()
}
func unmarshalTrunc(bs []byte, truncate bool) (protocol.FileIntf, error) {
if truncate {
var tf protocol.FileInfoTruncated

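The rule repeated through the hunks above is: an invalid file is still recorded in the announcing node's own index, but is withdrawn from the global version list so it is never chosen as a source. A toy model of that rule with simplified stand-in types, not the real index structures:

package main

import "fmt"

// file is a toy stand-in for the per-node index entry.
type file struct {
    Name    string
    Version uint64
    Invalid bool
}

// update mirrors the pattern above: always record the file in the node's
// index, but only advertise valid files in the global version list.
func update(node map[string]file, global map[string]uint64, f file) {
    node[f.Name] = f
    if f.Invalid {
        delete(global, f.Name) // cf. ldbRemoveFromGlobal
    } else {
        global[f.Name] = f.Version // cf. ldbUpdateGlobal
    }
}

func main() {
    node := map[string]file{}
    global := map[string]uint64{}
    update(node, global, file{"a", 1000, false})
    update(node, global, file{"a", 1000, true}) // same version, now invalid
    fmt.Println(len(node), len(global))         // 1 0: known locally, not advertised
}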
View File

@@ -155,6 +155,17 @@ func (s *Set) LocalVersion(node protocol.NodeID) uint64 {
return s.localVersion[node]
}
// ListRepos returns the repository IDs seen in the database.
func ListRepos(db *leveldb.DB) []string {
return ldbListRepos(db)
}
// DropRepo clears out all information related to the given repo from the
// database.
func DropRepo(db *leveldb.DB, repo string) {
ldbDropRepo(db, []byte(repo))
}
func normalizeFilenames(fs []protocol.FileInfo) {
for i := range fs {
fs[i].Name = normalizedFilename(fs[i].Name)

View File

@@ -7,6 +7,7 @@ package files_test
import (
"bytes"
"fmt"
"reflect"
"sort"
"testing"
@@ -17,10 +18,11 @@ import (
"github.com/syndtr/goleveldb/leveldb/storage"
)
var remoteNode protocol.NodeID
var remoteNode0, remoteNode1 protocol.NodeID
func init() {
remoteNode, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
remoteNode0, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
remoteNode1, _ = protocol.NodeIDFromString("I6KAH76-66SLLLB-5PFXSOA-UFJCDZC-YAOMLEK-CP2GB32-BV5RQST-3PSROAU")
}
func genBlocks(n int) []protocol.BlockInfo {
@@ -80,6 +82,16 @@ func (l fileList) Swap(a, b int) {
l[a], l[b] = l[b], l[a]
}
func (l fileList) String() string {
var b bytes.Buffer
b.WriteString("[]protocol.FileList{\n")
for _, f := range l {
fmt.Fprintf(&b, " %q: #%d, %d bytes, %d blocks, flags=%o\n", f.Name, f.Version, f.Size(), len(f.Blocks), f.Flags)
}
b.WriteString("}")
return b.String()
}
func TestGlobalSet(t *testing.T) {
lamport.Default = lamport.Clock{}
@@ -90,20 +102,20 @@ func TestGlobalSet(t *testing.T) {
m := files.NewSet("test", db)
local0 := []protocol.FileInfo{
local0 := fileList{
protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "z", Version: 1000, Blocks: genBlocks(8)},
}
local1 := []protocol.FileInfo{
local1 := fileList{
protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
}
localTot := []protocol.FileInfo{
localTot := fileList{
local0[0],
local0[1],
local0[2],
@@ -111,76 +123,76 @@ func TestGlobalSet(t *testing.T) {
protocol.FileInfo{Name: "z", Version: 1001, Flags: protocol.FlagDeleted},
}
remote0 := []protocol.FileInfo{
remote0 := fileList{
protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5)},
}
remote1 := []protocol.FileInfo{
remote1 := fileList{
protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(6)},
protocol.FileInfo{Name: "e", Version: 1000, Blocks: genBlocks(7)},
}
remoteTot := []protocol.FileInfo{
remoteTot := fileList{
remote0[0],
remote1[0],
remote0[2],
remote1[1],
}
expectedGlobal := []protocol.FileInfo{
remote0[0],
remote1[0],
remote0[2],
localTot[3],
remote1[1],
localTot[4],
expectedGlobal := fileList{
remote0[0], // a
remote1[0], // b
remote0[2], // c
localTot[3], // d
remote1[1], // e
localTot[4], // z
}
expectedLocalNeed := []protocol.FileInfo{
expectedLocalNeed := fileList{
remote1[0],
remote0[2],
remote1[1],
}
expectedRemoteNeed := []protocol.FileInfo{
expectedRemoteNeed := fileList{
local0[3],
}
m.ReplaceWithDelete(protocol.LocalNodeID, local0)
m.ReplaceWithDelete(protocol.LocalNodeID, local1)
m.Replace(remoteNode, remote0)
m.Update(remoteNode, remote1)
m.Replace(remoteNode0, remote0)
m.Update(remoteNode0, remote1)
g := globalList(m)
sort.Sort(fileList(g))
g := fileList(globalList(m))
sort.Sort(g)
if fmt.Sprint(g) != fmt.Sprint(expectedGlobal) {
t.Errorf("Global incorrect;\n A: %v !=\n E: %v", g, expectedGlobal)
}
h := haveList(m, protocol.LocalNodeID)
sort.Sort(fileList(h))
h := fileList(haveList(m, protocol.LocalNodeID))
sort.Sort(h)
if fmt.Sprint(h) != fmt.Sprint(localTot) {
t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, localTot)
}
h = haveList(m, remoteNode)
sort.Sort(fileList(h))
h = fileList(haveList(m, remoteNode0))
sort.Sort(h)
if fmt.Sprint(h) != fmt.Sprint(remoteTot) {
t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, remoteTot)
}
n := needList(m, protocol.LocalNodeID)
sort.Sort(fileList(n))
n := fileList(needList(m, protocol.LocalNodeID))
sort.Sort(n)
if fmt.Sprint(n) != fmt.Sprint(expectedLocalNeed) {
t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedLocalNeed)
}
n = needList(m, remoteNode)
sort.Sort(fileList(n))
n = fileList(needList(m, remoteNode0))
sort.Sort(n)
if fmt.Sprint(n) != fmt.Sprint(expectedRemoteNeed) {
t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedRemoteNeed)
@@ -191,7 +203,7 @@ func TestGlobalSet(t *testing.T) {
t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, localTot[1])
}
f = m.Get(remoteNode, "b")
f = m.Get(remoteNode0, "b")
if fmt.Sprint(f) != fmt.Sprint(remote1[0]) {
t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, remote1[0])
}
@@ -211,14 +223,14 @@ func TestGlobalSet(t *testing.T) {
t.Errorf("GetGlobal incorrect;\n A: %v !=\n E: %v", f, protocol.FileInfo{})
}
av := []protocol.NodeID{protocol.LocalNodeID, remoteNode}
av := []protocol.NodeID{protocol.LocalNodeID, remoteNode0}
a := m.Availability("a")
if !(len(a) == 2 && (a[0] == av[0] && a[1] == av[1] || a[0] == av[1] && a[1] == av[0])) {
t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, av)
}
a = m.Availability("b")
if len(a) != 1 || a[0] != remoteNode {
t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode)
if len(a) != 1 || a[0] != remoteNode0 {
t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode0)
}
a = m.Availability("d")
if len(a) != 1 || a[0] != protocol.LocalNodeID {
@@ -226,6 +238,128 @@ func TestGlobalSet(t *testing.T) {
}
}
func TestNeedWithInvalid(t *testing.T) {
lamport.Default = lamport.Clock{}
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
t.Fatal(err)
}
s := files.NewSet("test", db)
localHave := fileList{
protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
}
remote0Have := fileList{
protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
}
remote1Have := fileList{
protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "e", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
}
expectedNeed := fileList{
protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
}
s.ReplaceWithDelete(protocol.LocalNodeID, localHave)
s.Replace(remoteNode0, remote0Have)
s.Replace(remoteNode1, remote1Have)
need := fileList(needList(s, protocol.LocalNodeID))
sort.Sort(need)
if fmt.Sprint(need) != fmt.Sprint(expectedNeed) {
t.Errorf("Need incorrect;\n A: %v !=\n E: %v", need, expectedNeed)
}
}
func TestUpdateToInvalid(t *testing.T) {
lamport.Default = lamport.Clock{}
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
t.Fatal(err)
}
s := files.NewSet("test", db)
localHave := fileList{
protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
}
s.ReplaceWithDelete(protocol.LocalNodeID, localHave)
have := fileList(haveList(s, protocol.LocalNodeID))
sort.Sort(have)
if fmt.Sprint(have) != fmt.Sprint(localHave) {
t.Errorf("Have incorrect before invalidation;\n A: %v !=\n E: %v", have, localHave)
}
localHave[1] = protocol.FileInfo{Name: "b", Version: 1001, Flags: protocol.FlagInvalid}
s.Update(protocol.LocalNodeID, localHave[1:2])
have = fileList(haveList(s, protocol.LocalNodeID))
sort.Sort(have)
if fmt.Sprint(have) != fmt.Sprint(localHave) {
t.Errorf("Have incorrect after invalidation;\n A: %v !=\n E: %v", have, localHave)
}
}
func TestInvalidAvailability(t *testing.T) {
lamport.Default = lamport.Clock{}
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
t.Fatal(err)
}
s := files.NewSet("test", db)
remote0Have := fileList{
protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
}
remote1Have := fileList{
protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
}
s.Replace(remoteNode0, remote0Have)
s.Replace(remoteNode1, remote1Have)
if av := s.Availability("both"); len(av) != 2 {
t.Error("Incorrect availability for 'both':", av)
}
if av := s.Availability("r0only"); len(av) != 1 || av[0] != remoteNode0 {
t.Error("Incorrect availability for 'r0only':", av)
}
if av := s.Availability("r1only"); len(av) != 1 || av[0] != remoteNode1 {
t.Error("Incorrect availability for 'r1only':", av)
}
if av := s.Availability("none"); len(av) != 0 {
t.Error("Incorrect availability for 'none':", av)
}
}
func TestLocalDeleted(t *testing.T) {
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
@@ -331,7 +465,7 @@ func Benchmark10kUpdateChg(b *testing.B) {
}
m := files.NewSet("test", db)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
var local []protocol.FileInfo
for i := 0; i < 10000; i++ {
@@ -362,7 +496,7 @@ func Benchmark10kUpdateSme(b *testing.B) {
b.Fatal(err)
}
m := files.NewSet("test", db)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
var local []protocol.FileInfo
for i := 0; i < 10000; i++ {
@@ -389,7 +523,7 @@ func Benchmark10kNeed2k(b *testing.B) {
}
m := files.NewSet("test", db)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
var local []protocol.FileInfo
for i := 0; i < 8000; i++ {
@@ -422,7 +556,7 @@ func Benchmark10kHaveFullList(b *testing.B) {
}
m := files.NewSet("test", db)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
var local []protocol.FileInfo
for i := 0; i < 2000; i++ {
@@ -455,7 +589,7 @@ func Benchmark10kGlobal(b *testing.B) {
}
m := files.NewSet("test", db)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
var local []protocol.FileInfo
for i := 0; i < 2000; i++ {
@@ -506,8 +640,8 @@ func TestGlobalReset(t *testing.T) {
t.Errorf("Global incorrect;\n%v !=\n%v", g, local)
}
m.Replace(remoteNode, remote)
m.Replace(remoteNode, nil)
m.Replace(remoteNode0, remote)
m.Replace(remoteNode0, nil)
g = globalList(m)
sort.Sort(fileList(g))
@@ -546,7 +680,7 @@ func TestNeed(t *testing.T) {
}
m.ReplaceWithDelete(protocol.LocalNodeID, local)
m.Replace(remoteNode, remote)
m.Replace(remoteNode0, remote)
need := needList(m, protocol.LocalNodeID)
@@ -597,6 +731,57 @@ func TestLocalVersion(t *testing.T) {
}
}
func TestListDropRepo(t *testing.T) {
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
t.Fatal(err)
}
s0 := files.NewSet("test0", db)
local1 := []protocol.FileInfo{
protocol.FileInfo{Name: "a", Version: 1000},
protocol.FileInfo{Name: "b", Version: 1000},
protocol.FileInfo{Name: "c", Version: 1000},
}
s0.Replace(protocol.LocalNodeID, local1)
s1 := files.NewSet("test1", db)
local2 := []protocol.FileInfo{
protocol.FileInfo{Name: "d", Version: 1002},
protocol.FileInfo{Name: "e", Version: 1002},
protocol.FileInfo{Name: "f", Version: 1002},
}
s1.Replace(remoteNode0, local2)
// Check that we have both repos and that their data is in the global list
expectedRepoList := []string{"test0", "test1"}
if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
}
if l := len(globalList(s0)); l != 3 {
t.Errorf("Incorrect global length %d != 3 for s0", l)
}
if l := len(globalList(s1)); l != 3 {
t.Errorf("Incorrect global length %d != 3 for s1", l)
}
// Drop one of them and check that it's gone.
files.DropRepo(db, "test1")
expectedRepoList = []string{"test0"}
if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
}
if l := len(globalList(s0)); l != 3 {
t.Errorf("Incorrect global length %d != 3 for s0", l)
}
if l := len(globalList(s1)); l != 0 {
t.Errorf("Incorrect global length %d != 0 for s1", l)
}
}
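This test suggests that keys in the database are namespaced per repo, so files.DropRepo amounts to a prefix delete and files.ListRepos to a prefix scan. A hedged sketch of the delete half using goleveldb's prefix iterator (the "repo/" key layout and the util import, github.com/syndtr/goleveldb/leveldb/util, are assumptions for illustration; the files package uses its own key encoding):

	// dropRepo deletes every key stored under the given repo's prefix.
	func dropRepo(db *leveldb.DB, repo string) error {
		iter := db.NewIterator(util.BytesPrefix([]byte("repo/"+repo+"/")), nil)
		defer iter.Release()
		batch := new(leveldb.Batch)
		for iter.Next() {
			batch.Delete(iter.Key()) // Batch copies the key, safe across Next()
		}
		if err := iter.Error(); err != nil {
			return err
		}
		return db.Write(batch, nil)
	}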
func TestLongPath(t *testing.T) {
db, err := leveldb.Open(storage.NewMemStorage(), nil)
if err != nil {
@@ -652,7 +837,7 @@ func TestStressGlobalVersion(t *testing.T) {
m := files.NewSet("test", db)
done := make(chan struct{})
go stressWriter(m, remoteNode, set1, nil, done)
go stressWriter(m, remoteNode0, set1, nil, done)
go stressWriter(m, protocol.LocalNodeID, set2, nil, done)
t0 := time.Now()

View File

@@ -15,7 +15,7 @@ syncthing.config(function ($httpProvider, $translateProvider) {
$httpProvider.defaults.xsrfCookieName = 'CSRF-Token';
$translateProvider.useStaticFilesLoader({
prefix: 'lang-',
prefix: 'lang/lang-',
suffix: '.json'
});
});
@@ -25,6 +25,16 @@ syncthing.controller('EventCtrl', function ($scope, $http) {
var lastID = 0;
var successFn = function (data) {
// When Syncthing restarts while the long polling connection is in
// progress, the browser on some platforms returns a 200 (since the
// headers have been flushed with the return code 200), with no data.
// This basically means that the connection has been reset, and the call
// was not actually successful.
if (!data) {
errorFn(data);
return;
}
$scope.$emit('UIOnline');
if (lastID > 0) {
@@ -91,14 +101,27 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
var lang, matching;
for (var i = 0; i < langs.length; i++) {
lang = langs[i];
matching = validLangs.filter(function (l) {
return lang.length >= 2 && l.indexOf(lang) == 0;
if (lang.length < 2) {
continue;
}
matching = validLangs.filter(function (possibleLang) {
// The langs returned by the /rest/langs call will be in lower
// case. We compare to the lowercase version of the language
// code we have as well.
possibleLang = possibleLang.toLowerCase();
if (possibleLang.length > lang.length) {
return possibleLang.indexOf(lang) == 0;
} else {
return lang.indexOf(possibleLang) == 0;
}
});
if (matching.length >= 1) {
$translate.use(matching[0]);
break;
return;
}
}
// Fallback if nothing matched
$translate.use("en");
})
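Restated outside Angular: the comparison above lowercases both codes and treats the shorter one as a required prefix of the longer, so "en" matches "en-us" and "en-us" matches "en". A standalone sketch of just that rule, as a hypothetical helper sketched in Go to match the other examples here (not part of app.js):

	// langMatches reports whether a browser language code and a valid
	// translation code denote the same language: case-insensitively,
	// the shorter code must be a prefix of the longer one.
	func langMatches(browser, valid string) bool {
		browser = strings.ToLower(browser)
		valid = strings.ToLower(valid)
		if len(valid) > len(browser) {
			return strings.HasPrefix(valid, browser)
		}
		return strings.HasPrefix(browser, valid)
	}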
$(window).bind('beforeunload', function() {
@@ -714,10 +737,16 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
$scope.currentRepo.FileVersioningSelector = "none";
}
$scope.currentRepo.simpleKeep = $scope.currentRepo.simpleKeep || 5;
$scope.currentRepo.staggeredMaxAge = $scope.currentRepo.staggeredMaxAge || 365;
$scope.currentRepo.staggeredCleanInterval = $scope.currentRepo.staggeredCleanInterval || 3600;
$scope.currentRepo.staggeredVersionsPath = $scope.currentRepo.staggeredVersionsPath || "";
// staggeredMaxAge can validly be zero, which we should not replace
// with the default value of 365. So only set the default if it's
// actually undefined.
if (typeof $scope.currentRepo.staggeredMaxAge === 'undefined') {
$scope.currentRepo.staggeredMaxAge = 365;
}
$scope.editingExisting = true;
$scope.repoEditor.$setPristine();
$('#editRepo').modal();
@@ -877,10 +906,10 @@ function nodeCompare(a, b) {
}
function repoCompare(a, b) {
if (a.Directory < b.Directory) {
if (a.ID < b.ID) {
return -1;
}
return a.Directory > b.Directory;
return a.ID > b.ID;
}
function repoMap(l) {
@@ -942,9 +971,9 @@ function debounce(func, wait) {
} else {
timeout = null;
if (again) {
again = false;
result = func.apply(context, args);
context = args = null;
again = false;
}
}
};
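For reference, the shape of this debounce is leading edge with one allowed trailing call: the first call runs immediately, calls arriving during the wait window only set the "again" flag, and when the timer fires the queued call runs once. A generic Go sketch under that reading (an illustration, not the app.js code):

	// debounce invokes f immediately on the first call, suppresses calls
	// for d, and runs f once more at the end of the window if any call
	// arrived in the meantime.
	func debounce(d time.Duration, f func()) func() {
		var mu sync.Mutex
		pending, again := false, false
		fire := func() {
			mu.Lock()
			pending = false
			run := again
			again = false
			mu.Unlock()
			if run {
				f() // the single trailing invocation
			}
		}
		return func() {
			mu.Lock()
			if pending {
				again = true // a call arrived mid-window
				mu.Unlock()
				return
			}
			pending = true
			mu.Unlock()
			f() // leading-edge invocation
			time.AfterFunc(d, fire)
		}
	}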
@@ -1014,12 +1043,6 @@ syncthing.filter('metric', function () {
};
});
syncthing.filter('short', function () {
return function (input) {
return input.substr(0, 6);
};
});
syncthing.filter('alwaysNumber', function () {
return function (input) {
if (input === undefined) {
@@ -1029,18 +1052,6 @@ syncthing.filter('alwaysNumber', function () {
};
});
syncthing.filter('shortPath', function () {
return function (input) {
if (input === undefined)
return "";
var parts = input.split(/[\/\\]/);
if (!parts || parts.length <= 3) {
return input;
}
return ".../" + parts.slice(parts.length-2).join("/");
};
});
syncthing.filter('basename', function () {
return function (input) {
if (input === undefined)
@@ -1053,24 +1064,6 @@ syncthing.filter('basename', function () {
};
});
syncthing.filter('clean', function () {
return function (input) {
return encodeURIComponent(input).replace(/%/g, '');
};
});
syncthing.directive('optionEditor', function () {
return {
restrict: 'C',
replace: true,
transclude: true,
scope: {
setting: '=setting',
},
template: '<input type="text" ng-model="config.Options[setting.id]"></input>',
};
});
syncthing.directive('uniqueRepo', function() {
return {
require: 'ngModel',

BIN
gui/font/raleway-500.woff Normal file
View File

Binary file not shown.

View File

@@ -2,5 +2,5 @@
font-family: 'Raleway';
font-style: normal;
font-weight: 500;
src: local('Raleway'), url(raleway-500.ttf) format('truetype');
src: local('Raleway'), url(raleway-500.woff) format('woff');
}

View File

Three image files shown in the before/after viewer; dimensions and sizes unchanged (6.4 KiB, 47 KiB, 12 KiB).

View File

@@ -11,93 +11,12 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="author" content="">
<link rel="shortcut icon" href="favicon.png">
<link rel="shortcut icon" href="img/favicon.png">
<title>Syncthing | {{thisNodeName()}}</title>
<link href="bootstrap/css/bootstrap.min.css" rel="stylesheet">
<link href="raleway.css" rel="stylesheet">
<style type="text/css">
body {
padding-bottom: 70px;
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}
h1, h2, h3, h4, h5 {
font-family: "Raleway", "Helvetica Neue", Helvetica, Arial, sans-serif;
}
ul+h5 {
margin-top: 1.5em;
}
.text-monospace {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
}
.table-condensed>thead>tr>th, .table-condensed>tbody>tr>th, .table-condensed>tfoot>tr>th, .table-condensed>thead>tr>td, .table-condensed>tbody>tr>td, .table-condensed>tfoot>tr>td {
border-top: none;
}
.logo {
margin: 0;
padding: 0;
top: -5px;
position: relative;
}
.list-no-bullet {
list-style-type: none
}
.li-column {
display: inline-block;
min-width: 7em;
margin-right: 1em;
background-color: rgb(236, 240, 241);
border-radius: 3px;
padding: 1px 4px;
margin: 2px 2px;
}
.li-column span.data {
margin-left: 0.5em;
min-width: 10em;
text-align: right;
display: inline-block;
}
.ng-cloak {
display: none !important;
}
.table th {
white-space: nowrap;
font-weight: 400;
}
.table td {
padding-left: 20px !important;
}
.table td.small-data {
white-space: nowrap;
}
table.table-condensed {
table-layout: fixed;
}
table.table-condensed td {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
@media (max-width:767px) {
table.table-condensed td {
/* for mobile phones, to allow line breaks in the long repo folder and
* shared-with columns. */
white-space: normal;
}
}
</style>
<link href="font/raleway.css" rel="stylesheet">
<link href="overrides.css" rel="stylesheet">
</head>
<body>
@@ -107,7 +26,7 @@
<nav class="navbar navbar-top navbar-default" role="navigation">
<div class="container">
<span class="navbar-brand"><img class="logo" src="logo-text-64.png" height="32" width="117"/></span>
<span class="navbar-brand"><img class="logo" src="img/logo-text-64.png" height="32" width="117"/></span>
<p class="navbar-text hidden-xs">{{thisNodeName()}}</p>
<ul class="nav navbar-nav navbar-right">
<li ng-if="upgradeInfo.newer">
@@ -501,7 +420,7 @@
</div>
<div class="form-group" ng-class="{'has-error': repoEditor.repoPath.$invalid && repoEditor.repoPath.$dirty}">
<label translate for="repoPath">Repository Path</label>
<input name="repoPath" placeholder="~/Documents" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
<input name="repoPath" placeholder="~/Documents" ng-disabled="editingExisting" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
<p class="help-block">
<span translate ng-if="repoEditor.repoPath.$valid || repoEditor.repoPath.$pristine">Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for</span> <code>{{system.tilde}}</code>.
<span translate ng-if="repoEditor.repoPath.$error.required && repoEditor.repoPath.$dirty">The repository path cannot be blank.</span>
@@ -757,7 +676,7 @@
<!-- About modal -->
<modal id="about" large="yes" close="yes" status="info" title="About">
<h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
<h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="img/logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
<hr/>
<p translate>Copyright &copy; 2014 Jakob Borg and the following Contributors:</p>
@@ -779,6 +698,7 @@
<li>James Patterson</li>
<li>Jens Diemer</li>
<li>Marcin Dziadus</li>
<li>Michael Tilli</li>
<li>Philippe Schommers</li>
<li>Ryan Sullivan</li>
<li>Tully Robinson</li>
@@ -803,12 +723,12 @@
</modal>
<script src="angular.min.js"></script>
<script src="angular-translate.min.js"></script>
<script src="angular-translate-loader.min.js"></script>
<script src="jquery-2.0.3.min.js"></script>
<script src="angular/angular.min.js"></script>
<script src="angular/angular-translate.min.js"></script>
<script src="angular/angular-translate-loader.min.js"></script>
<script src="jquery/jquery-2.0.3.min.js"></script>
<script src="bootstrap/js/bootstrap.min.js"></script>
<script src="valid-langs.js"></script>
<script src="lang/valid-langs.js"></script>
<script src="app.js"></script>
</body>
</html>

View File

@@ -0,0 +1,7 @@
All files in this directory are auto-generated. Do not change any of
them. To contribute translations, please head over to
https://www.transifex.com/projects/p/syncthing/
Any updates made on Transifex will be automatically pulled into these
files.

137
gui/lang/lang-ca.json Normal file
View File

@@ -0,0 +1,137 @@
{
"API Key": "Clau API",
"About": "Sobre",
"Add Node": "Afegir Node",
"Add Repository": "Afegir Repositori",
"Address": "Adreça",
"Addresses": "Adreces",
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
"Announce Server": "Servidor d'anunciament",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Bugs": "Bugs",
"CPU Utilization": "Utilització del CPU",
"Close": "Tancar",
"Connection Error": "Error de connexió",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg i els següents contribuïdors",
"Delete": "Esborrar",
"Disconnected": "Desconnectat",
"Documentation": "Documentació",
"Download Rate": "Tasa de descarrega",
"Edit": "Editar",
"Edit Node": "Editar Node",
"Edit Repository": "Editar Repositori",
"Enable UPnP": "Habilitat UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
"Error": "Error",
"File Versioning": "Versionat de Fitxers",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Els bits de permisos dels fitxers son ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan son substituïts o esborrats per syncthing.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres nodes, però els canvis fets en aquest node seran enviats a la resta del cluster.",
"Folder": "Carpeta",
"GUI Authentication Password": "Contrasenya d'autenticació GUI",
"GUI Authentication User": "Usuari d'autenticació GUI",
"GUI Listen Addresses": "Adreça d'escolta del GUI",
"Generate": "Generar",
"Global Discovery": "Descobriment Global",
"Global Discovery Server": "Servidor de Descobriment Global",
"Global Repository": "Repositori Global",
"Idle": "Inactiu",
"Ignore Permissions": "Ignora Permisos",
"Keep Versions": "Mantenir Versions",
"Last seen": "Vist per última vegada",
"Latest Release": "Última publicació",
"Local Discovery": "Descobriment Local",
"Local Discovery Port": "Port de Descobriment Local",
"Local Repository": "Repositori Local",
"Master Repo": "Rep Master",
"Max File Change Rate (KiB/s)": "Tasa Màxima d'intercanvi de fitxer (KiB/s)",
"Max Outstanding Requests": "Màxim de Peticions Pendents",
"Maximum Age": "Antiguitat Màxima",
"Never": "Mai",
"No": "No",
"No File Versioning": "Sense Versionat de Fitxer",
"Node ID": "ID del Node",
"Node Identification": "Identificació del Node",
"Node Name": "Nom Del Node",
"Notice": "Avís",
"OK": "OK",
"Offline": "Desconnectat",
"Online": "Connectat",
"Out Of Sync": "Fora de la Sincronització",
"Outgoing Rate Limit (KiB/s)": "Tasa Límit de Sortida (KiB/s)",
"Override Changes": "Sobreescriure Canvis",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta del repositori a l'equip local. Si no existeix serà creada. El caràcter (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Ruta on les versions s'haurien de guardar (deixa-ho buit per fer servir el directori .stversions per defecte al repositori)",
"Please wait": "Si-us-plau espera",
"Preview Usage Report": "Vista Prèvia de l'Informe d'Ús",
"RAM Utilization": "Utilització de la RAM",
"Reconnect Interval (s)": "Interval de Reconnexió (s)",
"Repository ID": "ID del Repositori",
"Repository Master": "Repositori Mestre",
"Repository Path": "Ruta del Repositori",
"Rescan": "Re-escanejar",
"Rescan Interval": "Interval de re-escaneig",
"Rescan Interval (s)": "Interval de re-escaneig (s)",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
"Restarting": "Reiniciant",
"Save": "Guardar",
"Scanning": "Escanejant",
"Select the nodes to share this repository with.": "Seleccionar els nodes amb els que es comparteix el repositori.",
"Settings": "Preferències",
"Share With Nodes": "Compartir Amb Els Nodes",
"Shared With": "Compartir Amb",
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identificador curt pel repositori. Ha de ser el mateix per tots els nodes del cluster.",
"Show ID": "Mostrar ID",
"Shown instead of Node ID in the cluster status.": "Mostrat en comptes del ID del Node en l'estat del cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Mostrat en comptes del ID del Node en l'estat del cluster. Serà advertit als altres nodes com un nom opcional per defecte.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Mostrat en comptes del ID del Node en l'estat del cluster. S'actualitzara al nom del node si es deixa buit.",
"Shutdown": "Apagar",
"Simple File Versioning": "Versionat de Fitxers Senzill",
"Source Code": "Codi Font",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Stopped": "Aturat",
"Support / Forum": "Suport / Fòrum",
"Sync Protocol Listen Addresses": "Adreça d'escolta del Protocol Sync",
"Synchronization": "Sincronització",
"Syncing": "Synthing",
"Syncthing has been shut down.": "S'ha aturat el synthing.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts dels mateixos:",
"Syncthing is restarting.": "Reiniciant syncthing.",
"Syncthing is upgrading.": "Actualitzant syncthing.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Synthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant...",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardar però no s'ha activat. S'ha de reiniciar el synthing per activar la nova configuració.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de repositoris i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del Node introduït no sembla vàlid. Hauria de tenir 52 caràcters amb lletres i números, els espais i les barres son opcionals.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del Node introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números, els espais i les barres son opcionals.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es fan servir els següents intervals: per la primera hora es manté una versió cada 30 segons, pel primer dia es manté una versió cada hora, pel primer cada 30 dies es manté una versió cada dia, fins el màxim d'antiguitat es manté una versió cada setmana.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The node ID cannot be blank.": "El ID del node no pot estar en blanc.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "El ID del node per introduir aquí es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre node. Els espais i les barres son opcionals (s'ignoren).",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The repository ID cannot be blank.": "El ID del repositori no pot estar en blanc.",
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "El ID del repositori ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), barra (-) i barra baixa (_).",
"The repository ID must be unique.": "El ID del repositori ha de ser únic",
"The repository path cannot be blank.": "La carpeta del repositori no pot estar en blanc.",
"The rescan interval must be at least 5 seconds.": "El interval de re-escaneig ha de ser com a mínim de 5 segons.",
"Unknown": "Desconegut",
"Up to Date": "Actualitzat",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Tasa de Pujada",
"Usage": "Ús",
"Use Compression": "Utilitza compressió",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Version": "Versió",
"Versions Path": "Carpeta de les Versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions son automàticament eliminades si son més antigues que el màxim d'antiguitat o si excedeixen del nombre de fitxers permesos en un interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Quan s'afegeix un nou node recorda que aquest node s'ha d'afegir tambe a l'altre banda.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Quan s'afegeix un nou repositori recorda que el ID del repositori s'utilitza per lligar repositoris entre nodes. Es distingeix entre majúscules i minúscules i ha de ser exactament iguals entre tots els nodes.",
"Yes": "Si",
"You must keep at least one version.": "Has de mantenir com a mínim una versió.",
"items": "Elements"
}

View File

@@ -38,7 +38,7 @@
"Idle": "Inactivo",
"Ignore Permissions": "Ignorar permisos",
"Keep Versions": "Conservar versiones",
"Last seen": "Last seen",
"Last seen": "Visto por ultima vez",
"Latest Release": "Última versión",
"Local Discovery": "Búsqueda en red local",
"Local Discovery Port": "Puerto de búsqueda de red local",
@@ -47,7 +47,7 @@
"Max File Change Rate (KiB/s)": "Tasa máxima de cambios (KiB/s)",
"Max Outstanding Requests": "Cantidad máxima de peticiones pendientes",
"Maximum Age": "Maximum Age",
"Never": "Never",
"Never": "Nunca",
"No": "No",
"No File Versioning": "No File Versioning",
"Node ID": "Nodo ID",
@@ -127,11 +127,11 @@
"Use Compression": "Usar compresión",
"Use HTTPS for GUI": "Usar HTTPS para la GUI",
"Version": "Versión",
"Versions Path": "Versions Path",
"Versions Path": "Ruta de versiones",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Al agregar un nuevo nodo, recuerde que este nodo debe ser agregado en el otro lado también.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Al agregar un nuevo repositorio, tenga en mente que el ID de repositorio se utiliza para ligar los repositorios entre nodos. Distingue mayúsculas y minúsculas y debe ser exactamente igual en todos los nodos.",
"Yes": "Si",
"You must keep at least one version.": "Debe mantener al menos una versión",
"items": "items"
"items": "Articulos"
}

View File

@@ -38,7 +38,7 @@
"Idle": "Au repos",
"Ignore Permissions": "Ignorer les permissions",
"Keep Versions": "Conserver les versions",
"Last seen": "Dernière appartition",
"Last seen": "Dernière apparition",
"Latest Release": "Dernière version",
"Local Discovery": "Recherche locale",
"Local Discovery Port": "Port de recherche locale",
@@ -61,7 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Override Changes": "Écraser les changements",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin du répertoire sur l'ordinateur local. Il sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
"Please wait": "Merci de patienter",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
"RAM Utilization": "Utilisation de la RAM",
@@ -84,8 +84,8 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identifiant court pour le répertoire. Il doit être le même sur l'ensemble des nœuds du cluster.",
"Show ID": "Montrer l'ID",
"Shown instead of Node ID in the cluster status.": "Affiché à la place de l'ID du nœud au sein du cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Affiché à la place de l'ID du nœud dans le statut du cluster. Sera annoncé aux autres nœuds comme un nom par défaut optionnel.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Affiché à la place de l'ID du nœud dans le statut du cluster. Sera mis à jour par le nom que le nœud annonce si laissé vide.",
"Shutdown": "Éteindre",
"Simple File Versioning": "Versions simples de fichier",
"Source Code": "Code source",
@@ -97,7 +97,7 @@
"Synchronization": "Synchronisation",
"Syncing": "En cours de synchronisation",
"Syncthing has been shut down.": "Syncthing a été éteint.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclut les logiciels, ou portion de ceux-ci, suivants:",
"Syncthing includes the following software or portions thereof:": "Syncthing intègre les logiciels suivants (ou des éléments provenant de ces logiciels) :",
"Syncthing is restarting.": "Syncthing est cours de redémarrage.",
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
@@ -106,9 +106,9 @@
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des répertoires et les versions de l'application. Si le jeu de données rapportées devait être changé, il vous sera demandé de le valider de nouveau via ce dialogue.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID du nœud ne semble pas être valide. Il devrait ressembler à une chaine de 52 caractères comprenant lettres et chiffres, avec des espaces et des traits d'union optionnels.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID du nœud inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 comprenant lettres et chiffres, avec des espaces et des traits d'union optionnels.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Les intervalles suivant sont utilisés: la première heure une version est conservée chaque 30 secondes, le premier jour une version est conservée chaque heure, les premiers 30 jours une version est conservée chaque jour, jusqu'à la limite d'âge maximum une version est conservée chaque semaine.",
"The maximum age must be a number and cannot be blank.": "L'ancienneté maximum doit être un nombre et ne peut être vide.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Le temps maximum de conservation d'une version (en jours, mettre à 0 pour conserver les versions pour toujours)",
"The node ID cannot be blank.": "L'ID du nœud ne peut être vide.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "L'ID du nœud à insérer peut être trouvé à travers le menu \"Éditer > Montrer l'ID\" des autres nœuds. Les espaces et les traits d'union sont optionnels (ils seront ignorés).",
"The number of old versions to keep, per file.": "Le nombre d'anciennes versions à garder, par fichier.",
@@ -120,7 +120,7 @@
"The rescan interval must be at least 5 seconds.": "L'intervalle de scan doit être d'au minimum 5 secondes.",
"Unknown": "Inconnu",
"Up to Date": "Synchronisation à jour",
"Upgrade To {%version%}": "Upgrader vers {{version}}",
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
"Upgrading": "Mise à jour de Syncthing",
"Upload Rate": "Débit d'envoi",
"Usage": "Utilisation",

View File

@@ -61,7 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Limite di Velocità in Uscita (KiB/s)",
"Override Changes": "Ignora Modifiche",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Percorso del deposito nel computer locale. Verrà creato se non esiste già. Il carattere tilde (~) può essere utilizzato come scorciatoia per",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Percorso di salvataggio delle versioni (lasciare vuoto per utilizzare la cartella predefinita .stversions nel deposito).",
"Please wait": "Attendere prego",
"Preview Usage Report": "Anteprima Statistiche di Utilizzo",
"RAM Utilization": "Utilizzo RAM",
@@ -108,7 +108,7 @@
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID del nodo inserito non sembra valido. Dovrebbe essere una stringa di 52 o 56 caratteri costituita da lettere e numeri, con spazi e trattini opzionali.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "La durata massima dev'essere un numero e non può essere vuoto.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "La durata massima di una versione (in giorni, imposta a 0 per mantenere le versioni per sempre).",
"The node ID cannot be blank.": "L'ID del nodo non può essere vuoto.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Trova l'ID nella finestra di dialogo \"Modifica > Mostra ID\" dell'altro nodo, poi inseriscilo qui. Gli spazi e i trattini sono opzionali (ignorati).",
"The number of old versions to keep, per file.": "Il numero di vecchie versioni da mantenere, per file.",

View File

@@ -25,7 +25,7 @@
"Error": "Erro",
"File Versioning": "Gestão de versões",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "As permissões do ficheiro são ignoradas ao procurar alterações. Utilize nos sistemas de ficheiros FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Os ficheiros são movidos para versões carimbadas com o tempo numa pasta .stversions quando substituídos ou apagados pelo syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Os ficheiros são movidos para versões marcadas com o tempo numa pasta .stversions, quando substituídos ou apagados pelo syncthing.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Os ficheiros são protegidos das alterações feitas noutros nós, mas alterações feitas neste nó serão enviadas para o resto do agrupamento.",
"Folder": "Pasta",
"GUI Authentication Password": "Senha da autenticação na interface gráfica",
@@ -53,7 +53,7 @@
"Node ID": "ID do nó",
"Node Identification": "Identificação do nó",
"Node Name": "Nome do nó",
"Notice": "Nota",
"Notice": "Avisos",
"OK": "OK",
"Offline": "Desconectado",
"Online": "Conectado",
@@ -69,14 +69,14 @@
"Repository ID": "ID do repositório",
"Repository Master": "Repositório mestre",
"Repository Path": "Caminho do repositório",
"Rescan": "Voltar a varrer",
"Rescan Interval": "Intervalo entre varrimentos",
"Rescan Interval (s)": "Intervalo entre varrimentos (s)",
"Rescan": "Verificar agora",
"Rescan Interval": "Intervalo entre verificações",
"Rescan Interval (s)": "Intervalo entre verificações (s)",
"Restart": "Reiniciar",
"Restart Needed": "É preciso reiniciar",
"Restarting": "Reiniciando",
"Save": "Gravar",
"Scanning": "Varrendo",
"Scanning": "Verificando",
"Select the nodes to share this repository with.": "Seleccione os nós com os quais vai partilhar este repositório.",
"Settings": "Configurações",
"Share With Nodes": "Partilhar com os nós",
@@ -84,12 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identificador curto para o repositório. Tem que ser igual em todos os nós do agrupamento.",
"Show ID": "Mostrar ID",
"Shown instead of Node ID in the cluster status.": "Apresentado ao invés do ID do nó no estado do agrupamento.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Apresentado ao invés do ID do nó no estado do agrupamento. Será divulgado aos outros nós como um nome pré-definido opcional.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Apresentado ao invés do ID do nó no estado do agrupamento. Será actualizado para o nome que o nó divulga, se for deixado em branco.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Apresentado ao invés do ID do nó na informação de estado do agrupamento. Será divulgado aos outros nós como um nome predefinido opcional.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Apresentado ao invés do ID do nó na informação de estado do agrupamento. Será actualizado para o nome que o nó divulga, se for deixado em branco.",
"Shutdown": "Desligar",
"Simple File Versioning": "Gestão de versões de ficheiros simples",
"Source Code": "Código fonte",
"Staggered File Versioning": "Gestão de versões de ficheiros aleatória",
"Staggered File Versioning": "Gestão de versões de ficheiros escalonada",
"Start Browser": "Iniciar navegador",
"Stopped": "Parado",
"Support / Forum": "Suporte / Fórum",
@@ -108,7 +108,7 @@
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "O ID do nó fornecido não parece ser válido. Deveria ser um texto com 52 ou 56 caracteres constituídos por letras e números, com espaços e traços opcionais.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "São utilizados os seguintes intervalos: na primeira hora é guardada uma versão a cada 30 segundos, no primeiro dia é guardada uma versão a cada hora, nos primeiros 30 dias é guardada uma versão por dia e, até que atinja a idade máxima, é guardada uma versão por semana.",
"The maximum age must be a number and cannot be blank.": "A idade máxima tem que ser um número e não pode estar vazia.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Tempo máximo para manter uma versão (em dias, usando 0 para manter a versão para sempre).",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Tempo máximo para manter uma versão (em dias, use 0 para manter a versão para sempre).",
"The node ID cannot be blank.": "O ID do nó não pode estar vazio.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "O ID do nó a introduzir pode ser encontrado no diálogo \"Editar > Mostrar ID\" no outro nó. Espaços e traços são opcionais (ignorados).",
"The number of old versions to keep, per file.": "O número de versões antigas a manter, por ficheiro.",
@@ -117,7 +117,7 @@
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "O ID do repositório tem que ser um identificador curto (64 caracteres ou menos) e consiste em letras, números e os caracteres ponto (.), traço (-) e (_).",
"The repository ID must be unique.": "O ID do repositório tem que ser único.",
"The repository path cannot be blank.": "O caminho do repositório não pode estar vazio.",
"The rescan interval must be at least 5 seconds.": "O intervalo de varrimento tem que ter pelo menos 5 segundos.",
"The rescan interval must be at least 5 seconds.": "O intervalo entre verificações tem que ser pelo menos de 5 segundos.",
"Unknown": "Desconhecido",
"Up to Date": "Actualizado",
"Upgrade To {%version%}": "Actualizar para {{version}}",

View File

@@ -47,7 +47,7 @@
"Max File Change Rate (KiB/s)": "Максимальная скорость изменения файлов (KiB/s)",
"Max Outstanding Requests": "Максимальное количество исходящих запросов",
"Maximum Age": "Максимальный срок",
"Never": "Never",
"Never": "Никогда",
"No": "Нет",
"No File Versioning": "Без управления версиями файлов",
"Node ID": "ID Узла",

View File

@@ -38,7 +38,7 @@
"Idle": "Очікування",
"Ignore Permissions": "Ігнорувати права доступу до файлів",
"Keep Versions": "Зберігати версії",
"Last seen": "Last seen",
"Last seen": "З’являвся останній раз",
"Latest Release": "Останній реліз",
"Local Discovery": "Локальне виявлення",
"Local Discovery Port": "Локальний порт для виявлення",
@@ -46,10 +46,10 @@
"Master Repo": "Центральний репозиторій",
"Max File Change Rate (KiB/s)": "Максимальна швидкість змінення файлів (КіБ/с)",
"Max Outstanding Requests": "Максимальна кількість вихідних запитів",
"Maximum Age": "Maximum Age",
"Never": "Never",
"Maximum Age": "Максимальний вік",
"Never": "Ніколи",
"No": "Ні",
"No File Versioning": "No File Versioning",
"No File Versioning": "Версіонування вимкнено",
"Node ID": "ID вузла",
"Node Identification": "Ідентифікатор вузла",
"Node Name": "Назва вузла",
@@ -61,7 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Ліміт швидкості віддачі (КіБ/с)",
"Override Changes": "Перезаписати зміни",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Шлях до репозиторія на локальному комп’ютері. Буде створений, якщо такий не існує. Символ тильди (~) може бути використаний як ярлик для",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Шлях, де повинні зберігатися версії (залиште порожнім для зберігання в директорію .stversions в репозиторії)",
"Please wait": "Будь ласка, зачекайте",
"Preview Usage Report": "Попередній перегляд статистичного звіту",
"RAM Utilization": "Використання RAM",
@@ -70,7 +70,7 @@
"Repository Master": "Центральний репозиторій",
"Repository Path": "Шлях до репозиторія",
"Rescan": "Пересканувати",
"Rescan Interval": "Rescan Interval",
"Rescan Interval": "Інтервал для повторного сканування",
"Rescan Interval (s)": "Інтервал для повторного сканування (с)",
"Restart": "Перезапуск",
"Restart Needed": "Необхідний перезапуск",
@@ -84,12 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Короткий ідентифікатор репозиторія. Повинен бути однаковим на всіх вузлах кластера.",
"Show ID": "Показати ID",
"Shown instead of Node ID in the cluster status.": "Показано замість ID вузла в статусі кластера.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Показується замість ID вузла в статусі кластера. Буде розголошено іншим вузлам як опціональне типове ім’я.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Показується замість ID вузла в статусі кластера. Буде оновлено ім’ям, яке розголошене вузлом, якщо залишити порожнім.",
"Shutdown": "Вимкнути",
"Simple File Versioning": "Simple File Versioning",
"Simple File Versioning": "Просте версіонування",
"Source Code": "Сирцевий код",
"Staggered File Versioning": "Staggered File Versioning",
"Staggered File Versioning": "Поступове версіонування",
"Start Browser": "Запустити браузер",
"Stopped": "Зупинено",
"Support / Forum": "Підтримка / Форум",
@@ -106,9 +106,9 @@
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Зашифрована статистика використання відсилається щоденно. Вона використовується для того, щоб розробники розуміли, на яких платформах працює програма, розміри репозиторіїв та версії програми. Якщо набір даних, що збирається зазнає змін, ви обов’язково будете повідомлені через це діалогове вікно.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Введений ID вузла невалідний. Ідентифікатор має вигляд строки довжиною 52 символи, що містить цифри та літери, із опціональними пробілами та тире.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Введений ID вузла невалідний. Ідентифікатор має вигляд строки довжиною 52 або 56 символів, що містить цифри та літери, із опціональними пробілами та тире.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Використовуються наступні інтервали: для першої години версія зберігається кожні 30 секунд, для першого дня версія зберігається щогодини, для перших 30 днів версія зберігається кожен день, опісля, до максимального строку, версія зберігається щотижня.",
"The maximum age must be a number and cannot be blank.": "Максимальний термін повинен бути числом та не може бути пустим.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максимальний термін, щоб зберігати версію (у днях, вствновіть в 0, щоби зберігати версії назавжди).",
"The node ID cannot be blank.": "ID вузла не може бути порожнім.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "ID вузла, який необхідно додати. Може бути знайдений у вікні \"Редагувати > Показати ID\" на іншому вузлі. Пробіли та тире опціональні (вони ігноруються програмою).",
"The number of old versions to keep, per file.": "Кількість старих версій, яку необхідно зберігати для кожного файлу.",
@@ -117,7 +117,7 @@
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "ID репозиторія повинен бути коротким ідентифікатором (64 символи або менше), що містить лише цифри та літери, знак крапки (.), тире (-) та нижнього підкреслення (_).",
"The repository ID must be unique.": "ID репозиторія повинен бути унікальним.",
"The repository path cannot be blank.": "Шлях до репозиторія не може бути порожнім.",
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
"The rescan interval must be at least 5 seconds.": "Інтервал повторного сканування повинен бути принаймні 5 секунд.",
"Unknown": "Невідомо",
"Up to Date": "Актуальа версія",
"Upgrade To {%version%}": "Оновити до {{version}}",
@@ -127,8 +127,8 @@
"Use Compression": "Використовувати компресію",
"Use HTTPS for GUI": "Використовувати HTTPS для доступу до панелі управління",
"Version": "Версія",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"Versions Path": "Шлях до версій",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Версії автоматично видаляються, якщо вони старше, ніж максимальний вік, або перевищують допустиму кількість файлів за інтервал.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Коли додаєте новий вузол, пам’ятайте, що цей вузол повинен бути доданий і на іншій стороні.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Коли додаєте новий репозиторій, пам’ятайте, що ID репозиторія використовується для того, щоб зв’язувати репозиторії разом між вузлами. Назви є чутливими до регістра та повинні співпадати точно між усіма вузлами.",
"Yes": "Так",

View File

@@ -38,7 +38,7 @@
"Idle": "空闲",
"Ignore Permissions": "忽略文件权限",
"Keep Versions": "保留历史版本数量",
"Last seen": "Last seen",
"Last seen": "最后可见",
"Latest Release": "最新版本",
"Local Discovery": "在局域网上寻找节点",
"Local Discovery Port": "局域网寻找监听端口",
@@ -47,7 +47,7 @@
"Max File Change Rate (KiB/s)": "最大文件变化速率(千字节/每秒)",
"Max Outstanding Requests": "同时可响应的请求数上限",
"Maximum Age": "历史版本最长保留时间",
"Never": "Never",
"Never": "从未",
"No": "否",
"No File Versioning": "不启用版本控制",
"Node ID": "节点ID",

View File

@@ -38,7 +38,7 @@
"Idle": "閒置",
"Ignore Permissions": "忽略權限",
"Keep Versions": "保留版本數",
"Last seen": "Last seen",
"Last seen": "最後發現時間",
"Latest Release": "最新發佈",
"Local Discovery": "本地探索",
"Local Discovery Port": "本地探索連接埠",
@@ -47,7 +47,7 @@
"Max File Change Rate (KiB/s)": "最大檔案改變速率 (KiB/s)",
"Max Outstanding Requests": "最大未完成的請求",
"Maximum Age": "最長存留時間",
"Never": "Never",
"Never": "從未",
"No": "否",
"No File Versioning": "無檔案版本控制",
"Node ID": "節點識別碼",
@@ -89,7 +89,7 @@
"Shutdown": "關閉",
"Simple File Versioning": "簡單檔案版本控制",
"Source Code": "原始碼",
"Staggered File Versioning": "交錯式檔案版本控制",
"Staggered File Versioning": "變動式檔案版本控制",
"Start Browser": "啟動瀏覽器",
"Stopped": "已停止",
"Support / Forum": "支援 / 論壇",

1
gui/lang/valid-langs.js Normal file
View File

@@ -0,0 +1 @@
var validLangs = ["bg","ca","da","de","el","en","es","fr","hu","it","lt","nl","pt-PT","ru","sv","tr","uk","zh-CN","zh-TW"]

80
gui/overrides.css Normal file
View File

@@ -0,0 +1,80 @@
body {
padding-bottom: 70px;
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}
h1, h2, h3, h4, h5 {
font-family: "Raleway", "Helvetica Neue", Helvetica, Arial, sans-serif;
}
ul+h5 {
margin-top: 1.5em;
}
.text-monospace {
font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
}
.table-condensed>thead>tr>th, .table-condensed>tbody>tr>th, .table-condensed>tfoot>tr>th, .table-condensed>thead>tr>td, .table-condensed>tbody>tr>td, .table-condensed>tfoot>tr>td {
border-top: none;
}
.logo {
margin: 0;
padding: 0;
top: -5px;
position: relative;
}
.list-no-bullet {
list-style-type: none
}
.li-column {
display: inline-block;
min-width: 7em;
margin-right: 1em;
background-color: rgb(236, 240, 241);
border-radius: 3px;
padding: 1px 4px;
margin: 2px 2px;
}
.li-column span.data {
margin-left: 0.5em;
min-width: 10em;
text-align: right;
display: inline-block;
}
.ng-cloak {
display: none !important;
}
.table th {
white-space: nowrap;
font-weight: 400;
}
.table td {
padding-left: 20px !important;
}
.table td.small-data {
white-space: nowrap;
}
table.table-condensed {
table-layout: fixed;
}
table.table-condensed td {
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
@media (max-width:767px) {
table.table-condensed td {
/* for mobile phones, to allow line breaks in the long repo folder and
* shared-with columns. */
white-space: normal;
}
}

View File

Binary file not shown.

View File

@@ -1 +0,0 @@
var validLangs = ["bg","da","de","el","en","es","fr","hu","it","lt","nl","pt-PT","ru","sv","tr","uk","zh-CN","zh-TW"]

146
ignore/ignore.go Normal file
View File

@@ -0,0 +1,146 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package ignore
import (
"bufio"
"fmt"
"io"
"os"
"path/filepath"
"regexp"
"strings"
"github.com/syncthing/syncthing/fnmatch"
)
type Pattern struct {
match *regexp.Regexp
include bool
}
type Patterns []Pattern
func Load(file string) (Patterns, error) {
seen := make(map[string]bool)
return loadIgnoreFile(file, seen)
}
func Parse(r io.Reader, file string) (Patterns, error) {
seen := map[string]bool{
file: true,
}
return parseIgnoreFile(r, file, seen)
}
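// Match reports whether file matches the pattern set: the first pattern
// that matches wins and its include flag is returned, so a "!" pattern
// (include == false) explicitly exempts a file. A file matching no
// pattern yields false, i.e. not ignored.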
func (l Patterns) Match(file string) bool {
for _, pattern := range l {
if pattern.match.MatchString(file) {
return pattern.include
}
}
return false
}
func loadIgnoreFile(file string, seen map[string]bool) (Patterns, error) {
if seen[file] {
return nil, fmt.Errorf("Multiple include of ignore file %q", file)
}
seen[file] = true
fd, err := os.Open(file)
if err != nil {
return nil, err
}
defer fd.Close()
return parseIgnoreFile(fd, file, seen)
}
func parseIgnoreFile(fd io.Reader, currentFile string, seen map[string]bool) (Patterns, error) {
var exps Patterns
addPattern := func(line string) error {
include := true
if strings.HasPrefix(line, "!") {
line = line[1:]
include = false
}
if strings.HasPrefix(line, "/") {
// Pattern is rooted in the current dir only
exp, err := fnmatch.Convert(line[1:], fnmatch.FNM_PATHNAME)
if err != nil {
return fmt.Errorf("Invalid pattern %q in ignore file", line)
}
exps = append(exps, Pattern{exp, include})
} else if strings.HasPrefix(line, "**/") {
// Add the pattern as is, and without **/ so it matches in current dir
exp, err := fnmatch.Convert(line, fnmatch.FNM_PATHNAME)
if err != nil {
return fmt.Errorf("Invalid pattern %q in ignore file", line)
}
exps = append(exps, Pattern{exp, include})
exp, err = fnmatch.Convert(line[3:], fnmatch.FNM_PATHNAME)
if err != nil {
return fmt.Errorf("Invalid pattern %q in ignore file", line)
}
exps = append(exps, Pattern{exp, include})
} else if strings.HasPrefix(line, "#include ") {
includeFile := filepath.Join(filepath.Dir(currentFile), line[len("#include "):])
includes, err := loadIgnoreFile(includeFile, seen)
if err != nil {
return err
} else {
exps = append(exps, includes...)
}
} else {
// Path name or pattern, add it so it matches files both in
// current directory and subdirs.
exp, err := fnmatch.Convert(line, fnmatch.FNM_PATHNAME)
if err != nil {
return fmt.Errorf("Invalid pattern %q in ignore file", line)
}
exps = append(exps, Pattern{exp, include})
exp, err = fnmatch.Convert("**/"+line, fnmatch.FNM_PATHNAME)
if err != nil {
return fmt.Errorf("Invalid pattern %q in ignore file", line)
}
exps = append(exps, Pattern{exp, include})
}
return nil
}
scanner := bufio.NewScanner(fd)
var err error
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
switch {
case line == "":
continue
case strings.HasPrefix(line, "#"):
err = addPattern(line)
case strings.HasSuffix(line, "/**"):
err = addPattern(line)
case strings.HasSuffix(line, "/"):
err = addPattern(line)
if err == nil {
err = addPattern(line + "**")
}
default:
err = addPattern(line)
if err == nil {
err = addPattern(line + "/**")
}
}
if err != nil {
return nil, err
}
}
return exps, nil
}
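For orientation, a minimal usage sketch of the new package (the path below is hypothetical; per Match above, the first pattern that matches decides, and a leading ! marks a file as explicitly not ignored):

package main

import (
	"fmt"

	"github.com/syncthing/syncthing/ignore"
)

func main() {
	// Load the single .stignore at the repo root (hypothetical path).
	pats, err := ignore.Load("/path/to/repo/.stignore")
	if err != nil {
		// Load fails if the file is missing or an #include repeats a file.
		fmt.Println(err)
		return
	}
	// Match reports whether the relative path is ignored; the first
	// matching pattern wins, so "!" excludes must precede broader
	// patterns in the ignore file.
	fmt.Println(pats.Match("some/relative/path"))
}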

ignore/ignore_test.go Normal file

@@ -0,0 +1,108 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package ignore_test

import (
	"bytes"
	"path/filepath"
	"testing"

	"github.com/syncthing/syncthing/ignore"
)

func TestIgnore(t *testing.T) {
	pats, err := ignore.Load("testdata/.stignore")
	if err != nil {
		t.Fatal(err)
	}

	var tests = []struct {
		f string
		r bool
	}{
		{"afile", false},
		{"bfile", true},
		{"cfile", false},
		{"dfile", false},
		{"efile", true},
		{"ffile", true},

		{"dir1", false},
		{filepath.Join("dir1", "cfile"), true},
		{filepath.Join("dir1", "dfile"), false},
		{filepath.Join("dir1", "efile"), true},
		{filepath.Join("dir1", "ffile"), false},

		{"dir2", false},
		{filepath.Join("dir2", "cfile"), false},
		{filepath.Join("dir2", "dfile"), true},
		{filepath.Join("dir2", "efile"), true},
		{filepath.Join("dir2", "ffile"), false},

		{filepath.Join("dir3"), true},
		{filepath.Join("dir3", "afile"), true},
	}

	for i, tc := range tests {
		if r := pats.Match(tc.f); r != tc.r {
			t.Errorf("Incorrect ignoreFile() #%d (%s); E: %v, A: %v", i, tc.f, tc.r, r)
		}
	}
}

func TestExcludes(t *testing.T) {
	stignore := `
	!iex2
	!ign1/ex
	ign1
	i*2
	!ign2
	`
	pats, err := ignore.Parse(bytes.NewBufferString(stignore), ".stignore")
	if err != nil {
		t.Fatal(err)
	}

	var tests = []struct {
		f string
		r bool
	}{
		{"ign1", true},
		{"ign2", true},
		{"ibla2", true},
		{"iex2", false},
		{"ign1/ign", true},
		{"ign1/ex", false},
		{"ign1/iex2", false},
		{"iex2/ign", false},
		{"foo/bar/ign1", true},
		{"foo/bar/ign2", true},
		{"foo/bar/iex2", false},
	}

	for _, tc := range tests {
		if r := pats.Match(tc.f); r != tc.r {
			t.Errorf("Incorrect match for %s: %v != %v", tc.f, r, tc.r)
		}
	}
}

func TestBadPatterns(t *testing.T) {
	var badPatterns = []string{
		"[",
		"/[",
		"**/[",
		"#include nonexistent",
		"#include .stignore",
		"!#include makesnosense",
	}

	for _, pat := range badPatterns {
		parsed, err := ignore.Parse(bytes.NewBufferString(pat), ".stignore")
		if err == nil {
			t.Errorf("No error for pattern %q: %v", pat, parsed)
		}
	}
}
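These tests run against the vendored testdata files added below; with this changeset checked out, `go test ./ignore` exercises the whole package.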

ignore/testdata/.stignore vendored Normal file

@@ -0,0 +1,6 @@
#include excludes
bfile
dir1/cfile
**/efile
/ffile
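Loading this file pulls in excludes and, through it, further-excludes (both below), so the effective pattern list is bfile, dir1/cfile, **/efile, /ffile, dir2/dfile and dir3. Unrooted patterns are expanded with **/ and /** variants, which is why dir3/afile is ignored (via dir3/**) in TestIgnore above, while the rooted /ffile matches ffile at the repo root only. The seen map in loadIgnoreFile guards this chain against including the same file twice.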

ignore/testdata/dir3/cfile vendored Normal file

@@ -0,0 +1 @@
baz

ignore/testdata/dir3/dfile vendored Normal file

@@ -0,0 +1 @@
quux

ignore/testdata/excludes vendored Normal file

@@ -0,0 +1,2 @@
dir2/dfile
#include further-excludes

ignore/testdata/further-excludes vendored Normal file

@@ -0,0 +1 @@
dir3


@@ -2,7 +2,7 @@
 set -euo pipefail
 IFS=$'\n\t'
-go test -tags integration -v
 ./test-http.sh
 ./test-merge.sh
 ./test-delupd.sh
+go test -tags integration -v


@@ -19,6 +19,7 @@ import (
 	"os"
 	"os/exec"
 	"path/filepath"
+	"runtime"
 	"time"
 )
@@ -54,12 +55,16 @@ func (p *syncthingProcess) start() error {
 }
 
 func (p *syncthingProcess) stop() {
-	p.cmd.Process.Kill()
+	if runtime.GOOS != "windows" {
+		p.cmd.Process.Signal(os.Interrupt)
+	} else {
+		p.cmd.Process.Kill()
+	}
 	p.cmd.Wait()
 }
 
 func (p *syncthingProcess) peerCompletion() (map[string]int, error) {
-	resp, err := http.Get(fmt.Sprintf("http://localhost:%d/rest/debug/peerCompletion", p.port))
+	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%d/rest/debug/peerCompletion", p.port))
 	if err != nil {
 		return nil, err
 	}
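On non-Windows systems the harness now interrupts the child instead of killing it outright, so the process under test gets a chance to shut down cleanly; Go cannot deliver os.Interrupt to another process on Windows, hence the Kill fallback there. A minimal sketch of the receiving side under that assumption (hypothetical handler, not code from this changeset):

package main

import (
	"os"
	"os/signal"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt)
	<-sigs // blocks until SIGINT arrives, e.g. from stop() above
	// flush state and close connections here, then exit cleanly
	os.Exit(0)
}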


@@ -24,13 +24,6 @@ var env = []string{
 	"STTRACE=model",
 }
 
-func TestRestartBothDuringTransfer(t *testing.T) {
-	// Give the receiver some time to rot with needed files but
-	// without any peer. This triggers
-	// https://github.com/syncthing/syncthing/issues/463
-	testRestartDuringTransfer(t, true, true, 10*time.Second, 0)
-}
-
 func TestRestartReceiverDuringTransfer(t *testing.T) {
 	testRestartDuringTransfer(t, false, true, 0, 0)
 }
@@ -39,6 +32,13 @@ func TestRestartSenderDuringTransfer(t *testing.T) {
 	testRestartDuringTransfer(t, true, false, 0, 0)
 }
 
+func TestRestartSenderAndReceiverDuringTransfer(t *testing.T) {
+	// // Give the receiver some time to rot with needed files but
+	// // without any peer. This triggers
+	// // https://github.com/syncthing/syncthing/issues/463
+	testRestartDuringTransfer(t, true, true, 10*time.Second, 0)
+}
+
@@ -134,6 +134,9 @@ func testRestartDuringTransfer(t *testing.T, restartSender, restartReceiver bool
 		time.Sleep(1 * time.Second)
 	}
 
+	sender.stop()
+	receiver.stop()
+
 	log.Println("Comparing directories...")
 	err = compareDirectories("s1", "s2")
 	if err != nil {


@@ -23,7 +23,7 @@ start() {
 stop() {
 	for i in 1 2 3 ; do
-		curl -HX-API-Key:abc123 -X POST "http://localhost:808$i/rest/shutdown"
+		curl -HX-API-Key:abc123 -X POST "http://127.0.0.1:808$i/rest/shutdown"
 	done
 	exit $1
 }
@@ -31,9 +31,9 @@ stop() {
 testConvergence() {
 	while true ; do
 		sleep 5
-		s1comp=$(curl -HX-API-Key:abc123 -s "http://localhost:8082/rest/debug/peerCompletion" | ./json "$id1")
-		s2comp=$(curl -HX-API-Key:abc123 -s "http://localhost:8083/rest/debug/peerCompletion" | ./json "$id2")
-		s3comp=$(curl -HX-API-Key:abc123 -s "http://localhost:8081/rest/debug/peerCompletion" | ./json "$id3")
+		s1comp=$(curl -HX-API-Key:abc123 -s "http://127.0.0.1:8082/rest/debug/peerCompletion" | ./json "$id1")
+		s2comp=$(curl -HX-API-Key:abc123 -s "http://127.0.0.1:8083/rest/debug/peerCompletion" | ./json "$id2")
+		s3comp=$(curl -HX-API-Key:abc123 -s "http://127.0.0.1:8081/rest/debug/peerCompletion" | ./json "$id3")
 		s1comp=${s1comp:-0}
 		s2comp=${s2comp:-0}
 		s3comp=${s3comp:-0}
@@ -119,7 +119,7 @@ alterFiles() {
 	pkill -CONT syncthing
 	echo "Restarting instance 2"
-	curl -HX-API-Key:abc123 -X POST "http://localhost:8082/rest/restart"
+	curl -HX-API-Key:abc123 -X POST "http://127.0.0.1:8082/rest/restart"
 }
 
 rm -rf h?/*.idx.gz h?/index


@@ -21,7 +21,7 @@ start() {
 stop() {
 	echo "Stopping..."
 	for i in 1 2 ; do
-		curl -HX-API-Key:abc123 -X POST "http://localhost:808$i/rest/shutdown"
+		curl -HX-API-Key:abc123 -X POST "http://127.0.0.1:808$i/rest/shutdown"
 	done
 }
@@ -46,8 +46,8 @@ setup() {
 testConvergence() {
 	while true ; do
 		sleep 5
-		s1comp=$(curl -HX-API-Key:abc123 -s "http://localhost:8082/rest/debug/peerCompletion" | ./json "$id1")
-		s2comp=$(curl -HX-API-Key:abc123 -s "http://localhost:8081/rest/debug/peerCompletion" | ./json "$id2")
+		s1comp=$(curl -HX-API-Key:abc123 -s "http://127.0.0.1:8082/rest/debug/peerCompletion" | ./json "$id1")
+		s2comp=$(curl -HX-API-Key:abc123 -s "http://127.0.0.1:8081/rest/debug/peerCompletion" | ./json "$id2")
 		s1comp=${s1comp:-0}
 		s2comp=${s2comp:-0}
 		tot=$(($s1comp + $s2comp))


@@ -10,8 +10,8 @@ id3=373HSRP-QLPNLIE-JYKZVQF-P4PKZ63-R2ZE6K3-YD442U2-JHBGBQG-WWXAHAU
 stop() {
 	echo Stopping
-	curl -s -o/dev/null -HX-API-Key:abc123 -X POST http://localhost:8081/rest/shutdown
-	curl -s -o/dev/null -HX-API-Key:abc123 -X POST http://localhost:8082/rest/shutdown
+	curl -s -o/dev/null -HX-API-Key:abc123 -X POST http://127.0.0.1:8081/rest/shutdown
+	curl -s -o/dev/null -HX-API-Key:abc123 -X POST http://127.0.0.1:8082/rest/shutdown
 	exit $1
 }
@@ -26,14 +26,14 @@ syncthing -home h2 > 2.out 2>&1 &
 sleep 1
 
 echo Fetching CSRF tokens
-curl -s -o /dev/null http://testuser:testpass@localhost:8081/index.html
-curl -s -o /dev/null http://localhost:8082/index.html
+curl -s -o /dev/null http://testuser:testpass@127.0.0.1:8081/index.html
+curl -s -o /dev/null http://127.0.0.1:8082/index.html
 sleep 1
 
 echo Testing
-./http -target localhost:8081 -user testuser -pass testpass -csrf h1/csrftokens.txt || stop 1
-./http -target localhost:8081 -api abc123 || stop 1
-./http -target localhost:8082 -csrf h2/csrftokens.txt || stop 1
-./http -target localhost:8082 -api abc123 || stop 1
+./http -target 127.0.0.1:8081 -user testuser -pass testpass -csrf h1/csrftokens.txt || stop 1
+./http -target 127.0.0.1:8081 -api abc123 || stop 1
+./http -target 127.0.0.1:8082 -csrf h2/csrftokens.txt || stop 1
+./http -target 127.0.0.1:8082 -api abc123 || stop 1
 
 stop 0

Some files were not shown because too many files have changed in this diff.