Mirror of https://github.com/syncthing/syncthing.git (synced 2026-01-03 11:29:10 -05:00)
Compare commits
197 Commits
| SHA1 |
|---|
| c07b39e58b |
| 384c543ab9 |
| 592b13d7db |
| 6fdba3c02e |
| cbf758ead9 |
| d1ad778a64 |
| ce5ad296ae |
| 797e105786 |
| d17d80747e |
| 55ea207a55 |
| 6384d1e5a3 |
| aba01cdace |
| 517b7a14b4 |
| 2927de7cf9 |
| 6471ba70e4 |
| 9f9de01c51 |
| 3662decb8b |
| 583bcfb3c7 |
| c45e3fa4d5 |
| 24cbcef620 |
| e2a520ff49 |
| a5e3317e28 |
| 5638c4ba87 |
| bf7a128142 |
| c5243cd4d5 |
| db868ed29d |
| 450c7d80f8 |
| abbb001975 |
| f35d83ae48 |
| a2315dc95e |
| 4e2feb6fbc |
| 13602b6769 |
| 85dba25246 |
| 66432672b3 |
| e6d96e4c18 |
| 9812305bb9 |
| f680a63a1f |
| 781d63cb2a |
| 9ff04ee3d8 |
| 5d85a24977 |
| 1e51fca0b0 |
| 5537d53f9a |
| 50a4170541 |
| 3a8255bda1 |
| a617846f0f |
| 5772588c29 |
| 9d0dc45f74 |
| c6aefbc9a0 |
| dbbafb0cc9 |
| 6e8272f78f |
| baf8a63121 |
| fc4a76ee50 |
| 2117d1d035 |
| 0a70e0b7b6 |
| 64ffac5671 |
| ac384e8a9c |
| f97c8222c7 |
| 728289ee3a |
| 5faa16f9ee |
| 4e608b116a |
| 521b49166e |
| 8f32decf2d |
| 0d51f83d2d |
| 78c6a68db9 |
| 2949ab73e2 |
| 223741820d |
| 4b57821f52 |
| 74271a479f |
| c377177108 |
| 84eb729bd4 |
| 14aea365c5 |
| 97cb3fa5a5 |
| b5368db704 |
| 8c442b72f3 |
| f8f6791d39 |
| 0c09f077aa |
| af2831d7b6 |
| 64d5d4aec7 |
| 619a6b2adb |
| 33a26bc0cf |
| b445a7c4d3 |
| e6892d0c3e |
| 33e9a88b56 |
| df00a2251e |
| 92c44c8abe |
| 8e4f7bbd3e |
| a40217cf07 |
| e586fda5f2 |
| a58564ff88 |
| 89885b9fb9 |
| 5c7d977ae0 |
| 2cd3ee9698 |
| dd3080e018 |
| 5915e8e86a |
| 3c67c06654 |
| 76232ca573 |
| 5235e82bda |
| 10f0713257 |
| e9c7970ea4 |
| 1a6ac4aeb1 |
| f633bdddf0 |
| de0b91d157 |
| 2e77e498f5 |
| 4ac67eb1f9 |
| 2b536de37f |
| 2ffa92ba1b |
| 6ecddd8388 |
| bd2772ea4c |
| 92bf79d53b |
| eebe0eeb71 |
| 1068eaa0b9 |
| faac3e7d7c |
| dab4340207 |
| fd2567748f |
| c2daedbd11 |
| 7c604beb73 |
| 8c42aea827 |
| cf1bfdfb61 |
| 75b26513e1 |
| 6c09a77a97 |
| 67389c39fb |
| c326103e6e |
| c2120a16da |
| 258ad4352e |
| 435d3958f4 |
| b0408ef5c6 |
| 1c41b0bc2f |
| aa827f3042 |
| f44f5964bb |
| 91ba93bd7a |
| 0abe4cefb4 |
| bccd460f3b |
| d1023004e1 |
| 04a5f9cb04 |
| 9818e2b550 |
| fe43e3b89d |
| e1f1ae041f |
| 5bcf26e324 |
| 5f47a8149f |
| 00b662b53a |
| faf519ab1b |
| fce73f6f17 |
| 887890baf5 |
| c66b24feeb |
| 84c6f147ad |
| 0cdb0daa8c |
| eee702f299 |
| df65247325 |
| 1a174e75d3 |
| 9e1fd3454f |
| 3b1603cadf |
| 8803bac708 |
| 3a01eaa4a6 |
| 9f84c1c448 |
| dda0390156 |
| c74509dd5f |
| f61bbb2ff4 |
| e7f60161a3 |
| ebec4fbc24 |
| 1d4105ae3d |
| 586d49f0c3 |
| 5b0fab0697 |
| 2b3359dff3 |
| 63203aa14c |
| 716a8329c2 |
| dab0aec85e |
| 1f1ab017c0 |
| b6912ef95e |
| db54dca694 |
| 0e751b983c |
| 997b20a975 |
| 386f9c42c2 |
| cfae06db65 |
| 44260b7b5c |
| 13063b957f |
| ee05e12480 |
| 5538545fb0 |
| bc1167c2c5 |
| c57656e4c3 |
| 264400a984 |
| 408db4eb1d |
| 9347f223ef |
| 518aa30c9c |
| 6bbf1f9355 |
| b221e4d445 |
| 580fccbfca |
| 045916efcc |
| 4f92482294 |
| 2f055a75a0 |
| f0621207e3 |
| d657bc4e3d |
| a1fd07b27c |
| 52219c5f3f |
| 1a66461e07 |
| d20df12168 |
| 668b429615 |
| 7db528be39 |
1 .gitignore vendored

@@ -9,3 +9,4 @@ coverage.out
 files/pidx
 bin
 perfstats*.csv
+coverage.xml
19 .travis.yml

@@ -1,19 +0,0 @@
-language: go
-
-go:
-  - 1.3
-  - tip
-
-install:
-  - export PATH=$PATH:$HOME/gopath/bin
-  - ./build.sh setup
-
-script:
-  - ./build.sh test-cov
-
-after_success:
-  - goveralls -coverprofile=coverage.out -service=travis-ci -package=syncthing/syncthing -repotoken="$COVERALLS_TOKEN"
-
-env:
-  global:
-    secure: "TSPJDsokGCQhKLjgG3c58qHn8Qxhh4zEkWFf0XIOOY2nlDVzdgXDsC+Nq0YaP4106Ee4FgkSefsUTQV5lq/IyYW8elgqlgghjOtOi6RJa14eIS9Yy5Bkx6MXn0QfZX/lG+sy42pKSNk43y9GWx/qrt4nkfTtTvI5cXgwDGYdmX8="
@@ -34,7 +34,10 @@ latest info on Transifex.
 Please do contribute! If you want to contribute but are unsure where to
 start, the [Contributions Needed
 topic](http://discourse.syncthing.net/t/49) lists areas in need of
-attention. In general, any open issues are fair game!
+attention. In general, any open issues are fair game! Be prepared for a
+[certain amount of
+review](https://discourse.syncthing.net/t/733); it's all in the name of
+quality. :)

 ## Licensing

@@ -1,12 +1,15 @@
 Aaron Bieber <qbit@deftly.net>
+Alexander Graf <register-github@alex-graf.de>
 Andrew Dunham <andrew@du.nham.ca>
 Audrius Butkevicius <audrius.butkevicius@gmail.com>
-Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com>
+Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
+Ben Sidhom <bsidhom@gmail.com>
 Brandon Philips <brandon@ifup.org>
 Gilli Sigurdsson <gilli@vx.is>
-James Patterson <jamespatterson@operamail.com>
-Jens Diemer <github.com@jensdiemer.de>
+James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
+Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
+Marcin Dziadus <dziadus.marcin@gmail.com>
 Michael Tilli <pyfisch@gmail.com>
 Philippe Schommers <philippe@schommers.be>
 Ryan Sullivan <kayoticsully@gmail.com>
 Tully Robinson <tully@tojr.org>
6 Godeps/Godeps.json generated

@@ -37,11 +37,11 @@
 		},
 		{
 			"ImportPath": "github.com/bkaradzic/go-lz4",
-			"Rev": "77e2ba877bde9da31213bec75dbbe197fa507c21"
+			"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
 		},
 		{
 			"ImportPath": "github.com/calmh/xdr",
-			"Rev": "e1714bbe4764b15490fcc8ebd25d4bd9ea50a4b9"
+			"Rev": "a597b63b87d6140f79084c8aab214b4d533833a1"
 		},
 		{
 			"ImportPath": "github.com/juju/ratelimit",
@@ -49,7 +49,7 @@
 		},
 		{
 			"ImportPath": "github.com/syndtr/goleveldb/leveldb",
-			"Rev": "a44c00531ccc005546f20c6e00ab7bb9a8f6b2e0"
+			"Rev": "9bca75c48d6c31becfbb127702b425e7226052e3"
 		},
 		{
 			"ImportPath": "github.com/vitrun/qart/coding",
12 Godeps/_workspace/src/github.com/bkaradzic/go-lz4/lz4-example/main.go generated vendored

@@ -72,10 +72,18 @@ func main() {

 	if *decompress {
 		data, _ = ioutil.ReadAll(input)
-		data, _ = lz4.Decode(nil, data)
+		data, err = lz4.Decode(nil, data)
+		if err != nil {
+			fmt.Println("Failed to decode:", err)
+			return
+		}
 	} else {
 		data, _ = ioutil.ReadAll(input)
-		data, _ = lz4.Encode(nil, data)
+		data, err = lz4.Encode(nil, data)
+		if err != nil {
+			fmt.Println("Failed to encode:", err)
+			return
+		}
 	}

 	err = ioutil.WriteFile(args[1], data, 0644)
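The fix above stops discarding the error values returned by lz4.Encode and lz4.Decode. For reference, a minimal roundtrip sketch against the same two-function API; the sample data and the log-based error handling are illustrative, not from the repository:

package main

import (
	"bytes"
	"fmt"
	"log"

	lz4 "github.com/bkaradzic/go-lz4"
)

func main() {
	src := bytes.Repeat([]byte("syncthing "), 100) // arbitrary sample input

	// Encode allocates a destination buffer when dst is nil.
	compressed, err := lz4.Encode(nil, src)
	if err != nil {
		log.Fatal("encode: ", err)
	}

	// Decode likewise allocates when dst is nil.
	decompressed, err := lz4.Decode(nil, compressed)
	if err != nil {
		log.Fatal("decode: ", err)
	}

	fmt.Println(bytes.Equal(src, decompressed)) // true
}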
4 Godeps/_workspace/src/github.com/bkaradzic/go-lz4/writer.go generated vendored

@@ -121,7 +121,7 @@ func Encode(dst, src []byte) ([]byte, error) {
 	)

 	for {
-		if int(e.pos)+4 >= len(e.src) {
+		if int(e.pos)+12 >= len(e.src) {
 			e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
 			return e.dst[:e.dpos], nil
 		}
@@ -158,7 +158,7 @@ func Encode(dst, src []byte) ([]byte, error) {
 		ref += minMatch
 		e.anchor = e.pos

-		for int(e.pos) < len(e.src) && e.src[e.pos] == e.src[ref] {
+		for int(e.pos) < len(e.src)-5 && e.src[e.pos] == e.src[ref] {
 			e.pos++
 			ref++
 		}
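The tightened bounds reflect the LZ4 block format's end-of-block rules: the final five bytes of a block must be emitted as literals, and no new match may start within the last twelve bytes of the input. A small sketch of those two guards; the constant and function names are hypothetical, chosen for illustration:

// lastLiterals is the number of trailing input bytes the LZ4 block
// format requires to be encoded as literals (no match may cover them).
const lastLiterals = 5

// mfLimit is the minimum distance from the end of input at which a
// new match may still start; any closer and the encoder must flush
// the remainder as literals, as the first hunk above does.
const mfLimit = 12

// canStartMatch reports whether a match search may begin at pos.
func canStartMatch(pos, srcLen int) bool {
	return pos+mfLimit < srcLen
}

// canExtendMatch reports whether a running match may consume the byte
// at pos, mirroring the len(e.src)-5 bound in the second hunk above.
func canExtendMatch(pos, srcLen int) bool {
	return pos < srcLen-lastLiterals
}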
4 Godeps/_workspace/src/github.com/calmh/xdr/reader_ipdr.go generated vendored

@@ -22,7 +22,7 @@ func (r *Reader) ReadUint8() uint8 {
 	}

 	if debug {
-		dl.Printf("rd uint8=%d (0x%08x)", r.b[0], r.b[0])
+		dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
 	}
 	return r.b[0]
 }
@@ -43,7 +43,7 @@ func (r *Reader) ReadUint16() uint16 {
 	v := uint16(r.b[1]) | uint16(r.b[0])<<8

 	if debug {
-		dl.Printf("rd uint16=%d (0x%08x)", v, v)
+		dl.Printf("rd uint16=%d (0x%04x)", v, v)
 	}
 	return v
 }
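The corrected format verbs size the zero-padded hex output to the integer's width: two hex digits for a uint8, four for a uint16. A quick standalone illustration of the difference:

package main

import "fmt"

func main() {
	var b uint8 = 0x0a
	var v uint16 = 0x0a0b

	fmt.Printf("rd uint8=%d (0x%02x)\n", b, b)  // rd uint8=10 (0x0a)
	fmt.Printf("rd uint16=%d (0x%04x)\n", v, v) // rd uint16=2571 (0x0a0b)
	fmt.Printf("old width: (0x%08x)\n", b)      // pads a one-byte value to eight digits
}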
4 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/bench_test.go generated vendored

@@ -249,7 +249,9 @@ func (p *dbBench) newIter() iterator.Iterator {
 }

 func (p *dbBench) close() {
-	p.b.Log(p.db.s.tops.bpool)
+	if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
+		p.b.Log("Block pool stats: ", bp)
+	}
 	p.db.Close()
 	p.stor.Close()
 	os.RemoveAll(benchDB)
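The benchmark teardown now reads block pool statistics through the exported GetProperty accessor rather than reaching into unexported internals. A hedged sketch of the same call pattern; goleveldb's GetProperty returns a string value for a named property, and "leveldb.blockpool" is the property name used in the change above:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
)

// logBlockPool prints block pool statistics if the property is available;
// the error is deliberately non-fatal, matching the diff above.
func logBlockPool(db *leveldb.DB) {
	if bp, err := db.GetProperty("leveldb.blockpool"); err == nil {
		fmt.Println("Block pool stats:", bp)
	}
}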
148 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache.go generated vendored

@@ -11,115 +11,149 @@ import (
 	"sync/atomic"
 )

-// SetFunc used by Namespace.Get method to create a cache object. SetFunc
-// may return ok false, in that case the cache object will not be created.
-type SetFunc func() (ok bool, value interface{}, charge int, fin SetFin)
+// SetFunc is the function that will be called by Namespace.Get to create
+// a cache object, if charge is less than one than the cache object will
+// not be registered to cache tree, if value is nil then the cache object
+// will not be created.
+type SetFunc func() (charge int, value interface{})

-// SetFin will be called when corresponding cache object are released.
-type SetFin func()
+// DelFin is the function that will be called as the result of a delete operation.
+// Exist == true is indication that the object is exist, and pending == true is
+// indication of deletion already happen but haven't done yet (wait for all handles
+// to be released). And exist == false means the object doesn't exist.
+type DelFin func(exist, pending bool)

-// DelFin will be called when corresponding cache object are released.
-// DelFin will be called after SetFin. The exist is true if the corresponding
-// cache object is actually exist in the cache tree.
-type DelFin func(exist bool)
-
-// PurgeFin will be called when corresponding cache object are released.
-// PurgeFin will be called after SetFin. If PurgeFin present DelFin will
-// not be executed but passed to the PurgeFin, it is up to the caller
-// to call it or not.
-type PurgeFin func(ns, key uint64, delfin DelFin)
+// PurgeFin is the function that will be called as the result of a purge operation.
+type PurgeFin func(ns, key uint64)

 // Cache is a cache tree. A cache instance must be goroutine-safe.
 type Cache interface {
-	// SetCapacity sets cache capacity.
+	// SetCapacity sets cache tree capacity.
 	SetCapacity(capacity int)

-	// GetNamespace gets or creates a cache namespace for the given id.
+	// Capacity returns cache tree capacity.
+	Capacity() int
+
+	// Used returns used cache tree capacity.
+	Used() int
+
+	// Size returns entire alive cache objects size.
+	Size() int
+
+	// NumObjects returns number of alive objects.
+	NumObjects() int
+
+	// GetNamespace gets cache namespace with the given id.
+	// GetNamespace is never return nil.
 	GetNamespace(id uint64) Namespace

-	// Purge purges all cache namespaces, read Namespace.Purge method documentation.
+	// PurgeNamespace purges cache namespace with the given id from this cache tree.
+	// Also read Namespace.Purge.
+	PurgeNamespace(id uint64, fin PurgeFin)
+
+	// ZapNamespace detaches cache namespace with the given id from this cache tree.
+	// Also read Namespace.Zap.
+	ZapNamespace(id uint64)
+
+	// Purge purges all cache namespace from this cache tree.
+	// This is behave the same as calling Namespace.Purge method on all cache namespace.
 	Purge(fin PurgeFin)

-	// Zap zaps all cache namespaces, read Namespace.Zap method documentation.
-	Zap(closed bool)
+	// Zap detaches all cache namespace from this cache tree.
+	// This is behave the same as calling Namespace.Zap method on all cache namespace.
+	Zap()
 }

 // Namespace is a cache namespace. A namespace instance must be goroutine-safe.
 type Namespace interface {
-	// Get gets cache object for the given key. The given SetFunc (if not nil) will
-	// be called if the given key does not exist.
-	// If the given key does not exist, SetFunc is nil or SetFunc return ok false, Get
-	// will return ok false.
-	Get(key uint64, setf SetFunc) (obj Object, ok bool)
-
-	// Get deletes cache object for the given key. If exist the cache object will
-	// be deleted later when all of its handles have been released (i.e. no one use
-	// it anymore) and the given DelFin (if not nil) will finally be executed. If
-	// such cache object does not exist the given DelFin will be executed anyway.
-	//
-	// Delete returns true if such cache object exist.
+	// Get gets cache object with the given key.
+	// If cache object is not found and setf is not nil, Get will atomically creates
+	// the cache object by calling setf. Otherwise Get will returns nil.
+	//
+	// The returned cache handle should be released after use by calling Release
+	// method.
+	Get(key uint64, setf SetFunc) Handle
+
+	// Delete removes cache object with the given key from cache tree.
+	// A deleted cache object will be released as soon as all of its handles have
+	// been released.
+	// Delete only happen once, subsequent delete will consider cache object doesn't
+	// exist, even if the cache object ins't released yet.
+	//
+	// If not nil, fin will be called if the cache object doesn't exist or when
+	// finally be released.
+	//
+	// Delete returns true if such cache object exist and never been deleted.
 	Delete(key uint64, fin DelFin) bool

-	// Purge deletes all cache objects, read Delete method documentation.
+	// Purge removes all cache objects within this namespace from cache tree.
+	// This is the same as doing delete on all cache objects.
+	//
+	// If not nil, fin will be called on all cache objects when its finally be
+	// released.
 	Purge(fin PurgeFin)

-	// Zap detaches the namespace from the cache tree and delete all its cache
-	// objects. The cache objects deletion and finalizers execution are happen
-	// immediately, even if its existing handles haven't yet been released.
-	// A zapped namespace can't never be filled again.
-	// If closed is false then the Get function will always call the given SetFunc
-	// if it is not nil, but resultant of the SetFunc will not be cached.
-	Zap(closed bool)
+	// Zap detaches namespace from cache tree and release all its cache objects.
+	// A zapped namespace can never be filled again.
+	// Calling Get on zapped namespace will always return nil.
+	Zap()
 }

-// Object is a cache object.
-type Object interface {
-	// Release releases the cache object. Other methods should not be called
-	// after the cache object has been released.
+// Handle is a cache handle.
+type Handle interface {
+	// Release releases this cache handle. This method can be safely called mutiple
+	// times.
 	Release()

-	// Value returns value of the cache object.
+	// Value returns value of this cache handle.
+	// Value will returns nil after this cache handle have be released.
 	Value() interface{}
 }

+const (
+	DelNotExist = iota
+	DelExist
+	DelPendig
+)
+
 // Namespace state.
 type nsState int

 const (
 	nsEffective nsState = iota
 	nsZapped
 	nsClosed
 )

 // Node state.
 type nodeState int

 const (
-	nodeEffective nodeState = iota
+	nodeZero nodeState = iota
+	nodeEffective
 	nodeEvicted
-	nodeRemoved
+	nodeDeleted
 )

-// Fake object.
-type fakeObject struct {
+// Fake handle.
+type fakeHandle struct {
 	value interface{}
 	fin   func()
 	once  uint32
 }

-func (o *fakeObject) Value() interface{} {
-	if atomic.LoadUint32(&o.once) == 0 {
-		return o.value
+func (h *fakeHandle) Value() interface{} {
+	if atomic.LoadUint32(&h.once) == 0 {
+		return h.value
 	}
 	return nil
 }

-func (o *fakeObject) Release() {
-	if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
+func (h *fakeHandle) Release() {
+	if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
 		return
 	}
-	if o.fin != nil {
-		o.fin()
-		o.fin = nil
+	if h.fin != nil {
+		h.fin()
+		h.fin = nil
 	}
 }
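Taken together, the reworked API replaces the (Object, ok) return pair with a nilable Handle, and SetFunc now returns (charge, value), where a nil value means "do not create". A minimal caller sketch against the interfaces above, assuming the vendored import path; the key and value are arbitrary:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	c := cache.NewLRUCache(100) // capacity of 100 charge units
	ns := c.GetNamespace(1)

	// Get either finds key 42 or atomically creates it via the SetFunc,
	// which returns the object's charge and value.
	h := ns.Get(42, func() (charge int, value interface{}) {
		return 1, "hello"
	})
	if h != nil {
		fmt.Println(h.Value()) // "hello"
		h.Release()            // handles must be released after use
	}

	// A nil SetFunc turns Get into a pure lookup: a nil Handle means miss.
	if h := ns.Get(43, nil); h == nil {
		fmt.Println("miss")
	}
}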
495 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache_test.go generated vendored

@@ -7,15 +7,35 @@
 package cache

 import (
+	"fmt"
 	"math/rand"
+	"runtime"
+	"strings"
+	"sync"
+	"sync/atomic"
 	"testing"
+	"time"
 )

-func set(ns Namespace, key uint64, value interface{}, charge int, fin func()) Object {
-	obj, _ := ns.Get(key, func() (bool, interface{}, int, SetFin) {
-		return true, value, charge, fin
-	})
-	return obj
+type releaserFunc struct {
+	fn    func()
+	value interface{}
+}
+
+func (r releaserFunc) Release() {
+	if r.fn != nil {
+		r.fn()
+	}
+}
+
+func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
+	return ns.Get(key, func() (int, interface{}) {
+		if relf != nil {
+			return charge, releaserFunc{relf, value}
+		} else {
+			return charge, value
+		}
+	})
 }

 func TestCache_HitMiss(t *testing.T) {
@@ -43,29 +63,31 @@ func TestCache_HitMiss(t *testing.T) {
 			setfin++
 		}).Release()
 		for j, y := range cases {
-			r, ok := ns.Get(y.key, nil)
+			h := ns.Get(y.key, nil)
 			if j <= i {
 				// should hit
-				if !ok {
+				if h == nil {
 					t.Errorf("case '%d' iteration '%d' is miss", i, j)
-				} else if r.Value().(string) != y.value {
-					t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
+				} else {
+					if x := h.Value().(releaserFunc).value.(string); x != y.value {
+						t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
+					}
 				}
 			} else {
 				// should miss
-				if ok {
-					t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, r.Value().(string))
+				if h != nil {
+					t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
 				}
 			}
-			if ok {
-				r.Release()
+			if h != nil {
+				h.Release()
 			}
 		}
 	}

 	for i, x := range cases {
 		finalizerOk := false
-		ns.Delete(x.key, func(exist bool) {
+		ns.Delete(x.key, func(exist, pending bool) {
 			finalizerOk = true
 		})

@@ -74,22 +96,24 @@ func TestCache_HitMiss(t *testing.T) {
 		}

 		for j, y := range cases {
-			r, ok := ns.Get(y.key, nil)
+			h := ns.Get(y.key, nil)
 			if j > i {
 				// should hit
-				if !ok {
+				if h == nil {
 					t.Errorf("case '%d' iteration '%d' is miss", i, j)
-				} else if r.Value().(string) != y.value {
-					t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
+				} else {
+					if x := h.Value().(releaserFunc).value.(string); x != y.value {
+						t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
+					}
 				}
 			} else {
 				// should miss
-				if ok {
-					t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, r.Value().(string))
+				if h != nil {
+					t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
 				}
 			}
-			if ok {
-				r.Release()
+			if h != nil {
+				h.Release()
 			}
 		}
@@ -107,42 +131,42 @@ func TestLRUCache_Eviction(t *testing.T) {
 	set(ns, 3, 3, 1, nil).Release()
 	set(ns, 4, 4, 1, nil).Release()
 	set(ns, 5, 5, 1, nil).Release()
-	if r, ok := ns.Get(2, nil); ok { // 1,3,4,5,2
-		r.Release()
+	if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
+		h.Release()
 	}
 	set(ns, 9, 9, 10, nil).Release() // 5,2,9

-	for _, x := range []uint64{9, 2, 5, 1} {
-		r, ok := ns.Get(x, nil)
-		if !ok {
-			t.Errorf("miss for key '%d'", x)
+	for _, key := range []uint64{9, 2, 5, 1} {
+		h := ns.Get(key, nil)
+		if h == nil {
+			t.Errorf("miss for key '%d'", key)
 		} else {
-			if r.Value().(int) != int(x) {
-				t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+			if x := h.Value().(int); x != int(key) {
+				t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
 			}
-			r.Release()
+			h.Release()
 		}
 	}
 	o1.Release()
-	for _, x := range []uint64{1, 2, 5} {
-		r, ok := ns.Get(x, nil)
-		if !ok {
-			t.Errorf("miss for key '%d'", x)
+	for _, key := range []uint64{1, 2, 5} {
+		h := ns.Get(key, nil)
+		if h == nil {
+			t.Errorf("miss for key '%d'", key)
 		} else {
-			if r.Value().(int) != int(x) {
-				t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+			if x := h.Value().(int); x != int(key) {
+				t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
 			}
-			r.Release()
+			h.Release()
 		}
 	}
-	for _, x := range []uint64{3, 4, 9} {
-		r, ok := ns.Get(x, nil)
-		if ok {
-			t.Errorf("hit for key '%d'", x)
-			if r.Value().(int) != int(x) {
-				t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+	for _, key := range []uint64{3, 4, 9} {
+		h := ns.Get(key, nil)
+		if h != nil {
+			t.Errorf("hit for key '%d'", key)
+			if x := h.Value().(int); x != int(key) {
+				t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
 			}
-			r.Release()
+			h.Release()
 		}
 	}
 }
@@ -153,16 +177,15 @@ func TestLRUCache_SetGet(t *testing.T) {
 	for i := 0; i < 200; i++ {
 		n := uint64(rand.Intn(99999) % 20)
 		set(ns, n, n, 1, nil).Release()
-		if p, ok := ns.Get(n, nil); ok {
-			if p.Value() == nil {
+		if h := ns.Get(n, nil); h != nil {
+			if h.Value() == nil {
 				t.Errorf("key '%d' contains nil value", n)
 			} else {
-				got := p.Value().(uint64)
-				if got != n {
-					t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, got)
+				if x := h.Value().(uint64); x != n {
+					t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
 				}
 			}
-			p.Release()
+			h.Release()
 		} else {
 			t.Errorf("key '%d' doesn't exist", n)
 		}
@@ -176,31 +199,369 @@ func TestLRUCache_Purge(t *testing.T) {
 	o2 := set(ns1, 2, 2, 1, nil)
 	ns1.Purge(nil)
 	set(ns1, 3, 3, 1, nil).Release()
-	for _, x := range []uint64{1, 2, 3} {
-		r, ok := ns1.Get(x, nil)
-		if !ok {
-			t.Errorf("miss for key '%d'", x)
+	for _, key := range []uint64{1, 2, 3} {
+		h := ns1.Get(key, nil)
+		if h == nil {
+			t.Errorf("miss for key '%d'", key)
 		} else {
-			if r.Value().(int) != int(x) {
-				t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+			if x := h.Value().(int); x != int(key) {
+				t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
 			}
-			r.Release()
+			h.Release()
 		}
 	}
 	o1.Release()
 	o2.Release()
-	for _, x := range []uint64{1, 2} {
-		r, ok := ns1.Get(x, nil)
-		if ok {
-			t.Errorf("hit for key '%d'", x)
-			if r.Value().(int) != int(x) {
-				t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+	for _, key := range []uint64{1, 2} {
+		h := ns1.Get(key, nil)
+		if h != nil {
+			t.Errorf("hit for key '%d'", key)
+			if x := h.Value().(int); x != int(key) {
+				t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
 			}
-			r.Release()
+			h.Release()
 		}
 	}
 }
+
+type testingCacheObjectCounter struct {
+	created  uint32
+	released uint32
+}
+
+func (c *testingCacheObjectCounter) createOne() {
+	atomic.AddUint32(&c.created, 1)
+}
+
+func (c *testingCacheObjectCounter) releaseOne() {
+	atomic.AddUint32(&c.released, 1)
+}
+
+type testingCacheObject struct {
+	t   *testing.T
+	cnt *testingCacheObjectCounter
+
+	ns, key uint64
+
+	releaseCalled uint32
+}
+
+func (x *testingCacheObject) Release() {
+	if atomic.CompareAndSwapUint32(&x.releaseCalled, 0, 1) {
+		x.cnt.releaseOne()
+	} else {
+		x.t.Errorf("duplicate setfin NS#%d KEY#%s", x.ns, x.key)
+	}
+}
+
+func TestLRUCache_Finalizer(t *testing.T) {
+	const (
+		capacity   = 100
+		goroutines = 100
+		iterations = 10000
+		keymax     = 8000
+	)
+
+	runtime.GOMAXPROCS(runtime.NumCPU())
+	defer runtime.GOMAXPROCS(1)
+
+	wg := &sync.WaitGroup{}
+	cnt := &testingCacheObjectCounter{}
+
+	c := NewLRUCache(capacity)
+
+	type instance struct {
+		seed       int64
+		rnd        *rand.Rand
+		ns         uint64
+		effective  int32
+		handles    []Handle
+		handlesMap map[uint64]int
+
+		delete          bool
+		purge           bool
+		zap             bool
+		wantDel         int32
+		delfinCalledAll int32
+		delfinCalledEff int32
+		purgefinCalled  int32
+	}
+
+	instanceGet := func(p *instance, ns Namespace, key uint64) {
+		h := ns.Get(key, func() (charge int, value interface{}) {
+			to := &testingCacheObject{
+				t: t, cnt: cnt,
+				ns:  p.ns,
+				key: key,
+			}
+			atomic.AddInt32(&p.effective, 1)
+			cnt.createOne()
+			return 1, releaserFunc{func() {
+				to.Release()
+				atomic.AddInt32(&p.effective, -1)
+			}, to}
+		})
+		p.handles = append(p.handles, h)
+		p.handlesMap[key] = p.handlesMap[key] + 1
+	}
+	instanceRelease := func(p *instance, ns Namespace, i int) {
+		h := p.handles[i]
+		key := h.Value().(releaserFunc).value.(*testingCacheObject).key
+		if n := p.handlesMap[key]; n == 0 {
+			t.Fatal("key ref == 0")
+		} else if n > 1 {
+			p.handlesMap[key] = n - 1
+		} else {
+			delete(p.handlesMap, key)
+		}
+		h.Release()
+		p.handles = append(p.handles[:i], p.handles[i+1:]...)
+		p.handles[len(p.handles) : len(p.handles)+1][0] = nil
+	}
+
+	seeds := make([]int64, goroutines)
+	instances := make([]instance, goroutines)
+	for i := range instances {
+		p := &instances[i]
+		p.handlesMap = make(map[uint64]int)
+		if seeds[i] == 0 {
+			seeds[i] = time.Now().UnixNano()
+		}
+		p.seed = seeds[i]
+		p.rnd = rand.New(rand.NewSource(p.seed))
+		p.ns = uint64(i)
+		p.delete = i%6 == 0
+		p.purge = i%8 == 0
+		p.zap = i%12 == 0 || i%3 == 0
+	}
+
+	seedsStr := make([]string, len(seeds))
+	for i, seed := range seeds {
+		seedsStr[i] = fmt.Sprint(seed)
+	}
+	t.Logf("seeds := []int64{%s}", strings.Join(seedsStr, ", "))
+
+	// Get and release.
+	for i := range instances {
+		p := &instances[i]
+
+		wg.Add(1)
+		go func(p *instance) {
+			defer wg.Done()
+
+			ns := c.GetNamespace(p.ns)
+			for i := 0; i < iterations; i++ {
+				if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
+					instanceGet(p, ns, uint64(p.rnd.Intn(keymax)))
+				} else {
+					instanceRelease(p, ns, p.rnd.Intn(len(p.handles)))
+				}
+			}
+		}(p)
+	}
+	wg.Wait()
+
+	if used, cap := c.Used(), c.Capacity(); used > cap {
+		t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
+	}
+
+	// Check effective objects.
+	for i := range instances {
+		p := &instances[i]
+		if int(p.effective) < len(p.handlesMap) {
+			t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
+		}
+	}
+
+	if want := int(cnt.created - cnt.released); c.Size() != want {
+		t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+	}
+
+	// Delete and purge.
+	for i := range instances {
+		p := &instances[i]
+		p.wantDel = p.effective
+
+		wg.Add(1)
+		go func(p *instance) {
+			defer wg.Done()
+
+			ns := c.GetNamespace(p.ns)
+
+			if p.delete {
+				for key := uint64(0); key < keymax; key++ {
+					_, wantExist := p.handlesMap[key]
+					gotExist := ns.Delete(key, func(exist, pending bool) {
+						atomic.AddInt32(&p.delfinCalledAll, 1)
+						if exist {
+							atomic.AddInt32(&p.delfinCalledEff, 1)
+						}
+					})
+					if !gotExist && wantExist {
+						t.Errorf("delete on NS#%d KEY#%d not found", p.ns, key)
+					}
+				}
+
+				var delfinCalled int
+				for key := uint64(0); key < keymax; key++ {
+					func(key uint64) {
+						gotExist := ns.Delete(key, func(exist, pending bool) {
+							if exist && !pending {
+								t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.ns, key)
+							}
+							delfinCalled++
+						})
+						if gotExist {
+							t.Errorf("delete on NS#%d KEY#%d found", p.ns, key)
+						}
+					}(key)
+				}
+				if delfinCalled != keymax {
+					t.Errorf("(2) #%d not all delete fin called, diff=%d", p.ns, keymax-delfinCalled)
+				}
+			}
+
+			if p.purge {
+				ns.Purge(func(ns, key uint64) {
+					atomic.AddInt32(&p.purgefinCalled, 1)
+				})
+			}
+		}(p)
+	}
+	wg.Wait()
+
+	if want := int(cnt.created - cnt.released); c.Size() != want {
+		t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+	}
+
+	// Release.
+	for i := range instances {
+		p := &instances[i]
+
+		if !p.zap {
+			wg.Add(1)
+			go func(p *instance) {
+				defer wg.Done()
+
+				ns := c.GetNamespace(p.ns)
+				for i := len(p.handles) - 1; i >= 0; i-- {
+					instanceRelease(p, ns, i)
+				}
+			}(p)
+		}
+	}
+	wg.Wait()
+
+	if want := int(cnt.created - cnt.released); c.Size() != want {
+		t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+	}
+
+	// Zap.
+	for i := range instances {
+		p := &instances[i]
+
+		if p.zap {
+			wg.Add(1)
+			go func(p *instance) {
+				defer wg.Done()
+
+				ns := c.GetNamespace(p.ns)
+				ns.Zap()
+
+				p.handles = nil
+				p.handlesMap = nil
+			}(p)
+		}
+	}
+	wg.Wait()
+
+	if want := int(cnt.created - cnt.released); c.Size() != want {
+		t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+	}
+
+	if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
+		t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
+	}
+
+	c.Purge(nil)
+
+	for i := range instances {
+		p := &instances[i]
+
+		if p.delete {
+			if p.delfinCalledAll != keymax {
+				t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.ns, p.purge, p.zap, keymax-p.delfinCalledAll)
+			}
+			if p.delfinCalledEff != p.wantDel {
+				t.Errorf("#%d not all effective delete fin called, diff=%d", p.ns, p.wantDel-p.delfinCalledEff)
+			}
+			if p.purge && p.purgefinCalled > 0 {
+				t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.ns, p.delete, p.zap, p.purgefinCalled)
+			}
+		} else {
+			if p.purge {
+				if p.purgefinCalled != p.wantDel {
+					t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.ns, p.delete, p.zap, p.wantDel-p.purgefinCalled)
+				}
+			}
+		}
+	}
+
+	if cnt.created != cnt.released {
+		t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
+	}
+}
+
+func BenchmarkLRUCache_Set(b *testing.B) {
+	c := NewLRUCache(0)
+	ns := c.GetNamespace(0)
+	b.ResetTimer()
+	for i := uint64(0); i < uint64(b.N); i++ {
+		set(ns, i, "", 1, nil)
+	}
+}
+
+func BenchmarkLRUCache_Get(b *testing.B) {
+	c := NewLRUCache(0)
+	ns := c.GetNamespace(0)
+	b.ResetTimer()
+	for i := uint64(0); i < uint64(b.N); i++ {
+		set(ns, i, "", 1, nil)
+	}
+	b.ResetTimer()
+	for i := uint64(0); i < uint64(b.N); i++ {
+		ns.Get(i, nil)
+	}
+}
+
+func BenchmarkLRUCache_Get2(b *testing.B) {
+	c := NewLRUCache(0)
+	ns := c.GetNamespace(0)
+	b.ResetTimer()
+	for i := uint64(0); i < uint64(b.N); i++ {
+		set(ns, i, "", 1, nil)
+	}
+	b.ResetTimer()
+	for i := uint64(0); i < uint64(b.N); i++ {
+		ns.Get(i, func() (charge int, value interface{}) {
+			return 0, nil
+		})
+	}
+}
+
+func BenchmarkLRUCache_Release(b *testing.B) {
+	c := NewLRUCache(0)
+	ns := c.GetNamespace(0)
+	handles := make([]Handle, b.N)
+	for i := uint64(0); i < uint64(b.N); i++ {
+		handles[i] = set(ns, i, "", 1, nil)
+	}
+	b.ResetTimer()
+	for _, h := range handles {
+		h.Release()
+	}
+}

 func BenchmarkLRUCache_SetRelease(b *testing.B) {
 	capacity := b.N / 100
 	if capacity <= 0 {
@@ -210,7 +571,7 @@ func BenchmarkLRUCache_SetRelease(b *testing.B) {
 	ns := c.GetNamespace(0)
 	b.ResetTimer()
 	for i := uint64(0); i < uint64(b.N); i++ {
-		set(ns, i, nil, 1, nil).Release()
+		set(ns, i, "", 1, nil).Release()
 	}
 }
@@ -227,10 +588,10 @@ func BenchmarkLRUCache_SetReleaseTwice(b *testing.B) {
 	nb := b.N - na

 	for i := uint64(0); i < uint64(na); i++ {
-		set(ns, i, nil, 1, nil).Release()
+		set(ns, i, "", 1, nil).Release()
 	}

 	for i := uint64(0); i < uint64(nb); i++ {
-		set(ns, i, nil, 1, nil).Release()
+		set(ns, i, "", 1, nil).Release()
 	}
 }
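The tests wrap values in releaserFunc because finalization now flows through the value itself: when an object finally drops out of the cache, the LRU implementation type-asserts the stored value against util.Releaser and calls Release on it (see lru_cache.go's fin further down). A self-contained sketch of that pattern, with a local Releaser interface standing in for goleveldb's util.Releaser:

// Releaser mirrors goleveldb's util.Releaser: anything stored in the
// cache that implements it gets Release called when the object dies.
type Releaser interface {
	Release()
}

// releaserFunc pairs a value with a cleanup function, as in the test above.
type releaserFunc struct {
	fn    func()
	value interface{}
}

func (r releaserFunc) Release() {
	if r.fn != nil {
		r.fn()
	}
}

// finalize shows what the cache does when an object is finally released:
// if the stored value knows how to release itself, let it.
func finalize(value interface{}) {
	if r, ok := value.(Releaser); ok {
		r.Release()
	}
}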
246 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/empty_cache.go generated vendored

@@ -1,246 +0,0 @@
-// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
-// All rights reserved.
-//
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-package cache
-
-import (
-	"sync"
-	"sync/atomic"
-)
-
-type emptyCache struct {
-	sync.Mutex
-	table map[uint64]*emptyNS
-}
-
-// NewEmptyCache creates a new initialized empty cache.
-func NewEmptyCache() Cache {
-	return &emptyCache{
-		table: make(map[uint64]*emptyNS),
-	}
-}
-
-func (c *emptyCache) GetNamespace(id uint64) Namespace {
-	c.Lock()
-	defer c.Unlock()
-
-	if ns, ok := c.table[id]; ok {
-		return ns
-	}
-
-	ns := &emptyNS{
-		cache: c,
-		id:    id,
-		table: make(map[uint64]*emptyNode),
-	}
-	c.table[id] = ns
-	return ns
-}
-
-func (c *emptyCache) Purge(fin PurgeFin) {
-	c.Lock()
-	for _, ns := range c.table {
-		ns.purgeNB(fin)
-	}
-	c.Unlock()
-}
-
-func (c *emptyCache) Zap(closed bool) {
-	c.Lock()
-	for _, ns := range c.table {
-		ns.zapNB(closed)
-	}
-	c.table = make(map[uint64]*emptyNS)
-	c.Unlock()
-}
-
-func (*emptyCache) SetCapacity(capacity int) {}
-
-type emptyNS struct {
-	cache *emptyCache
-	id    uint64
-	table map[uint64]*emptyNode
-	state nsState
-}
-
-func (ns *emptyNS) Get(key uint64, setf SetFunc) (o Object, ok bool) {
-	ns.cache.Lock()
-
-	switch ns.state {
-	case nsZapped:
-		ns.cache.Unlock()
-		if setf == nil {
-			return
-		}
-
-		var value interface{}
-		var fin func()
-		ok, value, _, fin = setf()
-		if ok {
-			o = &fakeObject{
-				value: value,
-				fin:   fin,
-			}
-		}
-		return
-	case nsClosed:
-		ns.cache.Unlock()
-		return
-	}
-
-	n, ok := ns.table[key]
-	if ok {
-		n.ref++
-	} else {
-		if setf == nil {
-			ns.cache.Unlock()
-			return
-		}
-
-		var value interface{}
-		var fin func()
-		ok, value, _, fin = setf()
-		if !ok {
-			ns.cache.Unlock()
-			return
-		}
-
-		n = &emptyNode{
-			ns:     ns,
-			key:    key,
-			value:  value,
-			setfin: fin,
-			ref:    1,
-		}
-		ns.table[key] = n
-	}
-
-	ns.cache.Unlock()
-	o = &emptyObject{node: n}
-	return
-}
-
-func (ns *emptyNS) Delete(key uint64, fin DelFin) bool {
-	ns.cache.Lock()
-
-	if ns.state != nsEffective {
-		ns.cache.Unlock()
-		if fin != nil {
-			fin(false)
-		}
-		return false
-	}
-
-	n, ok := ns.table[key]
-	if !ok {
-		ns.cache.Unlock()
-		if fin != nil {
-			fin(false)
-		}
-		return false
-	}
-	n.delfin = fin
-	ns.cache.Unlock()
-	return true
-}
-
-func (ns *emptyNS) purgeNB(fin PurgeFin) {
-	if ns.state != nsEffective {
-		return
-	}
-	for _, n := range ns.table {
-		n.purgefin = fin
-	}
-}
-
-func (ns *emptyNS) Purge(fin PurgeFin) {
-	ns.cache.Lock()
-	ns.purgeNB(fin)
-	ns.cache.Unlock()
-}
-
-func (ns *emptyNS) zapNB(closed bool) {
-	if ns.state != nsEffective {
-		return
-	}
-	for _, n := range ns.table {
-		n.execFin()
-	}
-	if closed {
-		ns.state = nsClosed
-	} else {
-		ns.state = nsZapped
-	}
-	ns.table = nil
-}
-
-func (ns *emptyNS) Zap(closed bool) {
-	ns.cache.Lock()
-	ns.zapNB(closed)
-	delete(ns.cache.table, ns.id)
-	ns.cache.Unlock()
-}
-
-type emptyNode struct {
-	ns       *emptyNS
-	key      uint64
-	value    interface{}
-	ref      int
-	setfin   SetFin
-	delfin   DelFin
-	purgefin PurgeFin
-}
-
-func (n *emptyNode) execFin() {
-	if n.setfin != nil {
-		n.setfin()
-		n.setfin = nil
-	}
-	if n.purgefin != nil {
-		n.purgefin(n.ns.id, n.key, n.delfin)
-		n.delfin = nil
-		n.purgefin = nil
-	} else if n.delfin != nil {
-		n.delfin(true)
-		n.delfin = nil
-	}
-}
-
-func (n *emptyNode) evict() {
-	n.ns.cache.Lock()
-	n.ref--
-	if n.ref == 0 {
-		if n.ns.state == nsEffective {
-			// Remove elem.
-			delete(n.ns.table, n.key)
-			// Execute finalizer.
-			n.execFin()
-		}
-	} else if n.ref < 0 {
-		panic("leveldb/cache: emptyNode: negative node reference")
-	}
-	n.ns.cache.Unlock()
-}
-
-type emptyObject struct {
-	node *emptyNode
-	once uint32
-}
-
-func (o *emptyObject) Value() interface{} {
-	if atomic.LoadUint32(&o.once) == 0 {
-		return o.node.value
-	}
-	return nil
-}
-
-func (o *emptyObject) Release() {
-	if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
-		return
-	}
-	o.node.evict()
-	o.node = nil
-}
617
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/lru_cache.go
generated
vendored
617
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/lru_cache.go
generated
vendored
@@ -9,16 +9,24 @@ package cache
|
||||
import (
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/syndtr/goleveldb/leveldb/util"
|
||||
)
|
||||
|
||||
// The LLRB implementation were taken from https://github.com/petar/GoLLRB.
|
||||
// Which contains the following header:
|
||||
//
|
||||
// Copyright 2010 Petar Maymounkov. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// lruCache represent a LRU cache state.
|
||||
type lruCache struct {
|
||||
sync.Mutex
|
||||
|
||||
recent lruNode
|
||||
table map[uint64]*lruNs
|
||||
capacity int
|
||||
size int
|
||||
mu sync.Mutex
|
||||
recent lruNode
|
||||
table map[uint64]*lruNs
|
||||
capacity int
|
||||
used, size, alive int
|
||||
}
|
||||
|
||||
// NewLRUCache creates a new initialized LRU cache with the given capacity.
|
||||
@@ -32,245 +40,408 @@ func NewLRUCache(capacity int) Cache {
|
||||
return c
|
||||
}
|
||||
|
||||
func (c *lruCache) Capacity() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.capacity
|
||||
}
|
||||
|
||||
func (c *lruCache) Used() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.used
|
||||
}
|
||||
|
||||
func (c *lruCache) Size() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.size
|
||||
}
|
||||
|
||||
func (c *lruCache) NumObjects() int {
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
return c.alive
|
||||
}
|
||||
|
||||
// SetCapacity set cache capacity.
|
||||
func (c *lruCache) SetCapacity(capacity int) {
|
||||
c.Lock()
|
||||
c.mu.Lock()
|
||||
c.capacity = capacity
|
||||
c.evict()
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// GetNamespace return namespace object for given id.
|
||||
func (c *lruCache) GetNamespace(id uint64) Namespace {
|
||||
c.Lock()
|
||||
defer c.Unlock()
|
||||
c.mu.Lock()
|
||||
defer c.mu.Unlock()
|
||||
|
||||
if p, ok := c.table[id]; ok {
|
||||
return p
|
||||
if ns, ok := c.table[id]; ok {
|
||||
return ns
|
||||
}
|
||||
|
||||
p := &lruNs{
|
||||
lru: c,
|
||||
id: id,
|
||||
table: make(map[uint64]*lruNode),
|
||||
ns := &lruNs{lru: c, id: id}
|
||||
c.table[id] = ns
|
||||
return ns
|
||||
}
|
||||
|
||||
func (c *lruCache) ZapNamespace(id uint64) {
|
||||
c.mu.Lock()
|
||||
if ns, exist := c.table[id]; exist {
|
||||
ns.zapNB()
|
||||
delete(c.table, id)
|
||||
}
|
||||
c.table[id] = p
|
||||
return p
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
|
||||
c.mu.Lock()
|
||||
if ns, exist := c.table[id]; exist {
|
||||
ns.purgeNB(fin)
|
||||
}
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
// Purge purge entire cache.
|
||||
func (c *lruCache) Purge(fin PurgeFin) {
|
||||
c.Lock()
|
||||
c.mu.Lock()
|
||||
for _, ns := range c.table {
|
||||
ns.purgeNB(fin)
|
||||
}
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) Zap(closed bool) {
|
||||
c.Lock()
|
||||
func (c *lruCache) Zap() {
|
||||
c.mu.Lock()
|
||||
for _, ns := range c.table {
|
||||
ns.zapNB(closed)
|
||||
ns.zapNB()
|
||||
}
|
||||
c.table = make(map[uint64]*lruNs)
|
||||
c.Unlock()
|
||||
c.mu.Unlock()
|
||||
}
|
||||
|
||||
func (c *lruCache) evict() {
|
||||
top := &c.recent
|
||||
for n := c.recent.rPrev; c.size > c.capacity && n != top; {
|
||||
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
|
||||
n.state = nodeEvicted
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
c.size -= n.charge
|
||||
n.derefNB()
|
||||
c.used -= n.charge
|
||||
n = c.recent.rPrev
|
||||
}
|
||||
}
|
||||
|
||||
type lruNs struct {
|
||||
lru *lruCache
|
||||
id uint64
|
||||
table map[uint64]*lruNode
|
||||
state nsState
|
||||
lru *lruCache
|
||||
id uint64
|
||||
rbRoot *lruNode
|
||||
state nsState
|
||||
}
|
||||
|
||||
func (ns *lruNs) Get(key uint64, setf SetFunc) (o Object, ok bool) {
|
||||
lru := ns.lru
|
||||
lru.Lock()
|
||||
|
||||
switch ns.state {
|
||||
case nsZapped:
|
||||
lru.Unlock()
|
||||
if setf == nil {
|
||||
return
|
||||
}
|
||||
|
||||
var value interface{}
|
||||
var fin func()
|
||||
ok, value, _, fin = setf()
|
||||
if ok {
|
||||
o = &fakeObject{
|
||||
value: value,
|
||||
fin: fin,
|
||||
}
|
||||
}
|
||||
return
|
||||
case nsClosed:
|
||||
lru.Unlock()
|
||||
return
|
||||
func (ns *lruNs) rbGetOrCreateNode(h *lruNode, key uint64) (hn, n *lruNode) {
|
||||
if h == nil {
|
||||
n = &lruNode{ns: ns, key: key}
|
||||
return n, n
|
||||
}
|
||||
|
||||
n, ok := ns.table[key]
|
||||
if ok {
|
||||
switch n.state {
|
||||
case nodeEvicted:
|
||||
// Insert to recent list.
|
||||
n.state = nodeEffective
|
||||
n.ref++
|
||||
lru.size += n.charge
|
||||
lru.evict()
|
||||
fallthrough
|
||||
case nodeEffective:
|
||||
// Bump to front
|
||||
n.rRemove()
|
||||
n.rInsert(&lru.recent)
|
||||
if key < h.key {
|
||||
hn, n = ns.rbGetOrCreateNode(h.rbLeft, key)
|
||||
if hn != nil {
|
||||
h.rbLeft = hn
|
||||
} else {
|
||||
return nil, n
|
||||
}
|
||||
} else if key > h.key {
|
||||
hn, n = ns.rbGetOrCreateNode(h.rbRight, key)
|
||||
if hn != nil {
|
||||
h.rbRight = hn
|
||||
} else {
|
||||
return nil, n
|
||||
}
|
||||
n.ref++
|
||||
} else {
|
||||
if setf == nil {
|
||||
lru.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
var value interface{}
|
||||
var charge int
|
||||
var fin func()
|
||||
ok, value, charge, fin = setf()
|
||||
if !ok {
|
||||
lru.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
n = &lruNode{
|
||||
ns: ns,
|
||||
key: key,
|
||||
value: value,
|
||||
charge: charge,
|
||||
setfin: fin,
|
||||
ref: 2,
|
||||
}
|
||||
ns.table[key] = n
|
||||
n.rInsert(&lru.recent)
|
||||
|
||||
lru.size += charge
|
||||
lru.evict()
|
||||
return nil, h
|
||||
}
|
||||
|
||||
lru.Unlock()
|
||||
o = &lruObject{node: n}
|
||||
return
|
||||
if rbIsRed(h.rbRight) && !rbIsRed(h.rbLeft) {
|
||||
h = rbRotLeft(h)
|
||||
}
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
}
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
|
||||
rbFlip(h)
|
||||
}
|
||||
return h, n
|
||||
}
|
||||
|
||||
func (ns *lruNs) getOrCreateNode(key uint64) *lruNode {
|
||||
hn, n := ns.rbGetOrCreateNode(ns.rbRoot, key)
|
||||
if hn != nil {
|
||||
ns.rbRoot = hn
|
||||
ns.rbRoot.rbBlack = true
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbGetNode(key uint64) *lruNode {
|
||||
h := ns.rbRoot
|
||||
for h != nil {
|
||||
switch {
|
||||
case key < h.key:
|
||||
h = h.rbLeft
|
||||
case key > h.key:
|
||||
h = h.rbRight
|
||||
default:
|
||||
return h
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (ns *lruNs) getNode(key uint64) *lruNode {
|
||||
return ns.rbGetNode(key)
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbDeleteNode(h *lruNode, key uint64) *lruNode {
|
||||
if h == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if key < h.key {
|
||||
if h.rbLeft == nil { // key not present. Nothing to delete
|
||||
return h
|
||||
}
|
||||
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbMoveLeft(h)
|
||||
}
|
||||
h.rbLeft = ns.rbDeleteNode(h.rbLeft, key)
|
||||
} else {
|
||||
if rbIsRed(h.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
}
|
||||
// If @key equals @h.key and no right children at @h
|
||||
if h.key == key && h.rbRight == nil {
|
||||
return nil
|
||||
}
|
||||
if h.rbRight != nil && !rbIsRed(h.rbRight) && !rbIsRed(h.rbRight.rbLeft) {
|
||||
h = rbMoveRight(h)
|
||||
}
|
||||
// If @key equals @h.key, and (from above) 'h.Right != nil'
|
||||
if h.key == key {
|
||||
var x *lruNode
|
||||
h.rbRight, x = rbDeleteMin(h.rbRight)
|
||||
if x == nil {
|
||||
panic("logic")
|
||||
}
|
||||
x.rbLeft, h.rbLeft = h.rbLeft, nil
|
||||
x.rbRight, h.rbRight = h.rbRight, nil
|
||||
x.rbBlack = h.rbBlack
|
||||
h = x
|
||||
} else { // Else, @key is bigger than @h.key
|
||||
h.rbRight = ns.rbDeleteNode(h.rbRight, key)
|
||||
}
|
||||
}
|
||||
|
||||
return rbFixup(h)
|
||||
}
|
||||
|
||||
func (ns *lruNs) deleteNode(key uint64) {
|
||||
ns.rbRoot = ns.rbDeleteNode(ns.rbRoot, key)
|
||||
if ns.rbRoot != nil {
|
||||
ns.rbRoot.rbBlack = true
|
||||
}
|
||||
}
|
||||
|
||||
func (ns *lruNs) rbIterateNodes(h *lruNode, pivot uint64, iter func(n *lruNode) bool) bool {
|
||||
if h == nil {
|
||||
return true
|
||||
}
|
||||
if h.key >= pivot {
|
||||
if !ns.rbIterateNodes(h.rbLeft, pivot, iter) {
|
||||
return false
|
||||
}
|
||||
if !iter(h) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return ns.rbIterateNodes(h.rbRight, pivot, iter)
|
||||
}
|
||||
|
||||
func (ns *lruNs) iterateNodes(iter func(n *lruNode) bool) {
|
||||
ns.rbIterateNodes(ns.rbRoot, 0, iter)
|
||||
}
|
||||
|
||||
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
|
||||
ns.lru.mu.Lock()
|
||||
defer ns.lru.mu.Unlock()
|
||||
|
||||
if ns.state != nsEffective {
|
||||
return nil
|
||||
}
|
||||
|
||||
var n *lruNode
|
||||
if setf == nil {
|
||||
n = ns.getNode(key)
|
||||
if n == nil {
|
||||
return nil
|
||||
}
|
||||
} else {
|
||||
n = ns.getOrCreateNode(key)
|
||||
}
|
||||
switch n.state {
|
||||
case nodeZero:
|
||||
charge, value := setf()
|
||||
if value == nil {
|
||||
ns.deleteNode(key)
|
||||
return nil
|
||||
}
|
||||
if charge < 0 {
|
||||
charge = 0
|
||||
}
|
||||
|
||||
n.value = value
|
||||
n.charge = charge
|
||||
n.state = nodeEvicted
|
||||
|
||||
ns.lru.size += charge
|
||||
ns.lru.alive++
|
||||
|
||||
fallthrough
|
||||
case nodeEvicted:
|
||||
if n.charge == 0 {
|
||||
break
|
||||
}
|
||||
|
||||
// Insert to recent list.
|
||||
n.state = nodeEffective
|
||||
n.ref++
|
||||
ns.lru.used += n.charge
|
||||
ns.lru.evict()
|
||||
|
||||
fallthrough
|
||||
case nodeEffective:
|
||||
// Bump to front.
|
||||
n.rRemove()
|
||||
n.rInsert(&ns.lru.recent)
|
||||
}
|
||||
n.ref++
|
||||
|
||||
return &lruHandle{node: n}
|
||||
}
|
||||
|
||||
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
|
||||
lru := ns.lru
|
||||
lru.Lock()
|
||||
ns.lru.mu.Lock()
|
||||
defer ns.lru.mu.Unlock()
|
||||
|
||||
if ns.state != nsEffective {
|
||||
lru.Unlock()
|
||||
if fin != nil {
|
||||
fin(false)
|
||||
fin(false, false)
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
n, ok := ns.table[key]
|
||||
if !ok {
|
||||
lru.Unlock()
|
||||
n := ns.getNode(key)
|
||||
if n == nil {
|
||||
if fin != nil {
|
||||
fin(false)
|
||||
fin(false, false)
|
||||
}
|
||||
return false
|
||||
|
||||
}
|
||||
|
||||
n.delfin = fin
|
||||
switch n.state {
|
||||
case nodeRemoved:
|
||||
lru.Unlock()
|
||||
return false
|
||||
case nodeEffective:
|
||||
lru.size -= n.charge
|
||||
ns.lru.used -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.delfin = fin
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
n.derefNB()
|
||||
case nodeEvicted:
|
||||
n.state = nodeDeleted
|
||||
n.delfin = fin
|
||||
case nodeDeleted:
|
||||
if fin != nil {
|
||||
fin(true, true)
|
||||
}
|
||||
return false
|
||||
default:
|
||||
panic("invalid state")
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
|
||||
lru.Unlock()
|
||||
return true
|
||||
}
|
||||
|
||||
func (ns *lruNs) purgeNB(fin PurgeFin) {
|
||||
lru := ns.lru
|
||||
if ns.state != nsEffective {
|
||||
return
|
||||
}
|
||||
|
||||
for _, n := range ns.table {
|
||||
n.purgefin = fin
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
if ns.state == nsEffective {
|
||||
var nodes []*lruNode
|
||||
ns.iterateNodes(func(n *lruNode) bool {
|
||||
nodes = append(nodes, n)
|
||||
return true
|
||||
})
|
||||
for _, n := range nodes {
|
||||
switch n.state {
|
||||
case nodeEffective:
|
||||
ns.lru.used -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.purgefin = fin
|
||||
n.rRemove()
|
||||
n.derefNB()
|
||||
case nodeEvicted:
|
||||
n.state = nodeDeleted
|
||||
n.purgefin = fin
|
||||
case nodeDeleted:
|
||||
default:
|
||||
panic("invalid state")
|
||||
}
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
}
|
||||
}
|
||||
|
||||
func (ns *lruNs) Purge(fin PurgeFin) {
|
||||
ns.lru.Lock()
|
||||
ns.lru.mu.Lock()
|
||||
ns.purgeNB(fin)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ns *lruNs) zapNB(closed bool) {
|
||||
lru := ns.lru
|
||||
if ns.state != nsEffective {
|
||||
return
|
||||
}
|
||||
|
||||
if closed {
|
||||
ns.state = nsClosed
|
||||
} else {
|
||||
func (ns *lruNs) zapNB() {
|
||||
if ns.state == nsEffective {
|
||||
ns.state = nsZapped
|
||||
|
||||
ns.iterateNodes(func(n *lruNode) bool {
|
||||
if n.state == nodeEffective {
|
||||
ns.lru.used -= n.charge
|
||||
n.rRemove()
|
||||
}
|
||||
ns.lru.size -= n.charge
|
||||
n.state = nodeDeleted
|
||||
n.fin()
|
||||
|
||||
return true
|
||||
})
|
||||
ns.rbRoot = nil
|
||||
}
|
||||
for _, n := range ns.table {
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
n.execFin()
|
||||
}
|
||||
ns.table = nil
|
||||
}
|
||||
|
||||
func (ns *lruNs) Zap(closed bool) {
|
||||
ns.lru.Lock()
|
||||
ns.zapNB(closed)
|
||||
func (ns *lruNs) Zap() {
|
||||
ns.lru.mu.Lock()
|
||||
ns.zapNB()
|
||||
delete(ns.lru.table, ns.id)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
type lruNode struct {
|
||||
ns *lruNs
|
||||
|
||||
rNext, rPrev *lruNode
|
||||
rNext, rPrev *lruNode
|
||||
rbLeft, rbRight *lruNode
|
||||
rbBlack bool
|
||||
|
||||
key uint64
|
||||
value interface{}
|
||||
charge int
|
||||
ref int
|
||||
state nodeState
|
||||
setfin SetFin
|
||||
delfin DelFin
|
||||
purgefin PurgeFin
|
||||
}
|
||||
@@ -284,7 +455,6 @@ func (n *lruNode) rInsert(at *lruNode) {
|
||||
}
|
||||
|
||||
func (n *lruNode) rRemove() bool {
|
||||
// only remove if not already removed
|
||||
if n.rPrev == nil {
|
||||
return false
|
||||
}
|
||||
@@ -297,58 +467,147 @@ func (n *lruNode) rRemove() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (n *lruNode) execFin() {
|
||||
if n.setfin != nil {
|
||||
n.setfin()
|
||||
n.setfin = nil
|
||||
func (n *lruNode) fin() {
|
||||
if r, ok := n.value.(util.Releaser); ok {
|
||||
r.Release()
|
||||
}
|
||||
if n.purgefin != nil {
|
||||
n.purgefin(n.ns.id, n.key, n.delfin)
|
||||
n.purgefin(n.ns.id, n.key)
|
||||
n.delfin = nil
|
||||
n.purgefin = nil
|
||||
} else if n.delfin != nil {
|
||||
n.delfin(true)
|
||||
n.delfin(true, false)
|
||||
n.delfin = nil
|
||||
}
|
||||
}
|
||||
|
||||
func (n *lruNode) evictNB() {
|
||||
func (n *lruNode) derefNB() {
|
||||
n.ref--
|
||||
if n.ref == 0 {
|
||||
if n.ns.state == nsEffective {
|
||||
// remove elem
|
||||
delete(n.ns.table, n.key)
|
||||
// execute finalizer
|
||||
n.execFin()
|
||||
// Remove elemement.
|
||||
n.ns.deleteNode(n.key)
|
||||
n.ns.lru.size -= n.charge
|
||||
n.ns.lru.alive--
|
||||
n.fin()
|
||||
}
|
||||
n.value = nil
|
||||
} else if n.ref < 0 {
|
||||
panic("leveldb/cache: lruCache: negative node reference")
|
||||
}
|
||||
}
|
||||
|
||||
func (n *lruNode) evict() {
|
||||
n.ns.lru.Lock()
|
||||
n.evictNB()
|
||||
n.ns.lru.Unlock()
|
||||
func (n *lruNode) deref() {
|
||||
n.ns.lru.mu.Lock()
|
||||
n.derefNB()
|
||||
n.ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
type lruObject struct {
|
||||
type lruHandle struct {
|
||||
node *lruNode
|
||||
once uint32
|
||||
}
|
||||
|
||||
func (o *lruObject) Value() interface{} {
|
||||
if atomic.LoadUint32(&o.once) == 0 {
|
||||
return o.node.value
|
||||
func (h *lruHandle) Value() interface{} {
|
||||
if atomic.LoadUint32(&h.once) == 0 {
|
||||
return h.node.value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *lruObject) Release() {
|
||||
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
|
||||
func (h *lruHandle) Release() {
|
||||
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
|
||||
return
|
||||
}
|
||||
|
||||
o.node.evict()
|
||||
o.node = nil
|
||||
h.node.deref()
|
||||
h.node = nil
|
||||
}
|
||||
|
||||
func rbIsRed(h *lruNode) bool {
|
||||
if h == nil {
|
||||
return false
|
||||
}
|
||||
return !h.rbBlack
|
||||
}
|
||||
|
||||
func rbRotLeft(h *lruNode) *lruNode {
|
||||
x := h.rbRight
|
||||
if x.rbBlack {
|
||||
panic("rotating a black link")
|
||||
}
|
||||
h.rbRight = x.rbLeft
|
||||
x.rbLeft = h
|
||||
x.rbBlack = h.rbBlack
|
||||
h.rbBlack = false
|
||||
return x
|
||||
}
|
||||
|
||||
func rbRotRight(h *lruNode) *lruNode {
|
||||
x := h.rbLeft
|
||||
if x.rbBlack {
|
||||
panic("rotating a black link")
|
||||
}
|
||||
h.rbLeft = x.rbRight
|
||||
x.rbRight = h
|
||||
x.rbBlack = h.rbBlack
|
||||
h.rbBlack = false
|
||||
return x
|
||||
}
|
||||
|
||||
func rbFlip(h *lruNode) {
|
||||
h.rbBlack = !h.rbBlack
|
||||
h.rbLeft.rbBlack = !h.rbLeft.rbBlack
|
||||
h.rbRight.rbBlack = !h.rbRight.rbBlack
|
||||
}
|
||||
|
||||
func rbMoveLeft(h *lruNode) *lruNode {
|
||||
rbFlip(h)
|
||||
if rbIsRed(h.rbRight.rbLeft) {
|
||||
h.rbRight = rbRotRight(h.rbRight)
|
||||
h = rbRotLeft(h)
|
||||
rbFlip(h)
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
func rbMoveRight(h *lruNode) *lruNode {
|
||||
rbFlip(h)
|
||||
if rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
rbFlip(h)
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
func rbFixup(h *lruNode) *lruNode {
|
||||
if rbIsRed(h.rbRight) {
|
||||
h = rbRotLeft(h)
|
||||
}
|
||||
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbRotRight(h)
|
||||
}
|
||||
|
||||
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
|
||||
rbFlip(h)
|
||||
}
|
||||
|
||||
return h
|
||||
}
|
||||
|
||||
func rbDeleteMin(h *lruNode) (hn, n *lruNode) {
|
||||
if h == nil {
|
||||
return nil, nil
|
||||
}
|
||||
if h.rbLeft == nil {
|
||||
return nil, h
|
||||
}
|
||||
|
||||
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
|
||||
h = rbMoveLeft(h)
|
||||
}
|
||||
|
||||
h.rbLeft, n = rbDeleteMin(h.rbLeft)
|
||||
|
||||
return rbFixup(h), n
|
||||
}
|
||||
|
||||
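The rb* helpers above are Sedgewick-style left-leaning red-black (LLRB) tree primitives over the new lruNode fields. As a minimal, hypothetical illustration of how they compose (the simplified node type and names below are assumptions for this sketch, not part of the diff):

package main

import "fmt"

type node struct {
	left, right *node
	black       bool
	key         uint64
}

func isRed(h *node) bool { return h != nil && !h.black }

// rotLeft/rotRight/flip mirror rbRotLeft/rbRotRight/rbFlip above.
func rotLeft(h *node) *node {
	x := h.right
	h.right, x.left = x.left, h
	x.black, h.black = h.black, false
	return x
}

func rotRight(h *node) *node {
	x := h.left
	h.left, x.right = x.right, h
	x.black, h.black = h.black, false
	return x
}

func flip(h *node) {
	h.black = !h.black
	h.left.black = !h.left.black
	h.right.black = !h.right.black
}

// insert applies the same three fixup steps as rbFixup in the diff.
func insert(h *node, key uint64) *node {
	if h == nil {
		return &node{key: key}
	}
	if key < h.key {
		h.left = insert(h.left, key)
	} else if key > h.key {
		h.right = insert(h.right, key)
	}
	if isRed(h.right) && !isRed(h.left) {
		h = rotLeft(h)
	}
	if isRed(h.left) && isRed(h.left.left) {
		h = rotRight(h)
	}
	if isRed(h.left) && isRed(h.right) {
		flip(h)
	}
	return h
}

func main() {
	var root *node
	for _, k := range []uint64{5, 1, 9, 3, 7} {
		root = insert(root, k)
		root.black = true // the root stays black
	}
	fmt.Println(root.key) // balanced: prints 5
}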
35  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db.go  (generated, vendored)
@@ -14,6 +14,7 @@ import (
"runtime"
"strings"
"sync"
"sync/atomic"
"time"

"github.com/syndtr/goleveldb/leveldb/iterator"
@@ -35,7 +36,7 @@ type DB struct {

// MemDB.
memMu sync.RWMutex
memPool *util.Pool
memPool chan *memdb.DB
mem, frozenMem *memDB
journal *journal.Writer
journalWriter storage.Writer
@@ -47,6 +48,9 @@ type DB struct {
snapsMu sync.Mutex
snapsRoot snapshotElement

// Stats.
aliveSnaps, aliveIters int32

// Write.
writeC chan *Batch
writeMergedC chan bool
@@ -80,7 +84,7 @@ func openDB(s *session) (*DB, error) {
// Initial sequence
seq: s.stSeq,
// MemDB
memPool: util.NewPool(1),
memPool: make(chan *memdb.DB, 1),
// Write
writeC: make(chan *Batch),
writeMergedC: make(chan bool),
@@ -122,6 +126,7 @@ func openDB(s *session) (*DB, error) {
go db.tCompaction()
go db.mCompaction()
go db.jWriter()
go db.mpoolDrain()

s.logf("db@open done T·%v", time.Since(start))

@@ -568,7 +573,7 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
}
defer m.decref()

mk, mv, me := m.db.Find(ikey)
mk, mv, me := m.mdb.Find(ikey)
if me == nil {
ukey, _, t, ok := parseIkey(mk)
if ok && db.s.icmp.uCompare(ukey, key) == 0 {
@@ -655,6 +660,16 @@ func (db *DB) GetSnapshot() (*Snapshot, error) {
// Returns statistics of the underlying DB.
// leveldb.sstables
// Returns sstables list for each level.
// leveldb.blockpool
// Returns block pool stats.
// leveldb.cachedblock
// Returns size of cached block.
// leveldb.openedtables
// Returns number of opened tables.
// leveldb.alivesnaps
// Returns number of alive snapshots.
// leveldb.aliveiters
// Returns number of alive iterators.
func (db *DB) GetProperty(name string) (value string, err error) {
err = db.ok()
if err != nil {
@@ -700,6 +715,20 @@ func (db *DB) GetProperty(name string) (value string, err error) {
value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.imin, t.imax)
}
}
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
} else {
value = "<nil>"
}
case p == "openedtables":
value = fmt.Sprintf("%d", db.s.tops.cache.Size())
case p == "alivesnaps":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveSnaps))
case p == "aliveiters":
value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
default:
err = errors.New("leveldb: GetProperty: unknown property: " + name)
}
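For context, a hypothetical caller of the stats properties documented in the hunk above (import path as vendored here; the database path is illustrative):

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
)

func main() {
	db, err := leveldb.OpenFile("demo.db", nil) // illustrative path
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Query the new stats exposed by GetProperty in this change.
	for _, p := range []string{"leveldb.blockpool", "leveldb.openedtables", "leveldb.alivesnaps", "leveldb.aliveiters"} {
		v, err := db.GetProperty(p)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %s\n", p, v)
	}
}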
6  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_compaction.go  (generated, vendored)
@@ -221,10 +221,10 @@ func (db *DB) memCompaction() {
c := newCMem(db.s)
stats := new(cStatsStaging)

db.logf("mem@flush N·%d S·%s", mem.db.Len(), shortenb(mem.db.Size()))
db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))

// Don't compact empty memdb.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
db.logf("mem@flush skipping")
// drop frozen mem
db.dropFrozenMem()
@@ -242,7 +242,7 @@ func (db *DB) memCompaction() {
db.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
stats.startTimer()
defer stats.stopTimer()
return c.flush(mem.db, -1)
return c.flush(mem.mdb, -1)
}, func() error {
for _, r := range c.rec.addedTables {
db.logf("mem@flush rollback @%d", r.num)
11  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_iter.go  (generated, vendored)
@@ -10,6 +10,7 @@ import (
"errors"
"runtime"
"sync"
"sync/atomic"

"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -38,11 +39,11 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
ti := v.getIterators(slice, ro)
n := len(ti) + 2
i := make([]iterator.Iterator, 0, n)
emi := em.db.NewIterator(slice)
emi := em.mdb.NewIterator(slice)
emi.SetReleaser(&memdbReleaser{m: em})
i = append(i, emi)
if fm != nil {
fmi := fm.db.NewIterator(slice)
fmi := fm.mdb.NewIterator(slice)
fmi.SetReleaser(&memdbReleaser{m: fm})
i = append(i, fmi)
}
@@ -66,6 +67,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
}
rawIter := db.newRawIterator(islice, ro)
iter := &dbIter{
db: db,
icmp: db.s.icmp,
iter: rawIter,
seq: seq,
@@ -73,6 +75,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
key: make([]byte, 0),
value: make([]byte, 0),
}
atomic.AddInt32(&db.aliveIters, 1)
runtime.SetFinalizer(iter, (*dbIter).Release)
return iter
}
@@ -89,6 +92,7 @@ const (

// dbIter represents an iterator state over a database session.
type dbIter struct {
db *DB
icmp *iComparer
iter iterator.Iterator
seq uint64
@@ -303,6 +307,7 @@ func (i *dbIter) Release() {

if i.releaser != nil {
i.releaser.Release()
i.releaser = nil
}

i.dir = dirReleased
@@ -310,6 +315,8 @@ func (i *dbIter) Release() {
i.value = nil
i.iter.Release()
i.iter = nil
atomic.AddInt32(&i.db.aliveIters, -1)
i.db = nil
}
}
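Pairing an atomic alive-counter with runtime.SetFinalizer, as the iterator diff above does, is a general leak-accounting idiom: if a caller forgets Release, the finalizer still runs it and the counter stays honest. A minimal sketch with illustrative names:

package main

import (
	"fmt"
	"runtime"
	"sync/atomic"
)

var aliveIters int32

type iter struct{ released bool }

func newIter() *iter {
	atomic.AddInt32(&aliveIters, 1)
	it := &iter{}
	// The GC runs Release for leaked iterators.
	runtime.SetFinalizer(it, (*iter).Release)
	return it
}

func (it *iter) Release() {
	if !it.released {
		it.released = true
		runtime.SetFinalizer(it, nil)
		atomic.AddInt32(&aliveIters, -1)
	}
}

func main() {
	it := newIter()
	fmt.Println(atomic.LoadInt32(&aliveIters)) // 1
	it.Release()
	fmt.Println(atomic.LoadInt32(&aliveIters)) // 0
}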
9  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_snapshot.go  (generated, vendored)
@@ -9,6 +9,7 @@ package leveldb
import (
"runtime"
"sync"
"sync/atomic"

"github.com/syndtr/goleveldb/leveldb/iterator"
"github.com/syndtr/goleveldb/leveldb/opt"
@@ -81,7 +82,7 @@ func (db *DB) minSeq() uint64 {
type Snapshot struct {
db *DB
elem *snapshotElement
mu sync.Mutex
mu sync.RWMutex
released bool
}

@@ -91,6 +92,7 @@ func (db *DB) newSnapshot() *Snapshot {
db: db,
elem: db.acquireSnapshot(),
}
atomic.AddInt32(&db.aliveSnaps, 1)
runtime.SetFinalizer(snap, (*Snapshot).Release)
return snap
}
@@ -105,8 +107,8 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
if err != nil {
return
}
snap.mu.Lock()
defer snap.mu.Unlock()
snap.mu.RLock()
defer snap.mu.RUnlock()
if snap.released {
err = ErrSnapshotReleased
return
@@ -160,6 +162,7 @@ func (snap *Snapshot) Release() {

snap.released = true
snap.db.releaseSnapshot(snap.elem)
atomic.AddInt32(&snap.db.aliveSnaps, -1)
snap.db = nil
snap.elem = nil
}
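Switching Snapshot.mu to sync.RWMutex lets concurrent Get calls share the lock while Release still takes it exclusively. The shape of that pattern, reduced to essentials (types here are illustrative, not the library's):

package main

import (
	"errors"
	"fmt"
	"sync"
)

type snapshot struct {
	mu       sync.RWMutex
	released bool
}

func (s *snapshot) get() error {
	s.mu.RLock() // readers proceed in parallel
	defer s.mu.RUnlock()
	if s.released {
		return errors.New("snapshot released")
	}
	return nil // ...perform the read...
}

func (s *snapshot) release() {
	s.mu.Lock() // exclusive: flips the state once
	defer s.mu.Unlock()
	s.released = true
}

func main() {
	s := &snapshot{}
	fmt.Println(s.get()) // <nil>
	s.release()
	fmt.Println(s.get()) // snapshot released
}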
70  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_state.go  (generated, vendored)
@@ -8,16 +8,16 @@ package leveldb

import (
"sync/atomic"
"time"

"github.com/syndtr/goleveldb/leveldb/journal"
"github.com/syndtr/goleveldb/leveldb/memdb"
"github.com/syndtr/goleveldb/leveldb/util"
)

type memDB struct {
pool *util.Pool
db *memdb.DB
ref int32
db *DB
mdb *memdb.DB
ref int32
}

func (m *memDB) incref() {
@@ -26,7 +26,13 @@ func (m *memDB) incref() {

func (m *memDB) decref() {
if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
m.pool.Put(m)
// Only put back memdb with std capacity.
if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
m.mdb.Reset()
m.db.mpoolPut(m.mdb)
}
m.db = nil
m.mdb = nil
} else if ref < 0 {
panic("negative memdb ref")
}
@@ -42,6 +48,41 @@ func (db *DB) addSeq(delta uint64) {
atomic.AddUint64(&db.seq, delta)
}

func (db *DB) mpoolPut(mem *memdb.DB) {
defer func() {
recover()
}()
select {
case db.memPool <- mem:
default:
}
}

func (db *DB) mpoolGet() *memdb.DB {
select {
case mem := <-db.memPool:
return mem
default:
return nil
}
}

func (db *DB) mpoolDrain() {
ticker := time.NewTicker(30 * time.Second)
for {
select {
case <-ticker.C:
select {
case <-db.memPool:
default:
}
case _, _ = <-db.closeC:
close(db.memPool)
return
}
}
}

// Create a new memdb and freeze the old one; needs external synchronization.
// newMem is only called synchronously by the writer.
func (db *DB) newMem(n int) (mem *memDB, err error) {
@@ -70,18 +111,15 @@ func (db *DB) newMem(n int) (mem *memDB, err error) {
db.journalWriter = w
db.journalFile = file
db.frozenMem = db.mem
mem, ok := db.memPool.Get().(*memDB)
if ok && mem.db.Capacity() >= n {
mem.db.Reset()
mem.incref()
} else {
mem = &memDB{
pool: db.memPool,
db: memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n)),
ref: 1,
}
mdb := db.mpoolGet()
if mdb == nil || mdb.Capacity() < n {
mdb = memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n))
}
mem = &memDB{
db: db,
mdb: mdb,
ref: 2,
}
mem.incref()
db.mem = mem
// The seq is only incremented by the writer, and whoever calls newMem
// should hold the write lock, so no additional synchronization is needed here.
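The memdb pool above replaces util.Pool with a one-slot buffered channel: non-blocking sends drop extras, non-blocking receives fall back to allocation, and a ticker periodically drains the spare. A self-contained sketch of the same pattern (the buffer type is chosen purely for illustration):

package main

import (
	"bytes"
	"fmt"
)

type pool chan *bytes.Buffer

func (p pool) put(b *bytes.Buffer) {
	b.Reset()
	select {
	case p <- b: // keep at most one spare
	default: // pool full: let the GC reclaim it
	}
}

func (p pool) get() *bytes.Buffer {
	select {
	case b := <-p:
		return b // reuse the spare
	default:
		return new(bytes.Buffer) // allocate fresh
	}
}

func main() {
	p := make(pool, 1)
	b := p.get()
	b.WriteString("hello")
	p.put(b)
	fmt.Println(p.get().Len()) // 0: the spare was Reset on put
}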
12  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_test.go  (generated, vendored)
@@ -1210,7 +1210,7 @@ func TestDb_DeletionMarkers2(t *testing.T) {
}

func TestDb_CompactionTableOpenError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{MaxOpenFiles: 0})
h := newDbHarnessWopt(t, &opt.Options{CachedOpenFiles: -1})
defer h.close()

im := 10
@@ -1577,7 +1577,11 @@ func TestDb_BloomFilter(t *testing.T) {
return fmt.Sprintf("key%06d", i)
}

n := 10000
const (
n = 10000
indexOverhead = 19898
filterOverhead = 19799
)

// Populate multiple layers
for i := 0; i < n; i++ {
@@ -1601,7 +1605,7 @@ func TestDb_BloomFilter(t *testing.T) {
cnt := int(h.stor.ReadCounter())
t.Logf("lookup of %d present keys yield %d sstable I/O reads", n, cnt)

if min, max := n, n+2*n/100; cnt < min || cnt > max {
if min, max := n+indexOverhead+filterOverhead, n+indexOverhead+filterOverhead+2*n/100; cnt < min || cnt > max {
t.Errorf("num of sstable I/O reads of present keys not in range of %d - %d, got %d", min, max, cnt)
}

@@ -1612,7 +1616,7 @@ func TestDb_BloomFilter(t *testing.T) {
}
cnt = int(h.stor.ReadCounter())
t.Logf("lookup of %d missing keys yield %d sstable I/O reads", n, cnt)
if max := 3 * n / 100; cnt > max {
if max := 3*n/100 + indexOverhead + filterOverhead; cnt > max {
t.Errorf("num of sstable I/O reads of missing keys was more than %d, got %d", max, cnt)
}
14  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_write.go  (generated, vendored)
@@ -75,7 +75,7 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
mem = nil
}
}()
nn = mem.db.Free()
nn = mem.mdb.Free()
switch {
case v.tLen(0) >= kL0_SlowdownWritesTrigger && !delayed:
delayed = true
@@ -90,13 +90,13 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
}
default:
// Allow memdb to grow if it has no entry.
if mem.db.Len() == 0 {
if mem.mdb.Len() == 0 {
nn = n
} else {
mem.decref()
mem, err = db.rotateMem(n)
if err == nil {
nn = mem.db.Free()
nn = mem.mdb.Free()
} else {
nn = 0
}
@@ -190,7 +190,7 @@ drain:
return
case db.journalC <- b:
// Write into memdb
b.memReplay(mem.db)
b.memReplay(mem.mdb)
}
// Wait for journal writer
select {
@@ -200,7 +200,7 @@ drain:
case err = <-db.journalAckC:
if err != nil {
// Revert memdb if error detected
b.revertMemReplay(mem.db)
b.revertMemReplay(mem.mdb)
return
}
}
@@ -209,7 +209,7 @@ drain:
if err != nil {
return
}
b.memReplay(mem.db)
b.memReplay(mem.mdb)
}

// Set last seq number.
@@ -271,7 +271,7 @@ func (db *DB) CompactRange(r util.Range) error {
// Check for overlaps in memdb.
mem := db.getEffectiveMem()
defer mem.decref()
if isMemOverlaps(db.s.icmp, mem.db, r.Start, r.Limit) {
if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
// Memdb compaction.
if _, err := db.rotateMem(0); err != nil {
<-db.writeLockC
10  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/doc.go  (generated, vendored)
@@ -37,6 +37,16 @@
// err = iter.Error()
// ...
//
// Iterate over subset of database content with a particular prefix:
// iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
// for iter.Next() {
// // Use key/value.
// ...
// }
// iter.Release()
// err = iter.Error()
// ...
//
// Seek-then-Iterate:
//
// iter := db.NewIterator(nil, nil)
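The prefix-iteration snippet added to the package docs expands to roughly this complete program (import paths as in goleveldb; the database path is illustrative):

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	db, err := leveldb.OpenFile("demo.db", nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Visit only keys that start with "foo-".
	iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
	for iter.Next() {
		fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
	}
	iter.Release()
	if err := iter.Error(); err != nil {
		panic(err)
	}
}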
9  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/external_test.go  (generated, vendored)
@@ -21,7 +21,7 @@ var _ = testutil.Defer(func() {
BlockRestartInterval: 5,
BlockSize: 50,
Compression: opt.NoCompression,
MaxOpenFiles: 0,
CachedOpenFiles: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
}
@@ -40,18 +40,17 @@ var _ = testutil.Defer(func() {
})

Describe("read test", func() {
testutil.AllKeyValueTesting(nil, func(kv testutil.KeyValue) testutil.DB {
testutil.AllKeyValueTesting(nil, nil, func(kv testutil.KeyValue) testutil.DB {
// Building the DB.
db := newTestingDB(o, nil, nil)
kv.IterateShuffled(nil, func(i int, key, value []byte) {
err := db.TestPut(key, value)
Expect(err).NotTo(HaveOccurred())
})
testutil.Defer("teardown", func() {
db.TestClose()
})

return db
}, func(db testutil.DB) {
db.(*testingDB).TestClose()
})
})
})
2  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/memdb/memdb_test.go  (generated, vendored)
@@ -129,7 +129,7 @@ var _ = testutil.Defer(func() {
}

return db
})
}, nil, nil)
})
})
})
46  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/opt/options.go  (generated, vendored)
@@ -24,16 +24,22 @@ const (
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompressionType = SnappyCompression
DefaultMaxOpenFiles = 1000
DefaultCachedOpenFiles = 500
DefaultWriteBuffer = 4 * MiB
)

type noCache struct{}

func (noCache) SetCapacity(capacity int) {}
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap(closed bool) {}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}

var NoCache cache.Cache = noCache{}

@@ -119,6 +125,13 @@ type Options struct {
// The default value is 4KiB.
BlockSize int

// CachedOpenFiles defines the number of open files to keep around when not
// in use; the count includes files still in use.
// Set this to a negative value to disable caching.
//
// The default value is 500.
CachedOpenFiles int

// Comparer defines a total ordering over the space of []byte keys: a 'less
// than' relationship. The same comparison algorithm must be used for reads
// and writes over the lifetime of the DB.
@@ -159,13 +172,6 @@ type Options struct {
// The default value is nil.
Filter filter.Filter

// MaxOpenFiles defines the maximum number of open files to keep around
// (cached). This is not a hard limit; the actual number of open files may
// exceed the defined value.
//
// The default value is 1000.
MaxOpenFiles int

// Strict defines the DB strict level.
Strict Strict

@@ -207,6 +213,15 @@ func (o *Options) GetBlockSize() int {
return o.BlockSize
}

func (o *Options) GetCachedOpenFiles() int {
if o == nil || o.CachedOpenFiles == 0 {
return DefaultCachedOpenFiles
} else if o.CachedOpenFiles < 0 {
return 0
}
return o.CachedOpenFiles
}

func (o *Options) GetComparer() comparer.Comparer {
if o == nil || o.Comparer == nil {
return comparer.DefaultComparer
@@ -242,13 +257,6 @@ func (o *Options) GetFilter() filter.Filter {
return o.Filter
}

func (o *Options) GetMaxOpenFiles() int {
if o == nil || o.MaxOpenFiles <= 0 {
return DefaultMaxOpenFiles
}
return o.MaxOpenFiles
}

func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0
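Note the asymmetry GetCachedOpenFiles encodes above: a zero value falls back to the default of 500, while a negative value disables the cache entirely (the tests in this change use CachedOpenFiles: -1 for exactly that). A hypothetical caller, assuming the vendored opt package:

o := &opt.Options{}
o.GetCachedOpenFiles() // 500: zero means DefaultCachedOpenFiles
o = &opt.Options{CachedOpenFiles: -1}
o.GetCachedOpenFiles() // 0: caching of open table files disabled
o = &opt.Options{CachedOpenFiles: 100}
o.GetCachedOpenFiles() // 100: used as given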
2  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session.go  (generated, vendored)
@@ -58,7 +58,7 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
storLock: storLock,
}
s.setOptions(o)
s.tops = newTableOps(s, s.o.GetMaxOpenFiles())
s.tops = newTableOps(s, s.o.GetCachedOpenFiles())
s.setVersion(&version{s: s})
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock D·DeletedEntry L·Level Q·SeqNum T·TimeElapsed")
return
71  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table.go  (generated, vendored)
@@ -297,7 +297,7 @@ func (t *tOps) create() (*tWriter, error) {
func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
w, err := t.create()
if err != nil {
return f, n, err
return
}

defer func() {
@@ -322,33 +322,24 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
return
}

// Opens table. It returns a cache object, which should
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (c cache.Object, err error) {
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
num := f.file.Num()
c, ok := t.cacheNS.Get(num, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return
return 0, nil
}

o := t.s.o

var cacheNS cache.Namespace
if bc := o.GetBlockCache(); bc != nil {
cacheNS = bc.GetNamespace(num)
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
}

ok = true
value = table.NewReader(r, int64(f.size), cacheNS, t.bpool, o)
charge = 1
fin = func() {
r.Close()
}
return
return 1, table.NewReader(r, int64(f.size), bcacheNS, t.bpool, t.s.o)
})
if !ok && err == nil {
if ch == nil && err == nil {
err = ErrClosed
}
return
@@ -357,34 +348,33 @@ func (t *tOps) open(f *tFile) (c cache.Object, err error) {
// Finds key/value pair whose key is greater than or equal to the
// given key.
func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return nil, nil, err
}
defer c.Release()
return c.Value().(*table.Reader).Find(key, ro)
defer ch.Release()
return ch.Value().(*table.Reader).Find(key, ro)
}

// Returns approximate offset of the given key.
func (t *tOps) offsetOf(f *tFile, key []byte) (offset uint64, err error) {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return
}
_offset, err := c.Value().(*table.Reader).OffsetOf(key)
offset = uint64(_offset)
c.Release()
return
defer ch.Release()
offset_, err := ch.Value().(*table.Reader).OffsetOf(key)
return uint64(offset_), err
}

// Creates an iterator from the given table.
func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
c, err := t.open(f)
ch, err := t.open(f)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := c.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(c)
iter := ch.Value().(*table.Reader).NewIterator(slice, ro)
iter.SetReleaser(ch)
return iter
}

@@ -392,14 +382,16 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one uses the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist bool) {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.GetNamespace(num).Zap(false)
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
}
})
}
@@ -407,7 +399,8 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still used.
func (t *tOps) close() {
t.cache.Zap(true)
t.cache.Zap()
t.bpool.Close()
}

// Creates new initialized table ops instance.
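The new cache API used by tOps.open above changes Namespace.Get from an ok/value/fin callback to a setFunc returning (charge, value), and hands back a Handle that the caller must Release; a nil Handle with no error means the cache is closed. A toy, runnable sketch of that contract (the types here are stand-ins, not the library's declarations):

package main

import "fmt"

type handle struct{ value interface{} }

func (h *handle) Value() interface{} { return h.value }
func (h *handle) Release()           {} // a real cache would deref the node here

// namespace caches values produced by setFunc on a miss.
type namespace map[uint64]interface{}

func (ns namespace) Get(key uint64, setFunc func() (int, interface{})) *handle {
	if v, ok := ns[key]; ok {
		return &handle{v}
	}
	charge, v := setFunc()
	if v == nil {
		return nil // setFunc declined (e.g. the open failed): nothing is cached
	}
	_ = charge // a real cache would account this against its capacity
	ns[key] = v
	return &handle{v}
}

func main() {
	ns := namespace{}
	h := ns.Get(7, func() (int, interface{}) { return 1, "table-7" })
	defer h.Release()
	fmt.Println(h.Value()) // table-7
}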
4  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/block_test.go  (generated, vendored)
@@ -40,7 +40,7 @@ var _ = testutil.Defer(func() {
data := bw.buf.Bytes()
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
return &block{
cmp: comparer.DefaultComparer,
tr: &Reader{cmp: comparer.DefaultComparer},
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
@@ -59,7 +59,7 @@ var _ = testutil.Defer(func() {
// Make block.
br := Build(kv, restartInterval)
// Do testing.
testutil.KeyValueTesting(nil, br, kv.Clone())
testutil.KeyValueTesting(nil, kv.Clone(), br, nil, nil)
}

Describe(Text(), Test)
253  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/reader.go  (generated, vendored)
@@ -37,7 +37,7 @@ func max(x, y int) int {
}

type block struct {
cmp comparer.BasicComparer
tr *Reader
data []byte
restartsLen int
restartsOffset int
@@ -46,31 +46,25 @@ type block struct {
}

func (b *block) seek(rstart, rlimit int, key []byte) (index, offset int, err error) {
n := b.restartsOffset
data := b.data
cmp := b.cmp

index = sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
offset := int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):]))
offset += 1 // shared always zero, since this is a restart point
v1, n1 := binary.Uvarint(data[offset:]) // key length
_, n2 := binary.Uvarint(data[offset+n1:]) // value length
offset := int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):]))
offset += 1 // shared always zero, since this is a restart point
v1, n1 := binary.Uvarint(b.data[offset:]) // key length
_, n2 := binary.Uvarint(b.data[offset+n1:]) // value length
m := offset + n1 + n2
return cmp.Compare(data[m:m+int(v1)], key) > 0
return b.tr.cmp.Compare(b.data[m:m+int(v1)], key) > 0
}) + rstart - 1
if index < rstart {
// The smallest key is greater-than key sought.
index = rstart
}
offset = int(binary.LittleEndian.Uint32(data[n+4*index:]))
offset = int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*index:]))
return
}

func (b *block) restartIndex(rstart, rlimit, offset int) int {
n := b.restartsOffset
data := b.data
return sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
return int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):])) > offset
return int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):])) > offset
}) + rstart - 1
}

@@ -139,6 +133,14 @@ func (b *block) newIterator(slice *util.Range, inclLimit bool, cache util.Releas
return bi
}

func (b *block) Release() {
if b.tr.bpool != nil {
b.tr.bpool.Put(b.data)
}
b.tr = nil
b.data = nil
}

type dir int

const (
@@ -261,7 +263,7 @@ func (i *blockIter) Seek(key []byte) bool {
i.dir = dirForward
}
for i.Next() {
if i.block.cmp.Compare(i.key, key) >= 0 {
if i.block.tr.cmp.Compare(i.key, key) >= 0 {
return true
}
}
@@ -438,6 +440,7 @@ func (i *blockIter) Value() []byte {

func (i *blockIter) Release() {
if i.dir > dirReleased {
i.block = nil
i.prevNode = nil
i.prevKeys = nil
i.key = nil
@@ -469,7 +472,7 @@ func (i *blockIter) Error() error {
}

type filterBlock struct {
filter filter.Filter
tr *Reader
data []byte
oOffset int
baseLg uint
@@ -483,7 +486,7 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
n := int(binary.LittleEndian.Uint32(o))
m := int(binary.LittleEndian.Uint32(o[4:]))
if n < m && m <= b.oOffset {
return b.filter.Contains(b.data[n:m], key)
return b.tr.filter.Contains(b.data[n:m], key)
} else if n == m {
return false
}
@@ -491,10 +494,17 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
return true
}

func (b *filterBlock) Release() {
if b.tr.bpool != nil {
b.tr.bpool.Put(b.data)
}
b.tr = nil
b.data = nil
}

type indexIter struct {
blockIter
tableReader *Reader
slice *util.Range
*blockIter
slice *util.Range
// Options
checksum bool
fillCache bool
@@ -513,7 +523,7 @@ func (i *indexIter) Get() iterator.Iterator {
if i.slice != nil && (i.blockIter.isFirst() || i.blockIter.isLast()) {
slice = i.slice
}
return i.tableReader.getDataIter(dataBH, slice, i.checksum, i.fillCache)
return i.blockIter.block.tr.getDataIter(dataBH, slice, i.checksum, i.fillCache)
}

// Reader is a table reader.
@@ -528,9 +538,8 @@ type Reader struct {
checksum bool
strictIter bool

dataEnd int64
indexBlock *block
filterBlock *filterBlock
dataEnd int64
indexBH, filterBH blockHandle
}

func verifyChecksum(data []byte) bool {
@@ -547,6 +556,7 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
}
if checksum || r.checksum {
if !verifyChecksum(data) {
r.bpool.Put(data)
return nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
}
}
@@ -565,6 +575,7 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
return nil, err
}
default:
r.bpool.Put(data)
return nil, fmt.Errorf("leveldb/table: Reader: unknown block compression type: %d", data[bh.length])
}
return data, nil
@@ -577,7 +588,7 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
}
restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
b := &block{
cmp: r.cmp,
tr: r,
data: data,
restartsLen: restartsLen,
restartsOffset: len(data) - (restartsLen+1)*4,
@@ -586,7 +597,44 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
return b, nil
}

func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterBlock, error) {
func (r *Reader) readBlockCached(bh blockHandle, checksum, fillCache bool) (*block, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *block
b, err = r.readBlock(bh, checksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
if ch != nil {
b, ok := ch.Value().(*block)
if !ok {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
}
if !b.checksum && (r.checksum || checksum) {
if !verifyChecksum(b.data) {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
}
b.checksum = true
}
return b, ch, err
} else if err != nil {
return nil, nil, err
}
}

b, err := r.readBlock(bh, checksum)
return b, b, err
}

func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
data, err := r.readRawBlock(bh, true)
if err != nil {
return nil, err
@@ -601,7 +649,7 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
return nil, errors.New("leveldb/table: Reader: invalid filter block (invalid offset)")
}
b := &filterBlock{
filter: filter,
tr: r,
data: data,
oOffset: oOffset,
baseLg: uint(data[n-1]),
@@ -610,60 +658,42 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
return b, nil
}

type releaseBlock struct {
r *Reader
b *block
}

func (r releaseBlock) Release() {
if r.b.data != nil {
r.r.bpool.Put(r.b.data)
r.b.data = nil
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
if ch != nil {
b, ok := ch.Value().(*filterBlock)
if !ok {
ch.Release()
return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
}
return b, ch, err
} else if err != nil {
return nil, nil, err
}
}

b, err := r.readFilterBlock(bh)
return b, b, err
}

func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fillCache bool) iterator.Iterator {
if r.cache != nil {
// Get/set block cache.
var err error
cache, ok := r.cache.Get(dataBH.offset, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
if !fillCache {
return
}
var dataBlock *block
dataBlock, err = r.readBlock(dataBH, checksum)
if err == nil {
ok = true
value = dataBlock
charge = int(dataBH.length)
fin = func() {
r.bpool.Put(dataBlock.data)
dataBlock.data = nil
}
}
return
})
if err != nil {
return iterator.NewEmptyIterator(err)
}
if ok {
dataBlock := cache.Value().(*block)
if !dataBlock.checksum && (r.checksum || checksum) {
if !verifyChecksum(dataBlock.data) {
return iterator.NewEmptyIterator(errors.New("leveldb/table: Reader: invalid block (checksum mismatch)"))
}
dataBlock.checksum = true
}
iter := dataBlock.newIterator(slice, false, cache)
return iter
}
}
dataBlock, err := r.readBlock(dataBH, checksum)
b, rel, err := r.readBlockCached(dataBH, checksum, fillCache)
if err != nil {
return iterator.NewEmptyIterator(err)
}
iter := dataBlock.newIterator(slice, false, releaseBlock{r, dataBlock})
return iter
return b.newIterator(slice, false, rel)
}

// NewIterator creates an iterator from the table.
@@ -677,18 +707,21 @@ func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fi
// when not used.
//
// Also read Iterator documentation of the leveldb/iterator package.

func (r *Reader) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
if r.err != nil {
return iterator.NewEmptyIterator(r.err)
}

fillCache := !ro.GetDontFillCache()
b, rel, err := r.readBlockCached(r.indexBH, true, fillCache)
if err != nil {
return iterator.NewEmptyIterator(err)
}
index := &indexIter{
blockIter: *r.indexBlock.newIterator(slice, true, nil),
tableReader: r,
slice: slice,
checksum: ro.GetStrict(opt.StrictBlockChecksum),
fillCache: !ro.GetDontFillCache(),
blockIter: b.newIterator(slice, true, rel),
slice: slice,
checksum: ro.GetStrict(opt.StrictBlockChecksum),
fillCache: !ro.GetDontFillCache(),
}
return iterator.NewIndexedIterator(index, r.strictIter || ro.GetStrict(opt.StrictIterator), false)
}
@@ -705,7 +738,13 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
return
}

index := r.indexBlock.newIterator(nil, true, nil)
indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
if err != nil {
return
}
defer rel.Release()

index := indexBlock.newIterator(nil, true, nil)
defer index.Release()
if !index.Seek(key) {
err = index.Error()
@@ -719,9 +758,15 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
err = errors.New("leveldb/table: Reader: invalid table (bad data block handle)")
return
}
if r.filterBlock != nil && !r.filterBlock.contains(dataBH.offset, key) {
err = ErrNotFound
return
if r.filter != nil {
filterBlock, rel, ferr := r.readFilterBlockCached(r.filterBH, true)
if ferr == nil {
if !filterBlock.contains(dataBH.offset, key) {
rel.Release()
return nil, nil, ErrNotFound
}
rel.Release()
}
}
data := r.getDataIter(dataBH, nil, ro.GetStrict(opt.StrictBlockChecksum), !ro.GetDontFillCache())
defer data.Release()
@@ -768,7 +813,13 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
return
}

index := r.indexBlock.newIterator(nil, true, nil)
indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
if err != nil {
return
}
defer rel.Release()

index := indexBlock.newIterator(nil, true, nil)
defer index.Release()
if index.Seek(key) {
dataBH, n := decodeBlockHandle(index.Value())
@@ -786,6 +837,17 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
return
}

// Release implements util.Releaser.
// It also closes the file if it is an io.Closer.
func (r *Reader) Release() {
if closer, ok := r.reader.(io.Closer); ok {
closer.Close()
}
r.reader = nil
r.cache = nil
r.bpool = nil
}

// NewReader creates a new initialized table reader for the file.
// The cache and bpool are optional and can be nil.
//
@@ -825,16 +887,11 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.Buf
return r
}
// Decode the index block handle.
indexBH, n := decodeBlockHandle(footer[n:])
r.indexBH, n = decodeBlockHandle(footer[n:])
if n == 0 {
r.err = errors.New("leveldb/table: Reader: invalid table (bad index block handle)")
return r
}
// Read index block.
r.indexBlock, r.err = r.readBlock(indexBH, true)
if r.err != nil {
return r
}
// Read metaindex block.
metaBlock, err := r.readBlock(metaBH, true)
if err != nil {
@@ -850,32 +907,28 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.Buf
continue
}
fn := key[7:]
var filter filter.Filter
if f0 := o.GetFilter(); f0 != nil && f0.Name() == fn {
filter = f0
r.filter = f0
} else {
for _, f0 := range o.GetAltFilters() {
if f0.Name() == fn {
filter = f0
r.filter = f0
break
}
}
}
if filter != nil {
if r.filter != nil {
filterBH, n := decodeBlockHandle(metaIter.Value())
if n == 0 {
continue
}
r.filterBH = filterBH
// Update data end.
r.dataEnd = int64(filterBH.offset)
filterBlock, err := r.readFilterBlock(filterBH, filter)
if err != nil {
continue
}
r.filterBlock = filterBlock
break
}
}
metaIter.Release()
metaBlock.Release()
return r
}
8  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/table_test.go  (generated, vendored)
@@ -104,14 +104,16 @@ var _ = testutil.Defer(func() {
if body != nil {
body(db.(tableWrapper).Reader)
}
testutil.KeyValueTesting(nil, db, *kv)
testutil.KeyValueTesting(nil, *kv, db, nil, nil)
}
}

testutil.AllKeyValueTesting(nil, Build)
testutil.AllKeyValueTesting(nil, Build, nil, nil)
Describe("with one key per block", Test(testutil.KeyValue_Generate(nil, 9, 1, 10, 512, 512), func(r *Reader) {
It("should have correct blocks number", func() {
Expect(r.indexBlock.restartsLen).Should(Equal(9))
indexBlock, err := r.readBlock(r.indexBH, true)
Expect(err).To(BeNil())
Expect(indexBlock.restartsLen).Should(Equal(9))
})
}))
})
100  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/testutil/kvtest.go  (generated, vendored)
@@ -16,13 +16,22 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)

func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
func KeyValueTesting(rnd *rand.Rand, kv KeyValue, p DB, setup func(KeyValue) DB, teardown func(DB)) {
if rnd == nil {
rnd = NewRand()
}

if db, ok := p.(Find); ok {
It("Should find all keys with Find", func() {
if p == nil {
BeforeEach(func() {
p = setup(kv)
})
AfterEach(func() {
teardown(p)
})
}

It("Should find all keys with Find", func() {
if db, ok := p.(Find); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, value := kv.IndexInexact(i)

@@ -38,9 +47,11 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
Expect(rkey).Should(Equal(key))
Expect(rvalue).Should(Equal(value), "Value for key %q (%q)", key_, key)
})
})
}
})

It("Should return error if the key is not present", func() {
It("Should return error if the key is not present", func() {
if db, ok := p.(Find); ok {
var key []byte
if kv.Len() > 0 {
key_, _ := kv.Index(kv.Len() - 1)
@@ -49,11 +60,11 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
rkey, _, err := db.TestFind(key)
Expect(err).Should(HaveOccurred(), "Find for key %q yield key %q", key, rkey)
Expect(err).Should(Equal(util.ErrNotFound))
})
}
}
})

if db, ok := p.(Get); ok {
It("Should only find exact key with Get", func() {
It("Should only find exact key with Get", func() {
if db, ok := p.(Get); ok {
ShuffledIndex(nil, kv.Len(), 1, func(i int) {
key_, key, value := kv.IndexInexact(i)

@@ -69,11 +80,11 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {
Expect(err).Should(Equal(util.ErrNotFound))
}
})
})
}
}
})

if db, ok := p.(NewIterator); ok {
TestIter := func(r *util.Range, _kv KeyValue) {
TestIter := func(r *util.Range, _kv KeyValue) {
if db, ok := p.(NewIterator); ok {
iter := db.TestNewIterator(r)
Expect(iter.Error()).ShouldNot(HaveOccurred())

@@ -84,45 +95,48 @@ func KeyValueTesting(rnd *rand.Rand, p DB, kv KeyValue) {

DoIteratorTesting(&t)
}
}

It("Should iterates and seeks correctly", func(done Done) {
TestIter(nil, kv.Clone())
done <- true
}, 3.0)
It("Should iterates and seeks correctly", func(done Done) {
TestIter(nil, kv.Clone())
done <- true
}, 3.0)

RandomIndex(rnd, kv.Len(), kv.Len(), func(i int) {
type slice struct {
r *util.Range
start, limit int
}
RandomIndex(rnd, kv.Len(), kv.Len(), func(i int) {
type slice struct {
r *util.Range
start, limit int
}

key_, _, _ := kv.IndexInexact(i)
for _, x := range []slice{
{&util.Range{Start: key_, Limit: nil}, i, kv.Len()},
{&util.Range{Start: nil, Limit: key_}, 0, i},
} {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", x.start, x.limit), func(done Done) {
TestIter(x.r, kv.Slice(x.start, x.limit))
done <- true
}, 3.0)
}
})

RandomRange(rnd, kv.Len(), kv.Len(), func(start, limit int) {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", start, limit), func(done Done) {
r := kv.Range(start, limit)
TestIter(&r, kv.Slice(start, limit))
key_, _, _ := kv.IndexInexact(i)
for _, x := range []slice{
{&util.Range{Start: key_, Limit: nil}, i, kv.Len()},
{&util.Range{Start: nil, Limit: key_}, 0, i},
} {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", x.start, x.limit), func(done Done) {
TestIter(x.r, kv.Slice(x.start, x.limit))
done <- true
}, 3.0)
})
}
}
})

RandomRange(rnd, kv.Len(), kv.Len(), func(start, limit int) {
It(fmt.Sprintf("Should iterates and seeks correctly of a slice %d .. %d", start, limit), func(done Done) {
r := kv.Range(start, limit)
TestIter(&r, kv.Slice(start, limit))
done <- true
}, 3.0)
})
}

func AllKeyValueTesting(rnd *rand.Rand, body func(kv KeyValue) DB) {
func AllKeyValueTesting(rnd *rand.Rand, body, setup func(KeyValue) DB, teardown func(DB)) {
Test := func(kv *KeyValue) func() {
return func() {
db := body(*kv)
KeyValueTesting(rnd, db, *kv)
var p DB
if body != nil {
p = body(*kv)
}
KeyValueTesting(rnd, *kv, p, setup, teardown)
}
}
145  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go  (generated, vendored)
@@ -4,14 +4,12 @@
|
||||
// Use of this source code is governed by a BSD-style license that can be
|
||||
// found in the LICENSE file.
|
||||
|
||||
// +build go1.3
|
||||
|
||||
package util
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
type buffer struct {
|
||||
@@ -21,13 +19,21 @@ type buffer struct {
|
||||
|
||||
// BufferPool is a 'buffer pool'.
|
||||
type BufferPool struct {
|
||||
pool [4]sync.Pool
|
||||
size [3]uint32
|
||||
sizeMiss [3]uint32
|
||||
baseline0 int
|
||||
baseline1 int
|
||||
baseline2 int
|
||||
pool [6]chan []byte
|
||||
size [5]uint32
|
||||
sizeMiss [5]uint32
|
||||
sizeHalf [5]uint32
|
||||
baseline [4]int
|
||||
baselinex0 int
|
||||
baselinex1 int
|
||||
baseline0 int
|
||||
baseline1 int
|
||||
baseline2 int
|
||||
close chan struct{}
|
||||
|
||||
get uint32
|
||||
put uint32
|
||||
half uint32
|
||||
less uint32
|
||||
equal uint32
|
||||
greater uint32
|
||||
@@ -35,34 +41,47 @@ type BufferPool struct {
|
||||
}
|
||||
|
||||
func (p *BufferPool) poolNum(n int) int {
|
||||
switch {
|
||||
case n <= p.baseline0:
|
||||
if n <= p.baseline0 && n > p.baseline0/2 {
|
||||
return 0
|
||||
case n <= p.baseline1:
|
||||
return 1
|
||||
case n <= p.baseline2:
|
||||
return 2
|
||||
default:
|
||||
return 3
|
||||
}
|
||||
for i, x := range p.baseline {
|
||||
if n <= x {
|
||||
return i + 1
|
||||
}
|
||||
}
|
||||
return len(p.baseline) + 1
|
||||
}
|
||||
|
||||
// Get returns buffer with length of n.
|
||||
func (p *BufferPool) Get(n int) []byte {
|
||||
if poolNum := p.poolNum(n); poolNum == 0 {
|
||||
atomic.AddUint32(&p.get, 1)
|
||||
|
||||
poolNum := p.poolNum(n)
|
||||
pool := p.pool[poolNum]
|
||||
if poolNum == 0 {
|
||||
// Fast path.
|
||||
if b, ok := p.pool[0].Get().([]byte); ok {
|
||||
select {
|
||||
case b := <-pool:
|
||||
switch {
|
||||
case cap(b) > n:
|
||||
atomic.AddUint32(&p.less, 1)
|
||||
return b[:n]
|
||||
			if cap(b)-n >= n {
				atomic.AddUint32(&p.half, 1)
				select {
				case pool <- b:
				default:
				}
				return make([]byte, n)
			} else {
				atomic.AddUint32(&p.less, 1)
				return b[:n]
			}
		case cap(b) == n:
			atomic.AddUint32(&p.equal, 1)
			return b[:n]
		default:
			panic("not reached")
			atomic.AddUint32(&p.greater, 1)
		}
	} else {
	default:
		atomic.AddUint32(&p.miss, 1)
	}

@@ -70,21 +89,40 @@ func (p *BufferPool) Get(n int) []byte {
	} else {
		sizePtr := &p.size[poolNum-1]

		if b, ok := p.pool[poolNum].Get().([]byte); ok {
		select {
		case b := <-pool:
			switch {
			case cap(b) > n:
				atomic.AddUint32(&p.less, 1)
				return b[:n]
				if cap(b)-n >= n {
					atomic.AddUint32(&p.half, 1)
					sizeHalfPtr := &p.sizeHalf[poolNum-1]
					if atomic.AddUint32(sizeHalfPtr, 1) == 20 {
						atomic.StoreUint32(sizePtr, uint32(cap(b)/2))
						atomic.StoreUint32(sizeHalfPtr, 0)
					} else {
						select {
						case pool <- b:
						default:
						}
					}
					return make([]byte, n)
				} else {
					atomic.AddUint32(&p.less, 1)
					return b[:n]
				}
			case cap(b) == n:
				atomic.AddUint32(&p.equal, 1)
				return b[:n]
			default:
				atomic.AddUint32(&p.greater, 1)
				if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
					p.pool[poolNum].Put(b)
					select {
					case pool <- b:
					default:
					}
				}
			}
		} else {
		default:
			atomic.AddUint32(&p.miss, 1)
		}

@@ -107,12 +145,46 @@ func (p *BufferPool) Get(n int) []byte {

// Put adds given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
	p.pool[p.poolNum(cap(b))].Put(b)
	atomic.AddUint32(&p.put, 1)

	pool := p.pool[p.poolNum(cap(b))]
	select {
	case pool <- b:
	default:
	}
}

func (p *BufferPool) Close() {
	select {
	case p.close <- struct{}{}:
	default:
	}
}

func (p *BufferPool) String() string {
	return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v L·%d E·%d G·%d M·%d}",
		p.baseline0, p.size, p.sizeMiss, p.less, p.equal, p.greater, p.miss)
	return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v Zh·%v G·%d P·%d H·%d <·%d =·%d >·%d M·%d}",
		p.baseline0, p.size, p.sizeMiss, p.sizeHalf, p.get, p.put, p.half, p.less, p.equal, p.greater, p.miss)
}

func (p *BufferPool) drain() {
	ticker := time.NewTicker(2 * time.Second)
	for {
		select {
		case <-ticker.C:
			for _, ch := range p.pool {
				select {
				case <-ch:
				default:
				}
			}
		case <-p.close:
			for _, ch := range p.pool {
				close(ch)
			}
			return
		}
	}
}

// NewBufferPool creates a new initialized 'buffer pool'.
@@ -120,9 +192,14 @@ func NewBufferPool(baseline int) *BufferPool {
	if baseline <= 0 {
		panic("baseline can't be <= 0")
	}
	return &BufferPool{
	p := &BufferPool{
		baseline0: baseline,
		baseline1: baseline * 2,
		baseline2: baseline * 4,
		baseline:  [...]int{baseline / 4, baseline / 2, baseline * 2, baseline * 4},
		close:     make(chan struct{}, 1),
	}
	for i, cap := range []int{2, 2, 4, 4, 2, 1} {
		p.pool[i] = make(chan []byte, cap)
	}
	go p.drain()
	return p
}

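The channel-backed pool above never blocks on either path: Get tries a non-blocking receive and falls back to make, Put tries a non-blocking send and otherwise lets the buffer fall to the GC, and drain empties one buffer per bucket every two seconds. A minimal, self-contained sketch of that core pattern follows; the baselines, counters, and drain loop are stripped out, so the names here are illustrative rather than goleveldb's API.

package main

import "fmt"

// pool holds reusable byte slices; a full channel simply drops the buffer,
// an empty channel simply allocates. Neither path ever blocks.
type pool struct {
	ch chan []byte
}

func (p *pool) get(n int) []byte {
	select {
	case b := <-p.ch:
		if cap(b) >= n {
			return b[:n] // reuse a pooled buffer
		}
	default:
	}
	return make([]byte, n) // miss: allocate fresh
}

func (p *pool) put(b []byte) {
	select {
	case p.ch <- b:
	default: // pool full: let the GC take it
	}
}

func main() {
	p := &pool{ch: make(chan []byte, 4)}
	b := p.get(1024)
	p.put(b)
	fmt.Println(len(p.get(512))) // 512, served from the pooled buffer
}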
143 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/buffer_pool_legacy.go generated vendored
@@ -1,143 +0,0 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build !go1.3

package util

import (
	"fmt"
	"sync/atomic"
)

type buffer struct {
	b    []byte
	miss int
}

// BufferPool is a 'buffer pool'.
type BufferPool struct {
	pool      [4]chan []byte
	size      [3]uint32
	sizeMiss  [3]uint32
	baseline0 int
	baseline1 int
	baseline2 int

	less    uint32
	equal   uint32
	greater uint32
	miss    uint32
}

func (p *BufferPool) poolNum(n int) int {
	switch {
	case n <= p.baseline0:
		return 0
	case n <= p.baseline1:
		return 1
	case n <= p.baseline2:
		return 2
	default:
		return 3
	}
}

// Get returns buffer with length of n.
func (p *BufferPool) Get(n int) []byte {
	poolNum := p.poolNum(n)
	pool := p.pool[poolNum]
	if poolNum == 0 {
		// Fast path.
		select {
		case b := <-pool:
			switch {
			case cap(b) > n:
				atomic.AddUint32(&p.less, 1)
				return b[:n]
			case cap(b) == n:
				atomic.AddUint32(&p.equal, 1)
				return b[:n]
			default:
				panic("not reached")
			}
		default:
			atomic.AddUint32(&p.miss, 1)
		}

		return make([]byte, n, p.baseline0)
	} else {
		sizePtr := &p.size[poolNum-1]

		select {
		case b := <-pool:
			switch {
			case cap(b) > n:
				atomic.AddUint32(&p.less, 1)
				return b[:n]
			case cap(b) == n:
				atomic.AddUint32(&p.equal, 1)
				return b[:n]
			default:
				atomic.AddUint32(&p.greater, 1)
				if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
					select {
					case pool <- b:
					default:
					}
				}
			}
		default:
			atomic.AddUint32(&p.miss, 1)
		}

		if size := atomic.LoadUint32(sizePtr); uint32(n) > size {
			if size == 0 {
				atomic.CompareAndSwapUint32(sizePtr, 0, uint32(n))
			} else {
				sizeMissPtr := &p.sizeMiss[poolNum-1]
				if atomic.AddUint32(sizeMissPtr, 1) == 20 {
					atomic.StoreUint32(sizePtr, uint32(n))
					atomic.StoreUint32(sizeMissPtr, 0)
				}
			}
			return make([]byte, n)
		} else {
			return make([]byte, n, size)
		}
	}
}

// Put adds given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
	pool := p.pool[p.poolNum(cap(b))]
	select {
	case pool <- b:
	default:
	}
}

func (p *BufferPool) String() string {
	return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v L·%d E·%d G·%d M·%d}",
		p.baseline0, p.size, p.sizeMiss, p.less, p.equal, p.greater, p.miss)
}

// NewBufferPool creates a new initialized 'buffer pool'.
func NewBufferPool(baseline int) *BufferPool {
	if baseline <= 0 {
		panic("baseline can't be <= 0")
	}
	p := &BufferPool{
		baseline0: baseline,
		baseline1: baseline * 2,
		baseline2: baseline * 4,
	}
	for i, cap := range []int{6, 6, 3, 1} {
		p.pool[i] = make(chan []byte, cap)
	}
	return p
}
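The legacy pool above routes a request to one of four buckets by comparing the size against the baseline, twice the baseline, and four times the baseline. A standalone sketch of the bucketing, using an arbitrary baseline of 1024 for the worked values:

package main

import "fmt"

// poolNum mirrors the bucketing in the legacy pool: sizes up to the
// baseline go to pool 0, up to 2x to pool 1, up to 4x to pool 2,
// everything larger to pool 3.
func poolNum(baseline, n int) int {
	switch {
	case n <= baseline:
		return 0
	case n <= baseline*2:
		return 1
	case n <= baseline*4:
		return 2
	default:
		return 3
	}
}

func main() {
	for _, n := range []int{512, 1024, 1500, 4096, 9000} {
		fmt.Printf("n=%d -> pool %d\n", n, poolNum(1024, n))
	}
	// n=512 -> pool 0, n=1024 -> pool 0, n=1500 -> pool 1,
	// n=4096 -> pool 2, n=9000 -> pool 3
}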
16 Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/range.go generated vendored
@@ -14,3 +14,19 @@ type Range struct {
	// Limit of the key range, not included in the range.
	Limit []byte
}

// BytesPrefix returns a key range that satisfies the given prefix.
// This is only applicable for the standard 'bytes comparer'.
func BytesPrefix(prefix []byte) *Range {
	var limit []byte
	for i := len(prefix) - 1; i >= 0; i-- {
		c := prefix[i]
		if c < 0xff {
			limit = make([]byte, i+1)
			copy(limit, prefix)
			limit[i] = c + 1
			break
		}
	}
	return &Range{prefix, limit}
}

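BytesPrefix builds the half-open range covering every key with the given prefix by copying the prefix and incrementing its last byte below 0xff. A hedged usage sketch with goleveldb's iterator API; the database path is illustrative:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	db, err := leveldb.OpenFile("/tmp/example.db", nil) // illustrative path
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Iterate every key beginning with "user/": the computed range is
	// ["user/", "user0"), since '/'+1 == '0'.
	it := db.NewIterator(util.BytesPrefix([]byte("user/")), nil)
	defer it.Release()
	for it.Next() {
		fmt.Printf("%s = %s\n", it.Key(), it.Value())
	}
}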
13 README.md
@@ -1,9 +1,7 @@
syncthing
=========

[](http://build.syncthing.net/job/syncthing/lastSuccessfulBuild/artifact/)
[](https://travis-ci.org/syncthing/syncthing)
[](https://coveralls.io/r/syncthing/syncthing?branch=master)
[](http://build.syncthing.net/job/syncthing/lastBuild/)
[](http://godoc.org/github.com/syncthing/syncthing)
[](http://opensource.org/licenses/MIT)

@@ -27,7 +25,14 @@ for incompatible changes.
Getting Started
---------------

Take a look at the [getting started guide](http://discourse.syncthing.net/t/getting-started/46).
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).

Building
--------

Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
that describes it for both Unix and Windows.

Signed Releases
---------------

23 auto/auto_test.go Normal file
@@ -0,0 +1,23 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package auto_test

import (
	"bytes"
	"testing"

	"github.com/syncthing/syncthing/auto"
)

func TestAssets(t *testing.T) {
	assets := auto.Assets()
	idx, ok := assets["index.html"]
	if !ok {
		t.Fatal("No index.html in compiled in assets")
	}
	if !bytes.Contains(idx, []byte("<html")) {
		t.Fatal("No html in index.html")
	}
}
@@ -1,2 +1,6 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// Package auto contains auto generated files for web assets.
package auto

File diff suppressed because one or more lines are too long
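The change above turns the generated package-level Assets map into an Assets() function returning a freshly decompressed map, which is what the test and the GUI code consume. A sketch of serving one asset from it; the handler wiring and listen address are illustrative:

package main

import (
	"net/http"

	"github.com/syncthing/syncthing/auto"
)

func main() {
	assets := auto.Assets() // decompressed name -> contents map

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		bs, ok := assets["index.html"]
		if !ok {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "text/html")
		w.Write(bs)
	})
	http.ListenAndServe("127.0.0.1:8080", nil) // illustrative address
}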
105 beacon/beacon.go
@@ -11,52 +11,17 @@ type recv struct {
	src net.Addr
}

type dst struct {
	intf string
	conn *net.UDPConn
type Interface interface {
	Send(data []byte)
	Recv() ([]byte, net.Addr)
}

type Beacon struct {
	conn   *net.UDPConn
	port   int
	conns  []dst
	inbox  chan []byte
	outbox chan recv
}

func New(port int) (*Beacon, error) {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
	if err != nil {
		return nil, err
	}
	b := &Beacon{
		conn:   conn,
		port:   port,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go b.reader()
	go b.writer()

	return b, nil
}

func (b *Beacon) Send(data []byte) {
	b.inbox <- data
}

func (b *Beacon) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Beacon) reader() {
func genericReader(conn *net.UDPConn, outbox chan<- recv) {
	bs := make([]byte, 65536)
	for {
		n, addr, err := b.conn.ReadFrom(bs)
		n, addr, err := conn.ReadFrom(bs)
		if err != nil {
			l.Warnln("Beacon read:", err)
			l.Warnln("multicast read:", err)
			return
		}
		if debug {
@@ -66,7 +31,7 @@ func (b *Beacon) reader() {
		c := make([]byte, n)
		copy(c, bs)
		select {
		case b.outbox <- recv{c, addr}:
		case outbox <- recv{c, addr}:
		default:
			if debug {
				l.Debugln("dropping message")
@@ -74,59 +39,3 @@ func (b *Beacon) reader() {
		}
	}
}

func (b *Beacon) writer() {
	for bs := range b.inbox {

		addrs, err := net.InterfaceAddrs()
		if err != nil {
			l.Warnln("Beacon: interface addresses:", err)
			continue
		}

		var dsts []net.IP
		for _, addr := range addrs {
			if iaddr, ok := addr.(*net.IPNet); ok && iaddr.IP.IsGlobalUnicast() && iaddr.IP.To4() != nil {
				baddr := bcast(iaddr)
				dsts = append(dsts, baddr.IP)
			}
		}

		if len(dsts) == 0 {
			// Fall back to the general IPv4 broadcast address
			dsts = append(dsts, net.IP{0xff, 0xff, 0xff, 0xff})
		}

		if debug {
			l.Debugln("addresses:", dsts)
		}

		for _, ip := range dsts {
			dst := &net.UDPAddr{IP: ip, Port: b.port}

			_, err := b.conn.WriteTo(bs, dst)
			if err != nil {
				if debug {
					l.Debugln(err)
				}
			} else if debug {
				l.Debugf("sent %d bytes to %s", len(bs), dst)
			}
		}
	}
}

func bcast(ip *net.IPNet) *net.IPNet {
	var bc = &net.IPNet{}
	bc.IP = make([]byte, len(ip.IP))
	copy(bc.IP, ip.IP)
	bc.Mask = ip.Mask

	offset := len(bc.IP) - len(bc.Mask)
	for i := range bc.IP {
		if i-offset >= 0 {
			bc.IP[i] = ip.IP[i] | ^ip.Mask[i-offset]
		}
	}
	return bc
}

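With the refactor above, broadcast and multicast beacons both satisfy the new Interface, so callers can be written against it. A sketch of a discovery-style loop; the payload, period, and port are illustrative, not syncthing's actual protocol:

package main

import (
	"fmt"
	"time"

	"github.com/syncthing/syncthing/beacon"
)

func announce(b beacon.Interface) {
	// Periodically announce ourselves; the payload format is illustrative.
	go func() {
		for {
			b.Send([]byte("hello"))
			time.Sleep(30 * time.Second)
		}
	}()
	for {
		data, src := b.Recv() // blocks until a packet arrives
		fmt.Printf("%d bytes from %s\n", len(data), src)
	}
}

func main() {
	bc, err := beacon.NewBroadcast(21025) // illustrative port
	if err != nil {
		panic(err)
	}
	announce(bc)
}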
97 beacon/broadcast.go Normal file
@@ -0,0 +1,97 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package beacon

import "net"

type Broadcast struct {
	conn   *net.UDPConn
	port   int
	inbox  chan []byte
	outbox chan recv
}

func NewBroadcast(port int) (*Broadcast, error) {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
	if err != nil {
		return nil, err
	}
	b := &Broadcast{
		conn:   conn,
		port:   port,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go genericReader(b.conn, b.outbox)
	go b.writer()

	return b, nil
}

func (b *Broadcast) Send(data []byte) {
	b.inbox <- data
}

func (b *Broadcast) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Broadcast) writer() {
	for bs := range b.inbox {

		addrs, err := net.InterfaceAddrs()
		if err != nil {
			l.Warnln("Broadcast: interface addresses:", err)
			continue
		}

		var dsts []net.IP
		for _, addr := range addrs {
			if iaddr, ok := addr.(*net.IPNet); ok && iaddr.IP.IsGlobalUnicast() && iaddr.IP.To4() != nil {
				baddr := bcast(iaddr)
				dsts = append(dsts, baddr.IP)
			}
		}

		if len(dsts) == 0 {
			// Fall back to the general IPv4 broadcast address
			dsts = append(dsts, net.IP{0xff, 0xff, 0xff, 0xff})
		}

		if debug {
			l.Debugln("addresses:", dsts)
		}

		for _, ip := range dsts {
			dst := &net.UDPAddr{IP: ip, Port: b.port}

			_, err := b.conn.WriteTo(bs, dst)
			if err != nil {
				if debug {
					l.Debugln(err)
				}
			} else if debug {
				l.Debugf("sent %d bytes to %s", len(bs), dst)
			}
		}
	}
}

func bcast(ip *net.IPNet) *net.IPNet {
	var bc = &net.IPNet{}
	bc.IP = make([]byte, len(ip.IP))
	copy(bc.IP, ip.IP)
	bc.Mask = ip.Mask

	offset := len(bc.IP) - len(bc.Mask)
	for i := range bc.IP {
		if i-offset >= 0 {
			bc.IP[i] = ip.IP[i] | ^ip.Mask[i-offset]
		}
	}
	return bc
}
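bcast sets all host bits of the interface address by ORing with the inverted mask, so 192.168.1.5/24 yields 192.168.1.255. A standalone check of that arithmetic; the helper is re-declared here so the snippet runs on its own:

package main

import (
	"fmt"
	"net"
)

// bcast mirrors the helper above: the broadcast address is the interface
// address with all host bits set.
func bcast(ip *net.IPNet) net.IP {
	bc := make(net.IP, len(ip.IP))
	copy(bc, ip.IP)
	offset := len(bc) - len(ip.Mask)
	for i := range bc {
		if i-offset >= 0 {
			bc[i] = ip.IP[i] | ^ip.Mask[i-offset]
		}
	}
	return bc
}

func main() {
	ip, ipnet, _ := net.ParseCIDR("192.168.1.5/24")
	ipnet.IP = ip.To4()
	fmt.Println(bcast(ipnet)) // 192.168.1.255
}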
69 beacon/multicast.go Normal file
@@ -0,0 +1,69 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package beacon

import "net"

type Multicast struct {
	conn   *net.UDPConn
	addr   *net.UDPAddr
	inbox  chan []byte
	outbox chan recv
}

func NewMulticast(addr string) (*Multicast, error) {
	gaddr, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return nil, err
	}
	conn, err := net.ListenMulticastUDP("udp", nil, gaddr)
	if err != nil {
		return nil, err
	}
	b := &Multicast{
		conn:   conn,
		addr:   gaddr,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go genericReader(b.conn, b.outbox)
	go b.writer()

	return b, nil
}

func (b *Multicast) Send(data []byte) {
	b.inbox <- data
}

func (b *Multicast) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Multicast) writer() {
	for bs := range b.inbox {
		intfs, err := net.Interfaces()
		if err != nil {
			l.Warnln("multicast interfaces:", err)
			continue
		}
		for _, intf := range intfs {
			if intf.Flags&net.FlagUp != 0 && intf.Flags&net.FlagMulticast != 0 {
				addr := *b.addr
				addr.Zone = intf.Name
				_, err = b.conn.WriteTo(bs, &addr)
				if err != nil {
					if debug {
						l.Debugln(err, "on write to", addr)
					}
				} else if debug {
					l.Debugf("sent %d bytes to %s", len(bs), addr.String())
				}
			}
		}
	}
}
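Multicast differs from Broadcast mainly in its writer: instead of computing per-interface broadcast addresses, it writes to one fixed group address once per up, multicast-capable interface, setting the zone to the interface name. A construction sketch; the group address below is a placeholder, not necessarily the one syncthing uses:

package main

import "github.com/syncthing/syncthing/beacon"

func main() {
	// Hypothetical IPv6 multicast group:port; substitute the real one.
	mc, err := beacon.NewMulticast("[ff12::1234]:21026")
	if err != nil {
		panic(err)
	}
	mc.Send([]byte("hello"))
	data, src := mc.Recv()
	_, _ = data, src
}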
500 build.go Normal file
@@ -0,0 +1,500 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build ignore

package main

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"os/user"
	"path/filepath"
	"regexp"
	"runtime"
	"strconv"
	"strings"
)

var (
	versionRe = regexp.MustCompile(`-[0-9]{1,3}-g[0-9a-f]{5,10}`)
	goarch    string
	goos      string
	noupgrade bool
)

const minGoVersion = 1.3

func main() {
	log.SetOutput(os.Stdout)
	log.SetFlags(0)

	if os.Getenv("GOPATH") == "" {
		cwd, err := os.Getwd()
		if err != nil {
			log.Fatal(err)
		}
		gopath := filepath.Clean(filepath.Join(cwd, "../../../../"))
		log.Println("GOPATH is", gopath)
		os.Setenv("GOPATH", gopath)
	}
	os.Setenv("PATH", fmt.Sprintf("%s%cbin%c%s", os.Getenv("GOPATH"), os.PathSeparator, os.PathListSeparator, os.Getenv("PATH")))

	flag.StringVar(&goarch, "goarch", runtime.GOARCH, "GOARCH")
	flag.StringVar(&goos, "goos", runtime.GOOS, "GOOS")
	flag.BoolVar(&noupgrade, "no-upgrade", false, "Disable upgrade functionality")
	flag.Parse()

	switch goarch {
	case "386", "amd64", "armv5", "armv6", "armv7":
		break
	case "arm":
		log.Println("Invalid goarch \"arm\". Use one of \"armv5\", \"armv6\", \"armv7\".")
		log.Fatalln("Note that producing a correct \"armv5\" binary requires a rebuilt stdlib.")
	default:
		log.Printf("Unknown goarch %q; proceed with caution!", goarch)
	}

	checkRequiredGoVersion()

	if check() != nil {
		setup()
	}

	if flag.NArg() == 0 {
		install("./cmd/...")
		return
	}

	switch flag.Arg(0) {
	case "install":
		pkg := "./cmd/..."
		if flag.NArg() > 2 {
			pkg = flag.Arg(1)
		}
		install(pkg)

	case "build":
		pkg := "./cmd/syncthing"
		if flag.NArg() > 2 {
			pkg = flag.Arg(1)
		}
		var tags []string
		if noupgrade {
			tags = []string{"noupgrade"}
		}
		build(pkg, tags)

	case "test":
		pkg := "./..."
		if flag.NArg() > 2 {
			pkg = flag.Arg(1)
		}
		test(pkg)

	case "assets":
		assets()

	case "xdr":
		xdr()

	case "translate":
		translate()

	case "transifex":
		transifex()

	case "deps":
		deps()

	case "tar":
		buildTar()

	case "zip":
		buildZip()

	case "clean":
		clean()

	default:
		log.Fatalf("Unknown command %q", flag.Arg(0))
	}
}

func check() error {
	_, err := exec.LookPath("godep")
	return err
}

func checkRequiredGoVersion() {
	ver := run("go", "version")
	re := regexp.MustCompile(`go version go(\d+\.\d+)`)
	if m := re.FindSubmatch(ver); len(m) == 2 {
		vs := string(m[1])
		// This is a standard go build. Verify that it's new enough.
		f, err := strconv.ParseFloat(vs, 64)
		if err != nil {
			log.Printf("*** Couldn't parse Go version out of %q.\n*** This isn't known to work, proceed at your own risk.", vs)
			return
		}
		if f < minGoVersion {
			log.Fatalf("*** Go version %.01f is less than required %.01f.\n*** This is known not to work, not proceeding.", f, minGoVersion)
		}
	} else {
		log.Printf("*** Unknown Go version %q.\n*** This isn't known to work, proceed at your own risk.", ver)
	}
}

func setup() {
	runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/cover")
	runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/vet")
	runPrint("go", "get", "-v", "code.google.com/p/go.net/html")
	runPrint("go", "get", "-v", "github.com/tools/godep")
}

func test(pkg string) {
	runPrint("godep", "go", "test", "-short", "-timeout", "10s", pkg)
}

func install(pkg string) {
	os.Setenv("GOBIN", "./bin")
	setBuildEnv()
	runPrint("godep", "go", "install", "-ldflags", ldflags(), pkg)
}

func build(pkg string, tags []string) {
	rmr("syncthing", "syncthing.exe")
	args := []string{"go", "build", "-ldflags", ldflags()}
	if len(tags) > 0 {
		args = append(args, "-tags", strings.Join(tags, ","))
	}
	args = append(args, pkg)
	setBuildEnv()
	runPrint("godep", args...)
}

func buildTar() {
	name := archiveName()
	var tags []string
	if noupgrade {
		tags = []string{"noupgrade"}
		name += "-noupgrade"
	}
	build("./cmd/syncthing", tags)
	filename := name + ".tar.gz"
	tarGz(filename, []archiveFile{
		{"README.md", name + "/README.txt"},
		{"LICENSE", name + "/LICENSE.txt"},
		{"CONTRIBUTORS", name + "/CONTRIBUTORS.txt"},
		{"syncthing", name + "/syncthing"},
	})
	log.Println(filename)
}

func buildZip() {
	name := archiveName()
	var tags []string
	if noupgrade {
		tags = []string{"noupgrade"}
		name += "-noupgrade"
	}
	build("./cmd/syncthing", tags)
	filename := name + ".zip"
	zipFile(filename, []archiveFile{
		{"README.md", name + "/README.txt"},
		{"LICENSE", name + "/LICENSE.txt"},
		{"CONTRIBUTORS", name + "/CONTRIBUTORS.txt"},
		{"syncthing.exe", name + "/syncthing.exe"},
	})
	log.Println(filename)
}

func setBuildEnv() {
	os.Setenv("GOOS", goos)
	if strings.HasPrefix(goarch, "arm") {
		os.Setenv("GOARCH", "arm")
		os.Setenv("GOARM", goarch[4:])
	} else {
		os.Setenv("GOARCH", goarch)
	}
	if goarch == "386" {
		os.Setenv("GO386", "387")
	}
}

func assets() {
	runPipe("auto/gui.files.go", "godep", "go", "run", "cmd/genassets/main.go", "gui")
}

func xdr() {
	for _, f := range []string{"discover/packets", "files/leveldb", "protocol/message"} {
		runPipe(f+"_xdr.go", "go", "run", "./Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go", "--", f+".go")
	}
}

func translate() {
	os.Chdir("gui/lang")
	runPipe("lang-en-new.json", "go", "run", "../../cmd/translate/main.go", "lang-en.json", "../index.html")
	os.Remove("lang-en.json")
	err := os.Rename("lang-en-new.json", "lang-en.json")
	if err != nil {
		log.Fatal(err)
	}
	os.Chdir("../..")
}

func transifex() {
	os.Chdir("gui/lang")
	runPrint("go", "run", "../../cmd/transifexdl/main.go")
	os.Chdir("../..")
	assets()
}

func deps() {
	rmr("Godeps")
	runPrint("godep", "save", "./cmd/...")
}

func clean() {
	rmr("bin", "Godeps/_workspace/pkg", "Godeps/_workspace/bin")
	rmr(filepath.Join(os.Getenv("GOPATH"), fmt.Sprintf("pkg/%s_%s/github.com/syncthing", goos, goarch)))
}

func ldflags() string {
	var b bytes.Buffer
	b.WriteString("-w")
	b.WriteString(fmt.Sprintf(" -X main.Version %s", version()))
	b.WriteString(fmt.Sprintf(" -X main.BuildStamp %d", buildStamp()))
	b.WriteString(fmt.Sprintf(" -X main.BuildUser %s", buildUser()))
	b.WriteString(fmt.Sprintf(" -X main.BuildHost %s", buildHost()))
	b.WriteString(fmt.Sprintf(" -X main.BuildEnv %s", buildEnvironment()))
	if strings.HasPrefix(goarch, "arm") {
		b.WriteString(fmt.Sprintf(" -X main.GoArchExtra %s", goarch[3:]))
	}
	return b.String()
}

func rmr(paths ...string) {
	for _, path := range paths {
		log.Println("rm -r", path)
		os.RemoveAll(path)
	}
}

func version() string {
	v := run("git", "describe", "--always", "--dirty")
	v = versionRe.ReplaceAllFunc(v, func(s []byte) []byte {
		s[0] = '+'
		return s
	})
	return string(v)
}

func buildStamp() int64 {
	bs := run("git", "show", "-s", "--format=%ct")
	s, _ := strconv.ParseInt(string(bs), 10, 64)
	return s
}

func buildUser() string {
	u, err := user.Current()
	if err != nil {
		return "unknown-user"
	}
	return strings.Replace(u.Username, " ", "-", -1)
}

func buildHost() string {
	h, err := os.Hostname()
	if err != nil {
		return "unknown-host"
	}
	return h
}

func buildEnvironment() string {
	if v := os.Getenv("ENVIRONMENT"); len(v) > 0 {
		return v
	}
	return "default"
}

func buildArch() string {
	os := goos
	if os == "darwin" {
		os = "macosx"
	}
	return fmt.Sprintf("%s-%s", os, goarch)
}

func archiveName() string {
	return fmt.Sprintf("syncthing-%s-%s", buildArch(), version())
}

func run(cmd string, args ...string) []byte {
	ecmd := exec.Command(cmd, args...)
	bs, err := ecmd.CombinedOutput()
	if err != nil {
		log.Println(cmd, strings.Join(args, " "))
		log.Println(string(bs))
		log.Fatal(err)
	}
	return bytes.TrimSpace(bs)
}

func runPrint(cmd string, args ...string) {
	log.Println(cmd, strings.Join(args, " "))
	ecmd := exec.Command(cmd, args...)
	ecmd.Stdout = os.Stdout
	ecmd.Stderr = os.Stderr
	err := ecmd.Run()
	if err != nil {
		log.Fatal(err)
	}
}

func runPipe(file, cmd string, args ...string) {
	log.Println(cmd, strings.Join(args, " "), ">", file)
	fd, err := os.Create(file)
	if err != nil {
		log.Fatal(err)
	}
	ecmd := exec.Command(cmd, args...)
	ecmd.Stdout = fd
	ecmd.Stderr = os.Stderr
	err = ecmd.Run()
	if err != nil {
		log.Fatal(err)
	}
	fd.Close()
}

type archiveFile struct {
	src string
	dst string
}

func tarGz(out string, files []archiveFile) {
	fd, err := os.Create(out)
	if err != nil {
		log.Fatal(err)
	}

	gw := gzip.NewWriter(fd)
	tw := tar.NewWriter(gw)

	for _, f := range files {
		sf, err := os.Open(f.src)
		if err != nil {
			log.Fatal(err)
		}

		info, err := sf.Stat()
		if err != nil {
			log.Fatal(err)
		}
		h := &tar.Header{
			Name:    f.dst,
			Size:    info.Size(),
			Mode:    int64(info.Mode()),
			ModTime: info.ModTime(),
		}

		err = tw.WriteHeader(h)
		if err != nil {
			log.Fatal(err)
		}
		_, err = io.Copy(tw, sf)
		if err != nil {
			log.Fatal(err)
		}
		sf.Close()
	}

	err = tw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = gw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = fd.Close()
	if err != nil {
		log.Fatal(err)
	}
}

func zipFile(out string, files []archiveFile) {
	fd, err := os.Create(out)
	if err != nil {
		log.Fatal(err)
	}

	zw := zip.NewWriter(fd)

	for _, f := range files {
		sf, err := os.Open(f.src)
		if err != nil {
			log.Fatal(err)
		}

		info, err := sf.Stat()
		if err != nil {
			log.Fatal(err)
		}

		fh, err := zip.FileInfoHeader(info)
		if err != nil {
			log.Fatal(err)
		}
		fh.Name = f.dst
		fh.Method = zip.Deflate

		if strings.HasSuffix(f.dst, ".txt") {
			// Text file. Read it and convert line endings to CRLF.
			bs, err := ioutil.ReadAll(sf)
			if err != nil {
				log.Fatal(err)
			}
			bs = bytes.Replace(bs, []byte{'\n'}, []byte{'\r', '\n'}, -1)
			fh.UncompressedSize = uint32(len(bs))
			fh.UncompressedSize64 = uint64(len(bs))

			of, err := zw.CreateHeader(fh)
			if err != nil {
				log.Fatal(err)
			}
			of.Write(bs)
		} else {
			// Binary file. Copy verbatim.
			of, err := zw.CreateHeader(fh)
			if err != nil {
				log.Fatal(err)
			}
			_, err = io.Copy(of, sf)
			if err != nil {
				log.Fatal(err)
			}
		}
	}

	err = zw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = fd.Close()
	if err != nil {
		log.Fatal(err)
	}
}
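build.go bakes version metadata into the binary via -X linker flags, which overwrite package-level string variables at link time. A minimal sketch of the receiving side; note that the space-separated "-X name value" form produced by ldflags() above is the Go 1.3-era syntax, while later toolchains expect "-X name=value":

package main

import "fmt"

// These are set at link time, e.g.:
//   go build -ldflags "-X main.Version v0.8.15 -X main.BuildHost buildbox"
var (
	Version   = "unknown-dev"
	BuildHost = "unknown-host"
)

func main() {
	fmt.Printf("syncthing %s (built on %s)\n", Version, BuildHost)
}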
273 build.sh
@@ -2,238 +2,101 @@
set -euo pipefail
IFS=$'\n\t'

export COPYFILE_DISABLE=true
export GO386=387 # Don't use SSE on 32 bit builds

distFiles=(README.md LICENSE CONTRIBUTORS) # apart from the binary itself

# replace "...-12-g123abc" with "...+12-g123abc" to remain semver compatible-ish
version=$(git describe --always --dirty)
version=$(echo "$version" | sed 's/-\([0-9]\{1,3\}-g[0-9a-f]\{5,10\}\)/+\1/')

date=$(git show -s --format=%ct)
user=$(whoami)
host=$(hostname)
host=${host%%.*}
bldenv=${ENVIRONMENT:-default}
ldflags="-w -X main.Version $version -X main.BuildStamp $date -X main.BuildUser $user -X main.BuildHost $host -X main.BuildEnv $bldenv"

check() {
	if ! command -v godep >/dev/null ; then
		echo "Error: no godep. Try \"$0 setup\"."
		exit 1
	fi
}

build() {
	check
	godep go build $* -ldflags "$ldflags" ./cmd/syncthing
}

assets() {
	check
	godep go run cmd/genassets/main.go gui > auto/gui.files.go
}

test-cov() {
	echo "mode: set" > coverage.out
	fail=0

	for dir in $(go list ./...) ; do
		godep go test -coverprofile=profile.out $dir || fail=1
		if [ -f profile.out ] ; then
			grep -v "mode: set" profile.out >> coverage.out
			rm profile.out
		fi
	done

	exit $fail
}

test() {
	check
	go vet ./...
	godep go test -cpu=1,2,4 $* ./...
}

tarDist() {
	name="$1"
	rm -rf "$name"
	mkdir -p "$name"
	cp syncthing "${distFiles[@]}" "$name"
	tar zcvf "$name.tar.gz" "$name"
	rm -rf "$name"
}

zipDist() {
	name="$1"
	rm -rf "$name"
	mkdir -p "$name"
	for f in "${distFiles[@]}" ; do
		GOARCH="" GOOS="" go run cmd/todos/main.go < "$f" > "$name/$f.txt"
	done
	cp syncthing.exe "$name"
	zip -r "$name.zip" "$name"
	rm -rf "$name"
}

deps() {
	check
	godep save ./cmd/...
}

setup() {
	go get -v code.google.com/p/go.tools/cmd/cover
	go get -v code.google.com/p/go.tools/cmd/vet
	go get -v github.com/mattn/goveralls
	go get -v github.com/tools/godep
}

xdr() {
	for f in discover/packets files/leveldb protocol/message ; do
		go run "$(godep path)/src/github.com/calmh/xdr/cmd/genxdr/main.go" -- "${f}.go" > "${f}_xdr.go"
	done
}

translate() {
	pushd gui
	go run ../cmd/translate/main.go lang-en.json < index.html > lang-en-new.json
	mv lang-en-new.json lang-en.json
	popd
}

transifex() {
	pushd gui
	go run ../cmd/transifexdl/main.go
	popd
	assets
}

build-all() {
	rm -f *.tar.gz *.zip
	test -short
	assets

	rm -rf bin Godeps/_workspace/pkg $GOPATH/pkg/*/github.com/syncthing
	for os in darwin-amd64 freebsd-amd64 freebsd-386 linux-amd64 linux-386 windows-amd64 windows-386 ; do
		export GOOS=${os%-*}
		export GOARCH=${os#*-}

		build $*

		name="syncthing-${os/darwin/macosx}-$version"
		case $GOOS in
			windows)
				zipDist "$name"
				rm -f syncthing.exe
				;;
			*)
				tarDist "$name"
				rm -f syncthing
				;;
		esac
	done

	export GOOS=linux
	export GOARCH=arm

	origldflags="$ldflags"

	export GOARM=7
	ldflags="$origldflags -X main.GoArchExtra v7"
	build $*
	tarDist "syncthing-linux-armv7-$version"

	export GOARM=6
	ldflags="$origldflags -X main.GoArchExtra v6"
	build $*
	tarDist "syncthing-linux-armv6-$version"

	export GOARM=5
	ldflags="$origldflags -X main.GoArchExtra v5"
	build $*
	tarDist "syncthing-linux-armv5-$version"
}

case "${1:-default}" in
	default)
		if [[ $# -gt 1 ]] ; then
			shift
		fi
		export GOBIN=$(pwd)/bin
		godep go install $* -ldflags "$ldflags" ./cmd/...
		go run build.go
		;;

	clean)
		rm -rf bin Godeps/_workspace/pkg $GOPATH/pkg/*/github.com/syncthing
		;;

	noupgrade)
		export GOBIN=$(pwd)/bin
		godep go install -tags noupgrade -ldflags "$ldflags" ./cmd/...
		;;

	race)
		build -race
		;;

	guidev)
		echo "Syncthing is already built for GUI development. Try:"
		echo "    STGUIASSETS=~/someDir/gui syncthing"
		go run build.go "$1"
		;;

	test)
		test -short
		;;
		ulimit -t 60 &>/dev/null || true
		ulimit -d 512000 &>/dev/null || true
		ulimit -m 512000 &>/dev/null || true

	test-cov)
		test-cov
		go run build.go "$1"
		;;

	tar)
		rm -f *.tar.gz *.zip
		test -short
		assets
		build

		eval $(go env)
		name="syncthing-${GOOS/darwin/macosx}-$GOARCH-$version"

		tarDist "$name"
		;;

	all)
		shift
		build-all
		;;

	all-noupgrade)
		shift
		build-all -tags noupgrade
		go run build.go "$1"
		;;

	deps)
		deps
		go run build.go "$1"
		;;

	assets)
		assets
		;;

	setup)
		setup
		go run build.go "$1"
		;;

	xdr)
		xdr
		go run build.go "$1"
		;;

	translate)
		translate
		go run build.go "$1"
		;;

	transifex)
		transifex
		go run build.go "$1"
		;;

	noupgrade)
		go run build.go -no-upgrade tar
		;;

	all)
		go run build.go -goos linux -goarch amd64 tar
		go run build.go -goos linux -goarch 386 tar
		go run build.go -goos linux -goarch armv5 tar
		go run build.go -goos linux -goarch armv6 tar
		go run build.go -goos linux -goarch armv7 tar

		go run build.go -goos freebsd -goarch amd64 tar
		go run build.go -goos freebsd -goarch 386 tar

		go run build.go -goos darwin -goarch amd64 tar

		go run build.go -goos windows -goarch amd64 zip
		go run build.go -goos windows -goarch 386 zip
		;;

	setup)
		echo "Don't worry, just build."
		;;

	test-cov)
		ulimit -t 60 &>/dev/null || true
		ulimit -d 512000 &>/dev/null || true
		ulimit -m 512000 &>/dev/null || true

		go get github.com/axw/gocov/gocov
		go get github.com/AlekSi/gocov-xml

		echo "mode: set" > coverage.out
		fail=0

		# For every package in the repo
		for dir in $(go list ./...) ; do
			# run the tests
			godep go test -coverprofile=profile.out $dir
			if [ -f profile.out ] ; then
				# and if there was test output, append it to coverage.out
				grep -v "mode: set" profile.out >> coverage.out
				rm profile.out
			fi
		done

		gocov convert coverage.out | gocov-xml > coverage.xml

		# This is usually run from within Jenkins. If it is, we need to
		# tweak the paths in coverage.xml so cobertura finds the
		# source.
		if [[ "${WORKSPACE:-default}" != "default" ]] ; then
			sed "s#$WORKSPACE##g" < coverage.xml > coverage.xml.new && mv coverage.xml.new coverage.xml
		fi
		;;

	*)

29 check-contrib.sh Executable file
@@ -0,0 +1,29 @@
#!/bin/bash

missing-contribs() {
	for email in $(git log --format=%ae master | grep -v jakob@nym.se | sort | uniq) ; do
		grep -q "$email" CONTRIBUTORS || echo $email
	done
}

no-docs-typos() {
	# Commits that are known to not change code
	grep -v f2459ef3319b2f060dbcdacd0c35a1788a94b8bd |\
	grep -v b61f418bf2d1f7d5a9d7088a20a2a448e5e66801 |\
	grep -v f0621207e3953711f9ab86d99724f1d0faac45b1 |\
	grep -v f1120d7aa936c0658429edef0037792520b46334
}

print-missing-contribs() {
	for email in $(missing-contribs) ; do
		git log --author="$email" --format="%H %ae %s" | no-docs-typos
	done
}

print-missing-copyright() {
	find . -name \*.go | xargs grep -L 'Copyright (C)' | grep -v Godeps
}

print-missing-contribs
print-missing-copyright

@@ -9,8 +9,8 @@ package main
import (
	"bytes"
	"compress/gzip"
	"encoding/base64"
	"flag"
	"fmt"
	"go/format"
	"io"
	"os"
@@ -23,27 +23,27 @@ var tpl = template.Must(template.New("assets").Parse(`package auto
import (
	"bytes"
	"compress/gzip"
	"encoding/hex"
	"encoding/base64"
	"io/ioutil"
)

var Assets = make(map[string][]byte)

func init() {
func Assets() map[string][]byte {
	var assets = make(map[string][]byte, {{.assets | len}})
	var bs []byte
	var gr *gzip.Reader
	{{range $asset := .assets}}
	bs, _ = hex.DecodeString("{{$asset.HexData}}")
	bs, _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
	gr, _ = gzip.NewReader(bytes.NewBuffer(bs))
	bs, _ = ioutil.ReadAll(gr)
	Assets["{{$asset.Name}}"] = bs
	assets["{{$asset.Name}}"] = bs
	{{end}}
	return assets
}
`))

type asset struct {
	Name    string
	HexData string
	Name string
	Data string
}

var assets []asset
@@ -69,8 +69,8 @@ func walkerFor(basePath string) filepath.WalkFunc {

	name, _ = filepath.Rel(basePath, name)
	assets = append(assets, asset{
		Name:    filepath.ToSlash(name),
		HexData: fmt.Sprintf("%x", buf.Bytes()),
		Name: filepath.ToSlash(name),
		Data: base64.StdEncoding.EncodeToString(buf.Bytes()),
	})
}


@@ -5,13 +5,10 @@
package main

import (
	"bytes"
	"encoding/base64"
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"math/rand"
	"mime"
	"net"
	"net/http"
@@ -24,7 +21,6 @@ import (
	"sync"
	"time"

	"crypto/tls"
	"code.google.com/p/go.crypto/bcrypt"
	"github.com/syncthing/syncthing/auto"
	"github.com/syncthing/syncthing/config"
@@ -45,16 +41,10 @@ var (
	configInSync = true
	guiErrors    = []guiError{}
	guiErrorsMut sync.Mutex
	static       func(http.ResponseWriter, *http.Request, *log.Logger)
	apiKey       string
	modt         = time.Now().UTC().Format(http.TimeFormat)
	eventSub     *events.BufferedSubscription
)

const (
	unchangedPassword = "--password-unchanged--"
)

func init() {
	l.AddHandler(logger.LevelWarn, showGuiError)
	sub := events.Default.Subscribe(events.AllEvents)
@@ -62,36 +52,28 @@ func init() {
}

func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) error {
	var listener net.Listener
	var err error
	if cfg.UseTLS {
		cert, err := loadCert(confDir, "https-")
		if err != nil {
			l.Infoln("Loading HTTPS certificate:", err)
			l.Infoln("Creating new HTTPS certificate")
			newCertificate(confDir, "https-")
			cert, err = loadCert(confDir, "https-")
		}
		if err != nil {
			return err
		}
		tlsCfg := &tls.Config{
			Certificates: []tls.Certificate{cert},
			ServerName:   "syncthing",
		}
		listener, err = tls.Listen("tcp", cfg.Address, tlsCfg)
		if err != nil {
			return err
		}
	} else {
		listener, err = net.Listen("tcp", cfg.Address)
		if err != nil {
			return err
		}

	cert, err := loadCert(confDir, "https-")
	if err != nil {
		l.Infoln("Loading HTTPS certificate:", err)
		l.Infoln("Creating new HTTPS certificate")
		newCertificate(confDir, "https-")
		cert, err = loadCert(confDir, "https-")
	}
	if err != nil {
		return err
	}
	tlsCfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		ServerName:   "syncthing",
	}

	apiKey = cfg.APIKey
	loadCsrfTokens()
	rawListener, err := net.Listen("tcp", cfg.Address)
	if err != nil {
		return err
	}
	listener := &DowngradingListener{rawListener, tlsCfg}

	// The GET handlers
	getRestMux := http.NewServeMux()
@@ -111,6 +93,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
	getRestMux.HandleFunc("/rest/system", restGetSystem)
	getRestMux.HandleFunc("/rest/upgrade", restGetUpgrade)
	getRestMux.HandleFunc("/rest/version", restGetVersion)
	getRestMux.HandleFunc("/rest/stats/node", withModel(m, restGetNodeStats))

	// Debug endpoints, not for general use
	getRestMux.HandleFunc("/rest/debug/peerCompletion", withModel(m, restGetPeerCompletion))
@@ -142,11 +125,19 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro

	// Wrap everything in CSRF protection. The /rest prefix should be
	// protected, other requests will grant cookies.
	handler := csrfMiddleware("/rest", mux)
	handler := csrfMiddleware("/rest", cfg.APIKey, mux)

	// Add our version as a header to responses
	handler = withVersionMiddleware(handler)

	// Wrap everything in basic auth, if user/password is set.
	if len(cfg.User) > 0 {
		handler = basicAuthMiddleware(cfg.User, cfg.Password, handler)
	if len(cfg.User) > 0 && len(cfg.Password) > 0 {
		handler = basicAuthAndSessionMiddleware(cfg, handler)
	}

	// Redirect to HTTPS if we are supposed to
	if cfg.UseTLS {
		handler = redirectToHTTPSMiddleware(handler)
	}

	go http.Serve(listener, handler)
@@ -166,6 +157,23 @@ func getPostHandler(get, post http.Handler) http.Handler {
	})
}

func redirectToHTTPSMiddleware(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Add a generous access-control-allow-origin header since we may be
		// redirecting REST requests over protocols
		w.Header().Add("Access-Control-Allow-Origin", "*")

		if r.TLS == nil {
			// Redirect HTTP requests to HTTPS
			r.URL.Host = r.Host
			r.URL.Scheme = "https"
			http.Redirect(w, r, r.URL.String(), http.StatusFound)
		} else {
			h.ServeHTTP(w, r)
		}
	})
}

func noCacheMiddleware(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "no-cache")
@@ -173,6 +181,13 @@ func noCacheMiddleware(h http.Handler) http.Handler {
	})
}

func withVersionMiddleware(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Syncthing-Version", Version)
		h.ServeHTTP(w, r)
	})
}

func withModel(m *model.Model, h func(m *model.Model, w http.ResponseWriter, r *http.Request)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		h(m, w, r)
@@ -246,14 +261,14 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
func restPostOverride(m *model.Model, w http.ResponseWriter, r *http.Request) {
	var qs = r.URL.Query()
	var repo = qs.Get("repo")
	m.Override(repo)
	go m.Override(repo)
}

func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
	var qs = r.URL.Query()
	var repo = qs.Get("repo")

	files := m.NeedFilesRepo(repo)
	files := m.NeedFilesRepoLimited(repo, 100, 2500) // max 100 files or 2500 blocks

	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	json.NewEncoder(w).Encode(files)
@@ -265,35 +280,35 @@ func restGetConnections(m *model.Model, w http.ResponseWriter, r *http.Request)
	json.NewEncoder(w).Encode(res)
}

func restGetConfig(w http.ResponseWriter, r *http.Request) {
	encCfg := cfg
	if encCfg.GUI.Password != "" {
		encCfg.GUI.Password = unchangedPassword
	}
func restGetNodeStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
	var res = m.NodeStatistics()
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	json.NewEncoder(w).Encode(encCfg)
	json.NewEncoder(w).Encode(res)
}

func restGetConfig(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	json.NewEncoder(w).Encode(cfg)
}

func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
	var newCfg config.Configuration
	err := json.NewDecoder(r.Body).Decode(&newCfg)
	if err != nil {
		l.Warnln(err)
		l.Warnln("decoding posted config:", err)
		http.Error(w, err.Error(), 500)
		return
	} else {
		if newCfg.GUI.Password == "" {
			// Leave it empty
		} else if newCfg.GUI.Password == unchangedPassword {
			newCfg.GUI.Password = cfg.GUI.Password
		} else {
			hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
			if err != nil {
				l.Warnln(err)
				http.Error(w, err.Error(), 500)
				return
			} else {
				newCfg.GUI.Password = string(hash)
		if newCfg.GUI.Password != cfg.GUI.Password {
			if newCfg.GUI.Password != "" {
				hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
				if err != nil {
					l.Warnln("bcrypting password:", err)
					http.Error(w, err.Error(), 500)
					return
				} else {
					newCfg.GUI.Password = string(hash)
				}
			}
		}

@@ -346,8 +361,9 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {

		// Activate and save

		newCfg.Location = cfg.Location
		newCfg.Save()
		cfg = newCfg
		saveConfig()
	}
}

@@ -459,12 +475,18 @@ func restGetEvents(w http.ResponseWriter, r *http.Request) {
	since, _ := strconv.Atoi(sinceStr)
	limit, _ := strconv.Atoi(limitStr)

	w.Header().Set("Content-Type", "application/json; charset=utf-8")

	// Flush before blocking, to indicate that we've received the request
	// and that it should not be retried.
	f := w.(http.Flusher)
	f.Flush()

	evs := eventSub.Since(since, nil)
	if 0 < limit && limit < len(evs) {
		evs = evs[len(evs)-limit:]
	}

	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	json.NewEncoder(w).Encode(evs)
}

@@ -504,7 +526,7 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
	var langs []string
	for _, l := range strings.Split(lang, ",") {
		parts := strings.SplitN(l, ";", 2)
		langs = append(langs, strings.TrimSpace(parts[0]))
		langs = append(langs, strings.ToLower(strings.TrimSpace(parts[0])))
	}
	w.Header().Set("Content-Type", "application/json; charset=utf-8")
	json.NewEncoder(w).Encode(langs)
@@ -513,7 +535,7 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
	rel, err := upgrade.LatestRelease(strings.Contains(Version, "-beta"))
	if err != nil {
		l.Warnln(err)
		l.Warnln("getting latest release:", err)
		http.Error(w, err.Error(), 500)
		return
	}
@@ -521,12 +543,14 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
	if upgrade.CompareVersions(rel.Tag, Version) == 1 {
		err = upgrade.UpgradeTo(rel, GoArchExtra)
		if err != nil {
			l.Warnln(err)
			l.Warnln("upgrading:", err)
			http.Error(w, err.Error(), 500)
			return
		}

		restPostRestart(w, r)
		flushResponse(`{"ok": "restarting"}`, w)
		l.Infoln("Upgrading")
		stop <- exitUpgrading
	}
}

@@ -578,57 +602,9 @@ func restGetPeerCompletion(m *model.Model, w http.ResponseWriter, r *http.Reques
	json.NewEncoder(w).Encode(comp)
}

func basicAuthMiddleware(username string, passhash string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if validAPIKey(r.Header.Get("X-API-Key")) {
			next.ServeHTTP(w, r)
			return
		}

		error := func() {
			time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
			w.Header().Set("WWW-Authenticate", "Basic realm=\"Authorization Required\"")
			http.Error(w, "Not Authorized", http.StatusUnauthorized)
		}

		hdr := r.Header.Get("Authorization")
		if !strings.HasPrefix(hdr, "Basic ") {
			error()
			return
		}

		hdr = hdr[6:]
		bs, err := base64.StdEncoding.DecodeString(hdr)
		if err != nil {
			error()
			return
		}

		fields := bytes.SplitN(bs, []byte(":"), 2)
		if len(fields) != 2 {
			error()
			return
		}

		if string(fields[0]) != username {
			error()
			return
		}

		if err := bcrypt.CompareHashAndPassword([]byte(passhash), fields[1]); err != nil {
			error()
			return
		}

		next.ServeHTTP(w, r)
	})
}

func validAPIKey(k string) bool {
	return len(apiKey) > 0 && k == apiKey
}

func embeddedStatic(assetDir string) http.Handler {
	assets := auto.Assets()

	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		file := r.URL.Path

@@ -649,13 +625,13 @@ func embeddedStatic(assetDir string) http.Handler {
		}
	}

	bs, ok := auto.Assets[file]
	bs, ok := assets[file]
	if !ok {
		http.NotFound(w, r)
		return
	}

	mtype := mime.TypeByExtension(filepath.Ext(r.URL.Path))
	mtype := mimeTypeForFile(file)
	if len(mtype) != 0 {
		w.Header().Set("Content-Type", mtype)
	}
@@ -665,3 +641,28 @@ func embeddedStatic(assetDir string) http.Handler {
		w.Write(bs)
	})
}

func mimeTypeForFile(file string) string {
	// We use a built-in table of the common types since the system
	// TypeByExtension might be unreliable. But if we don't know, we delegate
	// to the system.
	ext := filepath.Ext(file)
	switch ext {
	case ".htm", ".html":
		return "text/html"
	case ".css":
		return "text/css"
	case ".js":
		return "application/javascript"
	case ".json":
		return "application/json"
	case ".png":
		return "image/png"
	case ".ttf":
		return "application/x-font-ttf"
	case ".woff":
		return "application/x-font-woff"
	default:
		return mime.TypeByExtension(ext)
	}
}

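Since mimeTypeForFile consults a fixed table and only falls back to the platform MIME database for unknown extensions, its common cases pin down easily in a table test. A hedged sketch, assuming the function stays in package main:

package main

import "testing"

func TestMimeTypeForFile(t *testing.T) {
	cases := map[string]string{
		"index.html": "text/html",
		"app.js":     "application/javascript",
		"data.json":  "application/json",
		"logo.png":   "image/png",
	}
	for file, want := range cases {
		if got := mimeTypeForFile(file); got != want {
			t.Errorf("mimeTypeForFile(%q) = %q, want %q", file, got, want)
		}
	}
}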
90 cmd/syncthing/gui_auth.go Executable file
@@ -0,0 +1,90 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
	"bytes"
	"encoding/base64"
	"math/rand"
	"net/http"
	"strings"
	"sync"
	"time"

	"code.google.com/p/go.crypto/bcrypt"
	"github.com/syncthing/syncthing/config"
)

var (
	sessions    = make(map[string]bool)
	sessionsMut sync.Mutex
)

func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if cfg.APIKey != "" && r.Header.Get("X-API-Key") == cfg.APIKey {
			next.ServeHTTP(w, r)
			return
		}

		cookie, err := r.Cookie("sessionid")
		if err == nil && cookie != nil {
			sessionsMut.Lock()
			_, ok := sessions[cookie.Value]
			sessionsMut.Unlock()
			if ok {
				next.ServeHTTP(w, r)
				return
			}
		}

		error := func() {
			time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
			w.Header().Set("WWW-Authenticate", "Basic realm=\"Authorization Required\"")
			http.Error(w, "Not Authorized", http.StatusUnauthorized)
		}

		hdr := r.Header.Get("Authorization")
		if !strings.HasPrefix(hdr, "Basic ") {
			error()
			return
		}

		hdr = hdr[6:]
		bs, err := base64.StdEncoding.DecodeString(hdr)
		if err != nil {
			error()
			return
		}

		fields := bytes.SplitN(bs, []byte(":"), 2)
		if len(fields) != 2 {
			error()
			return
		}

		if string(fields[0]) != cfg.User {
			error()
			return
		}

		if err := bcrypt.CompareHashAndPassword([]byte(cfg.Password), fields[1]); err != nil {
			error()
			return
		}

		sessionid := randomString(32)
		sessionsMut.Lock()
		sessions[sessionid] = true
		sessionsMut.Unlock()
		http.SetCookie(w, &http.Cookie{
			Name:   "sessionid",
			Value:  sessionid,
			MaxAge: 0,
		})

		next.ServeHTTP(w, r)
	})
}
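The middleware accepts, in order, a matching X-API-Key header, an existing sessionid cookie, or basic auth, and mints a session cookie after a successful basic-auth exchange. A client sketch that authenticates once and then rides the cookie; the address and credentials are placeholders:

package main

import (
	"fmt"
	"net/http"
	"net/http/cookiejar"
)

func main() {
	jar, _ := cookiejar.New(nil)
	client := &http.Client{Jar: jar}

	// First request: basic auth; the server responds with a sessionid cookie.
	req, _ := http.NewRequest("GET", "http://127.0.0.1:8080/", nil)
	req.SetBasicAuth("admin", "secret") // placeholder credentials
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// Later requests: the jar replays the cookie, so no credentials are needed.
	resp, err = client.Get("http://127.0.0.1:8080/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}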
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
@@ -21,10 +25,11 @@ var csrfMut sync.Mutex
// Check for CSRF token on /rest/ URLs. If a correct one is not given, reject
// the request with 403. For / and /index.html, set a new CSRF cookie if none
// is currently set.
func csrfMiddleware(prefix string, next http.Handler) http.Handler {
func csrfMiddleware(prefix, apiKey string, next http.Handler) http.Handler {
	loadCsrfTokens()
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Allow requests carrying a valid API key
		if validAPIKey(r.Header.Get("X-API-Key")) {
		if apiKey != "" && r.Header.Get("X-API-Key") == apiKey {
			next.ServeHTTP(w, r)
			return
		}
@@ -72,13 +77,7 @@ func validCsrfToken(token string) bool {
}

func newCsrfToken() string {
	bs := make([]byte, 30)
	_, err := rand.Reader.Read(bs)
	if err != nil {
		l.Fatalln(err)
	}

	token := base64.StdEncoding.EncodeToString(bs)
	token := randomString(30)

	csrfMut.Lock()
	csrfTokens = append(csrfTokens, token)
@@ -130,3 +129,13 @@ func loadCsrfTokens() {
		csrfTokens = append(csrfTokens, s.Text())
	}
}

func randomString(len int) string {
	bs := make([]byte, len)
	_, err := rand.Reader.Read(bs)
	if err != nil {
		l.Fatalln(err)
	}

	return base64.StdEncoding.EncodeToString(bs)
}

@@ -79,7 +79,7 @@ func trackCPUUsage() {
	for _ = range time.NewTicker(time.Second).C {
		err := solarisPrusage(pid, &rusage)
		if err != nil {
			l.Warnln(err)
			l.Warnln("getting prusage:", err)
			continue
		}
		curTime := time.Now().UnixNano()

@@ -14,7 +14,8 @@ import (
)

func init() {
	if os.Getenv("STHEAPPROFILE") != "" {
	if innerProcess && os.Getenv("STHEAPPROFILE") != "" {
		l.Debugln("Starting heap profiling")
		go saveHeapProfiles()
	}
}

24  cmd/syncthing/limitedreader.go  Normal file
@@ -0,0 +1,24 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
	"io"

	"github.com/juju/ratelimit"
)

type limitedReader struct {
	r      io.Reader
	bucket *ratelimit.Bucket
}

func (r *limitedReader) Read(buf []byte) (int, error) {
	n, err := r.r.Read(buf)
	if r.bucket != nil {
		r.bucket.Wait(int64(n))
	}
	return n, err
}
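The new limitedReader mirrors the existing limitedWriter: each Read completes first, then Wait blocks until the token bucket has refilled by the number of bytes just consumed, throttling sustained throughput while allowing bursts up to the bucket capacity. A sketch of wiring it up (the rates are illustrative, and this assumes the limitedReader type from the file above is in scope):

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"
	"time"

	"github.com/juju/ratelimit"
)

func main() {
	// 50 kB/s sustained rate with a burst capacity of five seconds'
	// worth, matching the rate/capacity ratio used in syncthingMain.
	bucket := ratelimit.NewBucketWithRate(50*1000, 5*50*1000)

	var rd io.Reader = strings.NewReader(strings.Repeat("x", 10000))
	rd = &limitedReader{r: rd, bucket: bucket}

	// This small read fits inside the burst capacity and finishes
	// quickly; larger transfers settle at the sustained rate.
	start := time.Now()
	n, _ := io.Copy(ioutil.Discard, rd)
	fmt.Printf("read %d bytes in %v\n", n, time.Since(start))
}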
@@ -16,7 +16,6 @@ import (
	"net/http"
	_ "net/http/pprof"
	"os"
	"os/exec"
	"path/filepath"
	"regexp"
	"runtime"
@@ -26,17 +25,19 @@ import (
	"strings"
	"time"

	"code.google.com/p/go.crypto/bcrypt"
	"github.com/juju/ratelimit"
	"github.com/syncthing/syncthing/config"
	"github.com/syncthing/syncthing/discover"
	"github.com/syncthing/syncthing/events"
	"github.com/syncthing/syncthing/files"
	"github.com/syncthing/syncthing/logger"
	"github.com/syncthing/syncthing/model"
	"github.com/syncthing/syncthing/osutil"
	"github.com/syncthing/syncthing/protocol"
	"github.com/syncthing/syncthing/upgrade"
	"github.com/syncthing/syncthing/upnp"
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

var (
@@ -50,7 +51,16 @@ var (
	GoArchExtra string // "", "v5", "v6", "v7"
)

const (
	exitSuccess            = 0
	exitError              = 1
	exitNoUpgradeAvailable = 2
	exitRestarting         = 3
	exitUpgrading          = 4
)

var l = logger.DefaultLogger
var innerProcess = os.Getenv("STNORESTART") != ""

func init() {
	if Version != "unknown-dev" {
@@ -73,17 +83,16 @@ func init() {
}

var (
	cfg          config.Configuration
	myID         protocol.NodeID
	confDir      string
	logFlags     int = log.Ltime
	rateBucket   *ratelimit.Bucket
	stop         = make(chan bool)
	discoverer   *discover.Discoverer
	lockConn     *net.TCPListener
	lockPort     int
	externalPort int
	cert         tls.Certificate
	cfg            config.Configuration
	myID           protocol.NodeID
	confDir        string
	logFlags       int = log.Ltime
	writeRateLimit *ratelimit.Bucket
	readRateLimit  *ratelimit.Bucket
	stop           = make(chan int)
	discoverer     *discover.Discoverer
	externalPort   int
	cert           tls.Certificate
)

const (
@@ -102,6 +111,15 @@ show time only (2).

The following environment variables are interpreted by syncthing:

 STGUIADDRESS       Override GUI listen address set in config. Expects protocol type
                    followed by hostname or an IP address, followed by a port, such
                    as "https://127.0.0.1:8888".

 STGUIAUTH          Override GUI authentication credentials set in config. Expects
                    a colon separated username and password, such as "admin:secret".

 STGUIAPIKEY        Override GUI API key set in config.

 STNORESTART        Do not attempt to restart when requested to, instead just exit.
                    Set this variable when running under a service manager such as
                    runit, launchd, etc.
@@ -115,6 +133,7 @@ The following environment variables are interpreted by syncthing:
 - "net"     (the main package; connections & network messages)
 - "model"   (the model package)
 - "scanner" (the scanner package)
 - "stats"   (the stats package)
 - "upnp"    (the upnp package)
 - "xdr"     (the xdr package)
 - "all"     (all of the above)
@@ -132,28 +151,39 @@ The following environment variables are interpreted by syncthing:
 STPERFSTATS        Write running performance statistics to perf-$pid.csv. Not
                    supported on Windows.

 STDEADLOCKTIMEOUT  Alter deadlock detection timeout (seconds; default 1200).`
 GOMAXPROCS         Set the maximum number of CPU cores to use. Defaults to all
                    available CPU cores.`
)

func init() {
	rand.Seed(time.Now().UnixNano())
}

// Command line options
var (
	reset             bool
	showVersion       bool
	doUpgrade         bool
	doUpgradeCheck    bool
	noBrowser         bool
	generateDir       string
	guiAddress        string
	guiAuthentication string
	guiAPIKey         string
)

func main() {
	var reset bool
	var showVersion bool
	var doUpgrade bool
	var doUpgradeCheck bool
	var generateDir string
	var noBrowser bool
	flag.StringVar(&confDir, "home", getDefaultConfDir(), "Set configuration directory")
	flag.BoolVar(&reset, "reset", false, "Prepare to resync from cluster")
	flag.BoolVar(&showVersion, "version", false, "Show version")
	flag.BoolVar(&doUpgrade, "upgrade", false, "Perform upgrade")
	flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
	flag.BoolVar(&noBrowser, "no-browser", false, "Do not start browser")
	flag.IntVar(&logFlags, "logflags", logFlags, "Set log flags")
	flag.StringVar(&generateDir, "generate", "", "Generate key in specified dir")
	flag.StringVar(&guiAddress, "gui-address", "", "Override GUI address")
	flag.StringVar(&guiAuthentication, "gui-authentication", "", "Override GUI authentication. Expects 'username:password'")
	flag.StringVar(&guiAPIKey, "gui-apikey", "", "Override GUI API key")
	flag.IntVar(&logFlags, "logflags", logFlags, "Set log flags")
	flag.Usage = usageFor(flag.CommandLine, usage, extraUsage)
	flag.Parse()

@@ -197,7 +227,7 @@ func main() {

	if upgrade.CompareVersions(rel.Tag, Version) <= 0 {
		l.Infof("No upgrade available (current %q >= latest %q).", Version, rel.Tag)
		os.Exit(2)
		os.Exit(exitNoUpgradeAvailable)
	}

	l.Infof("Upgrade available (current %q < latest %q)", Version, rel.Tag)
@@ -214,12 +244,27 @@ func main() {
		}
	}

	var err error
	lockPort, err = getLockPort()
	if err != nil {
		l.Fatalln("Opening lock port:", err)
	if reset {
		resetRepositories()
		return
	}

	confDir = expandTilde(confDir)

	if info, err := os.Stat(confDir); err == nil && !info.IsDir() {
		l.Fatalln("Config directory", confDir, "is not a directory")
	}

	if os.Getenv("STNORESTART") != "" {
		syncthingMain()
	} else {
		monitorMain()
	}
}

func syncthingMain() {
	var err error

	if len(os.Getenv("GOGC")) == 0 {
		debug.SetGCPercent(25)
	}
@@ -228,11 +273,9 @@ func main() {
		runtime.GOMAXPROCS(runtime.NumCPU())
	}

	confDir = expandTilde(confDir)

	events.Default.Log(events.Starting, map[string]string{"home": confDir})

	if _, err := os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
	if _, err = os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
		// We are supposed to use the default configuration directory. It
		// doesn't exist. In the past our default has been ~/.syncthing, so if
		// that directory exists we move it to the new default location and
@@ -272,21 +315,14 @@ func main() {
	// Prepare to be able to save configuration

	cfgFile := filepath.Join(confDir, "config.xml")
	go saveConfigLoop(cfgFile)

	var myName string

	// Load the configuration file, if it exists.
	// If it does not, create a template.

	cf, err := os.Open(cfgFile)
	cfg, err = config.Load(cfgFile, myID)
	if err == nil {
		// Read config.xml
		cfg, err = config.Load(cf, myID)
		if err != nil {
			l.Fatalln(err)
		}
		cf.Close()
		myCfg := cfg.GetNodeConfiguration(myID)
		if myCfg == nil || myCfg.Name == "" {
			myName, _ = os.Hostname()
@@ -298,12 +334,13 @@ func main() {
		myName, _ = os.Hostname()
		defaultRepo := filepath.Join(getHomeDir(), "Sync")

		cfg, err = config.Load(nil, myID)
		cfg = config.New(cfgFile, myID)
		cfg.Repositories = []config.RepositoryConfiguration{
			{
				ID:        "default",
				Directory: defaultRepo,
				Nodes:     []config.NodeConfiguration{{NodeID: myID}},
				ID:              "default",
				Directory:       defaultRepo,
				RescanIntervalS: 60,
				Nodes:           []config.RepositoryNodeConfiguration{{NodeID: myID}},
			},
		}
		cfg.Nodes = []config.NodeConfiguration{
@@ -322,19 +359,10 @@ func main() {
		l.FatalErr(err)
		cfg.Options.ListenAddress = []string{fmt.Sprintf("0.0.0.0:%d", port)}

		saveConfig()
		cfg.Save()
		l.Infof("Edit %s to taste or use the GUI\n", cfgFile)
	}

	if reset {
		resetRepositories()
		return
	}

	if len(os.Getenv("STRESTART")) > 0 {
		waitForParentExit()
	}

	if profiler := os.Getenv("STPROFILER"); len(profiler) > 0 {
		go func() {
			l.Debugln("Starting profiler on", profiler)
@@ -359,20 +387,33 @@ func main() {
		MinVersion: tls.VersionTLS12,
	}

	// If the write rate should be limited, set up a rate limiter for it.
	// If the read or write rate should be limited, set up a rate limiter for it.
	// This will be used on connections created in the connect and listen routines.

	if cfg.Options.MaxSendKbps > 0 {
		rateBucket = ratelimit.NewBucketWithRate(float64(1000*cfg.Options.MaxSendKbps), int64(5*1000*cfg.Options.MaxSendKbps))
		writeRateLimit = ratelimit.NewBucketWithRate(float64(1000*cfg.Options.MaxSendKbps), int64(5*1000*cfg.Options.MaxSendKbps))
	}
	if cfg.Options.MaxRecvKbps > 0 {
		readRateLimit = ratelimit.NewBucketWithRate(float64(1000*cfg.Options.MaxRecvKbps), int64(5*1000*cfg.Options.MaxRecvKbps))
	}
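For reference, the conversion from the configured Kbps values to bucket parameters is rate = 1000 x Kbps bytes per second, with capacity for five seconds' worth of burst. A worked sketch of that arithmetic (the 500 Kbps value is illustrative):

package main

import "fmt"

func main() {
	const maxSendKbps = 500 // illustrative configuration value

	rate := float64(1000 * maxSendKbps)       // 500000 bytes/s sustained
	capacity := int64(5 * 1000 * maxSendKbps) // 2500000 bytes of burst

	fmt.Println(rate, capacity) // 500000 2500000
}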
	// If this is the first time the user runs v0.9, archive the old indexes and config.
	archiveLegacyConfig()

	db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), nil)
	db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{CachedOpenFiles: 100})
	if err != nil {
		l.Fatalln("leveldb.OpenFile():", err)
		l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
	}

	// Remove database entries for repos that no longer exist in the config
	repoMap := cfg.RepoMap()
	for _, repo := range files.ListRepos(db) {
		if _, ok := repoMap[repo]; !ok {
			l.Infof("Cleaning data for dropped repo %q", repo)
			files.DropRepo(db, repo)
		}
	}

	m := model.NewModel(confDir, &cfg, myName, "syncthing", Version, db)

nextRepo:
@@ -390,6 +431,7 @@ nextRepo:
		// that all files have been deleted which might not be the case,
		// so mark it as invalid instead.
		if err != nil || !fi.IsDir() {
			l.Warnf("Stopping repository %q - directory missing, but has files in index", repo.ID)
			cfg.Repositories[i].Invalid = "repo directory missing"
			continue nextRepo
		}
@@ -402,6 +444,7 @@ nextRepo:
		if err != nil {
			// If there was another error or we could not create the
			// directory, the repository is invalid.
			l.Warnf("Stopping repository %q - %v", err)
			cfg.Repositories[i].Invalid = err.Error()
			continue nextRepo
		}
@@ -410,10 +453,13 @@ nextRepo:
	}

	// GUI
	if cfg.GUI.Enabled && cfg.GUI.Address != "" {
		addr, err := net.ResolveTCPAddr("tcp", cfg.GUI.Address)

	guiCfg := overrideGUIConfig(cfg.GUI, guiAddress, guiAuthentication, guiAPIKey)

	if guiCfg.Enabled && guiCfg.Address != "" {
		addr, err := net.ResolveTCPAddr("tcp", guiCfg.Address)
		if err != nil {
			l.Fatalf("Cannot start GUI on %q: %v", cfg.GUI.Address, err)
			l.Fatalf("Cannot start GUI on %q: %v", guiCfg.Address, err)
		} else {
			var hostOpen, hostShow string
			switch {
@@ -429,12 +475,12 @@ nextRepo:
			}

			var proto = "http"
			if cfg.GUI.UseTLS {
			if guiCfg.UseTLS {
				proto = "https"
			}

			l.Infof("Starting web GUI on %s://%s:%d/", proto, hostShow, addr.Port)
			err := startGUI(cfg.GUI, os.Getenv("STGUIASSETS"), m)
			l.Infof("Starting web GUI on %s://%s/", proto, net.JoinHostPort(hostShow, strconv.Itoa(addr.Port)))
			err := startGUI(guiCfg, os.Getenv("STGUIASSETS"), m)
			if err != nil {
				l.Fatalln("Cannot start GUI:", err)
			}
@@ -448,7 +494,13 @@ nextRepo:
	// start needing a bunch of files which are nowhere to be found. This
	// needs to be changed when we correctly do persistent indexes.
	for _, repoCfg := range cfg.Repositories {
		if repoCfg.Invalid != "" {
			continue
		}
		for _, node := range repoCfg.NodeIDs() {
			if node == myID {
				continue
			}
			m.Index(node, repoCfg.ID, nil)
		}
	}
@@ -483,6 +535,14 @@ nextRepo:
		}
	}

	// The default port we announce, possibly modified by setupUPnP next.

	addr, err := net.ResolveTCPAddr("tcp", cfg.Options.ListenAddress[0])
	if err != nil {
		l.Fatalln("Bad listen address:", err)
	}
	externalPort = addr.Port

	// UPnP

	if cfg.Options.UPnPEnabled {
@@ -539,12 +599,17 @@ nextRepo:
		}()
	}

	if cfg.Options.RestartOnWakeup {
		go standbyMonitor()
	}

	events.Default.Log(events.StartupComplete, nil)
	go generateEvents()

	<-stop
	code := <-stop

	l.Okln("Exiting")
	os.Exit(code)
}

func generateEvents() {
@@ -554,25 +619,6 @@ func generateEvents() {
	}
}

func waitForParentExit() {
	l.Infoln("Waiting for parent to exit...")
	lockPortStr := os.Getenv("STRESTART")
	lockPort, err := strconv.Atoi(lockPortStr)
	if err != nil {
		l.Warnln("Invalid lock port %q: %v", lockPortStr, err)
	}
	// Wait for the listen address to become free, indicating that the parent has exited.
	for {
		ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", lockPort))
		if err == nil {
			ln.Close()
			break
		}
		time.Sleep(250 * time.Millisecond)
	}
	l.Infoln("Continuing")
}

func setupUPnP() {
	if len(cfg.Options.ListenAddress) == 1 {
		_, portStr, err := net.SplitHostPort(cfg.Options.ListenAddress[0])
@@ -628,13 +674,16 @@ func renewUPnP(port int) {
		}

		// Just renew the same port that we already have
		err = igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", cfg.Options.UPnPLease*60)
		if err == nil {
			l.Infoln("Renewed UPnP port mapping - external port", externalPort)
			continue
		if externalPort != 0 {
			err = igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", cfg.Options.UPnPLease*60)
			if err == nil {
				l.Infoln("Renewed UPnP port mapping - external port", externalPort)
				continue
			}
		}

		// Something strange has happened. Perhaps the gateway has changed?
		// Something strange has happened. We didn't have an external port before?
		// Or perhaps the gateway has changed?
		// Retry the same port sequence from the beginning.
		r := setupExternalPort(igd, port)
		if r != 0 {
@@ -690,7 +739,7 @@ func archiveLegacyConfig() {
		l.Warnf("Cannot archive config:", err)
		return
	}
	defer src.Close()
	defer dst.Close()

	l.Infoln("Archiving config.xml")
	io.Copy(dst, src)
@@ -699,74 +748,12 @@ func archiveLegacyConfig() {

func restart() {
	l.Infoln("Restarting")
	if os.Getenv("SMF_FMRI") != "" || os.Getenv("STNORESTART") != "" {
		// Solaris SMF
		l.Infoln("Service manager detected; exit instead of restart")
		stop <- true
		return
	}

	env := os.Environ()
	newEnv := make([]string, 0, len(env))
	for _, s := range env {
		if !strings.HasPrefix(s, "STRESTART=") {
			newEnv = append(newEnv, s)
		}
	}
	newEnv = append(newEnv, fmt.Sprintf("STRESTART=%d", lockPort))

	pgm, err := exec.LookPath(os.Args[0])
	if err != nil {
		l.Warnln("Cannot restart:", err)
		return
	}
	proc, err := os.StartProcess(pgm, os.Args, &os.ProcAttr{
		Env:   newEnv,
		Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
	})
	if err != nil {
		l.Fatalln(err)
	}
	proc.Release()
	stop <- true
	stop <- exitRestarting
}

func shutdown() {
	stop <- true
}

var saveConfigCh = make(chan struct{})

func saveConfigLoop(cfgFile string) {
	for _ = range saveConfigCh {
		fd, err := os.Create(cfgFile + ".tmp")
		if err != nil {
			l.Warnln("Saving config:", err)
			continue
		}

		err = config.Save(fd, cfg)
		if err != nil {
			l.Warnln("Saving config:", err)
			fd.Close()
			continue
		}

		err = fd.Close()
		if err != nil {
			l.Warnln("Saving config:", err)
			continue
		}

		err = osutil.Rename(cfgFile+".tmp", cfgFile)
		if err != nil {
			l.Warnln("Saving config:", err)
		}
	}
}

func saveConfig() {
	saveConfigCh <- struct{}{}
	l.Infoln("Shutting down")
	stop <- exitSuccess
}

func listenConnect(myID protocol.NodeID, m *model.Model, tlsCfg *tls.Config) {
@@ -822,15 +809,20 @@ next:
			continue next
		}

		// If rate limiting is set, we wrap the write side of the
		// connection in a limiter.
		// If rate limiting is set, we wrap the connection in a
		// limiter.
		var wr io.Writer = conn
		if rateBucket != nil {
			wr = &limitedWriter{conn, rateBucket}
		if writeRateLimit != nil {
			wr = &limitedWriter{conn, writeRateLimit}
		}

		var rd io.Reader = conn
		if readRateLimit != nil {
			rd = &limitedReader{conn, readRateLimit}
		}

		name := fmt.Sprintf("%s-%s", conn.LocalAddr(), conn.RemoteAddr())
		protoConn := protocol.NewConnection(remoteID, conn, wr, m, name, nodeCfg.Compression)
		protoConn := protocol.NewConnection(remoteID, rd, wr, m, name, nodeCfg.Compression)

		l.Infof("Established secure connection to %s at %s", remoteID, name)
		if debugNet {
@@ -846,6 +838,10 @@ next:
		}
	}

	events.Default.Log(events.NodeRejected, map[string]string{
		"node":    remoteID.String(),
		"address": conn.RemoteAddr().String(),
	})
	l.Infof("Connection from %s with unknown node ID %s; ignoring", conn.RemoteAddr(), remoteID)
	conn.Close()
}
@@ -985,19 +981,15 @@ func setTCPOptions(conn *net.TCPConn) {
}

func discovery(extPort int) *discover.Discoverer {
	disc, err := discover.NewDiscoverer(myID, cfg.Options.ListenAddress, cfg.Options.LocalAnnPort)
	if err != nil {
		l.Warnf("No discovery possible (%v)", err)
		return nil
	}
	disc := discover.NewDiscoverer(myID, cfg.Options.ListenAddress)

	if cfg.Options.LocalAnnEnabled {
		l.Infoln("Sending local discovery announcements")
		disc.StartLocal()
		l.Infoln("Starting local discovery announcements")
		disc.StartLocal(cfg.Options.LocalAnnPort, cfg.Options.LocalAnnMCAddr)
	}

	if cfg.Options.GlobalAnnEnabled {
		l.Infoln("Sending global discovery announcements")
		l.Infoln("Starting global discovery announcements")
		disc.StartGlobal(cfg.Options.GlobalAnnServer, uint16(extPort))
	}

@@ -1086,12 +1078,71 @@ func getFreePort(host string, ports ...int) (int, error) {
	return addr.Port, nil
}

func getLockPort() (int, error) {
	var err error
	lockConn, err = net.ListenTCP("tcp", &net.TCPAddr{IP: net.IP{127, 0, 0, 1}})
	if err != nil {
		return 0, err
func overrideGUIConfig(originalCfg config.GUIConfiguration, address, authentication, apikey string) config.GUIConfiguration {
	// Make a copy of the config
	cfg := originalCfg

	if address == "" {
		address = os.Getenv("STGUIADDRESS")
	}

	if address != "" {
		cfg.Enabled = true

		addressParts := strings.SplitN(address, "://", 2)
		switch addressParts[0] {
		case "http":
			cfg.UseTLS = false
		case "https":
			cfg.UseTLS = true
		default:
			l.Fatalln("Unidentified protocol", addressParts[0])
		}
		cfg.Address = addressParts[1]
	}

	if authentication == "" {
		authentication = os.Getenv("STGUIAUTH")
	}

	if authentication != "" {
		authenticationParts := strings.SplitN(authentication, ":", 2)

		hash, err := bcrypt.GenerateFromPassword([]byte(authenticationParts[1]), 0)
		if err != nil {
			l.Fatalln("Invalid GUI password:", err)
		}

		cfg.User = authenticationParts[0]
		cfg.Password = string(hash)
	}

	if apikey == "" {
		apikey = os.Getenv("STGUIAPIKEY")
	}

	if apikey != "" {
		cfg.APIKey = apikey
	}
	return cfg
}
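overrideGUIConfig applies a consistent precedence: an explicit command line flag wins, else the corresponding STGUI* environment variable, else the stored configuration. A small sketch of the protocol://host:port split it performs (the address is illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The same split overrideGUIConfig performs on STGUIADDRESS.
	address := "https://127.0.0.1:8888"
	parts := strings.SplitN(address, "://", 2)
	useTLS := parts[0] == "https"
	fmt.Println(parts[1], useTLS) // 127.0.0.1:8888 true
}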
func standbyMonitor() {
	restartDelay := time.Duration(60 * time.Second)
	now := time.Now()
	for {
		time.Sleep(10 * time.Second)
		if time.Since(now) > 2*time.Minute {
			l.Infoln("Paused state detected, possibly woke up from standby. Restarting in", restartDelay)

			// We most likely just woke from standby. If we restart
			// immediately chances are we won't have networking ready. Give
			// things a moment to stabilize.
			time.Sleep(restartDelay)

			restart()
			return
		}
		now = time.Now()
	}
	addr := lockConn.Addr().(*net.TCPAddr)
	return addr.Port, nil
}
7  cmd/syncthing/main_test.go  Normal file
@@ -0,0 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main_test

// Empty test file to generate 0% coverage rather than no coverage
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build solaris

package main

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build freebsd

package main

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

182  cmd/syncthing/monitor.go  Normal file
@@ -0,0 +1,182 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
	"bufio"
	"io"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"strings"
	"sync"
	"syscall"
	"time"
)

var (
	stdoutFirstLines []string // The first 10 lines of stdout
	stdoutLastLines  []string // The last 50 lines of stdout
	stdoutMut        sync.Mutex
)

const (
	countRestarts = 5
	loopThreshold = 15 * time.Second
)

func monitorMain() {
	os.Setenv("STNORESTART", "yes")
	l.SetPrefix("[monitor] ")

	args := os.Args
	var restarts [countRestarts]time.Time

	sign := make(chan os.Signal, 1)
	sigTerm := syscall.Signal(0xf)
	signal.Notify(sign, os.Interrupt, sigTerm, os.Kill)

	for {
		if t := time.Since(restarts[0]); t < loopThreshold {
			l.Warnf("%d restarts in %v; not retrying further", countRestarts, t)
			os.Exit(exitError)
		}

		copy(restarts[0:], restarts[1:])
		restarts[len(restarts)-1] = time.Now()

		cmd := exec.Command(args[0], args[1:]...)

		stderr, err := cmd.StderrPipe()
		if err != nil {
			l.Fatalln(err)
		}

		stdout, err := cmd.StdoutPipe()
		if err != nil {
			l.Fatalln(err)
		}

		l.Infoln("Starting syncthing")
		err = cmd.Start()
		if err != nil {
			l.Fatalln(err)
		}

		stdoutMut.Lock()
		stdoutFirstLines = make([]string, 0, 10)
		stdoutLastLines = make([]string, 0, 50)
		stdoutMut.Unlock()

		go copyStderr(stderr)
		go copyStdout(stdout)

		exit := make(chan error)

		go func() {
			exit <- cmd.Wait()
		}()

		select {
		case s := <-sign:
			l.Infof("Signal %d received; exiting", s)
			cmd.Process.Kill()
			<-exit
			return

		case err = <-exit:
			if err == nil {
				// Successful exit indicates an intentional shutdown
				return
			} else if exiterr, ok := err.(*exec.ExitError); ok {
				if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
					switch status.ExitStatus() {
					case exitUpgrading:
						// Restart the monitor process to release the .old
						// binary as part of the upgrade process.
						l.Infoln("Restarting monitor...")
						os.Setenv("STNORESTART", "")
						err := exec.Command(args[0], args[1:]...).Start()
						if err != nil {
							l.Warnln("restart:", err)
						}
						return
					}
				}
			}
		}

		l.Infoln("Syncthing exited:", err)
		time.Sleep(1 * time.Second)

		// Let the next child process know that this is not the first time
		// it's starting up.
		os.Setenv("STRESTART", "yes")
	}
}

func copyStderr(stderr io.ReadCloser) {
	br := bufio.NewReader(stderr)

	var panicFd *os.File
	for {
		line, err := br.ReadString('\n')
		if err != nil {
			return
		}

		if panicFd == nil {
			os.Stderr.WriteString(line)

			if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
				panicFd, err = os.Create(filepath.Join(confDir, time.Now().Format("panic-20060102-150405.log")))
				if err != nil {
					l.Warnln("Create panic log:", err)
					continue
				}

				l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
				l.Warnln("Please create an issue at https://github.com/syncthing/syncthing/issues/ with the panic log attached")

				panicFd.WriteString("Panic at " + time.Now().Format(time.RFC1123) + "\n")
				stdoutMut.Lock()
				for _, line := range stdoutFirstLines {
					panicFd.WriteString(line)
				}
				panicFd.WriteString("...\n")
				for _, line := range stdoutLastLines {
					panicFd.WriteString(line)
				}
			}
		}

		if panicFd != nil {
			panicFd.WriteString(line)
		}
	}
}

func copyStdout(stderr io.ReadCloser) {
	br := bufio.NewReader(stderr)
	for {
		line, err := br.ReadString('\n')
		if err != nil {
			return
		}

		stdoutMut.Lock()
		if len(stdoutFirstLines) < cap(stdoutFirstLines) {
			stdoutFirstLines = append(stdoutFirstLines, line)
		}
		if l := len(stdoutLastLines); l == cap(stdoutLastLines) {
			stdoutLastLines = stdoutLastLines[:l-1]
		}
		stdoutLastLines = append(stdoutLastLines, line)
		stdoutMut.Unlock()

		os.Stdout.WriteString(line)
	}
}
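The monitor's crash-loop guard keeps a fixed window of the last five start times; when the oldest entry is less than 15 seconds old, five restarts happened too quickly and the monitor exits instead of retrying. A standalone sketch of that sliding-window check:

package main

import (
	"fmt"
	"time"
)

const (
	countRestarts = 5                // window size, as in monitor.go
	loopThreshold = 15 * time.Second // five restarts inside this = a loop
)

func main() {
	var restarts [countRestarts]time.Time

	for i := 0; i < 10; i++ {
		// The zero time makes this check pass for the first five starts.
		if t := time.Since(restarts[0]); t < loopThreshold {
			fmt.Printf("%d restarts in %v; giving up\n", countRestarts, t)
			return
		}
		// Shift the window left and record this start time.
		copy(restarts[0:], restarts[1:])
		restarts[len(restarts)-1] = time.Now()
		fmt.Println("start", i)
	}
}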
@@ -2,7 +2,7 @@
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build !windows
// +build !solaris,!windows

package main

@@ -15,7 +15,7 @@ import (
)

func init() {
	if os.Getenv("STPERFSTATS") != "" {
	if innerProcess && os.Getenv("STPERFSTATS") != "" {
		go savePerfStats(fmt.Sprintf("perfstats-%d.csv", syscall.Getpid()))
	}
}
@@ -5,6 +5,7 @@
package main

import (
	"bufio"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
@@ -13,8 +14,10 @@ import (
	"crypto/x509/pkix"
	"encoding/binary"
	"encoding/pem"
	"io"
	"math/big"
	mr "math/rand"
	"net"
	"os"
	"path/filepath"
	"time"
@@ -73,3 +76,40 @@ func newCertificate(dir string, prefix string) {
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	keyOut.Close()
}

type DowngradingListener struct {
	net.Listener
	TLSConfig *tls.Config
}

type WrappedConnection struct {
	io.Reader
	net.Conn
}

func (l *DowngradingListener) Accept() (net.Conn, error) {
	conn, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}

	br := bufio.NewReader(conn)
	bs, err := br.Peek(1)
	if err != nil {
		conn.Close()
		return nil, err
	}

	wrapper := &WrappedConnection{br, conn}

	// 0x16 is the first byte of a TLS handshake
	if bs[0] == 0x16 {
		return tls.Server(wrapper, l.TLSConfig), nil
	}

	return wrapper, nil
}

func (c *WrappedConnection) Read(b []byte) (n int, err error) {
	return c.Reader.Read(b)
}
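DowngradingListener multiplexes TLS and plaintext on a single port: it peeks one byte, and 0x16 (the first byte of a TLS handshake record) selects a TLS server wrap, while anything else passes through as a plain connection. A hedged sketch of serving HTTP and HTTPS together under that scheme (this assumes the DowngradingListener type above is in scope, and that tlsCfg carries a certificate; obtaining one is out of scope here):

package main

import (
	"crypto/tls"
	"net"
	"net/http"
)

// serveBoth answers both http:// and https:// requests on one port by
// wrapping the raw listener in the DowngradingListener defined above.
func serveBoth(addr string, tlsCfg *tls.Config, handler http.Handler) error {
	rawListener, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	listener := &DowngradingListener{rawListener, tlsCfg}
	return http.Serve(listener, handler)
}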
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -81,10 +81,16 @@ func main() {
	}
	fd.Close()

	doc, err := html.Parse(os.Stdin)
	fd, err = os.Open(os.Args[2])
	if err != nil {
		log.Fatal(err)
	}
	doc, err := html.Parse(fd)
	if err != nil {
		log.Fatal(err)
	}
	fd.Close()

	generalNode(doc)
	bs, err := json.MarshalIndent(trans, "", " ")
	if err != nil {
203  config/config.go
@@ -8,20 +8,22 @@ package config
import (
	"encoding/xml"
	"fmt"
	"io"
	"os"
	"reflect"
	"sort"
	"strconv"

	"code.google.com/p/go.crypto/bcrypt"
	"github.com/syncthing/syncthing/events"
	"github.com/syncthing/syncthing/logger"
	"github.com/syncthing/syncthing/osutil"
	"github.com/syncthing/syncthing/protocol"
)

var l = logger.DefaultLogger

type Configuration struct {
	Location     string                    `xml:"-" json:"-"`
	Version      int                       `xml:"version,attr" default:"3"`
	Repositories []RepositoryConfiguration `xml:"repository"`
	Nodes        []NodeConfiguration       `xml:"node"`
@@ -31,13 +33,14 @@ type Configuration struct {
}

type RepositoryConfiguration struct {
	ID          string                  `xml:"id,attr"`
	Directory   string                  `xml:"directory,attr"`
	Nodes       []NodeConfiguration     `xml:"node"`
	ReadOnly    bool                    `xml:"ro,attr"`
	IgnorePerms bool                    `xml:"ignorePerms,attr"`
	Invalid     string                  `xml:"-"` // Set at runtime when there is an error, not saved
	Versioning  VersioningConfiguration `xml:"versioning"`
	ID              string                        `xml:"id,attr"`
	Directory       string                        `xml:"directory,attr"`
	Nodes           []RepositoryNodeConfiguration `xml:"node"`
	ReadOnly        bool                          `xml:"ro,attr"`
	RescanIntervalS int                           `xml:"rescanIntervalS,attr" default:"60"`
	IgnorePerms     bool                          `xml:"ignorePerms,attr"`
	Invalid         string                        `xml:"-"` // Set at runtime when there is an error, not saved
	Versioning      VersioningConfiguration       `xml:"versioning"`

	nodeIDs []protocol.NodeID
}
@@ -100,27 +103,37 @@ type NodeConfiguration struct {
	CertName string `xml:"certName,attr,omitempty"`
}

type RepositoryNodeConfiguration struct {
	NodeID protocol.NodeID `xml:"id,attr"`

	Deprecated_Name      string   `xml:"name,attr,omitempty" json:"-"`
	Deprecated_Addresses []string `xml:"address,omitempty" json:"-"`
}

type OptionsConfiguration struct {
	ListenAddress      []string `xml:"listenAddress" default:"0.0.0.0:22000"`
	GlobalAnnServer    string   `xml:"globalAnnounceServer" default:"announce.syncthing.net:22026"`
	GlobalAnnEnabled   bool     `xml:"globalAnnounceEnabled" default:"true"`
	LocalAnnEnabled    bool     `xml:"localAnnounceEnabled" default:"true"`
	LocalAnnPort       int      `xml:"localAnnouncePort" default:"21025"`
	LocalAnnMCAddr     string   `xml:"localAnnounceMCAddr" default:"[ff32::5222]:21026"`
	ParallelRequests   int      `xml:"parallelRequests" default:"16"`
	MaxSendKbps        int      `xml:"maxSendKbps"`
	RescanIntervalS    int      `xml:"rescanIntervalS" default:"60"`
	MaxRecvKbps        int      `xml:"maxRecvKbps"`
	ReconnectIntervalS int      `xml:"reconnectionIntervalS" default:"60"`
	StartBrowser       bool     `xml:"startBrowser" default:"true"`
	UPnPEnabled        bool     `xml:"upnpEnabled" default:"true"`
	UPnPLease          int      `xml:"upnpLeaseMinutes" default:"0"`
	UPnPRenewal        int      `xml:"upnpRenewalMinutes" default:"30"`
	URAccepted         int      `xml:"urAccepted"` // Accepted usage reporting version; 0 for off (undecided), -1 for off (permanently)
	RestartOnWakeup    bool     `xml:"restartOnWakeup" default:"true"`

	Deprecated_UREnabled       bool   `xml:"urEnabled,omitempty" json:"-"`
	Deprecated_URDeclined      bool   `xml:"urDeclined,omitempty" json:"-"`
	Deprecated_ReadOnly        bool   `xml:"readOnly,omitempty" json:"-"`
	Deprecated_GUIEnabled      bool   `xml:"guiEnabled,omitempty" json:"-"`
	Deprecated_GUIAddress      string `xml:"guiAddress,omitempty" json:"-"`
	Deprecated_RescanIntervalS int    `xml:"rescanIntervalS,omitempty" json:"-"`
	Deprecated_UREnabled  bool   `xml:"urEnabled,omitempty" json:"-"`
	Deprecated_URDeclined bool   `xml:"urDeclined,omitempty" json:"-"`
	Deprecated_ReadOnly   bool   `xml:"readOnly,omitempty" json:"-"`
	Deprecated_GUIEnabled bool   `xml:"guiEnabled,omitempty" json:"-"`
	Deprecated_GUIAddress string `xml:"guiAddress,omitempty" json:"-"`
}

type GUIConfiguration struct {
@@ -218,14 +231,39 @@ func fillNilSlices(data interface{}) error {
	return nil
}

func Save(wr io.Writer, cfg Configuration) error {
	e := xml.NewEncoder(wr)
	e.Indent("", " ")
	err := e.Encode(cfg)
func (cfg *Configuration) Save() error {
	fd, err := os.Create(cfg.Location + ".tmp")
	if err != nil {
		l.Warnln("Saving config:", err)
		return err
	}
	_, err = wr.Write([]byte("\n"))

	e := xml.NewEncoder(fd)
	e.Indent("", " ")
	err = e.Encode(cfg)
	if err != nil {
		fd.Close()
		return err
	}
	_, err = fd.Write([]byte("\n"))

	if err != nil {
		l.Warnln("Saving config:", err)
		fd.Close()
		return err
	}

	err = fd.Close()
	if err != nil {
		l.Warnln("Saving config:", err)
		return err
	}

	err = osutil.Rename(cfg.Location+".tmp", cfg.Location)
	if err != nil {
		l.Warnln("Saving config:", err)
	}
	events.Default.Log(events.ConfigSaved, cfg)
	return err
}
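The rewritten Save follows the atomic-replace pattern: write to a .tmp sibling, flush and close it, then rename it over the real file, so a crash can never leave a half-written config behind. The skeleton of that pattern with the XML specifics stripped out (using os.Rename directly where the real code uses the osutil wrapper):

package main

import "os"

// writeAtomic replaces path with data, never leaving a torn file behind.
func writeAtomic(path string, data []byte) error {
	tmp := path + ".tmp"
	fd, err := os.Create(tmp)
	if err != nil {
		return err
	}
	if _, err := fd.Write(data); err != nil {
		fd.Close()
		return err
	}
	if err := fd.Close(); err != nil {
		return err
	}
	// The rename is the atomic step (on POSIX; osutil.Rename papers
	// over the Windows differences in the real code).
	return os.Rename(tmp, path)
}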
@@ -243,18 +281,7 @@ func uniqueStrings(ss []string) []string {
	return us
}

func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
	var cfg Configuration

	setDefaults(&cfg)
	setDefaults(&cfg.Options)
	setDefaults(&cfg.GUI)

	var err error
	if rd != nil {
		err = xml.NewDecoder(rd).Decode(&cfg)
	}

func (cfg *Configuration) prepare(myID protocol.NodeID) {
	fillNilSlices(&cfg.Options)

	cfg.Options.ListenAddress = uniqueStrings(cfg.Options.ListenAddress)
@@ -303,19 +330,24 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {

	// Upgrade to v2 configuration if appropriate
	if cfg.Version == 1 {
		convertV1V2(&cfg)
		convertV1V2(cfg)
	}

	// Upgrade to v3 configuration if appropriate
	if cfg.Version == 2 {
		convertV2V3(&cfg)
		convertV2V3(cfg)
	}

	// Upgrade to v4 configuration if appropriate
	if cfg.Version == 3 {
		convertV3V4(cfg)
	}

	// Hash old cleartext passwords
	if len(cfg.GUI.Password) > 0 && cfg.GUI.Password[0] != '$' {
		hash, err := bcrypt.GenerateFromPassword([]byte(cfg.GUI.Password), 0)
		if err != nil {
			l.Warnln(err)
			l.Warnln("bcrypting password:", err)
		} else {
			cfg.GUI.Password = string(hash)
		}
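The cleartext upgrade above leans on the bcrypt convention that hashes begin with '$': any stored password that does not is treated as cleartext and hashed in place. A round-trip sketch of the two bcrypt calls involved (cost 0 selects the library default, as in the patch):

package main

import (
	"fmt"

	"code.google.com/p/go.crypto/bcrypt"
)

func main() {
	hash, err := bcrypt.GenerateFromPassword([]byte("secret"), 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(hash[0])) // "$", which is how prepare() spots a hash

	// CompareHashAndPassword returns nil on a match, as the GUI
	// authentication middleware expects.
	err = bcrypt.CompareHashAndPassword(hash, []byte("secret"))
	fmt.Println(err == nil) // true
}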
@@ -329,15 +361,22 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
	}

	// Ensure this node is present in all relevant places
	me := cfg.GetNodeConfiguration(myID)
	if me == nil {
		myName, _ := os.Hostname()
		cfg.Nodes = append(cfg.Nodes, NodeConfiguration{
			NodeID: myID,
			Name:   myName,
		})
	}
	sort.Sort(NodeConfigurationList(cfg.Nodes))
	// Ensure that any loose nodes are not present in the wrong places
	// Ensure that there are no duplicate nodes
	cfg.Nodes = ensureNodePresent(cfg.Nodes, myID)
	sort.Sort(NodeConfigurationList(cfg.Nodes))
	for i := range cfg.Repositories {
		cfg.Repositories[i].Nodes = ensureNodePresent(cfg.Repositories[i].Nodes, myID)
		cfg.Repositories[i].Nodes = ensureExistingNodes(cfg.Repositories[i].Nodes, existingNodes)
		cfg.Repositories[i].Nodes = ensureNoDuplicates(cfg.Repositories[i].Nodes)
		sort.Sort(NodeConfigurationList(cfg.Repositories[i].Nodes))
		sort.Sort(RepositoryNodeConfigurationList(cfg.Repositories[i].Nodes))
	}

	// An empty address list is equivalent to a single "dynamic" entry
@@ -347,10 +386,68 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
			n.Addresses = []string{"dynamic"}
		}
	}
}

func New(location string, myID protocol.NodeID) Configuration {
	var cfg Configuration

	cfg.Location = location

	setDefaults(&cfg)
	setDefaults(&cfg.Options)
	setDefaults(&cfg.GUI)

	cfg.prepare(myID)

	return cfg
}

func Load(location string, myID protocol.NodeID) (Configuration, error) {
	var cfg Configuration

	cfg.Location = location

	setDefaults(&cfg)
	setDefaults(&cfg.Options)
	setDefaults(&cfg.GUI)

	fd, err := os.Open(location)
	if err != nil {
		return Configuration{}, err
	}
	err = xml.NewDecoder(fd).Decode(&cfg)
	fd.Close()

	cfg.prepare(myID)

	return cfg, err
}

func convertV3V4(cfg *Configuration) {
	// In previous versions, the rescan interval was common to all repositories.
	// From now on it can be set independently, so we have to make sure that after
	// the upgrade an individual rescan interval is defined for every existing repository.
	for i := range cfg.Repositories {
		cfg.Repositories[i].RescanIntervalS = cfg.Options.Deprecated_RescanIntervalS
	}

	cfg.Options.Deprecated_RescanIntervalS = 0

	// In previous versions, repositories held full node configurations.
	// Since that's the only place where node configs were in V1, we still have
	// to define the deprecated fields to be able to upgrade from V1 to V4.
	for i, repo := range cfg.Repositories {
		for j := range repo.Nodes {
			// Clear the deprecated fields in place. (Assigning through a
			// local copy here would be a no-op.)
			cfg.Repositories[i].Nodes[j].Deprecated_Name = ""
			cfg.Repositories[i].Nodes[j].Deprecated_Addresses = nil
		}
	}

	cfg.Version = 4
}
func convertV2V3(cfg *Configuration) {
	// In previous versions, compression was always on. When upgrading, enable
	// compression on all existing nodes. New nodes will get compression on by
@@ -372,7 +469,7 @@ func convertV1V2(cfg *Configuration) {
	// Collect the list of nodes.
	// Replace node configs inside repositories with only a reference to the node ID.
	// Set all repositories to read only if the global read only flag is set.
	var nodes = map[string]NodeConfiguration{}
	var nodes = map[string]RepositoryNodeConfiguration{}
	for i, repo := range cfg.Repositories {
		cfg.Repositories[i].ReadOnly = cfg.Options.Deprecated_ReadOnly
		for j, node := range repo.Nodes {
@@ -380,14 +477,18 @@ func convertV1V2(cfg *Configuration) {
			if _, ok := nodes[id]; !ok {
				nodes[id] = node
			}
			cfg.Repositories[i].Nodes[j] = NodeConfiguration{NodeID: node.NodeID}
			cfg.Repositories[i].Nodes[j] = RepositoryNodeConfiguration{NodeID: node.NodeID}
		}
	}
	cfg.Options.Deprecated_ReadOnly = false

	// Set and sort the list of nodes.
	for _, node := range nodes {
		cfg.Nodes = append(cfg.Nodes, node)
		cfg.Nodes = append(cfg.Nodes, NodeConfiguration{
			NodeID:    node.NodeID,
			Name:      node.Deprecated_Name,
			Addresses: node.Deprecated_Addresses,
		})
	}
	sort.Sort(NodeConfigurationList(cfg.Nodes))

@@ -412,23 +513,33 @@ func (l NodeConfigurationList) Len() int {
	return len(l)
}

func ensureNodePresent(nodes []NodeConfiguration, myID protocol.NodeID) []NodeConfiguration {
type RepositoryNodeConfigurationList []RepositoryNodeConfiguration

func (l RepositoryNodeConfigurationList) Less(a, b int) bool {
	return l[a].NodeID.Compare(l[b].NodeID) == -1
}
func (l RepositoryNodeConfigurationList) Swap(a, b int) {
	l[a], l[b] = l[b], l[a]
}
func (l RepositoryNodeConfigurationList) Len() int {
	return len(l)
}

func ensureNodePresent(nodes []RepositoryNodeConfiguration, myID protocol.NodeID) []RepositoryNodeConfiguration {
	for _, node := range nodes {
		if node.NodeID.Equals(myID) {
			return nodes
		}
	}

	name, _ := os.Hostname()
	nodes = append(nodes, NodeConfiguration{
	nodes = append(nodes, RepositoryNodeConfiguration{
		NodeID: myID,
		Name:   name,
	})

	return nodes
}

func ensureExistingNodes(nodes []NodeConfiguration, existingNodes map[protocol.NodeID]bool) []NodeConfiguration {
func ensureExistingNodes(nodes []RepositoryNodeConfiguration, existingNodes map[protocol.NodeID]bool) []RepositoryNodeConfiguration {
	count := len(nodes)
	i := 0
loop:
@@ -443,7 +554,7 @@ loop:
	return nodes[0:count]
}

func ensureNoDuplicates(nodes []NodeConfiguration) []NodeConfiguration {
func ensureNoDuplicates(nodes []RepositoryNodeConfiguration) []RepositoryNodeConfiguration {
	count := len(nodes)
	i := 0
	seenNodes := make(map[protocol.NodeID]bool)
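ensureExistingNodes and ensureNoDuplicates both filter a slice in place: an element to be dropped is overwritten with the last live element and the live count shrinks, so no second slice is allocated. The same idiom on a plain int slice, as a standalone sketch:

package main

import "fmt"

func main() {
	nodes := []int{1, 2, 2, 3, 1}
	count := len(nodes)
	seen := make(map[int]bool)

	i := 0
	for i < count {
		if seen[nodes[i]] {
			// Drop a duplicate: swap the last live element into this
			// slot and shrink the live region. Do not advance i, since
			// the swapped-in element still needs checking.
			nodes[i] = nodes[count-1]
			count--
			continue
		}
		seen[nodes[i]] = true
		i++
	}
	fmt.Println(nodes[:count]) // [1 2 3] (survivor order may vary)
}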
@@ -5,8 +5,6 @@
package config

import (
	"bytes"
	"io"
	"os"
	"reflect"
	"testing"
@@ -30,20 +28,19 @@ func TestDefaultValues(t *testing.T) {
		GlobalAnnEnabled:   true,
		LocalAnnEnabled:    true,
		LocalAnnPort:       21025,
		LocalAnnMCAddr:     "[ff32::5222]:21026",
		ParallelRequests:   16,
		MaxSendKbps:        0,
		RescanIntervalS:    60,
		MaxRecvKbps:        0,
		ReconnectIntervalS: 60,
		StartBrowser:       true,
		UPnPEnabled:        true,
		UPnPLease:          0,
		UPnPRenewal:        30,
		RestartOnWakeup:    true,
	}

	cfg, err := Load(bytes.NewReader(nil), node1)
	if err != io.EOF {
		t.Error(err)
	}
	cfg := New("test", node1)

	if !reflect.DeepEqual(cfg.Options, expected) {
		t.Errorf("Default config differs;\n E: %#v\n A: %#v", expected, cfg.Options)
@@ -51,73 +48,19 @@ func TestDefaultValues(t *testing.T) {
}

func TestNodeConfig(t *testing.T) {
	v1data := []byte(`
<configuration version="1">
    <repository id="test" directory="~/Sync">
        <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
            <address>a</address>
        </node>
        <node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
            <address>b</address>
        </node>
        <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
            <address>a</address>
        </node>
        <node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
            <address>b</address>
        </node>
    </repository>
    <options>
        <readOnly>true</readOnly>
    </options>
</configuration>
`)

	v2data := []byte(`
<configuration version="2">
    <repository id="test" directory="~/Sync" ro="true">
        <node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ"/>
        <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ"/>
        <node id="C4YBIESWDUAIGU62GOSRXCRAAJDWVE3TKCPMURZE2LH5QHAF576A"/>
        <node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ"/>
        <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ"/>
        <node id="C4YBIESWDUAIGU62GOSRXCRAAJDWVE3TKCPMURZE2LH5QHAF576A"/>
    </repository>
    <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
        <address>a</address>
    </node>
    <node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
        <address>b</address>
    </node>
</configuration>
`)

	v3data := []byte(`
<configuration version="3">
    <repository id="test" directory="~/Sync" ro="true" ignorePerms="false">
        <node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" compression="false"></node>
        <node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" compression="false"></node>
    </repository>
    <node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" name="node one" compression="true">
        <address>a</address>
    </node>
    <node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="true">
        <address>b</address>
    </node>
</configuration>`)

	for i, data := range [][]byte{v1data, v2data, v3data} {
		cfg, err := Load(bytes.NewReader(data), node1)
	for i, ver := range []string{"v1", "v2", "v3", "v4"} {
		cfg, err := Load("testdata/"+ver+".xml", node1)
		if err != nil {
			t.Error(err)
		}

		expectedRepos := []RepositoryConfiguration{
			{
				ID:        "test",
				Directory: "~/Sync",
				Nodes:     []NodeConfiguration{{NodeID: node1}, {NodeID: node4}},
				ReadOnly:  true,
				ID:              "test",
				Directory:       "~/Sync",
				Nodes:           []RepositoryNodeConfiguration{{NodeID: node1}, {NodeID: node4}},
				ReadOnly:        true,
				RescanIntervalS: 600,
			},
		}
		expectedNodes := []NodeConfiguration{
@@ -136,7 +79,7 @@ func TestNodeConfig(t *testing.T) {
		}
		expectedNodeIDs := []protocol.NodeID{node1, node4}

		if cfg.Version != 3 {
		if cfg.Version != 4 {
			t.Errorf("%d: Incorrect version %d != 4", i, cfg.Version)
		}
		if !reflect.DeepEqual(cfg.Repositories, expectedRepos) {
@@ -159,14 +102,7 @@ func TestNodeConfig(t *testing.T) {
}

func TestNoListenAddress(t *testing.T) {
	data := []byte(`<configuration version="1">
    <options>
        <listenAddress></listenAddress>
    </options>
</configuration>
`)

	cfg, err := Load(bytes.NewReader(data), node1)
	cfg, err := Load("testdata/nolistenaddress.xml", node1)
	if err != nil {
		t.Error(err)
	}
@@ -178,43 +114,25 @@ func TestNoListenAddress(t *testing.T) {
}

func TestOverriddenValues(t *testing.T) {
	data := []byte(`<configuration version="2">
    <options>
        <listenAddress>:23000</listenAddress>
        <allowDelete>false</allowDelete>
        <globalAnnounceServer>syncthing.nym.se:22026</globalAnnounceServer>
        <globalAnnounceEnabled>false</globalAnnounceEnabled>
        <localAnnounceEnabled>false</localAnnounceEnabled>
        <localAnnouncePort>42123</localAnnouncePort>
        <parallelRequests>32</parallelRequests>
        <maxSendKbps>1234</maxSendKbps>
        <rescanIntervalS>600</rescanIntervalS>
        <reconnectionIntervalS>6000</reconnectionIntervalS>
        <startBrowser>false</startBrowser>
        <upnpEnabled>false</upnpEnabled>
        <upnpLeaseMinutes>60</upnpLeaseMinutes>
        <upnpRenewalMinutes>15</upnpRenewalMinutes>
    </options>
</configuration>
`)

	expected := OptionsConfiguration{
		ListenAddress:      []string{":23000"},
		GlobalAnnServer:    "syncthing.nym.se:22026",
		GlobalAnnEnabled:   false,
		LocalAnnEnabled:    false,
		LocalAnnPort:       42123,
		LocalAnnMCAddr:     "quux:3232",
		ParallelRequests:   32,
		MaxSendKbps:        1234,
		RescanIntervalS:    600,
		MaxRecvKbps:        2341,
		ReconnectIntervalS: 6000,
		StartBrowser:       false,
		UPnPEnabled:        false,
		UPnPLease:          60,
		UPnPRenewal:        15,
		RestartOnWakeup:    false,
	}

	cfg, err := Load(bytes.NewReader(data), node1)
	cfg, err := Load("testdata/overridenvalues.xml", node1)
	if err != nil {
		t.Error(err)
	}
@@ -225,19 +143,6 @@ func TestOverriddenValues(t *testing.T) {
}

func TestNodeAddressesDynamic(t *testing.T) {
	data := []byte(`
<configuration version="2">
    <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ">
        <address></address>
    </node>
    <node id="GYRZZQBIRNPV4T7TC52WEQYJ3TFDQW6MWDFLMU4SSSU6EMFBK2VA">
    </node>
    <node id="LGFPDIT7SKNNJVJZA4FC7QNCRKCE753K72BW5QD2FOZ7FRFEP57Q">
        <address>dynamic</address>
    </node>
</configuration>
`)

	name, _ := os.Hostname()
	expected := []NodeConfiguration{
		{
@@ -262,7 +167,7 @@ func TestNodeAddressesDynamic(t *testing.T) {
		},
	}

	cfg, err := Load(bytes.NewReader(data), node4)
	cfg, err := Load("testdata/nodeaddressesdynamic.xml", node4)
	if err != nil {
		t.Error(err)
	}
@@ -273,23 +178,6 @@ func TestNodeAddressesDynamic(t *testing.T) {
}

func TestNodeAddressesStatic(t *testing.T) {
	data := []byte(`
<configuration version="3">
    <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ">
        <address>192.0.2.1</address>
        <address>192.0.2.2</address>
    </node>
    <node id="GYRZZQBIRNPV4T7TC52WEQYJ3TFDQW6MWDFLMU4SSSU6EMFBK2VA">
        <address>192.0.2.3:6070</address>
        <address>[2001:db8::42]:4242</address>
    </node>
    <node id="LGFPDIT7SKNNJVJZA4FC7QNCRKCE753K72BW5QD2FOZ7FRFEP57Q">
        <address>[2001:db8::44]:4444</address>
        <address>192.0.2.4:6090</address>
    </node>
</configuration>
`)

	name, _ := os.Hostname()
	expected := []NodeConfiguration{
		{
@@ -311,7 +199,7 @@ func TestNodeAddressesStatic(t *testing.T) {
		},
	}

	cfg, err := Load(bytes.NewReader(data), node4)
	cfg, err := Load("testdata/nodeaddressesstatic.xml", node4)
	if err != nil {
		t.Error(err)
	}
@@ -322,18 +210,7 @@ func TestNodeAddressesStatic(t *testing.T) {
}

func TestVersioningConfig(t *testing.T) {
	data := []byte(`
<configuration version="2">
    <repository id="test" directory="~/Sync" ro="true">
        <versioning type="simple">
            <param key="foo" val="bar"/>
            <param key="baz" val="quux"/>
        </versioning>
    </repository>
</configuration>
`)

	cfg, err := Load(bytes.NewReader(data), node4)
	cfg, err := Load("testdata/versioningconfig.xml", node4)
	if err != nil {
		t.Error(err)
	}
@@ -354,3 +231,67 @@ func TestVersioningConfig(t *testing.T) {
		t.Errorf("vc.Params differ;\n E: %#v\n A: %#v", expected, vc.Params)
	}
}

func TestNewSaveLoad(t *testing.T) {
	path := "testdata/temp.xml"
	os.Remove(path)

	exists := func(path string) bool {
		_, err := os.Stat(path)
		return err == nil
	}

	cfg := New(path, node1)

	// To make the equality pass later
	cfg.XMLName.Local = "configuration"

	if exists(path) {
		t.Error(path, "exists")
	}

	err := cfg.Save()
	if err != nil {
		t.Error(err)
	}
	if !exists(path) {
		t.Error(path, "does not exist")
	}

	cfg2, err := Load(path, node1)
	if err != nil {
		t.Error(err)
	}

	if !reflect.DeepEqual(cfg, cfg2) {
		t.Errorf("Configs are not equal;\n E: %#v\n A: %#v", cfg, cfg2)
	}

	cfg.GUI.User = "test"
	cfg.Save()

	cfg2, err = Load(path, node1)
	if err != nil {
		t.Error(err)
	}

	if cfg2.GUI.User != "test" || !reflect.DeepEqual(cfg, cfg2) {
		t.Errorf("Configs are not equal;\n E: %#v\n A: %#v", cfg, cfg2)
	}

	os.Remove(path)
}

func TestPrepare(t *testing.T) {
	var cfg Configuration

	if cfg.Repositories != nil || cfg.Nodes != nil || cfg.Options.ListenAddress != nil {
		t.Error("Expected nil")
	}

	cfg.prepare(node1)

	if cfg.Repositories == nil || cfg.Nodes == nil || cfg.Options.ListenAddress == nil {
		t.Error("Unexpected nil")
	}
}
10  config/testdata/nodeaddressesdynamic.xml  vendored  Executable file
@@ -0,0 +1,10 @@
<configuration version="2">
    <node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ">
        <address></address>
    </node>
    <node id="GYRZZQBIRNPV4T7TC52WEQYJ3TFDQW6MWDFLMU4SSSU6EMFBK2VA">
    </node>
    <node id="LGFPDIT7SKNNJVJZA4FC7QNCRKCE753K72BW5QD2FOZ7FRFEP57Q">
        <address>dynamic</address>
    </node>
</configuration>
14
config/testdata/nodeaddressesstatic.xml
vendored
Executable file
14
config/testdata/nodeaddressesstatic.xml
vendored
Executable file
@@ -0,0 +1,14 @@
|
||||
<configuration version="3">
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ">
|
||||
<address>192.0.2.1</address>
|
||||
<address>192.0.2.2</address>
|
||||
</node>
|
||||
<node id="GYRZZQBIRNPV4T7TC52WEQYJ3TFDQW6MWDFLMU4SSSU6EMFBK2VA">
|
||||
<address>192.0.2.3:6070</address>
|
||||
<address>[2001:db8::42]:4242</address>
|
||||
</node>
|
||||
<node id="LGFPDIT7SKNNJVJZA4FC7QNCRKCE753K72BW5QD2FOZ7FRFEP57Q">
|
||||
<address>[2001:db8::44]:4444</address>
|
||||
<address>192.0.2.4:6090</address>
|
||||
</node>
|
||||
</configuration>
|
||||
5
config/testdata/nolistenaddress.xml
vendored
Executable file
5
config/testdata/nolistenaddress.xml
vendored
Executable file
@@ -0,0 +1,5 @@
|
||||
<configuration version="1">
|
||||
<options>
|
||||
<listenAddress></listenAddress>
|
||||
</options>
|
||||
</configuration>
|
||||
20
config/testdata/overridenvalues.xml
vendored
Executable file
20
config/testdata/overridenvalues.xml
vendored
Executable file
@@ -0,0 +1,20 @@
|
||||
<configuration version="2">
|
||||
<options>
|
||||
<listenAddress>:23000</listenAddress>
|
||||
<allowDelete>false</allowDelete>
|
||||
<globalAnnounceServer>syncthing.nym.se:22026</globalAnnounceServer>
|
||||
<globalAnnounceEnabled>false</globalAnnounceEnabled>
|
||||
<localAnnounceEnabled>false</localAnnounceEnabled>
|
||||
<localAnnouncePort>42123</localAnnouncePort>
|
||||
<localAnnounceMCAddr>quux:3232</localAnnounceMCAddr>
|
||||
<parallelRequests>32</parallelRequests>
|
||||
<maxSendKbps>1234</maxSendKbps>
|
||||
<maxRecvKbps>2341</maxRecvKbps>
|
||||
<reconnectionIntervalS>6000</reconnectionIntervalS>
|
||||
<startBrowser>false</startBrowser>
|
||||
<upnpEnabled>false</upnpEnabled>
|
||||
<upnpLeaseMinutes>60</upnpLeaseMinutes>
|
||||
<upnpRenewalMinutes>15</upnpRenewalMinutes>
|
||||
<restartOnWakeup>false</restartOnWakeup>
|
||||
</options>
|
||||
</configuration>
|
||||
20
config/testdata/v1.xml
vendored
Executable file
20
config/testdata/v1.xml
vendored
Executable file
@@ -0,0 +1,20 @@
|
||||
<configuration version="1">
|
||||
<repository id="test" directory="~/Sync">
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
|
||||
<address>a</address>
|
||||
</node>
|
||||
<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
|
||||
<address>b</address>
|
||||
</node>
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
|
||||
<address>a</address>
|
||||
</node>
|
||||
<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
|
||||
<address>b</address>
|
||||
</node>
|
||||
</repository>
|
||||
<options>
|
||||
<readOnly>true</readOnly>
|
||||
<rescanIntervalS>600</rescanIntervalS>
|
||||
</options>
|
||||
</configuration>
|
||||
19
config/testdata/v2.xml
vendored
Executable file
19
config/testdata/v2.xml
vendored
Executable file
@@ -0,0 +1,19 @@
|
||||
<configuration version="2">
|
||||
<repository id="test" directory="~/Sync" ro="true">
|
||||
<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ"/>
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ"/>
|
||||
<node id="C4YBIESWDUAIGU62GOSRXCRAAJDWVE3TKCPMURZE2LH5QHAF576A"/>
|
||||
<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ"/>
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ"/>
|
||||
<node id="C4YBIESWDUAIGU62GOSRXCRAAJDWVE3TKCPMURZE2LH5QHAF576A"/>
|
||||
</repository>
|
||||
<node id="AIR6LPZ7K4PTTUXQSMUUCPQ5YWOEDFIIQJUG7772YQXXR5YD6AWQ" name="node one">
|
||||
<address>a</address>
|
||||
</node>
|
||||
<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
|
||||
<address>b</address>
|
||||
</node>
|
||||
<options>
|
||||
<rescanIntervalS>600</rescanIntervalS>
|
||||
</options>
|
||||
</configuration>
|
||||
15
config/testdata/v3.xml
vendored
Executable file
15
config/testdata/v3.xml
vendored
Executable file
@@ -0,0 +1,15 @@
|
||||
<configuration version="3">
|
||||
<repository id="test" directory="~/Sync" ro="true" ignorePerms="false">
|
||||
<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" compression="false"></node>
|
||||
<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" compression="false"></node>
|
||||
</repository>
|
||||
<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" name="node one" compression="true">
|
||||
<address>a</address>
|
||||
</node>
|
||||
<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="true">
|
||||
<address>b</address>
|
||||
</node>
|
||||
<options>
|
||||
<rescanIntervalS>600</rescanIntervalS>
|
||||
</options>
|
||||
</configuration>
|
||||
12
config/testdata/v4.xml
vendored
Executable file
12
config/testdata/v4.xml
vendored
Executable file
@@ -0,0 +1,12 @@
|
||||
<configuration version="4">
|
||||
<repository id="test" directory="~/Sync" ro="true" ignorePerms="false" rescanIntervalS="600">
|
||||
<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR"></node>
|
||||
<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2"></node>
|
||||
</repository>
|
||||
<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" name="node one" compression="true">
|
||||
<address>a</address>
|
||||
</node>
|
||||
<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="true">
|
||||
<address>b</address>
|
||||
</node>
|
||||
</configuration>
|
||||
8
config/testdata/versioningconfig.xml
vendored
Executable file
8
config/testdata/versioningconfig.xml
vendored
Executable file
@@ -0,0 +1,8 @@
|
||||
<configuration version="2">
|
||||
<repository id="test" directory="~/Sync" ro="true">
|
||||
<versioning type="simple">
|
||||
<param key="foo" val="bar"/>
|
||||
<param key="baz" val="quux"/>
|
||||
</versioning>
|
||||
</repository>
|
||||
</configuration>
|
||||
@@ -36,7 +36,7 @@ The Announcement packet has the following structure:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Magic (0x029E4C77)                      |
|                       Magic (0x9D79BC39)                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/                                                               /
\                         Node Structure                        \
@@ -121,7 +121,7 @@ The Query packet has the following structure:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                   Magic Number (0x23D63A9A)                   |
|                   Magic Number (0x2CA856F5)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Length of Node ID                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
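Both hunks bump a protocol magic number, so that v2 nodes silently ignore announcements and queries from older, incompatible nodes. A minimal receiver-side sketch of that guard; the constant and helper names are illustrative, not the project's actual decoder, and XDR encodes the field big-endian:

    package main

    import (
        "encoding/binary"
        "errors"
        "fmt"
    )

    const announcementMagic = 0x9D79BC39 // new v2 value from the hunk above

    var errIncorrectMagic = errors.New("incorrect magic number")

    func checkMagic(pkt []byte) error {
        if len(pkt) < 4 || binary.BigEndian.Uint32(pkt) != announcementMagic {
            return errIncorrectMagic
        }
        return nil
    }

    func main() {
        old := []byte{0x02, 0x9E, 0x4C, 0x77} // pre-v2 magic is rejected
        fmt.Println(checkMagic(old))          // incorrect magic number
    }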
@@ -8,9 +8,9 @@ import (
    "bytes"
    "encoding/hex"
    "errors"
    "fmt"
    "io"
    "net"
    "strconv"
    "sync"
    "time"

@@ -26,7 +26,8 @@ type Discoverer struct {
    globalBcastIntv time.Duration
    errorRetryIntv  time.Duration
    cacheLifetime   time.Duration
    beacon          *beacon.Beacon
    broadcastBeacon beacon.Interface
    multicastBeacon beacon.Interface
    registry        map[protocol.NodeID][]cacheEntry
    registryLock    sync.RWMutex
    extServer       string
@@ -37,7 +38,6 @@ type Discoverer struct {
    forcedBcastTick  chan time.Time
    extAnnounceOK    bool
    extAnnounceOKmut sync.Mutex
    globalBcastStop  chan bool
}

type cacheEntry struct {
@@ -49,36 +49,52 @@ var (
    ErrIncorrectMagic = errors.New("incorrect magic number")
)

// We tolerate a certain amount of errors because we might be running on
// laptops that sleep and wake, have intermittent network connectivity, etc.
// When we hit this many errors in succession, we stop.
const maxErrors = 30

func NewDiscoverer(id protocol.NodeID, addresses []string, localPort int) (*Discoverer, error) {
    b, err := beacon.New(localPort)
    if err != nil {
        return nil, err
    }
    disc := &Discoverer{
func NewDiscoverer(id protocol.NodeID, addresses []string) *Discoverer {
    return &Discoverer{
        myID:            id,
        listenAddrs:     addresses,
        localBcastIntv:  30 * time.Second,
        globalBcastIntv: 1800 * time.Second,
        errorRetryIntv:  60 * time.Second,
        cacheLifetime:   5 * time.Minute,
        beacon:          b,
        registry:        make(map[protocol.NodeID][]cacheEntry),
    }

    go disc.recvAnnouncements()

    return disc, nil
}

func (d *Discoverer) StartLocal() {
    d.localBcastTick = time.Tick(d.localBcastIntv)
    d.forcedBcastTick = make(chan time.Time)
    go d.sendLocalAnnouncements()
func (d *Discoverer) StartLocal(localPort int, localMCAddr string) {
    if localPort > 0 {
        bb, err := beacon.NewBroadcast(localPort)
        if err != nil {
            if debug {
                l.Debugln(err)
            }
            l.Infoln("Local discovery over IPv4 unavailable")
        } else {
            d.broadcastBeacon = bb
            go d.recvAnnouncements(bb)
        }
    }

    if len(localMCAddr) > 0 {
        mb, err := beacon.NewMulticast(localMCAddr)
        if err != nil {
            if debug {
                l.Debugln(err)
            }
            l.Infoln("Local discovery over IPv6 unavailable")
        } else {
            d.multicastBeacon = mb
            go d.recvAnnouncements(mb)
        }
    }

    if d.broadcastBeacon == nil && d.multicastBeacon == nil {
        l.Warnln("Local discovery unavailable")
    } else {
        d.localBcastTick = time.Tick(d.localBcastIntv)
        d.forcedBcastTick = make(chan time.Time)
        go d.sendLocalAnnouncements()
    }
}
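With the constructor no longer opening any sockets, callers now opt in to each transport separately, and either beacon may fail without disabling the other. A sketch of the new startup sequence using the signatures above; the port, multicast group, and server address are example values, not defaults taken from this diff:

    disc := discover.NewDiscoverer(myID, listenAddrs) // no error: nothing is opened yet
    disc.StartLocal(21025, "[ff32::5222]:21026")      // IPv4 broadcast and IPv6 multicast, independently
    disc.StartGlobal("announce.example.com:22026", extPort)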

func (d *Discoverer) StartGlobal(server string, extPort uint16) {
@@ -92,8 +108,10 @@ func (d *Discoverer) StartGlobal(server string, extPort uint16) {
}

func (d *Discoverer) StopGlobal() {
    close(d.stopGlobal)
    d.globalWG.Wait()
    if d.stopGlobal != nil {
        close(d.stopGlobal)
        d.globalWG.Wait()
    }
}

func (d *Discoverer) ExtAnnounceOK() bool {
@@ -187,7 +205,12 @@ func (d *Discoverer) sendLocalAnnouncements() {
    msg := pkt.MarshalXDR()

    for {
        d.beacon.Send(msg)
        if d.multicastBeacon != nil {
            d.multicastBeacon.Send(msg)
        }
        if d.broadcastBeacon != nil {
            d.broadcastBeacon.Send(msg)
        }

        select {
        case <-d.localBcastTick:
@@ -284,9 +307,9 @@ loop:
    }
}

func (d *Discoverer) recvAnnouncements() {
func (d *Discoverer) recvAnnouncements(b beacon.Interface) {
    for {
        buf, addr := d.beacon.Recv()
        buf, addr := b.Recv()

        if debug {
            l.Debugf("discover: read announcement from %s:\n%s", addr, hex.Dump(buf))
@@ -324,7 +347,7 @@ func (d *Discoverer) registerNode(addr net.Addr, node Node) bool {
    for _, a := range node.Addresses {
        var nodeAddr string
        if len(a.IP) > 0 {
            nodeAddr = fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
            nodeAddr = net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
        } else if addr != nil {
            ua := addr.(*net.UDPAddr)
            ua.Port = int(a.Port)
@@ -428,7 +451,7 @@ func (d *Discoverer) externalLookup(node protocol.NodeID) []string {

    var addrs []string
    for _, a := range pkt.This.Addresses {
        nodeAddr := fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
        nodeAddr := net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
        addrs = append(addrs, nodeAddr)
    }
    return addrs
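The two nodeAddr hunks swap fmt.Sprintf("%s:%d", ...) for net.JoinHostPort, which matters for IPv6 literals: Sprintf produces an ambiguous string, while JoinHostPort adds the brackets the address syntax requires. A standalone illustration:

    package main

    import (
        "fmt"
        "net"
        "strconv"
    )

    func main() {
        ip := net.ParseIP("2001:db8::42")
        fmt.Println(fmt.Sprintf("%s:%d", ip, 4242))                    // 2001:db8::42:4242 (ambiguous)
        fmt.Println(net.JoinHostPort(ip.String(), strconv.Itoa(4242))) // [2001:db8::42]:4242
    }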
7  discover/discover_test.go  Normal file
@@ -0,0 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package discover_test

// Empty test file to generate 0% coverage rather than no coverage
@@ -20,10 +20,13 @@ const (
    NodeDiscovered
    NodeConnected
    NodeDisconnected
    NodeRejected
    LocalIndexUpdated
    RemoteIndexUpdated
    ItemStarted
    StateChanged
    RepoRejected
    ConfigSaved

    AllEvents = ^EventType(0)
)
@@ -42,6 +45,8 @@ func (t EventType) String() string {
        return "NodeConnected"
    case NodeDisconnected:
        return "NodeDisconnected"
    case NodeRejected:
        return "NodeRejected"
    case LocalIndexUpdated:
        return "LocalIndexUpdated"
    case RemoteIndexUpdated:
@@ -50,6 +55,10 @@ func (t EventType) String() string {
        return "ItemStarted"
    case StateChanged:
        return "StateChanged"
    case RepoRejected:
        return "RepoRejected"
    case ConfigSaved:
        return "ConfigSaved"
    default:
        return "Unknown"
    }
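AllEvents = ^EventType(0) only works as a catch-all subscription mask if each event type occupies its own bit; the visible fragment starts mid-const, so the `1 << iota` start and the first entries are assumed here rather than shown in the diff. A self-contained sketch of that masking scheme:

    package main

    import "fmt"

    type EventType int

    const (
        Starting EventType = 1 << iota // assumed leading entries, not in the hunk
        NodeConnected
        NodeDisconnected
        ConfigSaved

        AllEvents = ^EventType(0)
    )

    func main() {
        mask := NodeConnected | NodeDisconnected // deliver connection changes only
        for _, t := range []EventType{Starting, NodeDisconnected, ConfigSaved} {
            fmt.Println(t, "delivered:", t&mask != 0)
        }
        fmt.Println("catch-all:", ConfigSaved&AllEvents != 0) // always true
    }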
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import "code.google.com/p/go.text/unicode/norm"

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build !windows,!darwin

package files

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import (
165  files/leveldb.go
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import (
@@ -117,6 +121,12 @@ func globalKeyName(key []byte) []byte {
    return key[1+64:]
}

func globalKeyRepo(key []byte) []byte {
    repo := key[1 : 1+64]
    izero := bytes.IndexByte(repo, 0)
    return repo[:izero]
}

type deletionHandler func(db dbReader, batch dbWriter, repo, node, name []byte, dbi iterator.Iterator) uint64

type fileIterator func(f protocol.FileIntf) bool
@@ -176,18 +186,28 @@ func ldbGenericReplace(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo
            if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
                maxLocalVer = lv
            }
            ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
            if fs[fsi].IsInvalid() {
                ldbRemoveFromGlobal(snap, batch, repo, node, newName)
            } else {
                ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
            }
            fsi++

        case moreFs && moreDb && cmp == 0:
            // File exists on both sides - compare versions.
            // File exists on both sides - compare versions. We might get an
            // update with the same version and different flags if a node has
            // marked a file as invalid, so handle that too.
            var ef protocol.FileInfoTruncated
            ef.UnmarshalXDR(dbi.Value())
            if fs[fsi].Version > ef.Version {
            if fs[fsi].Version > ef.Version || fs[fsi].Version != ef.Version {
                if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
                    maxLocalVer = lv
                }
                ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
                if fs[fsi].IsInvalid() {
                    ldbRemoveFromGlobal(snap, batch, repo, node, newName)
                } else {
                    ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
                }
            }
            // Iterate both sides.
            fsi++
@@ -270,7 +290,11 @@ func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64
            if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
                maxLocalVer = lv
            }
            ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
            if f.IsInvalid() {
                ldbRemoveFromGlobal(snap, batch, repo, node, name)
            } else {
                ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
            }
            continue
        }

@@ -279,11 +303,17 @@ func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64
        if err != nil {
            panic(err)
        }
        if ef.Version != f.Version {
        // Flags might change without the version being bumped when we set the
        // invalid flag on an existing file.
        if ef.Version != f.Version || ef.Flags != f.Flags {
            if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
                maxLocalVer = lv
            }
            ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
            if f.IsInvalid() {
                ldbRemoveFromGlobal(snap, batch, repo, node, name)
            } else {
                ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
            }
        }
    }

@@ -375,7 +405,9 @@ func ldbRemoveFromGlobal(db dbReader, batch dbWriter, repo, node, file []byte) {
    gk := globalKey(repo, file)
    svl, err := db.Get(gk, nil)
    if err != nil {
        panic(err)
        // We might be called to "remove" a global version that doesn't exist
        // if the first update for the file is already marked invalid.
        return
    }

    var fl versionList
@@ -585,6 +617,7 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterat
    dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
    defer dbi.Release()

outer:
    for dbi.Next() {
        var vl versionList
        err := vl.UnmarshalXDR(dbi.Value())
@@ -610,33 +643,113 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterat

        if need || !have {
            name := globalKeyName(dbi.Key())
            fk := nodeKey(repo, vl.versions[0].node, name)
            bs, err := snap.Get(fk, nil)
            if err != nil {
                panic(err)
            }
            needVersion := vl.versions[0].version
        inner:
            for i := range vl.versions {
                if vl.versions[i].version != needVersion {
                    // We haven't found a valid copy of the file with the needed version.
                    continue outer
                }
                fk := nodeKey(repo, vl.versions[i].node, name)
                bs, err := snap.Get(fk, nil)
                if err != nil {
                    panic(err)
                }

            gf, err := unmarshalTrunc(bs, truncate)
            if err != nil {
                panic(err)
            }
                gf, err := unmarshalTrunc(bs, truncate)
                if err != nil {
                    panic(err)
                }

            if gf.IsDeleted() && !have {
                // We don't need deleted files that we don't have
                continue
            }
                if gf.IsInvalid() {
                    // The file is marked invalid for whatever reason, don't use it.
                    continue inner
                }

            if debug {
                l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
            }
                if gf.IsDeleted() && !have {
                    // We don't need deleted files that we don't have
                    continue outer
                }

            if cont := fn(gf); !cont {
                return
                if debug {
                    l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
                }

                if cont := fn(gf); !cont {
                    return
                }

                // This file is handled, no need to look further in the version list
                continue outer
            }
        }
    }
}

func ldbListRepos(db *leveldb.DB) []string {
    defer runtime.GC()

    start := []byte{keyTypeGlobal}
    limit := []byte{keyTypeGlobal + 1}
    snap, err := db.GetSnapshot()
    if err != nil {
        panic(err)
    }
    defer snap.Release()
    dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
    defer dbi.Release()

    repoExists := make(map[string]bool)
    for dbi.Next() {
        repo := string(globalKeyRepo(dbi.Key()))
        if !repoExists[repo] {
            repoExists[repo] = true
        }
    }

    repos := make([]string, 0, len(repoExists))
    for k := range repoExists {
        repos = append(repos, k)
    }

    sort.Strings(repos)
    return repos
}

func ldbDropRepo(db *leveldb.DB, repo []byte) {
    defer runtime.GC()

    snap, err := db.GetSnapshot()
    if err != nil {
        panic(err)
    }
    defer snap.Release()

    // Remove all items related to the given repo from the node->file bucket
    start := []byte{keyTypeNode}
    limit := []byte{keyTypeNode + 1}
    dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
    for dbi.Next() {
        itemRepo := nodeKeyRepo(dbi.Key())
        if bytes.Compare(repo, itemRepo) == 0 {
            db.Delete(dbi.Key(), nil)
        }
    }
    dbi.Release()

    // Remove all items related to the given repo from the global bucket
    start = []byte{keyTypeGlobal}
    limit = []byte{keyTypeGlobal + 1}
    dbi = snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
    for dbi.Next() {
        itemRepo := globalKeyRepo(dbi.Key())
        if bytes.Compare(repo, itemRepo) == 0 {
            db.Delete(dbi.Key(), nil)
        }
    }
    dbi.Release()
}

func unmarshalTrunc(bs []byte, truncate bool) (protocol.FileIntf, error) {
    if truncate {
        var tf protocol.FileInfoTruncated
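From the two accessors above (globalKeyName returns key[1+64:], and globalKeyRepo reads key[1:1+64] up to the first zero byte) the global key layout can be inferred: one type byte, a 64-byte zero-padded repository name, then the file name. The matching constructor is not part of this diff; a plausible reconstruction under that assumption:

    // Hypothetical reconstruction; the real globalKey is not shown in this diff.
    func globalKey(repo, file []byte) []byte {
        k := make([]byte, 1+64+len(file))
        k[0] = keyTypeGlobal  // bucket discriminator
        copy(k[1:1+64], repo) // shorter names stay zero-padded to 64 bytes
        copy(k[1+64:], file)
        return k
    }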
11  files/set.go
@@ -155,6 +155,17 @@ func (s *Set) LocalVersion(node protocol.NodeID) uint64 {
    return s.localVersion[node]
}

// ListRepos returns the repository IDs seen in the database.
func ListRepos(db *leveldb.DB) []string {
    return ldbListRepos(db)
}

// DropRepo clears out all information related to the given repo from the
// database.
func DropRepo(db *leveldb.DB, repo string) {
    ldbDropRepo(db, []byte(repo))
}

func normalizeFilenames(fs []protocol.FileInfo) {
    for i := range fs {
        fs[i].Name = normalizedFilename(fs[i].Name)
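A short usage sketch for the two new helpers, mirroring TestListDropRepo further down; the in-memory LevelDB storage keeps it self-contained, and `local` stands in for any []protocol.FileInfo:

    db, _ := leveldb.Open(storage.NewMemStorage(), nil) // error elided for brevity
    s := files.NewSet("test0", db)
    s.Replace(protocol.LocalNodeID, local) // populate one repository
    fmt.Println(files.ListRepos(db))       // [test0]
    files.DropRepo(db, "test0")            // delete every key for that repo
    fmt.Println(files.ListRepos(db))       // []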
@@ -5,7 +5,9 @@
package files_test

import (
    "bytes"
    "fmt"
    "reflect"
    "sort"
    "testing"

@@ -16,10 +18,11 @@ import (
    "github.com/syndtr/goleveldb/leveldb/storage"
)

var remoteNode protocol.NodeID
var remoteNode0, remoteNode1 protocol.NodeID

func init() {
    remoteNode, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
    remoteNode0, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
    remoteNode1, _ = protocol.NodeIDFromString("I6KAH76-66SLLLB-5PFXSOA-UFJCDZC-YAOMLEK-CP2GB32-BV5RQST-3PSROAU")
}

func genBlocks(n int) []protocol.BlockInfo {
@@ -79,6 +82,16 @@ func (l fileList) Swap(a, b int) {
    l[a], l[b] = l[b], l[a]
}

func (l fileList) String() string {
    var b bytes.Buffer
    b.WriteString("[]protocol.FileList{\n")
    for _, f := range l {
        fmt.Fprintf(&b, " %q: #%d, %d bytes, %d blocks, flags=%o\n", f.Name, f.Version, f.Size(), len(f.Blocks), f.Flags)
    }
    b.WriteString("}")
    return b.String()
}

func TestGlobalSet(t *testing.T) {
    lamport.Default = lamport.Clock{}

@@ -89,20 +102,20 @@ func TestGlobalSet(t *testing.T) {

    m := files.NewSet("test", db)

    local0 := []protocol.FileInfo{
    local0 := fileList{
        protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
        protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
        protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "z", Version: 1000, Blocks: genBlocks(8)},
    }
    local1 := []protocol.FileInfo{
    local1 := fileList{
        protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
        protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
        protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
    }
    localTot := []protocol.FileInfo{
    localTot := fileList{
        local0[0],
        local0[1],
        local0[2],
@@ -110,76 +123,76 @@ func TestGlobalSet(t *testing.T) {
        protocol.FileInfo{Name: "z", Version: 1001, Flags: protocol.FlagDeleted},
    }

    remote0 := []protocol.FileInfo{
    remote0 := fileList{
        protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
        protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5)},
    }
    remote1 := []protocol.FileInfo{
    remote1 := fileList{
        protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(6)},
        protocol.FileInfo{Name: "e", Version: 1000, Blocks: genBlocks(7)},
    }
    remoteTot := []protocol.FileInfo{
    remoteTot := fileList{
        remote0[0],
        remote1[0],
        remote0[2],
        remote1[1],
    }

    expectedGlobal := []protocol.FileInfo{
        remote0[0],
        remote1[0],
        remote0[2],
        localTot[3],
        remote1[1],
        localTot[4],
    expectedGlobal := fileList{
        remote0[0],  // a
        remote1[0],  // b
        remote0[2],  // c
        localTot[3], // d
        remote1[1],  // e
        localTot[4], // z
    }

    expectedLocalNeed := []protocol.FileInfo{
    expectedLocalNeed := fileList{
        remote1[0],
        remote0[2],
        remote1[1],
    }

    expectedRemoteNeed := []protocol.FileInfo{
    expectedRemoteNeed := fileList{
        local0[3],
    }

    m.ReplaceWithDelete(protocol.LocalNodeID, local0)
    m.ReplaceWithDelete(protocol.LocalNodeID, local1)
    m.Replace(remoteNode, remote0)
    m.Update(remoteNode, remote1)
    m.Replace(remoteNode0, remote0)
    m.Update(remoteNode0, remote1)

    g := globalList(m)
    sort.Sort(fileList(g))
    g := fileList(globalList(m))
    sort.Sort(g)

    if fmt.Sprint(g) != fmt.Sprint(expectedGlobal) {
        t.Errorf("Global incorrect;\n A: %v !=\n E: %v", g, expectedGlobal)
    }

    h := haveList(m, protocol.LocalNodeID)
    sort.Sort(fileList(h))
    h := fileList(haveList(m, protocol.LocalNodeID))
    sort.Sort(h)

    if fmt.Sprint(h) != fmt.Sprint(localTot) {
        t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, localTot)
    }

    h = haveList(m, remoteNode)
    sort.Sort(fileList(h))
    h = fileList(haveList(m, remoteNode0))
    sort.Sort(h)

    if fmt.Sprint(h) != fmt.Sprint(remoteTot) {
        t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, remoteTot)
    }

    n := needList(m, protocol.LocalNodeID)
    sort.Sort(fileList(n))
    n := fileList(needList(m, protocol.LocalNodeID))
    sort.Sort(n)

    if fmt.Sprint(n) != fmt.Sprint(expectedLocalNeed) {
        t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedLocalNeed)
    }

    n = needList(m, remoteNode)
    sort.Sort(fileList(n))
    n = fileList(needList(m, remoteNode0))
    sort.Sort(n)

    if fmt.Sprint(n) != fmt.Sprint(expectedRemoteNeed) {
        t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedRemoteNeed)
@@ -190,7 +203,7 @@ func TestGlobalSet(t *testing.T) {
        t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, localTot[1])
    }

    f = m.Get(remoteNode, "b")
    f = m.Get(remoteNode0, "b")
    if fmt.Sprint(f) != fmt.Sprint(remote1[0]) {
        t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, remote1[0])
    }
@@ -210,14 +223,14 @@ func TestGlobalSet(t *testing.T) {
        t.Errorf("GetGlobal incorrect;\n A: %v !=\n E: %v", f, protocol.FileInfo{})
    }

    av := []protocol.NodeID{protocol.LocalNodeID, remoteNode}
    av := []protocol.NodeID{protocol.LocalNodeID, remoteNode0}
    a := m.Availability("a")
    if !(len(a) == 2 && (a[0] == av[0] && a[1] == av[1] || a[0] == av[1] && a[1] == av[0])) {
        t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, av)
    }
    a = m.Availability("b")
    if len(a) != 1 || a[0] != remoteNode {
        t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode)
    if len(a) != 1 || a[0] != remoteNode0 {
        t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode0)
    }
    a = m.Availability("d")
    if len(a) != 1 || a[0] != protocol.LocalNodeID {
@@ -225,6 +238,128 @@ func TestGlobalSet(t *testing.T) {
    }
}

func TestNeedWithInvalid(t *testing.T) {
    lamport.Default = lamport.Clock{}

    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s := files.NewSet("test", db)

    localHave := fileList{
        protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
    }
    remote0Have := fileList{
        protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
    }
    remote1Have := fileList{
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
        protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "e", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
    }

    expectedNeed := fileList{
        protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
        protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
    }

    s.ReplaceWithDelete(protocol.LocalNodeID, localHave)
    s.Replace(remoteNode0, remote0Have)
    s.Replace(remoteNode1, remote1Have)

    need := fileList(needList(s, protocol.LocalNodeID))
    sort.Sort(need)

    if fmt.Sprint(need) != fmt.Sprint(expectedNeed) {
        t.Errorf("Need incorrect;\n A: %v !=\n E: %v", need, expectedNeed)
    }
}

func TestUpdateToInvalid(t *testing.T) {
    lamport.Default = lamport.Clock{}

    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s := files.NewSet("test", db)

    localHave := fileList{
        protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
        protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
    }

    s.ReplaceWithDelete(protocol.LocalNodeID, localHave)

    have := fileList(haveList(s, protocol.LocalNodeID))
    sort.Sort(have)

    if fmt.Sprint(have) != fmt.Sprint(localHave) {
        t.Errorf("Have incorrect before invalidation;\n A: %v !=\n E: %v", have, localHave)
    }

    localHave[1] = protocol.FileInfo{Name: "b", Version: 1001, Flags: protocol.FlagInvalid}
    s.Update(protocol.LocalNodeID, localHave[1:2])

    have = fileList(haveList(s, protocol.LocalNodeID))
    sort.Sort(have)

    if fmt.Sprint(have) != fmt.Sprint(localHave) {
        t.Errorf("Have incorrect after invalidation;\n A: %v !=\n E: %v", have, localHave)
    }
}

func TestInvalidAvailability(t *testing.T) {
    lamport.Default = lamport.Clock{}

    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s := files.NewSet("test", db)

    remote0Have := fileList{
        protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(7)},
        protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
    }
    remote1Have := fileList{
        protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
        protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(7)},
        protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
    }

    s.Replace(remoteNode0, remote0Have)
    s.Replace(remoteNode1, remote1Have)

    if av := s.Availability("both"); len(av) != 2 {
        t.Error("Incorrect availability for 'both':", av)
    }

    if av := s.Availability("r0only"); len(av) != 1 || av[0] != remoteNode0 {
        t.Error("Incorrect availability for 'r0only':", av)
    }

    if av := s.Availability("r1only"); len(av) != 1 || av[0] != remoteNode1 {
        t.Error("Incorrect availability for 'r1only':", av)
    }

    if av := s.Availability("none"); len(av) != 0 {
        t.Error("Incorrect availability for 'none':", av)
    }
}

func TestLocalDeleted(t *testing.T) {
    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
@@ -330,7 +465,7 @@ func Benchmark10kUpdateChg(b *testing.B) {
    }

    m := files.NewSet("test", db)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    var local []protocol.FileInfo
    for i := 0; i < 10000; i++ {
@@ -361,7 +496,7 @@ func Benchmark10kUpdateSme(b *testing.B) {
        b.Fatal(err)
    }
    m := files.NewSet("test", db)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    var local []protocol.FileInfo
    for i := 0; i < 10000; i++ {
@@ -388,7 +523,7 @@ func Benchmark10kNeed2k(b *testing.B) {
    }

    m := files.NewSet("test", db)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    var local []protocol.FileInfo
    for i := 0; i < 8000; i++ {
@@ -421,7 +556,7 @@ func Benchmark10kHaveFullList(b *testing.B) {
    }

    m := files.NewSet("test", db)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    var local []protocol.FileInfo
    for i := 0; i < 2000; i++ {
@@ -454,7 +589,7 @@ func Benchmark10kGlobal(b *testing.B) {
    }

    m := files.NewSet("test", db)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    var local []protocol.FileInfo
    for i := 0; i < 2000; i++ {
@@ -505,8 +640,8 @@ func TestGlobalReset(t *testing.T) {
        t.Errorf("Global incorrect;\n%v !=\n%v", g, local)
    }

    m.Replace(remoteNode, remote)
    m.Replace(remoteNode, nil)
    m.Replace(remoteNode0, remote)
    m.Replace(remoteNode0, nil)

    g = globalList(m)
    sort.Sort(fileList(g))
@@ -545,7 +680,7 @@ func TestNeed(t *testing.T) {
    }

    m.ReplaceWithDelete(protocol.LocalNodeID, local)
    m.Replace(remoteNode, remote)
    m.Replace(remoteNode0, remote)

    need := needList(m, protocol.LocalNodeID)

@@ -596,6 +731,126 @@ func TestLocalVersion(t *testing.T) {
    }
}

func TestListDropRepo(t *testing.T) {
    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s0 := files.NewSet("test0", db)
    local1 := []protocol.FileInfo{
        protocol.FileInfo{Name: "a", Version: 1000},
        protocol.FileInfo{Name: "b", Version: 1000},
        protocol.FileInfo{Name: "c", Version: 1000},
    }
    s0.Replace(protocol.LocalNodeID, local1)

    s1 := files.NewSet("test1", db)
    local2 := []protocol.FileInfo{
        protocol.FileInfo{Name: "d", Version: 1002},
        protocol.FileInfo{Name: "e", Version: 1002},
        protocol.FileInfo{Name: "f", Version: 1002},
    }
    s1.Replace(remoteNode0, local2)

    // Check that we have both repos and their data is in the global list

    expectedRepoList := []string{"test0", "test1"}
    if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
        t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
    }
    if l := len(globalList(s0)); l != 3 {
        t.Errorf("Incorrect global length %d != 3 for s0", l)
    }
    if l := len(globalList(s1)); l != 3 {
        t.Errorf("Incorrect global length %d != 3 for s1", l)
    }

    // Drop one of them and check that it's gone.

    files.DropRepo(db, "test1")

    expectedRepoList = []string{"test0"}
    if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
        t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
    }
    if l := len(globalList(s0)); l != 3 {
        t.Errorf("Incorrect global length %d != 3 for s0", l)
    }
    if l := len(globalList(s1)); l != 0 {
        t.Errorf("Incorrect global length %d != 0 for s1", l)
    }
}

func TestGlobalNeedWithInvalid(t *testing.T) {
    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s := files.NewSet("test1", db)

    rem0 := fileList{
        protocol.FileInfo{Name: "a", Version: 1002, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "b", Version: 1002, Flags: protocol.FlagInvalid},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(4)},
    }
    s.Replace(remoteNode0, rem0)

    rem1 := fileList{
        protocol.FileInfo{Name: "a", Version: 1002, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "b", Version: 1002, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "c", Version: 1002, Flags: protocol.FlagInvalid},
    }
    s.Replace(remoteNode1, rem1)

    total := fileList{
        // There's a valid copy of each file, so it should be merged
        protocol.FileInfo{Name: "a", Version: 1002, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "b", Version: 1002, Blocks: genBlocks(4)},
        protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(4)},
    }

    need := fileList(needList(s, protocol.LocalNodeID))
    if fmt.Sprint(need) != fmt.Sprint(total) {
        t.Errorf("Need incorrect;\n A: %v !=\n E: %v", need, total)
    }

    global := fileList(globalList(s))
    if fmt.Sprint(global) != fmt.Sprint(total) {
        t.Errorf("Global incorrect;\n A: %v !=\n E: %v", global, total)
    }
}

func TestLongPath(t *testing.T) {
    db, err := leveldb.Open(storage.NewMemStorage(), nil)
    if err != nil {
        t.Fatal(err)
    }

    s := files.NewSet("test", db)

    var b bytes.Buffer
    for i := 0; i < 100; i++ {
        b.WriteString("012345678901234567890123456789012345678901234567890")
    }
    name := b.String() // 5000 characters

    local := []protocol.FileInfo{
        protocol.FileInfo{Name: string(name), Version: 1000},
    }

    s.ReplaceWithDelete(protocol.LocalNodeID, local)

    gf := globalList(s)
    if l := len(gf); l != 1 {
        t.Fatalf("Incorrect len %d != 1 for global list", l)
    }
    if gf[0].Name != local[0].Name {
        t.Errorf("Incorrect long filename;\n%q !=\n%q", gf[0].Name, local[0].Name)
    }
}

/*
var gf protocol.FileInfo

@@ -622,7 +877,7 @@ func TestStressGlobalVersion(t *testing.T) {
    m := files.NewSet("test", db)

    done := make(chan struct{})
    go stressWriter(m, remoteNode, set1, nil, done)
    go stressWriter(m, remoteNode0, set1, nil, done)
    go stressWriter(m, protocol.LocalNodeID, set2, nil, done)

    t0 := time.Now()
69  fnmatch/fnmatch.go  Normal file
@@ -0,0 +1,69 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package fnmatch

import (
    "path/filepath"
    "regexp"
    "runtime"
    "strings"
)

const (
    FNM_NOESCAPE = (1 << iota)
    FNM_PATHNAME
    FNM_CASEFOLD
)

func Convert(pattern string, flags int) (*regexp.Regexp, error) {
    any := "."

    switch runtime.GOOS {
    case "windows":
        flags |= FNM_NOESCAPE | FNM_CASEFOLD
        pattern = filepath.FromSlash(pattern)
        if flags&FNM_PATHNAME != 0 {
            any = "[^\\\\]"
        }
    case "darwin":
        flags |= FNM_CASEFOLD
        fallthrough
    default:
        if flags&FNM_PATHNAME != 0 {
            any = "[^/]"
        }
    }

    if flags&FNM_NOESCAPE != 0 {
        pattern = strings.Replace(pattern, "\\", "\\\\", -1)
    } else {
        pattern = strings.Replace(pattern, "\\*", "[:escapedstar:]", -1)
        pattern = strings.Replace(pattern, "\\?", "[:escapedques:]", -1)
        pattern = strings.Replace(pattern, "\\.", "[:escapeddot:]", -1)
    }
    pattern = strings.Replace(pattern, ".", "\\.", -1)
    pattern = strings.Replace(pattern, "**", "[:doublestar:]", -1)
    pattern = strings.Replace(pattern, "*", any+"*", -1)
    pattern = strings.Replace(pattern, "[:doublestar:]", ".*", -1)
    pattern = strings.Replace(pattern, "?", any, -1)
    pattern = strings.Replace(pattern, "[:escapedstar:]", "\\*", -1)
    pattern = strings.Replace(pattern, "[:escapedques:]", "\\?", -1)
    pattern = strings.Replace(pattern, "[:escapeddot:]", "\\.", -1)
    pattern = "^" + pattern + "$"
    if flags&FNM_CASEFOLD != 0 {
        pattern = "(?i)" + pattern
    }
    return regexp.Compile(pattern)
}

// Matches the pattern against the string, with the given flags,
// and returns true if the match is successful.
func Match(pattern, s string, flags int) (bool, error) {
    exp, err := Convert(pattern, flags)
    if err != nil {
        return false, err
    }
    return exp.MatchString(s), nil
}
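Two calls taken straight from the test table that follows, showing the one behavioral subtlety worth remembering: with FNM_PATHNAME a single `*` compiles to a separator-excluding class and stops at path boundaries, while `**` compiles to `.*` and crosses them:

    ok, err := Match("*/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME)
    fmt.Println(ok, err) // false <nil>: '*' cannot span "bar/baz"
    ok, _ = Match("**/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME)
    fmt.Println(ok) // true: '**' crosses path separators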
86  fnmatch/fnmatch_test.go  Normal file
@@ -0,0 +1,86 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package fnmatch

import (
    "path/filepath"
    "runtime"
    "testing"
)

type testcase struct {
    pat   string
    name  string
    flags int
    match bool
}

var testcases = []testcase{
    {"", "", 0, true},
    {"*", "", 0, true},
    {"*", "foo", 0, true},
    {"*", "bar", 0, true},
    {"*", "*", 0, true},
    {"**", "f", 0, true},
    {"**", "foo.txt", 0, true},
    {"*.*", "foo.txt", 0, true},
    {"foo*.txt", "foobar.txt", 0, true},
    {"foo.txt", "foo.txt", 0, true},

    {"foo.txt", "bar/foo.txt", 0, false},
    {"*/foo.txt", "bar/foo.txt", 0, true},
    {"f?o.txt", "foo.txt", 0, true},
    {"f?o.txt", "fooo.txt", 0, false},
    {"f[ab]o.txt", "foo.txt", 0, false},
    {"f[ab]o.txt", "fao.txt", 0, true},
    {"f[ab]o.txt", "fbo.txt", 0, true},
    {"f[ab]o.txt", "fco.txt", 0, false},
    {"f[ab]o.txt", "fabo.txt", 0, false},
    {"f[ab]o.txt", "f[ab]o.txt", 0, false},
    {"f\\[ab\\]o.txt", "f[ab]o.txt", FNM_NOESCAPE, false},

    {"*foo.txt", "bar/foo.txt", 0, true},
    {"*foo.txt", "bar/foo.txt", FNM_PATHNAME, false},
    {"*/foo.txt", "bar/foo.txt", 0, true},
    {"*/foo.txt", "bar/foo.txt", FNM_PATHNAME, true},
    {"*/foo.txt", "bar/baz/foo.txt", 0, true},
    {"*/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME, false},
    {"**/foo.txt", "bar/baz/foo.txt", 0, true},
    {"**/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME, true},

    {"foo.txt", "foo.TXT", FNM_CASEFOLD, true},
}

func TestMatch(t *testing.T) {
    switch runtime.GOOS {
    case "windows":
        testcases = append(testcases, testcase{"foo.txt", "foo.TXT", 0, true})
    case "darwin":
        testcases = append(testcases, testcase{"foo.txt", "foo.TXT", 0, true})
        fallthrough
    default:
        testcases = append(testcases, testcase{"f\\[ab\\]o.txt", "f[ab]o.txt", 0, true})
        testcases = append(testcases, testcase{"foo\\.txt", "foo.txt", 0, true})
        testcases = append(testcases, testcase{"foo\\*.txt", "foo*.txt", 0, true})
        testcases = append(testcases, testcase{"foo\\.txt", "foo.txt", FNM_NOESCAPE, false})
        testcases = append(testcases, testcase{"f\\\\\\[ab\\\\\\]o.txt", "f\\[ab\\]o.txt", 0, true})
    }

    for _, tc := range testcases {
        if m, err := Match(tc.pat, filepath.FromSlash(tc.name), tc.flags); m != tc.match {
            if err != nil {
                t.Error(err)
            } else {
                t.Errorf("Match(%q, %q, %d) != %v", tc.pat, tc.name, tc.flags, tc.match)
            }
        }
    }
}

func TestInvalid(t *testing.T) {
    if _, err := Match("foo[bar", "...", 0); err == nil {
        t.Error("Unexpected nil error")
    }
}
253
gui/app.js
253
gui/app.js
@@ -15,22 +15,28 @@ syncthing.config(function ($httpProvider, $translateProvider) {
|
||||
$httpProvider.defaults.xsrfCookieName = 'CSRF-Token';
|
||||
|
||||
$translateProvider.useStaticFilesLoader({
|
||||
prefix: 'lang-',
|
||||
prefix: 'lang/lang-',
|
||||
suffix: '.json'
|
||||
});
|
||||
});
|
||||
|
||||
syncthing.controller('EventCtrl', function ($scope, $http) {
|
||||
$scope.lastEvent = null;
|
||||
var online = false;
|
||||
var lastID = 0;
|
||||
|
||||
var successFn = function (data) {
|
||||
if (!online) {
|
||||
$scope.$emit('UIOnline');
|
||||
online = true;
|
||||
// When Syncthing restarts while the long polling connection is in
|
||||
// progress the browser on some platforms returns a 200 (since the
|
||||
// headers has been flushed with the return code 200), with no data.
|
||||
// This basically means that the connection has been reset, and the call
|
||||
// was not actually sucessful.
|
||||
if (!data) {
|
||||
errorFn(data);
|
||||
return;
|
||||
}
|
||||
|
||||
$scope.$emit('UIOnline');
|
||||
|
||||
if (lastID > 0) {
|
||||
data.forEach(function (event) {
|
||||
console.log("event", event.id, event.type, event.data);
|
||||
@@ -49,10 +55,8 @@ syncthing.controller('EventCtrl', function ($scope, $http) {
|
||||
};
|
||||
|
||||
var errorFn = function (data) {
|
||||
if (online) {
|
||||
$scope.$emit('UIOffline');
|
||||
online = false;
|
||||
}
|
||||
$scope.$emit('UIOffline');
|
||||
|
||||
setTimeout(function () {
|
||||
$http.get(urlbase + '/events?limit=1')
|
||||
.success(successFn)
|
||||
@@ -68,6 +72,8 @@ syncthing.controller('EventCtrl', function ($scope, $http) {
|
||||
syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $location) {
|
||||
var prevDate = 0;
|
||||
var getOK = true;
|
||||
var navigatingAway = false;
|
||||
var online = false;
|
||||
var restarting = false;
|
||||
|
||||
$scope.completion = {};
|
||||
@@ -84,6 +90,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
$scope.repos = {};
|
||||
$scope.seenError = '';
|
||||
$scope.upgradeInfo = {};
|
||||
$scope.stats = {};
|
||||
|
||||
$http.get(urlbase+"/lang").success(function (langs) {
|
||||
// Find the first language in the list provided by the user's browser
|
||||
@@ -94,16 +101,33 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
var lang, matching;
|
||||
for (var i = 0; i < langs.length; i++) {
|
||||
lang = langs[i];
|
||||
matching = validLangs.filter(function (l) {
|
||||
return l.indexOf(lang) == 0;
|
||||
if (lang.length < 2) {
|
||||
continue;
|
||||
}
|
||||
matching = validLangs.filter(function (possibleLang) {
|
||||
// The langs returned by the /rest/langs call will be in lower
|
||||
// case. We compare to the lowercase version of the language
|
||||
// code we have as well.
|
||||
possibleLang = possibleLang.toLowerCase()
|
||||
if (possibleLang.length > lang.length) {
|
||||
return possibleLang.indexOf(lang) == 0;
|
||||
} else {
|
||||
return lang.indexOf(possibleLang) == 0;
|
||||
}
|
||||
});
|
||||
if (matching.length >= 1) {
|
||||
$translate.use(matching[0]);
|
||||
break;
|
||||
return;
|
||||
}
|
||||
}
|
||||
// Fallback if nothing matched
|
||||
$translate.use("en");
|
||||
})
|
||||
|
||||
$(window).bind('beforeunload', function() {
|
||||
navigatingAway = true;
|
||||
});
|
||||
|
||||
$scope.$on("$locationChangeSuccess", function () {
|
||||
var lang = $location.search().lang;
|
||||
if (lang) {
|
||||
@@ -125,16 +149,30 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
}
|
||||
|
||||
$scope.$on('UIOnline', function (event, arg) {
|
||||
console.log('UIOnline');
|
||||
$scope.init();
|
||||
restarting = false;
|
||||
$('#networkError').modal('hide');
|
||||
$('#restarting').modal('hide');
|
||||
$('#shutdown').modal('hide');
|
||||
if (online && !restarting) {
|
||||
return;
|
||||
}
|
||||
|
||||
if (restarting){
|
||||
document.location.reload(true);
|
||||
} else {
|
||||
console.log('UIOnline');
|
||||
$scope.init();
|
||||
online = true;
|
||||
restarting = false;
|
||||
$('#networkError').modal('hide');
|
||||
$('#restarting').modal('hide');
|
||||
$('#shutdown').modal('hide');
|
||||
}
|
||||
});
|
||||
|
||||
$scope.$on('UIOffline', function (event, arg) {
|
||||
if (navigatingAway || !online) {
|
||||
return;
|
||||
}
|
||||
|
||||
console.log('UIOffline');
|
||||
online = false;
|
||||
if (!restarting) {
|
||||
$('#networkError').modal();
|
||||
}
|
||||
@@ -165,6 +203,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
|
||||
$scope.$on('NodeDisconnected', function (event, arg) {
|
||||
delete $scope.connections[arg.data.id];
|
||||
refreshNodeStats();
|
||||
});
|
||||
|
||||
$scope.$on('NodeConnected', function (event, arg) {
|
||||
@@ -202,6 +241,14 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
}
|
||||
})
|
||||
|
||||
$scope.$on('ConfigSaved', function (event, arg) {
|
||||
updateLocalConfig(arg.data);
|
||||
|
||||
$http.get(urlbase + '/config/sync').success(function (data) {
|
||||
$scope.configInSync = data.configInSync;
|
||||
});
|
||||
});
|
||||
|
||||
var debouncedFuncs = {};
|
||||
|
||||
function refreshRepo(repo) {
|
||||
@@ -217,6 +264,33 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
debouncedFuncs[key]();
|
||||
}
|
||||
|
||||
function updateLocalConfig(config) {
|
||||
var hasConfig = !isEmptyObject($scope.config);
|
||||
|
||||
$scope.config = config;
|
||||
$scope.config.Options.ListenStr = $scope.config.Options.ListenAddress.join(', ');
|
||||
|
||||
$scope.nodes = $scope.config.Nodes;
|
||||
$scope.nodes.forEach(function (nodeCfg) {
|
||||
$scope.completion[nodeCfg.NodeID] = {
|
||||
_total: 100,
|
||||
};
|
||||
});
|
||||
$scope.nodes.sort(nodeCompare);
|
||||
|
||||
$scope.repos = repoMap($scope.config.Repositories);
|
||||
Object.keys($scope.repos).forEach(function (repo) {
|
||||
refreshRepo(repo);
|
||||
$scope.repos[repo].Nodes.forEach(function (nodeCfg) {
|
||||
refreshCompletion(nodeCfg.NodeID, repo);
|
||||
});
|
||||
});
|
||||
|
||||
if (!hasConfig) {
|
||||
$scope.$emit('ConfigLoaded');
|
||||
}
|
||||
}
|
||||
|
||||
function refreshSystem() {
|
||||
$http.get(urlbase + '/system').success(function (data) {
|
||||
$scope.myID = data.myID;
|
||||
@@ -289,31 +363,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
|
||||
function refreshConfig() {
|
||||
$http.get(urlbase + '/config').success(function (data) {
|
||||
var hasConfig = !isEmptyObject($scope.config);
|
||||
|
||||
$scope.config = data;
|
||||
$scope.config.Options.ListenStr = $scope.config.Options.ListenAddress.join(', ');
|
||||
|
||||
$scope.nodes = $scope.config.Nodes;
|
||||
$scope.nodes.forEach(function (nodeCfg) {
|
||||
$scope.completion[nodeCfg.NodeID] = {
|
||||
_total: 100,
|
||||
};
|
||||
});
|
||||
$scope.nodes.sort(nodeCompare);
|
||||
|
||||
$scope.repos = repoMap($scope.config.Repositories);
|
||||
Object.keys($scope.repos).forEach(function (repo) {
|
||||
refreshRepo(repo);
|
||||
$scope.repos[repo].Nodes.forEach(function (nodeCfg) {
|
||||
refreshCompletion(nodeCfg.NodeID, repo);
|
||||
});
|
||||
});
|
||||
|
||||
if (!hasConfig) {
|
||||
$scope.$emit('ConfigLoaded');
|
||||
}
|
||||
|
||||
updateLocalConfig(data);
|
||||
console.log("refreshConfig", data);
|
||||
});
|
||||
|
||||
@@ -322,10 +372,22 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
});
|
||||
}
|
||||
|
||||
var refreshNodeStats = debounce(function () {
|
||||
$http.get(urlbase+"/stats/node").success(function (data) {
|
||||
$scope.stats = data;
|
||||
for (var node in $scope.stats) {
|
||||
$scope.stats[node].LastSeen = new Date($scope.stats[node].LastSeen);
|
||||
$scope.stats[node].LastSeenDays = (new Date() - $scope.stats[node].LastSeen) / 1000 / 86400;
|
||||
}
|
||||
console.log("refreshNodeStats", data);
|
||||
});
|
||||
}, 500);
|
||||
|
||||
$scope.init = function() {
|
||||
refreshSystem();
|
||||
refreshConfig();
|
||||
refreshConnectionStats();
|
||||
refreshNodeStats();
|
||||
|
||||
$http.get(urlbase + '/version').success(function (data) {
|
||||
$scope.version = data;
|
||||
@@ -434,17 +496,6 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
|
||||
return '';
|
||||
};
|
||||
|
||||
$scope.nodeVer = function (nodeCfg) {
|
||||
if (nodeCfg.NodeID === $scope.myID) {
|
||||
return $scope.version;
|
||||
}
|
||||
var conn = $scope.connections[nodeCfg.NodeID];
|
||||
if (conn) {
|
||||
return conn.ClientVersion;
|
||||
}
|
||||
return '?';
|
||||
};
|
||||
|
||||
$scope.findNode = function (nodeID) {
|
||||
var matches = $scope.nodes.filter(function (n) { return n.NodeID == nodeID; });
|
||||
if (matches.length != 1) {
|
||||
@@ -478,6 +529,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        // Make a working copy
        $scope.tmpOptions = angular.copy($scope.config.Options);
        $scope.tmpOptions.UREnabled = ($scope.tmpOptions.URAccepted > 0);
        $scope.tmpOptions.NodeName = $scope.thisNode().Name;
        $scope.tmpGUI = angular.copy($scope.config.GUI);
        $('#settings').modal();
    };
@@ -510,6 +562,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        }

        // Apply new settings locally
        $scope.thisNode().Name = $scope.tmpOptions.NodeName;
        $scope.config.Options = angular.copy($scope.tmpOptions);
        $scope.config.GUI = angular.copy($scope.tmpGUI);
        $scope.config.Options.ListenAddress = $scope.config.Options.ListenStr.split(',').map(function (x) { return x.trim(); });

@@ -536,7 +589,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

            setTimeout(function () {
                window.location.protocol = protocol;
            }, 1000);
        }, 2500);

        $scope.protocolChanged = false;
    }
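
For context, the nested timers at the end of this hunk are easier to follow with their assumed surrounding structure restored. A sketch only — the outer wrapper is inferred from the visible closing `}, 2500);` and is not shown in the diff:

// Assumed shape: after saving settings with a changed GUI protocol,
// give the server roughly 2.5 s to restart, then move the browser over.
setTimeout(function () {
    // (any intermediate work elided by the hunk would happen here)
    setTimeout(function () {
        window.location.protocol = protocol; // assigning .protocol reloads the page
    }, 1000);
}, 2500);

Here `protocol` is a variable from surrounding code that the hunk does not show.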
@@ -682,9 +735,28 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        });
        if ($scope.currentRepo.Versioning && $scope.currentRepo.Versioning.Type === "simple") {
            $scope.currentRepo.simpleFileVersioning = true;
            $scope.currentRepo.FileVersioningSelector = "simple";
            $scope.currentRepo.simpleKeep = +$scope.currentRepo.Versioning.Params.keep;
        } else if ($scope.currentRepo.Versioning && $scope.currentRepo.Versioning.Type === "staggered") {
            $scope.currentRepo.staggeredFileVersioning = true;
            $scope.currentRepo.FileVersioningSelector = "staggered";
            $scope.currentRepo.staggeredMaxAge = Math.floor(+$scope.currentRepo.Versioning.Params.maxAge / 86400);
            $scope.currentRepo.staggeredCleanInterval = +$scope.currentRepo.Versioning.Params.cleanInterval;
            $scope.currentRepo.staggeredVersionsPath = $scope.currentRepo.Versioning.Params.versionsPath;
        } else {
            $scope.currentRepo.FileVersioningSelector = "none";
        }
        $scope.currentRepo.simpleKeep = $scope.currentRepo.simpleKeep || 5;
        $scope.currentRepo.staggeredCleanInterval = $scope.currentRepo.staggeredCleanInterval || 3600;
        $scope.currentRepo.staggeredVersionsPath = $scope.currentRepo.staggeredVersionsPath || "";

        // staggeredMaxAge can validly be zero, which we should not replace
        // with the default value of 365. So only set the default if it's
        // actually undefined.
        if (typeof $scope.currentRepo.staggeredMaxAge === 'undefined') {
            $scope.currentRepo.staggeredMaxAge = 365;
        }

        $scope.editingExisting = true;
        $scope.repoEditor.$setPristine();
        $('#editRepo').modal();

@@ -692,6 +764,12 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

    $scope.addRepo = function () {
        $scope.currentRepo = {selectedNodes: {}};
        $scope.currentRepo.RescanIntervalS = 60;
        $scope.currentRepo.FileVersioningSelector = "none";
        $scope.currentRepo.simpleKeep = 5;
        $scope.currentRepo.staggeredMaxAge = 365;
        $scope.currentRepo.staggeredCleanInterval = 3600;
        $scope.currentRepo.staggeredVersionsPath = "";
        $scope.editingExisting = false;
        $scope.repoEditor.$setPristine();
        $('#editRepo').modal();

@@ -711,7 +789,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        }
        delete repoCfg.selectedNodes;

        if (repoCfg.simpleFileVersioning) {
        if (repoCfg.FileVersioningSelector === "simple") {
            repoCfg.Versioning = {
                'Type': 'simple',
                'Params': {
@@ -720,6 +798,20 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
            };
            delete repoCfg.simpleFileVersioning;
            delete repoCfg.simpleKeep;
        } else if (repoCfg.FileVersioningSelector === "staggered") {
            repoCfg.Versioning = {
                'Type': 'staggered',
                'Params': {
                    'maxAge': '' + (repoCfg.staggeredMaxAge * 86400),
                    'cleanInterval': '' + repoCfg.staggeredCleanInterval,
                    'versionsPath': '' + repoCfg.staggeredVersionsPath,
                }
            };
            delete repoCfg.staggeredFileVersioning;
            delete repoCfg.staggeredMaxAge;
            delete repoCfg.staggeredCleanInterval;
            delete repoCfg.staggeredVersionsPath;

        } else {
            delete repoCfg.Versioning;
        }
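
Putting the staggered branch together: the dialog's numeric fields are serialized into a Versioning block whose params are all strings, with maxAge converted from the days shown in the UI to seconds on the wire (the editRepo hunk earlier performs the inverse division by 86400, so the round trip is lossless to whole days). With the default values the produced object looks like this — illustrative values only:

// Example of the object built for the defaults (365 days, hourly cleanup):
repoCfg.Versioning = {
    'Type': 'staggered',
    'Params': {
        'maxAge': '31536000',     // 365 * 86400 seconds
        'cleanInterval': '3600',  // seconds between cleanup passes
        'versionsPath': ''        // empty string = default .stversions folder
    }
};

The same hunk also renames the boolean simpleFileVersioning flag into the three-way FileVersioningSelector ("none" / "simple" / "staggered") used throughout the new dialog.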
@@ -755,6 +847,13 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        cfg.APIKey = randomString(30, 32);
    };

    $scope.showURPreview = function () {
        $('#settings').modal('hide');
        $('#urPreview').modal().on('hidden.bs.modal', function () {
            $('#settings').modal();
        });
    }

    $scope.acceptUR = function () {
        $scope.config.Options.URAccepted = 1000; // Larger than the largest existing report version
        $scope.saveConfig();
@@ -824,10 +923,10 @@ function nodeCompare(a, b) {
}

function repoCompare(a, b) {
    if (a.Directory < b.Directory) {
    if (a.ID < b.ID) {
        return -1;
    }
    return a.Directory > b.Directory;
    return a.ID > b.ID;
}
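
The hunk above swaps the repository sort key from Directory to ID. Keeping only the new-side lines yields the resulting comparator (reassembled here for readability):

function repoCompare(a, b) {
    if (a.ID < b.ID) {
        return -1;
    }
    return a.ID > b.ID; // true coerces to 1, false to 0, completing the sort contract
}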

function repoMap(l) {
@@ -889,9 +988,9 @@ function debounce(func, wait) {
        } else {
            timeout = null;
            if (again) {
                again = false;
                result = func.apply(context, args);
                context = args = null;
                again = false;
            }
        }
    };
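
This hunk moves `again = false` from after the func.apply call to before it: with the old order, a call made re-entrantly while func runs could set the flag only to have it cleared immediately afterwards, silently dropping that call. A minimal self-contained sketch of a debounce helper with the corrected ordering — not the project's exact implementation:

function debounce(func, wait) {
    var timeout = null, context, args, again = false, result;

    function later() {
        timeout = null;
        if (again) {
            again = false; // clear first, so a call triggered by func itself survives
            result = func.apply(context, args);
            context = args = null;
        }
    }

    return function () {
        context = this;
        args = arguments;
        if (timeout) {
            again = true; // coalesce calls that arrive while the timer is running
        } else {
            result = func.apply(context, args);
            timeout = setTimeout(later, wait);
        }
        return result;
    };
}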
@@ -961,12 +1060,6 @@ syncthing.filter('metric', function () {
    };
});

syncthing.filter('short', function () {
    return function (input) {
        return input.substr(0, 6);
    };
});

syncthing.filter('alwaysNumber', function () {
    return function (input) {
        if (input === undefined) {
@@ -976,18 +1069,6 @@ syncthing.filter('alwaysNumber', function () {
    };
});

syncthing.filter('shortPath', function () {
    return function (input) {
        if (input === undefined)
            return "";
        var parts = input.split(/[\/\\]/);
        if (!parts || parts.length <= 3) {
            return input;
        }
        return ".../" + parts.slice(parts.length - 2).join("/");
    };
});
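
Worked examples for the shortPath filter above (comments only; the behavior follows directly from the split/slice logic):

// "/home/user/Documents/Sync" splits into ["", "home", "user", "Documents", "Sync"]
// (5 > 3 segments), so it becomes ".../Documents/Sync".
// "C:\\Users\\me" splits into ["C:", "Users", "me"] (3 segments or fewer),
// so it is returned unchanged.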
syncthing.filter('basename', function () {
    return function (input) {
        if (input === undefined)
@@ -1000,24 +1081,6 @@ syncthing.filter('basename', function () {
    };
});

syncthing.filter('clean', function () {
    return function (input) {
        return encodeURIComponent(input).replace(/%/g, '');
    };
});

syncthing.directive('optionEditor', function () {
    return {
        restrict: 'C',
        replace: true,
        transclude: true,
        scope: {
            setting: '=setting',
        },
        template: '<input type="text" ng-model="config.Options[setting.id]"></input>',
    };
});

syncthing.directive('uniqueRepo', function () {
    return {
        require: 'ngModel',
BIN gui/font/raleway-500.woff (new file)
Binary file not shown.
@@ -2,5 +2,5 @@
    font-family: 'Raleway';
    font-style: normal;
    font-weight: 500;
    src: local('Raleway'), url(raleway-500.ttf) format('truetype');
    src: local('Raleway'), url(raleway-500.woff) format('woff');
}
[Image diffs: three image files, before and after sizes unchanged — 6.4 KiB, 47 KiB, 12 KiB]
566 gui/index.html
@@ -11,86 +11,12 @@
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="description" content="">
    <meta name="author" content="">
    <link rel="shortcut icon" href="favicon.png">
    <link rel="shortcut icon" href="img/favicon.png">

    <title>Syncthing | {{thisNodeName()}}</title>
    <link href="bootstrap/css/bootstrap.min.css" rel="stylesheet">
    <link href="raleway.css" rel="stylesheet">
    <style type="text/css">
        body {
            padding-bottom: 70px;
            font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
        }

        h1, h2, h3, h4, h5 {
            font-family: "Raleway", "Helvetica Neue", Helvetica, Arial, sans-serif;
        }

        ul+h5 {
            margin-top: 1.5em;
        }

        .text-monospace {
            font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
        }

        .table-condensed>thead>tr>th, .table-condensed>tbody>tr>th, .table-condensed>tfoot>tr>th, .table-condensed>thead>tr>td, .table-condensed>tbody>tr>td, .table-condensed>tfoot>tr>td {
            border-top: none;
        }

        .logo {
            margin: 0;
            padding: 0;
            top: -5px;
            position: relative;
        }

        .list-no-bullet {
            list-style-type: none;
        }

        .li-column {
            display: inline-block;
            min-width: 7em;
            margin-right: 1em;
            background-color: rgb(236, 240, 241);
            border-radius: 3px;
            padding: 1px 4px;
            margin: 2px 2px;
        }
        .li-column span.data {
            margin-left: 0.5em;
            min-width: 10em;
            text-align: right;
            display: inline-block;
        }

        .ng-cloak {
            display: none !important;
        }

        .table th {
            white-space: nowrap;
            font-weight: 400;
        }

        .table td {
            padding-left: 20px !important;
        }

        .table td.small-data {
            white-space: nowrap;
        }

        @media (max-width:767px) {
            .table-responsive>.table>tbody>tr>td {
                /* revert a bootstrap setting e.g.:
                 * for mobile phones to allow linebreaks in long repo folder/shared with
                 * columns. */
                white-space: normal;
            }
        }
    </style>
    <link href="font/raleway.css" rel="stylesheet">
    <link href="overrides.css" rel="stylesheet">
</head>

<body>
@@ -100,21 +26,18 @@

<nav class="navbar navbar-top navbar-default" role="navigation">
    <div class="container">
        <span class="navbar-brand"><img class="logo" src="logo-text-64.png" height="32" width="117"/></span>
        <span class="navbar-brand"><img class="logo" src="img/logo-text-64.png" height="32" width="117"/></span>
        <p class="navbar-text hidden-xs">{{thisNodeName()}}</p>
        <ul class="nav navbar-nav navbar-right">
            <li ng-if="upgradeInfo.newer">
                <button type="button" class="btn navbar-btn btn-default" href="" ng-click="upgrade()">
                    <span class="glyphicon glyphicon-chevron-up"></span> 
                    <span translate translate-value-version="{{upgradeInfo.latest}}">Upgrade To {%version%}</span>
                <button type="button" class="btn navbar-btn btn-primary btn-sm" href="" ng-click="upgrade()">
                    <span class="glyphicon glyphicon-chevron-up"></span> 
                    <span translate translate-value-version="{{upgradeInfo.latest}}">Upgrade To {%version%}</span>
                </button>
            </li>
            <li class="dropdown">
                <a href="#" class="dropdown-toggle" data-toggle="dropdown"><span translate>Edit</span> <b class="caret"></b></a>
                <a href="#" class="dropdown-toggle" data-toggle="dropdown"><span class="glyphicon glyphicon-cog"></span></a>
                <ul class="dropdown-menu">
                    <li><a href="" ng-click="addRepo()"><span class="glyphicon glyphicon-hdd"></span> <span translate>Add Repository</span></a></li>
                    <li><a href="" ng-click="addNode()"><span class="glyphicon glyphicon-retweet"></span> <span translate>Add Node</span></a></li>
                    <li class="divider"></li>
                    <li><a href="" ng-click="editSettings()"><span class="glyphicon glyphicon-cog"></span> <span translate>Settings</span></a></li>
                    <li><a href="" ng-click="idNode()"><span class="glyphicon glyphicon-qrcode"></span> <span translate>Show ID</span></a></li>
                    <li class="divider"></li>
@@ -156,196 +79,208 @@
<div class="col-md-6">
    <div class="panel-group" id="repositories">
        <div class="panel panel-{{repoClass(repo.ID)}}" ng-repeat="repo in repoList()">
            <div class="panel-heading">
            <div class="panel-heading" data-toggle="collapse" data-parent="#repositories" href="#repo-{{$index}}" style="cursor: pointer">
                <h3 class="panel-title">
                    <a data-toggle="collapse" data-parent="#repositories" href="#repo-{{$index}}">
                        <span class="glyphicon glyphicon-hdd"></span> {{repo.ID}}
                        <span class="pull-right hidden-xs" ng-switch="repoStatus(repo.ID)">
                            <span translate ng-switch-when="unknown">Unknown</span>
                            <span translate ng-switch-when="stopped">Stopped</span>
                            <span translate ng-switch-when="scanning">Scanning</span>
                            <span ng-switch-when="syncing">
                                <span translate>Syncing</span>
                                ({{syncPercentage(repo.ID)}}%)
                            </span>
                            <span ng-switch-when="idle">
                                <span translate>Idle</span>
                                ({{syncPercentage(repo.ID)}}%)
                            </span>
                    <span class="glyphicon glyphicon-hdd"></span> {{repo.ID}}
                    <span class="pull-right hidden-xs" ng-switch="repoStatus(repo.ID)">
                        <span translate ng-switch-when="unknown">Unknown</span>
                        <span translate ng-switch-when="stopped">Stopped</span>
                        <span translate ng-switch-when="scanning">Scanning</span>
                        <span ng-switch-when="syncing">
                            <span translate>Syncing</span>
                            ({{syncPercentage(repo.ID)}}%)
                        </span>
                    </a>
                        <span ng-switch-when="idle">
                            <span translate>Idle</span>
                            ({{syncPercentage(repo.ID)}}%)
                        </span>
                    </span>
                </h3>
            </div>
            <div id="repo-{{$index}}" class="panel-collapse collapse">
            <div id="repo-{{$index}}" class="panel-collapse collapse" ng-class="{in: $index === 0}">
                <div class="panel-body">
                    <div class="table-responsive">
                        <table class="table table-condensed table-striped">
                            <tbody>
                                <tr>
                                    <th><span class="glyphicon glyphicon-tag"></span> <span translate>Repository ID</span></th>
                                    <td class="text-right">{{repo.ID}}</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-folder-open"></span> <span translate>Folder</span></th>
                                    <td class="text-right">{{repo.Directory}}</td>
                                </tr>
                                <tr ng-if="model[repo.ID].invalid">
                                    <th><span class="glyphicon glyphicon-warning-sign"></span> <span translate>Error</span></th>
                                    <td class="text-right">{{model[repo.ID].invalid}}</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-globe"></span> <span translate>Global Repository</span></th>
                                    <td class="text-right">{{model[repo.ID].globalFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].globalBytes | binary}}B</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-home"></span> <span translate>Local Repository</span></th>
                                    <td class="text-right">{{model[repo.ID].localFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].localBytes | binary}}B</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Out Of Sync</span></th>
                                    <td class="text-right">
                        <table class="table table-condensed table-striped">
                            <tbody>
                                <tr>
                                    <th><span class="glyphicon glyphicon-tag"></span> <span translate>Repository ID</span></th>
                                    <td class="text-right">{{repo.ID}}</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-folder-open"></span> <span translate>Folder</span></th>
                                    <td class="text-right">{{repo.Directory}}</td>
                                </tr>
                                <tr ng-if="model[repo.ID].invalid">
                                    <th><span class="glyphicon glyphicon-warning-sign"></span> <span translate>Error</span></th>
                                    <td class="text-right">{{model[repo.ID].invalid}}</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-globe"></span> <span translate>Global Repository</span></th>
                                    <td class="text-right">{{model[repo.ID].globalFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].globalBytes | binary}}B</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-home"></span> <span translate>Local Repository</span></th>
                                    <td class="text-right">{{model[repo.ID].localFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].localBytes | binary}}B</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Out Of Sync</span></th>
                                    <td class="text-right">
                                        <a ng-if="model[repo.ID].needFiles > 0" ng-click="showNeed(repo.ID)" href="">{{model[repo.ID].needFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].needBytes | binary}}B</a>
                                        <span ng-if="model[repo.ID].needFiles == 0">0 <span translate>items</span>, 0 B</span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-lock"></span> <span translate>Master Repo</span></th>
                                    <td class="text-right">
                                        <span translate ng-if="repo.ReadOnly">Yes</span>
                                        <span translate ng-if="!repo.ReadOnly">No</span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-unchecked"></span> <span translate>Ignore Permissions</span></th>
                                    <td class="text-right">
                                        <span translate ng-if="repo.IgnorePerms">Yes</span>
                                        <span translate ng-if="!repo.IgnorePerms">No</span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-share-alt"></span> <span translate>Shared With</span></th>
                                    <td class="text-right">{{sharesRepo(repo)}}</td>
                                </tr>
                            </tbody>
                        </table>
                    </div>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-lock"></span> <span translate>Master Repo</span></th>
                                    <td class="text-right">
                                        <span translate ng-if="repo.ReadOnly">Yes</span>
                                        <span translate ng-if="!repo.ReadOnly">No</span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-unchecked"></span> <span translate>Ignore Permissions</span></th>
                                    <td class="text-right">
                                        <span translate ng-if="repo.IgnorePerms">Yes</span>
                                        <span translate ng-if="!repo.IgnorePerms">No</span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-refresh"></span> <span translate>Rescan Interval</span></th>
                                    <td class="text-right">{{repo.RescanIntervalS}} s</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-share-alt"></span> <span translate>Shared With</span></th>
                                    <td class="text-right">{{sharesRepo(repo)}}</td>
                                </tr>
                            </tbody>
                        </table>
                </div>
                <div class="panel-footer">
                    <button class="btn btn-sm btn-danger" ng-if="repo.ReadOnly && model[repo.ID].needFiles > 0" ng-click="override(repo.ID)" href=""><span class="glyphicon glyphicon-upload"></span> <span translate>Override Changes</span></button>
                    <span class="pull-right">
                        <a class="btn btn-sm btn-default" href="" ng-show="repoStatus(repo.ID) == 'idle'" ng-click="rescanRepo(repo.ID)"><span class="glyphicon glyphicon-refresh"></span> <span translate>Rescan</span></a>
                        <a class="btn btn-sm btn-primary" href="" ng-click="editRepo(repo)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a>
                        <a class="btn btn-sm btn-danger" ng-if="repo.ReadOnly && model[repo.ID].needFiles > 0" ng-click="override(repo.ID)" href=""><span class="glyphicon glyphicon-upload"></span> <span translate>Override Changes</span></a>
                        <button class="btn btn-sm btn-default" href="" ng-show="repoStatus(repo.ID) == 'idle'" ng-click="rescanRepo(repo.ID)"><span class="glyphicon glyphicon-refresh"></span> <span translate>Rescan</span></button>
                        <button class="btn btn-sm btn-default" href="" ng-click="editRepo(repo)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></button>
                    </span>
                    <div class="clearfix"></div>
                </div>
            </div>
        </div>
    </div>
    <button class="btn btn-sm btn-default pull-right" ng-click="addRepo()"><span class="glyphicon glyphicon-plus"></span> <span translate>Add Repository</span></button>
    <div class="clearfix"></div>
    <hr class="visible-sm"/>
</div>

<!-- Node list (top right) -->

<!-- This node -->

<div class="col-md-6">
    <div class="panel-group" id="nodes">
        <div class="panel panel-default" ng-repeat="nodeCfg in [thisNode()]">
            <div class="panel-heading">
                <h3 class="panel-title">
                    <a data-toggle="collapse" data-parent="#nodes" href="#node-this"><span class="glyphicon glyphicon-home"></span> {{nodeName(nodeCfg)}}</a>
                </h3>
            </div>
            <div id="node-this" class="panel-collapse collapse in">
                <div class="panel-body">
                    <div class="table-responsive">
                        <table class="table table-condensed table-striped">
                            <tbody>
                                <tr>
                                    <th><span class="glyphicon glyphicon-th"></span> <span translate>RAM Utilization</span></th>
                                    <td class="text-right">{{system.sys | binary}}B</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-tasks"></span> <span translate>CPU Utilization</span></th>
                                    <td class="text-right">{{system.cpuPercent | alwaysNumber | natural:1}}%</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Download Rate</span></th>
                                    <td class="text-right">{{connections['total'].inbps | metric}}bps ({{connections['total'].InBytesTotal | binary}}B)</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-cloud-upload"></span> <span translate>Upload Rate</span></th>
                                    <td class="text-right">{{connections['total'].outbps | metric}}bps ({{connections['total'].OutBytesTotal | binary}}B)</td>
                                </tr>
                                <tr ng-if="system.extAnnounceOK != undefined">
                                    <th><span class="glyphicon glyphicon-bullhorn"></span> <span translate>Announce Server</span></th>
                                    <td class="text-right">
                                        <span class="data text-success" ng-if="system.extAnnounceOK"><span translate>Online</span></span>
                                        <span class="data text-danger" ng-if="!system.extAnnounceOK"><span translate>Offline</span></span>
                                    </td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-tag"></span> <span translate>Version</span></th>
                                    <td class="text-right">{{version}}</td>
                                </tr>
                            </tbody>
                        </table>
                    </div>
                    <span class="pull-right"><a class="btn btn-sm btn-primary" href="" ng-click="editNode(nodeCfg)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a></span>
                </div>
        <div class="panel panel-default" ng-repeat="nodeCfg in [thisNode()]">
            <div class="panel-heading" data-toggle="collapse" href="#node-this" style="cursor: pointer">
                <h3 class="panel-title">
                    <span class="glyphicon glyphicon-home"></span> {{nodeName(nodeCfg)}}
                </h3>
            </div>
            <div id="node-this" class="panel-collapse collapse in">
                <div class="panel-body">
                    <table class="table table-condensed table-striped">
                        <tbody>
                            <tr>
                                <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Download Rate</span></th>
                                <td class="text-right">{{connections['total'].inbps | metric}}bps ({{connections['total'].InBytesTotal | binary}}B)</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-cloud-upload"></span> <span translate>Upload Rate</span></th>
                                <td class="text-right">{{connections['total'].outbps | metric}}bps ({{connections['total'].OutBytesTotal | binary}}B)</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-th"></span> <span translate>RAM Utilization</span></th>
                                <td class="text-right">{{system.sys | binary}}B</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-dashboard"></span> <span translate>CPU Utilization</span></th>
                                <td class="text-right">{{system.cpuPercent | alwaysNumber | natural:1}}%</td>
                            </tr>
                            <tr ng-if="system.extAnnounceOK != undefined">
                                <th><span class="glyphicon glyphicon-bullhorn"></span> <span translate>Global Discovery Server</span></th>
                                <td class="text-right">
                                    <span class="data text-success" ng-if="system.extAnnounceOK"><span translate>Online</span></span>
                                    <span class="data text-danger" ng-if="!system.extAnnounceOK"><span translate>Offline</span></span>
                                </td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-tag"></span> <span translate>Version</span></th>
                                <td class="text-right">{{version}}</td>
                            </tr>
                        </tbody>
                    </table>
                </div>
            </div>
        </div>

        <!-- Remote nodes -->

        <div class="panel-group" id="nodes">
            <div class="panel panel-{{nodeClass(nodeCfg)}}" ng-repeat="nodeCfg in otherNodes()">
                <div class="panel-heading">
                <div class="panel-heading" data-toggle="collapse" data-parent="#nodes" href="#node-{{$index}}" style="cursor: pointer">
                    <h3 class="panel-title">
                        <a data-toggle="collapse" data-parent="#nodes" href="#node-{{$index}}">
                            <span class="glyphicon glyphicon-retweet"></span>
                            {{nodeName(nodeCfg)}}
                            <span class="pull-right hidden-xs">
                                <span ng-if="connections[nodeCfg.NodeID] && completion[nodeCfg.NodeID]._total == 100">
                                    <span translate>Up to Date</span> (100%)
                                </span>
                                <span ng-if="connections[nodeCfg.NodeID] && completion[nodeCfg.NodeID]._total < 100">
                                    <span translate>Syncing</span> ({{completion[nodeCfg.NodeID]._total | number:0}}%)
                                </span>
                                <span translate ng-if="!connections[nodeCfg.NodeID]">Disconnected</span>
                        <span class="glyphicon glyphicon-retweet"></span> {{nodeName(nodeCfg)}}
                        <span class="pull-right hidden-xs">
                            <span ng-if="connections[nodeCfg.NodeID] && completion[nodeCfg.NodeID]._total == 100">
                                <span translate>Up to Date</span> (100%)
                            </span>
                        </a>
                            <span ng-if="connections[nodeCfg.NodeID] && completion[nodeCfg.NodeID]._total < 100">
                                <span translate>Syncing</span> ({{completion[nodeCfg.NodeID]._total | number:0}}%)
                            </span>
                            <span translate ng-if="!connections[nodeCfg.NodeID]">Disconnected</span>
                        </span>
                    </h3>
                </div>
                <div id="node-{{$index}}" class="panel-collapse collapse">
                    <div class="panel-body">
                        <div class="table-responsive">
                            <table class="table table-condensed table-striped">
                                <tbody>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-link"></span> <span translate>Address</span></th>
                                        <td class="text-right">{{nodeAddr(nodeCfg)}}</td>
                                    </tr>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-comment"></span> <span translate>Synchronization</span></th>
                                        <td class="text-right">{{completion[nodeCfg.NodeID]._total | alwaysNumber | number:0}}%</td>
                                    </tr>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-compressed"></span> <span translate>Use Compression</span></th>
                                        <td translate ng-if="nodeCfg.Compression" class="text-right">Yes</td>
                                        <td translate ng-if="!nodeCfg.Compression" class="text-right">No</td>
                                    </tr>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Download Rate</span></th>
                                        <td class="text-right">{{connections[nodeCfg.NodeID].inbps | metric}}bps ({{connections[nodeCfg.NodeID].InBytesTotal | binary}}B)</td>
                                    </tr>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-cloud-upload"></span> <span translate>Upload Rate</span></th>
                                        <td class="text-right">{{connections[nodeCfg.NodeID].outbps | metric}}bps ({{connections[nodeCfg.NodeID].OutBytesTotal | binary}}B)</td>
                                    </tr>
                                    <tr>
                                        <th><span class="glyphicon glyphicon-tag"></span> <span translate>Version</span></th>
                                        <td class="text-right">{{nodeVer(nodeCfg)}}</td>
                                    </tr>
                                </tbody>
                            </table>
                        </div>
                        <span class="pull-right"><a class="btn btn-sm btn-primary" href="" ng-click="editNode(nodeCfg)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a></span>
                        <table class="table table-condensed table-striped">
                            <tbody>
                                <tr ng-if="connections[nodeCfg.NodeID]">
                                    <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Download Rate</span></th>
                                    <td class="text-right">{{connections[nodeCfg.NodeID].inbps | metric}}bps ({{connections[nodeCfg.NodeID].InBytesTotal | binary}}B)</td>
                                </tr>
                                <tr ng-if="connections[nodeCfg.NodeID]">
                                    <th><span class="glyphicon glyphicon-cloud-upload"></span> <span translate>Upload Rate</span></th>
                                    <td class="text-right">{{connections[nodeCfg.NodeID].outbps | metric}}bps ({{connections[nodeCfg.NodeID].OutBytesTotal | binary}}B)</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-link"></span> <span translate>Address</span></th>
                                    <td class="text-right">{{nodeAddr(nodeCfg)}}</td>
                                </tr>
                                <tr ng-if="connections[nodeCfg.NodeID]">
                                    <th><span class="glyphicon glyphicon-comment"></span> <span translate>Synchronization</span></th>
                                    <td class="text-right">{{completion[nodeCfg.NodeID]._total | alwaysNumber | number:0}}%</td>
                                </tr>
                                <tr>
                                    <th><span class="glyphicon glyphicon-compressed"></span> <span translate>Use Compression</span></th>
                                    <td translate ng-if="nodeCfg.Compression" class="text-right">Yes</td>
                                    <td translate ng-if="!nodeCfg.Compression" class="text-right">No</td>
                                </tr>
                                <tr ng-if="connections[nodeCfg.NodeID]">
                                    <th><span class="glyphicon glyphicon-tag"></span> <span translate>Version</span></th>
                                    <td class="text-right">{{connections[nodeCfg.NodeID].ClientVersion}}</td>
                                </tr>
                                <tr ng-if="!connections[nodeCfg.NodeID]">
                                    <th><span class="glyphicon glyphicon-eye-open"></span> <span translate>Last seen</span></th>
                                    <td translate ng-if="!stats[nodeCfg.NodeID].LastSeenDays || stats[nodeCfg.NodeID].LastSeenDays >= 365" class="text-right">Never</td>
                                    <td ng-if="stats[nodeCfg.NodeID].LastSeenDays < 365" class="text-right">{{stats[nodeCfg.NodeID].LastSeen | date:"yyyy-MM-dd HH:mm"}}</td>
                                </tr>
                            </tbody>
                        </table>
                    </div>
                    <div class="panel-footer">
                        <span class="pull-right"><a class="btn btn-sm btn-default" href="" ng-click="editNode(nodeCfg)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a></span>
                        <div class="clearfix"></div>
                    </div>
                </div>
            </div>
        </div>
    </div>
    <button class="btn btn-sm btn-default pull-right" ng-click="addNode()"><span class="glyphicon glyphicon-plus"></span> <span translate>Add Node</span></button>
    <div class="clearfix"></div>
</div>
</div> <!-- /row -->
@@ -458,9 +393,9 @@
    </form>
</div>
<div class="modal-footer">
    <button type="button" class="btn btn-primary" ng-click="saveNode()" ng-disabled="nodeEditor.$invalid"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
    <button ng-if="editingExisting && !editingSelf" type="button" class="btn btn-danger pull-left" ng-click="deleteNode()"><span class="glyphicon glyphicon-minus"></span> <span translate>Delete</span></button>
    <button type="button" class="btn btn-primary btn-sm" ng-click="saveNode()" ng-disabled="nodeEditor.$invalid"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default btn-sm" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
    <button ng-if="editingExisting && !editingSelf" type="button" class="btn btn-danger pull-left btn-sm" ng-click="deleteNode()"><span class="glyphicon glyphicon-minus"></span> <span translate>Delete</span></button>
</div>
</div>
</div>
@@ -491,12 +426,19 @@
</div>
<div class="form-group" ng-class="{'has-error': repoEditor.repoPath.$invalid && repoEditor.repoPath.$dirty}">
    <label translate for="repoPath">Repository Path</label>
    <input name="repoPath" placeholder="~/Documents" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
    <input name="repoPath" placeholder="~/Documents" ng-disabled="editingExisting" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
    <p class="help-block">
        <span translate ng-if="repoEditor.repoPath.$valid || repoEditor.repoPath.$pristine">Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for</span> <code>{{system.tilde}}</code>.
        <span translate ng-if="repoEditor.repoPath.$error.required && repoEditor.repoPath.$dirty">The repository path cannot be blank.</span>
    </p>
</div>
<div class="form-group" ng-class="{'has-error': repoEditor.rescanIntervalS.$invalid && repoEditor.rescanIntervalS.$dirty}">
    <label for="rescanIntervalS"><span translate>Rescan Interval</span> (s)</label>
    <input name="rescanIntervalS" placeholder="60" id="rescanIntervalS" class="form-control" type="number" ng-model="currentRepo.RescanIntervalS" required min="5"></input>
    <p class="help-block">
        <span translate ng-if="!repoEditor.rescanIntervalS.$valid && repoEditor.rescanIntervalS.$dirty">The rescan interval must be at least 5 seconds.</span>
    </p>
</div>
</div>
</div>
<div class="row">
@@ -529,14 +471,25 @@
</div>
<div class="col-md-6">
    <div class="form-group">
        <div class="checkbox">
        <label translate>File Versioning</label>
        <div class="radio">
            <label>
                <input type="checkbox" ng-model="currentRepo.simpleFileVersioning"> <span translate>File Versioning</span>
                <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="none"> <span translate>No File Versioning</span>
            </label>
        </div>
        <div class="radio">
            <label>
                <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="simple"> <span translate>Simple File Versioning</span>
            </label>
        </div>
        <div class="radio">
            <label>
                <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="staggered"> <span translate>Staggered File Versioning</span>
            </label>
        </div>
        <p translate class="help-block">Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</p>
    </div>
    <div class="form-group" ng-if="currentRepo.simpleFileVersioning" ng-class="{'has-error': repoEditor.simpleKeep.$invalid && repoEditor.simpleKeep.$dirty}">
    <div class="form-group" ng-if="currentRepo.FileVersioningSelector=='simple'" ng-class="{'has-error': repoEditor.simpleKeep.$invalid && repoEditor.simpleKeep.$dirty}">
        <p translate class="help-block">Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</p>
        <label translate for="simpleKeep">Keep Versions</label>
        <input name="simpleKeep" id="simpleKeep" class="form-control" type="number" ng-model="currentRepo.simpleKeep" required min="1"></input>
        <p class="help-block">
@@ -545,16 +498,30 @@
            <span translate ng-if="repoEditor.simpleKeep.$error.min && repoEditor.simpleKeep.$dirty">You must keep at least one version.</span>
        </p>
    </div>

    <div class="form-group" ng-if="currentRepo.FileVersioningSelector=='staggered'" ng-class="{'has-error': repoEditor.staggeredMaxAge.$invalid && repoEditor.staggeredMaxAge.$dirty}">
        <p class="help-block"><span translate>Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</span> <span translate>Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.</span></p>
        <p translate class="help-block">The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.</p>
        <label translate for="staggeredMaxAge">Maximum Age</label>
        <input name="staggeredMaxAge" id="staggeredMaxAge" class="form-control" type="number" ng-model="currentRepo.staggeredMaxAge" required></input>
        <p class="help-block">
            <span translate ng-if="repoEditor.staggeredMaxAge.$valid || repoEditor.staggeredMaxAge.$pristine">The maximum time to keep a version (in days, set to 0 to keep versions forever).</span>
            <span translate ng-if="repoEditor.staggeredMaxAge.$error.required && repoEditor.staggeredMaxAge.$dirty">The maximum age must be a number and cannot be blank.</span>
        </p>
    </div>
    <div class="form-group" ng-if="currentRepo.FileVersioningSelector == 'staggered'">
        <label translate for="staggeredVersionsPath">Versions Path</label>
        <input name="staggeredVersionsPath" placeholder="" id="staggeredVersionsPath" class="form-control" type="text" ng-model="currentRepo.staggeredVersionsPath"></input>
        <p translate class="help-block">Path where versions should be stored (leave empty for the default .stversions folder in the repository).</p>
    </div>
</div>
</div>
</form>
<div translate ng-show="!editingExisting">When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.</div>
</div>
<div class="modal-footer">
    <button type="button" class="btn btn-primary" ng-click="saveRepo()" ng-disabled="repoEditor.$invalid"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
    <button ng-if="editingExisting" type="button" class="btn btn-danger pull-left" ng-click="deleteRepo()"><span class="glyphicon glyphicon-minus"></span> <span translate>Delete</span></button>
    <button type="button" class="btn btn-primary btn-sm" ng-click="saveRepo()" ng-disabled="repoEditor.$invalid"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default btn-sm" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
    <button ng-if="editingExisting" type="button" class="btn btn-danger pull-left btn-sm" ng-click="deleteRepo()"><span class="glyphicon glyphicon-minus"></span> <span translate>Delete</span></button>
</div>
</div>
</div>
@@ -573,32 +540,22 @@
<div class="row">

<div class="col-md-6">
    <div class="form-group">
        <label translate for="NodeName">Node Name</label>
        <input id="NodeName" class="form-control" type="text" ng-model="tmpOptions.NodeName">
    </div>
    <div class="form-group">
        <label translate for="ListenStr">Sync Protocol Listen Addresses</label>
        <input id="ListenStr" class="form-control" type="text" ng-model="tmpOptions.ListenStr">
    </div>
    <div class="form-group">
        <label translate for="MaxRecvKbps">Incoming Rate Limit (KiB/s)</label>
        <input id="MaxRecvKbps" class="form-control" type="number" ng-model="tmpOptions.MaxRecvKbps">
    </div>
    <div class="form-group">
        <label translate for="MaxSendKbps">Outgoing Rate Limit (KiB/s)</label>
        <input id="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.MaxSendKbps">
    </div>
    <div class="form-group">
        <label translate for="RescanIntervalS">Rescan Interval (s)</label>
        <input id="RescanIntervalS" class="form-control" type="number" ng-model="tmpOptions.RescanIntervalS">
    </div>
    <!--
    <div class="form-group">
        <label translate for="ReconnectIntervalS">Reconnect Interval (s)</label>
        <input id="ReconnectIntervalS" class="form-control" type="number" ng-model="tmpOptions.ReconnectIntervalS">
    </div>
    <div class="form-group">
        <label translate for="ParallelRequests">Max Outstanding Requests</label>
        <input id="ParallelRequests" class="form-control" type="number" ng-model="tmpOptions.ParallelRequests">
    </div>
    <div class="form-group">
        <label translate for="MaxChangeKbps">Max File Change Rate (KiB/s)</label>
        <input id="MaxChangeKbps" class="form-control" type="number" ng-model="tmpOptions.MaxChangeKbps">
    </div>
    -->
    <div class="form-group">
        <div class="checkbox">
            <label>
@@ -614,10 +571,6 @@
            </label>
        </div>
    </div>
    <div class="form-group">
        <label translate for="LocalAnnPort">Local Discovery Port</label>
        <input ng-disabled="!tmpOptions.LocalAnnEnabled" id="LocalAnnPort" class="form-control" type="number" ng-model="tmpOptions.LocalAnnPort">
    </div>
    <div class="form-group">
        <div class="checkbox">
            <label>
@@ -647,7 +600,7 @@
    <div class="form-group">
        <div class="checkbox">
            <label>
                <span translate>Use HTTPS for GUI</span> <input id="UseTLS" type="checkbox" ng-model="tmpGUI.UseTLS">
                <span translate>Use HTTPS for GUI</span> <input id="UseTLS" type="checkbox" ng-model="tmpGUI.UseTLS">
            </label>
        </div>
    </div>
@@ -661,7 +614,7 @@
    <div class="form-group">
        <div class="checkbox">
            <label>
                <span translate>Anonymous Usage Reporting</span> <input id="UREnabled" type="checkbox" ng-model="tmpOptions.UREnabled">
                <span translate>Anonymous Usage Reporting</span> <input id="UREnabled" type="checkbox" ng-model="tmpOptions.UREnabled"> (<a translate ng-click="showURPreview()" href="#">Preview</a>)
            </label>
        </div>
    </div>
@@ -679,8 +632,8 @@
    </form>
</div>
<div class="modal-footer">
    <button type="button" class="btn btn-primary" ng-click="saveSettings()"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
    <button type="button" class="btn btn-primary btn-sm" ng-click="saveSettings()"><span class="glyphicon glyphicon-ok"></span> <span translate>Save</span></button>
    <button type="button" class="btn btn-default btn-sm" data-dismiss="modal"><span class="glyphicon glyphicon-remove"></span> <span translate>Close</span></button>
</div>
</div>
</div>
@@ -695,14 +648,34 @@
            <h4 translate class="modal-title">Allow Anonymous Usage Reporting?</h4>
        </div>
        <div class="modal-body">
            <p translate>The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.</p>
            <p translate translate-value-url="https://data.syncthing.net">The aggregated statistics are publicly available at {%url%}.</p>
            <button translate type="button" class="btn btn-default" ng-show="!reportPreview" ng-click="showReportPreview()">Preview Usage Report</button>
            <pre ng-if="reportPreview"><small>{{reportData | json}}</small></pre>
            <p translate>The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.</p>
            <p translate translate-value-url="https://data.syncthing.net">The aggregated statistics are publicly available at {%url%}.</p>
            <button translate type="button" class="btn btn-default btn-sm" ng-show="!reportPreview" ng-click="showReportPreview()">Preview Usage Report</button>
            <pre ng-if="reportPreview"><small>{{reportData | json}}</small></pre>
        </div>
        <div class="modal-footer">
            <button type="button" class="btn btn-success" ng-click="acceptUR()"><span class="glyphicon glyphicon-ok"></span> <span translate>Yes</span></button>
            <button type="button" class="btn btn-danger" ng-click="declineUR()"><span class="glyphicon glyphicon-remove"></span> <span translate>No</span></button>
            <button type="button" class="btn btn-success btn-sm" ng-click="acceptUR()"><span class="glyphicon glyphicon-ok"></span> <span translate>Yes</span></button>
            <button type="button" class="btn btn-danger btn-sm" ng-click="declineUR()"><span class="glyphicon glyphicon-remove"></span> <span translate>No</span></button>
        </div>
    </div>
</div>
</div>

<!-- Usage report preview modal -->

<div id="urPreview" class="modal fade" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <div class="modal-header alert alert-success">
                <h4 translate class="modal-title">Anonymous Usage Reporting</h4>
            </div>
            <div class="modal-body">
                <p translate>The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.</p>
                <p translate translate-value-url="https://data.syncthing.net">The aggregated statistics are publicly available at {%url%}.</p>
                <pre><small>{{reportData | json}}</small></pre>
            </div>
            <div class="modal-footer">
                <button type="button" class="btn btn-success btn-sm" data-dismiss="modal"><span class="glyphicon glyphicon-ok"></span> <span translate>OK</span></button>
            </div>
        </div>
    </div>
</div>
@@ -723,7 +696,7 @@
<!-- About modal -->

<modal id="about" large="yes" close="yes" status="info" title="About">
    <h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
    <h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="img/logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
    <hr/>

    <p translate>Copyright © 2014 Jakob Borg and the following Contributors:</p>
@@ -732,23 +705,26 @@
            <ul>
                <li>Aaron Bieber</li>
                <li>Andrew Dunham</li>
                <li>Alexander Graf</li>
                <li>Arthur Axel fREW Schmidt</li>
                <li>Audrius Butkevicius</li>
                <li>Ben Sidhom</li>
                <li>Brandon Philips</li>
                <li>Gilli Sigurdsson</li>
            </ul>
        </div>
    </div>
    <div class="col-md-6">
        <ul>
            <li>James Patterson</li>
            <li>Jens Diemer</li>
            <li>Marcin Dziadus</li>
            <li>Michael Tilli</li>
            <li>Philippe Schommers</li>
            <li>Ryan Sullivan</li>
            <li>Tully Robinson</li>
            <li>Veeti Paananen</li>
        </ul>
    </div>
</div>
</div>
<hr/>

@@ -767,12 +743,12 @@
</modal>

<script src="angular.min.js"></script>
<script src="angular-translate.min.js"></script>
<script src="angular-translate-loader.min.js"></script>
<script src="jquery-2.0.3.min.js"></script>
<script src="angular/angular.min.js"></script>
<script src="angular/angular-translate.min.js"></script>
<script src="angular/angular-translate-loader.min.js"></script>
<script src="jquery/jquery-2.0.3.min.js"></script>
<script src="bootstrap/js/bootstrap.min.js"></script>
<script src="valid-langs.js"></script>
<script src="lang/valid-langs.js"></script>
<script src="app.js"></script>
</body>
</html>
7 gui/lang/README-FIRST.txt (new file)
@@ -0,0 +1,7 @@
All files in this directory are auto generated. Do not change any of
them. To contribute translations, please head over to

https://www.transifex.com/projects/p/syncthing/

Any updates made on Transifex will be automatically pulled into these
files.
138 gui/lang/lang-bg.json (new file)
@@ -0,0 +1,138 @@
{
    "API Key": "API Ключ",
    "About": "За Програмата",
    "Add Node": "Добави Машина",
    "Add Repository": "Добави Папка",
    "Address": "Адрес",
    "Addresses": "Адреси",
    "Allow Anonymous Usage Reporting?": "Разреши анонимен доклад за ползване на програмата?",
    "Announce Server": "Announce Server",
    "Anonymous Usage Reporting": "Анонимен Доклад",
    "Bugs": "Бъгове",
    "CPU Utilization": "Натоварване на Процесора",
    "Close": "Затвори",
    "Connection Error": "Грешка при Свързването",
    "Copyright © 2014 Jakob Borg and the following Contributors:": "Правата запазени © 2014 Jakob Borg и следните Сътрудници:",
    "Delete": "Изтрий",
    "Disconnected": "Прекрати Връзката",
    "Documentation": "Документация",
    "Download Rate": "Скорост на Теглене",
    "Edit": "Промени",
    "Edit Node": "Промени Машината",
    "Edit Repository": "Промени Папката",
    "Enable UPnP": "Включи UPnP",
    "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Въведи \"ip:port\" адреси разделени със запетая или \"dynamic\", за да извършиш автоматична връзка на адреси.",
    "Error": "Грешка",
    "File Versioning": "Файлови Версии",
    "File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
    "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
    "Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други машини, но промени направени на тази машина ще бъдат синхронизирани до другите машини.",
    "Folder": "Папка",
    "GUI Authentication Password": "Парола за Потребителския Интерфейс",
    "GUI Authentication User": "Потребител за Потребителския Интерфейс",
    "GUI Listen Addresses": "Адрес за Свързване с Потребителския Интерфейс",
    "Generate": "Генерирай",
    "Global Discovery": "Глобавно Откриване",
    "Global Discovery Server": "Сървър за Глобално Откриване",
    "Global Repository": "Глобална Папка",
    "Idle": "Без Работа",
    "Ignore Permissions": "Игнорирай Права за Достъп",
    "Incoming Rate Limit (KiB/s)": "Incoming Rate Limit (KiB/s)",
    "Keep Versions": "Пази Версии",
    "Last seen": "Last seen",
    "Latest Release": "Най-новата Версия",
    "Local Discovery": "Локално Откриване",
    "Local Discovery Port": "Порт за Локално Откриване",
    "Local Repository": "Локална Папка",
    "Master Repo": "Главна Папка",
    "Max File Change Rate (KiB/s)": "Макс. Скорост на Промяна (KiB/s)",
    "Max Outstanding Requests": "Макс. Неизпълени Заявки",
    "Maximum Age": "Maximum Age",
    "Never": "Never",
    "No": "Не",
    "No File Versioning": "No File Versioning",
    "Node ID": "Код на Машината",
    "Node Identification": "Идентификация на Машината",
    "Node Name": "Име на Машината",
    "Notice": "Известие",
    "OK": "ОК",
    "Offline": "Не е на линия",
    "Online": "На линия",
    "Out Of Sync": "Не Синхронизиран",
    "Outgoing Rate Limit (KiB/s)": "Лимит на Изходящата Скорост (KiB/s)",
    "Override Changes": "Замени Промените",
    "Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Пътят до папката на този компютър. Ще бъде създадена ако не съществува. Символът тилда (~) може да бъде използван като заместител на",
    "Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
    "Please wait": "Моля изчакай",
    "Preview Usage Report": "Разгледай Доклада за Използване",
    "RAM Utilization": "RAM Натоварване",
    "Reconnect Interval (s)": "Интервал(и) на Свързване",
    "Repository ID": "Идентификатор на Папката",
    "Repository Master": "Главна Папка",
    "Repository Path": "Път до Папката",
    "Rescan": "Повторно Сканиране",
    "Rescan Interval": "Rescan Interval",
    "Rescan Interval (s)": "Интеравал(и) на Сканиране",
    "Restart": "Рестартирай",
    "Restart Needed": "Изискава се Рестартиране",
    "Restarting": "Рестартиране",
    "Save": "Запази",
    "Scanning": "Сканиране",
    "Select the nodes to share this repository with.": "Избери компютрите, с които да споделиш тази папка.",
    "Settings": "Настройки",
    "Share With Nodes": "Сподели с Компютри",
    "Shared With": "Сподел С",
    "Short identifier for the repository. Must be the same on all cluster nodes.": "Кратък идентификатор на папката. Трябва да бъде същият на всички компютри.",
    "Show ID": "Покажи Идентификатора",
    "Shown instead of Node ID in the cluster status.": "Покажи вмест ID-то на Компютъра в статус на клъстъра.",
    "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Покажи вмест ID-то на Компютъра в статус на клъстъра. Ще бъде предлагано на други комютри като име по подразбиране.",
    "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Покажи вмест ID-то на Компютъра в статус на клъстъра. Ще бъде обновено с името по подразбиране изпратено от другия компютър.",
    "Shutdown": "Спри Програмата",
    "Simple File Versioning": "Simple File Versioning",
    "Source Code": "Сорс Код",
    "Staggered File Versioning": "Staggered File Versioning",
    "Start Browser": "Стартирай Браузъра",
    "Stopped": "Спряна",
    "Support / Forum": "Помощ / Форум",
    "Sync Protocol Listen Addresses": "Адрес за слушане на синхронизиращия протокол",
    "Synchronization": "Синхронизация",
    "Syncing": "Синхронизиране",
    "Syncthing has been shut down.": "Syncthing е спрян.",
    "Syncthing includes the following software or portions thereof:": "Syncthing включва следният софтуер пълно или частично:",
    "Syncthing is restarting.": "Syncthing се рестартирва",
    "Syncthing is upgrading.": "Syncthing се обновява.",
    "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing изглежда не е включен, или има проблем с интерент връзката. Повторен опит...",
    "The aggregated statistics are publicly available at {%url%}.": "Сумарната статистика е публично достъпна на {{url}}.",
    "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Конфигурацията е запазена, но не е активирана. Syncthing трябва да рестартира, за да се активира новата конфигурация.",
    "The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Криптираният доклад се изпраща дневно. Използва се, за да следи общи платформи, размери на папки и версии на приложението. Ако събираните данни се променят, ще бъдете информиран с подобен на този диалог.",
    "The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведни код на машината не е валиден. Трябва да бъде 52 символа и да се състои от букви, цифри като интервалите и тиретата са пожелание.",
    "The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведни код на машината не е валиден. Трябва да бъде 52 или 56 символа и да се състои от букви, цифри като интервалите и тиретата са пожелание.",
    "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
    "The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
    "The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
    "The node ID cannot be blank.": "Кодът на машината не може да бъде празен.",
    "The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Кодът на машината, който си въвел може да бъде намерен в \"Промени > Покажи Идентификатора\". Интервалите и тиретата са пожелание(биват прескачани).",
    "The number of old versions to keep, per file.": "Броят стари версии, които да бъдат пазени за всеки файл.",
    "The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
    "The repository ID cannot be blank.": "Полето идентификатор на папка не може д абъде празно.",
    "The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "Идентификаторът на папка трябва да бъде къс(64 символа или по-малко) състоящ се само от букви, цифри, точка(.), тире(-) и подчерта (_).",
    "The repository ID must be unique.": "Идентификаторът на папката тряба да бъде уникален.",
    "The repository path cannot be blank.": "Пътят до папката не може да бъде празен.",
    "The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
    "Unknown": "Неясен",
    "Up to Date": "Актуален",
    "Upgrade To {%version%}": "Обновен До {{version}}",
    "Upgrading": "Обновяване",
    "Upload Rate": "Скорост на Качване",
    "Usage": "Употреба",
    "Use Compression": "Използвай Компресиране",
    "Use HTTPS for GUI": "Използвай HTTPS за Потребителския Интерфейс",
    "Version": "Версия",
    "Versions Path": "Versions Path",
    "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
    "When adding a new node, keep in mind that this node must be added on the other side too.": "Когато добавяш нова машина помни, че твоята машина също трябва да бъде добавена от другата страна.",
||||
"When adding a new node, keep in mind that this node must be added on the other side too.": "Когато добавяш нова машина помни, че твоята машина също трябва да бъде добавена от другата страна.",
|
||||
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Когато добавяш нов идентификатор на папка помни, че той се използва за свързване на папките на различни машини. Главни/малки букви са от значение и трябва да са еднакви на всички машини.",
|
||||
"Yes": "Да",
|
||||
"You must keep at least one version.": "Трябва да пазиш поне една версия.",
|
||||
"items": "артикула"
}
138 gui/lang/lang-ca.json Normal file
@@ -0,0 +1,138 @@
{
"API Key": "Clau API",
"About": "Sobre",
"Add Node": "Afegir Node",
"Add Repository": "Afegir Repositori",
"Address": "Adreça",
"Addresses": "Adreces",
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
"Announce Server": "Servidor d'anunciament",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Bugs": "Bugs",
"CPU Utilization": "Utilització del CPU",
"Close": "Tancar",
"Connection Error": "Error de connexió",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg i els següents contribuïdors:",
"Delete": "Esborrar",
"Disconnected": "Desconnectat",
"Documentation": "Documentació",
"Download Rate": "Taxa de descàrrega",
"Edit": "Editar",
"Edit Node": "Editar Node",
"Edit Repository": "Editar Repositori",
"Enable UPnP": "Habilitar UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
"Error": "Error",
"File Versioning": "Versionat de Fitxers",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Els bits de permisos dels fitxers són ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan són substituïts o esborrats per syncthing.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres nodes, però els canvis fets en aquest node seran enviats a la resta del cluster.",
"Folder": "Carpeta",
"GUI Authentication Password": "Contrasenya d'autenticació GUI",
"GUI Authentication User": "Usuari d'autenticació GUI",
"GUI Listen Addresses": "Adreces d'escolta del GUI",
"Generate": "Generar",
"Global Discovery": "Descobriment Global",
"Global Discovery Server": "Servidor de Descobriment Global",
"Global Repository": "Repositori Global",
"Idle": "Inactiu",
"Ignore Permissions": "Ignora Permisos",
"Incoming Rate Limit (KiB/s)": "Incoming Rate Limit (KiB/s)",
"Keep Versions": "Mantenir Versions",
"Last seen": "Vist per última vegada",
"Latest Release": "Última publicació",
"Local Discovery": "Descobriment Local",
"Local Discovery Port": "Port de Descobriment Local",
"Local Repository": "Repositori Local",
"Master Repo": "Repo Mestre",
"Max File Change Rate (KiB/s)": "Taxa Màxima d'intercanvi de fitxer (KiB/s)",
"Max Outstanding Requests": "Màxim de Peticions Pendents",
"Maximum Age": "Antiguitat Màxima",
"Never": "Mai",
"No": "No",
"No File Versioning": "Sense Versionat de Fitxer",
"Node ID": "ID del Node",
"Node Identification": "Identificació del Node",
"Node Name": "Nom Del Node",
"Notice": "Avís",
"OK": "OK",
"Offline": "Desconnectat",
"Online": "Connectat",
"Out Of Sync": "Fora de la Sincronització",
"Outgoing Rate Limit (KiB/s)": "Taxa Límit de Sortida (KiB/s)",
"Override Changes": "Sobreescriure Canvis",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta del repositori a l'equip local. Si no existeix serà creada. El caràcter (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Ruta on les versions s'haurien de guardar (deixa-ho buit per fer servir el directori .stversions per defecte al repositori).",
"Please wait": "Si us plau, espera",
"Preview Usage Report": "Vista Prèvia de l'Informe d'Ús",
"RAM Utilization": "Utilització de la RAM",
"Reconnect Interval (s)": "Interval de Reconnexió (s)",
"Repository ID": "ID del Repositori",
"Repository Master": "Repositori Mestre",
"Repository Path": "Ruta del Repositori",
"Rescan": "Re-escanejar",
"Rescan Interval": "Interval de re-escaneig",
"Rescan Interval (s)": "Interval de re-escaneig (s)",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
"Restarting": "Reiniciant",
"Save": "Guardar",
"Scanning": "Escanejant",
"Select the nodes to share this repository with.": "Seleccionar els nodes amb els que es comparteix el repositori.",
"Settings": "Preferències",
"Share With Nodes": "Compartir Amb Els Nodes",
"Shared With": "Compartit Amb",
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identificador curt pel repositori. Ha de ser el mateix per tots els nodes del cluster.",
"Show ID": "Mostrar ID",
"Shown instead of Node ID in the cluster status.": "Mostrat en comptes del ID del Node en l'estat del cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Mostrat en comptes del ID del Node en l'estat del cluster. S'anunciarà als altres nodes com a nom per defecte opcional.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Mostrat en comptes del ID del Node en l'estat del cluster. S'actualitzarà al nom que anuncia el node si es deixa buit.",
"Shutdown": "Apagar",
"Simple File Versioning": "Versionat de Fitxers Senzill",
"Source Code": "Codi Font",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Stopped": "Aturat",
"Support / Forum": "Suport / Fòrum",
"Sync Protocol Listen Addresses": "Adreces d'escolta del Protocol Sync",
"Synchronization": "Sincronització",
"Syncing": "Sincronitzant",
"Syncthing has been shut down.": "S'ha aturat el Syncthing.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts dels mateixos:",
"Syncthing is restarting.": "Reiniciant Syncthing.",
"Syncthing is upgrading.": "Actualitzant Syncthing.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant…",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardat però no s'ha activat. S'ha de reiniciar el Syncthing per activar la nova configuració.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de repositoris i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del Node introduït no sembla vàlid. Hauria de tenir 52 caràcters amb lletres i números, els espais i els guions són opcionals.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del Node introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números, els espais i els guions són opcionals.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es fan servir els següents intervals: per la primera hora es manté una versió cada 30 segons, pel primer dia es manté una versió cada hora, pels primers 30 dies es manté una versió cada dia, fins el màxim d'antiguitat es manté una versió cada setmana.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The node ID cannot be blank.": "El ID del node no pot estar en blanc.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "El ID del node per introduir aquí es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre node. Els espais i els guions són opcionals (s'ignoren).",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The repository ID cannot be blank.": "El ID del repositori no pot estar en blanc.",
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "El ID del repositori ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), el guió (-) i el guió baix (_).",
"The repository ID must be unique.": "El ID del repositori ha de ser únic.",
"The repository path cannot be blank.": "La ruta del repositori no pot estar en blanc.",
"The rescan interval must be at least 5 seconds.": "L'interval de re-escaneig ha de ser com a mínim de 5 segons.",
"Unknown": "Desconegut",
"Up to Date": "Actualitzat",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Taxa de Pujada",
"Usage": "Ús",
"Use Compression": "Utilitza compressió",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Version": "Versió",
"Versions Path": "Carpeta de les Versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions són automàticament eliminades si són més antigues que el màxim d'antiguitat o si excedeixen del nombre de fitxers permesos en un interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Quan s'afegeix un nou node, recorda que aquest node s'ha d'afegir també a l'altra banda.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Quan s'afegeix un nou repositori recorda que el ID del repositori s'utilitza per lligar repositoris entre nodes. Es distingeix entre majúscules i minúscules i han de coincidir exactament entre tots els nodes.",
"Yes": "Sí",
"You must keep at least one version.": "Has de mantenir com a mínim una versió.",
"items": "Elements"
}
Some files were not shown because too many files have changed in this diff.