Mirror of https://github.com/syncthing/syncthing.git (synced 2026-01-03 19:39:20 -05:00)

Compare commits — 194 commits
Commits (abbreviated SHA1):

97cb3fa5a5, b5368db704, 8c442b72f3, f8f6791d39, 0c09f077aa, af2831d7b6,
64d5d4aec7, 619a6b2adb, 33a26bc0cf, b445a7c4d3, e6892d0c3e, 33e9a88b56,
df00a2251e, 92c44c8abe, 8e4f7bbd3e, a40217cf07, e586fda5f2, a58564ff88,
89885b9fb9, 5c7d977ae0, 2cd3ee9698, dd3080e018, 5915e8e86a, 3c67c06654,
76232ca573, 5235e82bda, 10f0713257, e9c7970ea4, 1a6ac4aeb1, f633bdddf0,
de0b91d157, 2e77e498f5, 4ac67eb1f9, 2b536de37f, 2ffa92ba1b, 6ecddd8388,
bd2772ea4c, 92bf79d53b, eebe0eeb71, 1068eaa0b9, faac3e7d7c, dab4340207,
fd2567748f, c2daedbd11, 7c604beb73, 8c42aea827, cf1bfdfb61, 75b26513e1,
6c09a77a97, 67389c39fb, c326103e6e, c2120a16da, 258ad4352e, 435d3958f4,
b0408ef5c6, 1c41b0bc2f, aa827f3042, f44f5964bb, 91ba93bd7a, 0abe4cefb4,
bccd460f3b, d1023004e1, 04a5f9cb04, 9818e2b550, fe43e3b89d, e1f1ae041f,
5bcf26e324, 5f47a8149f, 00b662b53a, faf519ab1b, fce73f6f17, 887890baf5,
c66b24feeb, 84c6f147ad, 0cdb0daa8c, eee702f299, df65247325, 1a174e75d3,
9e1fd3454f, 3b1603cadf, 8803bac708, 3a01eaa4a6, 9f84c1c448, dda0390156,
c74509dd5f, f61bbb2ff4, e7f60161a3, ebec4fbc24, 1d4105ae3d, 586d49f0c3,
5b0fab0697, 2b3359dff3, 63203aa14c, 716a8329c2, dab0aec85e, 1f1ab017c0,
b6912ef95e, db54dca694, 0e751b983c, 997b20a975, 386f9c42c2, cfae06db65,
44260b7b5c, 13063b957f, ee05e12480, 5538545fb0, bc1167c2c5, c57656e4c3,
264400a984, 408db4eb1d, 9347f223ef, 518aa30c9c, 6bbf1f9355, b221e4d445,
580fccbfca, 045916efcc, 4f92482294, 2f055a75a0, f0621207e3, d657bc4e3d,
a1fd07b27c, 52219c5f3f, 1a66461e07, d20df12168, 668b429615, 7db528be39,
60f760ee49, 884aaab751, e968560ea4, 07caaa96e4, e8a679c280, bc885f1d08,
f2f051d6de, 49a0bfccba, 0c1e60894f, ace87ad7bb, 50f0097843, 32a9466277,
1ee3407946, f1120d7aa9, 2e7d6b2f99, dfef929187, e78d9ad592, 9f2948f595,
198da910ed, 5f1bf9d9d6, 798c4aef9a, f80f5b3bda, cbb07b0d67, 7cc9921615,
7555fe065e, d977f4278e, 870e3ca893, 213acaee3b, 58381496a2, 5981e42aed,
3c9165d295, 60d0ef93ac, f45d5b0066, b71306480f, 0c7771ccc5, dc9df0a79a,
17cd49fbdc, ad273adb78, 150e7daf2d, b004155e8f, 92eed3b33b, fe7b77198c,
f51b775698, 939dd5cb31, adcbe13ecd, 8976e53998, 97dda6a4bb, 9e395eb883,
60da59623e, 9752ea9ac3, 279693078a, 19b93045a4, 5231a09820, ab952e6103,
a418771c04, b41590ce38, c7dde9499f, 528cbf62ec, 1be4b8bb5d, c832fc9917,
4797a94689, 6948903084, 94164611ae, ae298e8902, 3d8771ecb0, 28db264e90,
6af9fa4b81, c45b18cc75
.gitignore (vendored) — 1 changed line

@@ -9,3 +9,4 @@ coverage.out
 files/pidx
 bin
 perfstats*.csv
+coverage.xml
||||
18
.travis.yml
18
.travis.yml
@@ -1,18 +0,0 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- tip
|
||||
|
||||
install:
|
||||
- export PATH=$PATH:$HOME/gopath/bin
|
||||
- ./build.sh setup
|
||||
|
||||
script:
|
||||
- ./build.sh test-cov
|
||||
|
||||
after_success:
|
||||
- goveralls -coverprofile=coverage.out -service=travis-ci -package=syncthing/syncthing -repotoken="$COVERALLS_TOKEN"
|
||||
|
||||
env:
|
||||
global:
|
||||
secure: "TSPJDsokGCQhKLjgG3c58qHn8Qxhh4zEkWFf0XIOOY2nlDVzdgXDsC+Nq0YaP4106Ee4FgkSefsUTQV5lq/IyYW8elgqlgghjOtOi6RJa14eIS9Yy5Bkx6MXn0QfZX/lG+sy42pKSNk43y9GWx/qrt4nkfTtTvI5cXgwDGYdmX8="
|
||||
README.md

@@ -34,7 +34,10 @@ latest info on Transifex.
 Please do contribute! If you want to contribute but are unsure where to
 start, the [Contributions Needed
 topic](http://discourse.syncthing.net/t/49) lists areas in need of
-attention. In general, any open issues are fair game!
+attention. In general, any open issues are fair game! Be prepared for a
+[certain amount of
+review](https://discourse.syncthing.net/t/733); it's all in the name of
+quality. :)
 
 ## Licensing
 
AUTHORS

@@ -1,12 +1,15 @@
 Aaron Bieber <qbit@deftly.net>
+Alexander Graf <register-github@alex-graf.de>
 Andrew Dunham <andrew@du.nham.ca>
+Audrius Butkevicius <audrius.butkevicius@gmail.com>
-Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com>
+Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
 Ben Sidhom <bsidhom@gmail.com>
 Brandon Philips <brandon@ifup.org>
 Gilli Sigurdsson <gilli@vx.is>
-James Patterson <jamespatterson@operamail.com>
-Jens Diemer <github.com@jensdiemer.de>
+James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
+Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
+Marcin Dziadus <dziadus.marcin@gmail.com>
 Michael Tilli <pyfisch@gmail.com>
 Philippe Schommers <philippe@schommers.be>
 Ryan Sullivan <kayoticsully@gmail.com>
 Tully Robinson <tully@tojr.org>
Godeps/Godeps.json (generated) — 22 changed lines

@@ -1,6 +1,6 @@
 {
     "ImportPath": "github.com/syncthing/syncthing",
-    "GoVersion": "go1.3",
+    "GoVersion": "go1.3.1",
     "Packages": [
         "./cmd/..."
     ],
@@ -12,23 +12,23 @@
     },
     {
         "ImportPath": "code.google.com/p/go.crypto/bcrypt",
-        "Comment": "null-213",
-        "Rev": "aa2644fe4aa50e3b38d75187b4799b1f0c9ddcef"
+        "Comment": "null-216",
+        "Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
     },
     {
         "ImportPath": "code.google.com/p/go.crypto/blowfish",
-        "Comment": "null-213",
-        "Rev": "aa2644fe4aa50e3b38d75187b4799b1f0c9ddcef"
+        "Comment": "null-216",
+        "Rev": "41cd4647fccc72b0b79ef1bd1fe6735e718257cd"
     },
     {
         "ImportPath": "code.google.com/p/go.text/transform",
-        "Comment": "null-89",
-        "Rev": "df15baaf13e3f62b6b7a901e74caa3818a7c0a7c"
+        "Comment": "null-90",
+        "Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
     },
     {
         "ImportPath": "code.google.com/p/go.text/unicode/norm",
-        "Comment": "null-89",
-        "Rev": "df15baaf13e3f62b6b7a901e74caa3818a7c0a7c"
+        "Comment": "null-90",
+        "Rev": "d65bffbc88a153d23a6d2a864531e6e7c2cde59b"
     },
     {
         "ImportPath": "code.google.com/p/snappy-go/snappy",
@@ -41,7 +41,7 @@
     },
     {
         "ImportPath": "github.com/calmh/xdr",
-        "Rev": "694859acb207675085232438780db923ceb43e96"
+        "Rev": "a597b63b87d6140f79084c8aab214b4d533833a1"
     },
     {
         "ImportPath": "github.com/juju/ratelimit",
@@ -49,7 +49,7 @@
     },
     {
         "ImportPath": "github.com/syndtr/goleveldb/leveldb",
-        "Rev": "c5955912e3287376475731c5bc59c79a5a799105"
+        "Rev": "2b99e8d4757bf06eeab1b0485d80b8ae1c088874"
     },
     {
         "ImportPath": "github.com/vitrun/qart/coding",
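Each dependency above is pinned to an exact commit by its "Rev" field. As a minimal, self-contained sketch (the struct names are ours, chosen to mirror the JSON shown above), the manifest format parses straightforwardly with encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Godeps mirrors the top-level shape of Godeps.json shown above.
type Godeps struct {
	ImportPath string
	GoVersion  string
	Packages   []string
	Deps       []Dep
}

// Dep is one pinned dependency: import path, optional tag comment, exact rev.
type Dep struct {
	ImportPath string
	Comment    string `json:",omitempty"`
	Rev        string
}

func main() {
	data := []byte(`{
		"ImportPath": "github.com/syncthing/syncthing",
		"GoVersion": "go1.3.1",
		"Packages": ["./cmd/..."],
		"Deps": [
			{"ImportPath": "github.com/calmh/xdr",
			 "Rev": "a597b63b87d6140f79084c8aab214b4d533833a1"}
		]
	}`)

	var g Godeps
	if err := json.Unmarshal(data, &g); err != nil {
		panic(err)
	}
	for _, d := range g.Deps {
		fmt.Printf("%s pinned at %s\n", d.ImportPath, d.Rev)
	}
}
```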
Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/block.go (generated, vendored) — 95 changed lines

@@ -4,6 +4,22 @@
 
 package blowfish
 
+// getNextWord returns the next big-endian uint32 value from the byte slice
+// at the given position in a circular manner, updating the position.
+func getNextWord(b []byte, pos *int) uint32 {
+    var w uint32
+    j := *pos
+    for i := 0; i < 4; i++ {
+        w = w<<8 | uint32(b[j])
+        j++
+        if j >= len(b) {
+            j = 0
+        }
+    }
+    *pos = j
+    return w
+}
+
 // ExpandKey performs a key expansion on the given *Cipher. Specifically, it
 // performs the Blowfish algorithm's key schedule which sets up the *Cipher's
 // pi and substitution tables for calls to Encrypt. This is used, primarily,
@@ -12,6 +28,7 @@ package blowfish
 func ExpandKey(key []byte, c *Cipher) {
     j := 0
     for i := 0; i < 18; i++ {
+        // Using inlined getNextWord for performance.
         var d uint32
         for k := 0; k < 4; k++ {
             d = d<<8 | uint32(key[j])
@@ -54,86 +71,44 @@ func ExpandKey(key []byte, c *Cipher) {
 func expandKeyWithSalt(key []byte, salt []byte, c *Cipher) {
     j := 0
     for i := 0; i < 18; i++ {
-        var d uint32
-        for k := 0; k < 4; k++ {
-            d = d<<8 | uint32(key[j])
-            j++
-            if j >= len(key) {
-                j = 0
-            }
-        }
-        c.p[i] ^= d
+        c.p[i] ^= getNextWord(key, &j)
     }
 
     j = 0
-    var expandedSalt [4]uint32
-    for i := range expandedSalt {
-        var d uint32
-        for k := 0; k < 4; k++ {
-            d = d<<8 | uint32(salt[j])
-            j++
-            if j >= len(salt) {
-                j = 0
-            }
-        }
-        expandedSalt[i] = d
-    }
-
     var l, r uint32
     for i := 0; i < 18; i += 2 {
-        l ^= expandedSalt[i&2]
-        r ^= expandedSalt[(i&2)+1]
+        l ^= getNextWord(salt, &j)
+        r ^= getNextWord(salt, &j)
         l, r = encryptBlock(l, r, c)
         c.p[i], c.p[i+1] = l, r
     }
 
-    for i := 0; i < 256; i += 4 {
-        l ^= expandedSalt[2]
-        r ^= expandedSalt[3]
+    for i := 0; i < 256; i += 2 {
+        l ^= getNextWord(salt, &j)
+        r ^= getNextWord(salt, &j)
         l, r = encryptBlock(l, r, c)
         c.s0[i], c.s0[i+1] = l, r
-
-        l ^= expandedSalt[0]
-        r ^= expandedSalt[1]
-        l, r = encryptBlock(l, r, c)
-        c.s0[i+2], c.s0[i+3] = l, r
-
     }
 
-    for i := 0; i < 256; i += 4 {
-        l ^= expandedSalt[2]
-        r ^= expandedSalt[3]
+    for i := 0; i < 256; i += 2 {
+        l ^= getNextWord(salt, &j)
+        r ^= getNextWord(salt, &j)
         l, r = encryptBlock(l, r, c)
         c.s1[i], c.s1[i+1] = l, r
-
-        l ^= expandedSalt[0]
-        r ^= expandedSalt[1]
-        l, r = encryptBlock(l, r, c)
-        c.s1[i+2], c.s1[i+3] = l, r
     }
 
-    for i := 0; i < 256; i += 4 {
-        l ^= expandedSalt[2]
-        r ^= expandedSalt[3]
+    for i := 0; i < 256; i += 2 {
+        l ^= getNextWord(salt, &j)
+        r ^= getNextWord(salt, &j)
         l, r = encryptBlock(l, r, c)
         c.s2[i], c.s2[i+1] = l, r
-
-        l ^= expandedSalt[0]
-        r ^= expandedSalt[1]
-        l, r = encryptBlock(l, r, c)
-        c.s2[i+2], c.s2[i+3] = l, r
     }
 
-    for i := 0; i < 256; i += 4 {
-        l ^= expandedSalt[2]
-        r ^= expandedSalt[3]
+    for i := 0; i < 256; i += 2 {
+        l ^= getNextWord(salt, &j)
+        r ^= getNextWord(salt, &j)
        l, r = encryptBlock(l, r, c)
         c.s3[i], c.s3[i+1] = l, r
-
-        l ^= expandedSalt[0]
-        r ^= expandedSalt[1]
-        l, r = encryptBlock(l, r, c)
-        c.s3[i+2], c.s3[i+3] = l, r
     }
 }
 
@@ -182,9 +157,3 @@ func decryptBlock(l, r uint32, c *Cipher) (uint32, uint32) {
     xr ^= c.p[0]
     return xr, xl
 }
-
-func zero(x []uint32) {
-    for i := range x {
-        x[i] = 0
-    }
-}
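The getNextWord helper added above replaces several copies of an inlined byte-gathering loop: it reads four bytes big-endian from the slice and wraps around at the end, which is how Blowfish cycles a short key or salt. A minimal demonstration (the function body is copied from the hunk; the main wrapper is ours):

```go
package main

import "fmt"

// getNextWord assembles a big-endian uint32 from b, wrapping circularly,
// and advances *pos past the bytes it consumed.
func getNextWord(b []byte, pos *int) uint32 {
	var w uint32
	j := *pos
	for i := 0; i < 4; i++ {
		w = w<<8 | uint32(b[j])
		j++
		if j >= len(b) {
			j = 0
		}
	}
	*pos = j
	return w
}

func main() {
	key := []byte{0x01, 0x02, 0x03} // shorter than one word, so reads wrap
	j := 0
	fmt.Printf("%08x\n", getNextWord(key, &j)) // 01020301
	fmt.Printf("%08x\n", getNextWord(key, &j)) // 02030102
}
```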
Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/blowfish_test.go (generated, vendored) — 76 changed lines

@@ -4,9 +4,7 @@
 
 package blowfish
 
-import (
-    "testing"
-)
+import "testing"
 
 type CryptTest struct {
     key []byte
@@ -202,3 +200,75 @@ func TestSaltedCipherKeyLength(t *testing.T) {
         t.Errorf("NewSaltedCipher with long key, gave error %#v", err)
     }
 }
+
+// Test vectors generated with Blowfish from OpenSSH.
+var saltedVectors = [][8]byte{
+    {0x0c, 0x82, 0x3b, 0x7b, 0x8d, 0x01, 0x4b, 0x7e},
+    {0xd1, 0xe1, 0x93, 0xf0, 0x70, 0xa6, 0xdb, 0x12},
+    {0xfc, 0x5e, 0xba, 0xde, 0xcb, 0xf8, 0x59, 0xad},
+    {0x8a, 0x0c, 0x76, 0xe7, 0xdd, 0x2c, 0xd3, 0xa8},
+    {0x2c, 0xcb, 0x7b, 0xee, 0xac, 0x7b, 0x7f, 0xf8},
+    {0xbb, 0xf6, 0x30, 0x6f, 0xe1, 0x5d, 0x62, 0xbf},
+    {0x97, 0x1e, 0xc1, 0x3d, 0x3d, 0xe0, 0x11, 0xe9},
+    {0x06, 0xd7, 0x4d, 0xb1, 0x80, 0xa3, 0xb1, 0x38},
+    {0x67, 0xa1, 0xa9, 0x75, 0x0e, 0x5b, 0xc6, 0xb4},
+    {0x51, 0x0f, 0x33, 0x0e, 0x4f, 0x67, 0xd2, 0x0c},
+    {0xf1, 0x73, 0x7e, 0xd8, 0x44, 0xea, 0xdb, 0xe5},
+    {0x14, 0x0e, 0x16, 0xce, 0x7f, 0x4a, 0x9c, 0x7b},
+    {0x4b, 0xfe, 0x43, 0xfd, 0xbf, 0x36, 0x04, 0x47},
+    {0xb1, 0xeb, 0x3e, 0x15, 0x36, 0xa7, 0xbb, 0xe2},
+    {0x6d, 0x0b, 0x41, 0xdd, 0x00, 0x98, 0x0b, 0x19},
+    {0xd3, 0xce, 0x45, 0xce, 0x1d, 0x56, 0xb7, 0xfc},
+    {0xd9, 0xf0, 0xfd, 0xda, 0xc0, 0x23, 0xb7, 0x93},
+    {0x4c, 0x6f, 0xa1, 0xe4, 0x0c, 0xa8, 0xca, 0x57},
+    {0xe6, 0x2f, 0x28, 0xa7, 0x0c, 0x94, 0x0d, 0x08},
+    {0x8f, 0xe3, 0xf0, 0xb6, 0x29, 0xe3, 0x44, 0x03},
+    {0xff, 0x98, 0xdd, 0x04, 0x45, 0xb4, 0x6d, 0x1f},
+    {0x9e, 0x45, 0x4d, 0x18, 0x40, 0x53, 0xdb, 0xef},
+    {0xb7, 0x3b, 0xef, 0x29, 0xbe, 0xa8, 0x13, 0x71},
+    {0x02, 0x54, 0x55, 0x41, 0x8e, 0x04, 0xfc, 0xad},
+    {0x6a, 0x0a, 0xee, 0x7c, 0x10, 0xd9, 0x19, 0xfe},
+    {0x0a, 0x22, 0xd9, 0x41, 0xcc, 0x23, 0x87, 0x13},
+    {0x6e, 0xff, 0x1f, 0xff, 0x36, 0x17, 0x9c, 0xbe},
+    {0x79, 0xad, 0xb7, 0x40, 0xf4, 0x9f, 0x51, 0xa6},
+    {0x97, 0x81, 0x99, 0xa4, 0xde, 0x9e, 0x9f, 0xb6},
+    {0x12, 0x19, 0x7a, 0x28, 0xd0, 0xdc, 0xcc, 0x92},
+    {0x81, 0xda, 0x60, 0x1e, 0x0e, 0xdd, 0x65, 0x56},
+    {0x7d, 0x76, 0x20, 0xb2, 0x73, 0xc9, 0x9e, 0xee},
+}
+
+func TestSaltedCipher(t *testing.T) {
+    var key, salt [32]byte
+    for i := range key {
+        key[i] = byte(i)
+        salt[i] = byte(i + 32)
+    }
+    for i, v := range saltedVectors {
+        c, err := NewSaltedCipher(key[:], salt[:i])
+        if err != nil {
+            t.Fatal(err)
+        }
+        var buf [8]byte
+        c.Encrypt(buf[:], buf[:])
+        if v != buf {
+            t.Errorf("%d: expected %x, got %x", i, v, buf)
+        }
+    }
+}
+
+func BenchmarkExpandKeyWithSalt(b *testing.B) {
+    key := make([]byte, 32)
+    salt := make([]byte, 16)
+    c, _ := NewCipher(key)
+    for i := 0; i < b.N; i++ {
+        expandKeyWithSalt(key, salt, c)
+    }
+}
+
+func BenchmarkExpandKey(b *testing.B) {
+    key := make([]byte, 32)
+    c, _ := NewCipher(key)
+    for i := 0; i < b.N; i++ {
+        ExpandKey(key, c)
+    }
+}
Godeps/_workspace/src/code.google.com/p/go.crypto/blowfish/cipher.go (generated, vendored) — 5 changed lines

@@ -40,8 +40,11 @@ func NewCipher(key []byte) (*Cipher, error) {
 // NewSaltedCipher creates a returns a Cipher that folds a salt into its key
 // schedule. For most purposes, NewCipher, instead of NewSaltedCipher, is
 // sufficient and desirable. For bcrypt compatiblity, the key can be over 56
-// bytes. Only the first 16 bytes of salt are used.
+// bytes.
 func NewSaltedCipher(key, salt []byte) (*Cipher, error) {
+    if len(salt) == 0 {
+        return NewCipher(key)
+    }
     var result Cipher
     if k := len(key); k < 1 {
         return nil, KeySizeError(k)
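After this change, an empty salt makes NewSaltedCipher fall back to the plain key schedule, and the salt is no longer truncated to 16 bytes (the full salt now feeds the schedule via getNextWord). A minimal usage sketch against the vendored package; the key and salt values are made up for illustration:

```go
package main

import (
	"fmt"

	"code.google.com/p/go.crypto/blowfish"
)

func main() {
	// Keys may exceed 56 bytes for bcrypt-style use, per the doc comment above.
	key := []byte("an example key that can be longer than fifty-six bytes...")
	salt := []byte("0123456789abcdef")

	c, err := blowfish.NewSaltedCipher(key, salt)
	if err != nil {
		panic(err)
	}

	src := []byte("8bytes!!") // Blowfish operates on 8-byte blocks
	dst := make([]byte, 8)
	c.Encrypt(dst, src)
	fmt.Printf("%x\n", dst)
}
```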
Godeps/_workspace/src/github.com/calmh/xdr/bench_test.go (generated, vendored) — 25 changed lines

@@ -7,6 +7,8 @@ import (
     "io"
     "io/ioutil"
     "testing"
+
+    "github.com/calmh/xdr"
 )
 
 type XDRBenchStruct struct {
@@ -58,6 +60,16 @@ func BenchmarkThisEncode(b *testing.B) {
     }
 }
 
+func BenchmarkThisEncoder(b *testing.B) {
+    w := xdr.NewWriter(ioutil.Discard)
+    for i := 0; i < b.N; i++ {
+        _, err := s.encodeXDR(w)
+        if err != nil {
+            b.Fatal(err)
+        }
+    }
+}
+
 type repeatReader struct {
     data []byte
 }
@@ -86,3 +98,16 @@ func BenchmarkThisDecode(b *testing.B) {
         rr.Reset(e)
     }
 }
+
+func BenchmarkThisDecoder(b *testing.B) {
+    rr := &repeatReader{e}
+    r := xdr.NewReader(rr)
+    var t XDRBenchStruct
+    for i := 0; i < b.N; i++ {
+        err := t.decodeXDR(r)
+        if err != nil {
+            b.Fatal(err)
+        }
+        rr.Reset(e)
+    }
+}
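The new benchmarks measure the reusable-encoder path: constructing one xdr.Writer or xdr.Reader and pushing many values through it, rather than allocating per call. A round-trip sketch using only the API surface visible in these diffs (NewWriter, NewReader, WriteString, ReadString):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/calmh/xdr"
)

func main() {
	var buf bytes.Buffer

	// One Writer reused across calls avoids per-call allocations,
	// which is what BenchmarkThisEncoder above is measuring.
	w := xdr.NewWriter(&buf)
	if _, err := w.WriteString("hello xdr"); err != nil {
		panic(err)
	}

	r := xdr.NewReader(&buf)
	fmt.Println(r.ReadString()) // "hello xdr"
}
```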
Godeps/_workspace/src/github.com/calmh/xdr/reader.go (generated, vendored) — 29 changed lines

@@ -7,6 +7,8 @@ package xdr
 import (
     "errors"
     "io"
+    "reflect"
+    "unsafe"
 )
 
 var ErrElementSizeExceeded = errors.New("element size exceeded")
@@ -15,7 +17,6 @@ type Reader struct {
     r   io.Reader
     err error
     b   [8]byte
-    sb  []byte
 }
 
 func NewReader(r io.Reader) *Reader {
@@ -35,23 +36,17 @@ func (r *Reader) ReadRaw(bs []byte) (int, error) {
 }
 
 func (r *Reader) ReadString() string {
-    if r.sb == nil {
-        r.sb = make([]byte, 64)
-    } else {
-        r.sb = r.sb[:cap(r.sb)]
-    }
-    r.sb = r.ReadBytesInto(r.sb)
-    return string(r.sb)
+    return r.ReadStringMax(0)
 }
 
 func (r *Reader) ReadStringMax(max int) string {
-    if r.sb == nil {
-        r.sb = make([]byte, 64)
-    } else {
-        r.sb = r.sb[:cap(r.sb)]
-    }
-    r.sb = r.ReadBytesMaxInto(max, r.sb)
-    return string(r.sb)
+    buf := r.ReadBytesMaxInto(max, nil)
+    bh := (*reflect.SliceHeader)(unsafe.Pointer(&buf))
+    sh := reflect.StringHeader{
+        Data: bh.Data,
+        Len:  bh.Len,
+    }
+    return *((*string)(unsafe.Pointer(&sh)))
 }
 
 func (r *Reader) ReadBytes() []byte {
@@ -80,10 +75,10 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
         return nil
     }
 
-    if l+pad(l) > len(dst) {
-        dst = make([]byte, l+pad(l))
+    if fullLen := l + pad(l); fullLen > len(dst) {
+        dst = make([]byte, fullLen)
     } else {
-        dst = dst[:l+pad(l)]
+        dst = dst[:fullLen]
     }
 
     var n int
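The ReadBytesMaxInto change above sizes the buffer as `l + pad(l)`: XDR rounds opaque data up to a four-byte boundary, so `pad(l)` is the number of trailing padding bytes after an l-byte payload. A minimal sketch of that padding rule, assuming the package-internal `pad` behaves as the standard XDR rule (the real helper lives elsewhere in the package):

```go
package main

import "fmt"

// pad returns how many zero bytes follow an l-byte XDR opaque payload
// so that the total length is a multiple of four.
func pad(l int) int {
	if d := l % 4; d != 0 {
		return 4 - d
	}
	return 0
}

func main() {
	for _, l := range []int{0, 1, 2, 3, 4, 5} {
		fmt.Println(l, "->", l+pad(l)) // 0->0, 1->4, 2->4, 3->4, 4->4, 5->8
	}
}
```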
Godeps/_workspace/src/github.com/calmh/xdr/reader_ipdr.go (generated, vendored) — 4 changed lines

@@ -22,7 +22,7 @@ func (r *Reader) ReadUint8() uint8 {
     }
 
     if debug {
-        dl.Printf("rd uint8=%d (0x%08x)", r.b[0], r.b[0])
+        dl.Printf("rd uint8=%d (0x%02x)", r.b[0], r.b[0])
     }
     return r.b[0]
 }
@@ -43,7 +43,7 @@ func (r *Reader) ReadUint16() uint16 {
     v := uint16(r.b[1]) | uint16(r.b[0])<<8
 
     if debug {
-        dl.Printf("rd uint16=%d (0x%08x)", v, v)
+        dl.Printf("rd uint16=%d (0x%04x)", v, v)
     }
     return v
 }
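This fix narrows the hex verb's zero-padding to match the value's width: a uint8 is two hex digits and a uint16 four, so %08x misleadingly printed them as 32-bit quantities. A quick demonstration of the difference:

```go
package main

import "fmt"

func main() {
	var b uint8 = 0x0a
	var v uint16 = 0x0a0b

	fmt.Printf("0x%08x\n", b) // old: 0x0000000a — reads like a 32-bit value
	fmt.Printf("0x%02x\n", b) // new: 0x0a — matches the uint8 width
	fmt.Printf("0x%04x\n", v) // new: 0x0a0b — matches the uint16 width
}
```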
Godeps/_workspace/src/github.com/calmh/xdr/writer.go (generated, vendored) — 14 changed lines

@@ -3,7 +3,11 @@
 
 package xdr
 
-import "io"
+import (
+    "io"
+    "reflect"
+    "unsafe"
+)
 
 var padBytes = []byte{0, 0, 0}
 
@@ -38,7 +42,13 @@ func (w *Writer) WriteRaw(bs []byte) (int, error) {
 }
 
 func (w *Writer) WriteString(s string) (int, error) {
-    return w.WriteBytes([]byte(s))
+    sh := *((*reflect.StringHeader)(unsafe.Pointer(&s)))
+    bh := reflect.SliceHeader{
+        Data: sh.Data,
+        Len:  sh.Len,
+        Cap:  sh.Len,
+    }
+    return w.WriteBytes(*(*[]byte)(unsafe.Pointer(&bh)))
 }
 
 func (w *Writer) WriteBytes(bs []byte) (int, error) {
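Both the reader and writer changes use the same trick: aliasing a string's backing array as a []byte (or vice versa) through reflect.StringHeader and reflect.SliceHeader, skipping the copy that `[]byte(s)` would make. A self-contained sketch of the conversion as used above, valid for Go versions contemporary with this diff; the function name is ours. The caveat is real: the resulting slice must never be mutated, since Go strings are immutable:

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// stringToBytes aliases s's backing array as a []byte without copying,
// mirroring the WriteString change above. Read-only use only.
func stringToBytes(s string) []byte {
	sh := *(*reflect.StringHeader)(unsafe.Pointer(&s))
	bh := reflect.SliceHeader{Data: sh.Data, Len: sh.Len, Cap: sh.Len}
	return *(*[]byte)(unsafe.Pointer(&bh))
}

func main() {
	bs := stringToBytes("no copy")
	fmt.Println(len(bs), string(bs)) // 7 "no copy"
}
```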
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/bench_test.go (generated, vendored) — 15 changed lines

@@ -170,7 +170,7 @@ func (p *dbBench) writes(perBatch int) {
     b.SetBytes(116)
 }
 
-func (p *dbBench) drop() {
+func (p *dbBench) gc() {
     p.keys, p.values = nil, nil
     runtime.GC()
 }
@@ -249,6 +249,9 @@ func (p *dbBench) newIter() iterator.Iterator {
 }
 
 func (p *dbBench) close() {
+    if bp, err := p.db.GetProperty("leveldb.blockpool"); err == nil {
+        p.b.Log("Block pool stats: ", bp)
+    }
     p.db.Close()
     p.stor.Close()
     os.RemoveAll(benchDB)
@@ -331,7 +334,7 @@ func BenchmarkDBRead(b *testing.B) {
     p := openDBBench(b, false)
     p.populate(b.N)
     p.fill()
-    p.drop()
+    p.gc()
 
     iter := p.newIter()
     b.ResetTimer()
@@ -362,7 +365,7 @@ func BenchmarkDBReadUncompressed(b *testing.B) {
     p := openDBBench(b, true)
     p.populate(b.N)
     p.fill()
-    p.drop()
+    p.gc()
 
     iter := p.newIter()
     b.ResetTimer()
@@ -379,7 +382,7 @@ func BenchmarkDBReadTable(b *testing.B) {
     p.populate(b.N)
     p.fill()
     p.reopen()
-    p.drop()
+    p.gc()
 
     iter := p.newIter()
     b.ResetTimer()
@@ -395,7 +398,7 @@ func BenchmarkDBReadReverse(b *testing.B) {
     p := openDBBench(b, false)
     p.populate(b.N)
     p.fill()
-    p.drop()
+    p.gc()
 
     iter := p.newIter()
     b.ResetTimer()
@@ -413,7 +416,7 @@ func BenchmarkDBReadReverseTable(b *testing.B) {
     p.populate(b.N)
     p.fill()
     p.reopen()
-    p.drop()
+    p.gc()
 
     iter := p.newIter()
     b.ResetTimer()
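Besides renaming drop to gc (it frees the test data and forces a garbage collection, not a database drop), the teardown now logs the "leveldb.blockpool" property. A hedged sketch of querying a DB property with goleveldb, assuming the Open/NewMemStorage API of this vintage; the property name is the one the hunk above logs:

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
)

func main() {
	// In-memory storage keeps the sketch self-contained.
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// GetProperty returns implementation-defined diagnostics as a string.
	if bp, err := db.GetProperty("leveldb.blockpool"); err == nil {
		fmt.Println("Block pool stats:", bp)
	}
}
```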
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache.go (generated, vendored) — 145 changed lines

@@ -11,84 +11,117 @@ import (
     "sync/atomic"
 )
 
-// SetFunc used by Namespace.Get method to create a cache object. SetFunc
-// may return ok false, in that case the cache object will not be created.
-type SetFunc func() (ok bool, value interface{}, charge int, fin SetFin)
+// SetFunc is the function that will be called by Namespace.Get to create
+// a cache object, if charge is less than one than the cache object will
+// not be registered to cache tree, if value is nil then the cache object
+// will not be created.
+type SetFunc func() (charge int, value interface{})
 
-// SetFin will be called when corresponding cache object are released.
-type SetFin func()
+// DelFin is the function that will be called as the result of a delete operation.
+// Exist == true is indication that the object is exist, and pending == true is
+// indication of deletion already happen but haven't done yet (wait for all handles
+// to be released). And exist == false means the object doesn't exist.
+type DelFin func(exist, pending bool)
 
-// DelFin will be called when corresponding cache object are released.
-// DelFin will be called after SetFin. The exist is true if the corresponding
-// cache object is actually exist in the cache tree.
-type DelFin func(exist bool)
-
-// PurgeFin will be called when corresponding cache object are released.
-// PurgeFin will be called after SetFin. If PurgeFin present DelFin will
-// not be executed but passed to the PurgeFin, it is up to the caller
-// to call it or not.
-type PurgeFin func(ns, key uint64, delfin DelFin)
+// PurgeFin is the function that will be called as the result of a purge operation.
+type PurgeFin func(ns, key uint64)
 
 // Cache is a cache tree. A cache instance must be goroutine-safe.
 type Cache interface {
-    // SetCapacity sets cache capacity.
+    // SetCapacity sets cache tree capacity.
     SetCapacity(capacity int)
 
-    // GetNamespace gets or creates a cache namespace for the given id.
+    // Capacity returns cache tree capacity.
+    Capacity() int
+
+    // Used returns used cache tree capacity.
+    Used() int
+
+    // Size returns entire alive cache objects size.
+    Size() int
+
+    // NumObjects returns number of alive objects.
+    NumObjects() int
+
+    // GetNamespace gets cache namespace with the given id.
+    // GetNamespace is never return nil.
     GetNamespace(id uint64) Namespace
 
-    // Purge purges all cache namespaces, read Namespace.Purge method documentation.
+    // PurgeNamespace purges cache namespace with the given id from this cache tree.
+    // Also read Namespace.Purge.
+    PurgeNamespace(id uint64, fin PurgeFin)
+
+    // ZapNamespace detaches cache namespace with the given id from this cache tree.
+    // Also read Namespace.Zap.
+    ZapNamespace(id uint64)
+
+    // Purge purges all cache namespace from this cache tree.
+    // This is behave the same as calling Namespace.Purge method on all cache namespace.
     Purge(fin PurgeFin)
 
-    // Zap zaps all cache namespaces, read Namespace.Zap method documentation.
-    Zap(closed bool)
+    // Zap detaches all cache namespace from this cache tree.
+    // This is behave the same as calling Namespace.Zap method on all cache namespace.
+    Zap()
 }
 
 // Namespace is a cache namespace. A namespace instance must be goroutine-safe.
 type Namespace interface {
-    // Get gets cache object for the given key. The given SetFunc (if not nil) will
-    // be called if the given key does not exist.
-    // If the given key does not exist, SetFunc is nil or SetFunc return ok false, Get
-    // will return ok false.
-    Get(key uint64, setf SetFunc) (obj Object, ok bool)
-
-    // Get deletes cache object for the given key. If exist the cache object will
-    // be deleted later when all of its handles have been released (i.e. no one use
-    // it anymore) and the given DelFin (if not nil) will finally be executed. If
-    // such cache object does not exist the given DelFin will be executed anyway.
+    // Get gets cache object with the given key.
+    // If cache object is not found and setf is not nil, Get will atomically creates
+    // the cache object by calling setf. Otherwise Get will returns nil.
     //
-    // Delete returns true if such cache object exist.
+    // The returned cache handle should be released after use by calling Release
+    // method.
+    Get(key uint64, setf SetFunc) Handle
+
+    // Delete removes cache object with the given key from cache tree.
+    // A deleted cache object will be released as soon as all of its handles have
+    // been released.
+    // Delete only happen once, subsequent delete will consider cache object doesn't
+    // exist, even if the cache object ins't released yet.
+    //
+    // If not nil, fin will be called if the cache object doesn't exist or when
+    // finally be released.
+    //
+    // Delete returns true if such cache object exist and never been deleted.
     Delete(key uint64, fin DelFin) bool
 
-    // Purge deletes all cache objects, read Delete method documentation.
+    // Purge removes all cache objects within this namespace from cache tree.
+    // This is the same as doing delete on all cache objects.
+    //
+    // If not nil, fin will be called on all cache objects when its finally be
+    // released.
    Purge(fin PurgeFin)
 
-    // Zap detaches the namespace from the cache tree and delete all its cache
-    // objects. The cache objects deletion and finalizers execution are happen
-    // immediately, even if its existing handles haven't yet been released.
-    // A zapped namespace can't never be filled again.
-    // If closed is false then the Get function will always call the given SetFunc
-    // if it is not nil, but resultant of the SetFunc will not be cached.
-    Zap(closed bool)
+    // Zap detaches namespace from cache tree and release all its cache objects.
+    // A zapped namespace can never be filled again.
+    // Calling Get on zapped namespace will always return nil.
+    Zap()
 }
 
-// Object is a cache object.
-type Object interface {
-    // Release releases the cache object. Other methods should not be called
-    // after the cache object has been released.
+// Handle is a cache handle.
+type Handle interface {
+    // Release releases this cache handle. This method can be safely called mutiple
+    // times.
     Release()
 
-    // Value returns value of the cache object.
+    // Value returns value of this cache handle.
+    // Value will returns nil after this cache handle have be released.
     Value() interface{}
 }
 
+const (
+    DelNotExist = iota
+    DelExist
+    DelPendig
+)
+
 // Namespace state.
 type nsState int
 
 const (
     nsEffective nsState = iota
     nsZapped
-    nsClosed
 )
 
 // Node state.
@@ -97,29 +130,29 @@ type nodeState int
 const (
     nodeEffective nodeState = iota
     nodeEvicted
-    nodeRemoved
+    nodeDeleted
 )
 
-// Fake object.
-type fakeObject struct {
+// Fake handle.
+type fakeHandle struct {
     value interface{}
     fin   func()
     once  uint32
 }
 
-func (o *fakeObject) Value() interface{} {
-    if atomic.LoadUint32(&o.once) == 0 {
-        return o.value
+func (h *fakeHandle) Value() interface{} {
+    if atomic.LoadUint32(&h.once) == 0 {
+        return h.value
     }
     return nil
 }
 
-func (o *fakeObject) Release() {
-    if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
+func (h *fakeHandle) Release() {
+    if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
        return
     }
-    if o.fin != nil {
-        o.fin()
-        o.fin = nil
+    if h.fin != nil {
+        h.fin()
+        h.fin = nil
    }
 }
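The API rework above replaces the (Object, ok) return pair with a nilable Handle and simplifies SetFunc to return (charge, value). A hedged usage sketch built only from the interfaces shown in this hunk and the NewLRUCache constructor in lru_cache.go below; the key and value are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	c := cache.NewLRUCache(100)
	ns := c.GetNamespace(1)

	// Miss: setf builds the object; a positive charge registers it in the LRU.
	h := ns.Get(42, func() (int, interface{}) {
		return 1, "value for 42"
	})
	if h != nil {
		fmt.Println(h.Value()) // "value for 42"
		h.Release()            // handles must be released after use
	}

	// Hit without setf: returns a live handle, or nil if evicted/deleted.
	if h := ns.Get(42, nil); h != nil {
		fmt.Println("hit:", h.Value())
		h.Release()
	}
}
```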
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/cache_test.go (generated, vendored) — 439 changed lines

@@ -7,15 +7,35 @@
 package cache
 
 import (
+    "fmt"
     "math/rand"
+    "runtime"
+    "strings"
+    "sync"
+    "sync/atomic"
     "testing"
+    "time"
 )
 
-func set(ns Namespace, key uint64, value interface{}, charge int, fin func()) Object {
-    obj, _ := ns.Get(key, func() (bool, interface{}, int, SetFin) {
-        return true, value, charge, fin
-    })
-    return obj
+type releaserFunc struct {
+    fn    func()
+    value interface{}
+}
+
+func (r releaserFunc) Release() {
+    if r.fn != nil {
+        r.fn()
+    }
+}
+
+func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
+    return ns.Get(key, func() (int, interface{}) {
+        if relf != nil {
+            return charge, releaserFunc{relf, value}
+        } else {
+            return charge, value
+        }
+    })
 }
 
 func TestCache_HitMiss(t *testing.T) {
@@ -43,29 +63,31 @@ func TestCache_HitMiss(t *testing.T) {
         setfin++
     }).Release()
     for j, y := range cases {
-        r, ok := ns.Get(y.key, nil)
+        h := ns.Get(y.key, nil)
         if j <= i {
             // should hit
-            if !ok {
+            if h == nil {
                 t.Errorf("case '%d' iteration '%d' is miss", i, j)
-            } else if r.Value().(string) != y.value {
-                t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
+            } else {
+                if x := h.Value().(releaserFunc).value.(string); x != y.value {
+                    t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
+                }
             }
         } else {
             // should miss
-            if ok {
-                t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, r.Value().(string))
+            if h != nil {
+                t.Errorf("case '%d' iteration '%d' is hit , value '%s'", i, j, h.Value().(releaserFunc).value.(string))
             }
         }
-        if ok {
-            r.Release()
+        if h != nil {
+            h.Release()
         }
     }
 
     for i, x := range cases {
         finalizerOk := false
-        ns.Delete(x.key, func(exist bool) {
+        ns.Delete(x.key, func(exist, pending bool) {
             finalizerOk = true
         })
 
@@ -74,22 +96,24 @@ func TestCache_HitMiss(t *testing.T) {
     }
 
     for j, y := range cases {
-        r, ok := ns.Get(y.key, nil)
+        h := ns.Get(y.key, nil)
         if j > i {
             // should hit
-            if !ok {
+            if h == nil {
                 t.Errorf("case '%d' iteration '%d' is miss", i, j)
-            } else if r.Value().(string) != y.value {
-                t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, r.Value().(string), y.value)
+            } else {
+                if x := h.Value().(releaserFunc).value.(string); x != y.value {
+                    t.Errorf("case '%d' iteration '%d' has invalid value got '%s', want '%s'", i, j, x, y.value)
+                }
             }
         } else {
             // should miss
-            if ok {
-                t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, r.Value().(string))
+            if h != nil {
+                t.Errorf("case '%d' iteration '%d' is hit, value '%s'", i, j, h.Value().(releaserFunc).value.(string))
             }
         }
-        if ok {
-            r.Release()
+        if h != nil {
+            h.Release()
         }
     }
@@ -107,42 +131,42 @@ func TestLRUCache_Eviction(t *testing.T) {
     set(ns, 3, 3, 1, nil).Release()
     set(ns, 4, 4, 1, nil).Release()
     set(ns, 5, 5, 1, nil).Release()
-    if r, ok := ns.Get(2, nil); ok { // 1,3,4,5,2
-        r.Release()
+    if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
+        h.Release()
     }
     set(ns, 9, 9, 10, nil).Release() // 5,2,9
 
-    for _, x := range []uint64{9, 2, 5, 1} {
-        r, ok := ns.Get(x, nil)
-        if !ok {
-            t.Errorf("miss for key '%d'", x)
+    for _, key := range []uint64{9, 2, 5, 1} {
+        h := ns.Get(key, nil)
+        if h == nil {
+            t.Errorf("miss for key '%d'", key)
         } else {
-            if r.Value().(int) != int(x) {
-                t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+            if x := h.Value().(int); x != int(key) {
+                t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
             }
-            r.Release()
+            h.Release()
         }
     }
     o1.Release()
-    for _, x := range []uint64{1, 2, 5} {
-        r, ok := ns.Get(x, nil)
-        if !ok {
-            t.Errorf("miss for key '%d'", x)
+    for _, key := range []uint64{1, 2, 5} {
+        h := ns.Get(key, nil)
+        if h == nil {
+            t.Errorf("miss for key '%d'", key)
         } else {
-            if r.Value().(int) != int(x) {
-                t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+            if x := h.Value().(int); x != int(key) {
+                t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
            }
-            r.Release()
+            h.Release()
         }
     }
-    for _, x := range []uint64{3, 4, 9} {
-        r, ok := ns.Get(x, nil)
-        if ok {
-            t.Errorf("hit for key '%d'", x)
-            if r.Value().(int) != int(x) {
-                t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+    for _, key := range []uint64{3, 4, 9} {
+        h := ns.Get(key, nil)
+        if h != nil {
+            t.Errorf("hit for key '%d'", key)
+            if x := h.Value().(int); x != int(key) {
+                t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
             }
-            r.Release()
+            h.Release()
         }
     }
 }
@@ -153,16 +177,15 @@ func TestLRUCache_SetGet(t *testing.T) {
     for i := 0; i < 200; i++ {
         n := uint64(rand.Intn(99999) % 20)
         set(ns, n, n, 1, nil).Release()
-        if p, ok := ns.Get(n, nil); ok {
-            if p.Value() == nil {
+        if h := ns.Get(n, nil); h != nil {
+            if h.Value() == nil {
                 t.Errorf("key '%d' contains nil value", n)
             } else {
-                got := p.Value().(uint64)
-                if got != n {
-                    t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, got)
+                if x := h.Value().(uint64); x != n {
+                    t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
                 }
             }
-            p.Release()
+            h.Release()
         } else {
             t.Errorf("key '%d' doesn't exist", n)
         }
@@ -176,31 +199,319 @@ func TestLRUCache_Purge(t *testing.T) {
     o2 := set(ns1, 2, 2, 1, nil)
     ns1.Purge(nil)
     set(ns1, 3, 3, 1, nil).Release()
-    for _, x := range []uint64{1, 2, 3} {
-        r, ok := ns1.Get(x, nil)
-        if !ok {
-            t.Errorf("miss for key '%d'", x)
+    for _, key := range []uint64{1, 2, 3} {
+        h := ns1.Get(key, nil)
+        if h == nil {
+            t.Errorf("miss for key '%d'", key)
         } else {
-            if r.Value().(int) != int(x) {
-                t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+            if x := h.Value().(int); x != int(key) {
+                t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
             }
-            r.Release()
+            h.Release()
         }
     }
     o1.Release()
     o2.Release()
-    for _, x := range []uint64{1, 2} {
-        r, ok := ns1.Get(x, nil)
-        if ok {
-            t.Errorf("hit for key '%d'", x)
-            if r.Value().(int) != int(x) {
-                t.Errorf("invalid value for key '%d' want '%d', got '%d'", x, x, r.Value().(int))
+    for _, key := range []uint64{1, 2} {
+        h := ns1.Get(key, nil)
+        if h != nil {
+            t.Errorf("hit for key '%d'", key)
+            if x := h.Value().(int); x != int(key) {
+                t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
             }
-            r.Release()
+            h.Release()
         }
     }
 }
+
+type testingCacheObjectCounter struct {
+    created  uint32
+    released uint32
+}
+
+func (c *testingCacheObjectCounter) createOne() {
+    atomic.AddUint32(&c.created, 1)
+}
+
+func (c *testingCacheObjectCounter) releaseOne() {
+    atomic.AddUint32(&c.released, 1)
+}
+
+type testingCacheObject struct {
+    t   *testing.T
+    cnt *testingCacheObjectCounter
+
+    ns, key uint64
+
+    releaseCalled uint32
+}
+
+func (x *testingCacheObject) Release() {
+    if atomic.CompareAndSwapUint32(&x.releaseCalled, 0, 1) {
+        x.cnt.releaseOne()
+    } else {
+        x.t.Errorf("duplicate setfin NS#%d KEY#%s", x.ns, x.key)
+    }
+}
+
+func TestLRUCache_Finalizer(t *testing.T) {
+    const (
+        capacity   = 100
+        goroutines = 100
+        iterations = 10000
+        keymax     = 8000
+    )
+
+    runtime.GOMAXPROCS(runtime.NumCPU())
+    defer runtime.GOMAXPROCS(1)
+
+    wg := &sync.WaitGroup{}
+    cnt := &testingCacheObjectCounter{}
+
+    c := NewLRUCache(capacity)
+
+    type instance struct {
+        seed       int64
+        rnd        *rand.Rand
+        ns         uint64
+        effective  int32
+        handles    []Handle
+        handlesMap map[uint64]int
+
+        delete          bool
+        purge           bool
+        zap             bool
+        wantDel         int32
+        delfinCalledAll int32
+        delfinCalledEff int32
+        purgefinCalled  int32
+    }
+
+    instanceGet := func(p *instance, ns Namespace, key uint64) {
+        h := ns.Get(key, func() (charge int, value interface{}) {
+            to := &testingCacheObject{
+                t: t, cnt: cnt,
+                ns:  p.ns,
+                key: key,
+            }
+            atomic.AddInt32(&p.effective, 1)
+            cnt.createOne()
+            return 1, releaserFunc{func() {
+                to.Release()
+                atomic.AddInt32(&p.effective, -1)
+            }, to}
+        })
+        p.handles = append(p.handles, h)
+        p.handlesMap[key] = p.handlesMap[key] + 1
+    }
+    instanceRelease := func(p *instance, ns Namespace, i int) {
+        h := p.handles[i]
+        key := h.Value().(releaserFunc).value.(*testingCacheObject).key
+        if n := p.handlesMap[key]; n == 0 {
+            t.Fatal("key ref == 0")
+        } else if n > 1 {
+            p.handlesMap[key] = n - 1
+        } else {
+            delete(p.handlesMap, key)
+        }
+        h.Release()
+        p.handles = append(p.handles[:i], p.handles[i+1:]...)
+        p.handles[len(p.handles) : len(p.handles)+1][0] = nil
+    }
+
+    seeds := make([]int64, goroutines)
+    instances := make([]instance, goroutines)
+    for i := range instances {
+        p := &instances[i]
+        p.handlesMap = make(map[uint64]int)
+        if seeds[i] == 0 {
+            seeds[i] = time.Now().UnixNano()
+        }
+        p.seed = seeds[i]
+        p.rnd = rand.New(rand.NewSource(p.seed))
+        p.ns = uint64(i)
+        p.delete = i%6 == 0
+        p.purge = i%8 == 0
+        p.zap = i%12 == 0 || i%3 == 0
+    }
+
+    seedsStr := make([]string, len(seeds))
+    for i, seed := range seeds {
+        seedsStr[i] = fmt.Sprint(seed)
+    }
+    t.Logf("seeds := []int64{%s}", strings.Join(seedsStr, ", "))
+
+    // Get and release.
+    for i := range instances {
+        p := &instances[i]
+
+        wg.Add(1)
+        go func(p *instance) {
+            defer wg.Done()
+
+            ns := c.GetNamespace(p.ns)
+            for i := 0; i < iterations; i++ {
+                if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
+                    instanceGet(p, ns, uint64(p.rnd.Intn(keymax)))
+                } else {
+                    instanceRelease(p, ns, p.rnd.Intn(len(p.handles)))
+                }
+            }
+        }(p)
+    }
+    wg.Wait()
+
+    if used, cap := c.Used(), c.Capacity(); used > cap {
+        t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
+    }
+
+    // Check effective objects.
+    for i := range instances {
+        p := &instances[i]
+        if int(p.effective) < len(p.handlesMap) {
+            t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
+        }
+    }
+
+    if want := int(cnt.created - cnt.released); c.Size() != want {
+        t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+    }
+
+    // Delete and purge.
+    for i := range instances {
+        p := &instances[i]
+        p.wantDel = p.effective
+
+        wg.Add(1)
+        go func(p *instance) {
+            defer wg.Done()
+
+            ns := c.GetNamespace(p.ns)
+
+            if p.delete {
+                for key := uint64(0); key < keymax; key++ {
+                    _, wantExist := p.handlesMap[key]
+                    gotExist := ns.Delete(key, func(exist, pending bool) {
+                        atomic.AddInt32(&p.delfinCalledAll, 1)
+                        if exist {
+                            atomic.AddInt32(&p.delfinCalledEff, 1)
+                        }
+                    })
+                    if !gotExist && wantExist {
+                        t.Errorf("delete on NS#%d KEY#%d not found", p.ns, key)
+                    }
+                }
+
+                var delfinCalled int
+                for key := uint64(0); key < keymax; key++ {
+                    func(key uint64) {
+                        gotExist := ns.Delete(key, func(exist, pending bool) {
+                            if exist && !pending {
+                                t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.ns, key)
+                            }
+                            delfinCalled++
+                        })
+                        if gotExist {
+                            t.Errorf("delete on NS#%d KEY#%d found", p.ns, key)
+                        }
+                    }(key)
+                }
+                if delfinCalled != keymax {
+                    t.Errorf("(2) #%d not all delete fin called, diff=%d", p.ns, keymax-delfinCalled)
+                }
+            }
+
+            if p.purge {
+                ns.Purge(func(ns, key uint64) {
+                    atomic.AddInt32(&p.purgefinCalled, 1)
+                })
+            }
+        }(p)
+    }
+    wg.Wait()
+
+    if want := int(cnt.created - cnt.released); c.Size() != want {
+        t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+    }
+
+    // Release.
+    for i := range instances {
+        p := &instances[i]
+
+        if !p.zap {
+            wg.Add(1)
+            go func(p *instance) {
+                defer wg.Done()
+
+                ns := c.GetNamespace(p.ns)
+                for i := len(p.handles) - 1; i >= 0; i-- {
+                    instanceRelease(p, ns, i)
+                }
+            }(p)
+        }
+    }
+    wg.Wait()
+
+    if want := int(cnt.created - cnt.released); c.Size() != want {
+        t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+    }
+
+    // Zap.
+    for i := range instances {
+        p := &instances[i]
+
+        if p.zap {
+            wg.Add(1)
+            go func(p *instance) {
+                defer wg.Done()
+
+                ns := c.GetNamespace(p.ns)
+                ns.Zap()
+
+                p.handles = nil
+                p.handlesMap = nil
+            }(p)
+        }
+    }
+    wg.Wait()
+
+    if want := int(cnt.created - cnt.released); c.Size() != want {
+        t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
+    }
+
+    if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
+        t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
+    }
+
+    c.Purge(nil)
+
+    for i := range instances {
+        p := &instances[i]
+
+        if p.delete {
+            if p.delfinCalledAll != keymax {
+                t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.ns, p.purge, p.zap, keymax-p.delfinCalledAll)
+            }
+            if p.delfinCalledEff != p.wantDel {
+                t.Errorf("#%d not all effective delete fin called, diff=%d", p.ns, p.wantDel-p.delfinCalledEff)
+            }
+            if p.purge && p.purgefinCalled > 0 {
+                t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.ns, p.delete, p.zap, p.purgefinCalled)
+            }
+        } else {
+            if p.purge {
+                if p.purgefinCalled != p.wantDel {
+                    t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.ns, p.delete, p.zap, p.wantDel-p.purgefinCalled)
+                }
+            }
+        }
+    }
+
+    if cnt.created != cnt.released {
+        t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
+    }
+}
 
 func BenchmarkLRUCache_SetRelease(b *testing.B) {
     capacity := b.N / 100
     if capacity <= 0 {
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/empty_cache.go (generated, vendored) — 246 lines removed (file deleted)

@@ -1,246 +0,0 @@
-// Copyright (c) 2013, Suryandaru Triandana <syndtr@gmail.com>
-// All rights reserved.
-//
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-package cache
-
-import (
-    "sync"
-    "sync/atomic"
-)
-
-type emptyCache struct {
-    sync.Mutex
-    table map[uint64]*emptyNS
-}
-
-// NewEmptyCache creates a new initialized empty cache.
-func NewEmptyCache() Cache {
-    return &emptyCache{
-        table: make(map[uint64]*emptyNS),
-    }
-}
-
-func (c *emptyCache) GetNamespace(id uint64) Namespace {
-    c.Lock()
-    defer c.Unlock()
-
-    if ns, ok := c.table[id]; ok {
-        return ns
-    }
-
-    ns := &emptyNS{
-        cache: c,
-        id:    id,
-        table: make(map[uint64]*emptyNode),
-    }
-    c.table[id] = ns
-    return ns
-}
-
-func (c *emptyCache) Purge(fin PurgeFin) {
-    c.Lock()
-    for _, ns := range c.table {
-        ns.purgeNB(fin)
-    }
-    c.Unlock()
-}
-
-func (c *emptyCache) Zap(closed bool) {
-    c.Lock()
-    for _, ns := range c.table {
-        ns.zapNB(closed)
-    }
-    c.table = make(map[uint64]*emptyNS)
-    c.Unlock()
-}
-
-func (*emptyCache) SetCapacity(capacity int) {}
-
-type emptyNS struct {
-    cache *emptyCache
-    id    uint64
-    table map[uint64]*emptyNode
-    state nsState
-}
-
-func (ns *emptyNS) Get(key uint64, setf SetFunc) (o Object, ok bool) {
-    ns.cache.Lock()
-
-    switch ns.state {
-    case nsZapped:
-        ns.cache.Unlock()
-        if setf == nil {
-            return
-        }
-
-        var value interface{}
-        var fin func()
-        ok, value, _, fin = setf()
-        if ok {
-            o = &fakeObject{
-                value: value,
-                fin:   fin,
-            }
-        }
-        return
-    case nsClosed:
-        ns.cache.Unlock()
-        return
-    }
-
-    n, ok := ns.table[key]
-    if ok {
-        n.ref++
-    } else {
-        if setf == nil {
-            ns.cache.Unlock()
-            return
-        }
-
-        var value interface{}
-        var fin func()
-        ok, value, _, fin = setf()
-        if !ok {
-            ns.cache.Unlock()
-            return
-        }
-
-        n = &emptyNode{
-            ns:     ns,
-            key:    key,
-            value:  value,
-            setfin: fin,
-            ref:    1,
-        }
-        ns.table[key] = n
-    }
-
-    ns.cache.Unlock()
-    o = &emptyObject{node: n}
-    return
-}
-
-func (ns *emptyNS) Delete(key uint64, fin DelFin) bool {
-    ns.cache.Lock()
-
-    if ns.state != nsEffective {
-        ns.cache.Unlock()
-        if fin != nil {
-            fin(false)
-        }
-        return false
-    }
-
-    n, ok := ns.table[key]
-    if !ok {
-        ns.cache.Unlock()
-        if fin != nil {
-            fin(false)
-        }
-        return false
-    }
-    n.delfin = fin
-    ns.cache.Unlock()
-    return true
-}
-
-func (ns *emptyNS) purgeNB(fin PurgeFin) {
-    if ns.state != nsEffective {
-        return
-    }
-    for _, n := range ns.table {
-        n.purgefin = fin
-    }
-}
-
-func (ns *emptyNS) Purge(fin PurgeFin) {
-    ns.cache.Lock()
-    ns.purgeNB(fin)
-    ns.cache.Unlock()
-}
-
-func (ns *emptyNS) zapNB(closed bool) {
-    if ns.state != nsEffective {
-        return
-    }
-    for _, n := range ns.table {
-        n.execFin()
-    }
-    if closed {
-        ns.state = nsClosed
-    } else {
-        ns.state = nsZapped
-    }
-    ns.table = nil
-}
-
-func (ns *emptyNS) Zap(closed bool) {
-    ns.cache.Lock()
-    ns.zapNB(closed)
-    delete(ns.cache.table, ns.id)
-    ns.cache.Unlock()
-}
-
-type emptyNode struct {
-    ns       *emptyNS
-    key      uint64
-    value    interface{}
-    ref      int
-    setfin   SetFin
-    delfin   DelFin
-    purgefin PurgeFin
-}
-
-func (n *emptyNode) execFin() {
-    if n.setfin != nil {
-        n.setfin()
-        n.setfin = nil
-    }
-    if n.purgefin != nil {
-        n.purgefin(n.ns.id, n.key, n.delfin)
-        n.delfin = nil
-        n.purgefin = nil
-    } else if n.delfin != nil {
-        n.delfin(true)
-        n.delfin = nil
-    }
-}
-
-func (n *emptyNode) evict() {
-    n.ns.cache.Lock()
-    n.ref--
-    if n.ref == 0 {
-        if n.ns.state == nsEffective {
-            // Remove elem.
-            delete(n.ns.table, n.key)
-            // Execute finalizer.
-            n.execFin()
-        }
-    } else if n.ref < 0 {
-        panic("leveldb/cache: emptyNode: negative node reference")
-    }
-    n.ns.cache.Unlock()
-}
-
-type emptyObject struct {
-    node *emptyNode
-    once uint32
-}
-
-func (o *emptyObject) Value() interface{} {
-    if atomic.LoadUint32(&o.once) == 0 {
-        return o.node.value
-    }
-    return nil
-}
-
-func (o *emptyObject) Release() {
-    if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
-        return
-    }
-    o.node.evict()
-    o.node = nil
-}
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/cache/lru_cache.go (generated, vendored) — 318 changed lines

@@ -9,16 +9,17 @@ package cache
 import (
     "sync"
     "sync/atomic"
+
+    "github.com/syndtr/goleveldb/leveldb/util"
 )
 
 // lruCache represent a LRU cache state.
 type lruCache struct {
-    sync.Mutex
-
-    recent   lruNode
-    table    map[uint64]*lruNs
-    capacity int
-    size     int
+    mu                sync.Mutex
+    recent            lruNode
+    table             map[uint64]*lruNs
+    capacity          int
+    used, size, alive int
 }
 
 // NewLRUCache creates a new initialized LRU cache with the given capacity.
@@ -32,57 +33,98 @@ func NewLRUCache(capacity int) Cache {
     return c
 }
 
+func (c *lruCache) Capacity() int {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    return c.capacity
+}
+
+func (c *lruCache) Used() int {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    return c.used
+}
+
+func (c *lruCache) Size() int {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    return c.size
+}
+
+func (c *lruCache) NumObjects() int {
+    c.mu.Lock()
+    defer c.mu.Unlock()
+    return c.alive
+}
+
 // SetCapacity set cache capacity.
 func (c *lruCache) SetCapacity(capacity int) {
-    c.Lock()
+    c.mu.Lock()
     c.capacity = capacity
     c.evict()
-    c.Unlock()
+    c.mu.Unlock()
 }
 
 // GetNamespace return namespace object for given id.
 func (c *lruCache) GetNamespace(id uint64) Namespace {
-    c.Lock()
-    defer c.Unlock()
+    c.mu.Lock()
+    defer c.mu.Unlock()
 
-    if p, ok := c.table[id]; ok {
-        return p
+    if ns, ok := c.table[id]; ok {
+        return ns
     }
 
-    p := &lruNs{
+    ns := &lruNs{
         lru:   c,
         id:    id,
         table: make(map[uint64]*lruNode),
     }
-    c.table[id] = p
-    return p
+    c.table[id] = ns
+    return ns
 }
 
+func (c *lruCache) ZapNamespace(id uint64) {
+    c.mu.Lock()
+    if ns, exist := c.table[id]; exist {
+        ns.zapNB()
+        delete(c.table, id)
+    }
+    c.mu.Unlock()
+}
+
+func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
+    c.mu.Lock()
+    if ns, exist := c.table[id]; exist {
+        ns.purgeNB(fin)
+    }
+    c.mu.Unlock()
+}
+
 // Purge purge entire cache.
 func (c *lruCache) Purge(fin PurgeFin) {
-    c.Lock()
+    c.mu.Lock()
     for _, ns := range c.table {
         ns.purgeNB(fin)
     }
-    c.Unlock()
+    c.mu.Unlock()
 }
 
-func (c *lruCache) Zap(closed bool) {
-    c.Lock()
+func (c *lruCache) Zap() {
+    c.mu.Lock()
     for _, ns := range c.table {
-        ns.zapNB(closed)
+        ns.zapNB()
     }
     c.table = make(map[uint64]*lruNs)
-    c.Unlock()
+    c.mu.Unlock()
 }
 
 func (c *lruCache) evict() {
     top := &c.recent
-    for n := c.recent.rPrev; c.size > c.capacity && n != top; {
+    for n := c.recent.rPrev; c.used > c.capacity && n != top; {
         n.state = nodeEvicted
         n.rRemove()
-        n.evictNB()
-        c.size -= n.charge
+        n.derefNB()
+        c.used -= n.charge
         n = c.recent.rPrev
     }
 }
@@ -94,170 +136,158 @@ type lruNs struct {
     state nsState
 }
 
-func (ns *lruNs) Get(key uint64, setf SetFunc) (o Object, ok bool) {
-    lru := ns.lru
-    lru.Lock()
+func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
+    ns.lru.mu.Lock()
 
-    switch ns.state {
-    case nsZapped:
-        lru.Unlock()
-        if setf == nil {
-            return
-        }
-
-        var value interface{}
-        var fin func()
-        ok, value, _, fin = setf()
-        if ok {
-            o = &fakeObject{
-                value: value,
-                fin:   fin,
-            }
-        }
-        return
-    case nsClosed:
-        lru.Unlock()
-        return
+    if ns.state != nsEffective {
+        ns.lru.mu.Unlock()
+        return nil
     }
 
-    n, ok := ns.table[key]
+    node, ok := ns.table[key]
     if ok {
-        switch n.state {
+        switch node.state {
         case nodeEvicted:
             // Insert to recent list.
-            n.state = nodeEffective
-            n.ref++
-            lru.size += n.charge
-            lru.evict()
+            node.state = nodeEffective
+            node.ref++
+            ns.lru.used += node.charge
+            ns.lru.evict()
             fallthrough
         case nodeEffective:
-            // Bump to front
-            n.rRemove()
-            n.rInsert(&lru.recent)
+            // Bump to front.
+            node.rRemove()
+            node.rInsert(&ns.lru.recent)
         }
-        n.ref++
+        node.ref++
     } else {
         if setf == nil {
-            lru.Unlock()
-            return
+            ns.lru.mu.Unlock()
+            return nil
        }
 
-        var value interface{}
-        var charge int
-        var fin func()
-        ok, value, charge, fin = setf()
-        if !ok {
-            lru.Unlock()
-            return
+        charge, value := setf()
+        if value == nil {
+            ns.lru.mu.Unlock()
+            return nil
        }
 
-        n = &lruNode{
+        node = &lruNode{
             ns:     ns,
             key:    key,
             value:  value,
             charge: charge,
-            setfin: fin,
-            ref:    2,
+            ref:    1,
         }
-        ns.table[key] = n
-        n.rInsert(&lru.recent)
+        ns.table[key] = node
 
-        lru.size += charge
-        lru.evict()
+        ns.lru.size += charge
+        ns.lru.alive++
+        if charge > 0 {
+            node.ref++
+            node.rInsert(&ns.lru.recent)
+            ns.lru.used += charge
+            ns.lru.evict()
+        }
     }
 
-    lru.Unlock()
-    o = &lruObject{node: n}
-    return
+    ns.lru.mu.Unlock()
+    return &lruHandle{node: node}
 }
 
 func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
-    lru := ns.lru
-    lru.Lock()
+    ns.lru.mu.Lock()
 
     if ns.state != nsEffective {
-        lru.Unlock()
         if fin != nil {
-            fin(false)
+            fin(false, false)
         }
+        ns.lru.mu.Unlock()
         return false
     }
 
-    n, ok := ns.table[key]
-    if !ok {
-        lru.Unlock()
+    node, exist := ns.table[key]
+    if !exist {
         if fin != nil {
-            fin(false)
+            fin(false, false)
        }
+        ns.lru.mu.Unlock()
         return false
     }
 
-    n.delfin = fin
-    switch n.state {
-    case nodeRemoved:
-        lru.Unlock()
+    switch node.state {
+    case nodeDeleted:
+        if fin != nil {
+            fin(true, true)
+        }
+        ns.lru.mu.Unlock()
         return false
     case nodeEffective:
-        lru.size -= n.charge
-        n.rRemove()
-        n.evictNB()
+        ns.lru.used -= node.charge
+        node.state = nodeDeleted
+        node.delfin = fin
+        node.rRemove()
+        node.derefNB()
+    default:
+        node.state = nodeDeleted
+        node.delfin = fin
     }
-    n.state = nodeRemoved
 
-    lru.Unlock()
+    ns.lru.mu.Unlock()
     return true
 }
 
 func (ns *lruNs) purgeNB(fin PurgeFin) {
-    lru := ns.lru
     if ns.state != nsEffective {
         return
     }
 
     for _, n := range ns.table {
|
||||
n.purgefin = fin
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
n.evictNB()
|
||||
for _, node := range ns.table {
|
||||
switch node.state {
|
||||
case nodeDeleted:
|
||||
case nodeEffective:
|
||||
ns.lru.used -= node.charge
|
||||
node.state = nodeDeleted
|
||||
node.purgefin = fin
|
||||
node.rRemove()
|
||||
node.derefNB()
|
||||
default:
|
||||
node.state = nodeDeleted
|
||||
node.purgefin = fin
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
}
|
||||
}
|
||||
|
||||
func (ns *lruNs) Purge(fin PurgeFin) {
|
||||
ns.lru.Lock()
|
||||
ns.lru.mu.Lock()
|
||||
ns.purgeNB(fin)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
func (ns *lruNs) zapNB(closed bool) {
|
||||
lru := ns.lru
|
||||
func (ns *lruNs) zapNB() {
|
||||
if ns.state != nsEffective {
|
||||
return
|
||||
}
|
||||
|
||||
if closed {
|
||||
ns.state = nsClosed
|
||||
} else {
|
||||
ns.state = nsZapped
|
||||
}
|
||||
for _, n := range ns.table {
|
||||
if n.state == nodeEffective {
|
||||
lru.size -= n.charge
|
||||
n.rRemove()
|
||||
ns.state = nsZapped
|
||||
|
||||
for _, node := range ns.table {
|
||||
if node.state == nodeEffective {
|
||||
ns.lru.used -= node.charge
|
||||
node.rRemove()
|
||||
}
|
||||
n.state = nodeRemoved
|
||||
n.execFin()
|
||||
ns.lru.size -= node.charge
|
||||
node.state = nodeDeleted
|
||||
node.fin()
|
||||
}
|
||||
ns.table = nil
|
||||
}
|
||||
|
||||
func (ns *lruNs) Zap(closed bool) {
|
||||
ns.lru.Lock()
|
||||
ns.zapNB(closed)
|
||||
func (ns *lruNs) Zap() {
|
||||
ns.lru.mu.Lock()
|
||||
ns.zapNB()
|
||||
delete(ns.lru.table, ns.id)
|
||||
ns.lru.Unlock()
|
||||
ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
type lruNode struct {
|
||||
@@ -270,7 +300,6 @@ type lruNode struct {
|
||||
charge int
|
||||
ref int
|
||||
state nodeState
|
||||
setfin SetFin
|
||||
delfin DelFin
|
||||
purgefin PurgeFin
|
||||
}
|
||||
@@ -284,7 +313,6 @@ func (n *lruNode) rInsert(at *lruNode) {
|
||||
}
|
||||
|
||||
func (n *lruNode) rRemove() bool {
|
||||
// only remove if not already removed
|
||||
if n.rPrev == nil {
|
||||
return false
|
||||
}
|
||||
@@ -297,58 +325,58 @@ func (n *lruNode) rRemove() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (n *lruNode) execFin() {
|
||||
if n.setfin != nil {
|
||||
n.setfin()
|
||||
n.setfin = nil
|
||||
func (n *lruNode) fin() {
|
||||
if r, ok := n.value.(util.Releaser); ok {
|
||||
r.Release()
|
||||
}
|
||||
if n.purgefin != nil {
|
||||
n.purgefin(n.ns.id, n.key, n.delfin)
|
||||
n.purgefin(n.ns.id, n.key)
|
||||
n.delfin = nil
|
||||
n.purgefin = nil
|
||||
} else if n.delfin != nil {
|
||||
n.delfin(true)
|
||||
n.delfin(true, false)
|
||||
n.delfin = nil
|
||||
}
|
||||
}
|
||||
|
||||
func (n *lruNode) evictNB() {
|
||||
func (n *lruNode) derefNB() {
|
||||
n.ref--
|
||||
if n.ref == 0 {
|
||||
if n.ns.state == nsEffective {
|
||||
// remove elem
|
||||
// Remove elemement.
|
||||
delete(n.ns.table, n.key)
|
||||
// execute finalizer
|
||||
n.execFin()
|
||||
n.ns.lru.size -= n.charge
|
||||
n.ns.lru.alive--
|
||||
n.fin()
|
||||
}
|
||||
n.value = nil
|
||||
} else if n.ref < 0 {
|
||||
panic("leveldb/cache: lruCache: negative node reference")
|
||||
}
|
||||
}
|
||||
|
||||
func (n *lruNode) evict() {
|
||||
n.ns.lru.Lock()
|
||||
n.evictNB()
|
||||
n.ns.lru.Unlock()
|
||||
func (n *lruNode) deref() {
|
||||
n.ns.lru.mu.Lock()
|
||||
n.derefNB()
|
||||
n.ns.lru.mu.Unlock()
|
||||
}
|
||||
|
||||
type lruObject struct {
|
||||
type lruHandle struct {
|
||||
node *lruNode
|
||||
once uint32
|
||||
}
|
||||
|
||||
func (o *lruObject) Value() interface{} {
|
||||
if atomic.LoadUint32(&o.once) == 0 {
|
||||
return o.node.value
|
||||
func (h *lruHandle) Value() interface{} {
|
||||
if atomic.LoadUint32(&h.once) == 0 {
|
||||
return h.node.value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (o *lruObject) Release() {
|
||||
if !atomic.CompareAndSwapUint32(&o.once, 0, 1) {
|
||||
func (h *lruHandle) Release() {
|
||||
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
|
||||
return
|
||||
}
|
||||
|
||||
o.node.evict()
|
||||
o.node = nil
|
||||
h.node.deref()
|
||||
h.node = nil
|
||||
}
|
||||
|
||||
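For orientation, here is a minimal sketch of how a caller uses the handle-based API introduced above. NewLRUCache, GetNamespace, Get, Value, and Release are the identifiers from this diff (the new SetFunc returns charge and value, with a nil value meaning "do not cache"); the key, charge, and cached value are made up for illustration.

package main

import (
    "fmt"

    "github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
    // Capacity is measured in charge units, not entry count.
    c := cache.NewLRUCache(100)
    ns := c.GetNamespace(1)

    // On a miss the callback loads the value; returning a nil value
    // means "do not cache" and Get yields nil.
    h := ns.Get(42, func() (charge int, value interface{}) {
        return 1, "expensive object"
    })
    if h != nil {
        fmt.Println(h.Value()) // pinned while the handle is held
        h.Release()            // unpin; the entry becomes evictable
    }
}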
55  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db.go  generated  vendored
@@ -14,6 +14,7 @@ import (
    "runtime"
    "strings"
    "sync"
    "sync/atomic"
    "time"

    "github.com/syndtr/goleveldb/leveldb/iterator"

@@ -35,8 +36,8 @@ type DB struct {

    // MemDB.
    memMu sync.RWMutex
    mem *memdb.DB
    frozenMem *memdb.DB
    memPool chan *memdb.DB
    mem, frozenMem *memDB
    journal *journal.Writer
    journalWriter storage.Writer
    journalFile storage.File

@@ -47,6 +48,9 @@ type DB struct {
    snapsMu sync.Mutex
    snapsRoot snapshotElement

    // Stats.
    aliveSnaps, aliveIters int32

    // Write.
    writeC chan *Batch
    writeMergedC chan bool

@@ -79,6 +83,8 @@ func openDB(s *session) (*DB, error) {
        s: s,
        // Initial sequence
        seq: s.stSeq,
        // MemDB
        memPool: make(chan *memdb.DB, 1),
        // Write
        writeC: make(chan *Batch),
        writeMergedC: make(chan bool),

@@ -120,6 +126,7 @@ func openDB(s *session) (*DB, error) {
    go db.tCompaction()
    go db.mCompaction()
    go db.jWriter()
    go db.mpoolDrain()

    s.logf("db@open done T·%v", time.Since(start))

@@ -257,6 +264,7 @@ func recoverTable(s *session, o *opt.Options) error {
    var mSeq uint64
    var good, corrupted int
    rec := new(sessionRecord)
    bpool := util.NewBufferPool(o.GetBlockSize() + 5)
    buildTable := func(iter iterator.Iterator) (tmp storage.File, size int64, err error) {
        tmp = s.newTemp()
        writer, err := tmp.Create()

@@ -314,7 +322,7 @@ func recoverTable(s *session, o *opt.Options) error {
        var tSeq uint64
        var tgood, tcorrupted, blockerr int
        var imin, imax []byte
        tr := table.NewReader(reader, size, nil, o)
        tr := table.NewReader(reader, size, nil, bpool, o)
        iter := tr.NewIterator(nil, nil)
        iter.(iterator.ErrorCallbackSetter).SetErrorCallback(func(err error) {
            s.logf("table@recovery found error @%d %q", file.Num(), err)

@@ -481,10 +489,11 @@ func (db *DB) recoverJournal() error {

            buf.Reset()
            if _, err := buf.ReadFrom(r); err != nil {
                if strict {
                if err == io.ErrUnexpectedEOF {
                    continue
                } else {
                    return err
                }
                continue
            }
            if err := batch.decode(buf.Bytes()); err != nil {
                return err

@@ -558,19 +567,20 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
    ikey := newIKey(key, seq, tSeek)

    em, fm := db.getMems()
    for _, m := range [...]*memdb.DB{em, fm} {
    for _, m := range [...]*memDB{em, fm} {
        if m == nil {
            continue
        }
        defer m.decref()

        mk, mv, me := m.Find(ikey)
        mk, mv, me := m.mdb.Find(ikey)
        if me == nil {
            ukey, _, t, ok := parseIkey(mk)
            if ok && db.s.icmp.uCompare(ukey, key) == 0 {
                if t == tDel {
                    return nil, ErrNotFound
                }
                return mv, nil
                return append([]byte{}, mv...), nil
            }
        } else if me != ErrNotFound {
            return nil, me

@@ -590,8 +600,9 @@ func (db *DB) get(key []byte, seq uint64, ro *opt.ReadOptions) (value []byte, er
// Get gets the value for the given key. It returns ErrNotFound if the
// DB does not contain the key.
//
// The caller should not modify the contents of the returned slice, but
// it is safe to modify the contents of the argument after Get returns.
// The returned slice is its own copy, it is safe to modify the contents
// of the returned slice.
// It is safe to modify the contents of the argument after Get returns.
func (db *DB) Get(key []byte, ro *opt.ReadOptions) (value []byte, err error) {
    err = db.ok()
    if err != nil {

@@ -649,6 +660,16 @@ func (db *DB) GetSnapshot() (*Snapshot, error) {
//	Returns statistics of the underlying DB.
// leveldb.sstables
//	Returns sstables list for each level.
// leveldb.blockpool
//	Returns block pool stats.
// leveldb.cachedblock
//	Returns size of cached block.
// leveldb.openedtables
//	Returns number of opened tables.
// leveldb.alivesnaps
//	Returns number of alive snapshots.
// leveldb.aliveiters
//	Returns number of alive iterators.
func (db *DB) GetProperty(name string) (value string, err error) {
    err = db.ok()
    if err != nil {

@@ -694,6 +715,20 @@ func (db *DB) GetProperty(name string) (value string, err error) {
                value += fmt.Sprintf("%d:%d[%q .. %q]\n", t.file.Num(), t.size, t.imin, t.imax)
            }
        }
    case p == "blockpool":
        value = fmt.Sprintf("%v", db.s.tops.bpool)
    case p == "cachedblock":
        if bc := db.s.o.GetBlockCache(); bc != nil {
            value = fmt.Sprintf("%d", bc.Size())
        } else {
            value = "<nil>"
        }
    case p == "openedtables":
        value = fmt.Sprintf("%d", db.s.tops.cache.Size())
    case p == "alivesnaps":
        value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveSnaps))
    case p == "aliveiters":
        value = fmt.Sprintf("%d", atomic.LoadInt32(&db.aliveIters))
    default:
        err = errors.New("leveldb: GetProperty: unknown property: " + name)
    }
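A short usage sketch of the newly added properties, assuming the public goleveldb API at this revision; the database path is illustrative.

package main

import (
    "fmt"
    "log"

    "github.com/syndtr/goleveldb/leveldb"
)

func main() {
    db, err := leveldb.OpenFile("/tmp/demo.db", nil) // path is illustrative
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Each property returns its value as a string.
    for _, p := range []string{"leveldb.openedtables", "leveldb.alivesnaps", "leveldb.aliveiters"} {
        v, err := db.GetProperty(p)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s = %s\n", p, v)
    }
}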
7  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_compaction.go  generated  vendored
@@ -216,14 +216,15 @@ func (db *DB) memCompaction() {
    if mem == nil {
        return
    }
    defer mem.decref()

    c := newCMem(db.s)
    stats := new(cStatsStaging)

    db.logf("mem@flush N·%d S·%s", mem.Len(), shortenb(mem.Size()))
    db.logf("mem@flush N·%d S·%s", mem.mdb.Len(), shortenb(mem.mdb.Size()))

    // Don't compact empty memdb.
    if mem.Len() == 0 {
    if mem.mdb.Len() == 0 {
        db.logf("mem@flush skipping")
        // drop frozen mem
        db.dropFrozenMem()

@@ -241,7 +242,7 @@ func (db *DB) memCompaction() {
    db.compactionTransact("mem@flush", func(cnt *compactionTransactCounter) (err error) {
        stats.startTimer()
        defer stats.stopTimer()
        return c.flush(mem, -1)
        return c.flush(mem.mdb, -1)
    }, func() error {
        for _, r := range c.rec.addedTables {
            db.logf("mem@flush rollback @%d", r.num)
27  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_iter.go  generated  vendored
@@ -9,6 +9,8 @@ package leveldb
import (
    "errors"
    "runtime"
    "sync"
    "sync/atomic"

    "github.com/syndtr/goleveldb/leveldb/iterator"
    "github.com/syndtr/goleveldb/leveldb/opt"

@@ -19,6 +21,17 @@ var (
    errInvalidIkey = errors.New("leveldb: Iterator: invalid internal key")
)

type memdbReleaser struct {
    once sync.Once
    m    *memDB
}

func (mr *memdbReleaser) Release() {
    mr.once.Do(func() {
        mr.m.decref()
    })
}

func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
    em, fm := db.getMems()
    v := db.s.version()

@@ -26,9 +39,13 @@ func (db *DB) newRawIterator(slice *util.Range, ro *opt.ReadOptions) iterator.It
    ti := v.getIterators(slice, ro)
    n := len(ti) + 2
    i := make([]iterator.Iterator, 0, n)
    i = append(i, em.NewIterator(slice))
    emi := em.mdb.NewIterator(slice)
    emi.SetReleaser(&memdbReleaser{m: em})
    i = append(i, emi)
    if fm != nil {
        i = append(i, fm.NewIterator(slice))
        fmi := fm.mdb.NewIterator(slice)
        fmi.SetReleaser(&memdbReleaser{m: fm})
        i = append(i, fmi)
    }
    i = append(i, ti...)
    strict := db.s.o.GetStrict(opt.StrictIterator) || ro.GetStrict(opt.StrictIterator)

@@ -50,6 +67,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
    }
    rawIter := db.newRawIterator(islice, ro)
    iter := &dbIter{
        db:   db,
        icmp: db.s.icmp,
        iter: rawIter,
        seq:  seq,

@@ -57,6 +75,7 @@ func (db *DB) newIterator(seq uint64, slice *util.Range, ro *opt.ReadOptions) *d
        key:   make([]byte, 0),
        value: make([]byte, 0),
    }
    atomic.AddInt32(&db.aliveIters, 1)
    runtime.SetFinalizer(iter, (*dbIter).Release)
    return iter
}

@@ -73,6 +92,7 @@ const (

// dbIter represent an interator states over a database session.
type dbIter struct {
    db   *DB
    icmp *iComparer
    iter iterator.Iterator
    seq  uint64

@@ -287,6 +307,7 @@ func (i *dbIter) Release() {

        if i.releaser != nil {
            i.releaser.Release()
            i.releaser = nil
        }

        i.dir = dirReleased

@@ -294,6 +315,8 @@ func (i *dbIter) Release() {
        i.value = nil
        i.iter.Release()
        i.iter = nil
        atomic.AddInt32(&i.db.aliveIters, -1)
        i.db = nil
    }
}
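The memdbReleaser above leans on sync.Once so that a double Release, from both the iterator and its runtime finalizer, cannot decrement the reference count twice. A standalone sketch of that pattern, with hypothetical refCounted and onceReleaser names:

package main

import (
    "fmt"
    "sync"
)

// refCounted stands in for memDB; only the decref behavior matters here.
type refCounted struct{ ref int }

func (r *refCounted) decref() { r.ref--; fmt.Println("ref now", r.ref) }

// onceReleaser mirrors memdbReleaser: no matter how many times Release
// is called, the underlying reference is dropped exactly once.
type onceReleaser struct {
    once sync.Once
    m    *refCounted
}

func (or *onceReleaser) Release() {
    or.once.Do(func() { or.m.decref() })
}

func main() {
    m := &refCounted{ref: 1}
    rel := &onceReleaser{m: m}
    rel.Release()
    rel.Release() // safe: the decref does not run twice
}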
9  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_snapshot.go  generated  vendored
@@ -9,6 +9,7 @@ package leveldb
import (
    "runtime"
    "sync"
    "sync/atomic"

    "github.com/syndtr/goleveldb/leveldb/iterator"
    "github.com/syndtr/goleveldb/leveldb/opt"

@@ -81,7 +82,7 @@ func (db *DB) minSeq() uint64 {
type Snapshot struct {
    db       *DB
    elem     *snapshotElement
    mu       sync.Mutex
    mu       sync.RWMutex
    released bool
}

@@ -91,6 +92,7 @@ func (db *DB) newSnapshot() *Snapshot {
        db:   db,
        elem: db.acquireSnapshot(),
    }
    atomic.AddInt32(&db.aliveSnaps, 1)
    runtime.SetFinalizer(snap, (*Snapshot).Release)
    return snap
}

@@ -105,8 +107,8 @@ func (snap *Snapshot) Get(key []byte, ro *opt.ReadOptions) (value []byte, err er
    if err != nil {
        return
    }
    snap.mu.Lock()
    defer snap.mu.Unlock()
    snap.mu.RLock()
    defer snap.mu.RUnlock()
    if snap.released {
        err = ErrSnapshotReleased
        return

@@ -160,6 +162,7 @@ func (snap *Snapshot) Release() {

    snap.released = true
    snap.db.releaseSnapshot(snap.elem)
    atomic.AddInt32(&snap.db.aliveSnaps, -1)
    snap.db = nil
    snap.elem = nil
}
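The switch to sync.RWMutex lets concurrent snapshot reads proceed in parallel while Release still gets exclusive access to flip the released flag. A minimal sketch of that read-mostly pattern, with a hypothetical guarded type:

package main

import (
    "errors"
    "fmt"
    "sync"
)

// guarded mirrors Snapshot: many readers, one state-flipping writer.
type guarded struct {
    mu       sync.RWMutex
    released bool
}

func (g *guarded) get() error {
    g.mu.RLock() // shared: concurrent gets do not serialize
    defer g.mu.RUnlock()
    if g.released {
        return errors.New("released")
    }
    return nil
}

func (g *guarded) release() {
    g.mu.Lock() // exclusive: nobody observes released mid-flip
    g.released = true
    g.mu.Unlock()
}

func main() {
    g := &guarded{}
    fmt.Println(g.get()) // <nil>
    g.release()
    fmt.Println(g.get()) // released
}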
99  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_state.go  generated  vendored
@@ -8,11 +8,36 @@ package leveldb

import (
    "sync/atomic"
    "time"

    "github.com/syndtr/goleveldb/leveldb/journal"
    "github.com/syndtr/goleveldb/leveldb/memdb"
)

type memDB struct {
    db  *DB
    mdb *memdb.DB
    ref int32
}

func (m *memDB) incref() {
    atomic.AddInt32(&m.ref, 1)
}

func (m *memDB) decref() {
    if ref := atomic.AddInt32(&m.ref, -1); ref == 0 {
        // Only put back memdb with std capacity.
        if m.mdb.Capacity() == m.db.s.o.GetWriteBuffer() {
            m.mdb.Reset()
            m.db.mpoolPut(m.mdb)
        }
        m.db = nil
        m.mdb = nil
    } else if ref < 0 {
        panic("negative memdb ref")
    }
}

// Get latest sequence number.
func (db *DB) getSeq() uint64 {
    return atomic.LoadUint64(&db.seq)

@@ -23,9 +48,44 @@ func (db *DB) addSeq(delta uint64) {
    atomic.AddUint64(&db.seq, delta)
}

func (db *DB) mpoolPut(mem *memdb.DB) {
    defer func() {
        recover()
    }()
    select {
    case db.memPool <- mem:
    default:
    }
}

func (db *DB) mpoolGet() *memdb.DB {
    select {
    case mem := <-db.memPool:
        return mem
    default:
        return nil
    }
}

func (db *DB) mpoolDrain() {
    ticker := time.NewTicker(30 * time.Second)
    for {
        select {
        case <-ticker.C:
            select {
            case <-db.memPool:
            default:
            }
        case _, _ = <-db.closeC:
            close(db.memPool)
            return
        }
    }
}

// Create new memdb and froze the old one; need external synchronization.
// newMem only called synchronously by the writer.
func (db *DB) newMem(n int) (mem *memdb.DB, err error) {
func (db *DB) newMem(n int) (mem *memDB, err error) {
    num := db.s.allocFileNum()
    file := db.s.getJournalFile(num)
    w, err := file.Create()

@@ -37,6 +97,10 @@ func (db *DB) newMem(n int) (mem *memdb.DB, err error) {
    db.memMu.Lock()
    defer db.memMu.Unlock()

    if db.frozenMem != nil {
        panic("still has frozen mem")
    }

    if db.journal == nil {
        db.journal = journal.NewWriter(w)
    } else {

@@ -47,8 +111,16 @@ func (db *DB) newMem(n int) (mem *memdb.DB, err error) {
    db.journalWriter = w
    db.journalFile = file
    db.frozenMem = db.mem
    db.mem = memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n))
    mem = db.mem
    mdb := db.mpoolGet()
    if mdb == nil || mdb.Capacity() < n {
        mdb = memdb.New(db.s.icmp, maxInt(db.s.o.GetWriteBuffer(), n))
    }
    mem = &memDB{
        db:  db,
        mdb: mdb,
        ref: 2,
    }
    db.mem = mem
    // The seq only incremented by the writer. And whoever called newMem
    // should hold write lock, so no need additional synchronization here.
    db.frozenSeq = db.seq

@@ -56,16 +128,27 @@ func (db *DB) newMem(n int) (mem *memdb.DB, err error) {
}

// Get all memdbs.
func (db *DB) getMems() (e *memdb.DB, f *memdb.DB) {
func (db *DB) getMems() (e, f *memDB) {
    db.memMu.RLock()
    defer db.memMu.RUnlock()
    if db.mem == nil {
        panic("nil effective mem")
    }
    db.mem.incref()
    if db.frozenMem != nil {
        db.frozenMem.incref()
    }
    return db.mem, db.frozenMem
}

// Get frozen memdb.
func (db *DB) getEffectiveMem() *memdb.DB {
func (db *DB) getEffectiveMem() *memDB {
    db.memMu.RLock()
    defer db.memMu.RUnlock()
    if db.mem == nil {
        panic("nil effective mem")
    }
    db.mem.incref()
    return db.mem
}

@@ -77,9 +160,12 @@ func (db *DB) hasFrozenMem() bool {
}

// Get frozen memdb.
func (db *DB) getFrozenMem() *memdb.DB {
func (db *DB) getFrozenMem() *memDB {
    db.memMu.RLock()
    defer db.memMu.RUnlock()
    if db.frozenMem != nil {
        db.frozenMem.incref()
    }
    return db.frozenMem
}

@@ -92,6 +178,7 @@ func (db *DB) dropFrozenMem() {
        db.logf("journal@remove removed @%d", db.frozenJournalFile.Num())
    }
    db.frozenJournalFile = nil
    db.frozenMem.decref()
    db.frozenMem = nil
    db.memMu.Unlock()
}
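The memDB wrapper combines an atomic reference count with a buffered channel used as a free pool: the last decref recycles the memdb instead of dropping it. A self-contained sketch of that recycle-on-last-deref pattern, with hypothetical names:

package main

import (
    "fmt"
    "sync/atomic"
)

// pooled mimics the memDB wrapper: the final decref returns the
// resource to a free pool rather than letting it go to waste.
type pooled struct {
    ref  int32
    pool chan *pooled
}

func (p *pooled) incref() { atomic.AddInt32(&p.ref, 1) }

func (p *pooled) decref() {
    if ref := atomic.AddInt32(&p.ref, -1); ref == 0 {
        select {
        case p.pool <- p: // recycle for the next user
        default: // pool full; let the GC reclaim it
        }
    } else if ref < 0 {
        panic("negative ref")
    }
}

func main() {
    pool := make(chan *pooled, 1)
    p := &pooled{ref: 1, pool: pool}
    p.incref() // a reader takes a reference
    p.decref() // reader done
    p.decref() // owner done; object goes back to the pool
    fmt.Println("pooled:", len(pool)) // 1
}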
10  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_test.go  generated  vendored
@@ -1577,7 +1577,11 @@ func TestDb_BloomFilter(t *testing.T) {
        return fmt.Sprintf("key%06d", i)
    }

    n := 10000
    const (
        n = 10000
        indexOverheat = 19898
        filterOverheat = 19799
    )

    // Populate multiple layers
    for i := 0; i < n; i++ {

@@ -1601,7 +1605,7 @@ func TestDb_BloomFilter(t *testing.T) {
    cnt := int(h.stor.ReadCounter())
    t.Logf("lookup of %d present keys yield %d sstable I/O reads", n, cnt)

    if min, max := n, n+2*n/100; cnt < min || cnt > max {
    if min, max := n+indexOverheat+filterOverheat, n+indexOverheat+filterOverheat+2*n/100; cnt < min || cnt > max {
        t.Errorf("num of sstable I/O reads of present keys not in range of %d - %d, got %d", min, max, cnt)
    }

@@ -1612,7 +1616,7 @@ func TestDb_BloomFilter(t *testing.T) {
    }
    cnt = int(h.stor.ReadCounter())
    t.Logf("lookup of %d missing keys yield %d sstable I/O reads", n, cnt)
    if max := 3 * n / 100; cnt > max {
    if max := 3*n/100 + indexOverheat + filterOverheat; cnt > max {
        t.Errorf("num of sstable I/O reads of missing keys was more than %d, got %d", max, cnt)
    }
37  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/db_write.go  generated  vendored
@@ -45,7 +45,7 @@ func (db *DB) jWriter() {
    }
}

func (db *DB) rotateMem(n int) (mem *memdb.DB, err error) {
func (db *DB) rotateMem(n int) (mem *memDB, err error) {
    // Wait for pending memdb compaction.
    err = db.compSendIdle(db.mcompCmdC)
    if err != nil {

@@ -63,13 +63,19 @@ func (db *DB) rotateMem(n int) (mem *memdb.DB, err error) {
    return
}

func (db *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
    delayed := false
    flush := func() bool {
    flush := func() (retry bool) {
        v := db.s.version()
        defer v.release()
        mem = db.getEffectiveMem()
        nn = mem.Free()
        defer func() {
            if retry {
                mem.decref()
                mem = nil
            }
        }()
        nn = mem.mdb.Free()
        switch {
        case v.tLen(0) >= kL0_SlowdownWritesTrigger && !delayed:
            delayed = true

@@ -84,12 +90,17 @@ func (db *DB) flush(n int) (mem *memdb.DB, nn int, err error) {
            }
        default:
            // Allow memdb to grow if it has no entry.
            if mem.Len() == 0 {
            if mem.mdb.Len() == 0 {
                nn = n
                return false
            } else {
                mem.decref()
                mem, err = db.rotateMem(n)
                if err == nil {
                    nn = mem.mdb.Free()
                } else {
                    nn = 0
                }
            }
            mem, err = db.rotateMem(n)
            nn = mem.Free()
            return false
        }
        return true

@@ -140,6 +151,7 @@ retry:
    if err != nil {
        return
    }
    defer mem.decref()

    // Calculate maximum size of the batch.
    m := 1 << 20

@@ -178,7 +190,7 @@ drain:
        return
    case db.journalC <- b:
        // Write into memdb
        b.memReplay(mem)
        b.memReplay(mem.mdb)
    }
    // Wait for journal writer
    select {

@@ -188,7 +200,7 @@ drain:
    case err = <-db.journalAckC:
        if err != nil {
            // Revert memdb if error detected
            b.revertMemReplay(mem)
            b.revertMemReplay(mem.mdb)
            return
        }
    }

@@ -197,7 +209,7 @@ drain:
    if err != nil {
        return
    }
    b.memReplay(mem)
    b.memReplay(mem.mdb)
}

// Set last seq number.

@@ -258,7 +270,8 @@ func (db *DB) CompactRange(r util.Range) error {

    // Check for overlaps in memdb.
    mem := db.getEffectiveMem()
    if isMemOverlaps(db.s.icmp, mem, r.Start, r.Limit) {
    defer mem.decref()
    if isMemOverlaps(db.s.icmp, mem.mdb, r.Start, r.Limit) {
        // Memdb compaction.
        if _, err := db.rotateMem(0); err != nil {
            <-db.writeLockC
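The reworked flush closure uses a named return value (retry) plus a deferred cleanup, so every retry exit drops its memdb reference exactly once without repeating the decref at each return site. A toy sketch of the idiom, with a hypothetical resource type:

package main

import "fmt"

type resource struct{ ref int }

func (r *resource) decref() { r.ref-- }

// tryUse shows the named-return trick from flush above: the deferred
// func observes the final value of retry, so the reference is released
// on retry paths and kept when the caller takes ownership.
func tryUse(r *resource, full bool) (retry bool) {
    r.ref++
    defer func() {
        if retry {
            r.decref()
        }
    }()
    if full {
        return true // deferred decref runs
    }
    return false // caller keeps the reference
}

func main() {
    r := &resource{}
    fmt.Println(tryUse(r, true), r.ref)  // true 0
    fmt.Println(tryUse(r, false), r.ref) // false 1
}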
10  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/doc.go  generated  vendored
@@ -37,6 +37,16 @@
//	err = iter.Error()
//	...
//
// Iterate over subset of database content with a particular prefix:
//	iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
//	for iter.Next() {
//		// Use key/value.
//		...
//	}
//	iter.Release()
//	err = iter.Error()
//	...
//
// Seek-then-Iterate:
//
//	iter := db.NewIterator(nil, nil)
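The new godoc snippet expands into a complete program along these lines; the database path and keys are illustrative.

package main

import (
    "fmt"
    "log"

    "github.com/syndtr/goleveldb/leveldb"
    "github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
    db, err := leveldb.OpenFile("/tmp/demo.db", nil) // path is illustrative
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // BytesPrefix builds the half-open key range covering the prefix.
    iter := db.NewIterator(util.BytesPrefix([]byte("foo-")), nil)
    for iter.Next() {
        fmt.Printf("%s = %s\n", iter.Key(), iter.Value())
    }
    iter.Release()
    if err := iter.Error(); err != nil {
        log.Fatal(err)
    }
}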
58  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/go13_bench_test.go  generated  vendored  Normal file
@@ -0,0 +1,58 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build go1.3

package leveldb

import (
    "sync/atomic"
    "testing"
)

func BenchmarkDBReadConcurrent(b *testing.B) {
    p := openDBBench(b, false)
    p.populate(b.N)
    p.fill()
    p.gc()
    defer p.close()

    b.ResetTimer()
    b.SetBytes(116)

    b.RunParallel(func(pb *testing.PB) {
        iter := p.newIter()
        defer iter.Release()
        for pb.Next() && iter.Next() {
        }
    })
}

func BenchmarkDBReadConcurrent2(b *testing.B) {
    p := openDBBench(b, false)
    p.populate(b.N)
    p.fill()
    p.gc()
    defer p.close()

    b.ResetTimer()
    b.SetBytes(116)

    var dir uint32
    b.RunParallel(func(pb *testing.PB) {
        iter := p.newIter()
        defer iter.Release()
        if atomic.AddUint32(&dir, 1)%2 == 0 {
            for pb.Next() && iter.Next() {
            }
        } else {
            if pb.Next() && iter.Last() {
                for pb.Next() && iter.Prev() {
                }
            }
        }
    })
}
121  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/journal/journal.go  generated  vendored
@@ -103,18 +103,18 @@ type flusher interface {
    Flush() error
}

// DroppedError is the error type that passed to Dropper.Drop method.
type DroppedError struct {
// ErrCorrupted is the error type that generated by corrupted block or chunk.
type ErrCorrupted struct {
    Size   int
    Reason string
}

func (e DroppedError) Error() string {
    return fmt.Sprintf("leveldb/journal: dropped %d bytes: %s", e.Size, e.Reason)
func (e ErrCorrupted) Error() string {
    return fmt.Sprintf("leveldb/journal: block/chunk corrupted: %s (%d bytes)", e.Reason, e.Size)
}

// Dropper is the interface that wrap simple Drop method. The Drop
// method will be called when the journal reader dropping a chunk.
// method will be called when the journal reader dropping a block or chunk.
type Dropper interface {
    Drop(err error)
}

@@ -158,76 +158,78 @@ func NewReader(r io.Reader, dropper Dropper, strict, checksum bool) *Reader {
    }
}

var errSkip = errors.New("leveldb/journal: skipped")

func (r *Reader) corrupt(n int, reason string, skip bool) error {
    if r.dropper != nil {
        r.dropper.Drop(ErrCorrupted{n, reason})
    }
    if r.strict && !skip {
        r.err = ErrCorrupted{n, reason}
        return r.err
    }
    return errSkip
}

// nextChunk sets r.buf[r.i:r.j] to hold the next chunk's payload, reading the
// next block into the buffer if necessary.
func (r *Reader) nextChunk(wantFirst, skip bool) error {
func (r *Reader) nextChunk(first bool) error {
    for {
        if r.j+headerSize <= r.n {
            checksum := binary.LittleEndian.Uint32(r.buf[r.j+0 : r.j+4])
            length := binary.LittleEndian.Uint16(r.buf[r.j+4 : r.j+6])
            chunkType := r.buf[r.j+6]

            var err error
            if checksum == 0 && length == 0 && chunkType == 0 {
                // Drop entire block.
                err = DroppedError{r.n - r.j, "zero header"}
                m := r.n - r.j
                r.i = r.n
                r.j = r.n
                return r.corrupt(m, "zero header", false)
            } else {
                m := r.n - r.j
                r.i = r.j + headerSize
                r.j = r.j + headerSize + int(length)
                if r.j > r.n {
                    // Drop entire block.
                    err = DroppedError{m, "chunk length overflows block"}
                    r.i = r.n
                    r.j = r.n
                    return r.corrupt(m, "chunk length overflows block", false)
                } else if r.checksum && checksum != util.NewCRC(r.buf[r.i-1:r.j]).Value() {
                    // Drop entire block.
                    err = DroppedError{m, "checksum mismatch"}
                    r.i = r.n
                    r.j = r.n
                    return r.corrupt(m, "checksum mismatch", false)
                }
            }
            if wantFirst && err == nil && chunkType != fullChunkType && chunkType != firstChunkType {
                if skip {
                    // The chunk are intentionally skipped.
                    if chunkType == lastChunkType {
                        skip = false
                    }
                    continue
                } else {
                    // Drop the chunk.
                    err = DroppedError{r.j - r.i + headerSize, "orphan chunk"}
                }
            if first && chunkType != fullChunkType && chunkType != firstChunkType {
                m := r.j - r.i
                r.i = r.j
                // Report the error, but skip it.
                return r.corrupt(m+headerSize, "orphan chunk", true)
            }
            if err == nil {
                r.last = chunkType == fullChunkType || chunkType == lastChunkType
            } else {
                if r.dropper != nil {
                    r.dropper.Drop(err)
                }
                if r.strict {
                    r.err = err
                }
            r.last = chunkType == fullChunkType || chunkType == lastChunkType
            return nil
        }

        // The last block.
        if r.n < blockSize && r.n > 0 {
            if !first {
                return r.corrupt(0, "missing chunk part", false)
            }
            r.err = io.EOF
            return r.err
        }

        // Read block.
        n, err := io.ReadFull(r.r, r.buf[:])
        if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
            return err
        }
        if r.n < blockSize && r.n > 0 {
            // This is the last block.
            if r.j != r.n {
                r.err = io.ErrUnexpectedEOF
            } else {
                r.err = io.EOF
            }
            return r.err
        }
        n, err := io.ReadFull(r.r, r.buf[:])
        if err != nil && err != io.ErrUnexpectedEOF {
            r.err = err
            return r.err
        }
        if n == 0 {
            if !first {
                return r.corrupt(0, "missing chunk part", false)
            }
            r.err = io.EOF
            return r.err
        }

@@ -237,29 +239,26 @@ func (r *Reader) nextChunk(wantFirst, skip bool) error {

// Next returns a reader for the next journal. It returns io.EOF if there are no
// more journals. The reader returned becomes stale after the next Next call,
// and should no longer be used.
// and should no longer be used. If strict is false, the reader will returns
// io.ErrUnexpectedEOF error when found corrupted journal.
func (r *Reader) Next() (io.Reader, error) {
    r.seq++
    if r.err != nil {
        return nil, r.err
    }
    skip := !r.last
    r.i = r.j
    for {
        r.i = r.j
        if r.nextChunk(true, skip) != nil {
            // So that 'orphan chunk' drop will be reported.
            skip = false
        } else {
        if err := r.nextChunk(true); err == nil {
            break
        }
        if r.err != nil {
            return nil, r.err
        } else if err != errSkip {
            return nil, err
        }
    }
    return &singleReader{r, r.seq, nil}, nil
}

// Reset resets the journal reader, allows reuse of the journal reader.
// Reset resets the journal reader, allows reuse of the journal reader. Reset returns
// last accumulated error.
func (r *Reader) Reset(reader io.Reader, dropper Dropper, strict, checksum bool) error {
    r.seq++
    err := r.err

@@ -296,7 +295,11 @@ func (x *singleReader) Read(p []byte) (int, error) {
        if r.last {
            return 0, io.EOF
        }
        if x.err = r.nextChunk(false, false); x.err != nil {
        x.err = r.nextChunk(false)
        if x.err != nil {
            if x.err == errSkip {
                x.err = io.ErrUnexpectedEOF
            }
            return 0, x.err
        }
    }

@@ -320,7 +323,11 @@ func (x *singleReader) ReadByte() (byte, error) {
        if r.last {
            return 0, io.EOF
        }
        if x.err = r.nextChunk(false, false); x.err != nil {
        x.err = r.nextChunk(false)
        if x.err != nil {
            if x.err == errSkip {
                x.err = io.ErrUnexpectedEOF
            }
            return 0, x.err
        }
    }
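A sketch of a Dropper implementation wired to the renamed ErrCorrupted type, assuming the NewReader signature shown in this diff; the logging and the empty source buffer are illustrative only.

package main

import (
    "bytes"
    "io"
    "io/ioutil"
    "log"

    "github.com/syndtr/goleveldb/leveldb/journal"
)

// logDropper just reports what the journal reader discarded.
type logDropper struct{}

func (logDropper) Drop(err error) {
    if e, ok := err.(journal.ErrCorrupted); ok {
        log.Printf("journal: dropped %d bytes: %s", e.Size, e.Reason)
    } else {
        log.Printf("journal: dropped: %v", err)
    }
}

func main() {
    var src bytes.Buffer // would normally be a journal file
    // strict=false: corrupted records are skipped and reported via Drop.
    r := journal.NewReader(&src, logDropper{}, false, true)
    for {
        rr, err := r.Next()
        if err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        }
        io.Copy(ioutil.Discard, rr)
    }
}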
490  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/journal/journal_test.go  generated  vendored
@@ -12,6 +12,7 @@ package journal

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
    "io/ioutil"

@@ -326,3 +327,492 @@ func TestStaleWriter(t *testing.T) {
        t.Fatalf("stale write #1: unexpected error: %v", err)
    }
}

func TestCorrupt_MissingLastBlock(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-1024)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    // Cut the last block.
    b := buf.Bytes()[:blockSize]
    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read.
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if n != blockSize-1024 {
        t.Fatalf("read #0: got %d bytes want %d", n, blockSize-1024)
    }

    // Second read.
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != io.ErrUnexpectedEOF {
        t.Fatalf("read #1: unexpected error: %v", err)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}

func TestCorrupt_CorruptedFirstBlock(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    // Third record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
        t.Fatalf("write #2: unexpected error: %v", err)
    }

    // Fourth record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
        t.Fatalf("write #3: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    b := buf.Bytes()
    // Corrupting block #0.
    for i := 0; i < 1024; i++ {
        b[i] = '1'
    }

    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read (third record).
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if want := int64(blockSize-headerSize) + 1; n != want {
        t.Fatalf("read #0: got %d bytes want %d", n, want)
    }

    // Second read (fourth record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #1: %v", err)
    }
    if want := int64(blockSize-headerSize) + 2; n != want {
        t.Fatalf("read #1: got %d bytes want %d", n, want)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}

func TestCorrupt_CorruptedMiddleBlock(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    // Third record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
        t.Fatalf("write #2: unexpected error: %v", err)
    }

    // Fourth record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
        t.Fatalf("write #3: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    b := buf.Bytes()
    // Corrupting block #1.
    for i := 0; i < 1024; i++ {
        b[blockSize+i] = '1'
    }

    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read (first record).
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if want := int64(blockSize / 2); n != want {
        t.Fatalf("read #0: got %d bytes want %d", n, want)
    }

    // Second read (second record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != io.ErrUnexpectedEOF {
        t.Fatalf("read #1: unexpected error: %v", err)
    }

    // Third read (fourth record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #2: %v", err)
    }
    if want := int64(blockSize-headerSize) + 2; n != want {
        t.Fatalf("read #2: got %d bytes want %d", n, want)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}

func TestCorrupt_CorruptedLastBlock(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    // Third record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
        t.Fatalf("write #2: unexpected error: %v", err)
    }

    // Fourth record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+2)); err != nil {
        t.Fatalf("write #3: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    b := buf.Bytes()
    // Corrupting block #3.
    for i := len(b) - 1; i > len(b)-1024; i-- {
        b[i] = '1'
    }

    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read (first record).
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if want := int64(blockSize / 2); n != want {
        t.Fatalf("read #0: got %d bytes want %d", n, want)
    }

    // Second read (second record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #1: %v", err)
    }
    if want := int64(blockSize - headerSize); n != want {
        t.Fatalf("read #1: got %d bytes want %d", n, want)
    }

    // Third read (third record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #2: %v", err)
    }
    if want := int64(blockSize-headerSize) + 1; n != want {
        t.Fatalf("read #2: got %d bytes want %d", n, want)
    }

    // Fourth read (fourth record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != io.ErrUnexpectedEOF {
        t.Fatalf("read #3: unexpected error: %v", err)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}

func TestCorrupt_FirstChuckLengthOverflow(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    // Third record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
        t.Fatalf("write #2: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    b := buf.Bytes()
    // Corrupting record #1.
    x := blockSize
    binary.LittleEndian.PutUint16(b[x+4:], 0xffff)

    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read (first record).
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if want := int64(blockSize / 2); n != want {
        t.Fatalf("read #0: got %d bytes want %d", n, want)
    }

    // Second read (second record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != io.ErrUnexpectedEOF {
        t.Fatalf("read #1: unexpected error: %v", err)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}

func TestCorrupt_MiddleChuckLengthOverflow(t *testing.T) {
    buf := new(bytes.Buffer)

    w := NewWriter(buf)

    // First record.
    ww, err := w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize/2)); err != nil {
        t.Fatalf("write #0: unexpected error: %v", err)
    }

    // Second record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), blockSize-headerSize)); err != nil {
        t.Fatalf("write #1: unexpected error: %v", err)
    }

    // Third record.
    ww, err = w.Next()
    if err != nil {
        t.Fatal(err)
    }
    if _, err := ww.Write(bytes.Repeat([]byte("0"), (blockSize-headerSize)+1)); err != nil {
        t.Fatalf("write #2: unexpected error: %v", err)
    }

    if err := w.Close(); err != nil {
        t.Fatal(err)
    }

    b := buf.Bytes()
    // Corrupting record #1.
    x := blockSize/2 + headerSize
    binary.LittleEndian.PutUint16(b[x+4:], 0xffff)

    r := NewReader(bytes.NewReader(b), dropper{t}, false, true)

    // First read (first record).
    rr, err := r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err := io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #0: %v", err)
    }
    if want := int64(blockSize / 2); n != want {
        t.Fatalf("read #0: got %d bytes want %d", n, want)
    }

    // Second read (third record).
    rr, err = r.Next()
    if err != nil {
        t.Fatal(err)
    }
    n, err = io.Copy(ioutil.Discard, rr)
    if err != nil {
        t.Fatalf("read #1: %v", err)
    }
    if want := int64(blockSize-headerSize) + 1; n != want {
        t.Fatalf("read #1: got %d bytes want %d", n, want)
    }

    if _, err := r.Next(); err != io.EOF {
        t.Fatalf("last next: unexpected error: %v", err)
    }
}
14  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/opt/options.go  generated  vendored
@@ -30,10 +30,16 @@ const (

type noCache struct{}

func (noCache) SetCapacity(capacity int) {}
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap(closed bool) {}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}

var NoCache cache.Cache = noCache{}
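noCache is a null-object implementation of the widened Cache interface, so callers never have to nil-check an optional cache. The same pattern in miniature, with hypothetical names:

package main

import "fmt"

// Stats is a stand-in interface; noopStats mirrors the noCache idea:
// a zero-value implementation that satisfies the interface so code
// paths stay identical whether the feature is enabled or not.
type Stats interface {
    Add(n int)
    Total() int
}

type noopStats struct{}

func (noopStats) Add(n int)  {}
func (noopStats) Total() int { return 0 }

func record(s Stats) {
    s.Add(1) // safe even when stats are disabled
}

func main() {
    var s Stats = noopStats{}
    record(s)
    fmt.Println(s.Total()) // 0
}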
2  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/session_util.go  generated  vendored
@@ -22,7 +22,7 @@ type dropper struct {
}

func (d dropper) Drop(err error) {
    if e, ok := err.(journal.DroppedError); ok {
    if e, ok := err.(journal.ErrCorrupted); ok {
        d.s.logf("journal@drop %s-%d S·%s %q", d.file.Type(), d.file.Num(), shortenb(e.Size), e.Reason)
    } else {
        d.s.logf("journal@drop %s-%d %q", d.file.Type(), d.file.Num(), err)
80  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table.go  generated  vendored
@@ -275,6 +275,7 @@ type tOps struct {
|
||||
s *session
|
||||
cache cache.Cache
|
||||
cacheNS cache.Namespace
|
||||
bpool *util.BufferPool
|
||||
}
|
||||
|
||||
// Creates an empty table and returns table writer.
|
||||
@@ -296,7 +297,7 @@ func (t *tOps) create() (*tWriter, error) {
|
||||
func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
|
||||
w, err := t.create()
|
||||
if err != nil {
|
||||
return f, n, err
|
||||
return
|
||||
}
|
||||
|
||||
defer func() {
|
||||
@@ -321,33 +322,24 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
	return
}

// Opens table. It returns a cache object, which should
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (c cache.Object, err error) {
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
	num := f.file.Num()
	c, ok := t.cacheNS.Get(num, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
	ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
		var r storage.Reader
		r, err = f.file.Open()
		if err != nil {
			return
			return 0, nil
		}

		o := t.s.o

		var cacheNS cache.Namespace
		if bc := o.GetBlockCache(); bc != nil {
			cacheNS = bc.GetNamespace(num)
		var bcacheNS cache.Namespace
		if bc := t.s.o.GetBlockCache(); bc != nil {
			bcacheNS = bc.GetNamespace(num)
		}

		ok = true
		value = table.NewReader(r, int64(f.size), cacheNS, o)
		charge = 1
		fin = func() {
			r.Close()
		}
		return
		return 1, table.NewReader(r, int64(f.size), bcacheNS, t.bpool, t.s.o)
	})
	if !ok && err == nil {
	if ch == nil && err == nil {
		err = ErrClosed
	}
	return
@@ -356,34 +348,33 @@ func (t *tOps) open(f *tFile) (c cache.Object, err error) {
// Finds key/value pair whose key is greater than or equal to the
// given key.
func (t *tOps) find(f *tFile, key []byte, ro *opt.ReadOptions) (rkey, rvalue []byte, err error) {
	c, err := t.open(f)
	ch, err := t.open(f)
	if err != nil {
		return nil, nil, err
	}
	defer c.Release()
	return c.Value().(*table.Reader).Find(key, ro)
	defer ch.Release()
	return ch.Value().(*table.Reader).Find(key, ro)
}

// Returns approximate offset of the given key.
func (t *tOps) offsetOf(f *tFile, key []byte) (offset uint64, err error) {
	c, err := t.open(f)
	ch, err := t.open(f)
	if err != nil {
		return
	}
	_offset, err := c.Value().(*table.Reader).OffsetOf(key)
	offset = uint64(_offset)
	c.Release()
	return
	defer ch.Release()
	offset_, err := ch.Value().(*table.Reader).OffsetOf(key)
	return uint64(offset_), err
}

// Creates an iterator from the given table.
func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
	c, err := t.open(f)
	ch, err := t.open(f)
	if err != nil {
		return iterator.NewEmptyIterator(err)
	}
	iter := c.Value().(*table.Reader).NewIterator(slice, ro)
	iter.SetReleaser(c)
	iter := ch.Value().(*table.Reader).NewIterator(slice, ro)
	iter.SetReleaser(ch)
	return iter
}

@@ -391,14 +382,16 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one uses the table.
func (t *tOps) remove(f *tFile) {
	num := f.file.Num()
	t.cacheNS.Delete(num, func(exist bool) {
		if err := f.file.Remove(); err != nil {
			t.s.logf("table@remove removing @%d %q", num, err)
		} else {
			t.s.logf("table@remove removed @%d", num)
		}
		if bc := t.s.o.GetBlockCache(); bc != nil {
			bc.GetNamespace(num).Zap(false)
	t.cacheNS.Delete(num, func(exist, pending bool) {
		if !pending {
			if err := f.file.Remove(); err != nil {
				t.s.logf("table@remove removing @%d %q", num, err)
			} else {
				t.s.logf("table@remove removed @%d", num)
			}
			if bc := t.s.o.GetBlockCache(); bc != nil {
				bc.ZapNamespace(num)
			}
		}
	})
}
@@ -406,14 +399,19 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still in use.
func (t *tOps) close() {
	t.cache.Zap(true)
	t.cache.Zap()
	t.bpool.Close()
}

// Creates new initialized table ops instance.
func newTableOps(s *session, cacheCap int) *tOps {
	c := cache.NewLRUCache(cacheCap)
	ns := c.GetNamespace(0)
	return &tOps{s, c, ns}
	return &tOps{
		s:       s,
		cache:   c,
		cacheNS: c.GetNamespace(0),
		bpool:   util.NewBufferPool(s.o.GetBlockSize() + 5),
	}
}

// tWriter wraps the table writer. It keeps track of the file descriptor
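The hunks above all migrate to the new cache API: `Get` now takes a constructor returning `(charge int, value interface{})`, where `(0, nil)` declines to cache and yields a nil handle, and `Delete` gains a `pending` flag so the table file is only removed once the last handle is gone. A minimal sketch of the `Get` contract as `tOps.open` uses it (`ns`, `load` and `use` are illustrative stand-ins, not part of the diff):

	// Sketch of the cache.Namespace.Get contract used by tOps.open above.
	func withCached(ns cache.Namespace, num uint64, load func(uint64) (interface{}, error), use func(interface{})) error {
		var loadErr error
		ch := ns.Get(num, func() (charge int, value interface{}) {
			v, err := load(num)
			if err != nil {
				loadErr = err // surfaced outside the closure, as in tOps.open
				return 0, nil // (0, nil) declines to cache; Get then returns nil
			}
			return 1, v // charge counts against the cache capacity
		})
		if ch == nil {
			if loadErr != nil {
				return loadErr
			}
			return ErrClosed // nil handle without an error: the cache has been closed
		}
		defer ch.Release() // unpin so the entry becomes evictable again
		use(ch.Value())
		return nil
	}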
2  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/block_test.go  generated  vendored
@@ -40,7 +40,7 @@ var _ = testutil.Defer(func() {
		data := bw.buf.Bytes()
		restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
		return &block{
			cmp:            comparer.DefaultComparer,
			tr:             &Reader{cmp: comparer.DefaultComparer},
			data:           data,
			restartsLen:    restartsLen,
			restartsOffset: len(data) - (restartsLen+1)*4,
278  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/reader.go  generated  vendored
@@ -37,7 +37,7 @@ func max(x, y int) int {
}

type block struct {
	cmp            comparer.BasicComparer
	tr             *Reader
	data           []byte
	restartsLen    int
	restartsOffset int
@@ -46,31 +46,25 @@ type block struct {
}

func (b *block) seek(rstart, rlimit int, key []byte) (index, offset int, err error) {
	n := b.restartsOffset
	data := b.data
	cmp := b.cmp

	index = sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
		offset := int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):]))
		offset += 1 // shared always zero, since this is a restart point
		v1, n1 := binary.Uvarint(data[offset:])       // key length
		_, n2 := binary.Uvarint(data[offset+n1:])     // value length
		offset := int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):]))
		offset += 1 // shared always zero, since this is a restart point
		v1, n1 := binary.Uvarint(b.data[offset:])     // key length
		_, n2 := binary.Uvarint(b.data[offset+n1:])   // value length
		m := offset + n1 + n2
		return cmp.Compare(data[m:m+int(v1)], key) > 0
		return b.tr.cmp.Compare(b.data[m:m+int(v1)], key) > 0
	}) + rstart - 1
	if index < rstart {
		// The smallest key is greater-than key sought.
		index = rstart
	}
	offset = int(binary.LittleEndian.Uint32(data[n+4*index:]))
	offset = int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*index:]))
	return
}

func (b *block) restartIndex(rstart, rlimit, offset int) int {
	n := b.restartsOffset
	data := b.data
	return sort.Search(b.restartsLen-rstart-(b.restartsLen-rlimit), func(i int) bool {
		return int(binary.LittleEndian.Uint32(data[n+4*(rstart+i):])) > offset
		return int(binary.LittleEndian.Uint32(b.data[b.restartsOffset+4*(rstart+i):])) > offset
	}) + rstart - 1
}

@@ -139,6 +133,14 @@ func (b *block) newIterator(slice *util.Range, inclLimit bool, cache util.Releas
	return bi
}

func (b *block) Release() {
	if b.tr.bpool != nil {
		b.tr.bpool.Put(b.data)
	}
	b.tr = nil
	b.data = nil
}

type dir int

const (
@@ -261,7 +263,7 @@ func (i *blockIter) Seek(key []byte) bool {
		i.dir = dirForward
	}
	for i.Next() {
		if i.block.cmp.Compare(i.key, key) >= 0 {
		if i.block.tr.cmp.Compare(i.key, key) >= 0 {
			return true
		}
	}
@@ -437,18 +439,21 @@ func (i *blockIter) Value() []byte {
}

func (i *blockIter) Release() {
	i.prevNode = nil
	i.prevKeys = nil
	i.key = nil
	i.value = nil
	i.dir = dirReleased
	if i.cache != nil {
		i.cache.Release()
		i.cache = nil
	}
	if i.releaser != nil {
		i.releaser.Release()
		i.releaser = nil
	if i.dir > dirReleased {
		i.block = nil
		i.prevNode = nil
		i.prevKeys = nil
		i.key = nil
		i.value = nil
		i.dir = dirReleased
		if i.cache != nil {
			i.cache.Release()
			i.cache = nil
		}
		if i.releaser != nil {
			i.releaser.Release()
			i.releaser = nil
		}
	}
}

@@ -467,7 +472,7 @@ func (i *blockIter) Error() error {
}

type filterBlock struct {
	filter  filter.Filter
	tr      *Reader
	data    []byte
	oOffset int
	baseLg  uint
@@ -481,7 +486,7 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
		n := int(binary.LittleEndian.Uint32(o))
		m := int(binary.LittleEndian.Uint32(o[4:]))
		if n < m && m <= b.oOffset {
			return b.filter.Contains(b.data[n:m], key)
			return b.tr.filter.Contains(b.data[n:m], key)
		} else if n == m {
			return false
		}
@@ -489,10 +494,17 @@ func (b *filterBlock) contains(offset uint64, key []byte) bool {
	return true
}

func (b *filterBlock) Release() {
	if b.tr.bpool != nil {
		b.tr.bpool.Put(b.data)
	}
	b.tr = nil
	b.data = nil
}

type indexIter struct {
	blockIter
	tableReader *Reader
	slice       *util.Range
	*blockIter
	slice *util.Range
	// Options
	checksum  bool
	fillCache bool
@@ -511,7 +523,7 @@ func (i *indexIter) Get() iterator.Iterator {
	if i.slice != nil && (i.blockIter.isFirst() || i.blockIter.isLast()) {
		slice = i.slice
	}
	return i.tableReader.getDataIter(dataBH, slice, i.checksum, i.fillCache)
	return i.blockIter.block.tr.getDataIter(dataBH, slice, i.checksum, i.fillCache)
}

// Reader is a table reader.
@@ -519,15 +531,15 @@ type Reader struct {
	reader io.ReaderAt
	cache  cache.Namespace
	err    error
	bpool  *util.BufferPool
	// Options
	cmp        comparer.Comparer
	filter     filter.Filter
	checksum   bool
	strictIter bool

	dataEnd     int64
	indexBlock  *block
	filterBlock *filterBlock
	dataEnd           int64
	indexBH, filterBH blockHandle
}

func verifyChecksum(data []byte) bool {
@@ -538,12 +550,13 @@ func verifyChecksum(data []byte) bool {
}

func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
	data := make([]byte, bh.length+blockTrailerLen)
	data := r.bpool.Get(int(bh.length + blockTrailerLen))
	if _, err := r.reader.ReadAt(data, int64(bh.offset)); err != nil && err != io.EOF {
		return nil, err
	}
	if checksum || r.checksum {
		if !verifyChecksum(data) {
			r.bpool.Put(data)
			return nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
		}
	}
@@ -551,12 +564,18 @@ func (r *Reader) readRawBlock(bh blockHandle, checksum bool) ([]byte, error) {
	case blockTypeNoCompression:
		data = data[:bh.length]
	case blockTypeSnappyCompression:
		var err error
		data, err = snappy.Decode(nil, data[:bh.length])
		decLen, err := snappy.DecodedLen(data[:bh.length])
		if err != nil {
			return nil, err
		}
		tmp := data
		data, err = snappy.Decode(r.bpool.Get(decLen), tmp[:bh.length])
		r.bpool.Put(tmp)
		if err != nil {
			return nil, err
		}
	default:
		r.bpool.Put(data)
		return nil, fmt.Errorf("leveldb/table: Reader: unknown block compression type: %d", data[bh.length])
	}
	return data, nil
@@ -569,7 +588,7 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
	}
	restartsLen := int(binary.LittleEndian.Uint32(data[len(data)-4:]))
	b := &block{
		cmp:            r.cmp,
		tr:             r,
		data:           data,
		restartsLen:    restartsLen,
		restartsOffset: len(data) - (restartsLen+1)*4,
@@ -578,7 +597,44 @@ func (r *Reader) readBlock(bh blockHandle, checksum bool) (*block, error) {
	return b, nil
}

func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterBlock, error) {
func (r *Reader) readBlockCached(bh blockHandle, checksum, fillCache bool) (*block, util.Releaser, error) {
	if r.cache != nil {
		var err error
		ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
			if !fillCache {
				return 0, nil
			}
			var b *block
			b, err = r.readBlock(bh, checksum)
			if err != nil {
				return 0, nil
			}
			return cap(b.data), b
		})
		if ch != nil {
			b, ok := ch.Value().(*block)
			if !ok {
				ch.Release()
				return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
			}
			if !b.checksum && (r.checksum || checksum) {
				if !verifyChecksum(b.data) {
					ch.Release()
					return nil, nil, errors.New("leveldb/table: Reader: invalid block (checksum mismatch)")
				}
				b.checksum = true
			}
			return b, ch, err
		} else if err != nil {
			return nil, nil, err
		}
	}

	b, err := r.readBlock(bh, checksum)
	return b, b, err
}

func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
	data, err := r.readRawBlock(bh, true)
	if err != nil {
		return nil, err
@@ -593,7 +649,7 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
		return nil, errors.New("leveldb/table: Reader: invalid filter block (invalid offset)")
	}
	b := &filterBlock{
		filter:  filter,
		tr:      r,
		data:    data,
		oOffset: oOffset,
		baseLg:  uint(data[n-1]),
@@ -602,44 +658,42 @@ func (r *Reader) readFilterBlock(bh blockHandle, filter filter.Filter) (*filterB
	return b, nil
}

func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fillCache bool) iterator.Iterator {
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
	if r.cache != nil {
		// Get/set block cache.
		var err error
		cache, ok := r.cache.Get(dataBH.offset, func() (ok bool, value interface{}, charge int, fin cache.SetFin) {
		ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
			if !fillCache {
				return
				return 0, nil
			}
			var dataBlock *block
			dataBlock, err = r.readBlock(dataBH, checksum)
			if err == nil {
				ok = true
				value = dataBlock
				charge = int(dataBH.length)
			var b *filterBlock
			b, err = r.readFilterBlock(bh)
			if err != nil {
				return 0, nil
			}
			return
			return cap(b.data), b
		})
		if err != nil {
			return iterator.NewEmptyIterator(err)
		}
		if ok {
			dataBlock := cache.Value().(*block)
			if !dataBlock.checksum && (r.checksum || checksum) {
				if !verifyChecksum(dataBlock.data) {
					return iterator.NewEmptyIterator(errors.New("leveldb/table: Reader: invalid block (checksum mismatch)"))
				}
				dataBlock.checksum = true
		if ch != nil {
			b, ok := ch.Value().(*filterBlock)
			if !ok {
				ch.Release()
				return nil, nil, errors.New("leveldb/table: Reader: inconsistent block type")
			}
			iter := dataBlock.newIterator(slice, false, cache)
			return iter
			return b, ch, err
		} else if err != nil {
			return nil, nil, err
		}
	}
	dataBlock, err := r.readBlock(dataBH, checksum)

	b, err := r.readFilterBlock(bh)
	return b, b, err
}

func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fillCache bool) iterator.Iterator {
	b, rel, err := r.readBlockCached(dataBH, checksum, fillCache)
	if err != nil {
		return iterator.NewEmptyIterator(err)
	}
	iter := dataBlock.newIterator(slice, false, nil)
	return iter
	return b.newIterator(slice, false, rel)
}

// NewIterator creates an iterator from the table.
@@ -653,18 +707,21 @@ func (r *Reader) getDataIter(dataBH blockHandle, slice *util.Range, checksum, fi
// when not used.
//
// Also read Iterator documentation of the leveldb/iterator package.

func (r *Reader) NewIterator(slice *util.Range, ro *opt.ReadOptions) iterator.Iterator {
	if r.err != nil {
		return iterator.NewEmptyIterator(r.err)
	}

	fillCache := !ro.GetDontFillCache()
	b, rel, err := r.readBlockCached(r.indexBH, true, fillCache)
	if err != nil {
		return iterator.NewEmptyIterator(err)
	}
	index := &indexIter{
		blockIter:   *r.indexBlock.newIterator(slice, true, nil),
		tableReader: r,
		slice:       slice,
		checksum:    ro.GetStrict(opt.StrictBlockChecksum),
		fillCache:   !ro.GetDontFillCache(),
		blockIter: b.newIterator(slice, true, rel),
		slice:     slice,
		checksum:  ro.GetStrict(opt.StrictBlockChecksum),
		fillCache: !ro.GetDontFillCache(),
	}
	return iterator.NewIndexedIterator(index, r.strictIter || ro.GetStrict(opt.StrictIterator), false)
}
@@ -681,7 +738,13 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
		return
	}

	index := r.indexBlock.newIterator(nil, true, nil)
	indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
	if err != nil {
		return
	}
	defer rel.Release()

	index := indexBlock.newIterator(nil, true, nil)
	defer index.Release()
	if !index.Seek(key) {
		err = index.Error()
@@ -695,9 +758,15 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
		err = errors.New("leveldb/table: Reader: invalid table (bad data block handle)")
		return
	}
	if r.filterBlock != nil && !r.filterBlock.contains(dataBH.offset, key) {
		err = ErrNotFound
		return
	if r.filter != nil {
		filterBlock, rel, ferr := r.readFilterBlockCached(r.filterBH, true)
		if ferr == nil {
			if !filterBlock.contains(dataBH.offset, key) {
				rel.Release()
				return nil, nil, ErrNotFound
			}
			rel.Release()
		}
	}
	data := r.getDataIter(dataBH, nil, ro.GetStrict(opt.StrictBlockChecksum), !ro.GetDontFillCache())
	defer data.Release()
@@ -708,8 +777,11 @@ func (r *Reader) Find(key []byte, ro *opt.ReadOptions) (rkey, value []byte, err
		}
		return
	}
	// Don't use block buffer, no need to copy the buffer.
	rkey = data.Key()
	value = data.Value()
	// Use block buffer, and since the buffer will be recycled, the buffer
	// needs to be copied.
	value = append([]byte{}, data.Value()...)
	return
}

@@ -741,7 +813,13 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
		return
	}

	index := r.indexBlock.newIterator(nil, true, nil)
	indexBlock, rel, err := r.readBlockCached(r.indexBH, true, true)
	if err != nil {
		return
	}
	defer rel.Release()

	index := indexBlock.newIterator(nil, true, nil)
	defer index.Release()
	if index.Seek(key) {
		dataBH, n := decodeBlockHandle(index.Value())
@@ -759,14 +837,29 @@ func (r *Reader) OffsetOf(key []byte) (offset int64, err error) {
	return
}

// Release implements util.Releaser.
// It also closes the file if it is an io.Closer.
func (r *Reader) Release() {
	if closer, ok := r.reader.(io.Closer); ok {
		closer.Close()
	}
	r.reader = nil
	r.cache = nil
	r.bpool = nil
}

// NewReader creates a new initialized table reader for the file.
// The cache is optional and can be nil.
// The cache and bpool are optional and can be nil.
//
// The returned table reader instance is goroutine-safe.
func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, o *opt.Options) *Reader {
func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, bpool *util.BufferPool, o *opt.Options) *Reader {
	if bpool == nil {
		bpool = util.NewBufferPool(o.GetBlockSize() + blockTrailerLen)
	}
	r := &Reader{
		reader:     f,
		cache:      cache,
		bpool:      bpool,
		cmp:        o.GetComparer(),
		checksum:   o.GetStrict(opt.StrictBlockChecksum),
		strictIter: o.GetStrict(opt.StrictIterator),
@@ -794,16 +887,11 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, o *opt.Options)
		return r
	}
	// Decode the index block handle.
	indexBH, n := decodeBlockHandle(footer[n:])
	r.indexBH, n = decodeBlockHandle(footer[n:])
	if n == 0 {
		r.err = errors.New("leveldb/table: Reader: invalid table (bad index block handle)")
		return r
	}
	// Read index block.
	r.indexBlock, r.err = r.readBlock(indexBH, true)
	if r.err != nil {
		return r
	}
	// Read metaindex block.
	metaBlock, err := r.readBlock(metaBH, true)
	if err != nil {
@@ -819,32 +907,28 @@ func NewReader(f io.ReaderAt, size int64, cache cache.Namespace, o *opt.Options)
			continue
		}
		fn := key[7:]
		var filter filter.Filter
		if f0 := o.GetFilter(); f0 != nil && f0.Name() == fn {
			filter = f0
			r.filter = f0
		} else {
			for _, f0 := range o.GetAltFilters() {
				if f0.Name() == fn {
					filter = f0
					r.filter = f0
					break
				}
			}
		}
		if filter != nil {
		if r.filter != nil {
			filterBH, n := decodeBlockHandle(metaIter.Value())
			if n == 0 {
				continue
			}
			r.filterBH = filterBH
			// Update data end.
			r.dataEnd = int64(filterBH.offset)
			filterBlock, err := r.readFilterBlock(filterBH, filter)
			if err != nil {
				continue
			}
			r.filterBlock = filterBlock
			break
		}
	}
	metaIter.Release()
	metaBlock.Release()
	return r
}
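A recurring rule in the reader changes: every buffer taken from the pool has exactly one owner, so it is either returned with `bpool.Put` on each error path or handed on to the `block`/`filterBlock`, whose new `Release` methods put it back. A condensed sketch of that ownership discipline, loosely modeled on `readRawBlock` (`readAt`, `decodedLen` and `decode` are illustrative parameters, not the real snappy API):

	// Sketch: pooled-buffer ownership as in Reader.readRawBlock.
	func readPooled(p *util.BufferPool, n int,
		readAt func([]byte) error,
		decodedLen func([]byte) (int, error),
		decode func(dst, src []byte) ([]byte, error)) ([]byte, error) {

		buf := p.Get(n) // borrow a buffer of length n from the pool
		if err := readAt(buf); err != nil {
			p.Put(buf) // error path: return the borrowed buffer
			return nil, err
		}
		dn, err := decodedLen(buf)
		if err != nil {
			p.Put(buf)
			return nil, err
		}
		out, err := decode(p.Get(dn), buf) // decode into a second pooled buffer
		p.Put(buf)                         // the compressed copy is done either way
		if err != nil {
			return nil, err
		}
		return out, nil // the caller now owns out and must eventually Put it back
	}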
8  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/table/table_test.go  generated  vendored
@@ -59,7 +59,7 @@ var _ = testutil.Defer(func() {
		It("Should be able to approximate offset of a key correctly", func() {
			Expect(err).ShouldNot(HaveOccurred())

			tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, o)
			tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
			CheckOffset := func(key string, expect, threshold int) {
				offset, err := tr.OffsetOf([]byte(key))
				Expect(err).ShouldNot(HaveOccurred())
@@ -95,7 +95,7 @@ var _ = testutil.Defer(func() {
		tw.Close()

		// Opening the table.
		tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, o)
		tr := NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()), nil, nil, o)
		return tableWrapper{tr}
	}
	Test := func(kv *testutil.KeyValue, body func(r *Reader)) func() {
@@ -111,7 +111,9 @@ var _ = testutil.Defer(func() {
	testutil.AllKeyValueTesting(nil, Build)
	Describe("with one key per block", Test(testutil.KeyValue_Generate(nil, 9, 1, 10, 512, 512), func(r *Reader) {
		It("should have correct blocks number", func() {
			Expect(r.indexBlock.restartsLen).Should(Equal(9))
			indexBlock, err := r.readBlock(r.indexBH, true)
			Expect(err).To(BeNil())
			Expect(indexBlock.restartsLen).Should(Equal(9))
		})
	}))
})
205  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go  generated  vendored  Normal file
@@ -0,0 +1,205 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

package util

import (
	"fmt"
	"sync/atomic"
	"time"
)

type buffer struct {
	b    []byte
	miss int
}

// BufferPool is a 'buffer pool'.
type BufferPool struct {
	pool       [6]chan []byte
	size       [5]uint32
	sizeMiss   [5]uint32
	sizeHalf   [5]uint32
	baseline   [4]int
	baselinex0 int
	baselinex1 int
	baseline0  int
	baseline1  int
	baseline2  int
	close      chan struct{}

	get     uint32
	put     uint32
	half    uint32
	less    uint32
	equal   uint32
	greater uint32
	miss    uint32
}

func (p *BufferPool) poolNum(n int) int {
	if n <= p.baseline0 && n > p.baseline0/2 {
		return 0
	}
	for i, x := range p.baseline {
		if n <= x {
			return i + 1
		}
	}
	return len(p.baseline) + 1
}

// Get returns a buffer with length n.
func (p *BufferPool) Get(n int) []byte {
	atomic.AddUint32(&p.get, 1)

	poolNum := p.poolNum(n)
	pool := p.pool[poolNum]
	if poolNum == 0 {
		// Fast path.
		select {
		case b := <-pool:
			switch {
			case cap(b) > n:
				if cap(b)-n >= n {
					atomic.AddUint32(&p.half, 1)
					select {
					case pool <- b:
					default:
					}
					return make([]byte, n)
				} else {
					atomic.AddUint32(&p.less, 1)
					return b[:n]
				}
			case cap(b) == n:
				atomic.AddUint32(&p.equal, 1)
				return b[:n]
			default:
				atomic.AddUint32(&p.greater, 1)
			}
		default:
			atomic.AddUint32(&p.miss, 1)
		}

		return make([]byte, n, p.baseline0)
	} else {
		sizePtr := &p.size[poolNum-1]

		select {
		case b := <-pool:
			switch {
			case cap(b) > n:
				if cap(b)-n >= n {
					atomic.AddUint32(&p.half, 1)
					sizeHalfPtr := &p.sizeHalf[poolNum-1]
					if atomic.AddUint32(sizeHalfPtr, 1) == 20 {
						atomic.StoreUint32(sizePtr, uint32(cap(b)/2))
						atomic.StoreUint32(sizeHalfPtr, 0)
					} else {
						select {
						case pool <- b:
						default:
						}
					}
					return make([]byte, n)
				} else {
					atomic.AddUint32(&p.less, 1)
					return b[:n]
				}
			case cap(b) == n:
				atomic.AddUint32(&p.equal, 1)
				return b[:n]
			default:
				atomic.AddUint32(&p.greater, 1)
				if uint32(cap(b)) >= atomic.LoadUint32(sizePtr) {
					select {
					case pool <- b:
					default:
					}
				}
			}
		default:
			atomic.AddUint32(&p.miss, 1)
		}

		if size := atomic.LoadUint32(sizePtr); uint32(n) > size {
			if size == 0 {
				atomic.CompareAndSwapUint32(sizePtr, 0, uint32(n))
			} else {
				sizeMissPtr := &p.sizeMiss[poolNum-1]
				if atomic.AddUint32(sizeMissPtr, 1) == 20 {
					atomic.StoreUint32(sizePtr, uint32(n))
					atomic.StoreUint32(sizeMissPtr, 0)
				}
			}
			return make([]byte, n)
		} else {
			return make([]byte, n, size)
		}
	}
}

// Put adds the given buffer to the pool.
func (p *BufferPool) Put(b []byte) {
	atomic.AddUint32(&p.put, 1)

	pool := p.pool[p.poolNum(cap(b))]
	select {
	case pool <- b:
	default:
	}

}

func (p *BufferPool) Close() {
	select {
	case p.close <- struct{}{}:
	default:
	}
}

func (p *BufferPool) String() string {
	return fmt.Sprintf("BufferPool{B·%d Z·%v Zm·%v Zh·%v G·%d P·%d H·%d <·%d =·%d >·%d M·%d}",
		p.baseline0, p.size, p.sizeMiss, p.sizeHalf, p.get, p.put, p.half, p.less, p.equal, p.greater, p.miss)
}

func (p *BufferPool) drain() {
	ticker := time.NewTicker(2 * time.Second)
	for {
		select {
		case <-ticker.C:
			for _, ch := range p.pool {
				select {
				case <-ch:
				default:
				}
			}
		case <-p.close:
			for _, ch := range p.pool {
				close(ch)
			}
			return
		}
	}
}

// NewBufferPool creates a new initialized 'buffer pool'.
func NewBufferPool(baseline int) *BufferPool {
	if baseline <= 0 {
		panic("baseline can't be <= 0")
	}
	p := &BufferPool{
		baseline0: baseline,
		baseline:  [...]int{baseline / 4, baseline / 2, baseline * 2, baseline * 4},
		close:     make(chan struct{}, 1),
	}
	for i, cap := range []int{2, 2, 4, 4, 2, 1} {
		p.pool[i] = make(chan []byte, cap)
	}
	go p.drain()
	return p
}
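For orientation, a minimal usage sketch of the pool above (the baseline value mirrors the block-size-plus-trailer choice seen in `newTableOps` and `NewReader`; everything else is illustrative):

	func bufferPoolDemo() {
		p := util.NewBufferPool(4096 + 5) // baseline sized like a block plus its trailer
		defer p.Close()                   // stops the background drain goroutine

		buf := p.Get(1024) // len(buf) == 1024; capacity may be larger
		copy(buf, "payload")
		// ... use buf ...
		p.Put(buf) // recycle; a later Get of a similar size may reuse it
	}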
21  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/pool.go  generated  vendored  Normal file
@@ -0,0 +1,21 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build go1.3

package util

import (
	"sync"
)

type Pool struct {
	sync.Pool
}

func NewPool(cap int) *Pool {
	return &Pool{}
}
33  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/pool_legacy.go  generated  vendored  Normal file
@@ -0,0 +1,33 @@
// Copyright (c) 2014, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

// +build !go1.3

package util

type Pool struct {
	pool chan interface{}
}

func (p *Pool) Get() interface{} {
	select {
	case x := <-p.pool:
		return x
	default:
		return nil
	}
}

func (p *Pool) Put(x interface{}) {
	select {
	case p.pool <- x:
	default:
	}
}

func NewPool(cap int) *Pool {
	return &Pool{pool: make(chan interface{}, cap)}
}
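The two pool files are build-tag alternatives: on Go 1.3 and later `util.Pool` wraps `sync.Pool` (and the `cap` argument is ignored), while older toolchains get the bounded-channel fallback above. Either way the caller-facing contract is the same; a hedged usage sketch:

	func poolDemo() {
		p := util.NewPool(4) // cap only matters for the pre-Go-1.3 fallback
		p.Put(make([]byte, 512))
		if v := p.Get(); v != nil { // Get may return nil: empty pool, or sync.Pool cleared by GC
			buf := v.([]byte) // values round-trip as interface{}, so callers type-assert
			_ = buf
		}
	}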
15  Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/util/range.go  generated  vendored
@@ -14,3 +14,18 @@ type Range struct {
	// Limit of the key range, not included in the range.
	Limit []byte
}

// BytesPrefix returns a key range that satisfies the given prefix.
// This is only applicable for the standard 'bytes comparer'.
func BytesPrefix(prefix []byte) *Range {
	var limit []byte
	for i := len(prefix) - 1; i >= 0; i-- {
		c := prefix[i]
		if c < 0xff {
			limit = make([]byte, i+1)
			copy(limit, prefix)
			limit[i] = c + 1
			break
		}
	}
	return &Range{prefix, limit}
}
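To make BytesPrefix concrete: incrementing the last non-0xff byte of the prefix yields the smallest key that sorts after every prefixed key, so the half-open range [Start, Limit) covers exactly the keys with that prefix, and an all-0xff prefix leaves Limit nil (unbounded above). A small demonstration:

	func bytesPrefixDemo() {
		r := util.BytesPrefix([]byte("app"))
		fmt.Printf("%q .. %q\n", r.Start, r.Limit) // "app" .. "apq"

		r = util.BytesPrefix([]byte{'a', 0xff})
		fmt.Printf("%q .. %q\n", r.Start, r.Limit) // "a\xff" .. "b": trailing 0xff bytes are skipped
	}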
21  README.md
@@ -1,10 +1,9 @@
syncthing
=========

[](https://travis-ci.org/syncthing/syncthing)
[](https://coveralls.io/r/syncthing/syncthing?branch=master)
[](http://godoc.org/github.com/syncthing/syncthing)
[](http://opensource.org/licenses/MIT)
[](http://build.syncthing.net/job/syncthing/lastBuild/)
[](http://godoc.org/github.com/syncthing/syncthing)
[](http://opensource.org/licenses/MIT)

This is the `syncthing` project. The following are the project goals:

@@ -26,14 +25,22 @@ for incompatible changes.
Getting Started
---------------

Take a look at the [getting started guide](http://discourse.syncthing.net/t/getting-started/46).
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).

Building
--------

Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
that describes it for both Unix and Windows.

Signed Releases
---------------

As of v0.7.0 and onwards, git tags and release binaries are GPG signed with
the key BCE524C7 (http://nym.se/gpg.txt). The signature is included in the
normal release bundle as `syncthing.asc` or `syncthing.exe.asc`.
the key BCE524C7 (http://nym.se/gpg.txt). For release binaries, MD5 and
SHA1 checksums are calculated and signed, available in the
md5sum.txt.asc and sha1sum.txt.asc files.

Documentation
=============
23  auto/auto_test.go  Normal file
@@ -0,0 +1,23 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package auto_test

import (
	"bytes"
	"testing"

	"github.com/syncthing/syncthing/auto"
)

func TestAssets(t *testing.T) {
	assets := auto.Assets()
	idx, ok := assets["index.html"]
	if !ok {
		t.Fatal("No index.html in compiled in assets")
	}
	if !bytes.Contains(idx, []byte("<html")) {
		t.Fatal("No html in index.html")
	}
}

@@ -1,2 +1,6 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// Package auto contains auto generated files for web assets.
package auto
File diff suppressed because one or more lines are too long
105  beacon/beacon.go
@@ -11,52 +11,17 @@ type recv struct {
	src  net.Addr
}

type dst struct {
	intf string
	conn *net.UDPConn
type Interface interface {
	Send(data []byte)
	Recv() ([]byte, net.Addr)
}

type Beacon struct {
	conn   *net.UDPConn
	port   int
	conns  []dst
	inbox  chan []byte
	outbox chan recv
}

func New(port int) (*Beacon, error) {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
	if err != nil {
		return nil, err
	}
	b := &Beacon{
		conn:   conn,
		port:   port,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go b.reader()
	go b.writer()

	return b, nil
}

func (b *Beacon) Send(data []byte) {
	b.inbox <- data
}

func (b *Beacon) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Beacon) reader() {
func genericReader(conn *net.UDPConn, outbox chan<- recv) {
	bs := make([]byte, 65536)
	for {
		n, addr, err := b.conn.ReadFrom(bs)
		n, addr, err := conn.ReadFrom(bs)
		if err != nil {
			l.Warnln("Beacon read:", err)
			l.Warnln("multicast read:", err)
			return
		}
		if debug {
@@ -66,7 +31,7 @@ func (b *Beacon) reader() {
		c := make([]byte, n)
		copy(c, bs)
		select {
		case b.outbox <- recv{c, addr}:
		case outbox <- recv{c, addr}:
		default:
			if debug {
				l.Debugln("dropping message")
@@ -74,59 +39,3 @@ func (b *Beacon) reader() {
		}
	}
}

func (b *Beacon) writer() {
	for bs := range b.inbox {

		addrs, err := net.InterfaceAddrs()
		if err != nil {
			l.Warnln("Beacon: interface addresses:", err)
			continue
		}

		var dsts []net.IP
		for _, addr := range addrs {
			if iaddr, ok := addr.(*net.IPNet); ok && iaddr.IP.IsGlobalUnicast() && iaddr.IP.To4() != nil {
				baddr := bcast(iaddr)
				dsts = append(dsts, baddr.IP)
			}
		}

		if len(dsts) == 0 {
			// Fall back to the general IPv4 broadcast address
			dsts = append(dsts, net.IP{0xff, 0xff, 0xff, 0xff})
		}

		if debug {
			l.Debugln("addresses:", dsts)
		}

		for _, ip := range dsts {
			dst := &net.UDPAddr{IP: ip, Port: b.port}

			_, err := b.conn.WriteTo(bs, dst)
			if err != nil {
				if debug {
					l.Debugln(err)
				}
			} else if debug {
				l.Debugf("sent %d bytes to %s", len(bs), dst)
			}
		}
	}
}

func bcast(ip *net.IPNet) *net.IPNet {
	var bc = &net.IPNet{}
	bc.IP = make([]byte, len(ip.IP))
	copy(bc.IP, ip.IP)
	bc.Mask = ip.Mask

	offset := len(bc.IP) - len(bc.Mask)
	for i := range bc.IP {
		if i-offset >= 0 {
			bc.IP[i] = ip.IP[i] | ^ip.Mask[i-offset]
		}
	}
	return bc
}
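A worked example of bcast: the host bits of the interface address are OR'd with ones, so 172.16.32.5/20 broadcasts to 172.16.47.255. A standalone, IPv4-only replica of the same computation (broadcastAddr is an illustrative name; bcast itself is unexported):

	package main

	import (
		"fmt"
		"net"
	)

	// broadcastAddr mirrors beacon's bcast for 4-byte addresses: OR host bits with ones.
	func broadcastAddr(ip net.IP, mask net.IPMask) net.IP {
		ip = ip.To4()
		out := make(net.IP, len(ip))
		for i := range ip {
			out[i] = ip[i] | ^mask[i]
		}
		return out
	}

	func main() {
		ip, ipnet, _ := net.ParseCIDR("172.16.32.5/20")
		fmt.Println(broadcastAddr(ip, ipnet.Mask)) // 172.16.47.255
	}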
97  beacon/broadcast.go  Normal file
@@ -0,0 +1,97 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package beacon

import "net"

type Broadcast struct {
	conn   *net.UDPConn
	port   int
	inbox  chan []byte
	outbox chan recv
}

func NewBroadcast(port int) (*Broadcast, error) {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: port})
	if err != nil {
		return nil, err
	}
	b := &Broadcast{
		conn:   conn,
		port:   port,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go genericReader(b.conn, b.outbox)
	go b.writer()

	return b, nil
}

func (b *Broadcast) Send(data []byte) {
	b.inbox <- data
}

func (b *Broadcast) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Broadcast) writer() {
	for bs := range b.inbox {

		addrs, err := net.InterfaceAddrs()
		if err != nil {
			l.Warnln("Broadcast: interface addresses:", err)
			continue
		}

		var dsts []net.IP
		for _, addr := range addrs {
			if iaddr, ok := addr.(*net.IPNet); ok && iaddr.IP.IsGlobalUnicast() && iaddr.IP.To4() != nil {
				baddr := bcast(iaddr)
				dsts = append(dsts, baddr.IP)
			}
		}

		if len(dsts) == 0 {
			// Fall back to the general IPv4 broadcast address
			dsts = append(dsts, net.IP{0xff, 0xff, 0xff, 0xff})
		}

		if debug {
			l.Debugln("addresses:", dsts)
		}

		for _, ip := range dsts {
			dst := &net.UDPAddr{IP: ip, Port: b.port}

			_, err := b.conn.WriteTo(bs, dst)
			if err != nil {
				if debug {
					l.Debugln(err)
				}
			} else if debug {
				l.Debugf("sent %d bytes to %s", len(bs), dst)
			}
		}
	}
}

func bcast(ip *net.IPNet) *net.IPNet {
	var bc = &net.IPNet{}
	bc.IP = make([]byte, len(ip.IP))
	copy(bc.IP, ip.IP)
	bc.Mask = ip.Mask

	offset := len(bc.IP) - len(bc.Mask)
	for i := range bc.IP {
		if i-offset >= 0 {
			bc.IP[i] = ip.IP[i] | ^ip.Mask[i-offset]
		}
	}
	return bc
}
69  beacon/multicast.go  Normal file
@@ -0,0 +1,69 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package beacon

import "net"

type Multicast struct {
	conn   *net.UDPConn
	addr   *net.UDPAddr
	inbox  chan []byte
	outbox chan recv
}

func NewMulticast(addr string) (*Multicast, error) {
	gaddr, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return nil, err
	}
	conn, err := net.ListenMulticastUDP("udp", nil, gaddr)
	if err != nil {
		return nil, err
	}
	b := &Multicast{
		conn:   conn,
		addr:   gaddr,
		inbox:  make(chan []byte),
		outbox: make(chan recv, 16),
	}

	go genericReader(b.conn, b.outbox)
	go b.writer()

	return b, nil
}

func (b *Multicast) Send(data []byte) {
	b.inbox <- data
}

func (b *Multicast) Recv() ([]byte, net.Addr) {
	recv := <-b.outbox
	return recv.data, recv.src
}

func (b *Multicast) writer() {
	for bs := range b.inbox {
		intfs, err := net.Interfaces()
		if err != nil {
			l.Warnln("multicast interfaces:", err)
			continue
		}
		for _, intf := range intfs {
			if intf.Flags&net.FlagUp != 0 && intf.Flags&net.FlagMulticast != 0 {
				addr := *b.addr
				addr.Zone = intf.Name
				_, err = b.conn.WriteTo(bs, &addr)
				if err != nil {
					if debug {
						l.Debugln(err, "on write to", addr)
					}
				} else if debug {
					l.Debugf("sent %d bytes to %s", len(bs), addr.String())
				}
			}
		}
	}
}
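Broadcast and Multicast both satisfy the Interface introduced in beacon.go, so callers can treat either transport uniformly. A hedged consumption sketch (the port and group address are placeholder values, not configuration from this changeset):

	func beaconDemo() error {
		b, err := beacon.NewBroadcast(21025) // or: beacon.NewMulticast("[ff32::4242]:21026")
		if err != nil {
			return err
		}
		var iface beacon.Interface = b // both transports implement Send/Recv

		go func() {
			for {
				data, src := iface.Recv() // blocks until a datagram arrives
				fmt.Printf("%d bytes from %v\n", len(data), src)
			}
		}()
		iface.Send([]byte("announce")) // per-interface fan-out happens in the writer goroutine
		return nil
	}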
500  build.go  Normal file
@@ -0,0 +1,500 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build ignore

package main

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"compress/gzip"
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"os/user"
	"path/filepath"
	"regexp"
	"runtime"
	"strconv"
	"strings"
)

var (
	versionRe = regexp.MustCompile(`-[0-9]{1,3}-g[0-9a-f]{5,10}`)
	goarch    string
	goos      string
	noupgrade bool
)

const minGoVersion = 1.3

func main() {
	log.SetOutput(os.Stdout)
	log.SetFlags(0)

	if os.Getenv("GOPATH") == "" {
		cwd, err := os.Getwd()
		if err != nil {
			log.Fatal(err)
		}
		gopath := filepath.Clean(filepath.Join(cwd, "../../../../"))
		log.Println("GOPATH is", gopath)
		os.Setenv("GOPATH", gopath)
	}
	os.Setenv("PATH", fmt.Sprintf("%s%cbin%c%s", os.Getenv("GOPATH"), os.PathSeparator, os.PathListSeparator, os.Getenv("PATH")))

	flag.StringVar(&goarch, "goarch", runtime.GOARCH, "GOARCH")
	flag.StringVar(&goos, "goos", runtime.GOOS, "GOOS")
	flag.BoolVar(&noupgrade, "no-upgrade", false, "Disable upgrade functionality")
	flag.Parse()

	switch goarch {
	case "386", "amd64", "armv5", "armv6", "armv7":
		break
	case "arm":
		log.Println("Invalid goarch \"arm\". Use one of \"armv5\", \"armv6\", \"armv7\".")
		log.Fatalln("Note that producing a correct \"armv5\" binary requires a rebuilt stdlib.")
	default:
		log.Printf("Unknown goarch %q; proceed with caution!", goarch)
	}

	checkRequiredGoVersion()

	if check() != nil {
		setup()
	}

	if flag.NArg() == 0 {
		install("./cmd/...")
		return
	}

	switch flag.Arg(0) {
	case "install":
		pkg := "./cmd/..."
		if flag.NArg() > 1 {
			pkg = flag.Arg(1)
		}
		install(pkg)

	case "build":
		pkg := "./cmd/syncthing"
		if flag.NArg() > 1 {
			pkg = flag.Arg(1)
		}
		var tags []string
		if noupgrade {
			tags = []string{"noupgrade"}
		}
		build(pkg, tags)

	case "test":
		pkg := "./..."
		if flag.NArg() > 1 {
			pkg = flag.Arg(1)
		}
		test(pkg)

	case "assets":
		assets()

	case "xdr":
		xdr()

	case "translate":
		translate()

	case "transifex":
		transifex()

	case "deps":
		deps()

	case "tar":
		buildTar()

	case "zip":
		buildZip()

	case "clean":
		clean()

	default:
		log.Fatalf("Unknown command %q", flag.Arg(0))
	}
}

func check() error {
	_, err := exec.LookPath("godep")
	return err
}

func checkRequiredGoVersion() {
	ver := run("go", "version")
	re := regexp.MustCompile(`go version go(\d+\.\d+)`)
	if m := re.FindSubmatch(ver); len(m) == 2 {
		vs := string(m[1])
		// This is a standard go build. Verify that it's new enough.
		f, err := strconv.ParseFloat(vs, 64)
		if err != nil {
			log.Printf("*** Could not parse Go version out of %q.\n*** This isn't known to work, proceed at your own risk.", vs)
			return
		}
		if f < minGoVersion {
			log.Fatalf("*** Go version %.01f is less than required %.01f.\n*** This is known not to work, not proceeding.", f, minGoVersion)
		}
	} else {
		log.Printf("*** Unknown Go version %q.\n*** This isn't known to work, proceed at your own risk.", ver)
	}
}

func setup() {
	runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/cover")
	runPrint("go", "get", "-v", "code.google.com/p/go.tools/cmd/vet")
	runPrint("go", "get", "-v", "code.google.com/p/go.net/html")
	runPrint("go", "get", "-v", "github.com/tools/godep")
}

func test(pkg string) {
	runPrint("godep", "go", "test", "-short", "-timeout", "10s", pkg)
}

func install(pkg string) {
	os.Setenv("GOBIN", "./bin")
	setBuildEnv()
	runPrint("godep", "go", "install", "-ldflags", ldflags(), pkg)
}

func build(pkg string, tags []string) {
	rmr("syncthing", "syncthing.exe")
	args := []string{"go", "build", "-ldflags", ldflags()}
	if len(tags) > 0 {
		args = append(args, "-tags", strings.Join(tags, ","))
	}
	args = append(args, pkg)
	setBuildEnv()
	runPrint("godep", args...)
}

func buildTar() {
	name := archiveName()
	var tags []string
	if noupgrade {
		tags = []string{"noupgrade"}
		name += "-noupgrade"
	}
	build("./cmd/syncthing", tags)
	filename := name + ".tar.gz"
	tarGz(filename, []archiveFile{
		{"README.md", name + "/README.txt"},
		{"LICENSE", name + "/LICENSE.txt"},
		{"CONTRIBUTORS", name + "/CONTRIBUTORS.txt"},
		{"syncthing", name + "/syncthing"},
	})
	log.Println(filename)
}

func buildZip() {
	name := archiveName()
	var tags []string
	if noupgrade {
		tags = []string{"noupgrade"}
		name += "-noupgrade"
	}
	build("./cmd/syncthing", tags)
	filename := name + ".zip"
	zipFile(filename, []archiveFile{
		{"README.md", name + "/README.txt"},
		{"LICENSE", name + "/LICENSE.txt"},
		{"CONTRIBUTORS", name + "/CONTRIBUTORS.txt"},
		{"syncthing.exe", name + "/syncthing.exe"},
	})
	log.Println(filename)
}

func setBuildEnv() {
	os.Setenv("GOOS", goos)
	if strings.HasPrefix(goarch, "arm") {
		os.Setenv("GOARCH", "arm")
		os.Setenv("GOARM", goarch[4:])
	} else {
		os.Setenv("GOARCH", goarch)
	}
	if goarch == "386" {
		os.Setenv("GO386", "387")
	}
}

func assets() {
	runPipe("auto/gui.files.go", "godep", "go", "run", "cmd/genassets/main.go", "gui")
}

func xdr() {
	for _, f := range []string{"discover/packets", "files/leveldb", "protocol/message"} {
		runPipe(f+"_xdr.go", "go", "run", "./Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go", "--", f+".go")
	}
}

func translate() {
	os.Chdir("gui/lang")
	runPipe("lang-en-new.json", "go", "run", "../../cmd/translate/main.go", "lang-en.json", "../index.html")
	os.Remove("lang-en.json")
	err := os.Rename("lang-en-new.json", "lang-en.json")
	if err != nil {
		log.Fatal(err)
	}
	os.Chdir("../..")
}

func transifex() {
	os.Chdir("gui/lang")
	runPrint("go", "run", "../../cmd/transifexdl/main.go")
	os.Chdir("../..")
	assets()
}

func deps() {
	rmr("Godeps")
	runPrint("godep", "save", "./cmd/...")
}

func clean() {
	rmr("bin", "Godeps/_workspace/pkg", "Godeps/_workspace/bin")
	rmr(filepath.Join(os.Getenv("GOPATH"), fmt.Sprintf("pkg/%s_%s/github.com/syncthing", goos, goarch)))
}

func ldflags() string {
	var b bytes.Buffer
	b.WriteString("-w")
	b.WriteString(fmt.Sprintf(" -X main.Version %s", version()))
	b.WriteString(fmt.Sprintf(" -X main.BuildStamp %d", buildStamp()))
	b.WriteString(fmt.Sprintf(" -X main.BuildUser %s", buildUser()))
	b.WriteString(fmt.Sprintf(" -X main.BuildHost %s", buildHost()))
	b.WriteString(fmt.Sprintf(" -X main.BuildEnv %s", buildEnvironment()))
	if strings.HasPrefix(goarch, "arm") {
		b.WriteString(fmt.Sprintf(" -X main.GoArchExtra %s", goarch[3:]))
	}
	return b.String()
}

func rmr(paths ...string) {
	for _, path := range paths {
		log.Println("rm -r", path)
		os.RemoveAll(path)
	}
}

func version() string {
	v := run("git", "describe", "--always", "--dirty")
	v = versionRe.ReplaceAllFunc(v, func(s []byte) []byte {
		s[0] = '+'
		return s
	})
	return string(v)
}

func buildStamp() int64 {
	bs := run("git", "show", "-s", "--format=%ct")
	s, _ := strconv.ParseInt(string(bs), 10, 64)
	return s
}

func buildUser() string {
	u, err := user.Current()
	if err != nil {
		return "unknown-user"
	}
	return strings.Replace(u.Username, " ", "-", -1)
}

func buildHost() string {
	h, err := os.Hostname()
	if err != nil {
		return "unknown-host"
	}
	return h
}

func buildEnvironment() string {
	if v := os.Getenv("ENVIRONMENT"); len(v) > 0 {
		return v
	}
	return "default"
}

func buildArch() string {
	os := goos
	if os == "darwin" {
		os = "macosx"
	}
	return fmt.Sprintf("%s-%s", os, goarch)
}

func archiveName() string {
	return fmt.Sprintf("syncthing-%s-%s", buildArch(), version())
}

func run(cmd string, args ...string) []byte {
	ecmd := exec.Command(cmd, args...)
	bs, err := ecmd.CombinedOutput()
	if err != nil {
		log.Println(cmd, strings.Join(args, " "))
		log.Println(string(bs))
		log.Fatal(err)
	}
	return bytes.TrimSpace(bs)
}

func runPrint(cmd string, args ...string) {
	log.Println(cmd, strings.Join(args, " "))
	ecmd := exec.Command(cmd, args...)
	ecmd.Stdout = os.Stdout
	ecmd.Stderr = os.Stderr
	err := ecmd.Run()
	if err != nil {
		log.Fatal(err)
	}
}

func runPipe(file, cmd string, args ...string) {
	log.Println(cmd, strings.Join(args, " "), ">", file)
	fd, err := os.Create(file)
	if err != nil {
		log.Fatal(err)
	}
	ecmd := exec.Command(cmd, args...)
	ecmd.Stdout = fd
	ecmd.Stderr = os.Stderr
	err = ecmd.Run()
	if err != nil {
		log.Fatal(err)
	}
	fd.Close()
}

type archiveFile struct {
	src string
	dst string
}

func tarGz(out string, files []archiveFile) {
	fd, err := os.Create(out)
	if err != nil {
		log.Fatal(err)
	}

	gw := gzip.NewWriter(fd)
	tw := tar.NewWriter(gw)

	for _, f := range files {
		sf, err := os.Open(f.src)
		if err != nil {
			log.Fatal(err)
		}

		info, err := sf.Stat()
		if err != nil {
			log.Fatal(err)
		}
		h := &tar.Header{
			Name:    f.dst,
			Size:    info.Size(),
			Mode:    int64(info.Mode()),
			ModTime: info.ModTime(),
		}

		err = tw.WriteHeader(h)
		if err != nil {
			log.Fatal(err)
		}
		_, err = io.Copy(tw, sf)
		if err != nil {
			log.Fatal(err)
		}
		sf.Close()
	}

	err = tw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = gw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = fd.Close()
	if err != nil {
		log.Fatal(err)
	}
}

func zipFile(out string, files []archiveFile) {
	fd, err := os.Create(out)
	if err != nil {
		log.Fatal(err)
	}

	zw := zip.NewWriter(fd)

	for _, f := range files {
		sf, err := os.Open(f.src)
		if err != nil {
			log.Fatal(err)
		}

		info, err := sf.Stat()
		if err != nil {
			log.Fatal(err)
		}

		fh, err := zip.FileInfoHeader(info)
		if err != nil {
			log.Fatal(err)
		}
		fh.Name = f.dst
		fh.Method = zip.Deflate

		if strings.HasSuffix(f.dst, ".txt") {
			// Text file. Read it and convert line endings.
			bs, err := ioutil.ReadAll(sf)
			if err != nil {
				log.Fatal(err)
			}
			bs = bytes.Replace(bs, []byte{'\n'}, []byte{'\r', '\n'}, -1)
			fh.UncompressedSize = uint32(len(bs))
			fh.UncompressedSize64 = uint64(len(bs))

			of, err := zw.CreateHeader(fh)
			if err != nil {
				log.Fatal(err)
			}
			of.Write(bs)
		} else {
			// Binary file. Copy verbatim.
			of, err := zw.CreateHeader(fh)
			if err != nil {
				log.Fatal(err)
			}
			_, err = io.Copy(of, sf)
			if err != nil {
				log.Fatal(err)
			}
		}
	}

	err = zw.Close()
	if err != nil {
		log.Fatal(err)
	}
	err = fd.Close()
	if err != nil {
		log.Fatal(err)
	}
}
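The -X flags assembled by ldflags() only take effect because cmd/syncthing declares matching package-level strings. A sketch of the consuming side (the variable names match the flags above; the print helper is illustrative):

	package main

	import "fmt"

	// Set at link time via -X main.Version <v> -X main.BuildUser <u> ...
	// (the pre-Go-1.5 "-X name value" linker syntax used by build.go above).
	var (
		Version    string
		BuildStamp string // numeric string; parse with strconv if an int64 is needed
		BuildUser  string
		BuildHost  string
		BuildEnv   string
	)

	func printVersion() {
		fmt.Printf("syncthing %s (%s@%s, env %s)\n", Version, BuildUser, BuildHost, BuildEnv)
	}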
282  build.sh
@@ -1,239 +1,105 @@
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

export COPYFILE_DISABLE=true
export GO386=387 # Don't use SSE on 32 bit builds

distFiles=(README.md LICENSE CONTRIBUTORS) # apart from the binary itself

# replace "...-12-g123abc" with "...+12-g123abc" to remain semver compatible-ish
version=$(git describe --always --dirty)
version=$(echo "$version" | sed 's/-\([0-9]\{1,3\}-g[0-9a-f]\{5,10\}\)/+\1/')

date=$(git show -s --format=%ct)
user=$(whoami)
host=$(hostname)
host=${host%%.*}
bldenv=${ENVIRONMENT:-default}
ldflags="-w -X main.Version $version -X main.BuildStamp $date -X main.BuildUser $user -X main.BuildHost $host -X main.BuildEnv $bldenv"

check() {
	if ! command -v godep >/dev/null ; then
		echo "Error: no godep. Try \"$0 setup\"."
		exit 1
	fi
}

build() {
	check
	godep go build $* -ldflags "$ldflags" ./cmd/syncthing
}

assets() {
	check
	godep go run cmd/genassets/main.go gui > auto/gui.files.go
}

test-cov() {
	echo "mode: set" > coverage.out
	fail=0

	for dir in $(go list ./...) ; do
		godep go test -coverprofile=profile.out $dir || fail=1
		if [ -f profile.out ] ; then
			grep -v "mode: set" profile.out >> coverage.out
			rm profile.out
		fi
	done

	exit $fail
}

test() {
	check
	go vet ./...
	godep go test -cpu=1,2,4 $* ./...
}

sign() {
	if git describe --exact-match 2>/dev/null >/dev/null ; then
		# HEAD is a tag
		id=BCE524C7
		if gpg --list-keys "$id" >/dev/null 2>&1 ; then
			gpg -ab -u "$id" "$1"
		fi
	fi
}

tarDist() {
	name="$1"
	rm -rf "$name"
	mkdir -p "$name"
	cp syncthing "${distFiles[@]}" "$name"
	sign "$name/syncthing"
	tar zcvf "$name.tar.gz" "$name"
	rm -rf "$name"
}

zipDist() {
	name="$1"
	rm -rf "$name"
	mkdir -p "$name"
	for f in "${distFiles[@]}" ; do
		GOARCH="" GOOS="" go run cmd/todos/main.go < "$f" > "$name/$f.txt"
	done
	cp syncthing.exe "$name"
	sign "$name/syncthing.exe"
	zip -r "$name.zip" "$name"
	rm -rf "$name"
}

deps() {
	check
	godep save ./cmd/...
}

setup() {
	go get -v code.google.com/p/go.tools/cmd/cover
	go get -v code.google.com/p/go.tools/cmd/vet
	go get -v github.com/mattn/goveralls
	go get -v github.com/tools/godep
}

xdr() {
	for f in discover/packets files/leveldb protocol/message ; do
		go run "$(godep path)/src/github.com/calmh/xdr/cmd/genxdr/main.go" -- "${f}.go" > "${f}_xdr.go"
	done
}

translate() {
	pushd gui
	go run ../cmd/translate/main.go lang-en.json < index.html > lang-en-new.json
	mv lang-en-new.json lang-en.json
	popd
}

transifex() {
	pushd gui
	go run ../cmd/transifexdl/main.go
	popd
	assets
}

case "$1" in
	"")
		shift
		export GOBIN=$(pwd)/bin
		godep go install $* -ldflags "$ldflags" ./cmd/...
case "${1:-default}" in
	default)
		go run build.go
		;;

	race)
		build -race
		;;

	guidev)
		echo "Syncthing is already built for GUI developments. Try:"
		echo "    STGUIASSETS=~/someDir/gui syncthing"
	clean)
		go run build.go "$1"
		;;

	test)
		test -short
		;;
		ulimit -t 60 || true
		ulimit -d 512000 || true
		ulimit -m 512000 || true

	test-cov)
		test-cov
		go run build.go "$1"
		;;

	tar)
		rm -f *.tar.gz *.zip
		test -short || exit 1
		assets
		build

		eval $(go env)
		name="syncthing-${GOOS/darwin/macosx}-$GOARCH-$version"

		tarDist "$name"
		;;

	all)
		rm -f *.tar.gz *.zip
		test -short || exit 1
		assets

		for os in darwin-amd64 freebsd-amd64 freebsd-386 linux-amd64 linux-386 windows-amd64 windows-386 solaris-amd64 ; do
			export GOOS=${os%-*}
			export GOARCH=${os#*-}

			build

			name="syncthing-${os/darwin/macosx}-$version"
			case $GOOS in
				windows)
					zipDist "$name"
					rm -f syncthing.exe
					;;
				*)
					tarDist "$name"
					rm -f syncthing
					;;
			esac
		done

		export GOOS=linux
		export GOARCH=arm

		origldflags="$ldflags"

		export GOARM=7
		ldflags="$origldflags -X main.GoArchExtra v7"
		build
		tarDist "syncthing-linux-armv7-$version"

		export GOARM=6
		ldflags="$origldflags -X main.GoArchExtra v6"
		build
		tarDist "syncthing-linux-armv6-$version"

		export GOARM=5
		ldflags="$origldflags -X main.GoArchExtra v5"
		build
		tarDist "syncthing-linux-armv5-$version"

		;;

	upload)
		tag=$(git describe)
		shopt -s nullglob
		for f in *.tar.gz *.zip *.asc ; do
			relup syncthing/syncthing "$tag" "$f"
		done
		go run build.go "$1"
		;;

	deps)
		deps
		go run build.go "$1"
		;;

	assets)
		assets
		;;

	setup)
		setup
		go run build.go "$1"
		;;

	xdr)
		xdr
		go run build.go "$1"
		;;

	translate)
		translate
		go run build.go "$1"
		;;

	transifex)
		transifex
		go run build.go "$1"
		;;

	noupgrade)
		go run build.go -no-upgrade tar
		;;

	all)
		go run build.go -goos linux -goarch amd64 tar
		go run build.go -goos linux -goarch 386 tar
		go run build.go -goos linux -goarch armv5 tar
		go run build.go -goos linux -goarch armv6 tar
		go run build.go -goos linux -goarch armv7 tar

		go run build.go -goos freebsd -goarch amd64 tar
		go run build.go -goos freebsd -goarch 386 tar

		go run build.go -goos darwin -goarch amd64 tar

		go run build.go -goos windows -goarch amd64 zip
		go run build.go -goos windows -goarch 386 zip
		;;

	setup)
		echo "Don't worry, just build."
		;;

	test-cov)
		ulimit -t 60 || true
		ulimit -d 512000 || true
		ulimit -m 512000 || true

		go get github.com/axw/gocov/gocov
		go get github.com/AlekSi/gocov-xml

		echo "mode: set" > coverage.out
		fail=0

		# For every package in the repo
		for dir in $(go list ./...) ; do
			# run the tests
			godep go test -coverprofile=profile.out $dir
			if [ -f profile.out ] ; then
				# and if there was test output, append it to coverage.out
				grep -v "mode: set" profile.out >> coverage.out
				rm profile.out
			fi
		done

		gocov convert coverage.out | gocov-xml > coverage.xml

		# This is usually run from within Jenkins. If it is, we need to
		# tweak the paths in coverage.xml so cobertura finds the
		# source.
		if [[ "${WORKSPACE:-default}" != "default" ]] ; then
			sed "s#$WORKSPACE##g" < coverage.xml > coverage.xml.new && mv coverage.xml.new coverage.xml
		fi
		;;

	*)
		echo "Unknown build parameter $1"
		echo "Unknown build command $1"
		;;
esac
29  check-contrib.sh  Executable file
@@ -0,0 +1,29 @@
#!/bin/bash

missing-contribs() {
    for email in $(git log --format=%ae master | grep -v jakob@nym.se | sort | uniq) ; do
        grep -q "$email" CONTRIBUTORS || echo $email
    done
}

no-docs-typos() {
    # Commits that are known to not change code
    grep -v f2459ef3319b2f060dbcdacd0c35a1788a94b8bd |\
    grep -v b61f418bf2d1f7d5a9d7088a20a2a448e5e66801 |\
    grep -v f0621207e3953711f9ab86d99724f1d0faac45b1 |\
    grep -v f1120d7aa936c0658429edef0037792520b46334
}

print-missing-contribs() {
    for email in $(missing-contribs) ; do
        git log --author="$email" --format="%H %ae %s" | no-docs-typos
    done
}

print-missing-copyright() {
    find . -name \*.go | xargs grep -L 'Copyright (C)' | grep -v Godeps
}

print-missing-contribs
print-missing-copyright

@@ -9,8 +9,8 @@ package main
import (
    "bytes"
    "compress/gzip"
    "encoding/base64"
    "flag"
    "fmt"
    "go/format"
    "io"
    "os"
@@ -23,27 +23,27 @@ var tpl = template.Must(template.New("assets").Parse(`package auto
import (
    "bytes"
    "compress/gzip"
    "encoding/hex"
    "encoding/base64"
    "io/ioutil"
)

var Assets = make(map[string][]byte)

func init() {
func Assets() map[string][]byte {
    var assets = make(map[string][]byte, {{.assets | len}})
    var bs []byte
    var gr *gzip.Reader
{{range $asset := .assets}}
    bs, _ = hex.DecodeString("{{$asset.HexData}}")
    bs, _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
    gr, _ = gzip.NewReader(bytes.NewBuffer(bs))
    bs, _ = ioutil.ReadAll(gr)
    Assets["{{$asset.Name}}"] = bs
    assets["{{$asset.Name}}"] = bs
{{end}}
    return assets
}
`))

type asset struct {
    Name    string
    HexData string
    Name string
    Data string
}

var assets []asset
@@ -69,8 +69,8 @@ func walkerFor(basePath string) filepath.WalkFunc {

    name, _ = filepath.Rel(basePath, name)
    assets = append(assets, asset{
        Name:    filepath.ToSlash(name),
        HexData: fmt.Sprintf("%x", buf.Bytes()),
        Name: filepath.ToSlash(name),
        Data: base64.StdEncoding.EncodeToString(buf.Bytes()),
    })
}


@@ -32,7 +32,8 @@ func main() {

    if *node == "" {
        log.Printf("*** Global index for repo %q", *repo)
        fs.WithGlobal(func(f protocol.FileInfo) bool {
        fs.WithGlobalTruncated(func(fi protocol.FileIntf) bool {
            f := fi.(protocol.FileInfoTruncated)
            fmt.Println(f)
            fmt.Println("\t", fs.Availability(f.Name))
            return true
@@ -43,7 +44,8 @@ func main() {
        log.Fatal(err)
    }
    log.Printf("*** Have index for repo %q node %q", *repo, n)
    fs.WithHave(n, func(f protocol.FileInfo) bool {
    fs.WithHaveTruncated(n, func(fi protocol.FileIntf) bool {
        f := fi.(protocol.FileInfoTruncated)
        fmt.Println(f)
        return true
    })

@@ -6,11 +6,11 @@ package main

import (
    "bytes"
    "crypto/tls"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "math/rand"
    "mime"
    "net"
@@ -24,7 +24,6 @@ import (
    "sync"
    "time"

    "crypto/tls"
    "code.google.com/p/go.crypto/bcrypt"
    "github.com/syncthing/syncthing/auto"
    "github.com/syncthing/syncthing/config"
@@ -45,7 +44,6 @@ var (
    configInSync = true
    guiErrors    = []guiError{}
    guiErrorsMut sync.Mutex
    static       func(http.ResponseWriter, *http.Request, *log.Logger)
    apiKey       string
    modt         = time.Now().UTC().Format(http.TimeFormat)
    eventSub     *events.BufferedSubscription
@@ -111,6 +109,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
    getRestMux.HandleFunc("/rest/system", restGetSystem)
    getRestMux.HandleFunc("/rest/upgrade", restGetUpgrade)
    getRestMux.HandleFunc("/rest/version", restGetVersion)
    getRestMux.HandleFunc("/rest/stats/node", withModel(m, restGetNodeStats))

    // Debug endpoints, not for general use
    getRestMux.HandleFunc("/rest/debug/peerCompletion", withModel(m, restGetPeerCompletion))
@@ -126,6 +125,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
    postRestMux.HandleFunc("/rest/restart", restPostRestart)
    postRestMux.HandleFunc("/rest/shutdown", restPostShutdown)
    postRestMux.HandleFunc("/rest/upgrade", restPostUpgrade)
    postRestMux.HandleFunc("/rest/scan", withModel(m, restPostScan))

    // A handler that splits requests between the two above and disables
    // caching
@@ -143,6 +143,9 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
    // protected, other requests will grant cookies.
    handler := csrfMiddleware("/rest", mux)

    // Add our version as a header to responses
    handler = withVersionMiddleware(handler)

    // Wrap everything in basic auth, if user/password is set.
    if len(cfg.User) > 0 {
        handler = basicAuthMiddleware(cfg.User, cfg.Password, handler)
@@ -172,6 +175,13 @@ func noCacheMiddleware(h http.Handler) http.Handler {
    })
}

func withVersionMiddleware(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("X-Syncthing-Version", Version)
        h.ServeHTTP(w, r)
    })
}

func withModel(m *model.Model, h func(m *model.Model, w http.ResponseWriter, r *http.Request)) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        h(m, w, r)
@@ -245,7 +255,7 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
func restPostOverride(m *model.Model, w http.ResponseWriter, r *http.Request) {
    var qs = r.URL.Query()
    var repo = qs.Get("repo")
    m.Override(repo)
    go m.Override(repo)
}

func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
@@ -264,6 +274,12 @@ func restGetConnections(m *model.Model, w http.ResponseWriter, r *http.Request)
    json.NewEncoder(w).Encode(res)
}

func restGetNodeStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
    var res = m.NodeStatistics()
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    json.NewEncoder(w).Encode(res)
}

func restGetConfig(w http.ResponseWriter, r *http.Request) {
    encCfg := cfg
    if encCfg.GUI.Password != "" {
@@ -277,7 +293,9 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
    var newCfg config.Configuration
    err := json.NewDecoder(r.Body).Decode(&newCfg)
    if err != nil {
        l.Warnln(err)
        l.Warnln("decoding posted config:", err)
        http.Error(w, err.Error(), 500)
        return
    } else {
        if newCfg.GUI.Password == "" {
            // Leave it empty
@@ -286,7 +304,9 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
        } else {
            hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
            if err != nil {
                l.Warnln(err)
                l.Warnln("bcrypting password:", err)
                http.Error(w, err.Error(), 500)
                return
            } else {
                newCfg.GUI.Password = string(hash)
            }
@@ -454,12 +474,18 @@ func restGetEvents(w http.ResponseWriter, r *http.Request) {
    since, _ := strconv.Atoi(sinceStr)
    limit, _ := strconv.Atoi(limitStr)

    w.Header().Set("Content-Type", "application/json; charset=utf-8")

    // Flush before blocking, to indicate that we've received the request
    // and that it should not be retried.
    f := w.(http.Flusher)
    f.Flush()

    evs := eventSub.Since(since, nil)
    if 0 < limit && limit < len(evs) {
        evs = evs[len(evs)-limit:]
    }

    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    json.NewEncoder(w).Encode(evs)
}

@@ -498,9 +524,8 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
    lang := r.Header.Get("Accept-Language")
    var langs []string
    for _, l := range strings.Split(lang, ",") {
        if len(l) >= 2 {
            langs = append(langs, l[:2])
        }
        parts := strings.SplitN(l, ";", 2)
        langs = append(langs, strings.ToLower(strings.TrimSpace(parts[0])))
    }
    w.Header().Set("Content-Type", "application/json; charset=utf-8")
    json.NewEncoder(w).Encode(langs)
@@ -509,15 +534,15 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
    rel, err := upgrade.LatestRelease(strings.Contains(Version, "-beta"))
    if err != nil {
        l.Warnln(err)
        l.Warnln("getting latest release:", err)
        http.Error(w, err.Error(), 500)
        return
    }

    if upgrade.CompareVersions(rel.Tag, Version) == 1 {
        err = upgrade.UpgradeTo(rel)
        err = upgrade.UpgradeTo(rel, GoArchExtra)
        if err != nil {
            l.Warnln(err)
            l.Warnln("upgrading:", err)
            http.Error(w, err.Error(), 500)
            return
        }
@@ -526,6 +551,16 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
    }
}

func restPostScan(m *model.Model, w http.ResponseWriter, r *http.Request) {
    qs := r.URL.Query()
    repo := qs.Get("repo")
    sub := qs.Get("sub")
    err := m.ScanRepoSub(repo, sub)
    if err != nil {
        http.Error(w, err.Error(), 500)
    }
}

func getQR(w http.ResponseWriter, r *http.Request) {
    var qs = r.URL.Query()
    var text = qs.Get("text")
@@ -615,6 +650,8 @@ func validAPIKey(k string) bool {
}

func embeddedStatic(assetDir string) http.Handler {
    assets := auto.Assets()

    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        file := r.URL.Path

@@ -635,13 +672,13 @@ func embeddedStatic(assetDir string) http.Handler {
        }
    }

    bs, ok := auto.Assets[file]
    bs, ok := assets[file]
    if !ok {
        http.NotFound(w, r)
        return
    }

    mtype := mime.TypeByExtension(filepath.Ext(r.URL.Path))
    mtype := mimeTypeForFile(file)
    if len(mtype) != 0 {
        w.Header().Set("Content-Type", mtype)
    }
@@ -651,3 +688,28 @@ func embeddedStatic(assetDir string) http.Handler {
    w.Write(bs)
    })
}

func mimeTypeForFile(file string) string {
    // We use a built in table of the common types since the system
    // TypeByExtension might be unreliable. But if we don't know, we delegate
    // to the system.
    ext := filepath.Ext(file)
    switch ext {
    case ".htm", ".html":
        return "text/html"
    case ".css":
        return "text/css"
    case ".js":
        return "application/javascript"
    case ".json":
        return "application/json"
    case ".png":
        return "image/png"
    case ".ttf":
        return "application/x-font-ttf"
    case ".woff":
        return "application/x-font-woff"
    default:
        return mime.TypeByExtension(ext)
    }
}
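
For illustration only, a minimal sketch of how the lookup above behaves, assuming it is compiled in the same package as mimeTypeForFile (printMimeTypes is a hypothetical helper and the file names are made up; fmt is already imported in this file). Known extensions resolve from the built-in table, anything else falls back to mime.TypeByExtension:

func printMimeTypes() {
    // "index.html" and "app.js" hit the built-in table; "notes.xyz" falls
    // back to the system MIME registry and may resolve to "" there.
    for _, f := range []string{"index.html", "app.js", "notes.xyz"} {
        fmt.Printf("%s -> %q\n", f, mimeTypeForFile(f))
    }
}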

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -79,7 +79,7 @@ func trackCPUUsage() {
    for _ = range time.NewTicker(time.Second).C {
        err := solarisPrusage(pid, &rusage)
        if err != nil {
            l.Warnln(err)
            l.Warnln("getting prusage:", err)
            continue
        }
        curTime := time.Now().UnixNano()

47  cmd/syncthing/heapprof.go  Normal file
@@ -0,0 +1,47 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
    "fmt"
    "os"
    "runtime"
    "runtime/pprof"
    "syscall"
    "time"
)

func init() {
    if innerProcess && os.Getenv("STHEAPPROFILE") != "" {
        l.Debugln("Starting heap profiling")
        go saveHeapProfiles()
    }
}

func saveHeapProfiles() {
    runtime.MemProfileRate = 1
    var memstats, prevMemstats runtime.MemStats

    t0 := time.Now()
    for t := range time.NewTicker(250 * time.Millisecond).C {
        startms := int(t.Sub(t0).Seconds() * 1000)
        runtime.ReadMemStats(&memstats)
        if memstats.HeapInuse > prevMemstats.HeapInuse {
            fd, err := os.Create(fmt.Sprintf("heap-%05d-%07d.pprof", syscall.Getpid(), startms))
            if err != nil {
                panic(err)
            }
            err = pprof.WriteHeapProfile(fd)
            if err != nil {
                panic(err)
            }
            err = fd.Close()
            if err != nil {
                panic(err)
            }
            prevMemstats = memstats
        }
    }
}
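
A brief usage note, inferred from the init gate above: heap profiling runs only in the inner process and only when STHEAPPROFILE is set to a non-empty value, producing a new heap-<pid>-<ms>.pprof file whenever HeapInuse exceeds its value at the previous write; the files should be readable with the usual go tool pprof workflow. Note also that runtime.MemProfileRate = 1 records every allocation, which is accurate but expensive, so this is a debugging facility rather than something to leave enabled.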
@@ -16,7 +16,6 @@ import (
    "net/http"
    _ "net/http/pprof"
    "os"
    "os/exec"
    "path/filepath"
    "regexp"
    "runtime"
@@ -26,10 +25,12 @@ import (
    "strings"
    "time"

    "code.google.com/p/go.crypto/bcrypt"
    "github.com/juju/ratelimit"
    "github.com/syncthing/syncthing/config"
    "github.com/syncthing/syncthing/discover"
    "github.com/syncthing/syncthing/events"
    "github.com/syncthing/syncthing/files"
    "github.com/syncthing/syncthing/logger"
    "github.com/syncthing/syncthing/model"
    "github.com/syncthing/syncthing/osutil"
@@ -37,6 +38,7 @@ import (
    "github.com/syncthing/syncthing/upgrade"
    "github.com/syncthing/syncthing/upnp"
    "github.com/syndtr/goleveldb/leveldb"
    "github.com/syndtr/goleveldb/leveldb/opt"
)

var (
@@ -47,9 +49,18 @@ var (
    BuildHost   = "unknown"
    BuildUser   = "unknown"
    LongVersion string
    GoArchExtra string // "", "v5", "v6", "v7"
)

const (
    exitSuccess            = 0
    exitError              = 1
    exitNoUpgradeAvailable = 2
    exitRestarting         = 3
)

var l = logger.DefaultLogger
var innerProcess = os.Getenv("STNORESTART") != ""

func init() {
    if Version != "unknown-dev" {
@@ -72,15 +83,15 @@ func init() {
}

var (
    cfg        config.Configuration
    myID       protocol.NodeID
    confDir    string
    logFlags   int = log.Ltime
    rateBucket *ratelimit.Bucket
    stop       = make(chan bool)
    discoverer *discover.Discoverer
    lockConn   *net.TCPListener
    lockPort   int
    cfg          config.Configuration
    myID         protocol.NodeID
    confDir      string
    logFlags     int = log.Ltime
    rateBucket   *ratelimit.Bucket
    stop         = make(chan int)
    discoverer   *discover.Discoverer
    externalPort int
    cert         tls.Certificate
)

const (
@@ -99,13 +110,19 @@ show time only (2).

The following environment variables are interpreted by syncthing:

STGUIADDRESS    Override GUI listen address set in config. Expects protocol type
                followed by hostname or an IP address, followed by a port, such
                as "https://127.0.0.1:8888".

STGUIAUTH       Override GUI authentication credentials set in config. Expects
                a colon separated username and password, such as "admin:secret".

STGUIAPIKEY     Override GUI API key set in config.

STNORESTART     Do not attempt to restart when requested to, instead just exit.
                Set this variable when running under a service manager such as
                runit, launchd, etc.

STPROFILER      Set to a listen address such as "127.0.0.1:9090" to start the
                profiler with HTTP access.

STTRACE         A comma separated string of facilities to trace. The valid
                facility strings:
                - "beacon" (the beacon package)
@@ -115,36 +132,57 @@ The following environment variables are interpreted by syncthing:
                - "net" (the main package; connections & network messages)
                - "model" (the model package)
                - "scanner" (the scanner package)
                - "stats" (the stats package)
                - "upnp" (the upnp package)
                - "xdr" (the xdr package)
                - "all" (all of the above)

STCPUPROFILE    Write CPU profile to the specified file.

STGUIASSETS     Directory to load GUI assets from. Overrides compiled in assets.

STDEADLOCKTIMEOUT Alter deadlock detection timeout (seconds; default 1200).`
STPROFILER      Set to a listen address such as "127.0.0.1:9090" to start the
                profiler with HTTP access.

STCPUPROFILE    Write a CPU profile to cpu-$pid.pprof on exit.

STHEAPPROFILE   Write heap profiles to heap-$pid-$timestamp.pprof each time
                heap usage increases.

STPERFSTATS     Write running performance statistics to perf-$pid.csv. Not
                supported on Windows.

GOMAXPROCS      Set the maximum number of CPU cores to use. Defaults to all
                available CPU cores.`
)

func init() {
    rand.Seed(time.Now().UnixNano())
}

// Command line options
var (
    reset             bool
    showVersion       bool
    doUpgrade         bool
    doUpgradeCheck    bool
    noBrowser         bool
    generateDir       string
    guiAddress        string
    guiAuthentication string
    guiAPIKey         string
)

func main() {
    var reset bool
    var showVersion bool
    var doUpgrade bool
    var doUpgradeCheck bool
    var generateDir string
    var noBrowser bool
    flag.StringVar(&confDir, "home", getDefaultConfDir(), "Set configuration directory")
    flag.BoolVar(&reset, "reset", false, "Prepare to resync from cluster")
    flag.BoolVar(&showVersion, "version", false, "Show version")
    flag.BoolVar(&doUpgrade, "upgrade", false, "Perform upgrade")
    flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
    flag.BoolVar(&noBrowser, "no-browser", false, "Do not start browser")
    flag.IntVar(&logFlags, "logflags", logFlags, "Set log flags")
    flag.StringVar(&generateDir, "generate", "", "Generate key in specified dir")
    flag.StringVar(&guiAddress, "gui-address", "", "Override GUI address")
    flag.StringVar(&guiAuthentication, "gui-authentication", "", "Override GUI authentication. Expects 'username:password'")
    flag.StringVar(&guiAPIKey, "gui-apikey", "", "Override GUI API key")
    flag.IntVar(&logFlags, "logflags", logFlags, "Set log flags")
    flag.Usage = usageFor(flag.CommandLine, usage, extraUsage)
    flag.Parse()

@@ -188,13 +226,13 @@ func main() {

    if upgrade.CompareVersions(rel.Tag, Version) <= 0 {
        l.Infof("No upgrade available (current %q >= latest %q).", Version, rel.Tag)
        os.Exit(2)
        os.Exit(exitNoUpgradeAvailable)
    }

    l.Infof("Upgrade available (current %q < latest %q)", Version, rel.Tag)

    if doUpgrade {
        err = upgrade.UpgradeTo(rel)
        err = upgrade.UpgradeTo(rel, GoArchExtra)
        if err != nil {
            l.Fatalln("Upgrade:", err) // exits 1
        }
@@ -205,12 +243,21 @@ func main() {
    }
}

    var err error
    lockPort, err = getLockPort()
    if err != nil {
        l.Fatalln("Opening lock port:", err)
    if reset {
        resetRepositories()
        return
    }

    if os.Getenv("STNORESTART") != "" {
        syncthingMain()
    } else {
        monitorMain()
    }
}

func syncthingMain() {
    var err error

    if len(os.Getenv("GOGC")) == 0 {
        debug.SetGCPercent(25)
    }
@@ -223,7 +270,7 @@ func main() {

    events.Default.Log(events.Starting, map[string]string{"home": confDir})

    if _, err := os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
    if _, err = os.Stat(confDir); err != nil && confDir == getDefaultConfDir() {
        // We are supposed to use the default configuration directory. It
        // doesn't exist. In the past our default has been ~/.syncthing, so if
        // that directory exists we move it to the new default location and
@@ -247,7 +294,7 @@ func main() {
    // Ensure that our home directory exists and that we have a certificate and key.

    ensureDir(confDir, 0700)
    cert, err := loadCert(confDir, "")
    cert, err = loadCert(confDir, "")
    if err != nil {
        newCertificate(confDir, "")
        cert, err = loadCert(confDir, "")
@@ -265,6 +312,8 @@ func main() {
    cfgFile := filepath.Join(confDir, "config.xml")
    go saveConfigLoop(cfgFile)

    var myName string

    // Load the configuration file, if it exists.
    // If it does not, create a template.

@@ -276,25 +325,31 @@ func main() {
        l.Fatalln(err)
    }
    cf.Close()
    myCfg := cfg.GetNodeConfiguration(myID)
    if myCfg == nil || myCfg.Name == "" {
        myName, _ = os.Hostname()
    } else {
        myName = myCfg.Name
    }
    } else {
        l.Infoln("No config file; starting with empty defaults")
        name, _ := os.Hostname()
        myName, _ = os.Hostname()
        defaultRepo := filepath.Join(getHomeDir(), "Sync")
        ensureDir(defaultRepo, 0755)

        cfg, err = config.Load(nil, myID)
        cfg.Repositories = []config.RepositoryConfiguration{
            {
                ID:        "default",
                Directory: defaultRepo,
                Nodes:     []config.NodeConfiguration{{NodeID: myID}},
                ID:              "default",
                Directory:       defaultRepo,
                RescanIntervalS: 60,
                Nodes:           []config.RepositoryNodeConfiguration{{NodeID: myID}},
            },
        }
        cfg.Nodes = []config.NodeConfiguration{
            {
                NodeID:    myID,
                Addresses: []string{"dynamic"},
                Name:      name,
                Name:      myName,
            },
        }

@@ -310,15 +365,6 @@ func main() {
        l.Infof("Edit %s to taste or use the GUI\n", cfgFile)
    }

    if reset {
        resetRepositories()
        return
    }

    if len(os.Getenv("STRESTART")) > 0 {
        waitForParentExit()
    }

    if profiler := os.Getenv("STPROFILER"); len(profiler) > 0 {
        go func() {
            l.Debugln("Starting profiler on", profiler)
@@ -353,11 +399,21 @@ func main() {
    // If this is the first time the user runs v0.9, archive the old indexes and config.
    archiveLegacyConfig()

    db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), nil)
    db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{MaxOpenFiles: 100})
    if err != nil {
        l.Fatalln("leveldb.OpenFile():", err)
        l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
    }
    m := model.NewModel(confDir, &cfg, "syncthing", Version, db)

    // Remove database entries for repos that no longer exist in the config
    repoMap := cfg.RepoMap()
    for _, repo := range files.ListRepos(db) {
        if _, ok := repoMap[repo]; !ok {
            l.Infof("Cleaning data for dropped repo %q", repo)
            files.DropRepo(db, repo)
        }
    }

    m := model.NewModel(confDir, &cfg, myName, "syncthing", Version, db)

nextRepo:
    for i, repo := range cfg.Repositories {
@@ -374,6 +430,7 @@ nextRepo:
        // that all files have been deleted which might not be the case,
        // so mark it as invalid instead.
        if err != nil || !fi.IsDir() {
            l.Warnf("Stopping repository %q - directory missing, but has files in index", repo.ID)
            cfg.Repositories[i].Invalid = "repo directory missing"
            continue nextRepo
        }
@@ -386,6 +443,7 @@ nextRepo:
        if err != nil {
            // If there was another error or we could not create the
            // directory, the repository is invalid.
l.Warnf("Stopping repository %q - %v", err)
            cfg.Repositories[i].Invalid = err.Error()
            continue nextRepo
        }
@@ -394,10 +452,13 @@ nextRepo:
    }

    // GUI
    if cfg.GUI.Enabled && cfg.GUI.Address != "" {
        addr, err := net.ResolveTCPAddr("tcp", cfg.GUI.Address)

    guiCfg := overrideGUIConfig(cfg.GUI, guiAddress, guiAuthentication, guiAPIKey)

    if guiCfg.Enabled && guiCfg.Address != "" {
        addr, err := net.ResolveTCPAddr("tcp", guiCfg.Address)
        if err != nil {
            l.Fatalf("Cannot start GUI on %q: %v", cfg.GUI.Address, err)
            l.Fatalf("Cannot start GUI on %q: %v", guiCfg.Address, err)
        } else {
            var hostOpen, hostShow string
            switch {
@@ -413,12 +474,12 @@ nextRepo:
            }

            var proto = "http"
            if cfg.GUI.UseTLS {
            if guiCfg.UseTLS {
                proto = "https"
            }

            l.Infof("Starting web GUI on %s://%s:%d/", proto, hostShow, addr.Port)
            err := startGUI(cfg.GUI, os.Getenv("STGUIASSETS"), m)
            l.Infof("Starting web GUI on %s://%s/", proto, net.JoinHostPort(hostShow, strconv.Itoa(addr.Port)))
            err := startGUI(guiCfg, os.Getenv("STGUIASSETS"), m)
            if err != nil {
                l.Fatalln("Cannot start GUI:", err)
            }
@@ -432,7 +493,13 @@ nextRepo:
    // start needing a bunch of files which are nowhere to be found. This
    // needs to be changed when we correctly do persistent indexes.
    for _, repoCfg := range cfg.Repositories {
        if repoCfg.Invalid != "" {
            continue
        }
        for _, node := range repoCfg.NodeIDs() {
            if node == myID {
                continue
            }
            m.Index(node, repoCfg.ID, nil)
        }
    }
@@ -469,11 +536,8 @@ nextRepo:

    // UPnP

    var externalPort = 0
    if cfg.Options.UPnPEnabled {
        // We seed the random number generator with the node ID to get a
        // repeatable sequence of random external ports.
        externalPort = setupUPnP(rand.NewSource(certSeed(cert.Certificate[0])))
        setupUPnP()
    }

    // Routine to connect out to configured nodes
@@ -497,7 +561,7 @@ nextRepo:
    }

    if cpuprof := os.Getenv("STCPUPROFILE"); len(cpuprof) > 0 {
        f, err := os.Create(cpuprof)
        f, err := os.Create(fmt.Sprintf("cpu-%d.pprof", os.Getpid()))
        if err != nil {
            log.Fatal(err)
        }
@@ -526,12 +590,15 @@ nextRepo:
        }()
    }

    go standbyMonitor()

    events.Default.Log(events.StartupComplete, nil)
    go generateEvents()

    <-stop
    code := <-stop

    l.Okln("Exiting")
    os.Exit(code)
}

func generateEvents() {
@@ -541,27 +608,7 @@ func generateEvents() {
    }
}

func waitForParentExit() {
    l.Infoln("Waiting for parent to exit...")
    lockPortStr := os.Getenv("STRESTART")
    lockPort, err := strconv.Atoi(lockPortStr)
    if err != nil {
        l.Warnln("Invalid lock port %q: %v", lockPortStr, err)
    }
    // Wait for the listen address to become free, indicating that the parent has exited.
    for {
        ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", lockPort))
        if err == nil {
            ln.Close()
            break
        }
        time.Sleep(250 * time.Millisecond)
    }
    l.Infoln("Continuing")
}

func setupUPnP(r rand.Source) int {
    var externalPort = 0
func setupUPnP() {
    if len(cfg.Options.ListenAddress) == 1 {
        _, portStr, err := net.SplitHostPort(cfg.Options.ListenAddress[0])
        if err != nil {
@@ -571,17 +618,11 @@ func setupUPnP(r rand.Source) int {
            port, _ := strconv.Atoi(portStr)
            igd, err := upnp.Discover()
            if err == nil {
                for i := 0; i < 10; i++ {
                    r := 1024 + int(r.Int63()%(65535-1024))
                    err := igd.AddPortMapping(upnp.TCP, r, port, "syncthing", 0)
                    if err == nil {
                        externalPort = r
                        l.Infoln("Created UPnP port mapping - external port", externalPort)
                        break
                    }
                }
                externalPort = setupExternalPort(igd, port)
                if externalPort == 0 {
                    l.Warnln("Failed to create UPnP port mapping")
                } else {
                    l.Infoln("Created UPnP port mapping - external port", externalPort)
                }
            } else {
                l.Infof("No UPnP gateway detected")
@@ -589,11 +630,60 @@ func setupUPnP(r rand.Source) int {
                l.Debugf("UPnP: %v", err)
            }
        }
        if cfg.Options.UPnPRenewal > 0 {
            go renewUPnP(port)
        }
    }
    } else {
        l.Warnln("Multiple listening addresses; not attempting UPnP port mapping")
    }
    return externalPort
}

func setupExternalPort(igd *upnp.IGD, port int) int {
    // We seed the random number generator with the node ID to get a
    // repeatable sequence of random external ports.
    rnd := rand.NewSource(certSeed(cert.Certificate[0]))
    for i := 0; i < 10; i++ {
        r := 1024 + int(rnd.Int63()%(65535-1024))
        err := igd.AddPortMapping(upnp.TCP, r, port, "syncthing", cfg.Options.UPnPLease*60)
        if err == nil {
            return r
        }
    }
    return 0
}
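
A side note on the repeatability the comment above mentions: seeding math/rand with a value derived from the node certificate makes the candidate sequence identical on every start, so a node tends to ask its gateway for the same external ports again. A self-contained sketch, where the fixed seed 42 is an arbitrary stand-in for certSeed(cert.Certificate[0]):

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    // Two simulated restarts with the same seed print the same candidates,
    // using the same 1024..65534 range computation as setupExternalPort.
    for run := 0; run < 2; run++ {
        rnd := rand.NewSource(42) // stand-in for certSeed(cert.Certificate[0])
        for i := 0; i < 3; i++ {
            fmt.Println("candidate port:", 1024+int(rnd.Int63()%(65535-1024)))
        }
    }
}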

func renewUPnP(port int) {
    for {
        time.Sleep(time.Duration(cfg.Options.UPnPRenewal) * time.Minute)

        igd, err := upnp.Discover()
        if err != nil {
            continue
        }

        // Just renew the same port that we already have
        if externalPort != 0 {
            err = igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", cfg.Options.UPnPLease*60)
            if err == nil {
                l.Infoln("Renewed UPnP port mapping - external port", externalPort)
                continue
            }
        }

        // Something strange has happened. We didn't have an external port before?
        // Or perhaps the gateway has changed?
        // Retry the same port sequence from the beginning.
        r := setupExternalPort(igd, port)
        if r != 0 {
            externalPort = r
            l.Infoln("Updated UPnP port mapping - external port", externalPort)
            discoverer.StopGlobal()
            discoverer.StartGlobal(cfg.Options.GlobalAnnServer, uint16(r))
            continue
        }
        l.Warnln("Failed to update UPnP port mapping - external port", externalPort)
    }
}

func resetRepositories() {
@@ -638,7 +728,7 @@ func archiveLegacyConfig() {
        l.Warnf("Cannot archive config:", err)
        return
    }
    defer src.Close()
    defer dst.Close()

    l.Infoln("Archiving config.xml")
    io.Copy(dst, src)
@@ -647,40 +737,12 @@ func archiveLegacyConfig() {

func restart() {
    l.Infoln("Restarting")
    if os.Getenv("SMF_FMRI") != "" || os.Getenv("STNORESTART") != "" {
        // Solaris SMF
        l.Infoln("Service manager detected; exit instead of restart")
        stop <- true
        return
    }

    env := os.Environ()
    newEnv := make([]string, 0, len(env))
    for _, s := range env {
        if !strings.HasPrefix(s, "STRESTART=") {
            newEnv = append(newEnv, s)
        }
    }
    newEnv = append(newEnv, fmt.Sprintf("STRESTART=%d", lockPort))

    pgm, err := exec.LookPath(os.Args[0])
    if err != nil {
        l.Warnln("Cannot restart:", err)
        return
    }
    proc, err := os.StartProcess(pgm, os.Args, &os.ProcAttr{
        Env:   newEnv,
        Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
    })
    if err != nil {
        l.Fatalln(err)
    }
    proc.Release()
    stop <- true
    stop <- exitRestarting
}

func shutdown() {
    stop <- true
    l.Infoln("Shutting down")
    stop <- exitSuccess
}

var saveConfigCh = make(chan struct{})
@@ -794,6 +856,10 @@ next:
        }
    }

    events.Default.Log(events.NodeRejected, map[string]string{
        "node":    remoteID.String(),
        "address": conn.RemoteAddr().String(),
    })
    l.Infof("Connection from %s with unknown node ID %s; ignoring", conn.RemoteAddr(), remoteID)
    conn.Close()
}
@@ -933,19 +999,15 @@ func setTCPOptions(conn *net.TCPConn) {
}

func discovery(extPort int) *discover.Discoverer {
    disc, err := discover.NewDiscoverer(myID, cfg.Options.ListenAddress, cfg.Options.LocalAnnPort)
    if err != nil {
        l.Warnf("No discovery possible (%v)", err)
        return nil
    }
    disc := discover.NewDiscoverer(myID, cfg.Options.ListenAddress)

    if cfg.Options.LocalAnnEnabled {
        l.Infoln("Sending local discovery announcements")
        disc.StartLocal()
        l.Infoln("Starting local discovery announcements")
        disc.StartLocal(cfg.Options.LocalAnnPort, cfg.Options.LocalAnnMCAddr)
    }

    if cfg.Options.GlobalAnnEnabled {
        l.Infoln("Sending global discovery announcements")
        l.Infoln("Starting global discovery announcements")
        disc.StartGlobal(cfg.Options.GlobalAnnServer, uint16(extPort))
    }

@@ -1034,12 +1096,63 @@ func getFreePort(host string, ports ...int) (int, error) {
    return addr.Port, nil
}

func getLockPort() (int, error) {
    var err error
    lockConn, err = net.ListenTCP("tcp", &net.TCPAddr{IP: net.IP{127, 0, 0, 1}})
    if err != nil {
        return 0, err
func overrideGUIConfig(originalCfg config.GUIConfiguration, address, authentication, apikey string) config.GUIConfiguration {
    // Make a copy of the config
    cfg := originalCfg

    if address == "" {
        address = os.Getenv("STGUIADDRESS")
    }

    if address != "" {
        cfg.Enabled = true

        addressParts := strings.SplitN(address, "://", 2)
        switch addressParts[0] {
        case "http":
            cfg.UseTLS = false
        case "https":
            cfg.UseTLS = true
        default:
            l.Fatalln("Unidentified protocol", addressParts[0])
        }
        cfg.Address = addressParts[1]
    }

    if authentication == "" {
        authentication = os.Getenv("STGUIAUTH")
    }

    if authentication != "" {
        authenticationParts := strings.SplitN(authentication, ":", 2)

        hash, err := bcrypt.GenerateFromPassword([]byte(authenticationParts[1]), 0)
        if err != nil {
            l.Fatalln("Invalid GUI password:", err)
        }

        cfg.User = authenticationParts[0]
        cfg.Password = string(hash)
    }

    if apikey == "" {
        apikey = os.Getenv("STGUIAPIKEY")
    }

    if apikey != "" {
        cfg.APIKey = apikey
    }
    return cfg
}
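
A sketch of how the override chain above might be exercised; the argument values are examples only. Per the code, a non-empty argument wins outright, an empty one falls back to the corresponding STGUI* environment variable, and if both are empty the stored config passes through unchanged:

// Hypothetical call site; values are illustrative.
guiCfg := overrideGUIConfig(cfg.GUI, "https://127.0.0.1:8888", "admin:secret", "abc123")
// Afterwards: guiCfg.UseTLS == true, guiCfg.Address == "127.0.0.1:8888",
// guiCfg.User == "admin", guiCfg.Password is a bcrypt hash of "secret",
// and guiCfg.APIKey == "abc123".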

func standbyMonitor() {
    now := time.Now()
    for {
        time.Sleep(10 * time.Second)
        if time.Since(now) > 2*time.Minute {
            l.Infoln("Paused state detected, possibly woke up from standby.")
            restart()
        }
        now = time.Now()
    }
    addr := lockConn.Addr().(*net.TCPAddr)
    return addr.Port, nil
}

7  cmd/syncthing/main_test.go  Normal file
@@ -0,0 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main_test

// Empty test file to generate 0% coverage rather than no coverage

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build solaris

package main

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build freebsd

package main

@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

167  cmd/syncthing/monitor.go  Normal file
@@ -0,0 +1,167 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (
    "bufio"
    "io"
    "os"
    "os/exec"
    "os/signal"
    "path/filepath"
    "strings"
    "sync"
    "syscall"
    "time"
)

var (
    stdoutFirstLines []string // The first 10 lines of stdout
    stdoutLastLines  []string // The last 50 lines of stdout
    stdoutMut        sync.Mutex
)

const (
    countRestarts = 5
    loopThreshold = 15 * time.Second
)

func monitorMain() {
    os.Setenv("STNORESTART", "yes")
    l.SetPrefix("[monitor] ")

    args := os.Args
    var restarts [countRestarts]time.Time

    sign := make(chan os.Signal, 1)
    sigTerm := syscall.Signal(0xf)
    signal.Notify(sign, os.Interrupt, sigTerm, os.Kill)

    for {
        if t := time.Since(restarts[0]); t < loopThreshold {
            l.Warnf("%d restarts in %v; not retrying further", countRestarts, t)
            os.Exit(exitError)
        }

        copy(restarts[0:], restarts[1:])
        restarts[len(restarts)-1] = time.Now()

        cmd := exec.Command(args[0], args[1:]...)

        stderr, err := cmd.StderrPipe()
        if err != nil {
            l.Fatalln(err)
        }

        stdout, err := cmd.StdoutPipe()
        if err != nil {
            l.Fatalln(err)
        }

        l.Infoln("Starting syncthing")
        err = cmd.Start()
        if err != nil {
            l.Fatalln(err)
        }

        stdoutMut.Lock()
        stdoutFirstLines = make([]string, 0, 10)
        stdoutLastLines = make([]string, 0, 50)
        stdoutMut.Unlock()

        go copyStderr(stderr)
        go copyStdout(stdout)

        exit := make(chan error)

        go func() {
            exit <- cmd.Wait()
        }()

        select {
        case s := <-sign:
            l.Infof("Signal %d received; exiting", s)
            cmd.Process.Kill()
            <-exit
            return
        case <-exit:
            if err == nil {
                // Successful exit indicates an intentional shutdown
                return
            }
        }

        l.Infoln("Syncthing exited:", err)
        time.Sleep(1 * time.Second)
    }
}
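
The crash-loop guard at the top of monitorMain keeps the timestamps of the last five starts in a fixed array and gives up when all five fall within fifteen seconds. A self-contained sketch of just that sliding-window logic, with the constants copied from the diff:

package main

import (
    "fmt"
    "time"
)

const (
    countRestarts = 5
    loopThreshold = 15 * time.Second
)

func main() {
    var restarts [countRestarts]time.Time
    for i := 0; ; i++ {
        // restarts[0] is the oldest recorded start; if even the oldest is
        // recent, the process has been restarting too fast.
        if t := time.Since(restarts[0]); t < loopThreshold {
            fmt.Printf("%d restarts in %v; not retrying further\n", countRestarts, t)
            return
        }
        copy(restarts[0:], restarts[1:]) // slide the window
        restarts[len(restarts)-1] = time.Now()
        fmt.Println("start attempt", i)
    }
}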

func copyStderr(stderr io.ReadCloser) {
    br := bufio.NewReader(stderr)

    var panicFd *os.File
    for {
        line, err := br.ReadString('\n')
        if err != nil {
            if err != io.EOF {
                l.Warnln("stderr:", err)
            }
            return
        }

        if panicFd == nil {
            os.Stderr.WriteString(line)

            if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
                panicFd, err = os.Create(filepath.Join(confDir, time.Now().Format("panic-20060102-150405.log")))
                if err != nil {
                    l.Warnln("Create panic log:", err)
                    continue
                }

                l.Warnf("Panic detected, writing to \"%s\"", panicFd.Name())
l.Warnln("Please create an issue at https://github.com/syncting/syncthing/issues/ with the panic log attached")

                stdoutMut.Lock()
                for _, line := range stdoutFirstLines {
                    panicFd.WriteString(line)
                }
                panicFd.WriteString("...\n")
                for _, line := range stdoutLastLines {
                    panicFd.WriteString(line)
                }
            }
        }

        if panicFd != nil {
            panicFd.WriteString(line)
        }
    }
}

func copyStdout(stderr io.ReadCloser) {
    br := bufio.NewReader(stderr)
    for {
        line, err := br.ReadString('\n')
        if err != nil {
            if err != io.EOF {
                l.Warnln("stdout:", err)
            }
            return
        }

        stdoutMut.Lock()
        if len(stdoutFirstLines) < cap(stdoutFirstLines) {
            stdoutFirstLines = append(stdoutFirstLines, line)
        }
        if l := len(stdoutLastLines); l == cap(stdoutLastLines) {
            stdoutLastLines = stdoutLastLines[:l-1]
        }
        stdoutLastLines = append(stdoutLastLines, line)
        stdoutMut.Unlock()

        os.Stdout.WriteString(line)
    }
}
@@ -1,4 +1,8 @@
// +build perfstats
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build !solaris,!windows

package main

@@ -11,7 +15,9 @@ import (
)

func init() {
    go savePerfStats(fmt.Sprintf("perfstats-%d.csv", syscall.Getpid()))
    if innerProcess && os.Getenv("STPERFSTATS") != "" {
        go savePerfStats(fmt.Sprintf("perfstats-%d.csv", syscall.Getpid()))
    }
}

func savePerfStats(file string) {
@@ -40,6 +46,6 @@ func savePerfStats(file string) {

    startms := int(t.Sub(t0).Seconds() * 1000)

    fmt.Fprintf(fd, "%d\t%f\t%d\t%d\n", startms, cpuUsagePercent, memstats.Alloc, memstats.Sys)
    fmt.Fprintf(fd, "%d\t%f\t%d\t%d\n", startms, cpuUsagePercent, memstats.Alloc, memstats.Sys-memstats.HeapReleased)
}
}
@@ -1,3 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package main

import (

@@ -51,21 +51,21 @@ func main() {

    var langs []string
    for code, stat := range stats {
        shortCode := code[:2]
        if !curValidLangs[shortCode] {
        code = strings.Replace(code, "_", "-", 1)
        if !curValidLangs[code] {
            if pct := 100 * stat.Translated / (stat.Translated + stat.Untranslated); pct < 95 {
                log.Printf("Skipping language %q (too low completion ratio %d%%)", shortCode, pct)
                os.Remove("lang-" + shortCode + ".json")
                log.Printf("Skipping language %q (too low completion ratio %d%%)", code, pct)
                os.Remove("lang-" + code + ".json")
                continue
            }
        }

        langs = append(langs, shortCode)
        if shortCode == "en" {
        langs = append(langs, code)
        if code == "en" {
            continue
        }

        log.Printf("Updating language %q", shortCode)
        log.Printf("Updating language %q", code)

        resp := req("https://www.transifex.com/api/2/project/syncthing/resource/gui/translation/" + code)
        var t translation
@@ -75,7 +75,7 @@ func main() {
        }
        resp.Body.Close()

        fd, err := os.Create("lang-" + shortCode + ".json")
        fd, err := os.Create("lang-" + code + ".json")
        if err != nil {
            log.Fatal(err)
        }
@@ -130,7 +130,7 @@ func loadValidLangs() []string {
    }

    var langs []string
    exp := regexp.MustCompile(`\[([a-z",]+)\]`)
    exp := regexp.MustCompile(`\[([a-zA-Z",-]+)\]`)
    if matches := exp.FindSubmatch(bs); len(matches) == 2 {
        langs = strings.Split(string(matches[1]), ",")
        for i := range langs {

@@ -81,10 +81,16 @@ func main() {
    }
    fd.Close()

    doc, err := html.Parse(os.Stdin)
    fd, err = os.Open(os.Args[2])
    if err != nil {
        log.Fatal(err)
    }
    doc, err := html.Parse(fd)
    if err != nil {
        log.Fatal(err)
    }
    fd.Close()

    generalNode(doc)
    bs, err := json.MarshalIndent(trans, "", " ")
    if err != nil {

124  config/config.go
@@ -31,13 +31,14 @@ type Configuration struct {
}

type RepositoryConfiguration struct {
    ID          string                  `xml:"id,attr"`
    Directory   string                  `xml:"directory,attr"`
    Nodes       []NodeConfiguration     `xml:"node"`
    ReadOnly    bool                    `xml:"ro,attr"`
    IgnorePerms bool                    `xml:"ignorePerms,attr"`
    Invalid     string                  `xml:"-"` // Set at runtime when there is an error, not saved
    Versioning  VersioningConfiguration `xml:"versioning"`
    ID              string                        `xml:"id,attr"`
    Directory       string                        `xml:"directory,attr"`
    Nodes           []RepositoryNodeConfiguration `xml:"node"`
    ReadOnly        bool                          `xml:"ro,attr"`
    RescanIntervalS int                           `xml:"rescanIntervalS,attr" default:"60"`
    IgnorePerms     bool                          `xml:"ignorePerms,attr"`
    Invalid         string                        `xml:"-"` // Set at runtime when there is an error, not saved
    Versioning      VersioningConfiguration       `xml:"versioning"`

    nodeIDs []protocol.NodeID
}
@@ -100,26 +101,35 @@ type NodeConfiguration struct {
    CertName string `xml:"certName,attr,omitempty"`
}

type RepositoryNodeConfiguration struct {
    NodeID protocol.NodeID `xml:"id,attr"`

    Deprecated_Name      string   `xml:"name,attr,omitempty" json:"-"`
    Deprecated_Addresses []string `xml:"address,omitempty" json:"-"`
}

type OptionsConfiguration struct {
    ListenAddress      []string `xml:"listenAddress" default:"0.0.0.0:22000"`
    GlobalAnnServer    string   `xml:"globalAnnounceServer" default:"announce.syncthing.net:22026"`
    GlobalAnnEnabled   bool     `xml:"globalAnnounceEnabled" default:"true"`
    LocalAnnEnabled    bool     `xml:"localAnnounceEnabled" default:"true"`
    LocalAnnPort       int      `xml:"localAnnouncePort" default:"21025"`
    LocalAnnMCAddr     string   `xml:"localAnnounceMCAddr" default:"[ff32::5222]:21026"`
    ParallelRequests   int      `xml:"parallelRequests" default:"16"`
    MaxSendKbps        int      `xml:"maxSendKbps"`
    RescanIntervalS    int      `xml:"rescanIntervalS" default:"60"`
    ReconnectIntervalS int      `xml:"reconnectionIntervalS" default:"60"`
    MaxChangeKbps      int      `xml:"maxChangeKbps" default:"10000"`
    StartBrowser       bool     `xml:"startBrowser" default:"true"`
    UPnPEnabled        bool     `xml:"upnpEnabled" default:"true"`
    UPnPLease          int      `xml:"upnpLeaseMinutes" default:"0"`
    UPnPRenewal        int      `xml:"upnpRenewalMinutes" default:"30"`
    URAccepted         int      `xml:"urAccepted"` // Accepted usage reporting version; 0 for off (undecided), -1 for off (permanently)

    Deprecated_UREnabled       bool   `xml:"urEnabled,omitempty" json:"-"`
    Deprecated_URDeclined      bool   `xml:"urDeclined,omitempty" json:"-"`
    Deprecated_ReadOnly        bool   `xml:"readOnly,omitempty" json:"-"`
    Deprecated_GUIEnabled      bool   `xml:"guiEnabled,omitempty" json:"-"`
    Deprecated_GUIAddress      string `xml:"guiAddress,omitempty" json:"-"`
    Deprecated_RescanIntervalS int    `xml:"rescanIntervalS,omitempty" json:"-"`
    Deprecated_UREnabled       bool   `xml:"urEnabled,omitempty" json:"-"`
    Deprecated_URDeclined      bool   `xml:"urDeclined,omitempty" json:"-"`
    Deprecated_ReadOnly        bool   `xml:"readOnly,omitempty" json:"-"`
    Deprecated_GUIEnabled      bool   `xml:"guiEnabled,omitempty" json:"-"`
    Deprecated_GUIAddress      string `xml:"guiAddress,omitempty" json:"-"`
}

type GUIConfiguration struct {
@@ -139,6 +149,15 @@ func (cfg *Configuration) NodeMap() map[protocol.NodeID]NodeConfiguration {
    return m
}

func (cfg *Configuration) GetNodeConfiguration(nodeid protocol.NodeID) *NodeConfiguration {
    for i, node := range cfg.Nodes {
        if node.NodeID == nodeid {
            return &cfg.Nodes[i]
        }
    }
    return nil
}

func (cfg *Configuration) RepoMap() map[string]RepositoryConfiguration {
    m := make(map[string]RepositoryConfiguration, len(cfg.Repositories))
    for _, r := range cfg.Repositories {
@@ -301,11 +320,16 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
        convertV2V3(&cfg)
    }

    // Upgrade to v4 configuration if appropriate
    if cfg.Version == 3 {
        convertV3V4(&cfg)
    }

    // Hash old cleartext passwords
    if len(cfg.GUI.Password) > 0 && cfg.GUI.Password[0] != '$' {
        hash, err := bcrypt.GenerateFromPassword([]byte(cfg.GUI.Password), 0)
        if err != nil {
            l.Warnln(err)
            l.Warnln("bcrypting password:", err)
        } else {
            cfg.GUI.Password = string(hash)
        }
@@ -319,15 +343,22 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
    }

    // Ensure this node is present in all relevant places
    me := cfg.GetNodeConfiguration(myID)
    if me == nil {
        myName, _ := os.Hostname()
        cfg.Nodes = append(cfg.Nodes, NodeConfiguration{
            NodeID: myID,
            Name:   myName,
        })
    }
    sort.Sort(NodeConfigurationList(cfg.Nodes))
    // Ensure that any loose nodes are not present in the wrong places
    // Ensure that there are no duplicate nodes
    cfg.Nodes = ensureNodePresent(cfg.Nodes, myID)
    sort.Sort(NodeConfigurationList(cfg.Nodes))
    for i := range cfg.Repositories {
        cfg.Repositories[i].Nodes = ensureNodePresent(cfg.Repositories[i].Nodes, myID)
        cfg.Repositories[i].Nodes = ensureExistingNodes(cfg.Repositories[i].Nodes, existingNodes)
        cfg.Repositories[i].Nodes = ensureNoDuplicates(cfg.Repositories[i].Nodes)
        sort.Sort(NodeConfigurationList(cfg.Repositories[i].Nodes))
        sort.Sort(RepositoryNodeConfigurationList(cfg.Repositories[i].Nodes))
    }

    // An empty address list is equivalent to a single "dynamic" entry
@@ -341,6 +372,31 @@ func Load(rd io.Reader, myID protocol.NodeID) (Configuration, error) {
    return cfg, err
}

func convertV3V4(cfg *Configuration) {
    // In previous versions, the rescan interval was common to all repositories.
    // From now on it can be set independently. We have to make sure that after
    // the upgrade an individual rescan interval is defined for every existing
    // repository.
    for i := range cfg.Repositories {
        cfg.Repositories[i].RescanIntervalS = cfg.Options.Deprecated_RescanIntervalS
    }

    cfg.Options.Deprecated_RescanIntervalS = 0

    // In previous versions, repositories held full node configurations.
    // Since that's the only place where node configs were in V1, we still have
    // to define the deprecated fields to be able to upgrade from V1 to V4.
    for i, repo := range cfg.Repositories {

        for j := range repo.Nodes {
            rncfg := cfg.Repositories[i].Nodes[j]
            rncfg.Deprecated_Name = ""
            rncfg.Deprecated_Addresses = nil
        }
    }

    cfg.Version = 4
}

func convertV2V3(cfg *Configuration) {
    // In previous versions, compression was always on. When upgrading, enable
    // compression on all existing nodes. New nodes will get compression on by
@@ -362,7 +418,7 @@ func convertV1V2(cfg *Configuration) {
    // Collect the list of nodes.
    // Replace node configs inside repositories with only a reference to the node ID.
|
||||
// Set all repositories to read only if the global read only flag is set.
|
||||
var nodes = map[string]NodeConfiguration{}
|
||||
var nodes = map[string]RepositoryNodeConfiguration{}
|
||||
for i, repo := range cfg.Repositories {
|
||||
cfg.Repositories[i].ReadOnly = cfg.Options.Deprecated_ReadOnly
|
||||
for j, node := range repo.Nodes {
|
||||
@@ -370,14 +426,18 @@ func convertV1V2(cfg *Configuration) {
|
||||
if _, ok := nodes[id]; !ok {
|
||||
nodes[id] = node
|
||||
}
|
||||
cfg.Repositories[i].Nodes[j] = NodeConfiguration{NodeID: node.NodeID}
|
||||
cfg.Repositories[i].Nodes[j] = RepositoryNodeConfiguration{NodeID: node.NodeID}
|
||||
}
|
||||
}
|
||||
cfg.Options.Deprecated_ReadOnly = false
|
||||
|
||||
// Set and sort the list of nodes.
|
||||
for _, node := range nodes {
|
||||
cfg.Nodes = append(cfg.Nodes, node)
|
||||
cfg.Nodes = append(cfg.Nodes, NodeConfiguration{
|
||||
NodeID: node.NodeID,
|
||||
Name: node.Deprecated_Name,
|
||||
Addresses: node.Deprecated_Addresses,
|
||||
})
|
||||
}
|
||||
sort.Sort(NodeConfigurationList(cfg.Nodes))
|
||||
|
||||
@@ -402,23 +462,33 @@ func (l NodeConfigurationList) Len() int {
	return len(l)
}

func ensureNodePresent(nodes []NodeConfiguration, myID protocol.NodeID) []NodeConfiguration {
type RepositoryNodeConfigurationList []RepositoryNodeConfiguration

func (l RepositoryNodeConfigurationList) Less(a, b int) bool {
	return l[a].NodeID.Compare(l[b].NodeID) == -1
}
func (l RepositoryNodeConfigurationList) Swap(a, b int) {
	l[a], l[b] = l[b], l[a]
}
func (l RepositoryNodeConfigurationList) Len() int {
	return len(l)
}
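
RepositoryNodeConfigurationList exists only to satisfy sort.Interface so the node slices can be ordered by node ID. A standalone illustration of the same pattern, on a toy type rather than the ones from this commit:

package main

import (
	"fmt"
	"sort"
)

type byID []int

// The three methods sort.Sort requires.
func (l byID) Len() int           { return len(l) }
func (l byID) Swap(a, b int)      { l[a], l[b] = l[b], l[a] }
func (l byID) Less(a, b int) bool { return l[a] < l[b] }

func main() {
	ids := byID{3, 1, 2}
	sort.Sort(ids)   // same call shape as sort.Sort(RepositoryNodeConfigurationList(...))
	fmt.Println(ids) // [1 2 3]
}
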

func ensureNodePresent(nodes []RepositoryNodeConfiguration, myID protocol.NodeID) []RepositoryNodeConfiguration {
	for _, node := range nodes {
		if node.NodeID.Equals(myID) {
			return nodes
		}
	}

	name, _ := os.Hostname()
	nodes = append(nodes, NodeConfiguration{
	nodes = append(nodes, RepositoryNodeConfiguration{
		NodeID: myID,
		Name:   name,
	})

	return nodes
}

func ensureExistingNodes(nodes []NodeConfiguration, existingNodes map[protocol.NodeID]bool) []NodeConfiguration {
func ensureExistingNodes(nodes []RepositoryNodeConfiguration, existingNodes map[protocol.NodeID]bool) []RepositoryNodeConfiguration {
	count := len(nodes)
	i := 0
loop:
@@ -433,7 +503,7 @@ loop:
	return nodes[0:count]
}

func ensureNoDuplicates(nodes []NodeConfiguration) []NodeConfiguration {
func ensureNoDuplicates(nodes []RepositoryNodeConfiguration) []RepositoryNodeConfiguration {
	count := len(nodes)
	i := 0
	seenNodes := make(map[protocol.NodeID]bool)

@@ -30,13 +30,14 @@ func TestDefaultValues(t *testing.T) {
		GlobalAnnEnabled:   true,
		LocalAnnEnabled:    true,
		LocalAnnPort:       21025,
		LocalAnnMCAddr:     "[ff32::5222]:21026",
		ParallelRequests:   16,
		MaxSendKbps:        0,
		RescanIntervalS:    60,
		ReconnectIntervalS: 60,
		MaxChangeKbps:      10000,
		StartBrowser:       true,
		UPnPEnabled:        true,
		UPnPLease:          0,
		UPnPRenewal:        30,
	}

	cfg, err := Load(bytes.NewReader(nil), node1)
@@ -68,6 +69,7 @@ func TestNodeConfig(t *testing.T) {
	</repository>
	<options>
		<readOnly>true</readOnly>
		<rescanIntervalS>600</rescanIntervalS>
	</options>
</configuration>
`)
@@ -88,6 +90,9 @@ func TestNodeConfig(t *testing.T) {
	<node id="P56IOI7MZJNU2IQGDREYDM2MGTMGL3BXNPQ6W5BTBBZ4TJXZWICQ" name="node two">
		<address>b</address>
	</node>
	<options>
		<rescanIntervalS>600</rescanIntervalS>
	</options>
</configuration>
`)

@@ -103,9 +108,26 @@ func TestNodeConfig(t *testing.T) {
	<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="true">
		<address>b</address>
	</node>
	<options>
		<rescanIntervalS>600</rescanIntervalS>
	</options>
</configuration>`)

	for i, data := range [][]byte{v1data, v2data, v3data} {
	v4data := []byte(`
<configuration version="4">
	<repository id="test" directory="~/Sync" ro="true" ignorePerms="false" rescanIntervalS="600">
		<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR"></node>
		<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2"></node>
	</repository>
	<node id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" name="node one" compression="true">
		<address>a</address>
	</node>
	<node id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="true">
		<address>b</address>
	</node>
</configuration>`)

	for i, data := range [][]byte{v1data, v2data, v3data, v4data} {
		cfg, err := Load(bytes.NewReader(data), node1)
		if err != nil {
			t.Error(err)
@@ -113,10 +135,11 @@ func TestNodeConfig(t *testing.T) {

		expectedRepos := []RepositoryConfiguration{
			{
				ID:        "test",
				Directory: "~/Sync",
				Nodes:     []NodeConfiguration{{NodeID: node1}, {NodeID: node4}},
				ReadOnly:  true,
				ID:              "test",
				Directory:       "~/Sync",
				Nodes:           []RepositoryNodeConfiguration{{NodeID: node1}, {NodeID: node4}},
				ReadOnly:        true,
				RescanIntervalS: 600,
			},
		}
		expectedNodes := []NodeConfiguration{
@@ -135,7 +158,7 @@ func TestNodeConfig(t *testing.T) {
		}
		expectedNodeIDs := []protocol.NodeID{node1, node4}

		if cfg.Version != 3 {
		if cfg.Version != 4 {
			t.Errorf("%d: Incorrect version %d != 3", i, cfg.Version)
		}
		if !reflect.DeepEqual(cfg.Repositories, expectedRepos) {
@@ -185,13 +208,14 @@ func TestOverriddenValues(t *testing.T) {
		<globalAnnounceEnabled>false</globalAnnounceEnabled>
		<localAnnounceEnabled>false</localAnnounceEnabled>
		<localAnnouncePort>42123</localAnnouncePort>
		<localAnnounceMCAddr>quux:3232</localAnnounceMCAddr>
		<parallelRequests>32</parallelRequests>
		<maxSendKbps>1234</maxSendKbps>
		<rescanIntervalS>600</rescanIntervalS>
		<reconnectionIntervalS>6000</reconnectionIntervalS>
		<maxChangeKbps>2345</maxChangeKbps>
		<startBrowser>false</startBrowser>
		<upnpEnabled>false</upnpEnabled>
		<upnpLeaseMinutes>60</upnpLeaseMinutes>
		<upnpRenewalMinutes>15</upnpRenewalMinutes>
	</options>
</configuration>
`)
@@ -202,13 +226,14 @@ func TestOverriddenValues(t *testing.T) {
		GlobalAnnEnabled:   false,
		LocalAnnEnabled:    false,
		LocalAnnPort:       42123,
		LocalAnnMCAddr:     "quux:3232",
		ParallelRequests:   32,
		MaxSendKbps:        1234,
		RescanIntervalS:    600,
		ReconnectIntervalS: 6000,
		MaxChangeKbps:      2345,
		StartBrowser:       false,
		UPnPEnabled:        false,
		UPnPLease:          60,
		UPnPRenewal:        15,
	}

	cfg, err := Load(bytes.NewReader(data), node1)

@@ -8,9 +8,9 @@ import (
	"bytes"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"net"
	"strconv"
	"sync"
	"time"

@@ -24,57 +24,88 @@ type Discoverer struct {
	listenAddrs      []string
	localBcastIntv   time.Duration
	globalBcastIntv  time.Duration
	beacon           *beacon.Beacon
	registry         map[protocol.NodeID][]string
	errorRetryIntv   time.Duration
	cacheLifetime    time.Duration
	broadcastBeacon  beacon.Interface
	multicastBeacon  beacon.Interface
	registry         map[protocol.NodeID][]cacheEntry
	registryLock     sync.RWMutex
	extServer        string
	extPort          uint16
	localBcastTick   <-chan time.Time
	stopGlobal       chan struct{}
	globalWG         sync.WaitGroup
	forcedBcastTick  chan time.Time
	extAnnounceOK    bool
	extAnnounceOKmut sync.Mutex
}

type cacheEntry struct {
	addr string
	seen time.Time
}

var (
	ErrIncorrectMagic = errors.New("incorrect magic number")
)

// We tolerate a certain amount of errors because we might be running on
// laptops that sleep and wake, have intermittent network connectivity, etc.
// When we hit this many errors in succession, we stop.
const maxErrors = 30

func NewDiscoverer(id protocol.NodeID, addresses []string, localPort int) (*Discoverer, error) {
	b, err := beacon.New(localPort)
	if err != nil {
		return nil, err
	}
	disc := &Discoverer{
func NewDiscoverer(id protocol.NodeID, addresses []string) *Discoverer {
	return &Discoverer{
		myID:            id,
		listenAddrs:     addresses,
		localBcastIntv:  30 * time.Second,
		globalBcastIntv: 1800 * time.Second,
		beacon:          b,
		registry:        make(map[protocol.NodeID][]string),
		errorRetryIntv:  60 * time.Second,
		cacheLifetime:   5 * time.Minute,
		registry:        make(map[protocol.NodeID][]cacheEntry),
	}

	go disc.recvAnnouncements()

	return disc, nil
}

func (d *Discoverer) StartLocal() {
	d.localBcastTick = time.Tick(d.localBcastIntv)
	d.forcedBcastTick = make(chan time.Time)
	go d.sendLocalAnnouncements()
func (d *Discoverer) StartLocal(localPort int, localMCAddr string) {
	if localPort > 0 {
		bb, err := beacon.NewBroadcast(localPort)
		if err != nil {
			l.Infof("No IPv4 discovery possible (%v)", err)
		} else {
			d.broadcastBeacon = bb
			go d.recvAnnouncements(bb)
		}
	}

	if len(localMCAddr) > 0 {
		mb, err := beacon.NewMulticast(localMCAddr)
		if err != nil {
			l.Infof("No IPv6 discovery possible (%v)", err)
		} else {
			d.multicastBeacon = mb
			go d.recvAnnouncements(mb)
		}
	}

	if d.broadcastBeacon == nil && d.multicastBeacon == nil {
		l.Warnln("No local discovery method available")
	} else {
		d.localBcastTick = time.Tick(d.localBcastIntv)
		d.forcedBcastTick = make(chan time.Time)
		go d.sendLocalAnnouncements()
	}
}
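
With this change, construction and local startup become separate steps. A hedged call-shape sketch using only the signatures visible in this diff; myID is a placeholder, the port and multicast address are the defaults from the config tests above, and the discovery server name is hypothetical:

	// Not a verbatim caller from the tree — just the call shape implied here.
	disc := discover.NewDiscoverer(myID, []string{":22000"})
	disc.StartLocal(21025, "[ff32::5222]:21026") // IPv4 broadcast + IPv6 multicast
	disc.StartGlobal("announce.example.com:22026", 22000)
	defer disc.StopGlobal()
	addrs := disc.Lookup(someNodeID) // cache hit, or external lookup fallback
	_ = addrs
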

func (d *Discoverer) StartGlobal(server string, extPort uint16) {
	// Wait for any previous announcer to stop before starting a new one.
	d.globalWG.Wait()
	d.extServer = server
	d.extPort = extPort
	d.stopGlobal = make(chan struct{})
	d.globalWG.Add(1)
	go d.sendExternalAnnouncements()
}

func (d *Discoverer) StopGlobal() {
	close(d.stopGlobal)
	d.globalWG.Wait()
}

func (d *Discoverer) ExtAnnounceOK() bool {
	d.extAnnounceOKmut.Lock()
	defer d.extAnnounceOKmut.Unlock()
@@ -83,14 +114,28 @@ func (d *Discoverer) ExtAnnounceOK() bool {

func (d *Discoverer) Lookup(node protocol.NodeID) []string {
	d.registryLock.Lock()
	addr, ok := d.registry[node]
	cached := d.filterCached(d.registry[node])
	d.registryLock.Unlock()

	if ok {
		return addr
	if len(cached) > 0 {
		addrs := make([]string, len(cached))
		for i := range cached {
			addrs[i] = cached[i].addr
		}
		return addrs
	} else if len(d.extServer) != 0 {
		// We might want to cache this, but not permanently so it needs some intelligence
		return d.externalLookup(node)
		addrs := d.externalLookup(node)
		cached = make([]cacheEntry, len(addrs))
		for i := range addrs {
			cached[i] = cacheEntry{
				addr: addrs[i],
				seen: time.Now(),
			}
		}

		d.registryLock.Lock()
		d.registry[node] = cached
		d.registryLock.Unlock()
	}
	return nil
}
@@ -105,11 +150,11 @@ func (d *Discoverer) Hint(node string, addrs []string) {
	})
}

func (d *Discoverer) All() map[protocol.NodeID][]string {
func (d *Discoverer) All() map[protocol.NodeID][]cacheEntry {
	d.registryLock.RLock()
	nodes := make(map[protocol.NodeID][]string, len(d.registry))
	nodes := make(map[protocol.NodeID][]cacheEntry, len(d.registry))
	for node, addrs := range d.registry {
		addrsCopy := make([]string, len(addrs))
		addrsCopy := make([]cacheEntry, len(addrs))
		copy(addrsCopy, addrs)
		nodes[node] = addrsCopy
	}
@@ -149,21 +194,15 @@ func (d *Discoverer) sendLocalAnnouncements() {
		Magic: AnnouncementMagic,
		This:  Node{d.myID[:], addrs},
	}
	msg := pkt.MarshalXDR()

	for {
		pkt.Extra = nil
		d.registryLock.RLock()
		for node, addrs := range d.registry {
			if len(pkt.Extra) == 16 {
				break
			}

			anode := Node{node[:], resolveAddrs(addrs)}
			pkt.Extra = append(pkt.Extra, anode)
		if d.multicastBeacon != nil {
			d.multicastBeacon.Send(msg)
		}
		if d.broadcastBeacon != nil {
			d.broadcastBeacon.Send(msg)
		}
		d.registryLock.RUnlock()

		d.beacon.Send(pkt.MarshalXDR())

		select {
		case <-d.localBcastTick:
@@ -173,20 +212,19 @@
}

func (d *Discoverer) sendExternalAnnouncements() {
	// this should go in the Discoverer struct
	errorRetryIntv := 60 * time.Second
	defer d.globalWG.Done()

	remote, err := net.ResolveUDPAddr("udp", d.extServer)
	for err != nil {
		l.Warnf("Global discovery: %v; trying again in %v", err, errorRetryIntv)
		time.Sleep(errorRetryIntv)
		l.Warnf("Global discovery: %v; trying again in %v", err, d.errorRetryIntv)
		time.Sleep(d.errorRetryIntv)
		remote, err = net.ResolveUDPAddr("udp", d.extServer)
	}

	conn, err := net.ListenUDP("udp", nil)
	for err != nil {
		l.Warnf("Global discovery: %v; trying again in %v", err, errorRetryIntv)
		time.Sleep(errorRetryIntv)
		l.Warnf("Global discovery: %v; trying again in %v", err, d.errorRetryIntv)
		time.Sleep(d.errorRetryIntv)
		conn, err = net.ListenUDP("udp", nil)
	}

@@ -201,7 +239,10 @@ func (d *Discoverer) sendExternalAnnouncements() {
		buf = d.announcementPkt()
	}

	for {
	var bcastTick = time.Tick(d.globalBcastIntv)
	var errTick <-chan time.Time

	sendOneAnnouncement := func() {
		var ok bool

		if debug {
@@ -230,19 +271,40 @@ func (d *Discoverer) sendExternalAnnouncements() {
		d.extAnnounceOKmut.Unlock()

		if ok {
			time.Sleep(d.globalBcastIntv)
		} else {
			time.Sleep(errorRetryIntv)
			errTick = nil
		} else if errTick != nil {
			errTick = time.Tick(d.errorRetryIntv)
		}
	}

	// Announce once, immediately
	sendOneAnnouncement()

loop:
	for {
		select {
		case <-d.stopGlobal:
			break loop

		case <-errTick:
			sendOneAnnouncement()

		case <-bcastTick:
			sendOneAnnouncement()
		}
	}

	if debug {
		l.Debugln("discover: stopping global")
	}
}
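
The StartGlobal/StopGlobal/sendExternalAnnouncements trio is a standard stop-channel plus WaitGroup worker. A self-contained toy version of the same shape (return instead of a labeled break, timings invented):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	stop := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		tick := time.Tick(10 * time.Millisecond)
		for {
			select {
			case <-stop:
				return // same exit path as the stopGlobal case above
			case <-tick:
				// periodic announcement would go here
			}
		}
	}()
	time.Sleep(25 * time.Millisecond)
	close(stop) // StopGlobal: signal...
	wg.Wait()   // ...then wait for the sender to drain
	fmt.Println("stopped cleanly")
}
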

func (d *Discoverer) recvAnnouncements() {
func (d *Discoverer) recvAnnouncements(b beacon.Interface) {
	for {
		buf, addr := d.beacon.Recv()
		buf, addr := b.Recv()

		if debug {
			l.Debugf("discover: read announcement:\n%s", hex.Dump(buf))
			l.Debugf("discover: read announcement from %s:\n%s", addr, hex.Dump(buf))
		}

		var pkt Announce
@@ -251,20 +313,9 @@ func (d *Discoverer) recvAnnouncements() {
			continue
		}

		if debug {
			l.Debugf("discover: parsed announcement: %#v", pkt)
		}

		var newNode bool
		if bytes.Compare(pkt.This.ID, d.myID[:]) != 0 {
			newNode = d.registerNode(addr, pkt.This)
			for _, node := range pkt.Extra {
				if bytes.Compare(node.ID, d.myID[:]) != 0 {
					if d.registerNode(nil, node) {
						newNode = true
					}
				}
			}
		}

		if newNode {
@@ -276,41 +327,57 @@ func (d *Discoverer) recvAnnouncements() {
	}
}

func (d *Discoverer) registerNode(addr net.Addr, node Node) bool {
	var addrs []string
	var id protocol.NodeID
	copy(id[:], node.ID)

	d.registryLock.RLock()
	current := d.filterCached(d.registry[id])
	d.registryLock.RUnlock()

	orig := current

	for _, a := range node.Addresses {
		var nodeAddr string
		if len(a.IP) > 0 {
			nodeAddr = fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
			addrs = append(addrs, nodeAddr)
			nodeAddr = net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
		} else if addr != nil {
			ua := addr.(*net.UDPAddr)
			ua.Port = int(a.Port)
			nodeAddr = ua.String()
			addrs = append(addrs, nodeAddr)
		}
	}
	if len(addrs) == 0 {
		if debug {
			l.Debugln("discover: no valid address for", node.ID)
		for i := range current {
			if current[i].addr == nodeAddr {
				current[i].seen = time.Now()
				goto done
			}
		}
		current = append(current, cacheEntry{
			addr: nodeAddr,
			seen: time.Now(),
		})
	done:
	}

	if debug {
		l.Debugf("discover: register: %s -> %#v", node.ID, addrs)
		l.Debugf("discover: register: %v -> %v", id, current)
	}
	var id protocol.NodeID
	copy(id[:], node.ID)

	d.registryLock.Lock()
	_, seen := d.registry[id]
	d.registry[id] = addrs
	d.registry[id] = current
	d.registryLock.Unlock()

	if !seen {
	if len(current) > len(orig) {
		addrs := make([]string, len(current))
		for i := range current {
			addrs[i] = current[i].addr
		}
		events.Default.Log(events.NodeDiscovered, map[string]interface{}{
			"node":  id.String(),
			"addrs": addrs,
		})
	}
	return !seen

	return len(current) > len(orig)
}
func (d *Discoverer) externalLookup(node protocol.NodeID) []string {
@@ -374,18 +441,29 @@ func (d *Discoverer) externalLookup(node protocol.NodeID) []string {
		return nil
	}

	if debug {
		l.Debugf("discover: parsed external: %#v", pkt)
	}

	var addrs []string
	for _, a := range pkt.This.Addresses {
		nodeAddr := fmt.Sprintf("%s:%d", net.IP(a.IP), a.Port)
		nodeAddr := net.JoinHostPort(net.IP(a.IP).String(), strconv.Itoa(int(a.Port)))
		addrs = append(addrs, nodeAddr)
	}
	return addrs
}

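
The switch from fmt.Sprintf("%s:%d", ...) to net.JoinHostPort matters for IPv6: JoinHostPort brackets literal IPv6 addresses so the result parses back cleanly with net.SplitHostPort. A small standard-library demonstration:

package main

import (
	"fmt"
	"net"
)

func main() {
	ip := net.ParseIP("ff32::5222")
	// Ambiguous: the final colon group blends into the address.
	fmt.Println(fmt.Sprintf("%s:%d", ip, 21026)) // ff32::5222:21026
	// Unambiguous: IPv6 literals are bracketed.
	fmt.Println(net.JoinHostPort(ip.String(), "21026")) // [ff32::5222]:21026
}
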
func (d *Discoverer) filterCached(c []cacheEntry) []cacheEntry {
	for i := 0; i < len(c); {
		if ago := time.Since(c[i].seen); ago > d.cacheLifetime {
			if debug {
				l.Debugf("removing cached address %s: seen %v ago", c[i].addr, ago)
			}
			c[i] = c[len(c)-1]
			c = c[:len(c)-1]
		} else {
			i++
		}
	}
	return c
}
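
filterCached compacts the slice in place by swapping expired entries to the tail: O(n), no allocation, at the cost of reordering. The same idiom on a toy type, runnable in isolation:

package main

import (
	"fmt"
	"time"
)

type entry struct {
	addr string
	seen time.Time
}

// Same swap-with-last compaction as filterCached above.
func filterExpired(c []entry, lifetime time.Duration) []entry {
	for i := 0; i < len(c); {
		if time.Since(c[i].seen) > lifetime {
			c[i] = c[len(c)-1] // overwrite the expired entry with the last one...
			c = c[:len(c)-1]   // ...shrink, and re-check the same index i
		} else {
			i++
		}
	}
	return c
}

func main() {
	now := time.Now()
	c := []entry{
		{"a:1", now},
		{"b:2", now.Add(-10 * time.Minute)}, // stale
		{"c:3", now},
	}
	fmt.Println(len(filterExpired(c, 5*time.Minute))) // 2
}
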

func addrToAddr(addr *net.TCPAddr) Address {
	if len(addr.IP) == 0 || addr.IP.IsUnspecified() {
		return Address{Port: uint16(addr.Port)}
discover/discover_test.go | 7 (new file)
@@ -0,0 +1,7 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package discover_test

// Empty test file to generate 0% coverage rather than no coverage
@@ -20,10 +20,12 @@ const (
	NodeDiscovered
	NodeConnected
	NodeDisconnected
	NodeRejected
	LocalIndexUpdated
	RemoteIndexUpdated
	ItemStarted
	StateChanged
	RepoRejected

	AllEvents = ^EventType(0)
)
@@ -42,6 +44,8 @@ func (t EventType) String() string {
		return "NodeConnected"
	case NodeDisconnected:
		return "NodeDisconnected"
	case NodeRejected:
		return "NodeRejected"
	case LocalIndexUpdated:
		return "LocalIndexUpdated"
	case RemoteIndexUpdated:
@@ -50,6 +54,8 @@ func (t EventType) String() string {
		return "ItemStarted"
	case StateChanged:
		return "StateChanged"
	case RepoRejected:
		return "RepoRejected"
	default:
		return "Unknown"
	}

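
AllEvents = ^EventType(0) works because the event constants are bits in a mask, so the complement of zero subscribes to everything. A toy copy of the pattern — assuming the shift-by-iota definition used at the top of the real const block, which this hunk does not show:

package main

import "fmt"

type EventType int

const (
	Starting EventType = 1 << iota // each constant is a distinct bit
	NodeDiscovered
	NodeRejected
	RepoRejected
)

const AllEvents = ^EventType(0) // all bits set: matches every event

func main() {
	mask := AllEvents
	fmt.Println(mask&RepoRejected != 0) // true — a subscriber with this mask sees it
}
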
files/filenames_darwin.go | 15 (new file)
@@ -0,0 +1,15 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import "code.google.com/p/go.text/unicode/norm"

func normalizedFilename(s string) string {
	return norm.NFC.String(s)
}

func nativeFilename(s string) string {
	return norm.NFD.String(s)
}

files/filenames_unix.go | 17 (new file)
@@ -0,0 +1,17 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// +build !windows,!darwin

package files

import "code.google.com/p/go.text/unicode/norm"

func normalizedFilename(s string) string {
	return norm.NFC.String(s)
}

func nativeFilename(s string) string {
	return s
}

files/filenames_windows.go | 19 (new file)
@@ -0,0 +1,19 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import (
	"path/filepath"

	"code.google.com/p/go.text/unicode/norm"
)

func normalizedFilename(s string) string {
	return norm.NFC.String(filepath.ToSlash(s))
}

func nativeFilename(s string) string {
	return filepath.FromSlash(s)
}
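
Why per-OS files: macOS (HFS+) stores names in NFD while the wire format here is NFC, and Windows additionally needs slash conversion. A quick round-trip demo using the maintained import path golang.org/x/text/unicode/norm (the diff still uses the old code.google.com path):

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfd := "e\u0301"            // "é" as 'e' + combining accent (NFD form)
	nfc := norm.NFC.String(nfd) // single precomposed rune
	fmt.Println(len(nfd), len(nfc))          // 3 2 — byte lengths differ
	fmt.Println(norm.NFD.String(nfc) == nfd) // true — the round trip is lossless
}
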
files/leveldb.go | 261
@@ -1,7 +1,12 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package files

import (
	"bytes"
	"runtime"
	"sort"
	"sync"

@@ -116,11 +121,19 @@ func globalKeyName(key []byte) []byte {
	return key[1+64:]
}

func globalKeyRepo(key []byte) []byte {
	repo := key[1 : 1+64]
	izero := bytes.IndexByte(repo, 0)
	return repo[:izero]
}

type deletionHandler func(db dbReader, batch dbWriter, repo, node, name []byte, dbi iterator.Iterator) uint64

type fileIterator func(f protocol.FileInfo) bool
type fileIterator func(f protocol.FileIntf) bool

func ldbGenericReplace(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo, deleteFn deletionHandler) uint64 {
	defer runtime.GC()

	sort.Sort(fileList(fs)) // sort list on name, same as on disk

	start := nodeKey(repo, node, nil) // before all repo/node files
@@ -173,18 +186,28 @@ func ldbGenericReplace(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo
			if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
				maxLocalVer = lv
			}
			ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
			if fs[fsi].IsInvalid() {
				ldbRemoveFromGlobal(snap, batch, repo, node, newName)
			} else {
				ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
			}
			fsi++

		case moreFs && moreDb && cmp == 0:
			// File exists on both sides - compare versions.
			var ef protocol.FileInfo
			// File exists on both sides - compare versions. We might get an
			// update with the same version and different flags if a node has
			// marked a file as invalid, so handle that too.
			var ef protocol.FileInfoTruncated
			ef.UnmarshalXDR(dbi.Value())
			if fs[fsi].Version > ef.Version {
			if fs[fsi].Version > ef.Version || fs[fsi].Version != ef.Version {
				if lv := ldbInsert(batch, repo, node, newName, fs[fsi]); lv > maxLocalVer {
					maxLocalVer = lv
				}
				ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
				if fs[fsi].IsInvalid() {
					ldbRemoveFromGlobal(snap, batch, repo, node, newName)
				} else {
					ldbUpdateGlobal(snap, batch, repo, node, newName, fs[fsi].Version)
				}
			}
			// Iterate both sides.
			fsi++
@@ -223,20 +246,23 @@ func ldbReplace(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint6

func ldbReplaceWithDelete(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64 {
	return ldbGenericReplace(db, repo, node, fs, func(db dbReader, batch dbWriter, repo, node, name []byte, dbi iterator.Iterator) uint64 {
		var f protocol.FileInfo
		err := f.UnmarshalXDR(dbi.Value())
		var tf protocol.FileInfoTruncated
		err := tf.UnmarshalXDR(dbi.Value())
		if err != nil {
			panic(err)
		}
		if !protocol.IsDeleted(f.Flags) {
		if !tf.IsDeleted() {
			if debug {
				l.Debugf("mark deleted; repo=%q node=%v name=%q", repo, protocol.NodeIDFromBytes(node), name)
			}
			ts := clock(f.LocalVersion)
			f.Blocks = nil
			f.Version = lamport.Default.Tick(f.Version)
			f.Flags |= protocol.FlagDeleted
			f.LocalVersion = ts
			ts := clock(tf.LocalVersion)
			f := protocol.FileInfo{
				Name:         tf.Name,
				Version:      lamport.Default.Tick(tf.Version),
				LocalVersion: ts,
				Flags:        tf.Flags | protocol.FlagDeleted,
				Modified:     tf.Modified,
			}
			batch.Put(dbi.Key(), f.MarshalXDR())
			ldbUpdateGlobal(db, batch, repo, node, nodeKeyName(dbi.Key()), f.Version)
			return ts
@@ -246,6 +272,8 @@ func ldbReplaceWithDelete(db *leveldb.DB, repo, node []byte, fs []protocol.FileI
}

func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64 {
	defer runtime.GC()

	batch := new(leveldb.Batch)
	snap, err := db.GetSnapshot()
	if err != nil {
@@ -262,20 +290,30 @@ func ldbUpdate(db *leveldb.DB, repo, node []byte, fs []protocol.FileInfo) uint64
			if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
				maxLocalVer = lv
			}
			ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
			if f.IsInvalid() {
				ldbRemoveFromGlobal(snap, batch, repo, node, name)
			} else {
				ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
			}
			continue
		}

		var ef protocol.FileInfo
		var ef protocol.FileInfoTruncated
		err = ef.UnmarshalXDR(bs)
		if err != nil {
			panic(err)
		}
		if ef.Version != f.Version {
		// Flags might change without the version being bumped when we set the
		// invalid flag on an existing file.
		if ef.Version != f.Version || ef.Flags != f.Flags {
			if lv := ldbInsert(batch, repo, node, name, f); lv > maxLocalVer {
				maxLocalVer = lv
			}
			ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
			if f.IsInvalid() {
				ldbRemoveFromGlobal(snap, batch, repo, node, name)
			} else {
				ldbUpdateGlobal(snap, batch, repo, node, name, f.Version)
			}
		}
	}

@@ -367,7 +405,9 @@ func ldbRemoveFromGlobal(db dbReader, batch dbWriter, repo, node, file []byte) {
	gk := globalKey(repo, file)
	svl, err := db.Get(gk, nil)
	if err != nil {
		panic(err)
		// We might be called to "remove" a global version that doesn't exist
		// if the first update for the file is already marked invalid.
		return
	}

	var fl versionList
@@ -390,7 +430,7 @@ func ldbRemoveFromGlobal(db dbReader, batch dbWriter, repo, node, file []byte) {
	}
}

func ldbWithHave(db *leveldb.DB, repo, node []byte, fn fileIterator) {
func ldbWithHave(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterator) {
	start := nodeKey(repo, node, nil)                            // before all repo/node files
	limit := nodeKey(repo, node, []byte{0xff, 0xff, 0xff, 0xff}) // after all repo/node files
	snap, err := db.GetSnapshot()
@@ -402,8 +442,7 @@ func ldbWithHave(db *leveldb.DB, repo, node []byte, fn fileIterator) {
	defer dbi.Release()

	for dbi.Next() {
		var f protocol.FileInfo
		err := f.UnmarshalXDR(dbi.Value())
		f, err := unmarshalTrunc(dbi.Value(), truncate)
		if err != nil {
			panic(err)
		}
@@ -413,7 +452,9 @@ func ldbWithHave(db *leveldb.DB, repo, node []byte, fn fileIterator) {
	}
}

func ldbWithAllRepo(db *leveldb.DB, repo []byte, fn func(node []byte, f protocol.FileInfo) bool) {
func ldbWithAllRepoTruncated(db *leveldb.DB, repo []byte, fn func(node []byte, f protocol.FileInfoTruncated) bool) {
	defer runtime.GC()

	start := nodeKey(repo, nil, nil)                                                // before all repo/node files
	limit := nodeKey(repo, protocol.LocalNodeID[:], []byte{0xff, 0xff, 0xff, 0xff}) // after all repo/node files
	snap, err := db.GetSnapshot()
@@ -426,7 +467,7 @@ func ldbWithAllRepo(db *leveldb.DB, repo []byte, fn func(node []byte, f protocol

	for dbi.Next() {
		node := nodeKeyNode(dbi.Key())
		var f protocol.FileInfo
		var f protocol.FileInfoTruncated
		err := f.UnmarshalXDR(dbi.Value())
		if err != nil {
			panic(err)
@@ -437,40 +478,6 @@ func ldbWithAllRepo(db *leveldb.DB, repo []byte, fn func(node []byte, f protocol
	}
}

/*
func ldbCheckGlobalConsistency(db *leveldb.DB, repo []byte) {
	l.Debugf("Checking global consistency for %q", repo)
	start := nodeKey(repo, nil, nil)                                                // before all repo/node files
	limit := nodeKey(repo, protocol.LocalNodeID[:], []byte{0xff, 0xff, 0xff, 0xff}) // after all repo/node files
	snap, err := db.GetSnapshot()
	if err != nil {
		panic(err)
	}
	defer snap.Release()
	dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	defer dbi.Release()

	batch := new(leveldb.Batch)
	i := 0
	for dbi.Next() {
		repo := nodeKeyRepo(dbi.Key())
		node := nodeKeyNode(dbi.Key())
		var f protocol.FileInfo
		err := f.UnmarshalXDR(dbi.Value())
		if err != nil {
			panic(err)
		}
		if ldbUpdateGlobal(snap, batch, repo, node, []byte(f.Name), f.Version) {
			var nodeID protocol.NodeID
			copy(nodeID[:], node)
			l.Debugf("fixed global for %q %s %q", repo, nodeID, f.Name)
		}
		i++
	}
	l.Debugln("Done", i)
}
*/

func ldbGet(db *leveldb.DB, repo, node, file []byte) protocol.FileInfo {
	nk := nodeKey(repo, node, file)
	bs, err := db.Get(nk, nil)
@@ -529,7 +536,9 @@ func ldbGetGlobal(db *leveldb.DB, repo, file []byte) protocol.FileInfo {
	return f
}

func ldbWithGlobal(db *leveldb.DB, repo []byte, fn fileIterator) {
func ldbWithGlobal(db *leveldb.DB, repo []byte, truncate bool, fn fileIterator) {
	defer runtime.GC()

	start := globalKey(repo, nil)
	limit := globalKey(repo, []byte{0xff, 0xff, 0xff, 0xff})
	snap, err := db.GetSnapshot()
@@ -556,8 +565,7 @@ func ldbWithGlobal(db *leveldb.DB, repo []byte, fn fileIterator) {
			panic(err)
		}

		var f protocol.FileInfo
		err = f.UnmarshalXDR(bs)
		f, err := unmarshalTrunc(bs, truncate)
		if err != nil {
			panic(err)
		}
@@ -596,7 +604,9 @@ func ldbAvailability(db *leveldb.DB, repo, file []byte) []protocol.NodeID {
	return nodes
}

func ldbWithNeed(db *leveldb.DB, repo, node []byte, fn fileIterator) {
func ldbWithNeed(db *leveldb.DB, repo, node []byte, truncate bool, fn fileIterator) {
	defer runtime.GC()

	start := globalKey(repo, nil)
	limit := globalKey(repo, []byte{0xff, 0xff, 0xff, 0xff})
	snap, err := db.GetSnapshot()
@@ -607,6 +617,7 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, fn fileIterator) {
	dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	defer dbi.Release()

outer:
	for dbi.Next() {
		var vl versionList
		err := vl.UnmarshalXDR(dbi.Value())
@@ -632,30 +643,118 @@ func ldbWithNeed(db *leveldb.DB, repo, node []byte, fn fileIterator) {

		if need || !have {
			name := globalKeyName(dbi.Key())
			fk := nodeKey(repo, vl.versions[0].node, name)
			bs, err := snap.Get(fk, nil)
			if err != nil {
				panic(err)
			}
			needVersion := vl.versions[0].version
		inner:
			for i := range vl.versions {
				if vl.versions[i].version != needVersion {
					// We haven't found a valid copy of the file with the needed version.
					continue outer
				}
				fk := nodeKey(repo, vl.versions[i].node, name)
				bs, err := snap.Get(fk, nil)
				if err != nil {
					panic(err)
				}

			var gf protocol.FileInfo
			err = gf.UnmarshalXDR(bs)
			if err != nil {
				panic(err)
			}
				gf, err := unmarshalTrunc(bs, truncate)
				if err != nil {
					panic(err)
				}

			if protocol.IsDeleted(gf.Flags) && !have {
				// We don't need deleted files that we don't have
				continue
			}
				if gf.IsInvalid() {
					// The file is marked invalid for whatever reason, don't use it.
					continue inner
				}

			if debug {
				l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
			}
				if gf.IsDeleted() && !have {
					// We don't need deleted files that we don't have
					continue outer
				}

			if cont := fn(gf); !cont {
				return
				if debug {
					l.Debugf("need repo=%q node=%v name=%q need=%v have=%v haveV=%d globalV=%d", repo, protocol.NodeIDFromBytes(node), name, need, have, haveVersion, vl.versions[0].version)
				}

				if cont := fn(gf); !cont {
					return
				}
			}
		}
	}
}

func ldbListRepos(db *leveldb.DB) []string {
	defer runtime.GC()

	start := []byte{keyTypeGlobal}
	limit := []byte{keyTypeGlobal + 1}
	snap, err := db.GetSnapshot()
	if err != nil {
		panic(err)
	}
	defer snap.Release()
	dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	defer dbi.Release()

	repoExists := make(map[string]bool)
	for dbi.Next() {
		repo := string(globalKeyRepo(dbi.Key()))
		if !repoExists[repo] {
			repoExists[repo] = true
		}
	}

	repos := make([]string, 0, len(repoExists))
	for k := range repoExists {
		repos = append(repos, k)
	}

	sort.Strings(repos)
	return repos
}
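
ldbListRepos scans a single key-type byte as a prefix by using a half-open [Start, Limit) range. A self-contained goleveldb demonstration of the same range-scan shape, on an in-memory database:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	db, _ := leveldb.Open(storage.NewMemStorage(), nil)
	defer db.Close()

	// One key-type byte as prefix, as leveldb.go does with keyTypeGlobal.
	const keyTypeGlobal = 1
	db.Put([]byte{keyTypeGlobal, 'a'}, nil, nil)
	db.Put([]byte{keyTypeGlobal, 'b'}, nil, nil)
	db.Put([]byte{keyTypeGlobal + 1, 'x'}, nil, nil) // a different bucket

	// [Start, Limit) over the prefix, same shape as ldbListRepos above.
	iter := db.NewIterator(&util.Range{
		Start: []byte{keyTypeGlobal},
		Limit: []byte{keyTypeGlobal + 1},
	}, nil)
	defer iter.Release()
	for iter.Next() {
		fmt.Printf("%q\n", iter.Key()[1:]) // "a", "b" — the third key is excluded
	}
}
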

func ldbDropRepo(db *leveldb.DB, repo []byte) {
	defer runtime.GC()

	snap, err := db.GetSnapshot()
	if err != nil {
		panic(err)
	}
	defer snap.Release()

	// Remove all items related to the given repo from the node->file bucket
	start := []byte{keyTypeNode}
	limit := []byte{keyTypeNode + 1}
	dbi := snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	for dbi.Next() {
		itemRepo := nodeKeyRepo(dbi.Key())
		if bytes.Compare(repo, itemRepo) == 0 {
			db.Delete(dbi.Key(), nil)
		}
	}
	dbi.Release()

	// Remove all items related to the given repo from the global bucket
	start = []byte{keyTypeGlobal}
	limit = []byte{keyTypeGlobal + 1}
	dbi = snap.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	for dbi.Next() {
		itemRepo := globalKeyRepo(dbi.Key())
		if bytes.Compare(repo, itemRepo) == 0 {
			db.Delete(dbi.Key(), nil)
		}
	}
	dbi.Release()
}

func unmarshalTrunc(bs []byte, truncate bool) (protocol.FileIntf, error) {
	if truncate {
		var tf protocol.FileInfoTruncated
		err := tf.UnmarshalXDR(bs)
		return tf, err
	} else {
		var tf protocol.FileInfo
		err := tf.UnmarshalXDR(bs)
		return tf, err
	}
}
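
unmarshalTrunc gives callers one decode entry point with two concrete types behind the shared FileIntf interface; the truncated form skips block lists, which is what makes the *Truncated iteration variants cheaper. A toy model of the split (names mirror the diff, fields are invented):

package main

import "fmt"

type FileIntf interface{ Size() int64 }

type FileInfo struct{ Blocks []int64 }

func (f FileInfo) Size() int64 {
	var n int64
	for _, b := range f.Blocks {
		n += b
	}
	return n
}

type FileInfoTruncated struct{ CachedSize int64 }

func (f FileInfoTruncated) Size() int64 { return f.CachedSize }

// decode stands in for unmarshalTrunc: same answer, different cost profile.
func decode(truncate bool) FileIntf {
	if truncate {
		return FileInfoTruncated{CachedSize: 42}
	}
	return FileInfo{Blocks: []int64{21, 21}}
}

func main() {
	fmt.Println(decode(true).Size(), decode(false).Size()) // 42 42
}
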

files/set.go | 81
@@ -2,7 +2,12 @@
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

// Package files provides a set type to track local/remote files with newness checks.
// Package files provides a set type to track local/remote files with newness
// checks. We must do a certain amount of normalization in here. We will get
// fed paths with either native or wire-format separators and encodings
// depending on who calls us. We transform paths to wire-format (NFC and
// slashes) on the way to the database, and transform to native format
// (varying separator and encoding) on the way back out.
package files

import (
@@ -36,7 +41,7 @@ func NewSet(repo string, db *leveldb.DB) *Set {
	}

	var nodeID protocol.NodeID
	ldbWithAllRepo(db, []byte(repo), func(node []byte, f protocol.FileInfo) bool {
	ldbWithAllRepoTruncated(db, []byte(repo), func(node []byte, f protocol.FileInfoTruncated) bool {
		copy(nodeID[:], node)
		if f.LocalVersion > s.localVersion[nodeID] {
			s.localVersion[nodeID] = f.LocalVersion
@@ -56,6 +61,7 @@ func (s *Set) Replace(node protocol.NodeID, fs []protocol.FileInfo) {
	if debug {
		l.Debugf("%s Replace(%v, [%d])", s.repo, node, len(fs))
	}
	normalizeFilenames(fs)
	s.mutex.Lock()
	defer s.mutex.Unlock()
	s.localVersion[node] = ldbReplace(s.db, []byte(s.repo), node[:], fs)
@@ -65,6 +71,7 @@ func (s *Set) ReplaceWithDelete(node protocol.NodeID, fs []protocol.FileInfo) {
	if debug {
		l.Debugf("%s ReplaceWithDelete(%v, [%d])", s.repo, node, len(fs))
	}
	normalizeFilenames(fs)
	s.mutex.Lock()
	defer s.mutex.Unlock()
	if lv := ldbReplaceWithDelete(s.db, []byte(s.repo), node[:], fs); lv > s.localVersion[node] {
@@ -76,6 +83,7 @@ func (s *Set) Update(node protocol.NodeID, fs []protocol.FileInfo) {
	if debug {
		l.Debugf("%s Update(%v, [%d])", s.repo, node, len(fs))
	}
	normalizeFilenames(fs)
	s.mutex.Lock()
	defer s.mutex.Unlock()
	if lv := ldbUpdate(s.db, []byte(s.repo), node[:], fs); lv > s.localVersion[node] {
@@ -87,33 +95,58 @@ func (s *Set) WithNeed(node protocol.NodeID, fn fileIterator) {
	if debug {
		l.Debugf("%s WithNeed(%v)", s.repo, node)
	}
	ldbWithNeed(s.db, []byte(s.repo), node[:], fn)
	ldbWithNeed(s.db, []byte(s.repo), node[:], false, nativeFileIterator(fn))
}

func (s *Set) WithNeedTruncated(node protocol.NodeID, fn fileIterator) {
	if debug {
		l.Debugf("%s WithNeedTruncated(%v)", s.repo, node)
	}
	ldbWithNeed(s.db, []byte(s.repo), node[:], true, nativeFileIterator(fn))
}

func (s *Set) WithHave(node protocol.NodeID, fn fileIterator) {
	if debug {
		l.Debugf("%s WithHave(%v)", s.repo, node)
	}
	ldbWithHave(s.db, []byte(s.repo), node[:], fn)
	ldbWithHave(s.db, []byte(s.repo), node[:], false, nativeFileIterator(fn))
}

func (s *Set) WithHaveTruncated(node protocol.NodeID, fn fileIterator) {
	if debug {
		l.Debugf("%s WithHaveTruncated(%v)", s.repo, node)
	}
	ldbWithHave(s.db, []byte(s.repo), node[:], true, nativeFileIterator(fn))
}

func (s *Set) WithGlobal(fn fileIterator) {
	if debug {
		l.Debugf("%s WithGlobal()", s.repo)
	}
	ldbWithGlobal(s.db, []byte(s.repo), fn)
	ldbWithGlobal(s.db, []byte(s.repo), false, nativeFileIterator(fn))
}

func (s *Set) WithGlobalTruncated(fn fileIterator) {
	if debug {
		l.Debugf("%s WithGlobalTruncated()", s.repo)
	}
	ldbWithGlobal(s.db, []byte(s.repo), true, nativeFileIterator(fn))
}

func (s *Set) Get(node protocol.NodeID, file string) protocol.FileInfo {
	return ldbGet(s.db, []byte(s.repo), node[:], []byte(file))
	f := ldbGet(s.db, []byte(s.repo), node[:], []byte(normalizedFilename(file)))
	f.Name = nativeFilename(f.Name)
	return f
}

func (s *Set) GetGlobal(file string) protocol.FileInfo {
	return ldbGetGlobal(s.db, []byte(s.repo), []byte(file))
	f := ldbGetGlobal(s.db, []byte(s.repo), []byte(normalizedFilename(file)))
	f.Name = nativeFilename(f.Name)
	return f
}

func (s *Set) Availability(file string) []protocol.NodeID {
	return ldbAvailability(s.db, []byte(s.repo), []byte(file))
	return ldbAvailability(s.db, []byte(s.repo), []byte(normalizedFilename(file)))
}

func (s *Set) LocalVersion(node protocol.NodeID) uint64 {
@@ -121,3 +154,35 @@ func (s *Set) LocalVersion(node protocol.NodeID) uint64 {
	defer s.mutex.Unlock()
	return s.localVersion[node]
}

// ListRepos returns the repository IDs seen in the database.
func ListRepos(db *leveldb.DB) []string {
	return ldbListRepos(db)
}

// DropRepo clears out all information related to the given repo from the
// database.
func DropRepo(db *leveldb.DB, repo string) {
	ldbDropRepo(db, []byte(repo))
}

func normalizeFilenames(fs []protocol.FileInfo) {
	for i := range fs {
		fs[i].Name = normalizedFilename(fs[i].Name)
	}
}

func nativeFileIterator(fn fileIterator) fileIterator {
	return func(fi protocol.FileIntf) bool {
		switch f := fi.(type) {
		case protocol.FileInfo:
			f.Name = nativeFilename(f.Name)
			return fn(f)
		case protocol.FileInfoTruncated:
			f.Name = nativeFilename(f.Name)
			return fn(f)
		default:
			panic("unknown interface type")
		}
	}
}
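
The boundary rule from the package comment, in miniature: normalize to NFC-plus-slashes on the way into the database, convert back to native on the way out. A hedged sketch with invented helper names, using the maintained golang.org/x/text import path (the real code picks nativeFilename per GOOS; this just round-trips on unix):

package main

import (
	"fmt"
	"path/filepath"

	"golang.org/x/text/unicode/norm"
)

func toWire(name string) string   { return norm.NFC.String(filepath.ToSlash(name)) }
func toNative(name string) string { return filepath.FromSlash(name) }

func main() {
	name := "docs/re\u0301sume\u0301.txt" // NFD, as macOS would hand it to us
	wire := toWire(name)
	fmt.Println(wire != name)           // true: NFC differs from the NFD input
	fmt.Println(toNative(wire) == wire) // true on unix: slashes unchanged
}
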

@@ -5,7 +5,9 @@
package files_test

import (
	"bytes"
	"fmt"
	"reflect"
	"sort"
	"testing"

@@ -16,10 +18,11 @@ import (
	"github.com/syndtr/goleveldb/leveldb/storage"
)

var remoteNode protocol.NodeID
var remoteNode0, remoteNode1 protocol.NodeID

func init() {
	remoteNode, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
	remoteNode0, _ = protocol.NodeIDFromString("AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR")
	remoteNode1, _ = protocol.NodeIDFromString("I6KAH76-66SLLLB-5PFXSOA-UFJCDZC-YAOMLEK-CP2GB32-BV5RQST-3PSROAU")
}

func genBlocks(n int) []protocol.BlockInfo {
@@ -37,7 +40,8 @@ func genBlocks(n int) []protocol.BlockInfo {

func globalList(s *files.Set) []protocol.FileInfo {
	var fs []protocol.FileInfo
	s.WithGlobal(func(f protocol.FileInfo) bool {
	s.WithGlobal(func(fi protocol.FileIntf) bool {
		f := fi.(protocol.FileInfo)
		fs = append(fs, f)
		return true
	})
@@ -46,7 +50,8 @@ func globalList(s *files.Set) []protocol.FileInfo {

func haveList(s *files.Set, n protocol.NodeID) []protocol.FileInfo {
	var fs []protocol.FileInfo
	s.WithHave(n, func(f protocol.FileInfo) bool {
	s.WithHave(n, func(fi protocol.FileIntf) bool {
		f := fi.(protocol.FileInfo)
		fs = append(fs, f)
		return true
	})
@@ -55,7 +60,8 @@ func haveList(s *files.Set, n protocol.NodeID) []protocol.FileInfo {

func needList(s *files.Set, n protocol.NodeID) []protocol.FileInfo {
	var fs []protocol.FileInfo
	s.WithNeed(n, func(f protocol.FileInfo) bool {
	s.WithNeed(n, func(fi protocol.FileIntf) bool {
		f := fi.(protocol.FileInfo)
		fs = append(fs, f)
		return true
	})
@@ -76,6 +82,16 @@ func (l fileList) Swap(a, b int) {
	l[a], l[b] = l[b], l[a]
}

func (l fileList) String() string {
	var b bytes.Buffer
	b.WriteString("[]protocol.FileList{\n")
	for _, f := range l {
		fmt.Fprintf(&b, "  %q: #%d, %d bytes, %d blocks, flags=%o\n", f.Name, f.Version, f.Size(), len(f.Blocks), f.Flags)
	}
	b.WriteString("}")
	return b.String()
}

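
Giving fileList a String method makes it a fmt.Stringer, so the %v verbs in the t.Errorf calls below print a readable per-file summary instead of Go's default struct dump. The mechanism in isolation:

package main

import "fmt"

type fileList []string

// fmt checks for this method and prefers it over the default formatting.
func (l fileList) String() string {
	s := "fileList{\n"
	for _, f := range l {
		s += "  " + f + "\n"
	}
	return s + "}"
}

func main() {
	fmt.Printf("%v\n", fileList{"a", "b"}) // multi-line summary, not [a b]
}
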
func TestGlobalSet(t *testing.T) {
	lamport.Default = lamport.Clock{}

@@ -86,20 +102,20 @@ func TestGlobalSet(t *testing.T) {

	m := files.NewSet("test", db)

	local0 := []protocol.FileInfo{
	local0 := fileList{
		protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
		protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
		protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
		protocol.FileInfo{Name: "z", Version: 1000, Blocks: genBlocks(8)},
	}
	local1 := []protocol.FileInfo{
	local1 := fileList{
		protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
		protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1000, Blocks: genBlocks(3)},
		protocol.FileInfo{Name: "d", Version: 1000, Blocks: genBlocks(4)},
	}
	localTot := []protocol.FileInfo{
	localTot := fileList{
		local0[0],
		local0[1],
		local0[2],
@@ -107,76 +123,76 @@ func TestGlobalSet(t *testing.T) {
		protocol.FileInfo{Name: "z", Version: 1001, Flags: protocol.FlagDeleted},
	}

	remote0 := []protocol.FileInfo{
	remote0 := fileList{
		protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
		protocol.FileInfo{Name: "b", Version: 1000, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5)},
	}
	remote1 := []protocol.FileInfo{
	remote1 := fileList{
		protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(6)},
		protocol.FileInfo{Name: "e", Version: 1000, Blocks: genBlocks(7)},
	}
	remoteTot := []protocol.FileInfo{
	remoteTot := fileList{
		remote0[0],
		remote1[0],
		remote0[2],
		remote1[1],
	}

	expectedGlobal := []protocol.FileInfo{
		remote0[0],
		remote1[0],
		remote0[2],
		localTot[3],
		remote1[1],
		localTot[4],
	expectedGlobal := fileList{
		remote0[0],  // a
		remote1[0],  // b
		remote0[2],  // c
		localTot[3], // d
		remote1[1],  // e
		localTot[4], // z
	}

	expectedLocalNeed := []protocol.FileInfo{
	expectedLocalNeed := fileList{
		remote1[0],
		remote0[2],
		remote1[1],
	}

	expectedRemoteNeed := []protocol.FileInfo{
	expectedRemoteNeed := fileList{
		local0[3],
	}

	m.ReplaceWithDelete(protocol.LocalNodeID, local0)
	m.ReplaceWithDelete(protocol.LocalNodeID, local1)
	m.Replace(remoteNode, remote0)
	m.Update(remoteNode, remote1)
	m.Replace(remoteNode0, remote0)
	m.Update(remoteNode0, remote1)

	g := globalList(m)
	sort.Sort(fileList(g))
	g := fileList(globalList(m))
	sort.Sort(g)

	if fmt.Sprint(g) != fmt.Sprint(expectedGlobal) {
		t.Errorf("Global incorrect;\n A: %v !=\n E: %v", g, expectedGlobal)
	}

	h := haveList(m, protocol.LocalNodeID)
	sort.Sort(fileList(h))
	h := fileList(haveList(m, protocol.LocalNodeID))
	sort.Sort(h)

	if fmt.Sprint(h) != fmt.Sprint(localTot) {
		t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, localTot)
	}

	h = haveList(m, remoteNode)
	sort.Sort(fileList(h))
	h = fileList(haveList(m, remoteNode0))
	sort.Sort(h)

	if fmt.Sprint(h) != fmt.Sprint(remoteTot) {
		t.Errorf("Have incorrect;\n A: %v !=\n E: %v", h, remoteTot)
	}

	n := needList(m, protocol.LocalNodeID)
	sort.Sort(fileList(n))
	n := fileList(needList(m, protocol.LocalNodeID))
	sort.Sort(n)

	if fmt.Sprint(n) != fmt.Sprint(expectedLocalNeed) {
		t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedLocalNeed)
	}

	n = needList(m, remoteNode)
	sort.Sort(fileList(n))
	n = fileList(needList(m, remoteNode0))
	sort.Sort(n)

	if fmt.Sprint(n) != fmt.Sprint(expectedRemoteNeed) {
		t.Errorf("Need incorrect;\n A: %v !=\n E: %v", n, expectedRemoteNeed)
@@ -187,7 +203,7 @@ func TestGlobalSet(t *testing.T) {
		t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, localTot[1])
	}

	f = m.Get(remoteNode, "b")
	f = m.Get(remoteNode0, "b")
	if fmt.Sprint(f) != fmt.Sprint(remote1[0]) {
		t.Errorf("Get incorrect;\n A: %v !=\n E: %v", f, remote1[0])
	}
@@ -207,14 +223,14 @@ func TestGlobalSet(t *testing.T) {
		t.Errorf("GetGlobal incorrect;\n A: %v !=\n E: %v", f, protocol.FileInfo{})
	}

	av := []protocol.NodeID{protocol.LocalNodeID, remoteNode}
	av := []protocol.NodeID{protocol.LocalNodeID, remoteNode0}
	a := m.Availability("a")
	if !(len(a) == 2 && (a[0] == av[0] && a[1] == av[1] || a[0] == av[1] && a[1] == av[0])) {
		t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, av)
	}
	a = m.Availability("b")
	if len(a) != 1 || a[0] != remoteNode {
		t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode)
	if len(a) != 1 || a[0] != remoteNode0 {
		t.Errorf("Availability incorrect;\n A: %v !=\n E: %v", a, remoteNode0)
	}
	a = m.Availability("d")
	if len(a) != 1 || a[0] != protocol.LocalNodeID {
@@ -222,6 +238,128 @@ func TestGlobalSet(t *testing.T) {
	}
}

func TestNeedWithInvalid(t *testing.T) {
	lamport.Default = lamport.Clock{}

	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		t.Fatal(err)
	}

	s := files.NewSet("test", db)

	localHave := fileList{
		protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
	}
	remote0Have := fileList{
		protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
		protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
	}
	remote1Have := fileList{
		protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
		protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
		protocol.FileInfo{Name: "e", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
	}

	expectedNeed := fileList{
		protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(7)},
		protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
	}

	s.ReplaceWithDelete(protocol.LocalNodeID, localHave)
	s.Replace(remoteNode0, remote0Have)
	s.Replace(remoteNode1, remote1Have)

	need := fileList(needList(s, protocol.LocalNodeID))
	sort.Sort(need)

	if fmt.Sprint(need) != fmt.Sprint(expectedNeed) {
		t.Errorf("Need incorrect;\n A: %v !=\n E: %v", need, expectedNeed)
	}
}

func TestUpdateToInvalid(t *testing.T) {
	lamport.Default = lamport.Clock{}

	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		t.Fatal(err)
	}

	s := files.NewSet("test", db)

	localHave := fileList{
		protocol.FileInfo{Name: "a", Version: 1000, Blocks: genBlocks(1)},
		protocol.FileInfo{Name: "b", Version: 1001, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "c", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
		protocol.FileInfo{Name: "d", Version: 1003, Blocks: genBlocks(7)},
	}

	s.ReplaceWithDelete(protocol.LocalNodeID, localHave)

	have := fileList(haveList(s, protocol.LocalNodeID))
	sort.Sort(have)

	if fmt.Sprint(have) != fmt.Sprint(localHave) {
		t.Errorf("Have incorrect before invalidation;\n A: %v !=\n E: %v", have, localHave)
	}

	localHave[1] = protocol.FileInfo{Name: "b", Version: 1001, Flags: protocol.FlagInvalid}
	s.Update(protocol.LocalNodeID, localHave[1:2])

	have = fileList(haveList(s, protocol.LocalNodeID))
	sort.Sort(have)

	if fmt.Sprint(have) != fmt.Sprint(localHave) {
		t.Errorf("Have incorrect after invalidation;\n A: %v !=\n E: %v", have, localHave)
	}
}

func TestInvalidAvailability(t *testing.T) {
	lamport.Default = lamport.Clock{}

	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		t.Fatal(err)
	}

	s := files.NewSet("test", db)

	remote0Have := fileList{
		protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
		protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(7)},
		protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
	}
	remote1Have := fileList{
		protocol.FileInfo{Name: "both", Version: 1001, Blocks: genBlocks(2)},
		protocol.FileInfo{Name: "r1only", Version: 1002, Blocks: genBlocks(7)},
		protocol.FileInfo{Name: "r0only", Version: 1003, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
		protocol.FileInfo{Name: "none", Version: 1004, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
	}

	s.Replace(remoteNode0, remote0Have)
	s.Replace(remoteNode1, remote1Have)

	if av := s.Availability("both"); len(av) != 2 {
		t.Error("Incorrect availability for 'both':", av)
	}

	if av := s.Availability("r0only"); len(av) != 1 || av[0] != remoteNode0 {
		t.Error("Incorrect availability for 'r0only':", av)
	}

	if av := s.Availability("r1only"); len(av) != 1 || av[0] != remoteNode1 {
		t.Error("Incorrect availability for 'r1only':", av)
	}

	if av := s.Availability("none"); len(av) != 0 {
		t.Error("Incorrect availability for 'none':", av)
	}
}

func TestLocalDeleted(t *testing.T) {
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
@@ -327,7 +465,7 @@ func Benchmark10kUpdateChg(b *testing.B) {
	}

	m := files.NewSet("test", db)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	var local []protocol.FileInfo
	for i := 0; i < 10000; i++ {
@@ -358,7 +496,7 @@ func Benchmark10kUpdateSme(b *testing.B) {
		b.Fatal(err)
	}
	m := files.NewSet("test", db)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	var local []protocol.FileInfo
	for i := 0; i < 10000; i++ {
@@ -385,7 +523,7 @@ func Benchmark10kNeed2k(b *testing.B) {
	}

	m := files.NewSet("test", db)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	var local []protocol.FileInfo
	for i := 0; i < 8000; i++ {
@@ -418,7 +556,7 @@ func Benchmark10kHaveFullList(b *testing.B) {
	}

	m := files.NewSet("test", db)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	var local []protocol.FileInfo
	for i := 0; i < 2000; i++ {
@@ -451,7 +589,7 @@ func Benchmark10kGlobal(b *testing.B) {
	}

	m := files.NewSet("test", db)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	var local []protocol.FileInfo
	for i := 0; i < 2000; i++ {
@@ -502,8 +640,8 @@ func TestGlobalReset(t *testing.T) {
		t.Errorf("Global incorrect;\n%v !=\n%v", g, local)
	}

	m.Replace(remoteNode, remote)
	m.Replace(remoteNode, nil)
	m.Replace(remoteNode0, remote)
	m.Replace(remoteNode0, nil)

	g = globalList(m)
	sort.Sort(fileList(g))
@@ -542,7 +680,7 @@ func TestNeed(t *testing.T) {
	}

	m.ReplaceWithDelete(protocol.LocalNodeID, local)
	m.Replace(remoteNode, remote)
	m.Replace(remoteNode0, remote)

	need := needList(m, protocol.LocalNodeID)

@@ -592,3 +730,144 @@ func TestLocalVersion(t *testing.T) {
		t.Fatal("Local version number should be unchanged")
	}
}

func TestListDropRepo(t *testing.T) {
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		t.Fatal(err)
	}

	s0 := files.NewSet("test0", db)
	local1 := []protocol.FileInfo{
		protocol.FileInfo{Name: "a", Version: 1000},
		protocol.FileInfo{Name: "b", Version: 1000},
		protocol.FileInfo{Name: "c", Version: 1000},
	}
	s0.Replace(protocol.LocalNodeID, local1)

	s1 := files.NewSet("test1", db)
	local2 := []protocol.FileInfo{
		protocol.FileInfo{Name: "d", Version: 1002},
		protocol.FileInfo{Name: "e", Version: 1002},
		protocol.FileInfo{Name: "f", Version: 1002},
	}
	s1.Replace(remoteNode0, local2)

	// Check that we have both repos and their data is in the global list

	expectedRepoList := []string{"test0", "test1"}
	if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
		t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
	}
	if l := len(globalList(s0)); l != 3 {
		t.Errorf("Incorrect global length %d != 3 for s0", l)
	}
	if l := len(globalList(s1)); l != 3 {
		t.Errorf("Incorrect global length %d != 3 for s1", l)
	}

	// Drop one of them and check that it's gone.

	files.DropRepo(db, "test1")

	expectedRepoList = []string{"test0"}
	if actualRepoList := files.ListRepos(db); !reflect.DeepEqual(actualRepoList, expectedRepoList) {
		t.Fatalf("RepoList mismatch\nE: %v\nA: %v", expectedRepoList, actualRepoList)
|
||||
}
|
||||
if l := len(globalList(s0)); l != 3 {
|
||||
t.Errorf("Incorrect global length %d != 3 for s0", l)
|
||||
}
|
||||
if l := len(globalList(s1)); l != 0 {
|
||||
t.Errorf("Incorrect global length %d != 0 for s1", l)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLongPath(t *testing.T) {
|
||||
db, err := leveldb.Open(storage.NewMemStorage(), nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
s := files.NewSet("test", db)
|
||||
|
||||
var b bytes.Buffer
|
||||
for i := 0; i < 100; i++ {
|
||||
b.WriteString("012345678901234567890123456789012345678901234567890")
|
||||
}
|
||||
name := b.String() // 5000 characters
|
||||
|
||||
local := []protocol.FileInfo{
|
||||
protocol.FileInfo{Name: string(name), Version: 1000},
|
||||
}
|
||||
|
||||
s.ReplaceWithDelete(protocol.LocalNodeID, local)
|
||||
|
||||
gf := globalList(s)
|
||||
if l := len(gf); l != 1 {
|
||||
t.Fatalf("Incorrect len %d != 1 for global list", l)
|
||||
}
|
||||
if gf[0].Name != local[0].Name {
|
||||
t.Error("Incorrect long filename;\n%q !=\n%q", gf[0].Name, local[0].Name)
|
||||
	}
}

/*
var gf protocol.FileInfo

func TestStressGlobalVersion(t *testing.T) {
	dur := 15 * time.Second
	if testing.Short() {
		dur = 1 * time.Second
	}

	set1 := []protocol.FileInfo{
		protocol.FileInfo{Name: "a", Version: 1000},
		protocol.FileInfo{Name: "b", Version: 1000},
	}
	set2 := []protocol.FileInfo{
		protocol.FileInfo{Name: "b", Version: 1001},
		protocol.FileInfo{Name: "c", Version: 1000},
	}

	db, err := leveldb.OpenFile("testdata/global.db", nil)
	if err != nil {
		t.Fatal(err)
	}

	m := files.NewSet("test", db)

	done := make(chan struct{})
	go stressWriter(m, remoteNode0, set1, nil, done)
	go stressWriter(m, protocol.LocalNodeID, set2, nil, done)

	t0 := time.Now()
	for time.Since(t0) < dur {
		m.WithGlobal(func(f protocol.FileInfo) bool {
			gf = f
			return true
		})
	}

	close(done)
}

func stressWriter(s *files.Set, id protocol.NodeID, set1, set2 []protocol.FileInfo, done chan struct{}) {
	one := true
	i := 0
	for {
		select {
		case <-done:
			return

		default:
			if one {
				s.Replace(id, set1)
			} else {
				s.Replace(id, set2)
			}
			one = !one
		}
		i++
	}
}
*/
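For orientation, a minimal, self-contained sketch of the files.Set API as the tests above exercise it. This is not part of the diff; the import paths are assumptions, the rest mirrors the calls shown in the tests.

    // Sketch (not in the diff): the files.Set flow used by the tests above.
    package main

    import (
    	"fmt"

    	"github.com/syncthing/syncthing/files"    // assumed import path
    	"github.com/syncthing/syncthing/protocol" // assumed import path
    	"github.com/syndtr/goleveldb/leveldb"
    	"github.com/syndtr/goleveldb/leveldb/storage"
    )

    func main() {
    	// An in-memory LevelDB, exactly as the tests set it up.
    	db, err := leveldb.Open(storage.NewMemStorage(), nil)
    	if err != nil {
    		panic(err)
    	}

    	s := files.NewSet("demo", db)

    	// Publish the local node's file list; ReplaceWithDelete also marks
    	// files from a previous announcement that are now missing as deleted.
    	s.ReplaceWithDelete(protocol.LocalNodeID, []protocol.FileInfo{
    		{Name: "a", Version: 1000},
    	})

    	// Availability reports which nodes can serve the newest version;
    	// entries carrying protocol.FlagInvalid are skipped, as the
    	// TestInvalidAvailability cases above verify.
    	fmt.Println(s.Availability("a"))
    }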
62	fnmatch/fnmatch.go	Normal file
@@ -0,0 +1,62 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package fnmatch

import (
	"path/filepath"
	"regexp"
	"runtime"
	"strings"
)

const (
	FNM_NOESCAPE = (1 << iota)
	FNM_PATHNAME
	FNM_CASEFOLD
)

func Convert(pattern string, flags int) (*regexp.Regexp, error) {
	any := "."

	if runtime.GOOS == "windows" {
		flags |= FNM_NOESCAPE
		pattern = filepath.FromSlash(pattern)
		if flags&FNM_PATHNAME != 0 {
			any = "[^\\\\]"
		}
	} else if flags&FNM_PATHNAME != 0 {
		any = "[^/]"
	}
	if flags&FNM_NOESCAPE != 0 {
		pattern = strings.Replace(pattern, "\\", "\\\\", -1)
	} else {
		pattern = strings.Replace(pattern, "\\*", "[:escapedstar:]", -1)
		pattern = strings.Replace(pattern, "\\?", "[:escapedques:]", -1)
		pattern = strings.Replace(pattern, "\\.", "[:escapeddot:]", -1)
	}
	pattern = strings.Replace(pattern, ".", "\\.", -1)
	pattern = strings.Replace(pattern, "**", "[:doublestar:]", -1)
	pattern = strings.Replace(pattern, "*", any+"*", -1)
	pattern = strings.Replace(pattern, "[:doublestar:]", ".*", -1)
	pattern = strings.Replace(pattern, "?", any, -1)
	pattern = strings.Replace(pattern, "[:escapedstar:]", "\\*", -1)
	pattern = strings.Replace(pattern, "[:escapedques:]", "\\?", -1)
	pattern = strings.Replace(pattern, "[:escapeddot:]", "\\.", -1)
	pattern = "^" + pattern + "$"
	if flags&FNM_CASEFOLD != 0 {
		pattern = "(?i)" + pattern
	}
	return regexp.Compile(pattern)
}

// Matches the pattern against the string, with the given flags,
// and returns true if the match is successful.
func Match(pattern, s string, flags int) (bool, error) {
	exp, err := Convert(pattern, flags)
	if err != nil {
		return false, err
	}
	return exp.MatchString(s), nil
}
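A quick sketch of what Match returns for the flag combinations handled by Convert above. This is illustrative only, not part of the diff; the import path is an assumption, and the expected outcomes mirror the table-driven cases in the test file that follows.

    // Sketch (not in the diff): expected Match results, matching the
    // testcases below. Import path is an assumption.
    package main

    import (
    	"fmt"

    	"github.com/syncthing/syncthing/fnmatch" // assumed import path
    )

    func main() {
    	// With FNM_PATHNAME, "*" is rewritten to "[^/]*" and stops at
    	// path separators...
    	m, _ := fnmatch.Match("*/foo.txt", "bar/baz/foo.txt", fnmatch.FNM_PATHNAME)
    	fmt.Println(m) // false

    	// ...while "**" becomes ".*" and crosses them.
    	m, _ = fnmatch.Match("**/foo.txt", "bar/baz/foo.txt", fnmatch.FNM_PATHNAME)
    	fmt.Println(m) // true

    	// FNM_CASEFOLD prepends "(?i)" to the compiled expression.
    	m, _ = fnmatch.Match("foo.txt", "foo.TXT", fnmatch.FNM_CASEFOLD)
    	fmt.Println(m) // true
    }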
81	fnmatch/fnmatch_test.go	Normal file
@@ -0,0 +1,81 @@
// Copyright (C) 2014 Jakob Borg and Contributors (see the CONTRIBUTORS file).
// All rights reserved. Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.

package fnmatch

import (
	"path/filepath"
	"runtime"
	"testing"
)

type testcase struct {
	pat   string
	name  string
	flags int
	match bool
}

var testcases = []testcase{
	{"", "", 0, true},
	{"*", "", 0, true},
	{"*", "foo", 0, true},
	{"*", "bar", 0, true},
	{"*", "*", 0, true},
	{"**", "f", 0, true},
	{"**", "foo.txt", 0, true},
	{"*.*", "foo.txt", 0, true},
	{"foo*.txt", "foobar.txt", 0, true},
	{"foo.txt", "foo.txt", 0, true},

	{"foo.txt", "bar/foo.txt", 0, false},
	{"*/foo.txt", "bar/foo.txt", 0, true},
	{"f?o.txt", "foo.txt", 0, true},
	{"f?o.txt", "fooo.txt", 0, false},
	{"f[ab]o.txt", "foo.txt", 0, false},
	{"f[ab]o.txt", "fao.txt", 0, true},
	{"f[ab]o.txt", "fbo.txt", 0, true},
	{"f[ab]o.txt", "fco.txt", 0, false},
	{"f[ab]o.txt", "fabo.txt", 0, false},
	{"f[ab]o.txt", "f[ab]o.txt", 0, false},
	{"f\\[ab\\]o.txt", "f[ab]o.txt", FNM_NOESCAPE, false},

	{"*foo.txt", "bar/foo.txt", 0, true},
	{"*foo.txt", "bar/foo.txt", FNM_PATHNAME, false},
	{"*/foo.txt", "bar/foo.txt", 0, true},
	{"*/foo.txt", "bar/foo.txt", FNM_PATHNAME, true},
	{"*/foo.txt", "bar/baz/foo.txt", 0, true},
	{"*/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME, false},
	{"**/foo.txt", "bar/baz/foo.txt", 0, true},
	{"**/foo.txt", "bar/baz/foo.txt", FNM_PATHNAME, true},

	{"foo.txt", "foo.TXT", 0, false},
	{"foo.txt", "foo.TXT", FNM_CASEFOLD, true},
}

func TestMatch(t *testing.T) {
	if runtime.GOOS != "windows" {
		testcases = append(testcases, testcase{"f\\[ab\\]o.txt", "f[ab]o.txt", 0, true})
		testcases = append(testcases, testcase{"foo\\.txt", "foo.txt", 0, true})
		testcases = append(testcases, testcase{"foo\\*.txt", "foo*.txt", 0, true})
		testcases = append(testcases, testcase{"foo\\.txt", "foo.txt", FNM_NOESCAPE, false})
		testcases = append(testcases, testcase{"f\\\\\\[ab\\\\\\]o.txt", "f\\[ab\\]o.txt", 0, true})
	}

	for _, tc := range testcases {
		if m, err := Match(tc.pat, filepath.FromSlash(tc.name), tc.flags); m != tc.match {
			if err != nil {
				t.Error(err)
			} else {
				t.Errorf("Match(%q, %q, %d) != %v", tc.pat, tc.name, tc.flags, tc.match)
			}
		}
	}
}

func TestInvalid(t *testing.T) {
	if _, err := Match("foo[bar", "...", 0); err == nil {
		t.Error("Unexpected nil error")
	}
}
194	gui/app.js
@@ -15,22 +15,28 @@ syncthing.config(function ($httpProvider, $translateProvider) {
    $httpProvider.defaults.xsrfCookieName = 'CSRF-Token';

    $translateProvider.useStaticFilesLoader({
        prefix: 'lang-',
        prefix: 'lang/lang-',
        suffix: '.json'
    });
});

syncthing.controller('EventCtrl', function ($scope, $http) {
    $scope.lastEvent = null;
    var online = false;
    var lastID = 0;

    var successFn = function (data) {
        if (!online) {
            $scope.$emit('UIOnline');
            online = true;
        // When Syncthing restarts while the long polling connection is in
        // progress, the browser on some platforms returns a 200 (since the
        // headers have already been flushed with the 200 return code) but
        // no data. That effectively means the connection was reset and the
        // call was not actually successful.
        if (!data) {
            errorFn(data);
            return;
        }

        $scope.$emit('UIOnline');

        if (lastID > 0) {
            data.forEach(function (event) {
                console.log("event", event.id, event.type, event.data);
@@ -49,10 +55,8 @@ syncthing.controller('EventCtrl', function ($scope, $http) {
    };

    var errorFn = function (data) {
        if (online) {
            $scope.$emit('UIOffline');
            online = false;
        }
        $scope.$emit('UIOffline');

        setTimeout(function () {
            $http.get(urlbase + '/events?limit=1')
                .success(successFn)
@@ -68,6 +72,8 @@ syncthing.controller('EventCtrl', function ($scope, $http) {
syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $location) {
    var prevDate = 0;
    var getOK = true;
    var navigatingAway = false;
    var online = false;
    var restarting = false;

    $scope.completion = {};
@@ -84,18 +90,44 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
    $scope.repos = {};
    $scope.seenError = '';
    $scope.upgradeInfo = {};
    $scope.stats = {};

    $http.get(urlbase+"/lang").success(function (langs) {
        var lang;
        // Find the first language in the list provided by the user's browser
        // that is a prefix of a language we have available. That is, "en"
        // sent by the browser will match "en" or "en-US", while "zh-TW" will
        // match only "zh-TW" and not "zh-CN".

        var lang, matching;
        for (var i = 0; i < langs.length; i++) {
            lang = langs[i];
            if (validLangs.indexOf(lang) >= 0) {
                $translate.use(lang);
                break;
            if (lang.length < 2) {
                continue;
            }
            matching = validLangs.filter(function (possibleLang) {
                // The langs returned by the /rest/langs call will be in lower
                // case. We compare to the lowercase version of the language
                // code we have as well.
                possibleLang = possibleLang.toLowerCase();
                if (possibleLang.length > lang.length) {
                    return possibleLang.indexOf(lang) == 0;
                } else {
                    return lang.indexOf(possibleLang) == 0;
                }
            });
            if (matching.length >= 1) {
                $translate.use(matching[0]);
                return;
            }
        }
        // Fallback if nothing matched
        $translate.use("en");
    })
    $(window).bind('beforeunload', function() {
        navigatingAway = true;
    });

    $scope.$on("$locationChangeSuccess", function () {
        var lang = $location.search().lang;
        if (lang) {
@@ -117,8 +149,13 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
    }

    $scope.$on('UIOnline', function (event, arg) {
        if (online && !restarting) {
            return;
        }

        console.log('UIOnline');
        $scope.init();
        online = true;
        restarting = false;
        $('#networkError').modal('hide');
        $('#restarting').modal('hide');
@@ -126,9 +163,14 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
    });

    $scope.$on('UIOffline', function (event, arg) {
        if (navigatingAway || !online) {
            return;
        }

        console.log('UIOffline');
        online = false;
        if (!restarting) {
            $('#networkError').modal({backdrop: 'static', keyboard: false});
            $('#networkError').modal();
        }
    });

@@ -157,6 +199,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

    $scope.$on('NodeDisconnected', function (event, arg) {
        delete $scope.connections[arg.data.id];
        refreshNodeStats();
    });

    $scope.$on('NodeConnected', function (event, arg) {
@@ -188,7 +231,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
            document.cookie = "firstVisit=" + Date.now() + ";max-age=" + 30*24*3600;
        } else {
            if (+firstVisit < Date.now() - 4*3600*1000){
                $('#ur').modal({backdrop: 'static', keyboard: false});
                $('#ur').modal();
            }
        }
    }
@@ -314,10 +357,18 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        });
    }

    var refreshNodeStats = debounce(function () {
        $http.get(urlbase+"/stats/node").success(function (data) {
            $scope.stats = data;
            console.log("refreshNodeStats", data);
        });
    }, 500);

    $scope.init = function() {
        refreshSystem();
        refreshConfig();
        refreshConnectionStats();
        refreshNodeStats();

        $http.get(urlbase + '/version').success(function (data) {
            $scope.version = data;
@@ -471,7 +522,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        $scope.tmpOptions = angular.copy($scope.config.Options);
        $scope.tmpOptions.UREnabled = ($scope.tmpOptions.URAccepted > 0);
        $scope.tmpGUI = angular.copy($scope.config.GUI);
        $('#settings').modal({backdrop: 'static', keyboard: true});
        $('#settings').modal();
    };

    $scope.saveConfig = function() {
@@ -514,7 +565,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

    $scope.restart = function () {
        restarting = true;
        $('#restarting').modal({backdrop: 'static', keyboard: false});
        $('#restarting').modal();
        $http.post(urlbase + '/restart');
        $scope.configInSync = true;

@@ -536,9 +587,9 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

    $scope.upgrade = function () {
        restarting = true;
        $('#upgrading').modal({backdrop: 'static', keyboard: false});
        $('#upgrading').modal();
        $http.post(urlbase + '/upgrade').success(function () {
            $('#restarting').modal({backdrop: 'static', keyboard: false});
            $('#restarting').modal();
            $('#upgrading').modal('hide');
        }).error(function () {
            $('#upgrading').modal('hide');
@@ -548,7 +599,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
    $scope.shutdown = function () {
        restarting = true;
        $http.post(urlbase + '/shutdown').success(function () {
            $('#shutdown').modal({backdrop: 'static', keyboard: false});
            $('#shutdown').modal();
        });
        $scope.configInSync = true;
    };
@@ -559,7 +610,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        $scope.editingSelf = (nodeCfg.NodeID == $scope.myID);
        $scope.currentNode.AddressesStr = nodeCfg.Addresses.join(', ');
        $scope.nodeEditor.$setPristine();
        $('#editNode').modal({backdrop: 'static', keyboard: true});
        $('#editNode').modal();
    };

    $scope.idNode = function () {
@@ -571,7 +622,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        $scope.editingExisting = false;
        $scope.editingSelf = false;
        $scope.nodeEditor.$setPristine();
        $('#editNode').modal({backdrop: 'static', keyboard: true});
        $('#editNode').modal();
    };

    $scope.deleteNode = function () {
@@ -674,19 +725,44 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        });
        if ($scope.currentRepo.Versioning && $scope.currentRepo.Versioning.Type === "simple") {
            $scope.currentRepo.simpleFileVersioning = true;
            $scope.currentRepo.FileVersioningSelector = "simple";
            $scope.currentRepo.simpleKeep = +$scope.currentRepo.Versioning.Params.keep;
        } else if ($scope.currentRepo.Versioning && $scope.currentRepo.Versioning.Type === "staggered") {
            $scope.currentRepo.staggeredFileVersioning = true;
            $scope.currentRepo.FileVersioningSelector = "staggered";
            $scope.currentRepo.staggeredMaxAge = Math.floor(+$scope.currentRepo.Versioning.Params.maxAge / 86400);
            $scope.currentRepo.staggeredCleanInterval = +$scope.currentRepo.Versioning.Params.cleanInterval;
            $scope.currentRepo.staggeredVersionsPath = $scope.currentRepo.Versioning.Params.versionsPath;
        } else {
            $scope.currentRepo.FileVersioningSelector = "none";
        }
        $scope.currentRepo.simpleKeep = $scope.currentRepo.simpleKeep || 5;
        $scope.currentRepo.staggeredCleanInterval = $scope.currentRepo.staggeredCleanInterval || 3600;
        $scope.currentRepo.staggeredVersionsPath = $scope.currentRepo.staggeredVersionsPath || "";

        // staggeredMaxAge can validly be zero, which we should not replace
        // with the default value of 365. So only set the default if it's
        // actually undefined.
        if (typeof $scope.currentRepo.staggeredMaxAge === 'undefined') {
            $scope.currentRepo.staggeredMaxAge = 365;
        }

        $scope.editingExisting = true;
        $scope.repoEditor.$setPristine();
        $('#editRepo').modal({backdrop: 'static', keyboard: true});
        $('#editRepo').modal();
    };

    $scope.addRepo = function () {
        $scope.currentRepo = {selectedNodes: {}};
        $scope.currentRepo.RescanIntervalS = 60;
        $scope.currentRepo.FileVersioningSelector = "none";
        $scope.currentRepo.simpleKeep = 5;
        $scope.currentRepo.staggeredMaxAge = 365;
        $scope.currentRepo.staggeredCleanInterval = 3600;
        $scope.currentRepo.staggeredVersionsPath = "";
        $scope.editingExisting = false;
        $scope.repoEditor.$setPristine();
        $('#editRepo').modal({backdrop: 'static', keyboard: true});
        $('#editRepo').modal();
    };

    $scope.saveRepo = function () {
@@ -703,7 +779,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        }
        delete repoCfg.selectedNodes;

        if (repoCfg.simpleFileVersioning) {
        if (repoCfg.FileVersioningSelector === "simple") {
            repoCfg.Versioning = {
                'Type': 'simple',
                'Params': {
@@ -712,6 +788,20 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
            };
            delete repoCfg.simpleFileVersioning;
            delete repoCfg.simpleKeep;
        } else if (repoCfg.FileVersioningSelector === "staggered") {
            repoCfg.Versioning = {
                'Type': 'staggered',
                'Params': {
                    'maxAge': '' + (repoCfg.staggeredMaxAge * 86400),
                    'cleanInterval': '' + repoCfg.staggeredCleanInterval,
                    'versionsPath': '' + repoCfg.staggeredVersionsPath,
                }
            };
            delete repoCfg.staggeredFileVersioning;
            delete repoCfg.staggeredMaxAge;
            delete repoCfg.staggeredCleanInterval;
            delete repoCfg.staggeredVersionsPath;

        } else {
            delete repoCfg.Versioning;
        }
@@ -747,8 +837,6 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        cfg.APIKey = randomString(30, 32);
    };


    $scope.acceptUR = function () {
        $scope.config.Options.URAccepted = 1000; // Larger than the largest existing report version
        $scope.saveConfig();
@@ -763,7 +851,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca

    $scope.showNeed = function (repo) {
        $scope.neededLoaded = false;
        $('#needed').modal({backdrop: 'static', keyboard: true});
        $('#needed').modal();
        $http.get(urlbase + "/need?repo=" + encodeURIComponent(repo)).success(function (data) {
            $scope.needed = data;
            $scope.neededLoaded = true;
@@ -786,9 +874,7 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
    };

    $scope.override = function (repo) {
        $http.post(urlbase + "/model/override?repo=" + encodeURIComponent(repo)).success(function () {
            $scope.refresh();
        });
        $http.post(urlbase + "/model/override?repo=" + encodeURIComponent(repo));
    };

    $scope.about = function () {
@@ -799,6 +885,10 @@ syncthing.controller('SyncthingCtrl', function ($scope, $http, $translate, $loca
        $scope.reportPreview = true;
    };

    $scope.rescanRepo = function (repo) {
        $http.post(urlbase + "/scan?repo=" + encodeURIComponent(repo));
    };

    $scope.init();
    setInterval($scope.refresh, 10000);
});

@@ -816,10 +906,10 @@ function nodeCompare(a, b) {
}

function repoCompare(a, b) {
    if (a.Directory < b.Directory) {
    if (a.ID < b.ID) {
        return -1;
    }
    return a.Directory > b.Directory;
    return a.ID > b.ID;
}

function repoMap(l) {
@@ -881,9 +971,9 @@ function debounce(func, wait) {
        } else {
            timeout = null;
            if (again) {
                again = false;
                result = func.apply(context, args);
                context = args = null;
                again = false;
            }
        }
    };
@@ -953,12 +1043,6 @@ syncthing.filter('metric', function () {
    };
});

syncthing.filter('short', function () {
    return function (input) {
        return input.substr(0, 6);
    };
});

syncthing.filter('alwaysNumber', function () {
    return function (input) {
        if (input === undefined) {
@@ -968,18 +1052,6 @@ syncthing.filter('alwaysNumber', function () {
    };
});

syncthing.filter('shortPath', function () {
    return function (input) {
        if (input === undefined)
            return "";
        var parts = input.split(/[\/\\]/);
        if (!parts || parts.length <= 3) {
            return input;
        }
        return ".../" + parts.slice(parts.length-2).join("/");
    };
});

syncthing.filter('basename', function () {
    return function (input) {
        if (input === undefined)
@@ -992,24 +1064,6 @@ syncthing.filter('basename', function () {
    };
});

syncthing.filter('clean', function () {
    return function (input) {
        return encodeURIComponent(input).replace(/%/g, '');
    };
});

syncthing.directive('optionEditor', function () {
    return {
        restrict: 'C',
        replace: true,
        transclude: true,
        scope: {
            setting: '=setting',
        },
        template: '<input type="text" ng-model="config.Options[setting.id]"></input>',
    };
});

syncthing.directive('uniqueRepo', function() {
    return {
        require: 'ngModel',
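The debounce hunk above changes the trailing-edge handling so that a call arriving while the timer is pending results in exactly one extra invocation after the wait. Here is a Go rendering of that leading-plus-trailing behaviour, for illustration only; the GUI's actual implementation is the JavaScript debounce above.

    // Sketch (not in the diff): debounce semantics as used by
    // refreshNodeStats above. The first call runs immediately; calls
    // arriving while the timer is pending coalesce into one trailing run.
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func debounce(wait time.Duration, fn func()) func() {
    	var mu sync.Mutex
    	var pending *time.Timer
    	var again bool

    	var expire func()
    	expire = func() {
    		mu.Lock()
    		defer mu.Unlock()
    		if again {
    			again = false
    			fn() // the one trailing run
    			pending = time.AfterFunc(wait, expire)
    		} else {
    			pending = nil
    		}
    	}

    	return func() {
    		mu.Lock()
    		defer mu.Unlock()
    		if pending == nil {
    			fn() // leading edge: run immediately
    			pending = time.AfterFunc(wait, expire)
    		} else {
    			again = true // coalesce into a single trailing run
    		}
    	}
    }

    func main() {
    	f := debounce(100*time.Millisecond, func() { fmt.Println("refresh") })
    	f()
    	f()
    	f() // prints "refresh" twice: once now, once after the wait
    	time.Sleep(300 * time.Millisecond)
    }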
BIN	gui/font/raleway-500.woff	Normal file (binary file not shown)
6	gui/font/raleway.css	Normal file
@@ -0,0 +1,6 @@
@font-face {
    font-family: 'Raleway';
    font-style: normal;
    font-weight: 500;
    src: local('Raleway'), url(raleway-500.woff) format('woff');
}
[Three binary image files, not shown; sizes unchanged before/after: 6.4 KiB, 47 KiB, 12 KiB]
186	gui/index.html
@@ -11,86 +11,12 @@
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="description" content="">
    <meta name="author" content="">
    <link rel="shortcut icon" href="favicon.png">
    <link rel="shortcut icon" href="img/favicon.png">

    <title>Syncthing | {{thisNodeName()}}</title>
    <link href="bootstrap/css/bootstrap.min.css" rel="stylesheet">
    <link href="raleway.css" rel="stylesheet">
    <style type="text/css">
        body {
            padding-bottom: 70px;
            font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
        }

        h1, h2, h3, h4, h5 {
            font-family: "Raleway", "Helvetica Neue", Helvetica, Arial, sans-serif;
        }

        ul+h5 {
            margin-top: 1.5em;
        }

        .text-monospace {
            font-family: Menlo, Monaco, Consolas, "Courier New", monospace;
        }

        .table-condensed>thead>tr>th, .table-condensed>tbody>tr>th, .table-condensed>tfoot>tr>th, .table-condensed>thead>tr>td, .table-condensed>tbody>tr>td, .table-condensed>tfoot>tr>td {
            border-top: none;
        }

        .logo {
            margin: 0;
            padding: 0;
            top: -5px;
            position: relative;
        }

        .list-no-bullet {
            list-style-type: none
        }

        .li-column {
            display: inline-block;
            min-width: 7em;
            margin-right: 1em;
            background-color: rgb(236, 240, 241);
            border-radius: 3px;
            padding: 1px 4px;
            margin: 2px 2px;
        }
        .li-column span.data {
            margin-left: 0.5em;
            min-width: 10em;
            text-align: right;
            display: inline-block;
        }

        .ng-cloak {
            display: none !important;
        }

        .table th {
            white-space: nowrap;
            font-weight: 400;
        }

        .table td {
            padding-left: 20px !important;
        }

        .table td.small-data {
            white-space: nowrap;
        }

        @media (max-width:767px) {
            .table-responsive>.table>tbody>tr>td {
                /* revert a bootstrap setting e.g.:
                 * for mobile phones to allow linebreaks in long repro folder/shared with
                 * columns. */
                white-space: normal;
            }
        }
    </style>
    <link href="font/raleway.css" rel="stylesheet">
    <link href="overrides.css" rel="stylesheet">
</head>

<body>
@@ -100,7 +26,7 @@

<nav class="navbar navbar-top navbar-default" role="navigation">
    <div class="container">
        <span class="navbar-brand"><img class="logo" src="logo-text-64.png" height="32" width="117"/></span>
        <span class="navbar-brand"><img class="logo" src="img/logo-text-64.png" height="32" width="117"/></span>
        <p class="navbar-text hidden-xs">{{thisNodeName()}}</p>
        <ul class="nav navbar-nav navbar-right">
            <li ng-if="upgradeInfo.newer">
@@ -178,7 +104,6 @@
        </div>
        <div id="repo-{{$index}}" class="panel-collapse collapse">
            <div class="panel-body">
                <div class="table-responsive">
                    <table class="table table-condensed table-striped">
                        <tbody>
                            <tr>
@@ -195,16 +120,16 @@
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-globe"></span> <span translate>Global Repository</span></th>
                                <td class="text-right">{{model[repo.ID].globalFiles | alwaysNumber}} <span translate>items</span>, {{model[repo.ID].globalBytes | binary}}B</td>
                                <td class="text-right">{{model[repo.ID].globalFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].globalBytes | binary}}B</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-home"></span> <span translate>Local Repository</span></th>
                                <td class="text-right">{{model[repo.ID].localFiles | alwaysNumber}} <span translate>items</span>, {{model[repo.ID].localBytes | binary}}B</td>
                                <td class="text-right">{{model[repo.ID].localFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].localBytes | binary}}B</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-cloud-download"></span> <span translate>Out Of Sync</span></th>
                                <td class="text-right">
                                    <a ng-if="model[repo.ID].needFiles > 0" ng-click="showNeed(repo.ID)" href="">{{model[repo.ID].needFiles | alwaysNumber}} <span translate>items</span>, {{model[repo.ID].needBytes | binary}}B</a>
                                    <a ng-if="model[repo.ID].needFiles > 0" ng-click="showNeed(repo.ID)" href="">{{model[repo.ID].needFiles | alwaysNumber}} <span translate>items</span>, ~{{model[repo.ID].needBytes | binary}}B</a>
                                    <span ng-if="model[repo.ID].needFiles == 0">0 <span translate>items</span>, 0 B</span>
                                </td>
                            </tr>
@@ -222,14 +147,18 @@
                                    <span translate ng-if="!repo.IgnorePerms">No</span>
                                </td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-refresh"></span> <span translate>Rescan Interval</span></th>
                                <td class="text-right">{{repo.RescanIntervalS}} s</td>
                            </tr>
                            <tr>
                                <th><span class="glyphicon glyphicon-share-alt"></span> <span translate>Shared With</span></th>
                                <td class="text-right">{{sharesRepo(repo)}}</td>
                            </tr>
                        </tbody>
                    </table>
                </div>
                <span class="pull-right">
                    <a class="btn btn-sm btn-default" href="" ng-show="repoStatus(repo.ID) == 'idle'" ng-click="rescanRepo(repo.ID)"><span class="glyphicon glyphicon-refresh"></span> <span translate>Rescan</span></a>
                    <a class="btn btn-sm btn-primary" href="" ng-click="editRepo(repo)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a>
                    <a class="btn btn-sm btn-danger" ng-if="repo.ReadOnly && model[repo.ID].needFiles > 0" ng-click="override(repo.ID)" href=""><span class="glyphicon glyphicon-upload"></span> <span translate>Override Changes</span></a>
                </span>
@@ -251,7 +180,6 @@
        </div>
        <div id="node-this" class="panel-collapse collapse in">
            <div class="panel-body">
                <div class="table-responsive">
                    <table class="table table-condensed table-striped">
                        <tbody>
                            <tr>
@@ -283,7 +211,6 @@
                            </tr>
                        </tbody>
                    </table>
                </div>
                <span class="pull-right"><a class="btn btn-sm btn-primary" href="" ng-click="editNode(nodeCfg)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a></span>
            </div>
        </div>
@@ -309,7 +236,6 @@
        </div>
        <div id="node-{{$index}}" class="panel-collapse collapse">
            <div class="panel-body">
                <div class="table-responsive">
                    <table class="table table-condensed table-striped">
                        <tbody>
                            <tr>
@@ -337,9 +263,13 @@
                                <th><span class="glyphicon glyphicon-tag"></span> <span translate>Version</span></th>
                                <td class="text-right">{{nodeVer(nodeCfg)}}</td>
                            </tr>
                            <tr ng-if="!connections[nodeCfg.NodeID]">
                                <th><span class="glyphicon glyphicon-eye-open"></span> <span translate>Last seen</span></th>
                                <td translate ng-if="stats[nodeCfg.NodeID].LastSeen.indexOf('1970') > -1" class="text-right">Never</td>
                                <td ng-if="stats[nodeCfg.NodeID].LastSeen.indexOf('1970') < 0" class="text-right">{{stats[nodeCfg.NodeID].LastSeen | date:"yyyy-MM-dd HH:mm"}}</td>
                            </tr>
                        </tbody>
                    </table>
                </div>
                <span class="pull-right"><a class="btn btn-sm btn-primary" href="" ng-click="editNode(nodeCfg)"><span class="glyphicon glyphicon-pencil"></span> <span translate>Edit</span></a></span>
            </div>
        </div>
@@ -395,6 +325,8 @@
    <p><span translate>Syncthing is restarting.</span> <span translate>Please wait</span>...</p>
</modal>

<!-- Upgrading modal -->

<modal id="upgrading" icon="refresh" title="{{'Upgrading' | translate}}" status="info">
    <p><span translate>Syncthing is upgrading.</span> <span translate>Please wait</span>...</p>
</modal>
@@ -414,7 +346,7 @@

<!-- Node editor modal -->

<div id="editNode" class="modal fade">
<div id="editNode" class="modal fade" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <div class="modal-header">
@@ -437,7 +369,8 @@
                <div class="form-group">
                    <label translate for="name">Node Name</label>
                    <input placeholder="Home Server" id="name" class="form-control" type="text" ng-model="currentNode.Name"></input>
                    <p translate class="help-block">Shown instead of Node ID in the cluster status.</p>
                    <p translate ng-if="currentNode.NodeID == myID" class="help-block">Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.</p>
                    <p translate ng-if="currentNode.NodeID != myID" class="help-block">Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.</p>
                </div>
                <div class="form-group">
                    <label translate for="addresses">Addresses</label>
@@ -464,7 +397,7 @@

<!-- Repo editor modal -->

<div id="editRepo" class="modal fade">
<div id="editRepo" class="modal fade" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <div class="modal-header">
@@ -487,12 +420,19 @@
                </div>
                <div class="form-group" ng-class="{'has-error': repoEditor.repoPath.$invalid && repoEditor.repoPath.$dirty}">
                    <label translate for="repoPath">Repository Path</label>
                    <input name="repoPath" placeholder="~/Documents" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
                    <input name="repoPath" placeholder="~/Documents" ng-disabled="editingExisting" id="repoPath" class="form-control" type="text" ng-model="currentRepo.Directory" required></input>
                    <p class="help-block">
                        <span translate ng-if="repoEditor.repoPath.$valid || repoEditor.repoPath.$pristine">Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for</span> <code>{{system.tilde}}</code>.
                        <span translate ng-if="repoEditor.repoPath.$error.required && repoEditor.repoPath.$dirty">The repository path cannot be blank.</span>
                    </p>
                </div>
                <div class="form-group" ng-class="{'has-error': repoEditor.rescanIntervalS.$invalid && repoEditor.rescanIntervalS.$dirty}">
                    <label for="rescanIntervalS"><span translate>Rescan Interval</span> (s)</label>
                    <input name="rescanIntervalS" placeholder="60" id="rescanIntervalS" class="form-control" type="number" ng-model="currentRepo.RescanIntervalS" required min="5"></input>
                    <p class="help-block">
                        <span translate ng-if="!repoEditor.rescanIntervalS.$valid && repoEditor.rescanIntervalS.$dirty">The rescan interval must be at least 5 seconds.</span>
                    </p>
                </div>
            </div>
        </div>
        <div class="row">
@@ -525,14 +465,25 @@
            </div>
            <div class="col-md-6">
                <div class="form-group">
                    <div class="checkbox">
                    <label translate>File Versioning</label>
                    <div class="radio">
                        <label>
                            <input type="checkbox" ng-model="currentRepo.simpleFileVersioning"> <span translate>File Versioning</span>
                            <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="none"> <span translate>No File Versioning</span>
                        </label>
                    </div>
                    <div class="radio">
                        <label>
                            <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="simple"> <span translate>Simple File Versioning</span>
                        </label>
                    </div>
                    <div class="radio">
                        <label>
                            <input type="radio" ng-model="currentRepo.FileVersioningSelector" value="staggered"> <span translate>Staggered File Versioning</span>
                        </label>
                    </div>
                    <p translate class="help-block">Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</p>
                </div>
                <div class="form-group" ng-if="currentRepo.simpleFileVersioning" ng-class="{'has-error': repoEditor.simpleKeep.$invalid && repoEditor.simpleKeep.$dirty}">
                <div class="form-group" ng-if="currentRepo.FileVersioningSelector=='simple'" ng-class="{'has-error': repoEditor.simpleKeep.$invalid && repoEditor.simpleKeep.$dirty}">
                    <p translate class="help-block">Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</p>
                    <label translate for="simpleKeep">Keep Versions</label>
                    <input name="simpleKeep" id="simpleKeep" class="form-control" type="number" ng-model="currentRepo.simpleKeep" required min="1"></input>
                    <p class="help-block">
@@ -541,7 +492,21 @@
                        <span translate ng-if="repoEditor.simpleKeep.$error.min && repoEditor.simpleKeep.$dirty">You must keep at least one version.</span>
                    </p>
                </div>

                <div class="form-group" ng-if="currentRepo.FileVersioningSelector=='staggered'" ng-class="{'has-error': repoEditor.staggeredMaxAge.$invalid && repoEditor.staggeredMaxAge.$dirty}">
                    <p class="help-block"><span translate>Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.</span> <span translate>Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.</span></p>
                    <p translate class="help-block">The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.</p>
                    <label translate for="staggeredMaxAge">Maximum Age</label>
                    <input name="staggeredMaxAge" id="staggeredMaxAge" class="form-control" type="number" ng-model="currentRepo.staggeredMaxAge" required></input>
                    <p class="help-block">
                        <span translate ng-if="repoEditor.staggeredMaxAge.$valid || repoEditor.staggeredMaxAge.$pristine">The maximum time to keep a version (in days, set to 0 to keep versions forever).</span>
                        <span translate ng-if="repoEditor.staggeredMaxAge.$error.required && repoEditor.staggeredMaxAge.$dirty">The maximum age must be a number and cannot be blank.</span>
                    </p>
                </div>
                <div class="form-group" ng-if="currentRepo.FileVersioningSelector == 'staggered'">
                    <label translate for="staggeredVersionsPath">Versions Path</label>
                    <input name="staggeredVersionsPath" placeholder="" id="staggeredVersionsPath" class="form-control" type="text" ng-model="currentRepo.staggeredVersionsPath"></input>
                    <p translate class="help-block">Path where versions should be stored (leave empty for the default .stversions folder in the repository).</p>
                </div>
            </div>
        </div>
    </form>
@@ -558,7 +523,7 @@

<!-- Settings modal -->

<div id="settings" class="modal fade">
<div id="settings" class="modal fade" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <div class="modal-header">
@@ -577,10 +542,6 @@
                    <label translate for="MaxSendKbps">Outgoing Rate Limit (KiB/s)</label>
                    <input id="MaxSendKbps" class="form-control" type="number" ng-model="tmpOptions.MaxSendKbps">
                </div>
                <div class="form-group">
                    <label translate for="RescanIntervalS">Rescan Interval (s)</label>
                    <input id="RescanIntervalS" class="form-control" type="number" ng-model="tmpOptions.RescanIntervalS">
                </div>
                <!--
                <div class="form-group">
                    <label translate for="ReconnectIntervalS">Reconnect Interval (s)</label>
@@ -610,10 +571,6 @@
                    </label>
                </div>
            </div>
            <div class="form-group">
                <label translate for="LocalAnnPort">Local Discovery Port</label>
                <input ng-disabled="!tmpOptions.LocalAnnEnabled" id="LocalAnnPort" class="form-control" type="number" ng-model="tmpOptions.LocalAnnPort">
            </div>
            <div class="form-group">
                <div class="checkbox">
                    <label>
@@ -684,7 +641,7 @@

<!-- Usage report modal -->

<div id="ur" class="modal fade">
<div id="ur" class="modal fade" data-backdrop="static" data-keyboard="false" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <div class="modal-header alert alert-success">
@@ -719,7 +676,7 @@
<!-- About modal -->

<modal id="about" large="yes" close="yes" status="info" title="About">
    <h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
    <h1 class="text-center"><img alt="Syncthing" title="Syncthing" src="img/logo-text-256.png" style="vertical-align: -16px" height="100" width="366"/><br/><small>{{version}}</small></h1>
    <hr/>

    <p translate>Copyright © 2014 Jakob Borg and the following Contributors:</p>
@@ -728,6 +685,7 @@
    <ul>
        <li>Aaron Bieber</li>
        <li>Andrew Dunham</li>
        <li>Alexander Graf</li>
        <li>Arthur Axel fREW Schmidt</li>
        <li>Audrius Butkevicius</li>
        <li>Ben Sidhom</li>
@@ -739,6 +697,8 @@
    <ul>
        <li>James Patterson</li>
        <li>Jens Diemer</li>
        <li>Marcin Dziadus</li>
        <li>Michael Tilli</li>
        <li>Philippe Schommers</li>
        <li>Ryan Sullivan</li>
        <li>Tully Robinson</li>
@@ -750,7 +710,7 @@

    <p translate>Syncthing includes the following software or portions thereof:</p>
    <ul>
        <li><a href="http://golang.org/">The Go Programming Languange</a>, Copyright © 2012 The Go Authors.</li>
        <li><a href="http://golang.org/">The Go Programming Language</a>, Copyright © 2012 The Go Authors.</li>
        <li><a href="https://bitbucket.org/kardianos/osext">kardianos/osext</a>, Copyright © 2012 Daniel Theophanes.</li>
        <li><a href="https://code.google.com/p/snappy-go/">snappy-go</a>, Copyright © 2011 The Snappy-Go Authors.</li>
        <li><a href="https://github.com/golang/groupcache">groupcache/lru</a>, Copyright © 2013 Google Inc.</li>
@@ -763,12 +723,12 @@
</modal>

<script src="angular.min.js"></script>
<script src="angular-translate.min.js"></script>
<script src="angular-translate-loader.min.js"></script>
<script src="jquery-2.0.3.min.js"></script>
<script src="angular/angular.min.js"></script>
<script src="angular/angular-translate.min.js"></script>
<script src="angular/angular-translate-loader.min.js"></script>
<script src="jquery/jquery-2.0.3.min.js"></script>
<script src="bootstrap/js/bootstrap.min.js"></script>
<script src="valid-langs.js"></script>
<script src="lang/valid-langs.js"></script>
<script src="app.js"></script>
</body>
</html>
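The staggered versioning help text in the repo editor above fully specifies the pruning schedule. As a worked example, here is that schedule in illustrative Go; this is not syncthing's versioner code, just the intervals from the help text made concrete.

    // Sketch (not in the diff): the staggered-versioning schedule from the
    // editor help text (30 s / 1 h / 1 d / 1 w buckets up to the maximum age).
    package main

    import (
    	"fmt"
    	"time"
    )

    // interval returns how far apart kept versions are, given a version's age.
    func interval(age time.Duration) time.Duration {
    	switch {
    	case age < time.Hour:
    		return 30 * time.Second // first hour: a version every 30 seconds
    	case age < 24*time.Hour:
    		return time.Hour // first day: a version every hour
    	case age < 30*24*time.Hour:
    		return 24 * time.Hour // first 30 days: a version every day
    	default:
    		return 7 * 24 * time.Hour // thereafter: every week, until maxAge
    	}
    }

    func main() {
    	for _, age := range []time.Duration{
    		10 * time.Minute,
    		5 * time.Hour,
    		10 * 24 * time.Hour,
    		100 * 24 * time.Hour,
    	} {
    		fmt.Println(age, "->", interval(age))
    	}
    }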
7	gui/lang/README-FIRST.txt	Normal file
@@ -0,0 +1,7 @@
All files in this directory are auto-generated. Do not change any of
them. To contribute translations, please head over to

https://www.transifex.com/projects/p/syncthing/

Any updates made on Transifex will be automatically pulled into these
files.
137	gui/lang/lang-bg.json	Normal file
@@ -0,0 +1,137 @@
|
||||
{
|
||||
"API Key": "API Ключ",
|
||||
"About": "За Програмата",
|
||||
"Add Node": "Добави Машина",
|
||||
"Add Repository": "Добави Папка",
|
||||
"Address": "Адрес",
|
||||
"Addresses": "Адреси",
|
||||
"Allow Anonymous Usage Reporting?": "Разреши анонимен доклад за ползване на програмата?",
|
||||
"Announce Server": "Announce Server",
|
||||
"Anonymous Usage Reporting": "Анонимен Доклад",
|
||||
"Bugs": "Бъгове",
|
||||
"CPU Utilization": "Натоварване на Процесора",
|
||||
"Close": "Затвори",
|
||||
"Connection Error": "Грешка при Свързването",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Правата запазени © 2014 Jakob Borg и следните Сътрудници:",
|
||||
"Delete": "Изтрий",
|
||||
"Disconnected": "Прекрати Връзката",
|
||||
"Documentation": "Документация",
|
||||
"Download Rate": "Скорост на Теглене",
|
||||
"Edit": "Промени",
|
||||
"Edit Node": "Промени Машината",
|
||||
"Edit Repository": "Промени Папката",
|
||||
"Enable UPnP": "Включи UPnP",
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Въведи \"ip:port\" адреси разделени със запетая или \"dynamic\", за да извършиш автоматична връзка на адреси.",
|
||||
"Error": "Грешка",
|
||||
"File Versioning": "Файлови Версии",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
|
||||
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други машини, но промени направени на тази машина ще бъдат синхронизирани до другите машини.",
|
||||
"Folder": "Папка",
|
||||
"GUI Authentication Password": "Парола за Потребителския Интерфейс",
|
||||
"GUI Authentication User": "Потребител за Потребителския Интерфейс",
|
||||
"GUI Listen Addresses": "Адрес за Свързване с Потребителския Интерфейс",
|
||||
"Generate": "Генерирай",
|
||||
"Global Discovery": "Глобавно Откриване",
|
||||
"Global Discovery Server": "Сървър за Глобално Откриване",
|
||||
"Global Repository": "Глобална Папка",
|
||||
"Idle": "Без Работа",
|
||||
"Ignore Permissions": "Игнорирай Права за Достъп",
|
||||
"Keep Versions": "Пази Версии",
|
||||
"Last seen": "Last seen",
|
||||
"Latest Release": "Най-новата Версия",
|
||||
"Local Discovery": "Локално Откриване",
|
||||
"Local Discovery Port": "Порт за Локално Откриване",
|
||||
"Local Repository": "Локална Папка",
|
||||
"Master Repo": "Главна Папка",
|
||||
"Max File Change Rate (KiB/s)": "Макс. Скорост на Промяна (KiB/s)",
|
||||
"Max Outstanding Requests": "Макс. Неизпълени Заявки",
|
||||
"Maximum Age": "Maximum Age",
|
||||
"Never": "Never",
|
||||
"No": "Не",
|
||||
"No File Versioning": "No File Versioning",
|
||||
"Node ID": "Код на Машината",
|
||||
"Node Identification": "Идентификация на Машината",
|
||||
"Node Name": "Име на Машината",
|
||||
"Notice": "Известие",
|
||||
"OK": "ОК",
|
||||
"Offline": "Не е на линия",
|
||||
"Online": "На линия",
|
||||
"Out Of Sync": "Не Синхронизиран",
|
||||
"Outgoing Rate Limit (KiB/s)": "Лимит на Изходящата Скорост (KiB/s)",
|
||||
"Override Changes": "Замени Промените",
|
||||
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Пътят до папката на този компютър. Ще бъде създадена ако не съществува. Символът тилда (~) може да бъде използван като заместител на",
|
||||
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
|
||||
"Please wait": "Моля изчакай",
|
||||
"Preview Usage Report": "Разгледай Доклада за Използване",
|
||||
"RAM Utilization": "RAM Натоварване",
|
||||
"Reconnect Interval (s)": "Интервал(и) на Свързване",
|
||||
"Repository ID": "Идентификатор на Папката",
|
||||
"Repository Master": "Главна Папка",
|
||||
"Repository Path": "Път до Папката",
|
||||
"Rescan": "Повторно Сканиране",
|
||||
"Rescan Interval": "Rescan Interval",
|
||||
"Rescan Interval (s)": "Интеравал(и) на Сканиране",
|
||||
"Restart": "Рестартирай",
|
||||
"Restart Needed": "Изискава се Рестартиране",
|
||||
"Restarting": "Рестартиране",
|
||||
"Save": "Запази",
|
||||
"Scanning": "Сканиране",
|
||||
"Select the nodes to share this repository with.": "Избери компютрите, с които да споделиш тази папка.",
|
||||
"Settings": "Настройки",
|
||||
"Share With Nodes": "Сподели с Компютри",
|
||||
"Shared With": "Сподел С",
|
||||
"Short identifier for the repository. Must be the same on all cluster nodes.": "Кратък идентификатор на папката. Трябва да бъде същият на всички компютри.",
|
||||
"Show ID": "Покажи Идентификатора",
|
||||
"Shown instead of Node ID in the cluster status.": "Покажи вмест ID-то на Компютъра в статус на клъстъра.",
|
||||
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Покажи вмест ID-то на Компютъра в статус на клъстъра. Ще бъде предлагано на други комютри като име по подразбиране.",
|
||||
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Покажи вмест ID-то на Компютъра в статус на клъстъра. Ще бъде обновено с името по подразбиране изпратено от другия компютър.",
|
||||
"Shutdown": "Спри Програмата",
|
||||
"Simple File Versioning": "Simple File Versioning",
|
||||
"Source Code": "Сорс Код",
|
||||
"Staggered File Versioning": "Staggered File Versioning",
|
||||
"Start Browser": "Стартирай Браузъра",
|
||||
"Stopped": "Спряна",
|
||||
"Support / Forum": "Помощ / Форум",
|
||||
"Sync Protocol Listen Addresses": "Адрес за слушане на синхронизиращия протокол",
|
||||
"Synchronization": "Синхронизация",
|
||||
"Syncing": "Синхронизиране",
|
||||
"Syncthing has been shut down.": "Syncthing е спрян.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing включва следният софтуер пълно или частично:",
|
||||
"Syncthing is restarting.": "Syncthing се рестартирва",
|
||||
"Syncthing is upgrading.": "Syncthing се обновява.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing изглежда не е включен, или има проблем с интерент връзката. Повторен опит...",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Сумарната статистика е публично достъпна на {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Конфигурацията е запазена, но не е активирана. Syncthing трябва да рестартира, за да се активира новата конфигурация.",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Криптираният доклад се изпраща дневно. Използва се, за да следи общи платформи, размери на папки и версии на приложението. Ако събираните данни се променят, ще бъдете информиран с подобен на този диалог.",
|
||||
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведни код на машината не е валиден. Трябва да бъде 52 символа и да се състои от букви, цифри като интервалите и тиретата са пожелание.",
|
||||
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведни код на машината не е валиден. Трябва да бъде 52 или 56 символа и да се състои от букви, цифри като интервалите и тиретата са пожелание.",
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
|
||||
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
|
||||
"The node ID cannot be blank.": "Кодът на машината не може да бъде празен.",
|
||||
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Кодът на машината, който си въвел може да бъде намерен в \"Промени > Покажи Идентификатора\". Интервалите и тиретата са пожелание(биват прескачани).",
|
||||
"The number of old versions to keep, per file.": "Броят стари версии, които да бъдат пазени за всеки файл.",
|
||||
"The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
|
||||
"The repository ID cannot be blank.": "Полето идентификатор на папка не може д абъде празно.",
|
||||
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "Идентификаторът на папка трябва да бъде къс(64 символа или по-малко) състоящ се само от букви, цифри, точка(.), тире(-) и подчерта (_).",
|
||||
"The repository ID must be unique.": "Идентификаторът на папката тряба да бъде уникален.",
|
||||
"The repository path cannot be blank.": "Пътят до папката не може да бъде празен.",
|
||||
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
|
||||
"Unknown": "Неясен",
|
||||
"Up to Date": "Актуален",
|
||||
"Upgrade To {%version%}": "Обновен До {{version}}",
|
||||
"Upgrading": "Обновяване",
|
||||
"Upload Rate": "Скорост на Качване",
|
||||
"Usage": "Употреба",
|
||||
"Use Compression": "Използвай Компресиране",
|
||||
"Use HTTPS for GUI": "Използвай HTTPS за Потребителския Интерфейс",
|
||||
"Version": "Версия",
|
||||
"Versions Path": "Versions Path",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
|
||||
"When adding a new node, keep in mind that this node must be added on the other side too.": "Когато добавяш нова машина помни, че твоята машина също трябва да бъде добавена от другата страна.",
|
||||
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Когато добавяш нов идентификатор на папка помни, че той се използва за свързване на папките на различни машини. Главни/малки букви са от значение и трябва да са еднакви на всички машини.",
|
||||
"Yes": "Да",
|
||||
"You must keep at least one version.": "Трябва да пазиш поне една версия.",
|
||||
"items": "артикула"
|
||||
}
137 gui/lang/lang-ca.json Normal file
@@ -0,0 +1,137 @@
{
"API Key": "Clau API",
"About": "Sobre",
"Add Node": "Afegir Node",
"Add Repository": "Afegir Repositori",
"Address": "Adreça",
"Addresses": "Adreces",
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
"Announce Server": "Servidor d'anunciament",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Bugs": "Bugs",
"CPU Utilization": "Utilització de la CPU",
"Close": "Tancar",
"Connection Error": "Error de connexió",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg i els següents contribuïdors:",
"Delete": "Esborrar",
"Disconnected": "Desconnectat",
"Documentation": "Documentació",
"Download Rate": "Taxa de descàrrega",
"Edit": "Editar",
"Edit Node": "Editar Node",
"Edit Repository": "Editar Repositori",
"Enable UPnP": "Habilitar UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
"Error": "Error",
"File Versioning": "Versionat de Fitxers",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Els bits de permisos dels fitxers són ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan són substituïts o esborrats per syncthing.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres nodes, però els canvis fets en aquest node seran enviats a la resta del cluster.",
"Folder": "Carpeta",
"GUI Authentication Password": "Contrasenya d'autenticació GUI",
"GUI Authentication User": "Usuari d'autenticació GUI",
"GUI Listen Addresses": "Adreces d'escolta del GUI",
"Generate": "Generar",
"Global Discovery": "Descobriment Global",
"Global Discovery Server": "Servidor de Descobriment Global",
"Global Repository": "Repositori Global",
"Idle": "Inactiu",
"Ignore Permissions": "Ignora Permisos",
"Keep Versions": "Mantenir Versions",
"Last seen": "Vist per última vegada",
"Latest Release": "Última publicació",
"Local Discovery": "Descobriment Local",
"Local Discovery Port": "Port de Descobriment Local",
"Local Repository": "Repositori Local",
"Master Repo": "Rep Master",
"Max File Change Rate (KiB/s)": "Taxa Màxima d'intercanvi de fitxer (KiB/s)",
"Max Outstanding Requests": "Màxim de Peticions Pendents",
"Maximum Age": "Antiguitat Màxima",
"Never": "Mai",
"No": "No",
"No File Versioning": "Sense Versionat de Fitxer",
"Node ID": "ID del Node",
"Node Identification": "Identificació del Node",
"Node Name": "Nom del Node",
"Notice": "Avís",
"OK": "OK",
"Offline": "Desconnectat",
"Online": "Connectat",
"Out Of Sync": "Fora de la Sincronització",
"Outgoing Rate Limit (KiB/s)": "Taxa Límit de Sortida (KiB/s)",
"Override Changes": "Sobreescriure Canvis",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta del repositori a l'equip local. Si no existeix serà creada. El caràcter titlla (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Ruta on les versions s'haurien de guardar (deixa-ho buit per fer servir el directori .stversions per defecte al repositori).",
"Please wait": "Si us plau, espera",
"Preview Usage Report": "Vista Prèvia de l'Informe d'Ús",
"RAM Utilization": "Utilització de la RAM",
"Reconnect Interval (s)": "Interval de Reconnexió (s)",
"Repository ID": "ID del Repositori",
"Repository Master": "Repositori Mestre",
"Repository Path": "Ruta del Repositori",
"Rescan": "Re-escanejar",
"Rescan Interval": "Interval de re-escaneig",
"Rescan Interval (s)": "Interval de re-escaneig (s)",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
"Restarting": "Reiniciant",
"Save": "Guardar",
"Scanning": "Escanejant",
"Select the nodes to share this repository with.": "Seleccionar els nodes amb els que es comparteix el repositori.",
"Settings": "Preferències",
"Share With Nodes": "Compartir Amb Els Nodes",
"Shared With": "Compartit Amb",
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identificador curt pel repositori. Ha de ser el mateix per tots els nodes del cluster.",
"Show ID": "Mostrar ID",
"Shown instead of Node ID in the cluster status.": "Mostrat en comptes de l'ID del Node en l'estat del cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Mostrat en comptes de l'ID del Node en l'estat del cluster. Serà anunciat als altres nodes com un nom opcional per defecte.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Mostrat en comptes de l'ID del Node en l'estat del cluster. S'actualitzarà al nom que anuncia el node si es deixa buit.",
"Shutdown": "Apagar",
"Simple File Versioning": "Versionat de Fitxers Senzill",
"Source Code": "Codi Font",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Stopped": "Aturat",
"Support / Forum": "Suport / Fòrum",
"Sync Protocol Listen Addresses": "Adreces d'escolta del Protocol Sync",
"Synchronization": "Sincronització",
"Syncing": "Sincronitzant",
"Syncthing has been shut down.": "S'ha aturat el Syncthing.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts del mateix:",
"Syncthing is restarting.": "Reiniciant Syncthing.",
"Syncthing is upgrading.": "Actualitzant Syncthing.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant...",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardat però no s'ha activat. S'ha de reiniciar el Syncthing per activar la nova configuració.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de repositoris i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID del Node introduït no sembla vàlid. Hauria de tenir 52 caràcters amb lletres i números; els espais i els guions són opcionals.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID del Node introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números; els espais i els guions són opcionals.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es fan servir els següents intervals: per la primera hora es manté una versió cada 30 segons, pel primer dia es manté una versió cada hora, pels primers 30 dies es manté una versió cada dia, fins el màxim d'antiguitat es manté una versió cada setmana.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The node ID cannot be blank.": "L'ID del node no pot estar en blanc.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "L'ID del node per introduir aquí es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre node. Els espais i els guions són opcionals (s'ignoren).",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The repository ID cannot be blank.": "L'ID del repositori no pot estar en blanc.",
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "L'ID del repositori ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), el guió (-) i el guió baix (_).",
"The repository ID must be unique.": "L'ID del repositori ha de ser únic.",
"The repository path cannot be blank.": "La carpeta del repositori no pot estar en blanc.",
"The rescan interval must be at least 5 seconds.": "L'interval de re-escaneig ha de ser com a mínim de 5 segons.",
"Unknown": "Desconegut",
"Up to Date": "Actualitzat",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Taxa de Pujada",
"Usage": "Ús",
"Use Compression": "Utilitza compressió",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Version": "Versió",
"Versions Path": "Carpeta de les Versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions són automàticament eliminades si són més antigues que el màxim d'antiguitat o si excedeixen del nombre de fitxers permesos en un interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Quan s'afegeix un nou node recorda que aquest node s'ha d'afegir també a l'altra banda.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Quan s'afegeix un nou repositori recorda que l'ID del repositori s'utilitza per lligar repositoris entre nodes. Es distingeix entre majúscules i minúscules i han de ser exactament iguals entre tots els nodes.",
"Yes": "Sí",
"You must keep at least one version.": "Has de mantenir com a mínim una versió.",
"items": "Elements"
}
@@ -38,6 +38,7 @@
"Idle": "Inaktiv",
"Ignore Permissions": "Ignorér filrettigheder",
"Keep Versions": "Behold versioner",
"Last seen": "Last seen",
"Latest Release": "Seneste udgivelse",
"Local Discovery": "Lokalt opslag",
"Local Discovery Port": "Lokal opslagsport",
@@ -45,7 +46,10 @@
"Master Repo": "Hovedlagring",
"Max File Change Rate (KiB/s)": "Højeste filændringshastighed (KiB/s)",
"Max Outstanding Requests": "Parallelitet",
"Maximum Age": "Maximum Age",
"Never": "Never",
"No": "Nej",
"No File Versioning": "No File Versioning",
"Node ID": "Node ID",
"Node Identification": "Node identifikation",
"Node Name": "Nodenavn",
@@ -57,6 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Udgående hastighedsbegrænsning (KiB/s)",
"Override Changes": "Overskriv ændringer",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Sti til lagring på din lokale computer. Hvis biblioteket ikke findes vil det blive oprettet. Tegnet tilde (~) kan bruges som genvej til",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Please wait": "Vent venligst",
"Preview Usage Report": "Forhåndsvisning af forbrugsrapport",
"RAM Utilization": "RAM-forbrug",
@@ -64,6 +69,8 @@
"Repository ID": "Lagrings-ID",
"Repository Master": "Hovedlagring",
"Repository Path": "Sti til lagring",
"Rescan": "Rescan",
"Rescan Interval": "Rescan Interval",
"Rescan Interval (s)": "Genscanningsinterval (s)",
"Restart": "Genstart",
"Restart Needed": "Programmet kræver genstart",
@@ -77,8 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Kort identifikation for denne lagring. Skal være ens på alle noder i clusteret.",
"Show ID": "Vis ID",
"Shown instead of Node ID in the cluster status.": "Vises i stedet for node-ID under clusterstatus.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shutdown": "Luk ned",
"Simple File Versioning": "Simple File Versioning",
"Source Code": "Kildekode",
"Staggered File Versioning": "Staggered File Versioning",
"Start Browser": "Start browser",
"Stopped": "Stoppet",
"Support / Forum": "Support / Forum",
@@ -95,6 +106,9 @@
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Den krypterede forbrugsrapport sendes dagligt. Den benyttes til at spore anvendte platforme, lagringsstørrelser og versioner. Hvis typen af opsamlet data ændres på et senere tidspunkt, vil du blive spurgt om tilladelse igen.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Det indtastede node-ID ser ikke gyldigt ud. Det skal være en streng på 52 tegn bestående af tal og bogstaver, eventuelt med mellemrum og bindestreger.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Det indtastede node-ID ser ikke gyldigt ud. Det skal være en streng på 52 eller 56 tegn bestående af tal og bogstaver, eventuelt med mellemrum og bindestreger.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The node ID cannot be blank.": "Node-ID'et kan ikke være blankt.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Node-ID'et som skal bruges her, kan du finde i \"Rediger > Vis ID\"-dialogen på den anden node. Mellemrum og bindestreg er valgfri (ignoreres).",
"The number of old versions to keep, per file.": "Antallet af gamle versioner som gemmes, per fil.",
@@ -103,6 +117,7 @@
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "Lagrings-ID'et skal være en kort identificerende streng (64 karakterer eller mindre) bestående af bogstav-, tal-, punktum- (.), bindestreg- (-) og understregskarakterer (_).",
"The repository ID must be unique.": "Lagrings-ID'et skal være unikt.",
"The repository path cannot be blank.": "Lagringsstien kan ikke være blank.",
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
"Unknown": "Ukendt",
"Up to Date": "Fuldt opdateret",
"Upgrade To {%version%}": "Opgradér til {{version}}",
@@ -112,6 +127,8 @@
"Use Compression": "Anvend komprimering",
"Use HTTPS for GUI": "Anvend HTTPS til GUI-adgang",
"Version": "Version",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Når du tilføjer en ny node skal du huske, at den også skal tilføjes på den anden side.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Når du tilføjer en ny lagring skal du huske, at lagrings-ID'et bliver brugt til at knytte lagringer sammen mellem noder. De er følsomme for store og små bogstaver og skal matche på alle noder.",
"Yes": "Ja",
@@ -1,6 +1,6 @@
{
"API Key": "API-Schlüssel",
"About": "Über",
"API Key": "API-Key",
"About": "Über Syncthing",
"Add Node": "Knoten hinzufügen",
"Add Repository": "Verzeichnis hinzufügen",
"Address": "Adresse",
@@ -17,35 +17,39 @@
"Disconnected": "Verbindung getrennt",
"Documentation": "Dokumentation",
"Download Rate": "Downloadgeschwindigkeit",
"Edit": "Bearbeiten",
"Edit": "Einstellungen bearbeiten",
"Edit Node": "Knoten bearbeiten",
"Edit Repository": "Verzeichnis ändern",
"Enable UPnP": "Aktiviere UPnP",
"Enable UPnP": "UPnP aktivieren",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Trage durch ein Komma getrennte \"IP:Port\" Adressen oder \"dynamic\" ein, um automatische Adresserkennung durchzuführen.",
"Error": "Fehler",
"File Versioning": "Dateiversionierung",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Dateizugriffsrechte beim Suchen nach Veränderungen ignorieren. Bei FAT-Dateisystemen verwenden.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Dateien werden beim Löschen oder Ersetzen als datierte Versionen in einen .stversions -Ordner verschoben.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Dateien werden, bevor syncthing sie löscht oder ersetzt, als datierte Versionen in einen Ordner namens .stversions verschoben.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Dateien sind vor Veränderung durch andere Knoten geschützt, auf diesem Knoten durchgeführte Veränderungen werden aber auf den Rest des Netzwerks übertragen.",
"Folder": "Ordner",
"GUI Authentication Password": "Passwort für Zugang zur Benutzeroberfläche",
"GUI Authentication User": "Nutzername für Zugang zur Benutzeroberfläche",
"GUI Listen Addresses": "Adresse(n) für die Benutzeroberfläche",
"Generate": "Generiere",
"Generate": "Generieren",
"Global Discovery": "Globale Auffindung",
"Global Discovery Server": "Globaler Auffindungsserver",
"Global Repository": "Globales Verzeichnis",
"Idle": "Untätig",
"Ignore Permissions": "Berechtigungen ignorieren",
"Keep Versions": "Versionen erhalten",
"Last seen": "Zuletzt online",
"Latest Release": "Letzte Veröffentlichung",
"Local Discovery": "Lokale Auffindung",
"Local Discovery Port": "Lokaler Auffindungsport",
"Local Repository": "Lokales Verzeichnis",
"Master Repo": "Keine Veränderungen zugelassen",
"Master Repo": "Originalverzeichnis",
"Max File Change Rate (KiB/s)": "Maximale Datenänderungsrate (KiB/s)",
"Max Outstanding Requests": "Max. ausstehende Anfragen",
"Maximum Age": "Höchstalter",
"Never": "Nie",
"No": "Nein",
"No File Versioning": "Keine Dateiversionierung",
"Node ID": "Knoten-ID",
"Node Identification": "Knoten-Identifikation",
"Node Name": "Knoten-Name",
@@ -57,19 +61,22 @@
"Outgoing Rate Limit (KiB/s)": "Ausgehendes Datenratelimit (KiB/s)",
"Override Changes": "Änderungen überschreiben",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Pfad zum Verzeichnis auf dem lokalen Rechner. Wird erzeugt, wenn es nicht existiert. Das Tilden-Zeichen (~) kann als Abkürzung benutzt werden für",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Pfad in dem die Versionen gespeichert werden sollen (ohne Angabe wird der Ordner .stversions im Verzeichnis verwendet).",
"Please wait": "Bitte warten",
"Preview Usage Report": "Vorschau des Nutzungsberichts",
"RAM Utilization": "Verwendeter Arbeitsspeicher",
"Reconnect Interval (s)": "Wiederverbindungsintervall (s)",
"Repository ID": "Verzeichnis-ID",
"Repository Master": "Keine Veränderungen zulassen",
"Repository Master": "Originalverzeichnis",
"Repository Path": "Pfad zum Verzeichnis",
"Rescan": "Überprüfen",
"Rescan Interval": "Suchintervall",
"Rescan Interval (s)": "Suchintervall (s)",
"Restart": "Neustart",
"Restart Needed": "Neustart notwendig",
"Restarting": "Wird neu gestartet",
"Save": "Speichern",
"Scanning": "Überprüfe",
"Scanning": "Sucht",
"Select the nodes to share this repository with.": "Wähle die Knoten aus, mit denen du dieses Verzeichnis teilen willst.",
"Settings": "Einstellungen",
"Share With Nodes": "Teile mit diesen Knoten",
@@ -77,8 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Kurze ID für das Verzeichnis. Muss auf allen Verbunds-Knoten gleich sein.",
"Show ID": "ID anzeigen",
"Shown instead of Node ID in the cluster status.": "Wird anstatt der Knoten-ID im Verbunds-Status angezeigt.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Wird anstatt der Knoten-ID im Verbunds-Status angezeigt. Wird als optionaler Standardname an andere Knoten bekannt gegeben.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Wird anstatt der Knoten-ID im Verbunds-Status angezeigt. Wird auf den Namen aktualisiert, den der Knoten angibt, wenn dieses Feld leer gelassen wird.",
"Shutdown": "Herunterfahren",
"Source Code": "Quellcode",
"Simple File Versioning": "Einfache Dateiversionierung",
"Source Code": "Sourcecode",
"Staggered File Versioning": "Stufenweise Dateiversionierung",
"Start Browser": "Starte Browser",
"Stopped": "Gestoppt",
"Support / Forum": "Support / Forum",
@@ -90,19 +101,23 @@
"Syncthing is restarting.": "Syncthing wird neu gestartet.",
"Syncthing is upgrading.": "Syncthing wird aktualisiert.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing scheint nicht erreichbar zu sein oder es gibt ein Problem mit Ihrer Internetverbindung. Versuche erneut...",
"The aggregated statistics are publicly available at {%url%}.": "Die aggregierten Statistiken sind öffentlich verfügbar unter {{url}}.",
"The aggregated statistics are publicly available at {%url%}.": "Die gesammelten Statistiken sind öffentlich verfügbar unter {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Die Konfiguration wurde gespeichert, aber nicht aktiviert. Syncthing muss neu gestartet werden, um die neue Konfiguration zu aktivieren.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Der verschlüsselte Benutzungsbericht wird täglich gesendet. Er wird benutzt, um Statistiken über verwendete Betriebssysteme, Verzeichnis-Größen und Programm-Versionen zu erstellen. Sobald der Bericht in Zukunft weitere Daten erfasst, wird dir dieses Fenster erneut angezeigt.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Die eingegebene Knoten-ID scheint nicht gültig zu sein. Sie sollte eine 52 Stellen lange Zeichenkette aus Buchstaben und Zahlen sein. Leerzeichen und Striche sind optional (werden ignoriert).",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Die eingegebene Knoten-ID scheint nicht gültig zu sein. Es sollte eine 52 oder 56 stellige Zeichenkette aus Buchstaben und Nummern sein. Leerzeichen und Bindestriche sind optional.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es wird in folgenden Abständen versioniert: in der ersten Stunde wird alle 30 Sekunden eine Version behalten, am ersten Tag eine jede Stunde, in den ersten 30 Tagen eine jeden Tag, danach wird bis zum Höchstalter eine Version pro Woche beibehalten.",
"The maximum age must be a number and cannot be blank.": "Das Höchstalter muss angegeben werden und eine Zahl sein.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Die längste Zeit, die alte Versionen vorgehalten werden (in Tagen, 0 bedeutet, alte Versionen für immer zu behalten).",
"The node ID cannot be blank.": "Die Knoten-ID darf nicht leer sein.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Die hier einzutragende Knoten-ID kann im \"Bearbeiten > Zeige ID\"-Dialog auf dem anderen Knoten gefunden werden. Leerzeichen und Striche sind optional (werden ignoriert).",
"The number of old versions to keep, per file.": "Anzahl der alten Versionen, die von jeder Datei gespeichert werden sollen.",
"The number of versions must be a number and cannot be blank.": "Die Anzahl von Versionen muss eine Zahl sein und darf nicht leer sein.",
"The number of versions must be a number and cannot be blank.": "Die Anzahl von Versionen muss eine Zahl und darf nicht leer sein.",
"The repository ID cannot be blank.": "Die Verzeichnis-ID darf nicht leer sein.",
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "Die Verzeichnis-ID muss eine kurze Kennung (64 Zeichen oder weniger) sein. Sie kann aus Buchstaben, Zahlen und den Punkt- (.), Strich- (-) und Unterstrich- (_) Zeichen bestehen.",
"The repository ID must be unique.": "Die Verzeichnis-ID muss eindeutig sein.",
"The repository path cannot be blank.": "Der Verzeichnis-Pfad kann nicht leer sein.",
"The rescan interval must be at least 5 seconds.": "Das Suchintervall muss mindestens 5 Sekunden betragen.",
"Unknown": "Unbekannt",
"Up to Date": "Aktuell",
"Upgrade To {%version%}": "Upgrade auf {{version}}",
@@ -112,6 +127,8 @@
"Use Compression": "Benutze Komprimierung",
"Use HTTPS for GUI": "Benutze HTTPS für Benutzeroberfläche",
"Version": "Version",
"Versions Path": "Versionierungspfad",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Alte Versionen werden automatisch gelöscht, wenn sie älter als das angegebene Höchstalter sind oder die Höchstzahl der Dateien pro Zeitabschnitt überschritten wird.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Beachte beim Hinzufügen eines neuen Knotens, dass dieser Knoten auch auf der Gegenseite hinzugefügt werden muss.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Beachte beim Hinzufügen eines neuen Verzeichnisses, dass die Verzeichnis-ID dazu verwendet wird, Verzeichnisse zwischen Knoten zu verbinden. Die ID muss also auf allen Knoten gleich sein; Groß- und Kleinschreibung muss dabei beachtet werden.",
"Yes": "Ja",
@@ -1,8 +1,8 @@
{
"API Key": "Κλειδί API",
"About": "Σχετικά",
"Add Node": "Πρόσθεσε Κόμβο",
"Add Repository": "Πρόσθεσε Αποθετήριο",
"Add Node": "Προσθήκη Κόμβου",
"Add Repository": "Προσθήκη Αποθετηρίου",
"Address": "Διεύθυνση",
"Addresses": "Διευθύνσεις",
"Allow Anonymous Usage Reporting?": "Να επιτρέπεται Ανώνυμη Αποστολή Αναφοράς Χρήσης;",
@@ -28,43 +28,50 @@
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Όταν τα αρχεία αντικατασταθούν ή διαγραφούν από το syncthing, μεταφέρονται σε φάκελο .stversions με χρονική σήμανση.",
"Files are protected from changes made on other nodes, but changes made on this node will be sent to the rest of the cluster.": "Τα αρχεία προστατεύονται από αλλαγές που γίνονται σε άλλους κόμβους, αλλά όποιες αλλαγές γίνουν εδώ θα αποσταλούν στο όλο το cluster.",
"Folder": "Κατάλογος",
"GUI Authentication Password": "GUI Authentication Password",
"GUI Authentication User": "GUI Authentication User",
"GUI Listen Addresses": "GUI Listen Addresses",
"GUI Authentication Password": "Κωδικός πιστοποίησης στο GUI",
"GUI Authentication User": "Χρήστης πιστοποίησης στο GUI",
"GUI Listen Addresses": "GUI Listen διευθύνσεις",
"Generate": "Δημιουργία",
"Global Discovery": "Global Discovery",
"Global Discovery Server": "Διακομιστής Ανεύρεσης Κόμβου",
"Global Repository": "Global Repository",
"Idle": "Ανενεργός",
"Ignore Permissions": "Ignore Permissions",
"Keep Versions": "Keep Versions",
"Idle": "Ανενεργό",
"Ignore Permissions": "Αγνόησε Δικαιώματα",
"Keep Versions": "Διατήρησε Εκδόσεις",
"Last seen": "Last seen",
"Latest Release": "Τελευταία Έκδοση",
"Local Discovery": "Local Discovery",
"Local Discovery Port": "Local Discovery Port",
"Local Discovery": "Τοπική Ανεύρεση",
"Local Discovery Port": "Port Τοπικής Ανεύρεσης",
"Local Repository": "Τοπικό Αποθετήριο",
"Master Repo": "Master Repo",
"Max File Change Rate (KiB/s)": "Max File Change Rate (KiB/s)",
"Max Outstanding Requests": "Max Outstanding Requests",
"Maximum Age": "Maximum Age",
"Never": "Never",
"No": "Όχι",
"No File Versioning": "No File Versioning",
"Node ID": "ID Κόμβου",
"Node Identification": "Ταυτοποίηση Κόμβου",
"Node Name": "Όνομα Κόμβου",
"Notice": "Notice",
"OK": "OK",
"Offline": "Ανεργός",
"Offline": "Ανεργό",
"Online": "Ενεργός",
"Out Of Sync": "Εκτός Συγχρονισμού",
"Out Of Sync": "Μη Συγχρονισμένα",
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
"Override Changes": "Override Changes",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Μονοπάτι του αποθετηρίου στον τοπικό υπολογιστή. Σε περίπτωση που δεν υπάρχει, θα δημιουργηθεί. Ο χαρακτήρας tilde (~) μπορεί να χρησιμοποιηθεί σαν συντόμευση για",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Please wait": "Παρακαλώ περιμένετε",
"Preview Usage Report": "Preview Usage Report",
"Preview Usage Report": "Προεπισκόπηση αναφοράς χρήσης",
"RAM Utilization": "Χρήση RAM",
"Reconnect Interval (s)": "Reconnect Interval (s)",
"Reconnect Interval (s)": "Χρονικό διάστημα επανασύνδεσης (s)",
"Repository ID": "ID Αποθετηρίου",
"Repository Master": "Repository Master",
"Repository Path": "Μονοπάτι Αποθετηρίου",
"Rescan Interval (s)": "Rescan Interval (s)",
"Rescan": "Rescan",
"Rescan Interval": "Rescan Interval",
"Rescan Interval (s)": "Χρονικό διάστημα Επανασάρωσης (s)",
"Restart": "Επανεκκίνηση",
"Restart Needed": "Απαιτείται Επανεκκίνηση",
"Restarting": "Επανεκκίνηση",
@@ -77,42 +84,52 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Σύντομη περιγραφή του αποθετηρίου. Θα πρέπει να είναι το ίδιο σε όλους τους κόμβους του cluster.",
"Show ID": "Εμφάνιση ID",
"Shown instead of Node ID in the cluster status.": "Εμφάνιση στη θέση του ID Κόμβου, στην κατάσταση του cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shutdown": "Απενεργοποίηση",
"Simple File Versioning": "Simple File Versioning",
"Source Code": "Πηγαίος Κώδικας",
"Start Browser": "Start Browser",
"Staggered File Versioning": "Staggered File Versioning",
"Start Browser": "Έναρξη Φυλλομετρητή",
"Stopped": "Απενεργοποιημένο",
"Support / Forum": "Υποστήριξη / Forum",
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
"Synchronization": "Συγχρονισμός",
"Syncing": "Συγχρονισμός",
"Syncthing has been shut down.": "Syncthing έχει απενεργοποιηθεί.",
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
"Syncthing has been shut down.": "Το Syncthing έχει απενεργοποιηθεί.",
"Syncthing includes the following software or portions thereof:": "Το Syncthing συμπεριλαμβάνει τα παρακάτω λογισμικά ή μέρη αυτών:",
"Syncthing is restarting.": "Το Syncthing επανεκκινεί.",
"Syncthing is upgrading.": "Το Syncthing αναβαθμίζεται.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Το Syncthing φαίνεται πως είναι απενεργοποιημένο ή υπάρχει πρόβλημα στη σύνδεσή σας στο Internet. Προσπάθεια ξανά…",
"The aggregated statistics are publicly available at {%url%}.": "Τα στατιστικά που έχουν συλλεγεί είναι διαθέσιμα στο κοινό, στο {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Οι ρυθμίσεις έχουν αποθηκευτεί αλλά δεν έχουν ενεργοποιηθεί. Πρέπει να επανεκκινήσετε το Syncthing για να ισχύσουν οι νέες ρυθμίσεις.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Η κρυπτογραφημένη αναφορά χρήσης στέλνεται καθημερινά. Χρησιμοποιείται για ανίχνευση πληροφοριών πλατφόρμας, μεγέθους αποθετηρίων και εκδόσεων της εφαρμογής. Αν τα δεδομένα που αποστέλλονται αλλάξουν, θα πληροφορηθείτε ξανά με αυτό το διάλογο.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "Το ID Κόμβου που έχει εισαχθεί δεν είναι σωστό. Θα πρέπει να είναι αλφαριθμητικό 52 χαρακτήρων που να αποτελείται από γράμματα και αριθμούς, όπου τα κενά και οι παύλες είναι προαιρετικά.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Το ID Κόμβου δεν είναι έγκυρο. Θα πρέπει να είναι αλφαριθμητικό με 52 ή 56 χαρακτήρες και να αποτελείται από γράμματα και αριθμούς, που προαιρετικά χωρίζονται με κενά και παύλες.",
"The node ID cannot be blank.": "The node ID cannot be blank.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The node ID cannot be blank.": "Το ID Κόμβου δε μπορεί να είναι κενό.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "Το ID Κόμβου μπορείτε να βρείτε στο μενού \"Επεξεργασία > Εμφάνιση ID\" του άλλου κόμβου. Κενά και παύλες είναι προαιρετικά (αγνοούνται).",
"The number of old versions to keep, per file.": "The number of old versions to keep, per file.",
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
"The repository ID cannot be blank.": "The repository ID cannot be blank.",
"The number of versions must be a number and cannot be blank.": "Ο αριθμός εκδόσεων πρέπει να είναι αριθμός και δε μπορεί να είναι κενός.",
"The repository ID cannot be blank.": "Το ID Αποθετηρίου δε μπορεί να είναι κενό.",
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.",
"The repository ID must be unique.": "The repository ID must be unique.",
"The repository path cannot be blank.": "The repository path cannot be blank.",
"The repository ID must be unique.": "Το ID Αποθετηρίου πρέπει να είναι μοναδικό.",
"The repository path cannot be blank.": "Το μονοπάτι του αποθετηρίου δε μπορεί να είναι κενό.",
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
"Unknown": "Άγνωστο",
"Up to Date": "Ενημερώμενο",
"Up to Date": "Ενημερωμένος",
"Upgrade To {%version%}": "Αναβάθμιση στην έκδοση {{version}}",
"Upgrading": "Αναβάθμιση",
"Upload Rate": "Upload Rate",
"Usage": "Usage",
"Use Compression": "Χρήση συμπίεσης",
"Use HTTPS for GUI": "Use HTTPS for GUI",
"Use HTTPS for GUI": "Χρήση HTTPS για το GUI",
"Version": "Έκδοση",
"When adding a new node, keep in mind that this node must be added on the other side too.": "When adding a new node, keep in mind that this node must be added on the other side too.",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Προσθέτοντας έναν καινούργιο κόμβο, θυμηθείτε πως θα πρέπει να προσθέσετε και τον παρόντα κόμβο στην άλλη πλευρά.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Κατά την πρόσθεση νέου αποθετηρίου, να γνωρίζετε πως το ID Αποθετηρίου χρησιμοποιείται για να συνδέει Αποθετήρια μεταξύ κόμβων. Τα ID είναι case sensitive και θα πρέπει να είναι ταυτόσημα μεταξύ όλων των κόμβων.",
"Yes": "Ναι",
"You must keep at least one version.": "You must keep at least one version.",
@@ -38,6 +38,7 @@
"Idle": "Idle",
"Ignore Permissions": "Ignore Permissions",
"Keep Versions": "Keep Versions",
"Last seen": "Last seen",
"Latest Release": "Latest Release",
"Local Discovery": "Local Discovery",
"Local Discovery Port": "Local Discovery Port",
@@ -45,7 +46,10 @@
"Master Repo": "Master Repo",
"Max File Change Rate (KiB/s)": "Max File Change Rate (KiB/s)",
"Max Outstanding Requests": "Max Outstanding Requests",
"Maximum Age": "Maximum Age",
"Never": "Never",
"No": "No",
"No File Versioning": "No File Versioning",
"Node ID": "Node ID",
"Node Identification": "Node Identification",
"Node Name": "Node Name",
@@ -57,6 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
"Override Changes": "Override Changes",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Please wait": "Please wait",
"Preview Usage Report": "Preview Usage Report",
"RAM Utilization": "RAM Utilization",
@@ -64,6 +69,8 @@
"Repository ID": "Repository ID",
"Repository Master": "Repository Master",
"Repository Path": "Repository Path",
"Rescan": "Rescan",
"Rescan Interval": "Rescan Interval",
"Rescan Interval (s)": "Rescan Interval (s)",
"Restart": "Restart",
"Restart Needed": "Restart Needed",
@@ -77,8 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Short identifier for the repository. Must be the same on all cluster nodes.",
"Show ID": "Show ID",
"Shown instead of Node ID in the cluster status.": "Shown instead of Node ID in the cluster status.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shutdown": "Shutdown",
"Simple File Versioning": "Simple File Versioning",
"Source Code": "Source Code",
"Staggered File Versioning": "Staggered File Versioning",
"Start Browser": "Start Browser",
"Stopped": "Stopped",
"Support / Forum": "Support / Forum",
@@ -95,6 +106,9 @@
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The node ID cannot be blank.": "The node ID cannot be blank.",
"The node ID to enter here can be found in the \"Edit \u003e Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "The node ID to enter here can be found in the \"Edit \u003e Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).",
"The number of old versions to keep, per file.": "The number of old versions to keep, per file.",
@@ -103,6 +117,7 @@
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.",
"The repository ID must be unique.": "The repository ID must be unique.",
"The repository path cannot be blank.": "The repository path cannot be blank.",
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
"Unknown": "Unknown",
"Up to Date": "Up to Date",
"Upgrade To {%version%}": "Upgrade To {{version}}",
@@ -112,6 +127,8 @@
"Use Compression": "Use Compression",
"Use HTTPS for GUI": "Use HTTPS for GUI",
"Version": "Version",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "When adding a new node, keep in mind that this node must be added on the other side too.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.",
"Yes": "Yes",
@@ -11,7 +11,7 @@
"Bugs": "Errores",
"CPU Utilization": "Uso de la CPU",
"Close": "Cerrar",
"Connection Error": "Connection Error",
"Connection Error": "Error de conexión",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Derechos de autor © 2014 Jakob Borg y los siguientes colaboradores:",
"Delete": "Suprimir",
"Disconnected": "Desconectado",
@@ -33,11 +33,12 @@
"GUI Listen Addresses": "Direcciones de escucha para la GUI",
"Generate": "Generar",
"Global Discovery": "Búsqueda en internet",
"Global Discovery Server": "Global Discovery Server",
"Global Discovery Server": "Servidor global de identificación",
"Global Repository": "Repositorio global",
"Idle": "Inactivo",
"Ignore Permissions": "Ignorar permisos",
"Keep Versions": "Conservar versiones",
"Last seen": "Visto por última vez",
"Latest Release": "Última versión",
"Local Discovery": "Búsqueda en red local",
"Local Discovery Port": "Puerto de búsqueda de red local",
@@ -45,9 +46,12 @@
"Master Repo": "Repositorio maestro",
"Max File Change Rate (KiB/s)": "Tasa máxima de cambios (KiB/s)",
"Max Outstanding Requests": "Cantidad máxima de peticiones pendientes",
"Maximum Age": "Maximum Age",
"Never": "Nunca",
"No": "No",
"No File Versioning": "No File Versioning",
"Node ID": "Nodo ID",
"Node Identification": "Node Identification",
"Node Identification": "Identificador del nodo",
"Node Name": "Nombre del nodo",
"Notice": "Aviso",
"OK": "OK",
@@ -57,6 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Tasa máxima de envío (KiB/s)",
"Override Changes": "Reemplazar los cambios",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta del repositorio en el equipo local. La carpeta será creada si no existe. El carácter tilde (~) puede ser utilizado como atajo de ",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Path where versions should be stored (leave empty for the default .stversions folder in the repository).",
"Please wait": "Aguarde por favor",
"Preview Usage Report": "Ver reporte de uso",
"RAM Utilization": "Utilización de RAM",
@@ -64,10 +69,12 @@
"Repository ID": "ID de repositorio",
"Repository Master": "Repositorio maestro",
"Repository Path": "Ruta del repositorio",
"Rescan": "Rescan",
"Rescan Interval": "Rescan Interval",
"Rescan Interval (s)": "Intervalo de reescaneo (s)",
"Restart": "Reiniciar",
"Restart Needed": "Es necesario reiniciar",
"Restarting": "Restarting",
"Restarting": "Reiniciando",
"Save": "Guardar",
"Scanning": "Actualización",
"Select the nodes to share this repository with.": "Seleccione los nodos con los cuales compartir el repositorio.",
@@ -77,8 +84,12 @@
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identificador corto para el repositorio. Debe ser el mismo en todos los nodos del clúster.",
"Show ID": "Mostrar ID",
"Shown instead of Node ID in the cluster status.": "Mostrar en lugar de ID de nodo en estado de cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.",
"Shutdown": "Apagar",
"Simple File Versioning": "Simple File Versioning",
"Source Code": "Código fuente",
"Staggered File Versioning": "Staggered File Versioning",
"Start Browser": "Iniciar navegador",
"Stopped": "Parado",
"Support / Forum": "Soporte / Foro",
@@ -87,14 +98,17 @@
"Syncing": "Sincronización",
"Syncthing has been shut down.": "Syncthing se ha apagado.",
"Syncthing includes the following software or portions thereof:": "Syncthing incluye el siguiente software o partes del mismo:",
"Syncthing is restarting.": "Syncthing is restarting.",
"Syncthing is upgrading.": "Syncthing is upgrading.",
"Syncthing is restarting.": "Syncthing está reiniciando.",
"Syncthing is upgrading.": "Syncthing se está actualizando.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing parece estar apagado, o hay un problema con su conexión de Internet. Reintentando...",
"The aggregated statistics are publicly available at {%url%}.": "Las estadísticas acumuladas están públicamente disponibles en {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuración ha sido guardada pero no activada.\nSyncthing debe reiniciarse para activar la nueva configuración.",
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "El reporte de uso se envía encriptado diariamente. Se utiliza para hacer un seguimiento de plataformas comunes, tamaño de repositorios y versión de aplicaciones. Si el conjunto de datos cambia será notificado mediante este diálogo nuevamente.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID de nodo ingresado no es válido. Debe ser una cadena de 52 caracteres consistente en letras y números, con espacios y guiones opcionales.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID de nodo ingresado no es válido. Debe ser una cadena de 52 o de 56 caracteres consistente en letras y números, con espacios y guiones opcionales.
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
|
||||
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
|
||||
"The node ID cannot be blank.": "El ID de nodo no puede estar vacío.",
|
||||
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "El ID de nodo a ingresar aquí puede verse en la opción de menú \"Edición > Mostrar ID\" del otro nodo. Espacios y guiones son opcionales (ignorados).",
|
||||
"The number of old versions to keep, per file.": "El numero de versiones anteriores a conservar, por archivo.",
|
||||
@@ -103,18 +117,21 @@
|
||||
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "El ID de repositorio debe ser una cadena corta (64 caracteres o menos) consistente solamente en letras, números, punto (.), guion (-) y guion bajo (_).",
|
||||
"The repository ID must be unique.": "El ID de repositorio debe ser único.",
|
||||
"The repository path cannot be blank.": "La ruta del repositorio no puede estar vacía.",
|
||||
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
|
||||
"Unknown": "Desconocido",
|
||||
"Up to Date": "Actualizado",
|
||||
"Upgrade To {%version%}": "Actualizar a {{version}}",
|
||||
"Upgrading": "Upgrading",
|
||||
"Upgrading": "Actualizando",
|
||||
"Upload Rate": "Tasa de subida",
|
||||
"Usage": "Utilización",
|
||||
"Use Compression": "Usar compresión",
|
||||
"Use HTTPS for GUI": "Usar HTTPS para la GUI",
|
||||
"Version": "Versión",
|
||||
"Versions Path": "Ruta de versiones",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
|
||||
"When adding a new node, keep in mind that this node must be added on the other side too.": "Al agregar un nuevo nodo, recuerde que este nodo debe ser agregado en el otro lado también.",
|
||||
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Al agregar un nuevo repositorio, tenga en mente que el ID de repositorio se utiliza para ligar los repositorios entre nodos. Distingue mayúsculas y minúsculas y debe ser exactamente igual en todos los nodos.",
|
||||
"Yes": "Si",
|
||||
"You must keep at least one version.": "Debe mantener al menos una versión",
|
||||
"items": "items"
|
||||
"items": "Articulos"
|
||||
}
|
||||
@@ -38,6 +38,7 @@
"Idle": "Au repos",
"Ignore Permissions": "Ignorer les permissions",
"Keep Versions": "Conserver les versions",
"Last seen": "Dernière apparition",
"Latest Release": "Dernière version",
"Local Discovery": "Recherche locale",
"Local Discovery Port": "Port de recherche locale",
@@ -45,7 +46,10 @@
"Master Repo": "Dossier maître",
"Max File Change Rate (KiB/s)": "Débit maximum de changement de fichier (KiB/s)",
"Max Outstanding Requests": "Nombre maximum de demandes concurrentes de blocs de fichier",
"Maximum Age": "Ancienneté maximum",
"Never": "Jamais",
"No": "Non",
"No File Versioning": "Pas de version de fichier",
"Node ID": "ID du nœud",
"Node Identification": "Identification du nœud",
"Node Name": "Nom du nœud",
@@ -57,6 +61,7 @@
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Override Changes": "Écraser les changements",
"Path to the repository on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin du répertoire sur l'ordinateur local. Il sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the repository).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
"Please wait": "Merci de patienter",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
"RAM Utilization": "Utilisation de la RAM",
@@ -64,21 +69,27 @@
"Repository ID": "ID du répertoire",
"Repository Master": "Répertoire maître",
"Repository Path": "Chemin du répertoire",
"Rescan": "Rescanner",
"Rescan Interval": "Intervalle de scan",
"Rescan Interval (s)": "Intervalle de rescan (s)",
"Restart": "Redémarrer",
"Restart Needed": "Redémarrage nécessaire",
"Restarting": "Redémarrage",
"Save": "Sauver",
"Scanning": "En cours de scan",
"Select the nodes to share this repository with.": "Sélectionner les nœuds qui partageront ce répertoire.",
"Select the nodes to share this repository with.": "Sélectionner les nœuds qui partagent ce répertoire.",
"Settings": "Configuration",
"Share With Nodes": "Partager avec les nœuds",
"Shared With": "Partagé avec",
"Short identifier for the repository. Must be the same on all cluster nodes.": "Identifiant court pour le répertoire. Il doit être le même sur l'ensemble des nœuds du cluster.",
"Show ID": "Montrer l'ID",
"Shown instead of Node ID in the cluster status.": "Affiché à la place de l'ID du nœud au sein du cluster.",
"Shown instead of Node ID in the cluster status. Will be advertised to other nodes as an optional default name.": "Affiché à la place de l'ID du nœud dans le statut du cluster. Sera annoncé aux autres nœuds comme un nom par défaut optionnel.",
"Shown instead of Node ID in the cluster status. Will be updated to the name the node advertises if left empty.": "Affiché à la place de l'ID du nœud dans le statut du cluster. Sera mis à jour par le nom que le nœud annonce si laissé vide.",
"Shutdown": "Éteindre",
"Simple File Versioning": "Versions simples de fichier",
"Source Code": "Code source",
"Staggered File Versioning": "Versions échelonnées de fichier",
"Start Browser": "Démarrer le navigateur web",
"Stopped": "Arrêté",
"Support / Forum": "Aide / Forum",
@@ -86,7 +97,7 @@
"Synchronization": "Synchronisation",
"Syncing": "En cours de synchronisation",
"Syncthing has been shut down.": "Syncthing a été éteint.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclut les logiciels, ou portion de ceux-ci, suivants:",
"Syncthing includes the following software or portions thereof:": "Syncthing intègre les logiciels suivants (ou des éléments provenant de ces logiciels) :",
"Syncthing is restarting.": "Syncthing est cours de redémarrage.",
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
@@ -95,6 +106,9 @@
"The encrypted usage report is sent daily. It is used to track common platforms, repo sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des répertoires et les versions de l'application. Si le jeu de données rapportées devait être changé, il vous sera demandé de le valider de nouveau via ce dialogue.",
"The entered node ID does not look valid. It should be a 52 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID du nœud ne semble pas être valide. Il devrait ressembler à une chaine de 52 caractères comprenant lettres et chiffres, avec des espaces et des traits d'union optionnels.",
"The entered node ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID du nœud inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 comprenant lettres et chiffres, avec des espaces et des traits d'union optionnels.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Les intervalles suivant sont utilisés: la première heure une version est conservée chaque 30 secondes, le premier jour une version est conservée chaque heure, les premiers 30 jours une version est conservée chaque jour, jusqu'à la limite d'âge maximum une version est conservée chaque semaine.",
"The maximum age must be a number and cannot be blank.": "L'ancienneté maximum doit être un nombre et ne peut être vide.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Le temps maximum de conservation d'une version (en jours, mettre à 0 pour conserver les versions pour toujours)",
"The node ID cannot be blank.": "L'ID du nœud ne peut être vide.",
"The node ID to enter here can be found in the \"Edit > Show ID\" dialog on the other node. Spaces and dashes are optional (ignored).": "L'ID du nœud à insérer peut être trouvé à travers le menu \"Éditer > Montrer l'ID\" des autres nœuds. Les espaces et les traits d'union sont optionnels (ils seront ignorés).",
"The number of old versions to keep, per file.": "Le nombre d'anciennes versions à garder, par fichier.",
@@ -103,15 +117,18 @@
"The repository ID must be a short identifier (64 characters or less) consisting of letters, numbers and the the dot (.), dash (-) and underscode (_) characters only.": "L'ID du répertoire doit être un identifiant court (64 caractères ou moins) comprenant des lettres, nombres, points (.), trait d'union (-) et tiret bas (_).",
"The repository ID must be unique.": "L'ID du répertoire doit être unique.",
"The repository path cannot be blank.": "Le chemin du répertoire ne peut pas être vide.",
"The rescan interval must be at least 5 seconds.": "L'intervalle de scan doit être d'au minimum 5 secondes.",
"Unknown": "Inconnu",
"Up to Date": "Synchronisation à jour",
"Upgrade To {%version%}": "Upgrader vers {{version}}",
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
"Upgrading": "Mise à jour de Syncthing",
"Upload Rate": "Débit d'envoi",
"Usage": "Utilisation",
"Use Compression": "Utiliser la compression",
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
"Version": "Version",
"Versions Path": "Emplacement des versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions sont supprimées automatiquement si celles-ci sont plus anciennes que l'ancienneté maximum ou que leur nombre est supérieur au nombre autorisé dans une intervale.",
"When adding a new node, keep in mind that this node must be added on the other side too.": "Lorsqu'un nœud est ajouté, gardez à l'esprit que ce nœud doit aussi être ajouté de l'autre coté.",
"When adding a new repository, keep in mind that the Repository ID is used to tie repositories together between nodes. They are case sensitive and must match exactly between all nodes.": "Lorsqu'un nouveau répertoire est ajouté, gardez à l'esprit que l'ID du répertoire est utilisé pour lier les répertoires à travers les nœuds. Ils sont sensibles à la casse et doivent être identiques à travers tous les nœuds.",
"Yes": "Oui",