Compare commits
282 Commits
| SHA1 |
|---|
| d0ebf06ff8 |
| 687b249034 |
| d1528dcff0 |
| 1c31cf6319 |
| d54c366150 |
| 4ff535f883 |
| 3cbddfe545 |
| e3cae69495 |
| aee40316f8 |
| d754f9ae89 |
| 3c50b3a9e0 |
| fb312a71f7 |
| 136d79eaa3 |
| 834336499a |
| c9da8237df |
| 9638dcda0a |
| 756c5a2604 |
| a9c31652b6 |
| 32a76901a9 |
| a5e11c7489 |
| f2b12014e1 |
| 60fcaebfdb |
| 32fe2cb659 |
| 1207d54fdd |
| 3932884688 |
| 19a2042746 |
| 4c6eb137da |
| 57ec2ff915 |
| 77a161a087 |
| 0642402449 |
| 50d377d9fe |
| f5211b0697 |
| fd4ea46fd7 |
| 8d8546868d |
| 8ce547edeb |
| a17c48aed6 |
| d12db3e7b8 |
| 15b87ae297 |
| 02fdf59839 |
| d9da02b7a8 |
| 8f2ad6418d |
| ff984425a3 |
| ac1058359f |
| 9afbca3001 |
| ec3f17cb9c |
| 73b9d5c5f9 |
| ecc8591c95 |
| 696b67e4b1 |
| 266a5116a1 |
| 131f2be857 |
| be7b3a9952 |
| bb31b1785b |
| 2a60f4b1e9 |
| 33a4fb5a1a |
| aece6e8b6c |
| 7bf55dd14f |
| e158f17c2b |
| c5027d9478 |
| 36c1d82146 |
| bd4f404d45 |
| 43d39844f7 |
| e041a4d212 |
| 433b923ea7 |
| f8f1c72b44 |
| 542716e216 |
| b35958d024 |
| 9ee3541655 |
| bf7d84c12a |
| 34c691087e |
| 08c383012f |
| e2420495f3 |
| d530c5eda7 |
| ef7420ecf6 |
| c905a41e2a |
| 42ff4b5bf0 |
| 4fb74a32cc |
| c741465328 |
| fbca537a40 |
| 83420b0199 |
| 33d3ba1b45 |
| 497f85a236 |
| a624c302ab |
| cebe21a3af |
| 9eb679d70a |
| 6d84443db8 |
| da8a1f242c |
| 946d98b71f |
| dff51fc707 |
| 7d954dd5d1 |
| c6300a5da8 |
| 9359daa0d9 |
| 2322e9cff7 |
| a876e1e348 |
| 6a863c8f71 |
| 392b006b06 |
| 96289f42b7 |
| 1b69c2441c |
| 8ca85a4918 |
| 2a31031cbc |
| d148cd8ccc |
| d1cc1828b8 |
| 069e8cf122 |
| 45cbcaca6d |
| 102a2db1f3 |
| 9f81c85ca7 |
| ba4a6fc0c5 |
| aa803ce2ff |
| a027a60f5d |
| 270649535e |
| cf80ba71f4 |
| b74df18a4a |
| 5cd2906a39 |
| bc37b69d17 |
| 94f6e400ad |
| b95a6ccf80 |
| 7df9c1b6e4 |
| 75348c0158 |
| 75fb14acaf |
| 5350315b68 |
| 658e39c270 |
| ef7ce6c7e1 |
| 509e2411bf |
| 65c906f951 |
| 1f159e8233 |
| 936c76119d |
| f45865606a |
| cfc9776bae |
| 0cb7ed9e4e |
| 4b07609458 |
| e41e58e781 |
| f5030f1c2c |
| 2a48fb8e87 |
| df6dbc5fa4 |
| 4b1d2839e8 |
| a892f80e86 |
| b2a79855ae |
| ff4974178a |
| d7100fd9bc |
| 0bfb40ae51 |
| 11c83670d6 |
| 68ff4f3842 |
| ab25cd09ed |
| 8f05b8f982 |
| 63ae2f64cf |
| 105103fae0 |
| 70f4792ab1 |
| defd9fa322 |
| e884d0fda6 |
| 5f6a8fdc20 |
| 196a9ddbb0 |
| 746140bd11 |
| 7b99a5fbac |
| b74c31e520 |
| 221f43e4bd |
| a17333d73e |
| 207b43499c |
| 0c0de17b38 |
| 77882e6086 |
| 19a9834843 |
| 23dab30ca5 |
| b5d7ce8ebe |
| bf4eb4b269 |
| a5edb6807e |
| ecadf30fe7 |
| 515f0db5b4 |
| c2f367cf70 |
| f21dfea965 |
| b84cad4db0 |
| 16ae019c8c |
| bd2051febd |
| 6fb1e03ed4 |
| 8d41a762b6 |
| 6d3003716c |
| 2c87c3bac3 |
| 739c525a98 |
| ab287ebf40 |
| ab6bcab78a |
| 6e317896e9 |
| b08ee3ff81 |
| 17fd09102e |
| a598cd2b18 |
| 55e434d67a |
| 04d4b5d8a0 |
| 7ea00bcb78 |
| 0ab56ffde8 |
| e7e945533e |
| 7fd1047832 |
| 19dfa88258 |
| 65923b5c20 |
| ac731aa50c |
| 2aa3182476 |
| 529c386943 |
| b659da8a4b |
| eba98717c9 |
| e4dba99cc0 |
| 454e688c3d |
| 54752deaa1 |
| a3cf37cb2e |
| 6459d11d32 |
| 15f2fabaaf |
| d6030b8d68 |
| e1757ee726 |
| 9fed75d59c |
| 5fe15475a4 |
| c7f6f4f48d |
| 747c6c2714 |
| 2951f128f6 |
| 53cb66eeaf |
| bcf8f798e2 |
| 7406176fad |
| 34ba5678c3 |
| da0b78c67a |
| 47e64ae503 |
| 520bb74626 |
| 4beef5cc66 |
| ba575f55ec |
| 2012ce02e8 |
| c67e2c2a5a |
| 4b1ce250c1 |
| 0401a07507 |
| c12265499a |
| 0e341832e0 |
| 489e2e6ad5 |
| 941f637bca |
| fc0cb704f2 |
| 960c0cbddf |
| 9f67d86b30 |
| 66f7d83baa |
| b44e87c6e8 |
| 6da7f17c4a |
| b4f45d1e79 |
| 7d766bf7c7 |
| 75dc7e6671 |
| 3fd887fc57 |
| 128447a681 |
| 23bae932c7 |
| 9701998f82 |
| 0289c50ad9 |
| 50490f5b26 |
| d12f802027 |
| ac7097b4d0 |
| 66087e4332 |
| 3706f9bcb8 |
| c505218896 |
| 6186a746e0 |
| 3e98bae5ec |
| 2e1e8f764e |
| a7492f8612 |
| 123b1f01e4 |
| 865f62e3eb |
| 3ea93f52ee |
| b53e545ebc |
| 157a4c891c |
| ad9ea07309 |
| fc483cdfc6 |
| 9033838cf2 |
| 8e5d2d5905 |
| 18aa66dabb |
| a2f7b78453 |
| 5c026cbe1d |
| dc51476897 |
| 39eaa577e0 |
| 1f006481ee |
| 60faabcbe2 |
| d3f1eaf1a3 |
| f568e76fd4 |
| e947223aaa |
| 8311162be3 |
| 75523556e8 |
| c82b5d4982 |
| 1c3158099c |
| bdbca75dfa |
| 124b189cc0 |
| de38b46392 |
| e1975644d6 |
| d9fd27a9e8 |
| 32425c5561 |
| 8d20923881 |
| 3a35b8b26c |
| 36c93b755a |
| ea8c3debea |
| 49bc74e7a0 |
.gitattributes (vendored, new file, 9 added lines)

    @@ -0,0 +1,9 @@
    # Text files use LF line endings in this repository
    * text=auto

    # Except the dependencies, which we leave alone
    Godeps/** -text=auto

    # Diffs on these files are meaningless
    gui.files.go -diff
    *.svg -diff
AUTHORS (9 changed lines)

    @@ -11,27 +11,31 @@ Ben Sidhom <bsidhom@gmail.com>
    Brandon Philips <brandon@ifup.org>
    Brendan Long <self@brendanlong.com>
    Caleb Callaway <enlightened.despot@gmail.com>
    Carsten Hagemann <moter8@gmail.com>
    Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
    Chris Joel <chris@scriptolo.gy>
    Colin Kennedy <moshen.colin@gmail.com>
    Daniel Martí <mvdan@mvdan.cc>
    Dennis Wilson <dw@risu.io>
    Dominik Heidler <dominik@heidler.eu>
    Elias Jarlebring <jarlebring@gmail.com>
    Emil Hessman <emil@hessman.se>
    Federico Castagnini <federico.castagnini@gmail.com>
    Felix Ableitner <me@nutomic.com>
    Felix Unterpaintner <bigbear2nd@gmail.com>
    Francois-Xavier Gsell <fxgsell@gmail.com>
    Gilli Sigurdsson <gilli@vx.is>
    Jakob Borg <jakob@nym.se>
    James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
    Jaroslav Malec <dzardacz@gmail.com>
    Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
    Jochen Voss <voss@seehuhn.de>
    Johan Vromans <jvromans@squirrel.nl>
    Karol Różycki <rozycki.karol@gmail.com>
    Kamada Ken'ichi <kamada@nanohz.org>
    Ken'ichi Kamada <kamada@nanohz.org>
    Lode Hoste <zillode@zillode.be>
    Marcin Dziadus <dziadus.marcin@gmail.com>
    Marc Laporte <marc@marclaporte.com>
    Marc Laporte <marc@marclaporte.com> <marc@laporte.name>
    Marc Pujol <kilburn@la3.org>
    Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
    Michael Tilli <pyfisch@gmail.com>

    @@ -41,6 +45,7 @@ Philippe Schommers <philippe@schommers.be>
    Phill Luby <phill.luby@newredo.com>
    Piotr Bejda <piotrb10@gmail.com>
    Ryan Sullivan <kayoticsully@gmail.com>
    Sergey Mishin <ralder@yandex.ru>
    Stefan Tatschner <stefan@sevenbyte.org>
    Tim Abell <tim@timwise.co.uk>
    Tobias Nygren <tnn@nygren.pp.se>
Godeps/Godeps.json (generated; 26 changed lines)

    @@ -11,7 +11,7 @@
    },
    {
        "ImportPath": "github.com/calmh/logger",
        "Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
        "Rev": "4d4e2801954c5581e4c2a80a3d3beb3b3645fd04"
    },
    {
        "ImportPath": "github.com/calmh/luhn",

    @@ -19,27 +19,31 @@
    },
    {
        "ImportPath": "github.com/calmh/xdr",
        "Rev": "ff948d7666c5e0fd18d398f6278881724d36a90b"
        "Rev": "5f7208e86762911861c94f1849eddbfc0a60cbf0"
    },
    {
        "ImportPath": "github.com/juju/ratelimit",
        "Rev": "f9f36d11773655c0485207f0ad30dc2655f69d56"
        "Rev": "c5abe513796336ee2869745bff0638508450e9c5"
    },
    {
        "ImportPath": "github.com/kardianos/osext",
        "Rev": "91292666f7e40f03185cdd1da7d85633c973eca7"
        "Rev": "efacde03154693404c65e7aa7d461ac9014acd0c"
    },
    {
        "ImportPath": "github.com/syncthing/protocol",
        "Rev": "1a4398cc55c8fe82a964097eaf59f2475b020a49"
        "Rev": "e7db2648034fb71b051902a02bc25d4468ed492e"
    },
    {
        "ImportPath": "github.com/syndtr/goleveldb/leveldb",
        "Rev": "e3f32eb300aa1e514fe8ba58d008da90a062273d"
        "Rev": "87e4e645d80ae9c537e8f2dee52b28036a5dd75e"
    },
    {
        "ImportPath": "github.com/syndtr/gosnappy/snappy",
        "Rev": "ce8acff4829e0c2458a67ead32390ac0a381c862"
        "Rev": "156a073208e131d7d2e212cb749feae7c339e846"
    },
    {
        "ImportPath": "github.com/thejerf/suture",
        "Rev": "ff19fb384c3fe30f42717967eaa69da91e5f317c"
    },
    {
        "ImportPath": "github.com/vitrun/qart/coding",

    @@ -55,19 +59,19 @@
    },
    {
        "ImportPath": "golang.org/x/crypto/bcrypt",
        "Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
        "Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
    },
    {
        "ImportPath": "golang.org/x/crypto/blowfish",
        "Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
        "Rev": "c57d4a71915a248dbad846d60825145062b4c18e"
    },
    {
        "ImportPath": "golang.org/x/text/transform",
        "Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
        "Rev": "2076e9cab4147459c82bc81169e46c139d358547"
    },
    {
        "ImportPath": "golang.org/x/text/unicode/norm",
        "Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
        "Rev": "2076e9cab4147459c82bc81169e46c139d358547"
    }
    ]
    }
Godeps/_workspace/src/github.com/calmh/logger/logger.go (generated, vendored; 19 changed lines)

    @@ -16,6 +16,7 @@ type LogLevel int

    const (
        LevelDebug LogLevel = iota
        LevelVerbose
        LevelInfo
        LevelOK
        LevelWarn

    @@ -83,6 +84,24 @@ func (l *Logger) Debugf(format string, vals ...interface{}) {
        l.callHandlers(LevelDebug, s)
    }

    // Infoln logs a line with a VERBOSE prefix.
    func (l *Logger) Verboseln(vals ...interface{}) {
        l.mut.Lock()
        defer l.mut.Unlock()
        s := fmt.Sprintln(vals...)
        l.logger.Output(2, "VERBOSE: "+s)
        l.callHandlers(LevelVerbose, s)
    }

    // Infof logs a formatted line with a VERBOSE prefix.
    func (l *Logger) Verbosef(format string, vals ...interface{}) {
        l.mut.Lock()
        defer l.mut.Unlock()
        s := fmt.Sprintf(format, vals...)
        l.logger.Output(2, "VERBOSE: "+s)
        l.callHandlers(LevelVerbose, s)
    }

    // Infoln logs a line with an INFO prefix.
    func (l *Logger) Infoln(vals ...interface{}) {
        l.mut.Lock()
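For orientation, a minimal sketch (not part of the diff) of how callers can use the two verbose-level entry points added above; the `reportScan` function and its arguments are hypothetical.

```go
package example

import "github.com/calmh/logger"

// reportScan is a hypothetical caller demonstrating the new verbose level,
// which sits between LevelDebug and LevelInfo in the LogLevel ordering.
func reportScan(l *logger.Logger, folder string, files int) {
	l.Verboseln("rescan complete for folder", folder)
	l.Verbosef("folder %q: %d files indexed", folder, files)
}
```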
Godeps/_workspace/src/github.com/calmh/xdr/bench_test.go (generated, vendored; 4 changed lines)

    @@ -67,7 +67,7 @@ func BenchmarkThisEncode(b *testing.B) {
    func BenchmarkThisEncoder(b *testing.B) {
        w := xdr.NewWriter(ioutil.Discard)
        for i := 0; i < b.N; i++ {
            _, err := s.encodeXDR(w)
            _, err := s.EncodeXDRInto(w)
            if err != nil {
                b.Fatal(err)
            }

    @@ -108,7 +108,7 @@ func BenchmarkThisDecoder(b *testing.B) {
        r := xdr.NewReader(rr)
        var t XDRBenchStruct
        for i := 0; i < b.N; i++ {
            err := t.decodeXDR(r)
            err := t.DecodeXDRFrom(r)
            if err != nil {
                b.Fatal(err)
            }
28
Godeps/_workspace/src/github.com/calmh/xdr/bench_xdr_test.go
generated
vendored
@@ -26,7 +26,9 @@ XDRBenchStruct Structure:
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| 0x0000 | I3 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| uint8 |
|
||||
/ /
|
||||
\ uint8 Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Length of Bs0 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
@@ -69,7 +71,7 @@ struct XDRBenchStruct {
|
||||
|
||||
func (o XDRBenchStruct) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o XDRBenchStruct) MarshalXDR() ([]byte, error) {
|
||||
@@ -87,11 +89,11 @@ func (o XDRBenchStruct) MustMarshalXDR() []byte {
|
||||
func (o XDRBenchStruct) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o XDRBenchStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint64(o.I1)
|
||||
xw.WriteUint32(o.I2)
|
||||
xw.WriteUint16(o.I3)
|
||||
@@ -111,16 +113,16 @@ func (o XDRBenchStruct) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *XDRBenchStruct) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *XDRBenchStruct) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *XDRBenchStruct) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *XDRBenchStruct) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.I1 = xr.ReadUint64()
|
||||
o.I2 = xr.ReadUint32()
|
||||
o.I3 = xr.ReadUint16()
|
||||
@@ -155,7 +157,7 @@ struct repeatReader {
|
||||
|
||||
func (o repeatReader) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o repeatReader) MarshalXDR() ([]byte, error) {
|
||||
@@ -173,27 +175,27 @@ func (o repeatReader) MustMarshalXDR() []byte {
|
||||
func (o repeatReader) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o repeatReader) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o repeatReader) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteBytes(o.data)
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *repeatReader) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *repeatReader) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *repeatReader) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *repeatReader) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.data = xr.ReadBytes()
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
41
Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go
generated
vendored
@@ -52,7 +52,7 @@ import (
|
||||
var encodeTpl = template.Must(template.New("encoder").Parse(`
|
||||
func (o {{.TypeName}}) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) MarshalXDR() ([]byte, error) {
|
||||
@@ -70,11 +70,11 @@ func (o {{.TypeName}}) MustMarshalXDR() []byte {
|
||||
func (o {{.TypeName}}) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}//+n
|
||||
|
||||
func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o {{.TypeName}}) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
{{range $fieldInfo := .Fields}}
|
||||
{{if not $fieldInfo.IsSlice}}
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
@@ -87,7 +87,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
{{end}}
|
||||
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}})
|
||||
{{else}}
|
||||
_, err := o.{{$fieldInfo.Name}}.encodeXDR(xw)
|
||||
_, err := o.{{$fieldInfo.Name}}.EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -105,7 +105,7 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
xw.Write{{$fieldInfo.Encoder}}(o.{{$fieldInfo.Name}}[i])
|
||||
{{else}}
|
||||
_, err := o.{{$fieldInfo.Name}}[i].encodeXDR(xw)
|
||||
_, err := o.{{$fieldInfo.Name}}[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -118,16 +118,16 @@ func (o {{.TypeName}}) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *{{.TypeName}}) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}//+n
|
||||
|
||||
func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *{{.TypeName}}) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
{{range $fieldInfo := .Fields}}
|
||||
{{if not $fieldInfo.IsSlice}}
|
||||
{{if ne $fieldInfo.Convert ""}}
|
||||
@@ -139,10 +139,13 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
|
||||
o.{{$fieldInfo.Name}} = xr.Read{{$fieldInfo.Encoder}}()
|
||||
{{end}}
|
||||
{{else}}
|
||||
(&o.{{$fieldInfo.Name}}).decodeXDR(xr)
|
||||
(&o.{{$fieldInfo.Name}}).DecodeXDRFrom(xr)
|
||||
{{end}}
|
||||
{{else}}
|
||||
_{{$fieldInfo.Name}}Size := int(xr.ReadUint32())
|
||||
if _{{$fieldInfo.Name}}Size < 0 {
|
||||
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
|
||||
}
|
||||
{{if ge $fieldInfo.Max 1}}
|
||||
if _{{$fieldInfo.Name}}Size > {{$fieldInfo.Max}} {
|
||||
return xdr.ElementSizeExceeded("{{$fieldInfo.Name}}", _{{$fieldInfo.Name}}Size, {{$fieldInfo.Max}})
|
||||
@@ -155,7 +158,7 @@ func (o *{{.TypeName}}) decodeXDR(xr *xdr.Reader) error {
|
||||
{{else if $fieldInfo.IsBasic}}
|
||||
o.{{$fieldInfo.Name}}[i] = xr.Read{{$fieldInfo.Encoder}}()
|
||||
{{else}}
|
||||
(&o.{{$fieldInfo.Name}}[i]).decodeXDR(xr)
|
||||
(&o.{{$fieldInfo.Name}}[i]).DecodeXDRFrom(xr)
|
||||
{{end}}
|
||||
}
|
||||
{{end}}
|
||||
@@ -257,12 +260,18 @@ func handleStruct(t *ast.StructType) []fieldInfo {
|
||||
} else {
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
IsBasic: false,
|
||||
IsSlice: true,
|
||||
FieldType: tn,
|
||||
Max: max,
|
||||
}
|
||||
}
|
||||
|
||||
case *ast.SelectorExpr:
|
||||
f = fieldInfo{
|
||||
Name: fn,
|
||||
FieldType: ft.Sel.Name,
|
||||
Max: max,
|
||||
}
|
||||
}
|
||||
|
||||
fs = append(fs, f)
|
||||
@@ -310,10 +319,9 @@ func generateDiagram(output io.Writer, s structInfo) {
|
||||
|
||||
for _, f := range fs {
|
||||
tn := f.FieldType
|
||||
sl := f.IsSlice
|
||||
name := uncamelize(f.Name)
|
||||
|
||||
if sl {
|
||||
if f.IsSlice {
|
||||
fmt.Fprintf(output, "| %s |\n", center("Number of "+name, 61))
|
||||
fmt.Fprintln(output, line)
|
||||
}
|
||||
@@ -340,13 +348,16 @@ func generateDiagram(output io.Writer, s structInfo) {
|
||||
fmt.Fprintf(output, "/ %61s /\n", "")
|
||||
fmt.Fprintln(output, line)
|
||||
default:
|
||||
if sl {
|
||||
if f.IsSlice {
|
||||
tn = "Zero or more " + tn + " Structures"
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
} else {
|
||||
fmt.Fprintf(output, "| %s |\n", center(tn, 61))
|
||||
tn = tn + " Structure"
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
fmt.Fprintf(output, "\\ %s \\\n", center(tn, 61))
|
||||
fmt.Fprintf(output, "/ %s /\n", center("", 61))
|
||||
}
|
||||
fmt.Fprintln(output, line)
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/calmh/xdr/encdec_test.go (generated, vendored; 4 changed lines)

    @@ -32,11 +32,11 @@ type TestStruct struct {

    type Opaque [32]byte

    func (u *Opaque) encodeXDR(w *xdr.Writer) (int, error) {
    func (u *Opaque) EncodeXDRInto(w *xdr.Writer) (int, error) {
        return w.WriteRaw(u[:])
    }

    func (u *Opaque) decodeXDR(r *xdr.Reader) (int, error) {
    func (u *Opaque) DecodeXDRFrom(r *xdr.Reader) (int, error) {
        return r.ReadRaw(u[:])
    }
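The recurring change throughout the xdr-related files is that the unexported encodeXDR/decodeXDR methods become the exported EncodeXDRInto/DecodeXDRFrom, so code that already holds an *xdr.Writer or *xdr.Reader can call them directly. A minimal sketch of the convention, using a hypothetical Point type that is not part of the diff:

```go
package example

import "github.com/calmh/xdr"

// Point is a hypothetical type used only to illustrate the renamed methods.
type Point struct {
	X, Y uint32
}

// EncodeXDRInto writes directly into an existing *xdr.Writer, mirroring the
// generated EncodeXDRInto methods in this changeset.
func (p Point) EncodeXDRInto(xw *xdr.Writer) (int, error) {
	xw.WriteUint32(p.X)
	xw.WriteUint32(p.Y)
	return xw.Tot(), xw.Error()
}

// DecodeXDRFrom is the matching exported decoder.
func (p *Point) DecodeXDRFrom(xr *xdr.Reader) error {
	p.X = xr.ReadUint32()
	p.Y = xr.ReadUint32()
	return xr.Error()
}
```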
43
Godeps/_workspace/src/github.com/calmh/xdr/encdec_xdr_test.go
generated
vendored
@@ -18,17 +18,23 @@ TestStruct Structure:
|
||||
0 1 2 3
|
||||
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| int |
|
||||
/ /
|
||||
\ int Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| int8 |
|
||||
/ /
|
||||
\ int8 Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| uint8 |
|
||||
/ /
|
||||
\ uint8 Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| int16 |
|
||||
| 0x0000 | I16 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| 0x0000 | UI16 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| int32 |
|
||||
| I32 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| UI32 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
@@ -52,7 +58,9 @@ TestStruct Structure:
|
||||
\ S (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Opaque |
|
||||
/ /
|
||||
\ Opaque Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Number of SS |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
@@ -68,9 +76,9 @@ struct TestStruct {
|
||||
int I;
|
||||
int8 I8;
|
||||
uint8 UI8;
|
||||
int16 I16;
|
||||
int I16;
|
||||
unsigned int UI16;
|
||||
int32 I32;
|
||||
int I32;
|
||||
unsigned int UI32;
|
||||
hyper I64;
|
||||
unsigned hyper UI64;
|
||||
@@ -84,7 +92,7 @@ struct TestStruct {
|
||||
|
||||
func (o TestStruct) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o TestStruct) MarshalXDR() ([]byte, error) {
|
||||
@@ -102,11 +110,11 @@ func (o TestStruct) MustMarshalXDR() []byte {
|
||||
func (o TestStruct) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o TestStruct) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint64(uint64(o.I))
|
||||
xw.WriteUint8(uint8(o.I8))
|
||||
xw.WriteUint8(o.UI8)
|
||||
@@ -124,7 +132,7 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("S", l, 1024)
|
||||
}
|
||||
xw.WriteString(o.S)
|
||||
_, err := o.C.encodeXDR(xw)
|
||||
_, err := o.C.EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -140,16 +148,16 @@ func (o TestStruct) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *TestStruct) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *TestStruct) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *TestStruct) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.I = int(xr.ReadUint64())
|
||||
o.I8 = int8(xr.ReadUint8())
|
||||
o.UI8 = xr.ReadUint8()
|
||||
@@ -161,8 +169,11 @@ func (o *TestStruct) decodeXDR(xr *xdr.Reader) error {
|
||||
o.UI64 = xr.ReadUint64()
|
||||
o.BS = xr.ReadBytesMax(1024)
|
||||
o.S = xr.ReadStringMax(1024)
|
||||
(&o.C).decodeXDR(xr)
|
||||
(&o.C).DecodeXDRFrom(xr)
|
||||
_SSSize := int(xr.ReadUint32())
|
||||
if _SSSize < 0 {
|
||||
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
|
||||
}
|
||||
if _SSSize > 1024 {
|
||||
return xdr.ElementSizeExceeded("SS", _SSSize, 1024)
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/calmh/xdr/reader.go (generated, vendored; 3 changed lines)

    @@ -68,7 +68,8 @@ func (r *Reader) ReadBytesMaxInto(max int, dst []byte) []byte {
        if r.err != nil {
            return nil
        }
        if max > 0 && l > max {
        if l < 0 || max > 0 && l > max {
            // l may be negative on 32 bit builds
            r.err = ElementSizeExceeded("bytes field", l, max)
            return nil
        }
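The extra `l < 0` guard matters because the length field arrives as a uint32 and is converted to int; on a 32-bit build a length of 2^31 or more wraps to a negative int and would previously slip past the max check. A small illustration, using int32 to stand in for a 32-bit int:

```go
package example

import "fmt"

// lengthWrapDemo shows why the new l < 0 check is needed: a corrupt or hostile
// length field near the top of the uint32 range becomes negative when int is
// 32 bits wide, so "l > max" alone never triggers.
func lengthWrapDemo() {
	raw := uint32(0xFFFFFFF0)       // length read from the wire
	l := int32(raw)                 // stands in for int(raw) on a 32-bit build
	fmt.Println(l, l > 1024, l < 0) // -16 false true
}
```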
Godeps/_workspace/src/github.com/juju/ratelimit/LICENSE (generated, vendored; 3 changed lines)

    @@ -1,3 +1,6 @@
    This package contains an efficient token-bucket-based rate limiter.
    Copyright (C) 2015 Canonical Ltd.

    This software is licensed under the LGPLv3, included below.

    As a special exception to the GNU Lesser General Public License version 3
Godeps/_workspace/src/github.com/kardianos/osext/osext_procfs.go (generated, vendored; 8 changed lines)

    @@ -11,12 +11,18 @@ import (
        "fmt"
        "os"
        "runtime"
        "strings"
    )

    func executable() (string, error) {
        switch runtime.GOOS {
        case "linux":
            return os.Readlink("/proc/self/exe")
            const deletedSuffix = " (deleted)"
            execpath, err := os.Readlink("/proc/self/exe")
            if err != nil {
                return execpath, err
            }
            return strings.TrimSuffix(execpath, deletedSuffix), nil
        case "netbsd":
            return os.Readlink("/proc/curproc/exe")
        case "openbsd", "dragonfly":
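Background on the change above: when the running binary has been deleted or replaced on disk, Linux reports the /proc/self/exe link target with a " (deleted)" marker appended, so the patched executable() strips it before returning the path. A trivial sketch of the same idea; the helper name is hypothetical:

```go
package example

import "strings"

// cleanExecPath mirrors the new Linux branch of executable(): drop the
// " (deleted)" marker that readlink("/proc/self/exe") appends once the
// on-disk binary has been removed or replaced while the process runs.
func cleanExecPath(link string) string {
	return strings.TrimSuffix(link, " (deleted)")
}
```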
137
Godeps/_workspace/src/github.com/kardianos/osext/osext_test.go
generated
vendored
@@ -7,35 +7,42 @@
|
||||
package osext
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
oexec "os/exec"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"testing"
|
||||
)
|
||||
|
||||
const execPath_EnvVar = "OSTEST_OUTPUT_EXECPATH"
|
||||
const (
|
||||
executableEnvVar = "OSTEST_OUTPUT_EXECUTABLE"
|
||||
|
||||
func TestExecPath(t *testing.T) {
|
||||
executableEnvValueMatch = "match"
|
||||
executableEnvValueDelete = "delete"
|
||||
)
|
||||
|
||||
func TestExecutableMatch(t *testing.T) {
|
||||
ep, err := Executable()
|
||||
if err != nil {
|
||||
t.Fatalf("ExecPath failed: %v", err)
|
||||
t.Fatalf("Executable failed: %v", err)
|
||||
}
|
||||
// we want fn to be of the form "dir/prog"
|
||||
|
||||
// fullpath to be of the form "dir/prog".
|
||||
dir := filepath.Dir(filepath.Dir(ep))
|
||||
fn, err := filepath.Rel(dir, ep)
|
||||
fullpath, err := filepath.Rel(dir, ep)
|
||||
if err != nil {
|
||||
t.Fatalf("filepath.Rel: %v", err)
|
||||
}
|
||||
cmd := &oexec.Cmd{}
|
||||
// make child start with a relative program path
|
||||
cmd.Dir = dir
|
||||
cmd.Path = fn
|
||||
// forge argv[0] for child, so that we can verify we could correctly
|
||||
// get real path of the executable without influenced by argv[0].
|
||||
cmd.Args = []string{"-", "-test.run=XXXX"}
|
||||
cmd.Env = []string{fmt.Sprintf("%s=1", execPath_EnvVar)}
|
||||
// Make child start with a relative program path.
|
||||
// Alter argv[0] for child to verify getting real path without argv[0].
|
||||
cmd := &exec.Cmd{
|
||||
Dir: dir,
|
||||
Path: fullpath,
|
||||
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueMatch)},
|
||||
}
|
||||
out, err := cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
t.Fatalf("exec(self) failed: %v", err)
|
||||
@@ -49,6 +56,63 @@ func TestExecPath(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestExecutableDelete(t *testing.T) {
|
||||
if runtime.GOOS != "linux" {
|
||||
t.Skip()
|
||||
}
|
||||
fpath, err := Executable()
|
||||
if err != nil {
|
||||
t.Fatalf("Executable failed: %v", err)
|
||||
}
|
||||
|
||||
r, w := io.Pipe()
|
||||
stderrBuff := &bytes.Buffer{}
|
||||
stdoutBuff := &bytes.Buffer{}
|
||||
cmd := &exec.Cmd{
|
||||
Path: fpath,
|
||||
Env: []string{fmt.Sprintf("%s=%s", executableEnvVar, executableEnvValueDelete)},
|
||||
Stdin: r,
|
||||
Stderr: stderrBuff,
|
||||
Stdout: stdoutBuff,
|
||||
}
|
||||
err = cmd.Start()
|
||||
if err != nil {
|
||||
t.Fatalf("exec(self) start failed: %v", err)
|
||||
}
|
||||
|
||||
tempPath := fpath + "_copy"
|
||||
_ = os.Remove(tempPath)
|
||||
|
||||
err = copyFile(tempPath, fpath)
|
||||
if err != nil {
|
||||
t.Fatalf("copy file failed: %v", err)
|
||||
}
|
||||
err = os.Remove(fpath)
|
||||
if err != nil {
|
||||
t.Fatalf("remove running test file failed: %v", err)
|
||||
}
|
||||
err = os.Rename(tempPath, fpath)
|
||||
if err != nil {
|
||||
t.Fatalf("rename copy to previous name failed: %v", err)
|
||||
}
|
||||
|
||||
w.Write([]byte{0})
|
||||
w.Close()
|
||||
|
||||
err = cmd.Wait()
|
||||
if err != nil {
|
||||
t.Fatalf("exec wait failed: %v", err)
|
||||
}
|
||||
|
||||
childPath := stderrBuff.String()
|
||||
if !filepath.IsAbs(childPath) {
|
||||
t.Fatalf("Child returned %q, want an absolute path", childPath)
|
||||
}
|
||||
if !sameFile(childPath, fpath) {
|
||||
t.Fatalf("Child returned %q, not the same file as %q", childPath, fpath)
|
||||
}
|
||||
}
|
||||
|
||||
func sameFile(fn1, fn2 string) bool {
|
||||
fi1, err := os.Stat(fn1)
|
||||
if err != nil {
|
||||
@@ -60,10 +124,30 @@ func sameFile(fn1, fn2 string) bool {
|
||||
}
|
||||
return os.SameFile(fi1, fi2)
|
||||
}
|
||||
func copyFile(dest, src string) error {
|
||||
df, err := os.Create(dest)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer df.Close()
|
||||
|
||||
func init() {
|
||||
if e := os.Getenv(execPath_EnvVar); e != "" {
|
||||
// first chdir to another path
|
||||
sf, err := os.Open(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer sf.Close()
|
||||
|
||||
_, err = io.Copy(df, sf)
|
||||
return err
|
||||
}
|
||||
|
||||
func TestMain(m *testing.M) {
|
||||
env := os.Getenv(executableEnvVar)
|
||||
switch env {
|
||||
case "":
|
||||
os.Exit(m.Run())
|
||||
case executableEnvValueMatch:
|
||||
// First chdir to another path.
|
||||
dir := "/"
|
||||
if runtime.GOOS == "windows" {
|
||||
dir = filepath.VolumeName(".")
|
||||
@@ -74,6 +158,23 @@ func init() {
|
||||
} else {
|
||||
fmt.Fprint(os.Stderr, ep)
|
||||
}
|
||||
os.Exit(0)
|
||||
case executableEnvValueDelete:
|
||||
bb := make([]byte, 1)
|
||||
var err error
|
||||
n, err := os.Stdin.Read(bb)
|
||||
if err != nil {
|
||||
fmt.Fprint(os.Stderr, "ERROR: ", err)
|
||||
os.Exit(2)
|
||||
}
|
||||
if n != 1 {
|
||||
fmt.Fprint(os.Stderr, "ERROR: n != 1, n == ", n)
|
||||
os.Exit(2)
|
||||
}
|
||||
if ep, err := Executable(); err != nil {
|
||||
fmt.Fprint(os.Stderr, "ERROR: ", err)
|
||||
} else {
|
||||
fmt.Fprint(os.Stderr, ep)
|
||||
}
|
||||
}
|
||||
os.Exit(0)
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/common_test.go (generated, vendored; 12 changed lines)

    @@ -13,6 +13,9 @@ type TestModel struct {
        name string
        offset int64
        size int
        hash []byte
        flags uint32
        options []Option
        closedCh chan bool
    }

    @@ -22,17 +25,20 @@ func newTestModel() *TestModel {
        }
    }

    func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
    func (t *TestModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
    }

    func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
    func (t *TestModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
    }

    func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int) ([]byte, error) {
    func (t *TestModel) Request(deviceID DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
        t.folder = folder
        t.name = name
        t.offset = offset
        t.size = size
        t.hash = hash
        t.flags = flags
        t.options = options
        return t.data, nil
    }
Godeps/_workspace/src/github.com/syncthing/protocol/deviceid.go (generated, vendored; 6 changed lines)

    @@ -6,6 +6,7 @@ import (
        "bytes"
        "crypto/sha256"
        "encoding/base32"
        "encoding/binary"
        "errors"
        "fmt"
        "regexp"

    @@ -67,6 +68,11 @@ func (n DeviceID) Equals(other DeviceID) bool {
        return bytes.Compare(n[:], other[:]) == 0
    }

    // Short returns an integer representing bits 0-63 of the device ID.
    func (n DeviceID) Short() uint64 {
        return binary.BigEndian.Uint64(n[:])
    }

    func (n *DeviceID) MarshalText() ([]byte, error) {
        return []byte(n.String()), nil
    }
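A minimal usage sketch for the new DeviceID.Short helper added above; rendering the 64-bit value as hex is just one possible presentation, not something this diff prescribes.

```go
package example

import (
	"fmt"

	"github.com/syncthing/protocol"
)

// shortID renders the compact 64-bit handle that Short derives from the
// first eight bytes (bits 0-63) of the full 32-byte device ID.
func shortID(id protocol.DeviceID) string {
	return fmt.Sprintf("%x", id.Short())
}
```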
Godeps/_workspace/src/github.com/syncthing/protocol/errors.go (generated, vendored, new file, 51 added lines)

    @@ -0,0 +1,51 @@
    // Copyright (C) 2014 The Protocol Authors.

    package protocol

    import (
        "errors"
    )

    const (
        ecNoError int32 = iota
        ecGeneric
        ecNoSuchFile
        ecInvalid
    )

    var (
        ErrNoError    error = nil
        ErrGeneric          = errors.New("generic error")
        ErrNoSuchFile       = errors.New("no such file")
        ErrInvalid          = errors.New("file is invalid")
    )

    var lookupError = map[int32]error{
        ecNoError:    ErrNoError,
        ecGeneric:    ErrGeneric,
        ecNoSuchFile: ErrNoSuchFile,
        ecInvalid:    ErrInvalid,
    }

    var lookupCode = map[error]int32{
        ErrNoError:    ecNoError,
        ErrGeneric:    ecGeneric,
        ErrNoSuchFile: ecNoSuchFile,
        ErrInvalid:    ecInvalid,
    }

    func codeToError(errcode int32) error {
        err, ok := lookupError[errcode]
        if !ok {
            return ErrGeneric
        }
        return err
    }

    func errorToCode(err error) int32 {
        code, ok := lookupCode[err]
        if !ok {
            return ecGeneric
        }
        return code
    }
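A sketch of how the new code/error lookups are meant to be used together with the ResponseMessage.Code field introduced in message.go below. Since the helpers are unexported, the example lives in-package; the handleResponse function itself is hypothetical and not part of this diff.

```go
package protocol

// handleResponse is a hypothetical in-package caller: it converts the numeric
// Code carried by a ResponseMessage back into a sentinel error such as
// ErrNoSuchFile, falling back to ErrGeneric for unknown codes.
func handleResponse(msg ResponseMessage) ([]byte, error) {
	if err := codeToError(msg.Code); err != nil {
		return nil, err
	}
	return msg.Data, nil
}
```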
Godeps/_workspace/src/github.com/syncthing/protocol/message.go (generated, vendored; 14 changed lines)

    @@ -1,5 +1,6 @@
    // Copyright (C) 2014 The Protocol Authors.

    //go:generate -command genxdr go run ../syncthing/Godeps/_workspace/src/github.com/calmh/xdr/cmd/genxdr/main.go
    //go:generate genxdr -o message_xdr.go message.go

    package protocol

    @@ -17,13 +18,13 @@ type FileInfo struct {
        Name string // max:8192
        Flags uint32
        Modified int64
        Version int64
        Version Vector
        LocalVersion int64
        Blocks []BlockInfo
    }

    func (f FileInfo) String() string {
        return fmt.Sprintf("File{Name:%q, Flags:0%o, Modified:%d, Version:%d, Size:%d, Blocks:%v}",
        return fmt.Sprintf("File{Name:%q, Flags:0%o, Modified:%d, Version:%v, Size:%d, Blocks:%v}",
            f.Name, f.Flags, f.Modified, f.Version, f.Size(), f.Blocks)
    }

    @@ -78,8 +79,8 @@ type RequestMessage struct {
    }

    type ResponseMessage struct {
        Data []byte
        Error int32
        Data []byte
        Code int32
    }

    type ClusterConfigMessage struct {

    @@ -101,12 +102,15 @@ func (o *ClusterConfigMessage) GetOption(key string) string {
    type Folder struct {
        ID string // max:64
        Devices []Device
        Flags uint32
        Options []Option // max:64
    }

    type Device struct {
        ID []byte // max:32
        Flags uint32
        MaxLocalVersion int64
        Flags uint32
        Options []Option // max:64
    }

    type Option struct {
275
Godeps/_workspace/src/github.com/syncthing/protocol/message_xdr.go
generated
vendored
@@ -51,7 +51,7 @@ struct IndexMessage {
|
||||
|
||||
func (o IndexMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o IndexMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -69,15 +69,15 @@ func (o IndexMessage) MustMarshalXDR() []byte {
|
||||
func (o IndexMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o IndexMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteString(o.Folder)
|
||||
xw.WriteUint32(uint32(len(o.Files)))
|
||||
for i := range o.Files {
|
||||
_, err := o.Files[i].encodeXDR(xw)
|
||||
_, err := o.Files[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -88,7 +88,7 @@ func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Options)))
|
||||
for i := range o.Options {
|
||||
_, err := o.Options[i].encodeXDR(xw)
|
||||
_, err := o.Options[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -98,30 +98,36 @@ func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *IndexMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *IndexMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *IndexMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *IndexMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Folder = xr.ReadString()
|
||||
_FilesSize := int(xr.ReadUint32())
|
||||
if _FilesSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Files", _FilesSize, 0)
|
||||
}
|
||||
o.Files = make([]FileInfo, _FilesSize)
|
||||
for i := range o.Files {
|
||||
(&o.Files[i]).decodeXDR(xr)
|
||||
(&o.Files[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
o.Flags = xr.ReadUint32()
|
||||
_OptionsSize := int(xr.ReadUint32())
|
||||
if _OptionsSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
if _OptionsSize > 64 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
o.Options = make([]Option, _OptionsSize)
|
||||
for i := range o.Options {
|
||||
(&o.Options[i]).decodeXDR(xr)
|
||||
(&o.Options[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
@@ -145,9 +151,9 @@ FileInfo Structure:
|
||||
+ Modified (64 bits) +
|
||||
| |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| |
|
||||
+ Version (64 bits) +
|
||||
| |
|
||||
/ /
|
||||
\ Vector Structure \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| |
|
||||
+ Local Version (64 bits) +
|
||||
@@ -165,7 +171,7 @@ struct FileInfo {
|
||||
string Name<8192>;
|
||||
unsigned int Flags;
|
||||
hyper Modified;
|
||||
hyper Version;
|
||||
Vector Version;
|
||||
hyper LocalVersion;
|
||||
BlockInfo Blocks<>;
|
||||
}
|
||||
@@ -174,7 +180,7 @@ struct FileInfo {
|
||||
|
||||
func (o FileInfo) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o FileInfo) MarshalXDR() ([]byte, error) {
|
||||
@@ -192,22 +198,25 @@ func (o FileInfo) MustMarshalXDR() []byte {
|
||||
func (o FileInfo) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o FileInfo) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o FileInfo) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.Name); l > 8192 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
|
||||
}
|
||||
xw.WriteString(o.Name)
|
||||
xw.WriteUint32(o.Flags)
|
||||
xw.WriteUint64(uint64(o.Modified))
|
||||
xw.WriteUint64(uint64(o.Version))
|
||||
_, err := o.Version.EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
xw.WriteUint64(uint64(o.LocalVersion))
|
||||
xw.WriteUint32(uint32(len(o.Blocks)))
|
||||
for i := range o.Blocks {
|
||||
_, err := o.Blocks[i].encodeXDR(xw)
|
||||
_, err := o.Blocks[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -217,25 +226,28 @@ func (o FileInfo) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *FileInfo) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *FileInfo) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *FileInfo) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *FileInfo) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Name = xr.ReadStringMax(8192)
|
||||
o.Flags = xr.ReadUint32()
|
||||
o.Modified = int64(xr.ReadUint64())
|
||||
o.Version = int64(xr.ReadUint64())
|
||||
(&o.Version).DecodeXDRFrom(xr)
|
||||
o.LocalVersion = int64(xr.ReadUint64())
|
||||
_BlocksSize := int(xr.ReadUint32())
|
||||
if _BlocksSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 0)
|
||||
}
|
||||
o.Blocks = make([]BlockInfo, _BlocksSize)
|
||||
for i := range o.Blocks {
|
||||
(&o.Blocks[i]).decodeXDR(xr)
|
||||
(&o.Blocks[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
@@ -266,7 +278,7 @@ struct BlockInfo {
|
||||
|
||||
func (o BlockInfo) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o BlockInfo) MarshalXDR() ([]byte, error) {
|
||||
@@ -284,11 +296,11 @@ func (o BlockInfo) MustMarshalXDR() []byte {
|
||||
func (o BlockInfo) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o BlockInfo) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o BlockInfo) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteUint32(uint32(o.Size))
|
||||
if l := len(o.Hash); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Hash", l, 64)
|
||||
@@ -299,16 +311,16 @@ func (o BlockInfo) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *BlockInfo) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *BlockInfo) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *BlockInfo) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *BlockInfo) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Size = int32(xr.ReadUint32())
|
||||
o.Hash = xr.ReadBytesMax(64)
|
||||
return xr.Error()
|
||||
@@ -369,7 +381,7 @@ struct RequestMessage {
|
||||
|
||||
func (o RequestMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o RequestMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -387,11 +399,11 @@ func (o RequestMessage) MustMarshalXDR() []byte {
|
||||
func (o RequestMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o RequestMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.Folder); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Folder", l, 64)
|
||||
}
|
||||
@@ -412,7 +424,7 @@ func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Options)))
|
||||
for i := range o.Options {
|
||||
_, err := o.Options[i].encodeXDR(xw)
|
||||
_, err := o.Options[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -422,16 +434,16 @@ func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *RequestMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *RequestMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *RequestMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *RequestMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Folder = xr.ReadStringMax(64)
|
||||
o.Name = xr.ReadStringMax(8192)
|
||||
o.Offset = int64(xr.ReadUint64())
|
||||
@@ -439,12 +451,15 @@ func (o *RequestMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
o.Hash = xr.ReadBytesMax(64)
|
||||
o.Flags = xr.ReadUint32()
|
||||
_OptionsSize := int(xr.ReadUint32())
|
||||
if _OptionsSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
if _OptionsSize > 64 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
o.Options = make([]Option, _OptionsSize)
|
||||
for i := range o.Options {
|
||||
(&o.Options[i]).decodeXDR(xr)
|
||||
(&o.Options[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
@@ -462,20 +477,20 @@ ResponseMessage Structure:
|
||||
\ Data (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Error |
|
||||
| Code |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
|
||||
struct ResponseMessage {
|
||||
opaque Data<>;
|
||||
int Error;
|
||||
int Code;
|
||||
}
|
||||
|
||||
*/
|
||||
|
||||
func (o ResponseMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o ResponseMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -493,30 +508,30 @@ func (o ResponseMessage) MustMarshalXDR() []byte {
|
||||
func (o ResponseMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o ResponseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o ResponseMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteBytes(o.Data)
|
||||
xw.WriteUint32(uint32(o.Error))
|
||||
xw.WriteUint32(uint32(o.Code))
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *ResponseMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *ResponseMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *ResponseMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *ResponseMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Data = xr.ReadBytes()
|
||||
o.Error = int32(xr.ReadUint32())
|
||||
o.Code = int32(xr.ReadUint32())
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
@@ -564,7 +579,7 @@ struct ClusterConfigMessage {
|
||||
|
||||
func (o ClusterConfigMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o ClusterConfigMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -582,11 +597,11 @@ func (o ClusterConfigMessage) MustMarshalXDR() []byte {
|
||||
func (o ClusterConfigMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o ClusterConfigMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.ClientName); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("ClientName", l, 64)
|
||||
}
|
||||
@@ -597,7 +612,7 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
xw.WriteString(o.ClientVersion)
|
||||
xw.WriteUint32(uint32(len(o.Folders)))
|
||||
for i := range o.Folders {
|
||||
_, err := o.Folders[i].encodeXDR(xw)
|
||||
_, err := o.Folders[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -607,7 +622,7 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Options)))
|
||||
for i := range o.Options {
|
||||
_, err := o.Options[i].encodeXDR(xw)
|
||||
_, err := o.Options[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -617,30 +632,36 @@ func (o ClusterConfigMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *ClusterConfigMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *ClusterConfigMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *ClusterConfigMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *ClusterConfigMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.ClientName = xr.ReadStringMax(64)
|
||||
o.ClientVersion = xr.ReadStringMax(64)
|
||||
_FoldersSize := int(xr.ReadUint32())
|
||||
if _FoldersSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Folders", _FoldersSize, 0)
|
||||
}
|
||||
o.Folders = make([]Folder, _FoldersSize)
|
||||
for i := range o.Folders {
|
||||
(&o.Folders[i]).decodeXDR(xr)
|
||||
(&o.Folders[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
_OptionsSize := int(xr.ReadUint32())
|
||||
if _OptionsSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
if _OptionsSize > 64 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
o.Options = make([]Option, _OptionsSize)
|
||||
for i := range o.Options {
|
||||
(&o.Options[i]).decodeXDR(xr)
|
||||
(&o.Options[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
@@ -664,18 +685,28 @@ Folder Structure:
|
||||
\ Zero or more Device Structures \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Flags |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Number of Options |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
/ /
|
||||
\ Zero or more Option Structures \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
|
||||
struct Folder {
|
||||
string ID<64>;
|
||||
Device Devices<>;
|
||||
unsigned int Flags;
|
||||
Option Options<64>;
|
||||
}
|
||||
|
||||
*/
|
||||
|
||||
func (o Folder) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o Folder) MarshalXDR() ([]byte, error) {
|
||||
@@ -693,18 +724,29 @@ func (o Folder) MustMarshalXDR() []byte {
|
||||
func (o Folder) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o Folder) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o Folder) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.ID); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 64)
|
||||
}
|
||||
xw.WriteString(o.ID)
|
||||
xw.WriteUint32(uint32(len(o.Devices)))
|
||||
for i := range o.Devices {
|
||||
_, err := o.Devices[i].encodeXDR(xw)
|
||||
_, err := o.Devices[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
}
|
||||
xw.WriteUint32(o.Flags)
|
||||
if l := len(o.Options); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Options)))
|
||||
for i := range o.Options {
|
||||
_, err := o.Options[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
@@ -714,21 +756,36 @@ func (o Folder) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *Folder) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Folder) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Folder) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *Folder) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.ID = xr.ReadStringMax(64)
|
||||
_DevicesSize := int(xr.ReadUint32())
|
||||
if _DevicesSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Devices", _DevicesSize, 0)
|
||||
}
|
||||
o.Devices = make([]Device, _DevicesSize)
|
||||
for i := range o.Devices {
|
||||
(&o.Devices[i]).decodeXDR(xr)
|
||||
(&o.Devices[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
o.Flags = xr.ReadUint32()
|
||||
_OptionsSize := int(xr.ReadUint32())
|
||||
if _OptionsSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
if _OptionsSize > 64 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
o.Options = make([]Option, _OptionsSize)
|
||||
for i := range o.Options {
|
||||
(&o.Options[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
@@ -746,25 +803,32 @@ Device Structure:
|
||||
\ ID (variable length) \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Flags |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| |
|
||||
+ Max Local Version (64 bits) +
|
||||
| |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Flags |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
| Number of Options |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
/ /
|
||||
\ Zero or more Option Structures \
|
||||
/ /
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
|
||||
struct Device {
|
||||
opaque ID<32>;
|
||||
unsigned int Flags;
|
||||
hyper MaxLocalVersion;
|
||||
unsigned int Flags;
|
||||
Option Options<64>;
|
||||
}
|
||||
|
||||
*/
|
||||
|
||||
func (o Device) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o Device) MarshalXDR() ([]byte, error) {
|
||||
@@ -782,35 +846,56 @@ func (o Device) MustMarshalXDR() []byte {
|
||||
func (o Device) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o Device) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o Device) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.ID); l > 32 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("ID", l, 32)
|
||||
}
|
||||
xw.WriteBytes(o.ID)
|
||||
xw.WriteUint32(o.Flags)
|
||||
xw.WriteUint64(uint64(o.MaxLocalVersion))
|
||||
xw.WriteUint32(o.Flags)
|
||||
if l := len(o.Options); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
|
||||
}
|
||||
xw.WriteUint32(uint32(len(o.Options)))
|
||||
for i := range o.Options {
|
||||
_, err := o.Options[i].EncodeXDRInto(xw)
|
||||
if err != nil {
|
||||
return xw.Tot(), err
|
||||
}
|
||||
}
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *Device) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Device) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Device) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *Device) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.ID = xr.ReadBytesMax(32)
|
||||
o.Flags = xr.ReadUint32()
|
||||
o.MaxLocalVersion = int64(xr.ReadUint64())
|
||||
o.Flags = xr.ReadUint32()
|
||||
_OptionsSize := int(xr.ReadUint32())
|
||||
if _OptionsSize < 0 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
if _OptionsSize > 64 {
|
||||
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
|
||||
}
|
||||
o.Options = make([]Option, _OptionsSize)
|
||||
for i := range o.Options {
|
||||
(&o.Options[i]).DecodeXDRFrom(xr)
|
||||
}
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
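Device entries in the cluster config now carry MaxLocalVersion and a list of Options. A short sketch of encoding one with AppendXDR, which appends to a caller-supplied buffer instead of allocating like MarshalXDR; the import path and field values are illustrative assumptions:

```go
package main

import (
	"fmt"

	"github.com/syncthing/protocol" // assumed import path
)

func main() {
	d := protocol.Device{
		ID:              []byte{0xde, 0xad, 0xbe, 0xef},
		Flags:           protocol.FlagShareTrusted,
		MaxLocalVersion: 42,
		Options:         []protocol.Option{{Key: "compression", Value: "metadata"}}, // hypothetical option
	}

	buf := make([]byte, 0, 128)
	buf, err := d.AppendXDR(buf) // append the XDR encoding to buf
	if err != nil {
		panic(err)
	}
	fmt.Printf("encoded %d bytes\n", len(buf))
}
```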
@@ -844,7 +929,7 @@ struct Option {
|
||||
|
||||
func (o Option) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o Option) MarshalXDR() ([]byte, error) {
|
||||
@@ -862,11 +947,11 @@ func (o Option) MustMarshalXDR() []byte {
|
||||
func (o Option) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o Option) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o Option) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.Key); l > 64 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Key", l, 64)
|
||||
}
|
||||
@@ -880,16 +965,16 @@ func (o Option) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *Option) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Option) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *Option) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *Option) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Key = xr.ReadStringMax(64)
|
||||
o.Value = xr.ReadStringMax(1024)
|
||||
return xr.Error()
|
||||
@@ -921,7 +1006,7 @@ struct CloseMessage {
|
||||
|
||||
func (o CloseMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o CloseMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -939,11 +1024,11 @@ func (o CloseMessage) MustMarshalXDR() []byte {
|
||||
func (o CloseMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o CloseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o CloseMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
if l := len(o.Reason); l > 1024 {
|
||||
return xw.Tot(), xdr.ElementSizeExceeded("Reason", l, 1024)
|
||||
}
|
||||
@@ -954,16 +1039,16 @@ func (o CloseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
|
||||
func (o *CloseMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *CloseMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *CloseMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *CloseMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
o.Reason = xr.ReadStringMax(1024)
|
||||
o.Code = int32(xr.ReadUint32())
|
||||
return xr.Error()
|
||||
@@ -985,7 +1070,7 @@ struct EmptyMessage {
|
||||
|
||||
func (o EmptyMessage) EncodeXDR(w io.Writer) (int, error) {
|
||||
var xw = xdr.NewWriter(w)
|
||||
return o.encodeXDR(xw)
|
||||
return o.EncodeXDRInto(xw)
|
||||
}
|
||||
|
||||
func (o EmptyMessage) MarshalXDR() ([]byte, error) {
|
||||
@@ -1003,25 +1088,25 @@ func (o EmptyMessage) MustMarshalXDR() []byte {
|
||||
func (o EmptyMessage) AppendXDR(bs []byte) ([]byte, error) {
|
||||
var aw = xdr.AppendWriter(bs)
|
||||
var xw = xdr.NewWriter(&aw)
|
||||
_, err := o.encodeXDR(xw)
|
||||
_, err := o.EncodeXDRInto(xw)
|
||||
return []byte(aw), err
|
||||
}
|
||||
|
||||
func (o EmptyMessage) encodeXDR(xw *xdr.Writer) (int, error) {
|
||||
func (o EmptyMessage) EncodeXDRInto(xw *xdr.Writer) (int, error) {
|
||||
return xw.Tot(), xw.Error()
|
||||
}
|
||||
|
||||
func (o *EmptyMessage) DecodeXDR(r io.Reader) error {
|
||||
xr := xdr.NewReader(r)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *EmptyMessage) UnmarshalXDR(bs []byte) error {
|
||||
var br = bytes.NewReader(bs)
|
||||
var xr = xdr.NewReader(br)
|
||||
return o.decodeXDR(xr)
|
||||
return o.DecodeXDRFrom(xr)
|
||||
}
|
||||
|
||||
func (o *EmptyMessage) decodeXDR(xr *xdr.Reader) error {
|
||||
func (o *EmptyMessage) DecodeXDRFrom(xr *xdr.Reader) error {
|
||||
return xr.Error()
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/nativemodel_darwin.go (generated, vendored; 12 changes)
@@ -12,23 +12,23 @@ type nativeModel struct {
|
||||
next Model
|
||||
}
|
||||
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
for i := range files {
|
||||
files[i].Name = norm.NFD.String(files[i].Name)
|
||||
}
|
||||
m.next.Index(deviceID, folder, files)
|
||||
m.next.Index(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
for i := range files {
|
||||
files[i].Name = norm.NFD.String(files[i].Name)
|
||||
}
|
||||
m.next.IndexUpdate(deviceID, folder, files)
|
||||
m.next.IndexUpdate(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int) ([]byte, error) {
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
|
||||
name = norm.NFD.String(name)
|
||||
return m.next.Request(deviceID, folder, name, offset, size)
|
||||
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
|
||||
|
||||
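The darwin model normalizes incoming names to NFD, the decomposed form HFS+ uses on disk. A small sketch of what that conversion does, assuming the modern import path of the vendored norm package:

```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm" // the norm package vendored by syncthing
)

func main() {
	name := "r\u00e9sum\u00e9.txt" // precomposed (NFC) accents, as another OS might send them

	nfd := norm.NFD.String(name)     // each é becomes 'e' plus a combining acute accent
	fmt.Println(len(name), len(nfd)) // 12 14: the decomposed form is two bytes longer
}
```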
Godeps/_workspace/src/github.com/syncthing/protocol/nativemodel_unix.go (generated, vendored; 12 changes)
@@ -10,16 +10,16 @@ type nativeModel struct {
|
||||
next Model
|
||||
}
|
||||
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
m.next.Index(deviceID, folder, files)
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
m.next.Index(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
m.next.IndexUpdate(deviceID, folder, files)
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
m.next.IndexUpdate(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int) ([]byte, error) {
|
||||
return m.next.Request(deviceID, folder, name, offset, size)
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
|
||||
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/nativemodel_windows.go (generated, vendored; 51 changes)
@@ -24,23 +24,30 @@ type nativeModel struct {
|
||||
next Model
|
||||
}
|
||||
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
for i, f := range files {
|
||||
if strings.ContainsAny(f.Name, disallowedCharacters) {
|
||||
if f.IsDeleted() {
|
||||
// Don't complain if the file is marked as deleted, since it
|
||||
// can't possibly exist here anyway.
|
||||
continue
|
||||
}
|
||||
files[i].Flags |= FlagInvalid
|
||||
l.Warnf("File name %q contains invalid characters; marked as invalid.", f.Name)
|
||||
}
|
||||
files[i].Name = filepath.FromSlash(f.Name)
|
||||
}
|
||||
m.next.Index(deviceID, folder, files)
|
||||
func (m nativeModel) Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
fixupFiles(files)
|
||||
m.next.Index(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo) {
|
||||
func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option) {
|
||||
fixupFiles(files)
|
||||
m.next.IndexUpdate(deviceID, folder, files, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
|
||||
name = filepath.FromSlash(name)
|
||||
return m.next.Request(deviceID, folder, name, offset, size, hash, flags, options)
|
||||
}
|
||||
|
||||
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
|
||||
m.next.ClusterConfig(deviceID, config)
|
||||
}
|
||||
|
||||
func (m nativeModel) Close(deviceID DeviceID, err error) {
|
||||
m.next.Close(deviceID, err)
|
||||
}
|
||||
|
||||
func fixupFiles(files []FileInfo) {
|
||||
for i, f := range files {
|
||||
if strings.ContainsAny(f.Name, disallowedCharacters) {
|
||||
if f.IsDeleted() {
|
||||
@@ -53,18 +60,4 @@ func (m nativeModel) IndexUpdate(deviceID DeviceID, folder string, files []FileI
|
||||
}
|
||||
files[i].Name = filepath.FromSlash(files[i].Name)
|
||||
}
|
||||
m.next.IndexUpdate(deviceID, folder, files)
|
||||
}
|
||||
|
||||
func (m nativeModel) Request(deviceID DeviceID, folder string, name string, offset int64, size int) ([]byte, error) {
|
||||
name = filepath.FromSlash(name)
|
||||
return m.next.Request(deviceID, folder, name, offset, size)
|
||||
}
|
||||
|
||||
func (m nativeModel) ClusterConfig(deviceID DeviceID, config ClusterConfigMessage) {
|
||||
m.next.ClusterConfig(deviceID, config)
|
||||
}
|
||||
|
||||
func (m nativeModel) Close(deviceID DeviceID, err error) {
|
||||
m.next.Close(deviceID, err)
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/protocol.go (generated, vendored; 75 changes)
@@ -35,6 +35,7 @@ const (
|
||||
stateIdxRcvd
|
||||
)
|
||||
|
||||
// FileInfo flags
|
||||
const (
|
||||
FlagDeleted uint32 = 1 << 12
|
||||
FlagInvalid = 1 << 13
|
||||
@@ -48,6 +49,17 @@ const (
|
||||
SymlinkTypeMask = FlagDirectory | FlagSymlinkMissingTarget
|
||||
)
|
||||
|
||||
// IndexMessage message flags (for IndexUpdate)
|
||||
const (
|
||||
FlagIndexTemporary uint32 = 1 << iota
|
||||
)
|
||||
|
||||
// Request message flags
|
||||
const (
|
||||
FlagRequestTemporary uint32 = 1 << iota
|
||||
)
|
||||
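The new IndexMessage and RequestMessage flag words are plain bit masks, with FlagIndexTemporary and FlagRequestTemporary as the first (and currently only) bits in their respective spaces. A trivial sketch of how a receiver might test them; the helper function is an illustration, not code from this change:

```go
package main

import "fmt"

// Redeclared locally so the example is self-contained; in the real package
// this constant comes from protocol.go above.
const FlagIndexTemporary uint32 = 1 << 0

func isTemporaryIndex(flags uint32) bool {
	return flags&FlagIndexTemporary != 0
}

func main() {
	fmt.Println(isTemporaryIndex(0))                  // false
	fmt.Println(isTemporaryIndex(FlagIndexTemporary)) // true
}
```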
|
||||
// ClusterConfigMessage.Folders.Devices flags
|
||||
const (
|
||||
FlagShareTrusted uint32 = 1 << 0
|
||||
FlagShareReadOnly = 1 << 1
|
||||
@@ -66,11 +78,11 @@ type pongMessage struct{ EmptyMessage }
|
||||
|
||||
type Model interface {
|
||||
// An index was received from the peer device
|
||||
Index(deviceID DeviceID, folder string, files []FileInfo)
|
||||
Index(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
|
||||
// An index update was received from the peer device
|
||||
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo)
|
||||
IndexUpdate(deviceID DeviceID, folder string, files []FileInfo, flags uint32, options []Option)
|
||||
// A request was made by the peer device
|
||||
Request(deviceID DeviceID, folder string, name string, offset int64, size int) ([]byte, error)
|
||||
Request(deviceID DeviceID, folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
|
||||
// A cluster configuration message was received
|
||||
ClusterConfig(deviceID DeviceID, config ClusterConfigMessage)
|
||||
// The peer device closed the connection
|
||||
@@ -80,9 +92,9 @@ type Model interface {
|
||||
type Connection interface {
|
||||
ID() DeviceID
|
||||
Name() string
|
||||
Index(folder string, files []FileInfo) error
|
||||
IndexUpdate(folder string, files []FileInfo) error
|
||||
Request(folder string, name string, offset int64, size int) ([]byte, error)
|
||||
Index(folder string, files []FileInfo, flags uint32, options []Option) error
|
||||
IndexUpdate(folder string, files []FileInfo, flags uint32, options []Option) error
|
||||
Request(folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error)
|
||||
ClusterConfig(config ClusterConfigMessage)
|
||||
Statistics() Statistics
|
||||
}
|
||||
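Both Model and Connection gain flags/options parameters, and Request additionally takes the block hash, so every implementer has to be updated in lockstep. A minimal sketch of a receiver satisfying the updated Model interface; the type name, logging, and error are invented for illustration, and the compile-time assertion assumes the interface is exactly as shown in this diff:

```go
package main

import (
	"errors"
	"log"

	"github.com/syncthing/protocol" // assumed import path
)

type loggingModel struct{}

func (loggingModel) Index(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo, flags uint32, options []protocol.Option) {
	log.Printf("index from %v for %q: %d files, flags %#x, %d options", deviceID, folder, len(files), flags, len(options))
}

func (loggingModel) IndexUpdate(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo, flags uint32, options []protocol.Option) {
	log.Printf("index update from %v for %q: %d files", deviceID, folder, len(files))
}

func (loggingModel) Request(deviceID protocol.DeviceID, folder, name string, offset int64, size int, hash []byte, flags uint32, options []protocol.Option) ([]byte, error) {
	// A real model would return the requested block. An error returned here now
	// reaches the requesting side as a response code (see handleRequest below).
	return nil, errors.New("not served")
}

func (loggingModel) ClusterConfig(deviceID protocol.DeviceID, config protocol.ClusterConfigMessage) {}

func (loggingModel) Close(deviceID protocol.DeviceID, err error) {}

var _ protocol.Model = loggingModel{} // compile-time interface check

func main() {}
```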
@@ -169,7 +181,7 @@ func (c *rawConnection) Name() string {
|
||||
}
|
||||
|
||||
// Index writes the list of file information to the connected peer device
|
||||
func (c *rawConnection) Index(folder string, idx []FileInfo) error {
|
||||
func (c *rawConnection) Index(folder string, idx []FileInfo, flags uint32, options []Option) error {
|
||||
select {
|
||||
case <-c.closed:
|
||||
return ErrClosed
|
||||
@@ -177,15 +189,17 @@ func (c *rawConnection) Index(folder string, idx []FileInfo) error {
|
||||
}
|
||||
c.idxMut.Lock()
|
||||
c.send(-1, messageTypeIndex, IndexMessage{
|
||||
Folder: folder,
|
||||
Files: idx,
|
||||
Folder: folder,
|
||||
Files: idx,
|
||||
Flags: flags,
|
||||
Options: options,
|
||||
})
|
||||
c.idxMut.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// IndexUpdate writes the list of file information to the connected peer device as an update
|
||||
func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo) error {
|
||||
func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo, flags uint32, options []Option) error {
|
||||
select {
|
||||
case <-c.closed:
|
||||
return ErrClosed
|
||||
@@ -193,15 +207,17 @@ func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo) error {
|
||||
}
|
||||
c.idxMut.Lock()
|
||||
c.send(-1, messageTypeIndexUpdate, IndexMessage{
|
||||
Folder: folder,
|
||||
Files: idx,
|
||||
Folder: folder,
|
||||
Files: idx,
|
||||
Flags: flags,
|
||||
Options: options,
|
||||
})
|
||||
c.idxMut.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
||||
// Request returns the bytes for the specified block after fetching them from the connected peer.
|
||||
func (c *rawConnection) Request(folder string, name string, offset int64, size int) ([]byte, error) {
|
||||
func (c *rawConnection) Request(folder string, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
|
||||
var id int
|
||||
select {
|
||||
case id = <-c.nextID:
|
||||
@@ -218,10 +234,13 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
|
||||
c.awaitingMut.Unlock()
|
||||
|
||||
ok := c.send(id, messageTypeRequest, RequestMessage{
|
||||
Folder: folder,
|
||||
Name: name,
|
||||
Offset: offset,
|
||||
Size: int32(size),
|
||||
Folder: folder,
|
||||
Name: name,
|
||||
Offset: offset,
|
||||
Size: int32(size),
|
||||
Hash: hash,
|
||||
Flags: flags,
|
||||
Options: options,
|
||||
})
|
||||
if !ok {
|
||||
return nil, ErrClosed
|
||||
@@ -280,11 +299,6 @@ func (c *rawConnection) readerLoop() (err error) {
|
||||
|
||||
switch msg := msg.(type) {
|
||||
case IndexMessage:
|
||||
if msg.Flags != 0 {
|
||||
// We don't currently support or expect any flags.
|
||||
return fmt.Errorf("protocol error: unknown flags 0x%x in Index(Update) message", msg.Flags)
|
||||
}
|
||||
|
||||
switch hdr.msgType {
|
||||
case messageTypeIndex:
|
||||
if c.state < stateCCRcvd {
|
||||
@@ -301,10 +315,6 @@ func (c *rawConnection) readerLoop() (err error) {
|
||||
}
|
||||
|
||||
case RequestMessage:
|
||||
if msg.Flags != 0 {
|
||||
// We don't currently support or expect any flags.
|
||||
return fmt.Errorf("protocol error: unknown flags 0x%x in Request message", msg.Flags)
|
||||
}
|
||||
if c.state < stateIdxRcvd {
|
||||
return fmt.Errorf("protocol error: request message in state %d", c.state)
|
||||
}
|
||||
@@ -460,16 +470,16 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
|
||||
|
||||
func (c *rawConnection) handleIndex(im IndexMessage) {
|
||||
if debug {
|
||||
l.Debugf("Index(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
|
||||
l.Debugf("Index(%v, %v, %d file, flags %x, opts: %s)", c.id, im.Folder, len(im.Files), im.Flags, im.Options)
|
||||
}
|
||||
c.receiver.Index(c.id, im.Folder, filterIndexMessageFiles(im.Files))
|
||||
c.receiver.Index(c.id, im.Folder, filterIndexMessageFiles(im.Files), im.Flags, im.Options)
|
||||
}
|
||||
|
||||
func (c *rawConnection) handleIndexUpdate(im IndexMessage) {
|
||||
if debug {
|
||||
l.Debugf("queueing IndexUpdate(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
|
||||
l.Debugf("queueing IndexUpdate(%v, %v, %d files, flags %x, opts: %s)", c.id, im.Folder, len(im.Files), im.Flags, im.Options)
|
||||
}
|
||||
c.receiver.IndexUpdate(c.id, im.Folder, filterIndexMessageFiles(im.Files))
|
||||
c.receiver.IndexUpdate(c.id, im.Folder, filterIndexMessageFiles(im.Files), im.Flags, im.Options)
|
||||
}
|
||||
|
||||
func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
|
||||
@@ -499,10 +509,11 @@ func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
|
||||
}
|
||||
|
||||
func (c *rawConnection) handleRequest(msgID int, req RequestMessage) {
|
||||
data, _ := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size))
|
||||
data, err := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size), req.Hash, req.Flags, req.Options)
|
||||
|
||||
c.send(msgID, messageTypeResponse, ResponseMessage{
|
||||
Data: data,
|
||||
Code: errorToCode(err),
|
||||
})
|
||||
}
|
||||
|
||||
@@ -510,7 +521,7 @@ func (c *rawConnection) handleResponse(msgID int, resp ResponseMessage) {
|
||||
c.awaitingMut.Lock()
|
||||
if rc := c.awaiting[msgID]; rc != nil {
|
||||
c.awaiting[msgID] = nil
|
||||
rc <- asyncResult{resp.Data, nil}
|
||||
rc <- asyncResult{resp.Data, codeToError(resp.Code)}
|
||||
close(rc)
|
||||
}
|
||||
c.awaitingMut.Unlock()
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/protocol_test.go (generated, vendored; 6 changes)
@@ -229,10 +229,10 @@ func TestClose(t *testing.T) {
|
||||
t.Error("Ping should not return true")
|
||||
}
|
||||
|
||||
c0.Index("default", nil)
|
||||
c0.Index("default", nil)
|
||||
c0.Index("default", nil, 0, nil)
|
||||
c0.Index("default", nil, 0, nil)
|
||||
|
||||
if _, err := c0.Request("default", "foo", 0, 0); err == nil {
|
||||
if _, err := c0.Request("default", "foo", 0, 0, nil, 0, nil); err == nil {
|
||||
t.Error("Request should return an error")
|
||||
}
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/vector.go (generated, vendored, new file; 115 changes)
@@ -0,0 +1,115 @@
|
||||
// Copyright (C) 2015 The Protocol Authors.
|
||||
|
||||
package protocol
|
||||
|
||||
// The Vector type represents a version vector. The zero value is a usable
|
||||
// version vector. The vector has slice semantics and some operations on it
|
||||
// are "append-like" in that they may return the same vector modified, or a
|
||||
// new allocated Vector with the modified contents.
|
||||
type Vector []Counter
|
||||
|
||||
// Counter represents a single counter in the version vector.
|
||||
type Counter struct {
|
||||
ID uint64
|
||||
Value uint64
|
||||
}
|
||||
|
||||
// Update returns a Vector with the index for the specific ID incremented by
|
||||
// one. If it is possible, the vector v is updated and returned. If it is not,
|
||||
// a copy will be created, updated and returned.
|
||||
func (v Vector) Update(ID uint64) Vector {
|
||||
for i := range v {
|
||||
if v[i].ID == ID {
|
||||
// Update an existing index
|
||||
v[i].Value++
|
||||
return v
|
||||
} else if v[i].ID > ID {
|
||||
// Insert a new index
|
||||
nv := make(Vector, len(v)+1)
|
||||
copy(nv, v[:i])
|
||||
nv[i].ID = ID
|
||||
nv[i].Value = 1
|
||||
copy(nv[i+1:], v[i:])
|
||||
return nv
|
||||
}
|
||||
}
|
||||
// Append a new index
|
||||
return append(v, Counter{ID, 1})
|
||||
}
|
||||
|
||||
// Merge returns the vector containing the maximum indexes from a and b. If it
|
||||
// is possible, the vector a is updated and returned. If it is not, a copy
|
||||
// will be created, updated and returned.
|
||||
func (a Vector) Merge(b Vector) Vector {
|
||||
var ai, bi int
|
||||
for bi < len(b) {
|
||||
if ai == len(a) {
|
||||
// We've reached the end of a; all that remains are appends
|
||||
return append(a, b[bi:]...)
|
||||
}
|
||||
|
||||
if a[ai].ID > b[bi].ID {
|
||||
// The index from b should be inserted here
|
||||
n := make(Vector, len(a)+1)
|
||||
copy(n, a[:ai])
|
||||
n[ai] = b[bi]
|
||||
copy(n[ai+1:], a[ai:])
|
||||
a = n
|
||||
}
|
||||
|
||||
if a[ai].ID == b[bi].ID {
|
||||
if v := b[bi].Value; v > a[ai].Value {
|
||||
a[ai].Value = v
|
||||
}
|
||||
}
|
||||
|
||||
if bi < len(b) && a[ai].ID == b[bi].ID {
|
||||
bi++
|
||||
}
|
||||
ai++
|
||||
}
|
||||
|
||||
return a
|
||||
}
|
||||
|
||||
// Copy returns an identical vector that is not shared with v.
|
||||
func (v Vector) Copy() Vector {
|
||||
nv := make(Vector, len(v))
|
||||
copy(nv, v)
|
||||
return nv
|
||||
}
|
||||
|
||||
// Equal returns true when the two vectors are equivalent.
|
||||
func (a Vector) Equal(b Vector) bool {
|
||||
return a.Compare(b) == Equal
|
||||
}
|
||||
|
||||
// LesserEqual returns true when the two vectors are equivalent or a is Lesser
|
||||
// than b.
|
||||
func (a Vector) LesserEqual(b Vector) bool {
|
||||
comp := a.Compare(b)
|
||||
return comp == Lesser || comp == Equal
|
||||
}
|
||||
|
||||
// GreaterEqual returns true when the two vectors are equivalent or a is Greater
|
||||
// than b.
|
||||
func (a Vector) GreaterEqual(b Vector) bool {
|
||||
comp := a.Compare(b)
|
||||
return comp == Greater || comp == Equal
|
||||
}
|
||||
|
||||
// Concurrent returns true when the two vectors are concurrent.
|
||||
func (a Vector) Concurrent(b Vector) bool {
|
||||
comp := a.Compare(b)
|
||||
return comp == ConcurrentGreater || comp == ConcurrentLesser
|
||||
}
|
||||
|
||||
// Counter returns the current value of the given counter ID.
|
||||
func (v Vector) Counter(id uint64) uint64 {
|
||||
for _, c := range v {
|
||||
if c.ID == id {
|
||||
return c.Value
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
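Since Update and Merge may modify the receiver in place, callers that want to keep the original use Copy first. A short usage sketch of the new Vector type (import path assumed):

```go
package main

import (
	"fmt"

	"github.com/syncthing/protocol" // assumed import path
)

func main() {
	var a protocol.Vector // the zero value is usable
	a = a.Update(1)       // {1:1}
	a = a.Update(1)       // {1:2}

	b := a.Copy()
	b = b.Update(2) // {1:2, 2:1}

	m := a.Copy().Merge(b) // per-ID maximum; Copy keeps a itself unchanged
	fmt.Println(m.Counter(1), m.Counter(2), m.Counter(3)) // 2 1 0
	fmt.Println(m.Equal(b))                               // true
}
```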
Godeps/_workspace/src/github.com/syncthing/protocol/vector_compare.go (generated, vendored, new file; 89 changes)
@@ -0,0 +1,89 @@
|
||||
// Copyright (C) 2015 The Protocol Authors.
|
||||
|
||||
package protocol
|
||||
|
||||
// Ordering represents the relationship between two Vectors.
|
||||
type Ordering int
|
||||
|
||||
const (
|
||||
Equal Ordering = iota
|
||||
Greater
|
||||
Lesser
|
||||
ConcurrentLesser
|
||||
ConcurrentGreater
|
||||
)
|
||||
|
||||
// There's really no such thing as "concurrent lesser" and "concurrent
|
||||
// greater" in version vectors, just "concurrent". But it's useful to be able
|
||||
// to get a strict ordering between versions for stable sorts and so on, so we
|
||||
// return both variants. The convenience method Concurrent() can be used to
|
||||
// check for either case.
|
||||
|
||||
// Compare returns the Ordering that describes a's relation to b.
|
||||
func (a Vector) Compare(b Vector) Ordering {
|
||||
var ai, bi int // index into a and b
|
||||
var av, bv Counter // value at current index
|
||||
|
||||
result := Equal
|
||||
|
||||
for ai < len(a) || bi < len(b) {
|
||||
var aMissing, bMissing bool
|
||||
|
||||
if ai < len(a) {
|
||||
av = a[ai]
|
||||
} else {
|
||||
av = Counter{}
|
||||
aMissing = true
|
||||
}
|
||||
|
||||
if bi < len(b) {
|
||||
bv = b[bi]
|
||||
} else {
|
||||
bv = Counter{}
|
||||
bMissing = true
|
||||
}
|
||||
|
||||
switch {
|
||||
case av.ID == bv.ID:
|
||||
// We have a counter value for each side
|
||||
if av.Value > bv.Value {
|
||||
if result == Lesser {
|
||||
return ConcurrentLesser
|
||||
}
|
||||
result = Greater
|
||||
} else if av.Value < bv.Value {
|
||||
if result == Greater {
|
||||
return ConcurrentGreater
|
||||
}
|
||||
result = Lesser
|
||||
}
|
||||
|
||||
case !aMissing && av.ID < bv.ID || bMissing:
|
||||
// Value is missing on the b side
|
||||
if av.Value > 0 {
|
||||
if result == Lesser {
|
||||
return ConcurrentLesser
|
||||
}
|
||||
result = Greater
|
||||
}
|
||||
|
||||
case !bMissing && bv.ID < av.ID || aMissing:
|
||||
// Value is missing on the a side
|
||||
if bv.Value > 0 {
|
||||
if result == Greater {
|
||||
return ConcurrentGreater
|
||||
}
|
||||
result = Lesser
|
||||
}
|
||||
}
|
||||
|
||||
if ai < len(a) && (av.ID <= bv.ID || bMissing) {
|
||||
ai++
|
||||
}
|
||||
if bi < len(b) && (bv.ID <= av.ID || aMissing) {
|
||||
bi++
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
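A small sketch of the Ordering values Compare yields for a concurrent pair, illustrating why both ConcurrentGreater and ConcurrentLesser exist even though the vectors are simply "in conflict" (import path assumed as above):

```go
package main

import (
	"fmt"

	"github.com/syncthing/protocol" // assumed import path
)

func main() {
	a := protocol.Vector{{ID: 1, Value: 2}, {ID: 2, Value: 1}}
	b := protocol.Vector{{ID: 1, Value: 1}, {ID: 2, Value: 2}}

	fmt.Println(a.Compare(b) == protocol.ConcurrentGreater) // true: a is ahead on ID 1, behind on ID 2
	fmt.Println(b.Compare(a) == protocol.ConcurrentLesser)  // true: the mirror image
	fmt.Println(a.Concurrent(b), a.Equal(b))                // true false
}
```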
Godeps/_workspace/src/github.com/syncthing/protocol/vector_compare_test.go (generated, vendored, new file; 249 changes)
@@ -0,0 +1,249 @@
|
||||
// Copyright (C) 2015 The Protocol Authors.
|
||||
|
||||
package protocol
|
||||
|
||||
import (
|
||||
"math"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestCompare(t *testing.T) {
|
||||
testcases := []struct {
|
||||
a, b Vector
|
||||
r Ordering
|
||||
}{
|
||||
// Empty vectors are identical
|
||||
{Vector{}, Vector{}, Equal},
|
||||
{Vector{}, nil, Equal},
|
||||
{nil, Vector{}, Equal},
|
||||
{nil, Vector{Counter{42, 0}}, Equal},
|
||||
{Vector{}, Vector{Counter{42, 0}}, Equal},
|
||||
{Vector{Counter{42, 0}}, nil, Equal},
|
||||
{Vector{Counter{42, 0}}, Vector{}, Equal},
|
||||
|
||||
// Zero is the implied value for a missing Counter
|
||||
{
|
||||
Vector{Counter{42, 0}},
|
||||
Vector{Counter{77, 0}},
|
||||
Equal,
|
||||
},
|
||||
|
||||
// Equal vectors are equal
|
||||
{
|
||||
Vector{Counter{42, 33}},
|
||||
Vector{Counter{42, 33}},
|
||||
Equal,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 33}, Counter{77, 24}},
|
||||
Vector{Counter{42, 33}, Counter{77, 24}},
|
||||
Equal,
|
||||
},
|
||||
|
||||
// These a-vectors are all greater than the b-vector
|
||||
{
|
||||
Vector{Counter{42, 1}},
|
||||
nil,
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 1}},
|
||||
Vector{},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{0, 1}},
|
||||
Vector{Counter{0, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 1}},
|
||||
Vector{Counter{42, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{math.MaxUint64, 1}},
|
||||
Vector{Counter{math.MaxUint64, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{0, math.MaxUint64}},
|
||||
Vector{Counter{0, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, math.MaxUint64}},
|
||||
Vector{Counter{42, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{math.MaxUint64, math.MaxUint64}},
|
||||
Vector{Counter{math.MaxUint64, 0}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{0, math.MaxUint64}},
|
||||
Vector{Counter{0, math.MaxUint64 - 1}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, math.MaxUint64}},
|
||||
Vector{Counter{42, math.MaxUint64 - 1}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{math.MaxUint64, math.MaxUint64}},
|
||||
Vector{Counter{math.MaxUint64, math.MaxUint64 - 1}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 2}},
|
||||
Vector{Counter{42, 1}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 22}, Counter{42, 2}},
|
||||
Vector{Counter{22, 22}, Counter{42, 1}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 2}, Counter{77, 3}},
|
||||
Vector{Counter{42, 1}, Counter{77, 3}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 22}, Counter{42, 2}, Counter{77, 3}},
|
||||
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
|
||||
Greater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 23}, Counter{42, 2}, Counter{77, 4}},
|
||||
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
|
||||
Greater,
|
||||
},
|
||||
|
||||
// These a-vectors are all lesser than the b-vector
|
||||
{nil, Vector{Counter{42, 1}}, Lesser},
|
||||
{Vector{}, Vector{Counter{42, 1}}, Lesser},
|
||||
{
|
||||
Vector{Counter{42, 0}},
|
||||
Vector{Counter{42, 1}},
|
||||
Lesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 1}},
|
||||
Vector{Counter{42, 2}},
|
||||
Lesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 22}, Counter{42, 1}},
|
||||
Vector{Counter{22, 22}, Counter{42, 2}},
|
||||
Lesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 1}, Counter{77, 3}},
|
||||
Vector{Counter{42, 2}, Counter{77, 3}},
|
||||
Lesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
|
||||
Vector{Counter{22, 22}, Counter{42, 2}, Counter{77, 3}},
|
||||
Lesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 22}, Counter{42, 1}, Counter{77, 3}},
|
||||
Vector{Counter{22, 23}, Counter{42, 2}, Counter{77, 4}},
|
||||
Lesser,
|
||||
},
|
||||
|
||||
// These are all in conflict
|
||||
{
|
||||
Vector{Counter{42, 2}},
|
||||
Vector{Counter{43, 1}},
|
||||
ConcurrentGreater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{43, 1}},
|
||||
Vector{Counter{42, 2}},
|
||||
ConcurrentLesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 23}, Counter{42, 1}},
|
||||
Vector{Counter{22, 22}, Counter{42, 2}},
|
||||
ConcurrentGreater,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 21}, Counter{42, 2}},
|
||||
Vector{Counter{22, 22}, Counter{42, 1}},
|
||||
ConcurrentLesser,
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 21}, Counter{42, 2}, Counter{43, 1}},
|
||||
Vector{Counter{20, 1}, Counter{22, 22}, Counter{42, 1}},
|
||||
ConcurrentLesser,
|
||||
},
|
||||
}
|
||||
|
||||
for i, tc := range testcases {
|
||||
// Test real Compare
|
||||
if r := tc.a.Compare(tc.b); r != tc.r {
|
||||
t.Errorf("%d: %+v.Compare(%+v) == %v (expected %v)", i, tc.a, tc.b, r, tc.r)
|
||||
}
|
||||
|
||||
// Test convenience functions
|
||||
switch tc.r {
|
||||
case Greater:
|
||||
if tc.a.Equal(tc.b) {
|
||||
t.Errorf("%+v == %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.Concurrent(tc.b) {
|
||||
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
|
||||
}
|
||||
if !tc.a.GreaterEqual(tc.b) {
|
||||
t.Errorf("%+v not >= %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.LesserEqual(tc.b) {
|
||||
t.Errorf("%+v <= %+v", tc.a, tc.b)
|
||||
}
|
||||
case Lesser:
|
||||
if tc.a.Concurrent(tc.b) {
|
||||
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.Equal(tc.b) {
|
||||
t.Errorf("%+v == %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.GreaterEqual(tc.b) {
|
||||
t.Errorf("%+v >= %+v", tc.a, tc.b)
|
||||
}
|
||||
if !tc.a.LesserEqual(tc.b) {
|
||||
t.Errorf("%+v not <= %+v", tc.a, tc.b)
|
||||
}
|
||||
case Equal:
|
||||
if tc.a.Concurrent(tc.b) {
|
||||
t.Errorf("%+v concurrent %+v", tc.a, tc.b)
|
||||
}
|
||||
if !tc.a.Equal(tc.b) {
|
||||
t.Errorf("%+v not == %+v", tc.a, tc.b)
|
||||
}
|
||||
if !tc.a.GreaterEqual(tc.b) {
|
||||
t.Errorf("%+v not <= %+v", tc.a, tc.b)
|
||||
}
|
||||
if !tc.a.LesserEqual(tc.b) {
|
||||
t.Errorf("%+v not <= %+v", tc.a, tc.b)
|
||||
}
|
||||
case ConcurrentLesser, ConcurrentGreater:
|
||||
if !tc.a.Concurrent(tc.b) {
|
||||
t.Errorf("%+v not concurrent %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.Equal(tc.b) {
|
||||
t.Errorf("%+v == %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.GreaterEqual(tc.b) {
|
||||
t.Errorf("%+v >= %+v", tc.a, tc.b)
|
||||
}
|
||||
if tc.a.LesserEqual(tc.b) {
|
||||
t.Errorf("%+v <= %+v", tc.a, tc.b)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/vector_test.go (generated, vendored, new file; 134 changes)
@@ -0,0 +1,134 @@
|
||||
// Copyright (C) 2015 The Protocol Authors.
|
||||
|
||||
package protocol
|
||||
|
||||
import "testing"
|
||||
|
||||
func TestUpdate(t *testing.T) {
|
||||
var v Vector
|
||||
|
||||
// Append
|
||||
|
||||
v = v.Update(42)
|
||||
expected := Vector{Counter{42, 1}}
|
||||
|
||||
if v.Compare(expected) != Equal {
|
||||
t.Errorf("Update error, %+v != %+v", v, expected)
|
||||
}
|
||||
|
||||
// Insert at front
|
||||
|
||||
v = v.Update(36)
|
||||
expected = Vector{Counter{36, 1}, Counter{42, 1}}
|
||||
|
||||
if v.Compare(expected) != Equal {
|
||||
t.Errorf("Update error, %+v != %+v", v, expected)
|
||||
}
|
||||
|
||||
// Insert in the middle
|
||||
|
||||
v = v.Update(37)
|
||||
expected = Vector{Counter{36, 1}, Counter{37, 1}, Counter{42, 1}}
|
||||
|
||||
if v.Compare(expected) != Equal {
|
||||
t.Errorf("Update error, %+v != %+v", v, expected)
|
||||
}
|
||||
|
||||
// Update existing
|
||||
|
||||
v = v.Update(37)
|
||||
expected = Vector{Counter{36, 1}, Counter{37, 2}, Counter{42, 1}}
|
||||
|
||||
if v.Compare(expected) != Equal {
|
||||
t.Errorf("Update error, %+v != %+v", v, expected)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCopy(t *testing.T) {
|
||||
v0 := Vector{Counter{42, 1}}
|
||||
v1 := v0.Copy()
|
||||
v1.Update(42)
|
||||
if v0.Compare(v1) != Lesser {
|
||||
t.Errorf("Copy error, %+v should be ancestor of %+v", v0, v1)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMerge(t *testing.T) {
|
||||
testcases := []struct {
|
||||
a, b, m Vector
|
||||
}{
|
||||
// No-ops
|
||||
{
|
||||
Vector{},
|
||||
Vector{},
|
||||
Vector{},
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
},
|
||||
|
||||
// Appends
|
||||
{
|
||||
Vector{},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 1}},
|
||||
Vector{Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
},
|
||||
{
|
||||
Vector{Counter{22, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
},
|
||||
|
||||
// Insert
|
||||
{
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{23, 2}, Counter{42, 1}},
|
||||
Vector{Counter{22, 1}, Counter{23, 2}, Counter{42, 1}},
|
||||
},
|
||||
{
|
||||
Vector{Counter{42, 1}},
|
||||
Vector{Counter{22, 1}},
|
||||
Vector{Counter{22, 1}, Counter{42, 1}},
|
||||
},
|
||||
|
||||
// Update
|
||||
{
|
||||
Vector{Counter{22, 1}, Counter{42, 2}},
|
||||
Vector{Counter{22, 2}, Counter{42, 1}},
|
||||
Vector{Counter{22, 2}, Counter{42, 2}},
|
||||
},
|
||||
|
||||
// All of the above
|
||||
{
|
||||
Vector{Counter{10, 1}, Counter{20, 2}, Counter{30, 1}},
|
||||
Vector{Counter{5, 1}, Counter{10, 2}, Counter{15, 1}, Counter{20, 1}, Counter{25, 1}, Counter{35, 1}},
|
||||
Vector{Counter{5, 1}, Counter{10, 2}, Counter{15, 1}, Counter{20, 2}, Counter{25, 1}, Counter{30, 1}, Counter{35, 1}},
|
||||
},
|
||||
}
|
||||
|
||||
for i, tc := range testcases {
|
||||
if m := tc.a.Merge(tc.b); m.Compare(tc.m) != Equal {
|
||||
t.Errorf("%d: %+v.Merge(%+v) == %+v (expected %+v)", i, tc.a, tc.b, m, tc.m)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCounterValue(t *testing.T) {
|
||||
v0 := Vector{Counter{42, 1}, Counter{64, 5}}
|
||||
if v0.Counter(42) != 1 {
|
||||
t.Error("Counter error, %d != %d", v0.Counter(42), 1)
|
||||
}
|
||||
if v0.Counter(64) != 5 {
|
||||
t.Error("Counter error, %d != %d", v0.Counter(64), 5)
|
||||
}
|
||||
if v0.Counter(72) != 0 {
|
||||
t.Error("Counter error, %d != %d", v0.Counter(72), 0)
|
||||
}
|
||||
}
|
||||
Godeps/_workspace/src/github.com/syncthing/protocol/vector_xdr.go (generated, vendored, new file; 38 changes)
@@ -0,0 +1,38 @@
|
||||
// Copyright (C) 2015 The Protocol Authors.
|
||||
|
||||
package protocol
|
||||
|
||||
// This stuff is hacked up manually because genxdr doesn't support 'type
|
||||
// Vector []Counter' declarations and it was tricky when I tried to add it...
|
||||
|
||||
type xdrWriter interface {
|
||||
WriteUint32(uint32) (int, error)
|
||||
WriteUint64(uint64) (int, error)
|
||||
}
|
||||
type xdrReader interface {
|
||||
ReadUint32() uint32
|
||||
ReadUint64() uint64
|
||||
}
|
||||
|
||||
// EncodeXDRInto encodes the vector as an XDR object into the given XDR
|
||||
// encoder.
|
||||
func (v Vector) EncodeXDRInto(w xdrWriter) (int, error) {
|
||||
w.WriteUint32(uint32(len(v)))
|
||||
for i := range v {
|
||||
w.WriteUint64(v[i].ID)
|
||||
w.WriteUint64(v[i].Value)
|
||||
}
|
||||
return 4 + 16*len(v), nil
|
||||
}
|
||||
|
||||
// DecodeXDRFrom decodes an XDR-encoded vector from the given reader into v.
|
||||
func (v *Vector) DecodeXDRFrom(r xdrReader) error {
|
||||
l := int(r.ReadUint32())
|
||||
n := make(Vector, l)
|
||||
for i := range n {
|
||||
n[i].ID = r.ReadUint64()
|
||||
n[i].Value = r.ReadUint64()
|
||||
}
|
||||
*v = n
|
||||
return nil
|
||||
}
|
||||
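EncodeXDRInto and DecodeXDRFrom only need the four methods of the small xdrWriter/xdrReader interfaces above, so they can be exercised without the real XDR codec. A sketch with a trivial in-memory word list standing in for the wire format (the concrete xdr.Writer/xdr.Reader from the calmh/xdr package would be used in practice):

```go
package main

import (
	"fmt"

	"github.com/syncthing/protocol" // assumed import path
)

// memCodec records values in memory; it satisfies the xdrWriter and xdrReader
// method sets but is not the real XDR byte encoding.
type memCodec struct {
	words []uint64
	pos   int
}

func (m *memCodec) WriteUint32(v uint32) (int, error) { m.words = append(m.words, uint64(v)); return 4, nil }
func (m *memCodec) WriteUint64(v uint64) (int, error) { m.words = append(m.words, v); return 8, nil }
func (m *memCodec) ReadUint32() uint32                { v := m.words[m.pos]; m.pos++; return uint32(v) }
func (m *memCodec) ReadUint64() uint64                { v := m.words[m.pos]; m.pos++; return v }

func main() {
	v := protocol.Vector{{ID: 7, Value: 3}, {ID: 9, Value: 1}}

	c := &memCodec{}
	if _, err := v.EncodeXDRInto(c); err != nil {
		panic(err)
	}

	var u protocol.Vector
	if err := u.DecodeXDRFrom(c); err != nil {
		panic(err)
	}
	fmt.Println(u.Equal(v)) // true
}
```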
Godeps/_workspace/src/github.com/syncthing/protocol/wireformat.go (generated, vendored; 12 changes)
@@ -20,7 +20,7 @@ func (c wireFormatConnection) Name() string {
|
||||
return c.next.Name()
|
||||
}
|
||||
|
||||
func (c wireFormatConnection) Index(folder string, fs []FileInfo) error {
|
||||
func (c wireFormatConnection) Index(folder string, fs []FileInfo, flags uint32, options []Option) error {
|
||||
var myFs = make([]FileInfo, len(fs))
|
||||
copy(myFs, fs)
|
||||
|
||||
@@ -28,10 +28,10 @@ func (c wireFormatConnection) Index(folder string, fs []FileInfo) error {
|
||||
myFs[i].Name = norm.NFC.String(filepath.ToSlash(myFs[i].Name))
|
||||
}
|
||||
|
||||
return c.next.Index(folder, myFs)
|
||||
return c.next.Index(folder, myFs, flags, options)
|
||||
}
|
||||
|
||||
func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo) error {
|
||||
func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo, flags uint32, options []Option) error {
|
||||
var myFs = make([]FileInfo, len(fs))
|
||||
copy(myFs, fs)
|
||||
|
||||
@@ -39,12 +39,12 @@ func (c wireFormatConnection) IndexUpdate(folder string, fs []FileInfo) error {
|
||||
myFs[i].Name = norm.NFC.String(filepath.ToSlash(myFs[i].Name))
|
||||
}
|
||||
|
||||
return c.next.IndexUpdate(folder, myFs)
|
||||
return c.next.IndexUpdate(folder, myFs, flags, options)
|
||||
}
|
||||
|
||||
func (c wireFormatConnection) Request(folder, name string, offset int64, size int) ([]byte, error) {
|
||||
func (c wireFormatConnection) Request(folder, name string, offset int64, size int, hash []byte, flags uint32, options []Option) ([]byte, error) {
|
||||
name = norm.NFC.String(filepath.ToSlash(name))
|
||||
return c.next.Request(folder, name, offset, size)
|
||||
return c.next.Request(folder, name, offset, size, hash, flags, options)
|
||||
}
|
||||
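On the wire, names always travel with forward slashes and in NFC, regardless of what the local model produced. A sketch of that transformation in isolation; the norm import path is the assumed modern location of the vendored package:

```go
package main

import (
	"fmt"
	"path/filepath"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// A name as a darwin model might hold it: decomposed (NFD) accents,
	// native path separator.
	name := filepath.Join("docs", "re\u0301sume\u0301.txt")

	wire := norm.NFC.String(filepath.ToSlash(name)) // what goes on the wire
	fmt.Println(wire)                 // docs/résumé.txt, precomposed
	fmt.Println(len(name), len(wire)) // 19 17: the NFC form is two bytes shorter here
}
```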
|
||||
func (c wireFormatConnection) ClusterConfig(config ClusterConfigMessage) {
|
||||
|
||||
Godeps/_workspace/src/github.com/syndtr/goleveldb/leveldb/opt/options.go (generated, vendored; 14 changes)
@@ -153,7 +153,7 @@ type Options struct {
|
||||
BlockCacher Cacher
|
||||
|
||||
// BlockCacheCapacity defines the capacity of the 'sorted table' block caching.
|
||||
// Use -1 for zero, this has same effect with specifying NoCacher to BlockCacher.
|
||||
// Use -1 for zero, this has same effect as specifying NoCacher to BlockCacher.
|
||||
//
|
||||
// The default value is 8MiB.
|
||||
BlockCacheCapacity int
|
||||
@@ -308,7 +308,7 @@ type Options struct {
|
||||
OpenFilesCacher Cacher
|
||||
|
||||
// OpenFilesCacheCapacity defines the capacity of the open files caching.
|
||||
// Use -1 for zero, this has same effect with specifying NoCacher to OpenFilesCacher.
|
||||
// Use -1 for zero, this has same effect as specifying NoCacher to OpenFilesCacher.
|
||||
//
|
||||
// The default value is 500.
|
||||
OpenFilesCacheCapacity int
|
||||
@@ -355,9 +355,9 @@ func (o *Options) GetBlockCacher() Cacher {
|
||||
}
|
||||
|
||||
func (o *Options) GetBlockCacheCapacity() int {
|
||||
if o == nil || o.BlockCacheCapacity <= 0 {
|
||||
if o == nil || o.BlockCacheCapacity == 0 {
|
||||
return DefaultBlockCacheCapacity
|
||||
} else if o.BlockCacheCapacity == -1 {
|
||||
} else if o.BlockCacheCapacity < 0 {
|
||||
return 0
|
||||
}
|
||||
return o.BlockCacheCapacity
|
||||
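After this change any negative capacity disables the cache (previously only exactly -1 did), and only an unset zero value falls back to the default. A short sketch of the three cases, assuming the vendored goleveldb import path:

```go
package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/opt" // assumed import path
)

func main() {
	var byDefault opt.Options                           // zero value: fall back to the default
	disabled := opt.Options{BlockCacheCapacity: -1}     // any negative value: cache disabled
	custom := opt.Options{BlockCacheCapacity: 16 << 20} // explicit capacity in bytes

	fmt.Println(byDefault.GetBlockCacheCapacity() == opt.DefaultBlockCacheCapacity) // true
	fmt.Println(disabled.GetBlockCacheCapacity())                                   // 0
	fmt.Println(custom.GetBlockCacheCapacity())                                     // 16777216
}
```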
@@ -497,7 +497,7 @@ func (o *Options) GetMaxMemCompationLevel() int {
|
||||
if o != nil {
|
||||
if o.MaxMemCompationLevel > 0 {
|
||||
level = o.MaxMemCompationLevel
|
||||
} else if o.MaxMemCompationLevel == -1 {
|
||||
} else if o.MaxMemCompationLevel < 0 {
|
||||
level = 0
|
||||
}
|
||||
}
|
||||
@@ -525,9 +525,9 @@ func (o *Options) GetOpenFilesCacher() Cacher {
|
||||
}
|
||||
|
||||
func (o *Options) GetOpenFilesCacheCapacity() int {
|
||||
if o == nil || o.OpenFilesCacheCapacity <= 0 {
|
||||
if o == nil || o.OpenFilesCacheCapacity == 0 {
|
||||
return DefaultOpenFilesCacheCapacity
|
||||
} else if o.OpenFilesCacheCapacity == -1 {
|
||||
} else if o.OpenFilesCacheCapacity < 0 {
|
||||
return 0
|
||||
}
|
||||
return o.OpenFilesCacheCapacity
|
||||
|
||||
Godeps/_workspace/src/github.com/syndtr/gosnappy/snappy/decode.go (generated, vendored; 172 changes)
@@ -7,10 +7,15 @@ package snappy
|
||||
import (
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
// ErrCorrupt reports that the input is invalid.
|
||||
var ErrCorrupt = errors.New("snappy: corrupt input")
|
||||
var (
|
||||
// ErrCorrupt reports that the input is invalid.
|
||||
ErrCorrupt = errors.New("snappy: corrupt input")
|
||||
// ErrUnsupported reports that the input isn't supported.
|
||||
ErrUnsupported = errors.New("snappy: unsupported input")
|
||||
)
|
||||
|
||||
// DecodedLen returns the length of the decoded block.
|
||||
func DecodedLen(src []byte) (int, error) {
|
||||
@@ -122,3 +127,166 @@ func Decode(dst, src []byte) ([]byte, error) {
|
||||
}
|
||||
return dst[:d], nil
|
||||
}
|
||||
|
||||
// NewReader returns a new Reader that decompresses from r, using the framing
|
||||
// format described at
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
|
||||
func NewReader(r io.Reader) *Reader {
|
||||
return &Reader{
|
||||
r: r,
|
||||
decoded: make([]byte, maxUncompressedChunkLen),
|
||||
buf: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)+checksumSize),
|
||||
}
|
||||
}
|
||||
|
||||
// Reader is an io.Reader that can read Snappy-compressed bytes.
|
||||
type Reader struct {
|
||||
r io.Reader
|
||||
err error
|
||||
decoded []byte
|
||||
buf []byte
|
||||
// decoded[i:j] contains decoded bytes that have not yet been passed on.
|
||||
i, j int
|
||||
readHeader bool
|
||||
}
|
||||
|
||||
// Reset discards any buffered data, resets all state, and switches the Snappy
|
||||
// reader to read from r. This permits reusing a Reader rather than allocating
|
||||
// a new one.
|
||||
func (r *Reader) Reset(reader io.Reader) {
|
||||
r.r = reader
|
||||
r.err = nil
|
||||
r.i = 0
|
||||
r.j = 0
|
||||
r.readHeader = false
|
||||
}
|
||||
|
||||
func (r *Reader) readFull(p []byte) (ok bool) {
|
||||
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
|
||||
if r.err == io.ErrUnexpectedEOF {
|
||||
r.err = ErrCorrupt
|
||||
}
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Read satisfies the io.Reader interface.
|
||||
func (r *Reader) Read(p []byte) (int, error) {
|
||||
if r.err != nil {
|
||||
return 0, r.err
|
||||
}
|
||||
for {
|
||||
if r.i < r.j {
|
||||
n := copy(p, r.decoded[r.i:r.j])
|
||||
r.i += n
|
||||
return n, nil
|
||||
}
|
||||
if !r.readFull(r.buf[:4]) {
|
||||
return 0, r.err
|
||||
}
|
||||
chunkType := r.buf[0]
|
||||
if !r.readHeader {
|
||||
if chunkType != chunkTypeStreamIdentifier {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.readHeader = true
|
||||
}
|
||||
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
|
||||
if chunkLen > len(r.buf) {
|
||||
r.err = ErrUnsupported
|
||||
return 0, r.err
|
||||
}
|
||||
|
||||
// The chunk types are specified at
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
|
||||
switch chunkType {
|
||||
case chunkTypeCompressedData:
|
||||
// Section 4.2. Compressed data (chunk type 0x00).
|
||||
if chunkLen < checksumSize {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
buf := r.buf[:chunkLen]
|
||||
if !r.readFull(buf) {
|
||||
return 0, r.err
|
||||
}
|
||||
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
|
||||
buf = buf[checksumSize:]
|
||||
|
||||
n, err := DecodedLen(buf)
|
||||
if err != nil {
|
||||
r.err = err
|
||||
return 0, r.err
|
||||
}
|
||||
if n > len(r.decoded) {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
if _, err := Decode(r.decoded, buf); err != nil {
|
||||
r.err = err
|
||||
return 0, r.err
|
||||
}
|
||||
if crc(r.decoded[:n]) != checksum {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.i, r.j = 0, n
|
||||
continue
|
||||
|
||||
case chunkTypeUncompressedData:
|
||||
// Section 4.3. Uncompressed data (chunk type 0x01).
|
||||
if chunkLen < checksumSize {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
buf := r.buf[:checksumSize]
|
||||
if !r.readFull(buf) {
|
||||
return 0, r.err
|
||||
}
|
||||
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
|
||||
// Read directly into r.decoded instead of via r.buf.
|
||||
n := chunkLen - checksumSize
|
||||
if !r.readFull(r.decoded[:n]) {
|
||||
return 0, r.err
|
||||
}
|
||||
if crc(r.decoded[:n]) != checksum {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
r.i, r.j = 0, n
|
||||
continue
|
||||
|
||||
case chunkTypeStreamIdentifier:
|
||||
// Section 4.1. Stream identifier (chunk type 0xff).
|
||||
if chunkLen != len(magicBody) {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
if !r.readFull(r.buf[:len(magicBody)]) {
|
||||
return 0, r.err
|
||||
}
|
||||
for i := 0; i < len(magicBody); i++ {
|
||||
if r.buf[i] != magicBody[i] {
|
||||
r.err = ErrCorrupt
|
||||
return 0, r.err
|
||||
}
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if chunkType <= 0x7f {
|
||||
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
|
||||
r.err = ErrUnsupported
|
||||
return 0, r.err
|
||||
|
||||
} else {
|
||||
// Section 4.4 Padding (chunk type 0xfe).
|
||||
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
|
||||
if !r.readFull(r.buf[:chunkLen]) {
|
||||
return 0, r.err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syndtr/gosnappy/snappy/encode.go (generated, vendored; 84 changes)
@@ -6,6 +6,7 @@ package snappy
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"io"
|
||||
)
|
||||
|
||||
// We limit how far copy back-references can go, the same as the C++ code.
|
||||
@@ -172,3 +173,86 @@ func MaxEncodedLen(srcLen int) int {
|
||||
// This last factor dominates the blowup, so the final estimate is:
|
||||
return 32 + srcLen + srcLen/6
|
||||
}
|
||||
|
||||
// NewWriter returns a new Writer that compresses to w, using the framing
|
||||
// format described at
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
|
||||
func NewWriter(w io.Writer) *Writer {
|
||||
return &Writer{
|
||||
w: w,
|
||||
enc: make([]byte, MaxEncodedLen(maxUncompressedChunkLen)),
|
||||
}
|
||||
}
|
||||
|
||||
// Writer is an io.Writer that can write Snappy-compressed bytes.
|
||||
type Writer struct {
|
||||
w io.Writer
|
||||
err error
|
||||
enc []byte
|
||||
buf [checksumSize + chunkHeaderSize]byte
|
||||
wroteHeader bool
|
||||
}
|
||||
|
||||
// Reset discards the writer's state and switches the Snappy writer to write to
|
||||
// w. This permits reusing a Writer rather than allocating a new one.
|
||||
func (w *Writer) Reset(writer io.Writer) {
|
||||
w.w = writer
|
||||
w.err = nil
|
||||
w.wroteHeader = false
|
||||
}
|
||||
|
||||
// Write satisfies the io.Writer interface.
|
||||
func (w *Writer) Write(p []byte) (n int, errRet error) {
|
||||
if w.err != nil {
|
||||
return 0, w.err
|
||||
}
|
||||
if !w.wroteHeader {
|
||||
copy(w.enc, magicChunk)
|
||||
if _, err := w.w.Write(w.enc[:len(magicChunk)]); err != nil {
|
||||
w.err = err
|
||||
return n, err
|
||||
}
|
||||
w.wroteHeader = true
|
||||
}
|
||||
for len(p) > 0 {
|
||||
var uncompressed []byte
|
||||
if len(p) > maxUncompressedChunkLen {
|
||||
uncompressed, p = p[:maxUncompressedChunkLen], p[maxUncompressedChunkLen:]
|
||||
} else {
|
||||
uncompressed, p = p, nil
|
||||
}
|
||||
checksum := crc(uncompressed)
|
||||
|
||||
// Compress the buffer, discarding the result if the improvement
|
||||
// isn't at least 12.5%.
|
||||
chunkType := uint8(chunkTypeCompressedData)
|
||||
chunkBody, err := Encode(w.enc, uncompressed)
|
||||
if err != nil {
|
||||
w.err = err
|
||||
return n, err
|
||||
}
|
||||
if len(chunkBody) >= len(uncompressed)-len(uncompressed)/8 {
|
||||
chunkType, chunkBody = chunkTypeUncompressedData, uncompressed
|
||||
}
|
||||
|
||||
chunkLen := 4 + len(chunkBody)
|
||||
w.buf[0] = chunkType
|
||||
w.buf[1] = uint8(chunkLen >> 0)
|
||||
w.buf[2] = uint8(chunkLen >> 8)
|
||||
w.buf[3] = uint8(chunkLen >> 16)
|
||||
w.buf[4] = uint8(checksum >> 0)
|
||||
w.buf[5] = uint8(checksum >> 8)
|
||||
w.buf[6] = uint8(checksum >> 16)
|
||||
w.buf[7] = uint8(checksum >> 24)
|
||||
if _, err = w.w.Write(w.buf[:]); err != nil {
|
||||
w.err = err
|
||||
return n, err
|
||||
}
|
||||
if _, err = w.w.Write(chunkBody); err != nil {
|
||||
w.err = err
|
||||
return n, err
|
||||
}
|
||||
n += len(uncompressed)
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
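The new Writer and Reader speak the snappy framing format, so a stream written by one can be read back by the other without knowing chunk boundaries. A minimal round-trip sketch, assuming the vendored gosnappy import path:

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/syndtr/gosnappy/snappy" // assumed import path
)

func main() {
	var buf bytes.Buffer

	w := snappy.NewWriter(&buf)
	if _, err := w.Write([]byte("hello, hello, hello, snappy framing")); err != nil {
		panic(err)
	}
	// Each Write emits complete chunks, so no explicit flush or close is needed.

	out, err := ioutil.ReadAll(snappy.NewReader(&buf))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```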
|
||||
Godeps/_workspace/src/github.com/syndtr/gosnappy/snappy/snappy.go (generated, vendored; 30 changes)
@@ -8,6 +8,10 @@
|
||||
// The C++ snappy implementation is at http://code.google.com/p/snappy/
|
||||
package snappy
|
||||
|
||||
import (
|
||||
"hash/crc32"
|
||||
)
|
||||
|
||||
/*
|
||||
Each encoded block begins with the varint-encoded length of the decoded data,
|
||||
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
|
||||
@@ -36,3 +40,29 @@ const (
|
||||
tagCopy2 = 0x02
|
||||
tagCopy4 = 0x03
|
||||
)
|
||||
|
||||
const (
|
||||
checksumSize = 4
|
||||
chunkHeaderSize = 4
|
||||
magicChunk = "\xff\x06\x00\x00" + magicBody
|
||||
magicBody = "sNaPpY"
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt says
|
||||
// that "the uncompressed data in a chunk must be no longer than 65536 bytes".
|
||||
maxUncompressedChunkLen = 65536
|
||||
)
|
||||
|
||||
const (
|
||||
chunkTypeCompressedData = 0x00
|
||||
chunkTypeUncompressedData = 0x01
|
||||
chunkTypePadding = 0xfe
|
||||
chunkTypeStreamIdentifier = 0xff
|
||||
)
|
||||
|
||||
var crcTable = crc32.MakeTable(crc32.Castagnoli)
|
||||
|
||||
// crc implements the checksum specified in section 3 of
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt
|
||||
func crc(b []byte) uint32 {
|
||||
c := crc32.Update(0, crcTable, b)
|
||||
return uint32(c>>15|c<<17) + 0xa282ead8
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/github.com/syndtr/gosnappy/snappy/snappy_test.go (generated, vendored; 201 changes)
@@ -18,7 +18,10 @@ import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
var download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
|
||||
var (
|
||||
download = flag.Bool("download", false, "If true, download any missing files before running benchmarks")
|
||||
testdata = flag.String("testdata", "testdata", "Directory containing the test data")
|
||||
)
|
||||
|
||||
func roundtrip(b, ebuf, dbuf []byte) error {
|
||||
e, err := Encode(ebuf, b)
|
||||
@@ -55,11 +58,11 @@ func TestSmallCopy(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestSmallRand(t *testing.T) {
|
||||
rand.Seed(27354294)
|
||||
rng := rand.New(rand.NewSource(27354294))
|
||||
for n := 1; n < 20000; n += 23 {
|
||||
b := make([]byte, n)
|
||||
for i, _ := range b {
|
||||
b[i] = uint8(rand.Uint32())
|
||||
for i := range b {
|
||||
b[i] = uint8(rng.Uint32())
|
||||
}
|
||||
if err := roundtrip(b, nil, nil); err != nil {
|
||||
t.Fatal(err)
|
||||
@@ -70,7 +73,7 @@ func TestSmallRand(t *testing.T) {
|
||||
func TestSmallRegular(t *testing.T) {
|
||||
for n := 1; n < 20000; n += 23 {
|
||||
b := make([]byte, n)
|
||||
for i, _ := range b {
|
||||
for i := range b {
|
||||
b[i] = uint8(i%10 + 'a')
|
||||
}
|
||||
if err := roundtrip(b, nil, nil); err != nil {
|
||||
@@ -79,6 +82,120 @@ func TestSmallRegular(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func cmp(a, b []byte) error {
|
||||
if len(a) != len(b) {
|
||||
return fmt.Errorf("got %d bytes, want %d", len(a), len(b))
|
||||
}
|
||||
for i := range a {
|
||||
if a[i] != b[i] {
|
||||
return fmt.Errorf("byte #%d: got 0x%02x, want 0x%02x", i, a[i], b[i])
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestFramingFormat(t *testing.T) {
|
||||
// src is comprised of alternating 1e5-sized sequences of random
|
||||
// (incompressible) bytes and repeated (compressible) bytes. 1e5 was chosen
|
||||
// because it is larger than maxUncompressedChunkLen (64k).
|
||||
src := make([]byte, 1e6)
|
||||
rng := rand.New(rand.NewSource(1))
|
||||
for i := 0; i < 10; i++ {
|
||||
if i%2 == 0 {
|
||||
for j := 0; j < 1e5; j++ {
|
||||
src[1e5*i+j] = uint8(rng.Intn(256))
|
||||
}
|
||||
} else {
|
||||
for j := 0; j < 1e5; j++ {
|
||||
src[1e5*i+j] = uint8(i)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
buf := new(bytes.Buffer)
|
||||
if _, err := NewWriter(buf).Write(src); err != nil {
|
||||
t.Fatalf("Write: encoding: %v", err)
|
||||
}
|
||||
dst, err := ioutil.ReadAll(NewReader(buf))
|
||||
if err != nil {
|
||||
t.Fatalf("ReadAll: decoding: %v", err)
|
||||
}
|
||||
if err := cmp(dst, src); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestReaderReset(t *testing.T) {
|
||||
gold := bytes.Repeat([]byte("All that is gold does not glitter,\n"), 10000)
|
||||
buf := new(bytes.Buffer)
|
||||
if _, err := NewWriter(buf).Write(gold); err != nil {
|
||||
t.Fatalf("Write: %v", err)
|
||||
}
|
||||
encoded, invalid, partial := buf.String(), "invalid", "partial"
|
||||
r := NewReader(nil)
|
||||
for i, s := range []string{encoded, invalid, partial, encoded, partial, invalid, encoded, encoded} {
|
||||
if s == partial {
|
||||
r.Reset(strings.NewReader(encoded))
|
||||
if _, err := r.Read(make([]byte, 101)); err != nil {
|
||||
t.Errorf("#%d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
continue
|
||||
}
|
||||
r.Reset(strings.NewReader(s))
|
||||
got, err := ioutil.ReadAll(r)
|
||||
switch s {
|
||||
case encoded:
|
||||
if err != nil {
|
||||
t.Errorf("#%d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
if err := cmp(got, gold); err != nil {
|
||||
t.Errorf("#%d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
case invalid:
|
||||
if err == nil {
|
||||
t.Errorf("#%d: got nil error, want non-nil", i)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestWriterReset(t *testing.T) {
|
||||
gold := bytes.Repeat([]byte("Not all those who wander are lost;\n"), 10000)
|
||||
var gots, wants [][]byte
|
||||
const n = 20
|
||||
w, failed := NewWriter(nil), false
|
||||
for i := 0; i <= n; i++ {
|
||||
buf := new(bytes.Buffer)
|
||||
w.Reset(buf)
|
||||
want := gold[:len(gold)*i/n]
|
||||
if _, err := w.Write(want); err != nil {
|
||||
t.Errorf("#%d: Write: %v", i, err)
|
||||
failed = true
|
||||
continue
|
||||
}
|
||||
got, err := ioutil.ReadAll(NewReader(buf))
|
||||
if err != nil {
|
||||
t.Errorf("#%d: ReadAll: %v", i, err)
|
||||
failed = true
|
||||
continue
|
||||
}
|
||||
gots = append(gots, got)
|
||||
wants = append(wants, want)
|
||||
}
|
||||
if failed {
|
||||
return
|
||||
}
|
||||
for i := range gots {
|
||||
if err := cmp(gots[i], wants[i]); err != nil {
|
||||
t.Errorf("#%d: %v", i, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func benchDecode(b *testing.B, src []byte) {
|
||||
encoded, err := Encode(nil, src)
|
||||
if err != nil {
|
||||
@@ -102,7 +219,7 @@ func benchEncode(b *testing.B, src []byte) {
|
||||
}
|
||||
}
|
||||
|
||||
func readFile(b *testing.B, filename string) []byte {
|
||||
func readFile(b testing.TB, filename string) []byte {
|
||||
src, err := ioutil.ReadFile(filename)
|
||||
if err != nil {
|
||||
b.Fatalf("failed reading %s: %s", filename, err)
|
||||
@@ -144,7 +261,7 @@ func BenchmarkWordsEncode1e5(b *testing.B) { benchWords(b, 1e5, false) }
|
||||
func BenchmarkWordsEncode1e6(b *testing.B) { benchWords(b, 1e6, false) }
|
||||
|
||||
// testFiles' values are copied directly from
|
||||
// https://code.google.com/p/snappy/source/browse/trunk/snappy_unittest.cc.
|
||||
// https://raw.githubusercontent.com/google/snappy/master/snappy_unittest.cc
|
||||
// The label field is unused in snappy-go.
|
||||
var testFiles = []struct {
|
||||
label string
|
||||
@@ -152,29 +269,36 @@ var testFiles = []struct {
|
||||
}{
|
||||
{"html", "html"},
|
||||
{"urls", "urls.10K"},
|
||||
{"jpg", "house.jpg"},
|
||||
{"pdf", "mapreduce-osdi-1.pdf"},
|
||||
{"jpg", "fireworks.jpeg"},
|
||||
{"jpg_200", "fireworks.jpeg"},
|
||||
{"pdf", "paper-100k.pdf"},
|
||||
{"html4", "html_x_4"},
|
||||
{"cp", "cp.html"},
|
||||
{"c", "fields.c"},
|
||||
{"lsp", "grammar.lsp"},
|
||||
{"xls", "kennedy.xls"},
|
||||
{"txt1", "alice29.txt"},
|
||||
{"txt2", "asyoulik.txt"},
|
||||
{"txt3", "lcet10.txt"},
|
||||
{"txt4", "plrabn12.txt"},
|
||||
{"bin", "ptt5"},
|
||||
{"sum", "sum"},
|
||||
{"man", "xargs.1"},
|
||||
{"pb", "geo.protodata"},
|
||||
{"gaviota", "kppkn.gtb"},
|
||||
}
|
||||
|
||||
// The test data files are present at this canonical URL.
|
||||
const baseURL = "https://snappy.googlecode.com/svn/trunk/testdata/"
|
||||
const baseURL = "https://raw.githubusercontent.com/google/snappy/master/testdata/"
|
||||
|
||||
func downloadTestdata(basename string) (errRet error) {
|
||||
filename := filepath.Join("testdata", basename)
|
||||
filename := filepath.Join(*testdata, basename)
|
||||
if stat, err := os.Stat(filename); err == nil && stat.Size() != 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
if !*download {
|
||||
return fmt.Errorf("test data not found; skipping benchmark without the -download flag")
|
||||
}
|
||||
// Download the official snappy C++ implementation reference test data
|
||||
// files for benchmarking.
|
||||
if err := os.Mkdir(*testdata, 0777); err != nil && !os.IsExist(err) {
|
||||
return fmt.Errorf("failed to create testdata: %s", err)
|
||||
}
|
||||
|
||||
f, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create %s: %s", filename, err)
|
||||
@@ -185,36 +309,27 @@ func downloadTestdata(basename string) (errRet error) {
|
||||
os.Remove(filename)
|
||||
}
|
||||
}()
|
||||
resp, err := http.Get(baseURL + basename)
|
||||
url := baseURL + basename
|
||||
resp, err := http.Get(url)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to download %s: %s", baseURL+basename, err)
|
||||
return fmt.Errorf("failed to download %s: %s", url, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if s := resp.StatusCode; s != http.StatusOK {
|
||||
return fmt.Errorf("downloading %s: HTTP status code %d (%s)", url, s, http.StatusText(s))
|
||||
}
|
||||
_, err = io.Copy(f, resp.Body)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to write %s: %s", filename, err)
|
||||
return fmt.Errorf("failed to download %s to %s: %s", url, filename, err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func benchFile(b *testing.B, n int, decode bool) {
|
||||
filename := filepath.Join("testdata", testFiles[n].filename)
|
||||
if stat, err := os.Stat(filename); err != nil || stat.Size() == 0 {
|
||||
if !*download {
|
||||
b.Fatal("test data not found; skipping benchmark without the -download flag")
|
||||
}
|
||||
// Download the official snappy C++ implementation reference test data
|
||||
// files for benchmarking.
|
||||
if err := os.Mkdir("testdata", 0777); err != nil && !os.IsExist(err) {
|
||||
b.Fatalf("failed to create testdata: %s", err)
|
||||
}
|
||||
for _, tf := range testFiles {
|
||||
if err := downloadTestdata(tf.filename); err != nil {
|
||||
b.Fatalf("failed to download testdata: %s", err)
|
||||
}
|
||||
}
|
||||
if err := downloadTestdata(testFiles[n].filename); err != nil {
|
||||
b.Fatalf("failed to download testdata: %s", err)
|
||||
}
|
||||
data := readFile(b, filename)
|
||||
data := readFile(b, filepath.Join(*testdata, testFiles[n].filename))
|
||||
if decode {
|
||||
benchDecode(b, data)
|
||||
} else {
|
||||
@@ -235,12 +350,6 @@ func Benchmark_UFlat8(b *testing.B) { benchFile(b, 8, true) }
|
||||
func Benchmark_UFlat9(b *testing.B) { benchFile(b, 9, true) }
|
||||
func Benchmark_UFlat10(b *testing.B) { benchFile(b, 10, true) }
|
||||
func Benchmark_UFlat11(b *testing.B) { benchFile(b, 11, true) }
|
||||
func Benchmark_UFlat12(b *testing.B) { benchFile(b, 12, true) }
|
||||
func Benchmark_UFlat13(b *testing.B) { benchFile(b, 13, true) }
|
||||
func Benchmark_UFlat14(b *testing.B) { benchFile(b, 14, true) }
|
||||
func Benchmark_UFlat15(b *testing.B) { benchFile(b, 15, true) }
|
||||
func Benchmark_UFlat16(b *testing.B) { benchFile(b, 16, true) }
|
||||
func Benchmark_UFlat17(b *testing.B) { benchFile(b, 17, true) }
|
||||
func Benchmark_ZFlat0(b *testing.B) { benchFile(b, 0, false) }
|
||||
func Benchmark_ZFlat1(b *testing.B) { benchFile(b, 1, false) }
|
||||
func Benchmark_ZFlat2(b *testing.B) { benchFile(b, 2, false) }
|
||||
@@ -253,9 +362,3 @@ func Benchmark_ZFlat8(b *testing.B) { benchFile(b, 8, false) }
|
||||
func Benchmark_ZFlat9(b *testing.B) { benchFile(b, 9, false) }
|
||||
func Benchmark_ZFlat10(b *testing.B) { benchFile(b, 10, false) }
|
||||
func Benchmark_ZFlat11(b *testing.B) { benchFile(b, 11, false) }
|
||||
func Benchmark_ZFlat12(b *testing.B) { benchFile(b, 12, false) }
|
||||
func Benchmark_ZFlat13(b *testing.B) { benchFile(b, 13, false) }
|
||||
func Benchmark_ZFlat14(b *testing.B) { benchFile(b, 14, false) }
|
||||
func Benchmark_ZFlat15(b *testing.B) { benchFile(b, 15, false) }
|
||||
func Benchmark_ZFlat16(b *testing.B) { benchFile(b, 16, false) }
|
||||
func Benchmark_ZFlat17(b *testing.B) { benchFile(b, 17, false) }
|
||||
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/.travis.yml (generated, vendored, new file): 7 lines added
@@ -0,0 +1,7 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.1
|
||||
- 1.2
|
||||
- 1.3
|
||||
- 1.4
|
||||
- tip
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/LICENSE (generated, vendored, new file): 19 lines added
@@ -0,0 +1,19 @@
|
||||
Copyright (c) 2014 Barracuda Networks, Inc.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/README.md (generated, vendored, new file): 45 lines added
@@ -0,0 +1,45 @@
|
||||
Suture
|
||||
======
|
||||
|
||||
[Build Status](https://travis-ci.org/thejerf/suture)
|
||||
|
||||
Suture provides Erlang-ish supervisor trees for Go. "Supervisor trees" ->
|
||||
"sutree" -> "suture" -> holds your code together when it's trying to die.
|
||||
|
||||
This is intended to be a production-quality library going into code that I
|
||||
will be very early on the phone tree to support when it goes down. However,
|
||||
it has not been deployed into something quite that serious yet. (I will
|
||||
update this statement when that changes.)
|
||||
|
||||
It is intended to deal gracefully with the real failure cases that can
|
||||
occur with supervision trees (such as burning all your CPU time endlessly
|
||||
restarting dead services), while also making no unnecessary demands on the
|
||||
"service" code, and providing hooks to perform adequate logging with in a
|
||||
production environment.
|
||||
|
||||
[A blog post describing the design decisions](http://www.jerf.org/iri/post/2930)
|
||||
is available.
|
||||
|
||||
This module is fully covered with [godoc](http://godoc.org/github.com/thejerf/suture),
|
||||
including an example, usage, and everything else you might expect from a
|
||||
README.md on GitHub. (DRY.)
|
||||
|
||||
This is not currently tagged with particular git tags for Go as this is
|
||||
currently considered to be alpha code. As I move this into production and
|
||||
feel more confident about it, I'll give it relevant tags.
|
||||
|
||||
Code Signing
|
||||
------------
|
||||
|
||||
Starting with the commit after ac7cf8591b, I will be signing this repository
|
||||
with the ["jerf" keybase account](https://keybase.io/jerf).
|
||||
|
||||
Aspiration
|
||||
----------
|
||||
|
||||
One of the big wins the Erlang community has with their pervasive OTP
|
||||
support is that it makes it easy for them to distribute libraries that
|
||||
easily fit into the OTP paradigm. It ought to someday be considered a good
|
||||
idea to distribute libraries that provide some sort of supervisor tree
|
||||
functionality out of the box. It is possible to provide this functionality
|
||||
without explicitly depending on the Suture library.
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/pre-commit (generated, vendored, new file): 12 lines added
@@ -0,0 +1,12 @@
|
||||
#!/bin/bash
|
||||
|
||||
GOLINTOUT=$(golint *go)
|
||||
|
||||
if [ ! -z "$GOLINTOUT" -o "$?" != 0 ]; then
|
||||
echo golint failed:
|
||||
echo $GOLINTOUT
|
||||
exit 1
|
||||
fi
|
||||
|
||||
go test
|
||||
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/suture.go (generated, vendored, new file): 650 lines added
@@ -0,0 +1,650 @@
|
||||
/*
|
||||
|
||||
Package suture provides Erlang-like supervisor trees.
|
||||
|
||||
This implements Erlang-esque supervisor trees, as adapted for Go. This is
|
||||
intended to be an industrial-strength implementation, but it has not yet
|
||||
been deployed in a hostile environment. (It's headed there, though.)
|
||||
|
||||
Supervisor Tree -> SuTree -> suture -> holds your code together when it's
|
||||
trying to fall apart.
|
||||
|
||||
Why use Suture?
|
||||
|
||||
* You want to write bullet-resistant services that will remain available
|
||||
despite unforeseen failure.
|
||||
* You need the code to be smart enough not to consume 100% of the CPU
|
||||
restarting things.
|
||||
* You want to easily compose multiple such services in one program.
|
||||
* You want the Erlang programmers to stop lording their supervision
|
||||
trees over you.
|
||||
|
||||
Suture has 100% test coverage, and is golint clean. This doesn't prove it
|
||||
free of bugs, but it shows I care.
|
||||
|
||||
A blog post describing the design decisions is available at
|
||||
http://www.jerf.org/iri/post/2930 .
|
||||
|
||||
Using Suture
|
||||
|
||||
To idiomatically use Suture, create a Supervisor which is your top level
|
||||
"application" supervisor. This will often occur in your program's "main"
|
||||
function.
|
||||
|
||||
Create "Service"s, which implement the Service interface. .Add() them
|
||||
to your Supervisor. Supervisors are also services, so you can create a
|
||||
tree structure here, depending on the exact combination of restarts
|
||||
you want to create.
|
||||
|
||||
Finally, as what is probably the last line of your main() function, call
|
||||
.Serve() on your top level supervisor. This will start all the services
|
||||
you've defined.
|
||||
|
||||
See the Example for an example, using a simple service that serves out
|
||||
incrementing integers.
|
||||
|
||||
*/
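A rough sketch of the workflow this comment describes, as seen from a hypothetical client package; NumberService is a placeholder for any type implementing the Service interface defined later in this file:

package main

import "github.com/thejerf/suture"

func main() {
	supervisor := suture.NewSimple("MyApp") // top-level "application" supervisor
	supervisor.Add(&NumberService{})        // NumberService: placeholder Service implementation
	supervisor.Serve()                      // blocks; typically the last line of main
}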
|
||||
package suture
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"log"
|
||||
"math"
|
||||
"runtime"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
notRunning = iota
|
||||
normal
|
||||
paused
|
||||
)
|
||||
|
||||
type supervisorID uint32
|
||||
type serviceID uint32
|
||||
|
||||
var currentSupervisorID uint32
|
||||
|
||||
// ErrWrongSupervisor is returned by the (*Supervisor).Remove method
|
||||
// if you pass a ServiceToken from the wrong Supervisor.
|
||||
var ErrWrongSupervisor = errors.New("wrong supervisor for this service token, no service removed")
|
||||
|
||||
// ServiceToken is an opaque identifier that can be used to terminate a service that
|
||||
// has been Add()ed to a Supervisor.
|
||||
type ServiceToken struct {
|
||||
id uint64
|
||||
}
|
||||
|
||||
/*
|
||||
Supervisor is the core type of the module that represents a Supervisor.
|
||||
|
||||
Supervisors should be constructed either by New or NewSimple.
|
||||
|
||||
Once constructed, a Supervisor should be started in one of three ways:
|
||||
|
||||
1. Calling .Serve().
|
||||
2. Calling .ServeBackground().
|
||||
3. Adding it to an existing Supervisor.
|
||||
|
||||
Calling Serve will cause the supervisor to run until it is shut down by
|
||||
an external user calling Stop() on it. If that never happens, it simply
|
||||
runs forever. I suggest creating your services in Supervisors, then making
|
||||
a Serve() call on your top-level Supervisor be the last line of your main
|
||||
func.
|
||||
|
||||
Calling ServeBackground will CORRECTLY start the supervisor running in a
|
||||
new goroutine. You do not want to just:
|
||||
|
||||
go supervisor.Serve()
|
||||
|
||||
because that will briefly create a race condition as it starts up, if you
|
||||
try to .Add() services immediately afterward.
|
||||
|
||||
*/
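Following the note above, a short sketch of the safe startup order; sup and worker are placeholder names:

sup := suture.NewSimple("background")
sup.ServeBackground() // per its doc comment, returns once it is safe to call Add
sup.Add(worker)       // worker: any Service value (placeholder); no startup race, unlike a bare go sup.Serve()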
|
||||
type Supervisor struct {
|
||||
Name string
|
||||
id supervisorID
|
||||
|
||||
failureDecay float64
|
||||
failureThreshold float64
|
||||
failureBackoff time.Duration
|
||||
timeout time.Duration
|
||||
log func(string)
|
||||
services map[serviceID]Service
|
||||
lastFail time.Time
|
||||
failures float64
|
||||
restartQueue []serviceID
|
||||
state uint8
|
||||
serviceCounter serviceID
|
||||
control chan supervisorMessage
|
||||
resumeTimer <-chan time.Time
|
||||
|
||||
// The testing uses the ability to grab these individual logging functions
|
||||
// and get inside of suture's handling at a deep level.
|
||||
// If you ever come up with some need to get into these, submit a pull
|
||||
// request to make them public and some smidge of justification, and
|
||||
// I'll happily do it.
|
||||
logBadStop func(Service)
|
||||
logFailure func(service Service, currentFailures float64, failureThreshold float64, restarting bool, error interface{}, stacktrace []byte)
|
||||
logBackoff func(*Supervisor, bool)
|
||||
|
||||
// avoid a dependency on github.com/thejerf/abtime by just implementing
|
||||
// a minimal chunk.
|
||||
getNow func() time.Time
|
||||
getResume func(time.Duration) <-chan time.Time
|
||||
}
|
||||
|
||||
// Spec is used to pass arguments to the New function to create a
|
||||
// supervisor. See the New function for full documentation.
|
||||
type Spec struct {
|
||||
Log func(string)
|
||||
FailureDecay float64
|
||||
FailureThreshold float64
|
||||
FailureBackoff time.Duration
|
||||
Timeout time.Duration
|
||||
}
|
||||
|
||||
/*
|
||||
|
||||
New is the full constructor function for a supervisor.
|
||||
|
||||
The name is a friendly human name for the supervisor, used in logging. Suture
|
||||
does not care if this is unique, but it is good for your sanity if it is.
|
||||
|
||||
If not set, the following values are used:
|
||||
|
||||
* Log: A function is created that uses log.Print.
|
||||
* FailureDecay: 30 seconds
|
||||
* FailureThreshold: 5 failures
|
||||
* FailureBackoff: 15 seconds
|
||||
* Timeout: 10 seconds
|
||||
|
||||
The Log function will be called when errors occur. Suture will log the
|
||||
following:
|
||||
|
||||
* When a service has failed, with a descriptive message about the
|
||||
current backoff status, and whether it was immediately restarted
|
||||
* When the supervisor has gone into its backoff mode, and when it
|
||||
exits it
|
||||
* When a service fails to stop
|
||||
|
||||
The failureDecay, failureThreshold, and failureBackoff settings control how failures
|
||||
are handled, in order to avoid the supervisor failure case where the
|
||||
program does nothing but restarting failed services. If you do not
|
||||
care how failures behave, the default values should be fine for the
|
||||
vast majority of services, but if you want the details:
|
||||
|
||||
The supervisor tracks the number of failures that have occurred, with an
|
||||
exponential decay on the count. Every FailureDecay seconds, the number of
|
||||
failures that have occurred is cut in half. (This is done smoothly with an
|
||||
exponential function.) When a failure occurs, the number of failures
|
||||
is incremented by one. When the number of failures passes the
|
||||
FailureThreshold, the entire service waits for FailureBackoff seconds
|
||||
before attempting any further restarts, at which point it resets its
|
||||
failure count to zero.
|
||||
|
||||
Timeout is how long Suture will wait for a service to properly terminate.
|
||||
|
||||
*/
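The decay rule spelled out above condenses to one expression. The helper below is only an illustrative sketch mirroring handleFailedService further down (it reuses the math import this file already has; the concrete numbers are assumptions, not defaults beyond FailureDecay):

func decayedFailures(failures, failureDecay, secondsSinceLastFail float64) float64 {
	intervals := secondsSinceLastFail / failureDecay // half-lives elapsed since the last failure
	return failures*math.Pow(0.5, intervals) + 1     // halve once per interval, then count the new failure
}

// For example, with the default FailureDecay of 30 seconds, 2.0 accumulated
// failures, and a new failure 30 seconds after the last one:
// decayedFailures(2, 30, 30) == 2*0.5 + 1 == 2.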
|
||||
func New(name string, spec Spec) (s *Supervisor) {
|
||||
s = new(Supervisor)
|
||||
|
||||
s.Name = name
|
||||
s.id = supervisorID(atomic.AddUint32(&currentSupervisorID, 1))
|
||||
|
||||
if spec.Log == nil {
|
||||
s.log = func(msg string) {
|
||||
log.Print(fmt.Sprintf("Supervisor %s: %s", s.Name, msg))
|
||||
}
|
||||
} else {
|
||||
s.log = spec.Log
|
||||
}
|
||||
|
||||
if spec.FailureDecay == 0 {
|
||||
s.failureDecay = 30
|
||||
} else {
|
||||
s.failureDecay = spec.FailureDecay
|
||||
}
|
||||
if spec.FailureThreshold == 0 {
|
||||
s.failureThreshold = 5
|
||||
} else {
|
||||
s.failureThreshold = spec.FailureThreshold
|
||||
}
|
||||
if spec.FailureBackoff == 0 {
|
||||
s.failureBackoff = time.Second * 15
|
||||
} else {
|
||||
s.failureBackoff = spec.FailureBackoff
|
||||
}
|
||||
if spec.Timeout == 0 {
|
||||
s.timeout = time.Second * 10
|
||||
} else {
|
||||
s.timeout = spec.Timeout
|
||||
}
|
||||
|
||||
// overriding these allows for testing the threshold behavior
|
||||
s.getNow = time.Now
|
||||
s.getResume = time.After
|
||||
|
||||
s.control = make(chan supervisorMessage)
|
||||
s.services = make(map[serviceID]Service)
|
||||
s.restartQueue = make([]serviceID, 0, 1)
|
||||
s.resumeTimer = make(chan time.Time)
|
||||
|
||||
// set up the default logging handlers
|
||||
s.logBadStop = func(service Service) {
|
||||
s.log(fmt.Sprintf("Service %s failed to terminate in a timely manner", serviceName(service)))
|
||||
}
|
||||
s.logFailure = func(service Service, failures float64, threshold float64, restarting bool, err interface{}, st []byte) {
|
||||
var errString string
|
||||
|
||||
e, canError := err.(error)
|
||||
if canError {
|
||||
errString = e.Error()
|
||||
} else {
|
||||
errString = fmt.Sprintf("%#v", err)
|
||||
}
|
||||
|
||||
s.log(fmt.Sprintf("Failed service '%s' (%f failures of %f), restarting: %#v, error: %s, stacktrace: %s", serviceName(service), failures, threshold, restarting, errString, string(st)))
|
||||
}
|
||||
s.logBackoff = func(s *Supervisor, entering bool) {
|
||||
if entering {
|
||||
s.log("Entering the backoff state.")
|
||||
} else {
|
||||
s.log("Exiting backoff state.")
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
func serviceName(service Service) (serviceName string) {
|
||||
stringer, canStringer := service.(fmt.Stringer)
|
||||
if canStringer {
|
||||
serviceName = stringer.String()
|
||||
} else {
|
||||
serviceName = fmt.Sprintf("%#v", service)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// NewSimple is a convenience function to create a service with just a name
|
||||
// and the sensible defaults.
|
||||
func NewSimple(name string) *Supervisor {
|
||||
return New(name, Spec{})
|
||||
}
|
||||
|
||||
/*
|
||||
Service is the interface that describes a service to a Supervisor.
|
||||
|
||||
Serve Method
|
||||
|
||||
The Serve method is called by a Supervisor to start the service.
|
||||
The service should execute within the goroutine that this is
|
||||
called in. If this function either returns or panics, the Supervisor
|
||||
will call it again.
|
||||
|
||||
A Serve method SHOULD do as much cleanup of the state as possible,
|
||||
to prevent any corruption in the previous state from crashing the
|
||||
service again.
|
||||
|
||||
Stop Method
|
||||
|
||||
This method is used by the supervisor to stop the service. Calling this
|
||||
directly on a Service given to a Supervisor will simply result in the
|
||||
Service being restarted; use the Supervisor's .Remove(ServiceToken) method
|
||||
to stop a service. A supervisor will call .Stop() only once. Thus, it may
|
||||
be as destructive as it likes to get the service to stop.
|
||||
|
||||
Once Stop has been called on a Service, the Service SHOULD NOT be
|
||||
reused in any other supervisor! Because of the impossibility of
|
||||
guaranteeing that the service has actually stopped in Go, you can't
|
||||
prove that you won't be starting two goroutines using the exact
|
||||
same memory to store state, causing completely unpredictable behavior.
|
||||
|
||||
Stop should not return until the service has actually stopped.
|
||||
"Stopped" here is defined as "the service will stop servicing any
|
||||
further requests in the future". For instance, a common implementation
|
||||
is to receive a message on a dedicated "stop" channel and immediately
|
||||
returning. Once the stop command has been processed, the service is
|
||||
stopped.
|
||||
|
||||
Another common Stop implementation is to forcibly close an open socket
|
||||
or other resource, which will cause detectable errors to manifest in the
|
||||
service code. Bear in mind that to perfectly correctly use this
|
||||
approach requires a bit more work to handle the chance of a Stop
|
||||
command coming in before the resource has been created.
|
||||
|
||||
If a service does not Stop within the supervisor's timeout duration, a log
|
||||
entry will be made with a descriptive string to that effect. This does
|
||||
not guarantee that the service is hung; it may still get around to being
|
||||
properly stopped in the future. Until the service is fully stopped,
|
||||
both the service and the spawned goroutine trying to stop it will be
|
||||
"leaked".
|
||||
|
||||
Stringer Interface
|
||||
|
||||
It is not mandatory to implement the fmt.Stringer interface on your
|
||||
service, but if your Service does happen to implement that, the log
|
||||
messages that describe your service will use that when naming the
|
||||
service. Otherwise, you'll see the GoString of your service object,
|
||||
obtained via fmt.Sprintf("%#v", service).
|
||||
|
||||
*/
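A minimal Service sketch using the dedicated stop-channel pattern described above; the pinger type is hypothetical, and the Incrementor service in suture_simple_test.go, added later in this changeset, follows the same shape:

type pinger struct {
	requests chan string
	stop     chan struct{}
}

func (p *pinger) Serve() {
	for {
		select {
		case req := <-p.requests:
			_ = req // handle one request
		case <-p.stop:
			return // the stop command has been processed; the service is now stopped
		}
	}
}

func (p *pinger) Stop() {
	p.stop <- struct{}{} // unbuffered send: does not return until Serve has received it
}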
|
||||
type Service interface {
|
||||
Serve()
|
||||
Stop()
|
||||
}
|
||||
|
||||
/*
|
||||
Add adds a service to this supervisor.
|
||||
|
||||
If the supervisor is currently running, the service will be started
|
||||
immediately. If the supervisor is not currently running, the service
|
||||
will be started when the supervisor is.
|
||||
|
||||
The returned ServiceID may be passed to the Remove method of the Supervisor
|
||||
to terminate the service.
|
||||
*/
|
||||
func (s *Supervisor) Add(service Service) ServiceToken {
|
||||
if s == nil {
|
||||
panic("can't add service to nil *suture.Supervisor")
|
||||
}
|
||||
|
||||
if s.state == notRunning {
|
||||
id := s.serviceCounter
|
||||
s.serviceCounter++
|
||||
|
||||
s.services[id] = service
|
||||
s.restartQueue = append(s.restartQueue, id)
|
||||
|
||||
return ServiceToken{uint64(s.id)<<32 | uint64(id)}
|
||||
}
|
||||
|
||||
response := make(chan serviceID)
|
||||
s.control <- addService{service, response}
|
||||
return ServiceToken{uint64(s.id)<<32 | uint64(<-response)}
|
||||
}
|
||||
|
||||
// ServeBackground starts running a supervisor in its own goroutine. This
|
||||
// method does not return until it is safe to use .Add() on the Supervisor.
|
||||
func (s *Supervisor) ServeBackground() {
|
||||
go s.Serve()
|
||||
s.sync()
|
||||
}
|
||||
|
||||
/*
|
||||
Serve starts the supervisor. You should call this on the top-level supervisor,
|
||||
but nothing else.
|
||||
*/
|
||||
func (s *Supervisor) Serve() {
|
||||
if s == nil {
|
||||
panic("Can't serve with a nil *suture.Supervisor")
|
||||
}
|
||||
if s.id == 0 {
|
||||
panic("Can't call Serve on an incorrectly-constructed *suture.Supervisor")
|
||||
}
|
||||
|
||||
defer func() {
|
||||
s.state = notRunning
|
||||
}()
|
||||
|
||||
if s.state != notRunning {
|
||||
// FIXME: Don't explain why I don't need a semaphore, just use one
|
||||
// This doesn't use a semaphore because it's just a sanity check.
|
||||
panic("Running a supervisor while it is already running?")
|
||||
}
|
||||
|
||||
s.state = normal
|
||||
|
||||
// for all the services I currently know about, start them
|
||||
for _, id := range s.restartQueue {
|
||||
service, present := s.services[id]
|
||||
if present {
|
||||
s.runService(service, id)
|
||||
}
|
||||
}
|
||||
s.restartQueue = make([]serviceID, 0, 1)
|
||||
|
||||
for {
|
||||
select {
|
||||
case m := <-s.control:
|
||||
switch msg := m.(type) {
|
||||
case serviceFailed:
|
||||
s.handleFailedService(msg.id, msg.err, msg.stacktrace)
|
||||
case serviceEnded:
|
||||
service, monitored := s.services[msg.id]
|
||||
if monitored {
|
||||
s.handleFailedService(msg.id, fmt.Sprintf("%s returned unexpectedly", service), []byte("[unknown stack trace]"))
|
||||
}
|
||||
case addService:
|
||||
id := s.serviceCounter
|
||||
s.serviceCounter++
|
||||
|
||||
s.services[id] = msg.service
|
||||
s.runService(msg.service, id)
|
||||
|
||||
msg.response <- id
|
||||
case removeService:
|
||||
s.removeService(msg.id)
|
||||
case stopSupervisor:
|
||||
for id := range s.services {
|
||||
s.removeService(id)
|
||||
}
|
||||
return
|
||||
case listServices:
|
||||
services := []Service{}
|
||||
for _, service := range s.services {
|
||||
services = append(services, service)
|
||||
}
|
||||
msg.c <- services
|
||||
case syncSupervisor:
|
||||
// this does nothing on purpose; its sole purpose is to
|
||||
// introduce a sync point via the channel receive
|
||||
case panicSupervisor:
|
||||
// used only by tests
|
||||
panic("Panicking as requested!")
|
||||
}
|
||||
case _ = <-s.resumeTimer:
|
||||
// We're resuming normal operation after a pause due to
|
||||
// excessive thrashing
|
||||
// FIXME: Ought to permit some spacing of these functions, rather
|
||||
// than simply hammering through them
|
||||
s.state = normal
|
||||
s.failures = 0
|
||||
s.logBackoff(s, false)
|
||||
for _, id := range s.restartQueue {
|
||||
service, present := s.services[id]
|
||||
if present {
|
||||
s.runService(service, id)
|
||||
}
|
||||
}
|
||||
s.restartQueue = make([]serviceID, 0, 1)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Supervisor) handleFailedService(id serviceID, err interface{}, stacktrace []byte) {
|
||||
now := s.getNow()
|
||||
|
||||
if s.lastFail.IsZero() {
|
||||
s.lastFail = now
|
||||
s.failures = 1.0
|
||||
} else {
|
||||
sinceLastFail := now.Sub(s.lastFail).Seconds()
|
||||
intervals := sinceLastFail / s.failureDecay
|
||||
s.failures = s.failures*math.Pow(.5, intervals) + 1
|
||||
}
|
||||
|
||||
if s.failures > s.failureThreshold {
|
||||
s.state = paused
|
||||
s.logBackoff(s, true)
|
||||
s.resumeTimer = s.getResume(s.failureBackoff)
|
||||
}
|
||||
|
||||
s.lastFail = now
|
||||
|
||||
failedService, monitored := s.services[id]
|
||||
|
||||
// It is possible for a service to be no longer monitored
|
||||
// by the time we get here. In that case, just ignore it.
|
||||
if monitored {
|
||||
if s.state == normal {
|
||||
s.runService(failedService, id)
|
||||
s.logFailure(failedService, s.failures, s.failureThreshold, true, err, stacktrace)
|
||||
} else {
|
||||
// FIXME: When restarting, check that the service still
|
||||
// exists (it may have been stopped in the meantime)
|
||||
s.restartQueue = append(s.restartQueue, id)
|
||||
s.logFailure(failedService, s.failures, s.failureThreshold, false, err, stacktrace)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Supervisor) runService(service Service, id serviceID) {
|
||||
go func() {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
buf := make([]byte, 65535, 65535)
|
||||
written := runtime.Stack(buf, false)
|
||||
buf = buf[:written]
|
||||
s.fail(id, r, buf)
|
||||
}
|
||||
}()
|
||||
|
||||
service.Serve()
|
||||
|
||||
s.serviceEnded(id)
|
||||
}()
|
||||
}
|
||||
|
||||
func (s *Supervisor) removeService(id serviceID) {
|
||||
service, present := s.services[id]
|
||||
if present {
|
||||
delete(s.services, id)
|
||||
go func() {
|
||||
successChan := make(chan bool)
|
||||
go func() {
|
||||
service.Stop()
|
||||
successChan <- true
|
||||
}()
|
||||
|
||||
failChan := s.getResume(s.timeout)
|
||||
|
||||
select {
|
||||
case <-successChan:
|
||||
// Life is good!
|
||||
case <-failChan:
|
||||
s.logBadStop(service)
|
||||
}
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
// String implements the fmt.Stringer interface.
|
||||
func (s *Supervisor) String() string {
|
||||
return s.Name
|
||||
}
|
||||
|
||||
// sum type pattern for type-safe message passing; see
|
||||
// http://www.jerf.org/iri/post/2917
|
||||
|
||||
type supervisorMessage interface {
|
||||
isSupervisorMessage()
|
||||
}
|
||||
|
||||
/*
|
||||
Remove will remove the given service from the Supervisor, and attempt to Stop() it.
|
||||
The ServiceID token comes from the Add() call.
|
||||
*/
|
||||
func (s *Supervisor) Remove(id ServiceToken) error {
|
||||
sID := supervisorID(id.id >> 32)
|
||||
if sID != s.id {
|
||||
return ErrWrongSupervisor
|
||||
}
|
||||
s.control <- removeService{serviceID(id.id & 0xffffffff)}
|
||||
return nil
|
||||
}
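A short usage fragment tying Add and Remove together; sup and service are placeholder values in a hypothetical client package:

token := sup.Add(service) // keep the ServiceToken if you may want to stop the service later
// ...
if err := sup.Remove(token); err == suture.ErrWrongSupervisor {
	// the token was issued by a different supervisor; nothing was removed
}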
|
||||
|
||||
/*
|
||||
|
||||
Services returns a []Service containing a snapshot of the services this
|
||||
Supervisor is managing.
|
||||
|
||||
*/
|
||||
func (s *Supervisor) Services() []Service {
|
||||
ls := listServices{make(chan []Service)}
|
||||
s.control <- ls
|
||||
return <-ls.c
|
||||
}
|
||||
|
||||
type listServices struct {
|
||||
c chan []Service
|
||||
}
|
||||
|
||||
func (ls listServices) isSupervisorMessage() {}
|
||||
|
||||
type removeService struct {
|
||||
id serviceID
|
||||
}
|
||||
|
||||
func (rs removeService) isSupervisorMessage() {}
|
||||
|
||||
func (s *Supervisor) sync() {
|
||||
s.control <- syncSupervisor{}
|
||||
}
|
||||
|
||||
type syncSupervisor struct {
|
||||
}
|
||||
|
||||
func (ss syncSupervisor) isSupervisorMessage() {}
|
||||
|
||||
func (s *Supervisor) fail(id serviceID, err interface{}, stacktrace []byte) {
|
||||
s.control <- serviceFailed{id, err, stacktrace}
|
||||
}
|
||||
|
||||
type serviceFailed struct {
|
||||
id serviceID
|
||||
err interface{}
|
||||
stacktrace []byte
|
||||
}
|
||||
|
||||
func (sf serviceFailed) isSupervisorMessage() {}
|
||||
|
||||
func (s *Supervisor) serviceEnded(id serviceID) {
|
||||
s.control <- serviceEnded{id}
|
||||
}
|
||||
|
||||
type serviceEnded struct {
|
||||
id serviceID
|
||||
}
|
||||
|
||||
func (s serviceEnded) isSupervisorMessage() {}
|
||||
|
||||
// added by the Add() method
|
||||
type addService struct {
|
||||
service Service
|
||||
response chan serviceID
|
||||
}
|
||||
|
||||
func (as addService) isSupervisorMessage() {}
|
||||
|
||||
// Stop stops the Supervisor.
|
||||
func (s *Supervisor) Stop() {
|
||||
s.control <- stopSupervisor{}
|
||||
}
|
||||
|
||||
type stopSupervisor struct {
|
||||
}
|
||||
|
||||
func (ss stopSupervisor) isSupervisorMessage() {}
|
||||
|
||||
func (s *Supervisor) panic() {
|
||||
s.control <- panicSupervisor{}
|
||||
}
|
||||
|
||||
type panicSupervisor struct {
|
||||
}
|
||||
|
||||
func (ps panicSupervisor) isSupervisorMessage() {}
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/suture_simple_test.go (generated, vendored, new file): 49 lines added
@@ -0,0 +1,49 @@
|
||||
package suture
|
||||
|
||||
import "fmt"
|
||||
|
||||
type Incrementor struct {
|
||||
current int
|
||||
next chan int
|
||||
stop chan bool
|
||||
}
|
||||
|
||||
func (i *Incrementor) Stop() {
|
||||
fmt.Println("Stopping the service")
|
||||
i.stop <- true
|
||||
}
|
||||
|
||||
func (i *Incrementor) Serve() {
|
||||
for {
|
||||
select {
|
||||
case i.next <- i.current:
|
||||
i.current += 1
|
||||
case <-i.stop:
|
||||
// We sync here just to guarantee the output of "Stopping the service",
|
||||
// so this passes the test reliably.
|
||||
// Most services would simply "return" here.
|
||||
i.stop <- true
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func ExampleNew_simple() {
|
||||
supervisor := NewSimple("Supervisor")
|
||||
service := &Incrementor{0, make(chan int), make(chan bool)}
|
||||
supervisor.Add(service)
|
||||
|
||||
go supervisor.ServeBackground()
|
||||
|
||||
fmt.Println("Got:", <-service.next)
|
||||
fmt.Println("Got:", <-service.next)
|
||||
supervisor.Stop()
|
||||
|
||||
// We sync here just to guarantee the output of "Stopping the service"
|
||||
<-service.stop
|
||||
|
||||
// Output:
|
||||
// Got: 0
|
||||
// Got: 1
|
||||
// Stopping the service
|
||||
}
|
||||
Godeps/_workspace/src/github.com/thejerf/suture/suture_test.go (generated, vendored, new file): 592 lines added
@@ -0,0 +1,592 @@
|
||||
package suture
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
Happy = iota
|
||||
Fail
|
||||
Panic
|
||||
Hang
|
||||
UseStopChan
|
||||
)
|
||||
|
||||
var everMultistarted = false
|
||||
|
||||
// Test that supervisors work perfectly when everything is hunky dory.
|
||||
func TestTheHappyCase(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("A")
|
||||
if s.String() != "A" {
|
||||
t.Fatal("Can't get name from a supervisor")
|
||||
}
|
||||
service := NewService("B")
|
||||
|
||||
s.Add(service)
|
||||
|
||||
go s.Serve()
|
||||
|
||||
<-service.started
|
||||
|
||||
// If we stop the service, it just gets restarted
|
||||
service.Stop()
|
||||
<-service.started
|
||||
|
||||
// And it is shut down when we stop the supervisor
|
||||
service.take <- UseStopChan
|
||||
s.Stop()
|
||||
<-service.stop
|
||||
}
|
||||
|
||||
// Test that adding to a running supervisor does indeed start the service.
|
||||
func TestAddingToRunningSupervisor(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("A1")
|
||||
|
||||
s.ServeBackground()
|
||||
defer s.Stop()
|
||||
|
||||
service := NewService("B1")
|
||||
s.Add(service)
|
||||
|
||||
<-service.started
|
||||
|
||||
services := s.Services()
|
||||
if !reflect.DeepEqual([]Service{service}, services) {
|
||||
t.Fatal("Can't get list of services as expected.")
|
||||
}
|
||||
}
|
||||
|
||||
// Test what happens when services fail.
|
||||
func TestFailures(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("A2")
|
||||
s.failureThreshold = 3.5
|
||||
|
||||
go s.Serve()
|
||||
defer func() {
|
||||
// to avoid deadlocks during shutdown, we have to not try to send
|
||||
// things out on channels while we're shutting down (this undoes the
|
||||
// logFailure override about 25 lines down)
|
||||
s.logFailure = func(Service, float64, float64, bool, interface{}, []byte) {}
|
||||
s.Stop()
|
||||
}()
|
||||
s.sync()
|
||||
|
||||
service1 := NewService("B2")
|
||||
service2 := NewService("C2")
|
||||
|
||||
s.Add(service1)
|
||||
<-service1.started
|
||||
s.Add(service2)
|
||||
<-service2.started
|
||||
|
||||
nowFeeder := NewNowFeeder()
|
||||
pastVal := time.Unix(1000000, 0)
|
||||
nowFeeder.appendTimes(pastVal)
|
||||
s.getNow = nowFeeder.getter
|
||||
|
||||
resumeChan := make(chan time.Time)
|
||||
s.getResume = func(d time.Duration) <-chan time.Time {
|
||||
return resumeChan
|
||||
}
|
||||
|
||||
failNotify := make(chan bool)
|
||||
// use this to synchronize on here
|
||||
s.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
|
||||
failNotify <- r
|
||||
}
|
||||
|
||||
// All that setup was for this: Service1, please return now.
|
||||
service1.take <- Fail
|
||||
restarted := <-failNotify
|
||||
<-service1.started
|
||||
|
||||
if !restarted || s.failures != 1 || s.lastFail != pastVal {
|
||||
t.Fatal("Did not fail in the expected manner")
|
||||
}
|
||||
// Getting past this means the service was restarted.
|
||||
service1.take <- Happy
|
||||
|
||||
// Service2, your turn.
|
||||
service2.take <- Fail
|
||||
nowFeeder.appendTimes(pastVal)
|
||||
restarted = <-failNotify
|
||||
<-service2.started
|
||||
if !restarted || s.failures != 2 || s.lastFail != pastVal {
|
||||
t.Fatal("Did not fail in the expected manner")
|
||||
}
|
||||
// And you're back. (That is, the correct service was restarted.)
|
||||
service2.take <- Happy
|
||||
|
||||
// Now, one failureDecay later, is everything working correctly?
|
||||
oneDecayLater := time.Unix(1000030, 0)
|
||||
nowFeeder.appendTimes(oneDecayLater)
|
||||
service2.take <- Fail
|
||||
restarted = <-failNotify
|
||||
<-service2.started
|
||||
// playing a bit fast and loose here with floating point, but...
|
||||
// we get 2 by taking the current failure value of 2, decaying it
|
||||
// by one interval, which cuts it in half to 1, then adding 1 again,
|
||||
// all of which "should" be precise
|
||||
if !restarted || s.failures != 2 || s.lastFail != oneDecayLater {
|
||||
t.Fatal("Did not decay properly", s.lastFail, oneDecayLater)
|
||||
}
|
||||
|
||||
// For a change of pace, service1 would you be so kind as to panic?
|
||||
nowFeeder.appendTimes(oneDecayLater)
|
||||
service1.take <- Panic
|
||||
restarted = <-failNotify
|
||||
<-service1.started
|
||||
if !restarted || s.failures != 3 || s.lastFail != oneDecayLater {
|
||||
t.Fatal("Did not correctly recover from a panic")
|
||||
}
|
||||
|
||||
nowFeeder.appendTimes(oneDecayLater)
|
||||
backingoff := make(chan bool)
|
||||
s.logBackoff = func(s *Supervisor, backingOff bool) {
|
||||
backingoff <- backingOff
|
||||
}
|
||||
|
||||
// And with this failure, we trigger the backoff code.
|
||||
service1.take <- Fail
|
||||
backoff := <-backingoff
|
||||
restarted = <-failNotify
|
||||
|
||||
if !backoff || restarted || s.failures != 4 {
|
||||
t.Fatal("Broke past the threshold but did not log correctly", s.failures)
|
||||
}
|
||||
if service1.existing != 0 {
|
||||
t.Fatal("service1 still exists according to itself?")
|
||||
}
|
||||
|
||||
// service2 is still running, because we don't shut anything down in a
|
||||
// backoff, we just stop restarting.
|
||||
service2.take <- Happy
|
||||
|
||||
var correct bool
|
||||
timer := time.NewTimer(time.Millisecond * 10)
|
||||
// verify the service has not been restarted
|
||||
// hard to get around race conditions here without simply using a timer...
|
||||
select {
|
||||
case service1.take <- Happy:
|
||||
correct = false
|
||||
case <-timer.C:
|
||||
correct = true
|
||||
}
|
||||
if !correct {
|
||||
t.Fatal("Restarted the service during the backoff interval")
|
||||
}
|
||||
|
||||
// tell the supervisor the restart interval has passed
|
||||
resumeChan <- time.Time{}
|
||||
backoff = <-backingoff
|
||||
<-service1.started
|
||||
s.sync()
|
||||
if s.failures != 0 {
|
||||
t.Fatal("Did not reset failure count after coming back from timeout.")
|
||||
}
|
||||
|
||||
nowFeeder.appendTimes(oneDecayLater)
|
||||
service1.take <- Fail
|
||||
restarted = <-failNotify
|
||||
<-service1.started
|
||||
if !restarted || backoff {
|
||||
t.Fatal("For some reason, got that we were backing off again.", restarted, backoff)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRunningAlreadyRunning(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("A3")
|
||||
go s.Serve()
|
||||
defer s.Stop()
|
||||
|
||||
// ensure the supervisor has made it to its main loop
|
||||
s.sync()
|
||||
var errored bool
|
||||
func() {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
errored = true
|
||||
}
|
||||
}()
|
||||
|
||||
s.Serve()
|
||||
}()
|
||||
if !errored {
|
||||
t.Fatal("Supervisor failed to prevent itself from double-running.")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFullConstruction(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := New("Moo", Spec{
|
||||
Log: func(string) {},
|
||||
FailureDecay: 1,
|
||||
FailureThreshold: 2,
|
||||
FailureBackoff: 3,
|
||||
Timeout: time.Second * 29,
|
||||
})
|
||||
if s.String() != "Moo" || s.failureDecay != 1 || s.failureThreshold != 2 || s.failureBackoff != 3 || s.timeout != time.Second*29 {
|
||||
t.Fatal("Full construction failed somehow")
|
||||
}
|
||||
}
|
||||
|
||||
// This is mostly for coverage testing.
|
||||
func TestDefaultLogging(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("A4")
|
||||
|
||||
service := NewService("B4")
|
||||
s.Add(service)
|
||||
|
||||
s.failureThreshold = .5
|
||||
s.failureBackoff = time.Millisecond * 25
|
||||
go s.Serve()
|
||||
s.sync()
|
||||
|
||||
<-service.started
|
||||
|
||||
resumeChan := make(chan time.Time)
|
||||
s.getResume = func(d time.Duration) <-chan time.Time {
|
||||
return resumeChan
|
||||
}
|
||||
|
||||
service.take <- UseStopChan
|
||||
service.take <- Fail
|
||||
<-service.stop
|
||||
resumeChan <- time.Time{}
|
||||
|
||||
<-service.started
|
||||
|
||||
service.take <- Happy
|
||||
|
||||
serviceName(&BarelyService{})
|
||||
|
||||
s.logBadStop(service)
|
||||
s.logFailure(service, 1, 1, true, errors.New("test error"), []byte{})
|
||||
|
||||
s.Stop()
|
||||
}
|
||||
|
||||
func TestNestedSupervisors(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
super1 := NewSimple("Top5")
|
||||
super2 := NewSimple("Nested5")
|
||||
service := NewService("Service5")
|
||||
|
||||
super1.Add(super2)
|
||||
super2.Add(service)
|
||||
|
||||
go super1.Serve()
|
||||
super1.sync()
|
||||
|
||||
<-service.started
|
||||
service.take <- Happy
|
||||
|
||||
super1.Stop()
|
||||
}
|
||||
|
||||
func TestStoppingSupervisorStopsServices(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("Top6")
|
||||
service := NewService("Service 6")
|
||||
|
||||
s.Add(service)
|
||||
|
||||
go s.Serve()
|
||||
s.sync()
|
||||
|
||||
<-service.started
|
||||
|
||||
service.take <- UseStopChan
|
||||
|
||||
s.Stop()
|
||||
<-service.stop
|
||||
}
|
||||
|
||||
func TestStoppingStillWorksWithHungServices(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("Top7")
|
||||
service := NewService("Service WillHang7")
|
||||
|
||||
s.Add(service)
|
||||
|
||||
go s.Serve()
|
||||
|
||||
<-service.started
|
||||
|
||||
service.take <- UseStopChan
|
||||
service.take <- Hang
|
||||
|
||||
resumeChan := make(chan time.Time)
|
||||
s.getResume = func(d time.Duration) <-chan time.Time {
|
||||
return resumeChan
|
||||
}
|
||||
failNotify := make(chan struct{})
|
||||
s.logBadStop = func(s Service) {
|
||||
failNotify <- struct{}{}
|
||||
}
|
||||
|
||||
s.Stop()
|
||||
|
||||
resumeChan <- time.Time{}
|
||||
<-failNotify
|
||||
service.release <- true
|
||||
<-service.stop
|
||||
}
|
||||
|
||||
func TestRemoveService(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
s := NewSimple("Top")
|
||||
service := NewService("ServiceToRemove8")
|
||||
|
||||
id := s.Add(service)
|
||||
|
||||
go s.Serve()
|
||||
|
||||
<-service.started
|
||||
service.take <- UseStopChan
|
||||
|
||||
err := s.Remove(id)
|
||||
if err != nil {
|
||||
t.Fatal("Removing service somehow failed")
|
||||
}
|
||||
<-service.stop
|
||||
|
||||
err = s.Remove(ServiceToken{1<<36 + 1})
|
||||
if err != ErrWrongSupervisor {
|
||||
t.Fatal("Did not detect that the ServiceToken was wrong")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFailureToConstruct(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
var s *Supervisor
|
||||
|
||||
panics(func() {
|
||||
s.Serve()
|
||||
})
|
||||
|
||||
s = new(Supervisor)
|
||||
panics(func() {
|
||||
s.Serve()
|
||||
})
|
||||
}
|
||||
|
||||
func TestFailingSupervisors(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
// This is a bit of a complicated test, so let me explain what
|
||||
// all this is doing:
|
||||
// 1. Set up a top-level supervisor with a hair-trigger backoff.
|
||||
// 2. Add a supervisor to that.
|
||||
// 3. To that supervisor, add a service.
|
||||
// 4. Panic the supervisor in the middle, sending the top-level into
|
||||
// backoff.
|
||||
// 5. Kill the lower level service too.
|
||||
// 6. Verify that when the top-level service comes out of backoff,
|
||||
// the service ends up restarted as expected.
|
||||
|
||||
// Ultimately, we can't have more than a best-effort recovery here.
|
||||
// A panic'ed supervisor can't really be trusted to have consistent state,
|
||||
// and without *that*, we can't trust it to do anything sensible with
|
||||
// the children it may have been running. So unlike Erlang, we can't
|
||||
// really expect to be able to safely restart them or anything.
|
||||
// Really, the "correct" answer is that the Supervisor must never panic,
|
||||
// but in the event that it does, this verifies that it at least tries
|
||||
// to get on with life.
|
||||
|
||||
// This also tests that if a Supervisor itself panics, and one of its
|
||||
// monitored services goes down in the meantime, that the monitored
|
||||
// service also gets correctly restarted when the supervisor does.
|
||||
|
||||
s1 := NewSimple("Top9")
|
||||
s2 := NewSimple("Nested9")
|
||||
service := NewService("Service9")
|
||||
|
||||
s1.Add(s2)
|
||||
s2.Add(service)
|
||||
|
||||
go s1.Serve()
|
||||
<-service.started
|
||||
|
||||
s1.failureThreshold = .5
|
||||
|
||||
// let us control precisely when s1 comes back
|
||||
resumeChan := make(chan time.Time)
|
||||
s1.getResume = func(d time.Duration) <-chan time.Time {
|
||||
return resumeChan
|
||||
}
|
||||
failNotify := make(chan string)
|
||||
// use this to synchronize on here
|
||||
s1.logFailure = func(s Service, cf float64, ft float64, r bool, error interface{}, stacktrace []byte) {
|
||||
failNotify <- fmt.Sprintf("%s", s)
|
||||
}
|
||||
|
||||
s2.panic()
|
||||
|
||||
failing := <-failNotify
|
||||
// that's enough sync to guarantee this:
|
||||
if failing != "Nested9" || s1.state != paused {
|
||||
t.Fatal("Top-level supervisor did not go into backoff as expected")
|
||||
}
|
||||
|
||||
service.take <- Fail
|
||||
|
||||
resumeChan <- time.Time{}
|
||||
<-service.started
|
||||
}
|
||||
|
||||
func TestNilSupervisorAdd(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
var s *Supervisor
|
||||
|
||||
defer func() {
|
||||
if r := recover(); r == nil {
|
||||
t.Fatal("did not panic as expected on nil add")
|
||||
}
|
||||
}()
|
||||
|
||||
s.Add(s)
|
||||
}
|
||||
|
||||
// http://golangtutorials.blogspot.com/2011/10/gotest-unit-testing-and-benchmarking-go.html
|
||||
// claims test functions are run in the same order as the source file...
|
||||
// I'm not sure if this is part of the contract, though. Especially in the
|
||||
// face of "t.Parallel()"...
|
||||
func TestEverMultistarted(t *testing.T) {
|
||||
if everMultistarted {
|
||||
t.Fatal("Seem to have multistarted a service at some point, bummer.")
|
||||
}
|
||||
}
|
||||
|
||||
// A test service that can be induced to fail, panic, or hang on demand.
|
||||
func NewService(name string) *FailableService {
|
||||
return &FailableService{name, make(chan bool), make(chan int),
|
||||
make(chan bool, 1), make(chan bool), make(chan bool), 0}
|
||||
}
|
||||
|
||||
type FailableService struct {
|
||||
name string
|
||||
started chan bool
|
||||
take chan int
|
||||
shutdown chan bool
|
||||
release chan bool
|
||||
stop chan bool
|
||||
existing int
|
||||
}
|
||||
|
||||
func (s *FailableService) Serve() {
|
||||
if s.existing != 0 {
|
||||
everMultistarted = true
|
||||
panic("Multi-started the same service! " + s.name)
|
||||
}
|
||||
s.existing += 1
|
||||
|
||||
s.started <- true
|
||||
|
||||
useStopChan := false
|
||||
|
||||
for {
|
||||
select {
|
||||
case val := <-s.take:
|
||||
switch val {
|
||||
case Happy:
|
||||
// Do nothing on purpose. Life is good!
|
||||
case Fail:
|
||||
s.existing -= 1
|
||||
if useStopChan {
|
||||
s.stop <- true
|
||||
}
|
||||
return
|
||||
case Panic:
|
||||
s.existing -= 1
|
||||
panic("Panic!")
|
||||
case Hang:
|
||||
// or more specifically, "hang until I release you"
|
||||
<-s.release
|
||||
case UseStopChan:
|
||||
useStopChan = true
|
||||
}
|
||||
case <-s.shutdown:
|
||||
s.existing -= 1
|
||||
if useStopChan {
|
||||
s.stop <- true
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *FailableService) String() string {
|
||||
return s.name
|
||||
}
|
||||
|
||||
func (s *FailableService) Stop() {
|
||||
s.shutdown <- true
|
||||
}
|
||||
|
||||
type NowFeeder struct {
|
||||
values []time.Time
|
||||
getter func() time.Time
|
||||
m sync.Mutex
|
||||
}
|
||||
|
||||
// This is used to test serviceName; it's a service without a Stringer.
|
||||
type BarelyService struct{}
|
||||
|
||||
func (bs *BarelyService) Serve() {}
|
||||
func (bs *BarelyService) Stop() {}
|
||||
|
||||
func NewNowFeeder() (nf *NowFeeder) {
|
||||
nf = new(NowFeeder)
|
||||
nf.getter = func() time.Time {
|
||||
nf.m.Lock()
|
||||
defer nf.m.Unlock()
|
||||
if len(nf.values) > 0 {
|
||||
ret := nf.values[0]
|
||||
nf.values = nf.values[1:]
|
||||
return ret
|
||||
}
|
||||
panic("Ran out of values for NowFeeder")
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (nf *NowFeeder) appendTimes(t ...time.Time) {
|
||||
nf.m.Lock()
|
||||
defer nf.m.Unlock()
|
||||
nf.values = append(nf.values, t...)
|
||||
}
|
||||
|
||||
func panics(doesItPanic func()) (panics bool) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
panics = true
|
||||
}
|
||||
}()
|
||||
|
||||
doesItPanic()
|
||||
|
||||
return
|
||||
}
|
||||
Godeps/_workspace/src/golang.org/x/text/unicode/norm/Makefile (generated, vendored): 23 lines changed
@@ -1,23 +0,0 @@
|
||||
# Copyright 2011 The Go Authors. All rights reserved.
|
||||
# Use of this source code is governed by a BSD-style
|
||||
# license that can be found in the LICENSE file.
|
||||
|
||||
maketables: maketables.go triegen.go
|
||||
go build $^
|
||||
|
||||
normregtest: normregtest.go
|
||||
go build $^
|
||||
|
||||
tables: maketables
|
||||
./maketables > tables.go
|
||||
gofmt -w tables.go
|
||||
|
||||
# Downloads from www.unicode.org, so not part
|
||||
# of standard test scripts.
|
||||
test: testtables regtest
|
||||
|
||||
testtables: maketables
|
||||
./maketables -test > data_test.go && go test -tags=test
|
||||
|
||||
regtest: normregtest
|
||||
./normregtest
|
||||
Godeps/_workspace/src/golang.org/x/text/unicode/norm/maketables.go (generated, vendored): 208 lines changed
@@ -16,20 +16,17 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
|
||||
"golang.org/x/text/internal/gen"
|
||||
"golang.org/x/text/internal/triegen"
|
||||
"golang.org/x/text/internal/ucd"
|
||||
)
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
gen.Init()
|
||||
loadUnicodeData()
|
||||
compactCCC()
|
||||
loadCompositionExclusions()
|
||||
@@ -46,24 +43,18 @@ func main() {
|
||||
}
|
||||
}
|
||||
|
||||
var url = flag.String("url",
|
||||
"http://www.unicode.org/Public/"+unicode.Version+"/ucd/",
|
||||
"URL of Unicode database directory")
|
||||
var tablelist = flag.String("tables",
|
||||
"all",
|
||||
"comma-separated list of which tables to generate; "+
|
||||
"can be 'decomp', 'recomp', 'info' and 'all'")
|
||||
var test = flag.Bool("test",
|
||||
false,
|
||||
"test existing tables against DerivedNormalizationProps and generate test data for regression testing")
|
||||
var verbose = flag.Bool("verbose",
|
||||
false,
|
||||
"write data to stdout as it is parsed")
|
||||
var localFiles = flag.Bool("local",
|
||||
false,
|
||||
"data files have been copied to the current directory; for debugging only")
|
||||
|
||||
var logger = log.New(os.Stderr, "", log.Lshortfile)
|
||||
var (
|
||||
tablelist = flag.String("tables",
|
||||
"all",
|
||||
"comma-separated list of which tables to generate; "+
|
||||
"can be 'decomp', 'recomp', 'info' and 'all'")
|
||||
test = flag.Bool("test",
|
||||
false,
|
||||
"test existing tables against DerivedNormalizationProps and generate test data for regression testing")
|
||||
verbose = flag.Bool("verbose",
|
||||
false,
|
||||
"write data to stdout as it is parsed")
|
||||
)
|
||||
|
||||
const MaxChar = 0x10FFFF // anything above this shouldn't exist
|
||||
|
||||
@@ -189,27 +180,6 @@ func (f FormInfo) String() string {
|
||||
|
||||
type Decomposition []rune
|
||||
|
||||
func openReader(file string) (input io.ReadCloser) {
|
||||
if *localFiles {
|
||||
f, err := os.Open(file)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
input = f
|
||||
} else {
|
||||
path := *url + file
|
||||
resp, err := http.Get(path)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
if resp.StatusCode != 200 {
|
||||
logger.Fatal("bad GET status for "+file, resp.Status)
|
||||
}
|
||||
input = resp.Body
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
|
||||
decomp := strings.Split(s, " ")
|
||||
if len(decomp) > 0 && skipfirst {
|
||||
@@ -226,7 +196,7 @@ func parseDecomposition(s string, skipfirst bool) (a []rune, err error) {
|
||||
}
|
||||
|
||||
func loadUnicodeData() {
|
||||
f := openReader("UnicodeData.txt")
|
||||
f := gen.OpenUCDFile("UnicodeData.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
@@ -242,7 +212,7 @@ func loadUnicodeData() {
|
||||
if len(decmap) > 0 {
|
||||
exp, err = parseDecomposition(decmap, true)
|
||||
if err != nil {
|
||||
logger.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err)
|
||||
log.Fatalf(`%U: bad decomp |%v|: "%s"`, r, decmap, err)
|
||||
}
|
||||
isCompat = true
|
||||
}
|
||||
@@ -261,7 +231,7 @@ func loadUnicodeData() {
|
||||
}
|
||||
}
|
||||
if err := p.Err(); err != nil {
|
||||
logger.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -296,18 +266,18 @@ func compactCCC() {
|
||||
// 0958 # ...
|
||||
// See http://unicode.org/reports/tr44/ for full explanation
|
||||
func loadCompositionExclusions() {
|
||||
f := openReader("CompositionExclusions.txt")
|
||||
f := gen.OpenUCDFile("CompositionExclusions.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
c := &chars[p.Rune(0)]
|
||||
if c.excludeInComp {
|
||||
logger.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
|
||||
log.Fatalf("%U: Duplicate entry in exclusions.", c.codePoint)
|
||||
}
|
||||
c.excludeInComp = true
|
||||
}
|
||||
if e := p.Err(); e != nil {
|
||||
logger.Fatal(e)
|
||||
log.Fatal(e)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -542,19 +512,19 @@ func computeNonStarterCounts() {
|
||||
}
|
||||
}
|
||||
|
||||
func printBytes(b []byte, name string) {
|
||||
fmt.Printf("// %s: %d bytes\n", name, len(b))
|
||||
fmt.Printf("var %s = [...]byte {", name)
|
||||
func printBytes(w io.Writer, b []byte, name string) {
|
||||
fmt.Fprintf(w, "// %s: %d bytes\n", name, len(b))
|
||||
fmt.Fprintf(w, "var %s = [...]byte {", name)
|
||||
for i, c := range b {
|
||||
switch {
|
||||
case i%64 == 0:
|
||||
fmt.Printf("\n// Bytes %x - %x\n", i, i+63)
|
||||
fmt.Fprintf(w, "\n// Bytes %x - %x\n", i, i+63)
|
||||
case i%8 == 0:
|
||||
fmt.Printf("\n")
|
||||
fmt.Fprintf(w, "\n")
|
||||
}
|
||||
fmt.Printf("0x%.2X, ", c)
|
||||
fmt.Fprintf(w, "0x%.2X, ", c)
|
||||
}
|
||||
fmt.Print("\n}\n\n")
|
||||
fmt.Fprint(w, "\n}\n\n")
|
||||
}
|
||||
|
||||
// See forminfo.go for format.
|
||||
@@ -610,13 +580,13 @@ func (m *decompSet) insert(key int, s string) {
|
||||
m[key][s] = true
|
||||
}
|
||||
|
||||
func printCharInfoTables() int {
|
||||
func printCharInfoTables(w io.Writer) int {
|
||||
mkstr := func(r rune, f *FormInfo) (int, string) {
|
||||
d := f.expandedDecomp
|
||||
s := string([]rune(d))
|
||||
if max := 1 << 6; len(s) >= max {
|
||||
const msg = "%U: too many bytes in decomposition: %d >= %d"
|
||||
logger.Fatalf(msg, r, len(s), max)
|
||||
log.Fatalf(msg, r, len(s), max)
|
||||
}
|
||||
head := uint8(len(s))
|
||||
if f.quickCheck[MComposed] != QCYes {
|
||||
@@ -631,11 +601,11 @@ func printCharInfoTables() int {
|
||||
tccc := ccc(d[len(d)-1])
|
||||
cc := ccc(r)
|
||||
if cc != 0 && lccc == 0 && tccc == 0 {
|
||||
logger.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc)
|
||||
log.Fatalf("%U: trailing and leading ccc are 0 for non-zero ccc %d", r, cc)
|
||||
}
|
||||
if tccc < lccc && lccc != 0 {
|
||||
const msg = "%U: lccc (%d) must be <= tcc (%d)"
|
||||
logger.Fatalf(msg, r, lccc, tccc)
|
||||
log.Fatalf(msg, r, lccc, tccc)
|
||||
}
|
||||
index := normalDecomp
|
||||
nTrail := chars[r].nTrailingNonStarters
|
||||
@@ -652,13 +622,13 @@ func printCharInfoTables() int {
|
||||
if lccc > 0 {
|
||||
s += string([]byte{lccc})
|
||||
if index == firstCCC {
|
||||
logger.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r)
|
||||
log.Fatalf("%U: multi-segment decomposition not supported for decompositions with leading CCC != 0", r)
|
||||
}
|
||||
index = firstLeadingCCC
|
||||
}
|
||||
if cc != lccc {
|
||||
if cc != 0 {
|
||||
logger.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc)
|
||||
log.Fatalf("%U: for lccc != ccc, expected ccc to be 0; was %d", r, cc)
|
||||
}
|
||||
index = firstCCCZeroExcept
|
||||
}
|
||||
@@ -680,7 +650,7 @@ func printCharInfoTables() int {
|
||||
continue
|
||||
}
|
||||
if f.combinesBackward {
|
||||
logger.Fatalf("%U: combinesBackward and decompose", c.codePoint)
|
||||
log.Fatalf("%U: combinesBackward and decompose", c.codePoint)
|
||||
}
|
||||
index, s := mkstr(c.codePoint, &f)
|
||||
decompSet.insert(index, s)
|
||||
@@ -691,7 +661,7 @@ func printCharInfoTables() int {
|
||||
size := 0
|
||||
positionMap := make(map[string]uint16)
|
||||
decompositions.WriteString("\000")
|
||||
fmt.Println("const (")
|
||||
fmt.Fprintln(w, "const (")
|
||||
for i, m := range decompSet {
|
||||
sa := []string{}
|
||||
for s := range m {
|
||||
@@ -704,13 +674,13 @@ func printCharInfoTables() int {
|
||||
positionMap[s] = uint16(p)
|
||||
}
|
||||
if cname[i] != "" {
|
||||
fmt.Printf("%s = 0x%X\n", cname[i], decompositions.Len())
|
||||
fmt.Fprintf(w, "%s = 0x%X\n", cname[i], decompositions.Len())
|
||||
}
|
||||
}
|
||||
fmt.Println("maxDecomp = 0x8000")
|
||||
fmt.Println(")")
|
||||
fmt.Fprintln(w, "maxDecomp = 0x8000")
|
||||
fmt.Fprintln(w, ")")
|
||||
b := decompositions.Bytes()
|
||||
printBytes(b, "decomps")
|
||||
printBytes(w, b, "decomps")
|
||||
size += len(b)
|
||||
|
||||
varnames := []string{"nfc", "nfkc"}
|
||||
@@ -726,7 +696,7 @@ func printCharInfoTables() int {
|
||||
if c.ccc != ccc(d[0]) {
|
||||
// We assume the lead ccc of a decomposition !=0 in this case.
|
||||
if ccc(d[0]) == 0 {
|
||||
logger.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc)
|
||||
log.Fatalf("Expected leading CCC to be non-zero; ccc is %d", c.ccc)
|
||||
}
|
||||
}
|
||||
} else if c.nLeadingNonStarters > 0 && len(f.expandedDecomp) == 0 && c.ccc == 0 && !f.combinesBackward {
|
||||
@@ -737,9 +707,9 @@ func printCharInfoTables() int {
|
||||
trie.Insert(c.codePoint, uint64(0x8000|v))
|
||||
}
|
||||
}
|
||||
sz, err := trie.Gen(os.Stdout, triegen.Compact(&normCompacter{name: varnames[i]}))
|
||||
sz, err := trie.Gen(w, triegen.Compact(&normCompacter{name: varnames[i]}))
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
size += sz
|
||||
}
|
||||
@@ -755,30 +725,9 @@ func contains(sa []string, s string) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
// Extract the version number from the URL.
|
||||
func version() string {
|
||||
// From http://www.unicode.org/standard/versions/#Version_Numbering:
|
||||
// for the later Unicode versions, data files are located in
|
||||
// versioned directories.
|
||||
fields := strings.Split(*url, "/")
|
||||
for _, f := range fields {
|
||||
if match, _ := regexp.MatchString(`[0-9]\.[0-9]\.[0-9]`, f); match {
|
||||
return f
|
||||
}
|
||||
}
|
||||
logger.Fatal("unknown version")
|
||||
return "Unknown"
|
||||
}
|
||||
|
||||
const fileHeader = `// Generated by running
|
||||
// maketables --tables=%s --url=%s
|
||||
// DO NOT EDIT
|
||||
|
||||
package norm
|
||||
|
||||
`
|
||||
|
||||
func makeTables() {
|
||||
w := &bytes.Buffer{}
|
||||
|
||||
size := 0
|
||||
if *tablelist == "" {
|
||||
return
|
||||
@@ -787,7 +736,6 @@ func makeTables() {
|
||||
if *tablelist == "all" {
|
||||
list = []string{"recomp", "info"}
|
||||
}
|
||||
fmt.Printf(fileHeader, *tablelist, *url)
|
||||
|
||||
// Compute maximum decomposition size.
|
||||
max := 0
|
||||
@@ -797,30 +745,30 @@ func makeTables() {
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println("const (")
|
||||
fmt.Println("\t// Version is the Unicode edition from which the tables are derived.")
|
||||
fmt.Printf("\tVersion = %q\n", version())
|
||||
fmt.Println()
|
||||
fmt.Println("\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform")
|
||||
fmt.Println("\t// may need to write atomically for any Form. Making a destination buffer at")
|
||||
fmt.Println("\t// least this size ensures that Transform can always make progress and that")
|
||||
fmt.Println("\t// the user does not need to grow the buffer on an ErrShortDst.")
|
||||
fmt.Printf("\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max)
|
||||
fmt.Println(")\n")
|
||||
fmt.Fprintln(w, "const (")
|
||||
fmt.Fprintln(w, "\t// Version is the Unicode edition from which the tables are derived.")
|
||||
fmt.Fprintf(w, "\tVersion = %q\n", gen.UnicodeVersion())
|
||||
fmt.Fprintln(w)
|
||||
fmt.Fprintln(w, "\t// MaxTransformChunkSize indicates the maximum number of bytes that Transform")
|
||||
fmt.Fprintln(w, "\t// may need to write atomically for any Form. Making a destination buffer at")
|
||||
fmt.Fprintln(w, "\t// least this size ensures that Transform can always make progress and that")
|
||||
fmt.Fprintln(w, "\t// the user does not need to grow the buffer on an ErrShortDst.")
|
||||
fmt.Fprintf(w, "\tMaxTransformChunkSize = %d+maxNonStarters*4\n", len(string(0x034F))+max)
|
||||
fmt.Fprintln(w, ")\n")
|
||||
|
||||
// Print the CCC remap table.
|
||||
size += len(cccMap)
|
||||
fmt.Printf("var ccc = [%d]uint8{", len(cccMap))
|
||||
fmt.Fprintf(w, "var ccc = [%d]uint8{", len(cccMap))
|
||||
for i := 0; i < len(cccMap); i++ {
|
||||
if i%8 == 0 {
|
||||
fmt.Println()
|
||||
fmt.Fprintln(w)
|
||||
}
|
||||
fmt.Printf("%3d, ", cccMap[uint8(i)])
|
||||
fmt.Fprintf(w, "%3d, ", cccMap[uint8(i)])
|
||||
}
|
||||
fmt.Println("\n}\n")
|
||||
fmt.Fprintln(w, "\n}\n")
|
||||
|
||||
if contains(list, "info") {
|
||||
size += printCharInfoTables()
|
||||
size += printCharInfoTables(w)
|
||||
}
|
||||
|
||||
if contains(list, "recomp") {
|
||||
@@ -842,20 +790,21 @@ func makeTables() {
|
||||
}
|
||||
sz := nrentries * 8
|
||||
size += sz
|
||||
fmt.Printf("// recompMap: %d bytes (entries only)\n", sz)
|
||||
fmt.Println("var recompMap = map[uint32]rune{")
|
||||
fmt.Fprintf(w, "// recompMap: %d bytes (entries only)\n", sz)
|
||||
fmt.Fprintln(w, "var recompMap = map[uint32]rune{")
|
||||
for i, c := range chars {
|
||||
f := c.forms[FCanonical]
|
||||
d := f.decomp
|
||||
if !f.isOneWay && len(d) > 0 {
|
||||
key := uint32(uint16(d[0]))<<16 + uint32(uint16(d[1]))
|
||||
fmt.Printf("0x%.8X: 0x%.4X,\n", key, i)
|
||||
fmt.Fprintf(w, "0x%.8X: 0x%.4X,\n", key, i)
|
||||
}
|
||||
}
|
||||
fmt.Printf("}\n\n")
|
||||
fmt.Fprintf(w, "}\n\n")
|
||||
}
|
||||
|
||||
fmt.Printf("// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size)
|
||||
fmt.Fprintf(w, "// Total size of tables: %dKB (%d bytes)\n", (size+512)/1024, size)
|
||||
gen.WriteGoFile("tables.go", "norm", w.Bytes())
|
||||
}
|
||||
|
||||
func printChars() {
|
||||
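The recurring pattern in this change is worth calling out: instead of printing generated Go source to stdout, each printer now takes an io.Writer, output is accumulated in a bytes.Buffer, and gen.WriteGoFile writes the finished file (presumably adding the standard "generated" header). A minimal sketch of that pattern, assuming it sits inside maketables.go where bytes, fmt, gen and printBytes are already in scope; the generated constant and file name are made-up placeholders:

    // Build the file in memory, then hand it to gen.WriteGoFile.
    w := &bytes.Buffer{}
    fmt.Fprintln(w, "const placeholder = 42")       // hypothetical generated content
    printBytes(w, []byte{0x01, 0x02, 0x03}, "demo") // any printer above can target the buffer
    gen.WriteGoFile("example.go", "norm", w.Bytes())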
@@ -901,7 +850,7 @@ func verifyComputed() {
|
||||
nfc := c.forms[FCanonical]
|
||||
nfkc := c.forms[FCompatibility]
|
||||
if nfc.combinesBackward != nfkc.combinesBackward {
|
||||
logger.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint)
|
||||
log.Fatalf("%U: Cannot combine combinesBackward\n", c.codePoint)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -913,7 +862,7 @@ func verifyComputed() {
|
||||
// 0374 ; NFD_QC; N # ...
|
||||
// See http://unicode.org/reports/tr44/ for full explanation
|
||||
func testDerived() {
|
||||
f := openReader("DerivedNormalizationProps.txt")
|
||||
f := gen.OpenUCDFile("DerivedNormalizationProps.txt")
|
||||
defer f.Close()
|
||||
p := ucd.New(f)
|
||||
for p.Next() {
|
||||
@@ -946,12 +895,12 @@ func testDerived() {
|
||||
log.Fatalf(`Unexpected quick check value "%s"`, p.String(2))
|
||||
}
|
||||
if got := c.forms[ftype].quickCheck[mode]; got != qr {
|
||||
logger.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr)
|
||||
log.Printf("%U: FAILED %s (was %v need %v)\n", r, qt, got, qr)
|
||||
}
|
||||
c.forms[ftype].verified[mode] = true
|
||||
}
|
||||
if err := p.Err(); err != nil {
|
||||
logger.Fatal(err)
|
||||
log.Fatal(err)
|
||||
}
|
||||
// Any unspecified value must be QCYes. Verify this.
|
||||
for i, c := range chars {
|
||||
@@ -959,20 +908,14 @@ func testDerived() {
|
||||
for k, qr := range fd.quickCheck {
|
||||
if !fd.verified[k] && qr != QCYes {
|
||||
m := "%U: FAIL F:%d M:%d (was %v need Yes) %s\n"
|
||||
logger.Printf(m, i, j, k, qr, c.name)
|
||||
log.Printf(m, i, j, k, qr, c.name)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var testHeader = `// Generated by running
|
||||
// maketables --test --url=%s
|
||||
// +build test
|
||||
|
||||
package norm
|
||||
|
||||
const (
|
||||
var testHeader = `const (
|
||||
Yes = iota
|
||||
No
|
||||
Maybe
|
||||
@@ -1010,8 +953,10 @@ func printTestdata() {
|
||||
nTrail uint8
|
||||
f string
|
||||
}
|
||||
|
||||
last := lastInfo{}
|
||||
fmt.Printf(testHeader, *url)
|
||||
w := &bytes.Buffer{}
|
||||
fmt.Fprintf(w, testHeader)
|
||||
for r, c := range chars {
|
||||
f := c.forms[FCanonical]
|
||||
qc, cf, d := f.quickCheck[MComposed], f.combinesForward, string(f.expandedDecomp)
|
||||
@@ -1025,9 +970,10 @@ func printTestdata() {
|
||||
}
|
||||
current := lastInfo{c.ccc, c.nLeadingNonStarters, c.nTrailingNonStarters, s}
|
||||
if last != current {
|
||||
fmt.Printf("\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s)
|
||||
fmt.Fprintf(w, "\t{0x%x, %d, %d, %d, %s},\n", r, c.origCCC, c.nLeadingNonStarters, c.nTrailingNonStarters, s)
|
||||
last = current
|
||||
}
|
||||
}
|
||||
fmt.Println("}")
|
||||
fmt.Fprintln(w, "}")
|
||||
gen.WriteGoFile("data_test.go", "norm", w.Bytes())
|
||||
}
|
||||
|
||||
Godeps/_workspace/src/golang.org/x/text/unicode/norm/normalize.go (generated, vendored) · 3 changes
@@ -2,6 +2,9 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:generate go run maketables.go triegen.go
|
||||
//go:generate go run maketables.go triegen.go -test
|
||||
|
||||
// Package norm contains types and functions for normalizing Unicode strings.
|
||||
package norm
|
||||
|
||||
|
||||
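With the //go:generate directives above in place, regenerating the tables no longer requires remembering the maketables flags; the first directive rebuilds tables.go and the second (with -test) rebuilds the test data. From the norm package directory:

    $ go generate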
Godeps/_workspace/src/golang.org/x/text/unicode/norm/tables.go (generated, vendored) · 4 changes
@@ -1,6 +1,4 @@
|
||||
// Generated by running
|
||||
// maketables --tables=all --url=http://www.unicode.org/Public/7.0.0/ucd/
|
||||
// DO NOT EDIT
|
||||
// This file was generated by go generate; DO NOT EDIT
|
||||
|
||||
package norm
|
||||
|
||||
|
||||
@@ -2,52 +2,37 @@
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
package norm
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"path"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"testing"
|
||||
"time"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/unicode/norm"
|
||||
"golang.org/x/text/internal/gen"
|
||||
)
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
loadTestData()
|
||||
CharacterByCharacterTests()
|
||||
StandardTests()
|
||||
PerformanceTest()
|
||||
if errorCount == 0 {
|
||||
fmt.Println("PASS")
|
||||
var long = flag.Bool("long", false,
|
||||
"run time-consuming tests, such as tests that fetch data online")
|
||||
|
||||
var once sync.Once
|
||||
|
||||
func skipShort(t *testing.T) {
|
||||
if !gen.IsLocal() && !*long {
|
||||
t.Skip("skipping test to prevent downloading; to run use -long or use -local to specify a local source")
|
||||
}
|
||||
once.Do(func() { loadTestData(t) })
|
||||
}
|
||||
|
||||
const file = "NormalizationTest.txt"
|
||||
|
||||
var url = flag.String("url",
|
||||
"http://www.unicode.org/Public/"+unicode.Version+"/ucd/"+file,
|
||||
"URL of Unicode database directory")
|
||||
var localFiles = flag.Bool("local",
|
||||
false,
|
||||
"data files have been copied to the current directory; for debugging only")
|
||||
|
||||
var logger = log.New(os.Stderr, "", log.Lshortfile)
|
||||
|
||||
// This regression test runs the test set in NormalizationTest.txt
|
||||
// (taken from http://www.unicode.org/Public/<unicode.Version>/ucd/).
|
||||
//
|
||||
@@ -124,22 +109,8 @@ var testRe = regexp.MustCompile(`^` + strings.Repeat(`([\dA-F ]+);`, 5) + ` # (.
|
||||
var counter int
|
||||
|
||||
// Load the data from NormalizationTest.txt
|
||||
func loadTestData() {
|
||||
if *localFiles {
|
||||
pwd, _ := os.Getwd()
|
||||
*url = "file://" + path.Join(pwd, file)
|
||||
}
|
||||
t := &http.Transport{}
|
||||
t.RegisterProtocol("file", http.NewFileTransport(http.Dir("/")))
|
||||
c := &http.Client{Transport: t}
|
||||
resp, err := c.Get(*url)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
}
|
||||
if resp.StatusCode != 200 {
|
||||
logger.Fatal("bad GET status for "+file, resp.Status)
|
||||
}
|
||||
f := resp.Body
|
||||
func loadTestData(t *testing.T) {
|
||||
f := gen.OpenUCDFile("NormalizationTest.txt")
|
||||
defer f.Close()
|
||||
scanner := bufio.NewScanner(f)
|
||||
for scanner.Scan() {
|
||||
@@ -150,11 +121,11 @@ func loadTestData() {
|
||||
m := partRe.FindStringSubmatch(line)
|
||||
if m != nil {
|
||||
if len(m) < 3 {
|
||||
logger.Fatal("Failed to parse Part: ", line)
|
||||
t.Fatal("Failed to parse Part: ", line)
|
||||
}
|
||||
i, err := strconv.Atoi(m[1])
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
t.Fatal(err)
|
||||
}
|
||||
name := m[2]
|
||||
part = append(part, Part{name: name[:len(name)-1], number: i})
|
||||
@@ -162,7 +133,7 @@ func loadTestData() {
|
||||
}
|
||||
m = testRe.FindStringSubmatch(line)
|
||||
if m == nil || len(m) < 7 {
|
||||
logger.Fatalf(`Failed to parse: "%s" result: %#v`, line, m)
|
||||
t.Fatalf(`Failed to parse: "%s" result: %#v`, line, m)
|
||||
}
|
||||
test := Test{name: m[6], partnr: len(part) - 1, number: counter}
|
||||
counter++
|
||||
@@ -170,7 +141,7 @@ func loadTestData() {
|
||||
for _, split := range strings.Split(m[j], " ") {
|
||||
r, err := strconv.ParseUint(split, 16, 64)
|
||||
if err != nil {
|
||||
logger.Fatal(err)
|
||||
t.Fatal(err)
|
||||
}
|
||||
if test.r == 0 {
|
||||
// save for CharacterByCharacterTests
|
||||
@@ -185,50 +156,38 @@ func loadTestData() {
|
||||
part.tests = append(part.tests, test)
|
||||
}
|
||||
if scanner.Err() != nil {
|
||||
logger.Fatal(scanner.Err())
|
||||
t.Fatal(scanner.Err())
|
||||
}
|
||||
}
|
||||
|
||||
var fstr = []string{"NFC", "NFD", "NFKC", "NFKD"}
|
||||
|
||||
var errorCount int
|
||||
|
||||
func cmpResult(t *Test, name string, f norm.Form, gold, test, result string) {
|
||||
func cmpResult(t *testing.T, tc *Test, name string, f Form, gold, test, result string) {
|
||||
if gold != result {
|
||||
errorCount++
|
||||
if errorCount > 20 {
|
||||
return
|
||||
}
|
||||
logger.Printf("%s:%s: %s(%+q)=%+q; want %+q: %s",
|
||||
t.Name(), name, fstr[f], test, result, gold, t.name)
|
||||
t.Errorf("%s:%s: %s(%+q)=%+q; want %+q: %s",
|
||||
tc.Name(), name, fstr[f], test, result, gold, tc.name)
|
||||
}
|
||||
}
|
||||
|
||||
func cmpIsNormal(t *Test, name string, f norm.Form, test string, result, want bool) {
|
||||
func cmpIsNormal(t *testing.T, tc *Test, name string, f Form, test string, result, want bool) {
|
||||
if result != want {
|
||||
errorCount++
|
||||
if errorCount > 20 {
|
||||
return
|
||||
}
|
||||
logger.Printf("%s:%s: %s(%+q)=%v; want %v", t.Name(), name, fstr[f], test, result, want)
|
||||
t.Errorf("%s:%s: %s(%+q)=%v; want %v", tc.Name(), name, fstr[f], test, result, want)
|
||||
}
|
||||
}
|
||||
|
||||
func doTest(t *Test, f norm.Form, gold, test string) {
|
||||
func doTest(t *testing.T, tc *Test, f Form, gold, test string) {
|
||||
testb := []byte(test)
|
||||
result := f.Bytes(testb)
|
||||
cmpResult(t, "Bytes", f, gold, test, string(result))
|
||||
cmpResult(t, tc, "Bytes", f, gold, test, string(result))
|
||||
|
||||
sresult := f.String(test)
|
||||
cmpResult(t, "String", f, gold, test, sresult)
|
||||
cmpResult(t, tc, "String", f, gold, test, sresult)
|
||||
|
||||
acc := []byte{}
|
||||
i := norm.Iter{}
|
||||
i := Iter{}
|
||||
i.InitString(f, test)
|
||||
for !i.Done() {
|
||||
acc = append(acc, i.Next()...)
|
||||
}
|
||||
cmpResult(t, "Iter.Next", f, gold, test, string(acc))
|
||||
cmpResult(t, tc, "Iter.Next", f, gold, test, string(acc))
|
||||
|
||||
buf := make([]byte, 128)
|
||||
acc = nil
|
||||
@@ -237,32 +196,33 @@ func doTest(t *Test, f norm.Form, gold, test string) {
|
||||
acc = append(acc, buf[:nDst]...)
|
||||
p += nSrc
|
||||
}
|
||||
cmpResult(t, "Transform", f, gold, test, string(acc))
|
||||
cmpResult(t, tc, "Transform", f, gold, test, string(acc))
|
||||
|
||||
for i := range test {
|
||||
out := f.Append(f.Bytes([]byte(test[:i])), []byte(test[i:])...)
|
||||
cmpResult(t, fmt.Sprintf(":Append:%d", i), f, gold, test, string(out))
|
||||
cmpResult(t, tc, fmt.Sprintf(":Append:%d", i), f, gold, test, string(out))
|
||||
}
|
||||
cmpIsNormal(t, "IsNormal", f, test, f.IsNormal([]byte(test)), test == gold)
|
||||
cmpIsNormal(t, "IsNormalString", f, test, f.IsNormalString(test), test == gold)
|
||||
cmpIsNormal(t, tc, "IsNormal", f, test, f.IsNormal([]byte(test)), test == gold)
|
||||
cmpIsNormal(t, tc, "IsNormalString", f, test, f.IsNormalString(test), test == gold)
|
||||
}
|
||||
|
||||
func doConformanceTests(t *Test, partn int) {
|
||||
func doConformanceTests(t *testing.T, tc *Test, partn int) {
|
||||
for i := 0; i <= 2; i++ {
|
||||
doTest(t, norm.NFC, t.cols[1], t.cols[i])
|
||||
doTest(t, norm.NFD, t.cols[2], t.cols[i])
|
||||
doTest(t, norm.NFKC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFKD, t.cols[4], t.cols[i])
|
||||
doTest(t, tc, NFC, tc.cols[1], tc.cols[i])
|
||||
doTest(t, tc, NFD, tc.cols[2], tc.cols[i])
|
||||
doTest(t, tc, NFKC, tc.cols[3], tc.cols[i])
|
||||
doTest(t, tc, NFKD, tc.cols[4], tc.cols[i])
|
||||
}
|
||||
for i := 3; i <= 4; i++ {
|
||||
doTest(t, norm.NFC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFD, t.cols[4], t.cols[i])
|
||||
doTest(t, norm.NFKC, t.cols[3], t.cols[i])
|
||||
doTest(t, norm.NFKD, t.cols[4], t.cols[i])
|
||||
doTest(t, tc, NFC, tc.cols[3], tc.cols[i])
|
||||
doTest(t, tc, NFD, tc.cols[4], tc.cols[i])
|
||||
doTest(t, tc, NFKC, tc.cols[3], tc.cols[i])
|
||||
doTest(t, tc, NFKD, tc.cols[4], tc.cols[i])
|
||||
}
|
||||
}
|
||||
|
||||
func CharacterByCharacterTests() {
|
||||
func TestCharacterByCharacter(t *testing.T) {
|
||||
skipShort(t)
|
||||
tests := part[1].tests
|
||||
var last rune = 0
|
||||
for i := 0; i <= len(tests); i++ { // last one is special case
|
||||
@@ -274,37 +234,39 @@ func CharacterByCharacterTests() {
|
||||
}
|
||||
for last++; last < r; last++ {
|
||||
// Check all characters that were not explicitly listed in the test.
|
||||
t := &Test{partnr: 1, number: -1}
|
||||
tc := &Test{partnr: 1, number: -1}
|
||||
char := string(last)
|
||||
doTest(t, norm.NFC, char, char)
|
||||
doTest(t, norm.NFD, char, char)
|
||||
doTest(t, norm.NFKC, char, char)
|
||||
doTest(t, norm.NFKD, char, char)
|
||||
doTest(t, tc, NFC, char, char)
|
||||
doTest(t, tc, NFD, char, char)
|
||||
doTest(t, tc, NFKC, char, char)
|
||||
doTest(t, tc, NFKD, char, char)
|
||||
}
|
||||
if i < len(tests) {
|
||||
doConformanceTests(&tests[i], 1)
|
||||
doConformanceTests(t, &tests[i], 1)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func StandardTests() {
|
||||
func TestStandardTests(t *testing.T) {
|
||||
skipShort(t)
|
||||
for _, j := range []int{0, 2, 3} {
|
||||
for _, test := range part[j].tests {
|
||||
doConformanceTests(&test, j)
|
||||
doConformanceTests(t, &test, j)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// PerformanceTest verifies that normalization is O(n). If any of the
|
||||
// TestPerformance verifies that normalization is O(n). If any of the
|
||||
// code does not properly check for maxCombiningChars, normalization
|
||||
// may exhibit O(n**2) behavior.
|
||||
func PerformanceTest() {
|
||||
func TestPerformance(t *testing.T) {
|
||||
skipShort(t)
|
||||
runtime.GOMAXPROCS(2)
|
||||
success := make(chan bool, 1)
|
||||
go func() {
|
||||
buf := bytes.Repeat([]byte("\u035D"), 1024*1024)
|
||||
buf = append(buf, "\u035B"...)
|
||||
norm.NFC.Append(nil, buf...)
|
||||
NFC.Append(nil, buf...)
|
||||
success <- true
|
||||
}()
|
||||
timeout := time.After(1 * time.Second)
|
||||
@@ -312,7 +274,6 @@ func PerformanceTest() {
|
||||
case <-success:
|
||||
// test completed before the timeout
|
||||
case <-timeout:
|
||||
errorCount++
|
||||
logger.Printf(`unexpectedly long time to complete PerformanceTest`)
|
||||
t.Errorf(`unexpectedly long time to complete PerformanceTest`)
|
||||
}
|
||||
}
|
||||
NICKS · 5 changes
@@ -3,6 +3,7 @@
|
||||
AudriusButkevicius <audrius.butkevicius@gmail.com>
|
||||
Cathryne <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
|
||||
KayoticSully <kayoticsully@gmail.com>
|
||||
Moter8 <moter8@gmail.com>
|
||||
Nutomic <me@nutomic.com>
|
||||
Rewt0r <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
|
||||
Vilbrekin <vilbrekin@gmail.com>
|
||||
@@ -18,10 +19,12 @@ calmh <jakob@nym.se>
|
||||
cdata <chris@scriptolo.gy>
|
||||
ceh <emil@hessman.se>
|
||||
cqcallaw <enlightened.despot@gmail.com>
|
||||
dzarda <dzardacz@gmail.com>
|
||||
facastagnini <federico.castagnini@gmail.com>
|
||||
filoozoom <philippe@schommers.be>
|
||||
frioux <frew@afoolishmanifesto.com> <frioux@gmail.com>
|
||||
gillisig <gilli@vx.is>
|
||||
jarlebring <jarlebring@gmail.com>
|
||||
jedie <github.com@jensdiemer.de> <git@jensdiemer.de>
|
||||
jpjp <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
|
||||
kamadak <kamada@nanohz.org>
|
||||
@@ -38,6 +41,7 @@ philips <brandon@ifup.org>
|
||||
piobpl <piotrb10@gmail.com>
|
||||
pluby <phill.luby@newredo.com>
|
||||
pyfisch <pyfisch@gmail.com>
|
||||
ralder <ralder@yandex.ru>
|
||||
rumpelsepp <stefan@sevenbyte.org>
|
||||
qbit <qbit@deftly.net>
|
||||
sciurius <jvromans@squirrel.nl>
|
||||
@@ -48,3 +52,4 @@ tnn2 <tnn@nygren.pp.se>
|
||||
tojrobinson <tully@tojr.org>
|
||||
uok <ueomkail@gmail.com> <uok@users.noreply.github.com>
|
||||
veeti <veeti.paananen@rojekti.fi>
|
||||
zukoo <fxgsell@gmail.com>
|
||||
|
||||
README.md · 19 changes
@@ -5,18 +5,17 @@ syncthing
|
||||
[](http://godoc.org/github.com/syncthing/syncthing)
|
||||
[](https://www.mozilla.org/MPL/2.0/)
|
||||
|
||||
This is the `syncthing` project. The following are the project goals:
|
||||
This is the `syncthing` project, which pursues the following goals:
|
||||
|
||||
1. Define a protocol for synchronization of a folder between a number of
|
||||
collaborating devices. The protocol should be well defined, unambiguous,
|
||||
collaborating devices. This protocol should be well defined, unambiguous,
|
||||
easily understood, free to use, efficient, secure and language neutral.
|
||||
This is the [Block Exchange
|
||||
This is called the [Block Exchange
|
||||
Protocol](https://github.com/syncthing/specs/blob/master/BEPv1.md).
|
||||
|
||||
2. Provide the reference implementation to demonstrate the usability of
|
||||
said protocol. This is the `syncthing` utility. It is the hope that
|
||||
alternative, compatible implementations of the protocol will come to
|
||||
exist.
|
||||
said protocol. This is the `syncthing` utility. We hope that
|
||||
alternative, compatible implementations of the protocol will arise.
|
||||
|
||||
The two are evolving together; the protocol is not to be considered
|
||||
stable until syncthing 1.0 is released, at which point it is locked down
|
||||
@@ -32,20 +31,20 @@ There are a few examples for keeping syncthing running in the background
|
||||
on your system in [the etc directory](https://github.com/syncthing/syncthing/blob/master/etc).
|
||||
|
||||
There is an IRC channel, `#syncthing` on Freenode, for talking directly
|
||||
to developers and users (when awake and present, etc.).
|
||||
to developers and users.
|
||||
|
||||
Building
|
||||
--------
|
||||
|
||||
Building Syncthing from source is easy, and there's a
|
||||
[guide](https://github.com/syncthing/syncthing/wiki/Building)
|
||||
that describes it for both Unix and Windows.
|
||||
that describes it for both Unix and Windows systems.
|
||||
|
||||
Signed Releases
|
||||
---------------
|
||||
|
||||
As of v0.10.15 and onwards, git tags and release binaries are GPG signed
|
||||
with the key D26E6ED000654A3E (see http://syncthing.net/security.html).
|
||||
with the key D26E6ED000654A3E (see https://syncthing.net/security.html).
|
||||
For release binaries, MD5 and SHA1 checksums are calculated and signed,
|
||||
available in the md5sum.txt.asc and sha1sum.txt.asc files.
|
||||
|
||||
@@ -57,4 +56,4 @@ documentation](https://github.com/syncthing/syncthing/wiki/) is on the
|
||||
Github wiki.
|
||||
|
||||
All code is licensed under the
|
||||
[MPLv2](https://github.com/syncthing/syncthing/blob/master/LICENSE).
|
||||
[MPLv2 License](https://github.com/syncthing/syncthing/blob/master/LICENSE).
|
||||
|
||||
Three image assets changed (binary files, shown here only as before/after sizes): 3.7 KiB → 3.6 KiB, 1.8 KiB → 1.7 KiB, 3.8 KiB → 3.7 KiB.
authors.go (new file) · 58 lines
@@ -0,0 +1,58 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
// +build ignore
|
||||
|
||||
// Generates the list of contributors in gui/index.html based on contents of
|
||||
// AUTHORS.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func main() {
|
||||
bs := readAll("AUTHORS")
|
||||
lines := strings.Split(string(bs), "\n")
|
||||
nameRe := regexp.MustCompile(`(.+?)\s+<`)
|
||||
authors := make([]string, 0, len(lines))
|
||||
for _, line := range lines {
|
||||
if m := nameRe.FindStringSubmatch(line); len(m) == 2 {
|
||||
authors = append(authors, " <li class=\"auto-generated\">"+m[1]+"</li>")
|
||||
}
|
||||
}
|
||||
sort.Strings(authors)
|
||||
replacement := strings.Join(authors, "\n")
|
||||
|
||||
authorsRe := regexp.MustCompile(`(?s)id="contributor-list">.*?</ul>`)
|
||||
bs = readAll("gui/index.html")
|
||||
bs = authorsRe.ReplaceAll(bs, []byte("id=\"contributor-list\">\n"+replacement+"\n </ul>"))
|
||||
|
||||
if err := ioutil.WriteFile("gui/index.html", bs, 0644); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func readAll(path string) []byte {
|
||||
fd, err := os.Open(path)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer fd.Close()
|
||||
|
||||
bs, err := ioutil.ReadAll(fd)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
return bs
|
||||
}
|
||||
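A quick illustration of what authors.go does (the AUTHORS entry below is made up): nameRe captures everything before the email address, and the captured name becomes a list item that replaces the contributor-list block in gui/index.html. Because of the +build ignore tag the file is not part of any build and is meant to be run directly, e.g. go run authors.go. A self-contained sketch of the name extraction (indentation of the list item omitted):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        nameRe := regexp.MustCompile(`(.+?)\s+<`)
        line := "Jane Doe <jane@example.com>" // hypothetical AUTHORS entry
        if m := nameRe.FindStringSubmatch(line); len(m) == 2 {
            fmt.Println("<li class=\"auto-generated\">" + m[1] + "</li>")
        }
        // Output: <li class="auto-generated">Jane Doe</li>
    }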
build.go · 55 changes
@@ -78,6 +78,11 @@ func main() {
|
||||
tags = []string{"noupgrade"}
|
||||
}
|
||||
install("./cmd/...", tags)
|
||||
|
||||
vet("./cmd/syncthing")
|
||||
vet("./internal/...")
|
||||
lint("./cmd/syncthing")
|
||||
lint("./internal/...")
|
||||
return
|
||||
}
|
||||
|
||||
@@ -103,8 +108,7 @@ func main() {
|
||||
build(pkg, tags)
|
||||
|
||||
case "test":
|
||||
pkg := "./..."
|
||||
test(pkg)
|
||||
test("./...")
|
||||
|
||||
case "assets":
|
||||
assets()
|
||||
@@ -130,6 +134,14 @@ func main() {
|
||||
case "clean":
|
||||
clean()
|
||||
|
||||
case "vet":
|
||||
vet("./cmd/syncthing")
|
||||
vet("./internal/...")
|
||||
|
||||
case "lint":
|
||||
lint("./cmd/syncthing")
|
||||
lint("./internal/...")
|
||||
|
||||
default:
|
||||
log.Fatalf("Unknown command %q", cmd)
|
||||
}
|
||||
@@ -437,10 +449,7 @@ func run(cmd string, args ...string) []byte {
|
||||
func runError(cmd string, args ...string) ([]byte, error) {
|
||||
ecmd := exec.Command(cmd, args...)
|
||||
bs, err := ecmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return bytes.TrimSpace(bs), nil
|
||||
return bytes.TrimSpace(bs), err
|
||||
}
|
||||
|
||||
func runPrint(cmd string, args ...string) {
|
||||
@@ -615,3 +624,37 @@ func md5File(file string) error {
|
||||
|
||||
return out.Close()
|
||||
}
|
||||
|
||||
func vet(pkg string) {
|
||||
bs, err := runError("go", "vet", pkg)
|
||||
if err != nil && err.Error() == "exit status 3" || bytes.Contains(bs, []byte("no such tool \"vet\"")) {
|
||||
// Go said there is no go vet
|
||||
log.Println(`- No go vet, no vetting. Try "go get -u golang.org/x/tools/cmd/vet".`)
|
||||
return
|
||||
}
|
||||
|
||||
falseAlarmComposites := regexp.MustCompile("composite literal uses unkeyed fields")
|
||||
exitStatus := regexp.MustCompile("exit status 1")
|
||||
for _, line := range bytes.Split(bs, []byte("\n")) {
|
||||
if falseAlarmComposites.Match(line) || exitStatus.Match(line) {
|
||||
continue
|
||||
}
|
||||
log.Printf("%s", line)
|
||||
}
|
||||
}
|
||||
|
||||
func lint(pkg string) {
|
||||
bs, err := runError("golint", pkg)
|
||||
if err != nil {
|
||||
log.Println(`- No golint, not linting. Try "go get -u github.com/golang/lint/golint".`)
|
||||
return
|
||||
}
|
||||
|
||||
analCommentPolicy := regexp.MustCompile(`exported (function|method|const|type|var) [^\s]+ should have comment`)
|
||||
for _, line := range bytes.Split(bs, []byte("\n")) {
|
||||
if analCommentPolicy.Match(line) {
|
||||
continue
|
||||
}
|
||||
log.Printf("%s", line)
|
||||
}
|
||||
}
|
||||
|
||||
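The new vet and lint subcommands take over what the removed build.sh steps used to do and can be invoked directly from the repository root; the docker-vet and docker-lint targets added to build.sh below wrap exactly these invocations:

    $ go run build.go vet
    $ go run build.go lint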
build.sh · 23 changes
@@ -118,8 +118,6 @@ case "${1:-default}" in
|
||||
-e "STTRACE=$STTRACE" \
|
||||
syncthing/build:latest \
|
||||
sh -c './build.sh clean \
|
||||
&& go vet ./cmd/... ./internal/... \
|
||||
&& ( golint ./cmd/... ; golint ./internal/... ) | egrep -v "comment on exported|should have comment" \
|
||||
&& ./build.sh all \
|
||||
&& STTRACE=all ./build.sh test-cov'
|
||||
;;
|
||||
@@ -134,10 +132,29 @@ case "${1:-default}" in
|
||||
&& go run build.go -race \
|
||||
&& export GOPATH=$(pwd)/Godeps/_workspace:$GOPATH \
|
||||
&& cd test \
|
||||
&& go test -tags integration -v -timeout 60m -short \
|
||||
&& go test -tags integration -v -timeout 90m -short \
|
||||
&& git clean -fxd .'
|
||||
;;
|
||||
|
||||
docker-lint)
|
||||
docker run --rm -h syncthing-builder -u $(id -u) -t \
|
||||
-v $(pwd):/go/src/github.com/syncthing/syncthing \
|
||||
-w /go/src/github.com/syncthing/syncthing \
|
||||
-e "STTRACE=$STTRACE" \
|
||||
syncthing/build:latest \
|
||||
sh -euxc 'go run build.go lint'
|
||||
;;
|
||||
|
||||
|
||||
docker-vet)
|
||||
docker run --rm -h syncthing-builder -u $(id -u) -t \
|
||||
-v $(pwd):/go/src/github.com/syncthing/syncthing \
|
||||
-w /go/src/github.com/syncthing/syncthing \
|
||||
-e "STTRACE=$STTRACE" \
|
||||
syncthing/build:latest \
|
||||
sh -euxc 'go run build.go vet'
|
||||
;;
|
||||
|
||||
*)
|
||||
echo "Unknown build command $1"
|
||||
;;
|
||||
|
||||
changelog.go (new file) · 72 lines
@@ -0,0 +1,72 @@
|
||||
// Copyright (C) 2014 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"flag"
|
||||
"fmt"
|
||||
"log"
|
||||
"os/exec"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
var (
|
||||
subjectIssues = regexp.MustCompile(`^([^(]+)\s+\((?:fixes|ref) ([^)]+)\)(?:[^\w])?$`)
|
||||
issueNumbers = regexp.MustCompile(`(#\d+)`)
|
||||
)
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
|
||||
// Display changelog since the version given on the command line, or
|
||||
// figure out the last release if there were no arguments.
|
||||
var prevRel string
|
||||
if flag.NArg() > 0 {
|
||||
prevRel = flag.Arg(0)
|
||||
} else {
|
||||
bs, err := runError("git", "describe", "--abbrev=0", "HEAD^")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
prevRel = string(bs)
|
||||
}
|
||||
|
||||
// Get the git log with subject and author nickname
|
||||
bs, err := runError("git", "log", "--reverse", "--pretty=format:%s|%aN", prevRel+"..")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
// Split into lines
|
||||
for _, line := range bytes.Split(bs, []byte{'\n'}) {
|
||||
// Split into subject and author
|
||||
fields := bytes.Split(line, []byte{'|'})
|
||||
subj := fields[0]
|
||||
author := fields[1]
|
||||
|
||||
// Check if subject contains a "(fixes ...)" or "(ref ...)"
|
||||
if m := subjectIssues.FindSubmatch(subj); len(m) > 0 {
|
||||
// Find all issue numbers
|
||||
issues := issueNumbers.FindAll(m[2], -1)
|
||||
|
||||
// Format a changelog entry
|
||||
fmt.Printf("* %s (%s, @%s)\n", m[1], bytes.Join(issues, []byte(", ")), author)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func runError(cmd string, args ...string) ([]byte, error) {
|
||||
ecmd := exec.Command(cmd, args...)
|
||||
bs, err := ecmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return bytes.TrimSpace(bs), nil
|
||||
}
|
||||
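To make the output format concrete (the tag, commit subject, issue numbers and author below are invented): given a previous release tag, changelog.go walks the commits since that tag and, for every subject ending in "(fixes ...)" or "(ref ...)", prints one bullet built from the Printf format above:

    $ go run changelog.go v0.10.20
    * Fix frobnication on restart (#1234, #1235, @someauthor)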
changelog.sh (deleted) · 11 lines
@@ -1,11 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
since="$1"
|
||||
if [[ -z $since ]] ; then
|
||||
since="$(git describe --abbrev=0 HEAD^).."
|
||||
fi
|
||||
|
||||
git log --reverse --pretty=format:'* %s, @%aN)' "$since" | egrep 'fixes #\d|ref #\d' | sed 's/)[,. ]*,/,/' | sed 's/fixes #/#/g' | sed 's/ref #/#/g'
|
||||
|
||||
git diff "$since" -- AUTHORS
|
||||
|
||||
@@ -17,7 +17,8 @@ no-docs-typos() {
|
||||
grep -v a9339d0627fff439879d157c75077f02c9fac61b |\
|
||||
grep -v 254c63763a3ad42fd82259f1767db526cff94a14 |\
|
||||
grep -v 4b76ec40c07078beaa2c5e250ed7d9bd6276a718 |\
|
||||
grep -v ffc39dfbcb34eacc3ea12327a02b6e7741a2c207
|
||||
grep -v ffc39dfbcb34eacc3ea12327a02b6e7741a2c207 |\
|
||||
grep -v 32a76901a91ff0f663db6f0830e0aedec946e4d0
|
||||
}
|
||||
|
||||
print-missing-authors() {
|
||||
|
||||
@@ -15,33 +15,32 @@ import (
|
||||
"flag"
|
||||
"go/format"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"text/template"
|
||||
"time"
|
||||
)
|
||||
|
||||
var tpl = template.Must(template.New("assets").Parse(`package auto
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"encoding/base64"
|
||||
"io/ioutil"
|
||||
)
|
||||
|
||||
const (
|
||||
AssetsBuildDate = "{{.BuildDate}}"
|
||||
)
|
||||
|
||||
func Assets() map[string][]byte {
|
||||
var assets = make(map[string][]byte, {{.assets | len}})
|
||||
var bs []byte
|
||||
var gr *gzip.Reader
|
||||
{{range $asset := .assets}}
|
||||
bs, _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
|
||||
gr, _ = gzip.NewReader(bytes.NewReader(bs))
|
||||
bs, _ = ioutil.ReadAll(gr)
|
||||
assets["{{$asset.Name}}"] = bs
|
||||
var assets = make(map[string][]byte, {{.Assets | len}})
|
||||
{{range $asset := .Assets}}
|
||||
assets["{{$asset.Name}}"], _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
|
||||
{{end}}
|
||||
return assets
|
||||
}
|
||||
|
||||
`))
|
||||
|
||||
type asset struct {
|
||||
@@ -86,12 +85,20 @@ func walkerFor(basePath string) filepath.WalkFunc {
|
||||
}
|
||||
}
|
||||
|
||||
type templateVars struct {
|
||||
Assets []asset
|
||||
BuildDate string
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
|
||||
filepath.Walk(flag.Arg(0), walkerFor(flag.Arg(0)))
|
||||
var buf bytes.Buffer
|
||||
tpl.Execute(&buf, map[string][]asset{"assets": assets})
|
||||
tpl.Execute(&buf, templateVars{
|
||||
Assets: assets,
|
||||
BuildDate: time.Now().UTC().Format(http.TimeFormat),
|
||||
})
|
||||
bs, err := format.Source(buf.Bytes())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
|
||||
@@ -27,7 +27,7 @@ func main() {
|
||||
log.SetOutput(os.Stdout)
|
||||
log.SetFlags(0)
|
||||
|
||||
target := flag.String("target", "localhost:8080", "Target Syncthing instance")
|
||||
target := flag.String("target", "localhost:8384", "Target Syncthing instance")
|
||||
apikey := flag.String("apikey", "", "Syncthing API key")
|
||||
flag.Parse()
|
||||
|
||||
|
||||
cmd/syncthing/audit.go (new file) · 69 lines
@@ -0,0 +1,69 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"io"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
)
|
||||
|
||||
// The auditSvc subscribes to events and writes these in JSON format, one
|
||||
// event per line, to the specified writer.
|
||||
type auditSvc struct {
|
||||
w io.Writer // audit destination
|
||||
stop chan struct{} // signals time to stop
|
||||
started chan struct{} // signals startup complete
|
||||
stopped chan struct{} // signals stop complete
|
||||
}
|
||||
|
||||
func newAuditSvc(w io.Writer) *auditSvc {
|
||||
return &auditSvc{
|
||||
w: w,
|
||||
stop: make(chan struct{}),
|
||||
started: make(chan struct{}),
|
||||
stopped: make(chan struct{}),
|
||||
}
|
||||
}
|
||||
|
||||
// Serve runs the audit service.
|
||||
func (s *auditSvc) Serve() {
|
||||
defer close(s.stopped)
|
||||
sub := events.Default.Subscribe(events.AllEvents)
|
||||
defer events.Default.Unsubscribe(sub)
|
||||
enc := json.NewEncoder(s.w)
|
||||
|
||||
// We're ready to start processing events.
|
||||
close(s.started)
|
||||
|
||||
for {
|
||||
select {
|
||||
case ev := <-sub.C():
|
||||
enc.Encode(ev)
|
||||
case <-s.stop:
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Stop stops the audit service.
|
||||
func (s *auditSvc) Stop() {
|
||||
close(s.stop)
|
||||
}
|
||||
|
||||
// WaitForStart returns once the audit service is ready to receive events, or
|
||||
// immediately if it's already running.
|
||||
func (s *auditSvc) WaitForStart() {
|
||||
<-s.started
|
||||
}
|
||||
|
||||
// WaitForStop returns once the audit service has stopped.
|
||||
// (Needed by the tests.)
|
||||
func (s *auditSvc) WaitForStop() {
|
||||
<-s.stopped
|
||||
}
|
||||
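Beyond the unit test that follows, the intended wiring is straightforward. A minimal sketch, assuming it lives in the same package as audit.go above (so newAuditSvc and the events package are in scope, plus an "os" import); the helper name and the audit.log destination are hypothetical:

    // Hypothetical helper: stream every event on the default bus to audit.log,
    // one JSON object per line.
    func writeAuditLog() error {
        fd, err := os.Create("audit.log") // assumed destination file
        if err != nil {
            return err
        }
        defer fd.Close()

        svc := newAuditSvc(fd)
        go svc.Serve()
        svc.WaitForStart()

        // From here on, anything logged to events.Default ends up in the file.
        events.Default.Log(events.Ping, "example event")

        svc.Stop()
        svc.WaitForStop()
        return nil
    }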
cmd/syncthing/auditsvc_test.go (new file) · 54 lines
@@ -0,0 +1,54 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
)
|
||||
|
||||
func TestAuditService(t *testing.T) {
|
||||
buf := new(bytes.Buffer)
|
||||
svc := newAuditSvc(buf)
|
||||
|
||||
// Event sent before start, will not be logged
|
||||
events.Default.Log(events.Ping, "the first event")
|
||||
|
||||
go svc.Serve()
|
||||
svc.WaitForStart()
|
||||
|
||||
// Event that should end up in the audit log
|
||||
events.Default.Log(events.Ping, "the second event")
|
||||
|
||||
// We need to give the events time to arrive, since the channels are buffered etc.
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
|
||||
svc.Stop()
|
||||
svc.WaitForStop()
|
||||
|
||||
// This event should not be logged, since we have stopped.
|
||||
events.Default.Log(events.Ping, "the third event")
|
||||
|
||||
result := string(buf.Bytes())
|
||||
t.Log(result)
|
||||
|
||||
if strings.Contains(result, "first event") {
|
||||
t.Error("Unexpected first event")
|
||||
}
|
||||
|
||||
if !strings.Contains(result, "second event") {
|
||||
t.Error("Missing second event")
|
||||
}
|
||||
|
||||
if strings.Contains(result, "third event") {
|
||||
t.Error("Missing third event")
|
||||
}
|
||||
}
|
||||
@@ -15,23 +15,84 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/protocol"
|
||||
"github.com/syncthing/syncthing/internal/config"
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
"github.com/syncthing/syncthing/internal/model"
|
||||
"github.com/thejerf/suture"
|
||||
)
|
||||
|
||||
func listenConnect(myID protocol.DeviceID, m *model.Model, tlsCfg *tls.Config) {
|
||||
var conns = make(chan *tls.Conn)
|
||||
// The connection service listens on TLS and dials configured unconnected
|
||||
// devices. Successful connections are handed to the model.
|
||||
type connectionSvc struct {
|
||||
*suture.Supervisor
|
||||
cfg *config.Wrapper
|
||||
myID protocol.DeviceID
|
||||
model *model.Model
|
||||
tlsCfg *tls.Config
|
||||
conns chan *tls.Conn
|
||||
}
|
||||
|
||||
// Listen
|
||||
for _, addr := range cfg.Options().ListenAddress {
|
||||
go listenTLS(conns, addr, tlsCfg)
|
||||
func newConnectionSvc(cfg *config.Wrapper, myID protocol.DeviceID, model *model.Model, tlsCfg *tls.Config) *connectionSvc {
|
||||
svc := &connectionSvc{
|
||||
Supervisor: suture.NewSimple("connectionSvc"),
|
||||
cfg: cfg,
|
||||
myID: myID,
|
||||
model: model,
|
||||
tlsCfg: tlsCfg,
|
||||
conns: make(chan *tls.Conn),
|
||||
}
|
||||
|
||||
// Connect
|
||||
go dialTLS(m, conns, tlsCfg)
|
||||
// There are several moving parts here; one routine per listening address
|
||||
// to handle incoming connections, one routine to periodically attempt
|
||||
// outgoing connections, and lastly one routine for the common handling
|
||||
// regardless of whether the connection was incoming or outgoing. It ends
|
||||
// up as in the diagram below. We embed a Supervisor to manage the
|
||||
// routines (i.e. log and restart if they crash or exit, etc).
|
||||
//
|
||||
// +-----------------+
|
||||
// Incoming | +---------------+-+ +-----------------+
|
||||
// Connections | | | | | Outgoing
|
||||
// -------------->| | svc.listen | | | Connections
|
||||
// | | (1 per listen | | svc.connect |-------------->
|
||||
// | | address) | | |
|
||||
// +-+ | | |
|
||||
// +-----------------+ +-----------------+
|
||||
// v v
|
||||
// | |
|
||||
// | |
|
||||
// +------------+-----------+
|
||||
// |
|
||||
// | svc.conns
|
||||
// v
|
||||
// +-----------------+
|
||||
// | |
|
||||
// | |
|
||||
// | svc.handle |------> model.AddConnection()
|
||||
// | |
|
||||
// | |
|
||||
// +-----------------+
|
||||
//
|
||||
// TODO: Clean shutdown, and/or handling config changes on the fly. We
|
||||
// partly do this now - new devices and addresses will be picked up, but
|
||||
// not new listen addresses and we don't support disconnecting devices
|
||||
// that are removed and so on...
|
||||
|
||||
svc.Add(serviceFunc(svc.connect))
|
||||
for _, addr := range svc.cfg.Options().ListenAddress {
|
||||
addr := addr
|
||||
listener := serviceFunc(func() {
|
||||
svc.listen(addr)
|
||||
})
|
||||
svc.Add(listener)
|
||||
}
|
||||
svc.Add(serviceFunc(svc.handle))
|
||||
|
||||
return svc
|
||||
}
|
||||
|
||||
func (s *connectionSvc) handle() {
|
||||
next:
|
||||
for conn := range conns {
|
||||
for conn := range s.conns {
|
||||
cs := conn.ConnectionState()
|
||||
|
||||
// We should have negotiated the next level protocol "bep/1.0" as part
|
||||
@@ -55,7 +116,7 @@ next:
|
||||
remoteID := protocol.NewDeviceID(remoteCert.Raw)
|
||||
|
||||
// The device ID should not be that of ourselves. It can happen
|
||||
// though, especially in the presense of NAT hairpinning, multiple
|
||||
// though, especially in the presence of NAT hairpinning, multiple
|
||||
// clients between the same NAT gateway, and global discovery.
|
||||
if remoteID == myID {
|
||||
l.Infof("Connected to myself (%s) - should not happen", remoteID)
|
||||
@@ -67,15 +128,15 @@ next:
|
||||
// could use some better handling. If the old connection is dead but
|
||||
// hasn't timed out yet we may want to drop *that* connection and keep
|
||||
// this one. But in case we are two devices connecting to each other
|
||||
// in parallell we don't want to do that or we end up with no
|
||||
// in parallel we don't want to do that or we end up with no
|
||||
// connections still established...
|
||||
if m.ConnectedTo(remoteID) {
|
||||
if s.model.ConnectedTo(remoteID) {
|
||||
l.Infof("Connected to already connected device (%s)", remoteID)
|
||||
conn.Close()
|
||||
continue
|
||||
}
|
||||
|
||||
for deviceID, deviceCfg := range cfg.Devices() {
|
||||
for deviceID, deviceCfg := range s.cfg.Devices() {
|
||||
if deviceID == remoteID {
|
||||
// Verify the name on the certificate. By default we set it to
|
||||
// "syncthing" when generating, but the user may have replaced
|
||||
@@ -97,7 +158,7 @@ next:
|
||||
// If rate limiting is set, and based on the address we should
|
||||
// limit the connection, then we wrap it in a limiter.
|
||||
|
||||
limit := shouldLimit(conn.RemoteAddr())
|
||||
limit := s.shouldLimit(conn.RemoteAddr())
|
||||
|
||||
wr := io.Writer(conn)
|
||||
if limit && writeRateLimit != nil {
|
||||
@@ -110,7 +171,7 @@ next:
|
||||
}
|
||||
|
||||
name := fmt.Sprintf("%s-%s", conn.LocalAddr(), conn.RemoteAddr())
|
||||
protoConn := protocol.NewConnection(remoteID, rd, wr, m, name, deviceCfg.Compression)
|
||||
protoConn := protocol.NewConnection(remoteID, rd, wr, s.model, name, deviceCfg.Compression)
|
||||
|
||||
l.Infof("Established secure connection to %s at %s", remoteID, name)
|
||||
if debugNet {
|
||||
@@ -121,12 +182,12 @@ next:
|
||||
"addr": conn.RemoteAddr().String(),
|
||||
})
|
||||
|
||||
m.AddConnection(conn, protoConn)
|
||||
s.model.AddConnection(conn, protoConn)
|
||||
continue next
|
||||
}
|
||||
}
|
||||
|
||||
if !cfg.IgnoredDevice(remoteID) {
|
||||
if !s.cfg.IgnoredDevice(remoteID) {
|
||||
events.Default.Log(events.DeviceRejected, map[string]string{
|
||||
"device": remoteID.String(),
|
||||
"address": conn.RemoteAddr().String(),
|
||||
@@ -140,7 +201,7 @@ next:
|
||||
}
|
||||
}
|
||||
|
||||
func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
|
||||
func (s *connectionSvc) listen(addr string) {
|
||||
if debugNet {
|
||||
l.Debugln("listening on", addr)
|
||||
}
|
||||
@@ -166,9 +227,9 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
|
||||
}
|
||||
|
||||
tcpConn := conn.(*net.TCPConn)
|
||||
setTCPOptions(tcpConn)
|
||||
s.setTCPOptions(tcpConn)
|
||||
|
||||
tc := tls.Server(conn, tlsCfg)
|
||||
tc := tls.Server(conn, s.tlsCfg)
|
||||
err = tc.Handshake()
|
||||
if err != nil {
|
||||
l.Infoln("TLS handshake:", err)
|
||||
@@ -176,21 +237,20 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
|
||||
continue
|
||||
}
|
||||
|
||||
conns <- tc
|
||||
s.conns <- tc
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
|
||||
func (s *connectionSvc) connect() {
|
||||
delay := time.Second
|
||||
for {
|
||||
nextDevice:
|
||||
for deviceID, deviceCfg := range cfg.Devices() {
|
||||
for deviceID, deviceCfg := range s.cfg.Devices() {
|
||||
if deviceID == myID {
|
||||
continue
|
||||
}
|
||||
|
||||
if m.ConnectedTo(deviceID) {
|
||||
if s.model.ConnectedTo(deviceID) {
|
||||
continue
|
||||
}
|
||||
|
||||
@@ -238,9 +298,9 @@ func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
|
||||
continue
|
||||
}
|
||||
|
||||
setTCPOptions(conn)
|
||||
s.setTCPOptions(conn)
|
||||
|
||||
tc := tls.Client(conn, tlsCfg)
|
||||
tc := tls.Client(conn, s.tlsCfg)
|
||||
err = tc.Handshake()
|
||||
if err != nil {
|
||||
l.Infoln("TLS handshake:", err)
|
||||
@@ -248,20 +308,20 @@ func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
|
||||
continue
|
||||
}
|
||||
|
||||
conns <- tc
|
||||
s.conns <- tc
|
||||
continue nextDevice
|
||||
}
|
||||
}
|
||||
|
||||
time.Sleep(delay)
|
||||
delay *= 2
|
||||
if maxD := time.Duration(cfg.Options().ReconnectIntervalS) * time.Second; delay > maxD {
|
||||
if maxD := time.Duration(s.cfg.Options().ReconnectIntervalS) * time.Second; delay > maxD {
|
||||
delay = maxD
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func setTCPOptions(conn *net.TCPConn) {
|
||||
func (*connectionSvc) setTCPOptions(conn *net.TCPConn) {
|
||||
var err error
|
||||
if err = conn.SetLinger(0); err != nil {
|
||||
l.Infoln(err)
|
||||
@@ -277,8 +337,8 @@ func setTCPOptions(conn *net.TCPConn) {
|
||||
}
|
||||
}
|
||||
|
||||
func shouldLimit(addr net.Addr) bool {
|
||||
if cfg.Options().LimitBandwidthInLan {
|
||||
func (s *connectionSvc) shouldLimit(addr net.Addr) bool {
|
||||
if s.cfg.Options().LimitBandwidthInLan {
|
||||
return true
|
||||
}
|
||||
|
||||
|
||||
@@ -12,5 +12,6 @@ import (
|
||||
)
|
||||
|
||||
var (
|
||||
debugNet = strings.Contains(os.Getenv("STTRACE"), "net") || os.Getenv("STTRACE") == "all"
|
||||
debugNet = strings.Contains(os.Getenv("STTRACE"), "net") || os.Getenv("STTRACE") == "all"
|
||||
debugHTTP = strings.Contains(os.Getenv("STTRACE"), "http") || os.Getenv("STTRACE") == "all"
|
||||
)
|
||||
|
||||
@@ -7,6 +7,8 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"crypto/tls"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
@@ -16,10 +18,10 @@ import (
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/calmh/logger"
|
||||
@@ -31,34 +33,48 @@ import (
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
"github.com/syncthing/syncthing/internal/model"
|
||||
"github.com/syncthing/syncthing/internal/osutil"
|
||||
"github.com/syncthing/syncthing/internal/sync"
|
||||
"github.com/syncthing/syncthing/internal/upgrade"
|
||||
"github.com/vitrun/qart/qr"
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
)
|
||||
|
||||
type guiError struct {
|
||||
Time time.Time
|
||||
Error string
|
||||
Time time.Time `json:"time"`
|
||||
Error string `json:"error"`
|
||||
}
|
||||
|
||||
var (
|
||||
configInSync = true
|
||||
guiErrors = []guiError{}
|
||||
guiErrorsMut sync.Mutex
|
||||
modt = time.Now().UTC().Format(http.TimeFormat)
|
||||
guiErrorsMut = sync.NewMutex()
|
||||
startTime = time.Now()
|
||||
eventSub *events.BufferedSubscription
|
||||
)
|
||||
|
||||
func init() {
|
||||
l.AddHandler(logger.LevelWarn, showGuiError)
|
||||
sub := events.Default.Subscribe(events.AllEvents)
|
||||
eventSub = events.NewBufferedSubscription(sub, 1000)
|
||||
type apiSvc struct {
|
||||
cfg config.GUIConfiguration
|
||||
assetDir string
|
||||
model *model.Model
|
||||
fss *folderSummarySvc
|
||||
listener net.Listener
|
||||
}
|
||||
|
||||
func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) error {
|
||||
var err error
|
||||
func newAPISvc(cfg config.GUIConfiguration, assetDir string, m *model.Model) (*apiSvc, error) {
|
||||
svc := &apiSvc{
|
||||
cfg: cfg,
|
||||
assetDir: assetDir,
|
||||
model: m,
|
||||
fss: newFolderSummarySvc(m),
|
||||
}
|
||||
|
||||
cert, err := loadCert(confDir, "https-")
|
||||
var err error
|
||||
svc.listener, err = svc.getListener()
|
||||
return svc, err
|
||||
}
|
||||
|
||||
func (s *apiSvc) getListener() (net.Listener, error) {
|
||||
cert, err := tls.LoadX509KeyPair(locations[locHTTPSCertFile], locations[locHTTPSKeyFile])
|
||||
if err != nil {
|
||||
l.Infoln("Loading HTTPS certificate:", err)
|
||||
l.Infoln("Creating new HTTPS certificate")
|
||||
@@ -71,11 +87,10 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
|
||||
name = tlsDefaultCommonName
|
||||
}
|
||||
|
||||
newCertificate(confDir, "https-", name)
|
||||
cert, err = loadCert(confDir, "https-")
|
||||
cert, err = newCertificate(locations[locHTTPSCertFile], locations[locHTTPSKeyFile], name)
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
tlsCfg := &tls.Config{
|
||||
Certificates: []tls.Certificate{cert},
|
||||
@@ -95,55 +110,64 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
|
||||
},
|
||||
}
|
||||
|
||||
rawListener, err := net.Listen("tcp", cfg.Address)
|
||||
rawListener, err := net.Listen("tcp", s.cfg.Address)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
listener := &DowngradingListener{rawListener, tlsCfg}
|
||||
return listener, nil
|
||||
}

func (s *apiSvc) Serve() {
l.AddHandler(logger.LevelWarn, s.showGuiError)
sub := events.Default.Subscribe(events.AllEvents)
eventSub = events.NewBufferedSubscription(sub, 1000)
defer events.Default.Unsubscribe(sub)

// The GET handlers
getRestMux := http.NewServeMux()
getRestMux.HandleFunc("/rest/ping", restPing)
getRestMux.HandleFunc("/rest/completion", withModel(m, restGetCompletion))
getRestMux.HandleFunc("/rest/config", restGetConfig)
getRestMux.HandleFunc("/rest/config/sync", restGetConfigInSync)
getRestMux.HandleFunc("/rest/connections", withModel(m, restGetConnections))
getRestMux.HandleFunc("/rest/autocomplete/directory", restGetAutocompleteDirectory)
getRestMux.HandleFunc("/rest/discovery", restGetDiscovery)
getRestMux.HandleFunc("/rest/errors", restGetErrors)
getRestMux.HandleFunc("/rest/events", restGetEvents)
getRestMux.HandleFunc("/rest/ignores", withModel(m, restGetIgnores))
getRestMux.HandleFunc("/rest/lang", restGetLang)
getRestMux.HandleFunc("/rest/model", withModel(m, restGetModel))
getRestMux.HandleFunc("/rest/need", withModel(m, restGetNeed))
getRestMux.HandleFunc("/rest/deviceid", restGetDeviceID)
getRestMux.HandleFunc("/rest/report", withModel(m, restGetReport))
getRestMux.HandleFunc("/rest/system", restGetSystem)
getRestMux.HandleFunc("/rest/upgrade", restGetUpgrade)
getRestMux.HandleFunc("/rest/version", restGetVersion)
getRestMux.HandleFunc("/rest/tree", withModel(m, restGetTree))
getRestMux.HandleFunc("/rest/stats/device", withModel(m, restGetDeviceStats))
getRestMux.HandleFunc("/rest/stats/folder", withModel(m, restGetFolderStats))
getRestMux.HandleFunc("/rest/filestatus", withModel(m, restGetFileStatus))

// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", withModel(m, restGetPeerCompletion))
getRestMux.HandleFunc("/rest/db/completion", s.getDBCompletion) // device folder
getRestMux.HandleFunc("/rest/db/file", s.getDBFile) // folder file
getRestMux.HandleFunc("/rest/db/ignores", s.getDBIgnores) // folder
getRestMux.HandleFunc("/rest/db/need", s.getDBNeed) // folder [perpage] [page]
getRestMux.HandleFunc("/rest/db/status", s.getDBStatus) // folder
getRestMux.HandleFunc("/rest/db/browse", s.getDBBrowse) // folder [prefix] [dirsonly] [levels]
getRestMux.HandleFunc("/rest/events", s.getEvents) // since [limit]
getRestMux.HandleFunc("/rest/stats/device", s.getDeviceStats) // -
getRestMux.HandleFunc("/rest/stats/folder", s.getFolderStats) // -
getRestMux.HandleFunc("/rest/svc/deviceid", s.getDeviceID) // id
getRestMux.HandleFunc("/rest/svc/lang", s.getLang) // -
getRestMux.HandleFunc("/rest/svc/report", s.getReport) // -
getRestMux.HandleFunc("/rest/system/browse", s.getSystemBrowse) // current
getRestMux.HandleFunc("/rest/system/config", s.getSystemConfig) // -
getRestMux.HandleFunc("/rest/system/config/insync", s.getSystemConfigInsync) // -
getRestMux.HandleFunc("/rest/system/connections", s.getSystemConnections) // -
getRestMux.HandleFunc("/rest/system/discovery", s.getSystemDiscovery) // -
getRestMux.HandleFunc("/rest/system/error", s.getSystemError) // -
getRestMux.HandleFunc("/rest/system/ping", s.restPing) // -
getRestMux.HandleFunc("/rest/system/status", s.getSystemStatus) // -
getRestMux.HandleFunc("/rest/system/upgrade", s.getSystemUpgrade) // -
getRestMux.HandleFunc("/rest/system/version", s.getSystemVersion) // -

// The POST handlers
postRestMux := http.NewServeMux()
postRestMux.HandleFunc("/rest/ping", restPing)
postRestMux.HandleFunc("/rest/config", withModel(m, restPostConfig))
postRestMux.HandleFunc("/rest/discovery/hint", restPostDiscoveryHint)
postRestMux.HandleFunc("/rest/error", restPostError)
postRestMux.HandleFunc("/rest/error/clear", restClearErrors)
postRestMux.HandleFunc("/rest/ignores", withModel(m, restPostIgnores))
postRestMux.HandleFunc("/rest/model/override", withModel(m, restPostOverride))
postRestMux.HandleFunc("/rest/reset", restPostReset)
postRestMux.HandleFunc("/rest/restart", restPostRestart)
postRestMux.HandleFunc("/rest/shutdown", restPostShutdown)
postRestMux.HandleFunc("/rest/upgrade", restPostUpgrade)
postRestMux.HandleFunc("/rest/scan", withModel(m, restPostScan))
postRestMux.HandleFunc("/rest/bump", withModel(m, restPostBump))
postRestMux.HandleFunc("/rest/db/prio", s.postDBPrio) // folder file [perpage] [page]
postRestMux.HandleFunc("/rest/db/ignores", s.postDBIgnores) // folder
postRestMux.HandleFunc("/rest/db/override", s.postDBOverride) // folder
postRestMux.HandleFunc("/rest/db/scan", s.postDBScan) // folder [sub...]
postRestMux.HandleFunc("/rest/system/config", s.postSystemConfig) // <body>
postRestMux.HandleFunc("/rest/system/discovery", s.postSystemDiscovery) // device addr
postRestMux.HandleFunc("/rest/system/error", s.postSystemError) // <body>
postRestMux.HandleFunc("/rest/system/error/clear", s.postSystemErrorClear) // -
postRestMux.HandleFunc("/rest/system/ping", s.restPing) // -
postRestMux.HandleFunc("/rest/system/reset", s.postSystemReset) // [folder]
postRestMux.HandleFunc("/rest/system/restart", s.postSystemRestart) // -
postRestMux.HandleFunc("/rest/system/shutdown", s.postSystemShutdown) // -
postRestMux.HandleFunc("/rest/system/upgrade", s.postSystemUpgrade) // -

// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", s.getPeerCompletion)

// A handler that splits requests between the two above and disables
// caching
@@ -152,40 +176,49 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
// The main routing handler
mux := http.NewServeMux()
mux.Handle("/rest/", restMux)
mux.HandleFunc("/qr/", getQR)
mux.HandleFunc("/qr/", s.getQR)

// Serve compiled in assets unless an asset directory was set (for development)
mux.Handle("/", embeddedStatic(assetDir))
mux.Handle("/", embeddedStatic{
assetDir: s.assetDir,
assets: auto.Assets(),
})

// Wrap everything in CSRF protection. The /rest prefix should be
// protected, other requests will grant cookies.
handler := csrfMiddleware("/rest", cfg.APIKey, mux)
handler := csrfMiddleware("/rest", s.cfg.APIKey, mux)

// Add our version as a header to responses
handler = withVersionMiddleware(handler)

// Wrap everything in basic auth, if user/password is set.
if len(cfg.User) > 0 && len(cfg.Password) > 0 {
handler = basicAuthAndSessionMiddleware(cfg, handler)
if len(s.cfg.User) > 0 && len(s.cfg.Password) > 0 {
handler = basicAuthAndSessionMiddleware(s.cfg, handler)
}

// Redirect to HTTPS if we are supposed to
if cfg.UseTLS {
if s.cfg.UseTLS {
handler = redirectToHTTPSMiddleware(handler)
}

if debugHTTP {
handler = debugMiddleware(handler)
}

srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
}

go func() {
err := srv.Serve(listener)
if err != nil {
panic(err)
}
}()
return nil
s.fss.ServeBackground()

err := srv.Serve(s.listener)
l.Warnln("API:", err)
}

func (s *apiSvc) Stop() {
s.listener.Close()
s.fss.Stop()
}
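With Serve and Stop methods, apiSvc satisfies the suture service interface, which is why setupGUI further down can simply hand it to the main supervisor with mainSvc.Add(api). A minimal, self-contained sketch of that pattern follows; the tickSvc type is invented for the example, and it assumes the Serve/Stop-style suture API vendored at the time of this change (newer suture versions use contexts instead).

package main

import (
	"fmt"
	"time"

	"github.com/thejerf/suture"
)

// tickSvc is a toy service managed by a suture supervisor, the same way the
// apiSvc above is managed by mainSvc.
type tickSvc struct {
	stop chan struct{}
}

func (s *tickSvc) Serve() {
	for {
		select {
		case <-time.After(time.Second):
			fmt.Println("tick")
		case <-s.stop:
			return
		}
	}
}

func (s *tickSvc) Stop() { close(s.stop) }

func main() {
	sup := suture.NewSimple("main")
	sup.Add(&tickSvc{stop: make(chan struct{})})
	sup.ServeBackground() // supervisor restarts the service if Serve returns unexpectedly
	time.Sleep(3 * time.Second)
	sup.Stop()
}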

func getPostHandler(get, post http.Handler) http.Handler {
@@ -201,6 +234,30 @@ func getPostHandler(get, post http.Handler) http.Handler {
})
}

func debugMiddleware(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
t0 := time.Now()
h.ServeHTTP(w, r)
ms := 1000 * time.Since(t0).Seconds()

// The variable `w` is most likely a *http.response, which we can't do
// much with since it's a non exported type. We can however peek into
// it with reflection to get at the status code and number of bytes
// written.
var status, written int64
if rw := reflect.Indirect(reflect.ValueOf(w)); rw.IsValid() && rw.Kind() == reflect.Struct {
if rf := rw.FieldByName("status"); rf.IsValid() && rf.Kind() == reflect.Int {
status = rf.Int()
}
if rf := rw.FieldByName("written"); rf.IsValid() && rf.Kind() == reflect.Int64 {
written = rf.Int()
}
}

l.Debugf("http: %s %q: status %d, %d bytes in %.02f ms", r.Method, r.URL.String(), status, written, ms)
})
}
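The reflection peek above works because the standard library's unexported *http.response happens to carry status and written fields, and it degrades gracefully when it doesn't. The more conventional approach is to wrap the ResponseWriter, at the cost of hiding optional interfaces such as http.Flusher (which flushResponse below relies on) — a plausible reason for the reflection trick. The sketch below is an alternative illustration only, not what this commit does.

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// statusWriter records the status code and byte count without reflection.
type statusWriter struct {
	http.ResponseWriter
	status  int
	written int64
}

func (w *statusWriter) WriteHeader(code int) {
	w.status = code
	w.ResponseWriter.WriteHeader(code)
}

func (w *statusWriter) Write(p []byte) (int, error) {
	if w.status == 0 {
		w.status = http.StatusOK
	}
	n, err := w.ResponseWriter.Write(p)
	w.written += int64(n)
	return n, err
}

func loggingMiddleware(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sw := &statusWriter{ResponseWriter: w}
		h.ServeHTTP(sw, r)
		fmt.Printf("http: %s %q: status %d, %d bytes\n", r.Method, r.URL, sw.status, sw.written)
	})
}

func main() {
	h := loggingMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "not found", http.StatusNotFound)
	}))
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, httptest.NewRequest("GET", "/rest/nope", nil))
	// Prints: http: GET "/rest/nope": status 404, 10 bytes
}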

func redirectToHTTPSMiddleware(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Add a generous access-control-allow-origin header since we may be
@@ -220,7 +277,9 @@ func redirectToHTTPSMiddleware(h http.Handler) http.Handler {

func noCacheMiddleware(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Cache-Control", "max-age=0, no-cache, no-store")
w.Header().Set("Expires", time.Now().UTC().Format(http.TimeFormat))
w.Header().Set("Pragma", "no-cache")
h.ServeHTTP(w, r)
})
}
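The stricter max-age=0, no-cache, no-store value replaces the bare no-cache header. A quick self-contained check of what such a middleware emits; the middleware is re-declared here so the snippet runs on its own and is only an approximation of the one above.

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// noCache is a trimmed-down stand-in for the noCacheMiddleware above.
func noCache(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "max-age=0, no-cache, no-store")
		w.Header().Set("Pragma", "no-cache")
		h.ServeHTTP(w, r)
	})
}

func main() {
	h := noCache(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}))
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, httptest.NewRequest("GET", "/rest/system/ping", nil))
	fmt.Println(rec.Header().Get("Cache-Control")) // max-age=0, no-cache, no-store
}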
@@ -232,20 +291,14 @@ func withVersionMiddleware(h http.Handler) http.Handler {
})
}
|
||||
|
||||
func withModel(m *model.Model, h func(m *model.Model, w http.ResponseWriter, r *http.Request)) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, r *http.Request) {
|
||||
h(m, w, r)
|
||||
}
|
||||
}
|
||||
|
||||
func restPing(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) restPing(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(map[string]string{
|
||||
"ping": "pong",
|
||||
})
|
||||
}
|
||||
|
||||
func restGetVersion(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemVersion(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(map[string]string{
|
||||
"version": Version,
|
||||
@@ -255,7 +308,7 @@ func restGetVersion(w http.ResponseWriter, r *http.Request) {
|
||||
})
|
||||
}
|
||||
|
||||
func restGetTree(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getDBBrowse(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
folder := qs.Get("folder")
|
||||
prefix := qs.Get("prefix")
|
||||
@@ -268,12 +321,12 @@ func restGetTree(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
|
||||
tree := m.GlobalDirectoryTree(folder, prefix, levels, dirsonly)
|
||||
tree := s.model.GlobalDirectoryTree(folder, prefix, levels, dirsonly)
|
||||
|
||||
json.NewEncoder(w).Encode(tree)
|
||||
}
|
||||
|
||||
func restGetCompletion(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getDBCompletion(w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var folder = qs.Get("folder")
|
||||
var deviceStr = qs.Get("device")
|
||||
@@ -285,16 +338,22 @@ func restGetCompletion(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
|
||||
res := map[string]float64{
|
||||
"completion": m.Completion(device, folder),
|
||||
"completion": s.model.Completion(device, folder),
|
||||
}
|
||||
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var folder = qs.Get("folder")
|
||||
func (s *apiSvc) getDBStatus(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
folder := qs.Get("folder")
|
||||
res := folderSummary(s.model, folder)
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func folderSummary(m *model.Model, folder string) map[string]interface{} {
|
||||
var res = make(map[string]interface{})
|
||||
|
||||
res["invalid"] = cfg.Folders()[folder].Invalid
|
||||
@@ -310,7 +369,12 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
res["inSyncFiles"], res["inSyncBytes"] = globalFiles-needFiles, globalBytes-needBytes
|
||||
|
||||
res["state"], res["stateChanged"] = m.State(folder)
|
||||
var err error
|
||||
res["state"], res["stateChanged"], err = m.State(folder)
|
||||
if err != nil {
|
||||
res["error"] = err.Error()
|
||||
}
|
||||
|
||||
res["version"] = m.CurrentLocalVersion(folder) + m.RemoteLocalVersion(folder)
|
||||
|
||||
ignorePatterns, _, _ := m.GetIgnores(folder)
|
||||
@@ -322,77 +386,84 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
}
|
||||
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
return res
|
||||
}
|
||||
|
||||
func restPostOverride(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postDBOverride(w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var folder = qs.Get("folder")
|
||||
go m.Override(folder)
|
||||
go s.model.Override(folder)
|
||||
}
|
||||
|
||||
func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var folder = qs.Get("folder")
|
||||
func (s *apiSvc) getDBNeed(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
|
||||
folder := qs.Get("folder")
|
||||
|
||||
page, err := strconv.Atoi(qs.Get("page"))
|
||||
if err != nil || page < 1 {
|
||||
page = 1
|
||||
}
|
||||
perpage, err := strconv.Atoi(qs.Get("perpage"))
|
||||
if err != nil || perpage < 1 {
|
||||
perpage = 1 << 16
|
||||
}
|
||||
|
||||
progress, queued, rest, total := s.model.NeedFolderFiles(folder, page, perpage)
|
||||
|
||||
progress, queued, rest := m.NeedFolderFiles(folder, 100)
|
||||
// Convert the struct to a more loose structure, and inject the size.
|
||||
output := map[string][]map[string]interface{}{
|
||||
"progress": toNeedSlice(progress),
|
||||
"queued": toNeedSlice(queued),
|
||||
"rest": toNeedSlice(rest),
|
||||
output := map[string]interface{}{
|
||||
"progress": s.toNeedSlice(progress),
|
||||
"queued": s.toNeedSlice(queued),
|
||||
"rest": s.toNeedSlice(rest),
|
||||
"total": total,
|
||||
"page": page,
|
||||
"perpage": perpage,
|
||||
}
|
||||
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(output)
|
||||
}
|
||||
|
||||
func restGetConnections(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
var res = m.ConnectionStats()
|
||||
func (s *apiSvc) getSystemConnections(w http.ResponseWriter, r *http.Request) {
|
||||
var res = s.model.ConnectionStats()
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetDeviceStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
var res = m.DeviceStatistics()
|
||||
func (s *apiSvc) getDeviceStats(w http.ResponseWriter, r *http.Request) {
|
||||
var res = s.model.DeviceStatistics()
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetFolderStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
var res = m.FolderStatistics()
|
||||
func (s *apiSvc) getFolderStats(w http.ResponseWriter, r *http.Request) {
|
||||
var res = s.model.FolderStatistics()
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetFileStatus(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getDBFile(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
folder := qs.Get("folder")
|
||||
file := qs.Get("file")
|
||||
withBlocks := qs.Get("blocks") != ""
|
||||
gf, _ := m.CurrentGlobalFile(folder, file)
|
||||
lf, _ := m.CurrentFolderFile(folder, file)
|
||||
gf, _ := s.model.CurrentGlobalFile(folder, file)
|
||||
lf, _ := s.model.CurrentFolderFile(folder, file)
|
||||
|
||||
if !withBlocks {
|
||||
gf.Blocks = nil
|
||||
lf.Blocks = nil
|
||||
}
|
||||
|
||||
av := m.Availability(folder, file)
|
||||
av := s.model.Availability(folder, file)
|
||||
json.NewEncoder(w).Encode(map[string]interface{}{
|
||||
"global": gf,
|
||||
"local": lf,
|
||||
"global": jsonFileInfo(gf),
|
||||
"local": jsonFileInfo(lf),
|
||||
"availability": av,
|
||||
})
|
||||
}
|
||||
|
||||
func restGetConfig(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemConfig(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(cfg.Raw())
|
||||
}
|
||||
|
||||
func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postSystemConfig(w http.ResponseWriter, r *http.Request) {
|
||||
var newCfg config.Configuration
|
||||
err := json.NewDecoder(r.Body).Decode(&newCfg)
|
||||
if err != nil {
|
||||
@@ -420,11 +491,11 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
// UR was enabled
|
||||
newCfg.Options.URAccepted = usageReportVersion
|
||||
newCfg.Options.URUniqueID = randomString(8)
|
||||
err := sendUsageReport(m)
|
||||
err := sendUsageReport(s.model)
|
||||
if err != nil {
|
||||
l.Infoln("Usage report:", err)
|
||||
}
|
||||
go usageReportingLoop(m)
|
||||
go usageReportingLoop(s.model)
|
||||
} else if newCfg.Options.URAccepted < curAcc {
|
||||
// UR was disabled
|
||||
newCfg.Options.URAccepted = -1
|
||||
@@ -439,37 +510,52 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
cfg.Save()
|
||||
}
|
||||
|
||||
func restGetConfigInSync(w http.ResponseWriter, r *http.Request) {
func (s *apiSvc) getSystemConfigInsync(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(map[string]bool{"configInSync": configInSync})
}

func restPostRestart(w http.ResponseWriter, r *http.Request) {
flushResponse(`{"ok": "restarting"}`, w)
func (s *apiSvc) postSystemRestart(w http.ResponseWriter, r *http.Request) {
s.flushResponse(`{"ok": "restarting"}`, w)
go restart()
}

func restPostReset(w http.ResponseWriter, r *http.Request) {
flushResponse(`{"ok": "resetting folders"}`, w)
resetFolders()
func (s *apiSvc) postSystemReset(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
folder := qs.Get("folder")
var err error
if len(folder) == 0 {
err = resetDB()
} else {
err = s.model.ResetFolder(folder)
}
if err != nil {
http.Error(w, err.Error(), 500)
return
}
if len(folder) == 0 {
s.flushResponse(`{"ok": "resetting database"}`, w)
} else {
s.flushResponse(`{"ok": "resetting folder `+folder+`"}`, w)
}
go restart()
}

func restPostShutdown(w http.ResponseWriter, r *http.Request) {
flushResponse(`{"ok": "shutting down"}`, w)
func (s *apiSvc) postSystemShutdown(w http.ResponseWriter, r *http.Request) {
s.flushResponse(`{"ok": "shutting down"}`, w)
go shutdown()
}

func flushResponse(s string, w http.ResponseWriter) {
w.Write([]byte(s + "\n"))
func (s *apiSvc) flushResponse(resp string, w http.ResponseWriter) {
w.Write([]byte(resp + "\n"))
f := w.(http.Flusher)
f.Flush()
}
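Note that the unchecked assertion w.(http.Flusher) panics if the ResponseWriter has been wrapped by something that does not forward Flusher. A defensive variant, shown here only as a sketch of the alternative, would be:

package sketch

import "net/http"

// flushResponse writes the small JSON body and flushes it, so the client sees
// the reply before a following restart/shutdown tears the connection down.
func flushResponse(resp string, w http.ResponseWriter) {
	w.Write([]byte(resp + "\n"))
	if f, ok := w.(http.Flusher); ok { // don't panic on writers without Flush
		f.Flush()
	}
}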

var cpuUsagePercent [10]float64 // The last ten seconds
var cpuUsageLock sync.RWMutex
var cpuUsageLock = sync.NewRWMutex()

func restGetSystem(w http.ResponseWriter, r *http.Request) {
func (s *apiSvc) getSystemStatus(w http.ResponseWriter, r *http.Request) {
var m runtime.MemStats
runtime.ReadMemStats(&m)

@@ -491,31 +577,32 @@ func restGetSystem(w http.ResponseWriter, r *http.Request) {
cpuUsageLock.RUnlock()
res["cpuPercent"] = cpusum / float64(len(cpuUsagePercent)) / float64(runtime.NumCPU())
res["pathSeparator"] = string(filepath.Separator)
res["uptime"] = int(time.Since(startTime).Seconds())

w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
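cpuUsagePercent is a ten-slot ring of recent per-second CPU percentages; the sampler goroutine that fills it is outside this hunk. The reported cpuPercent is simply the ring average normalised by the CPU count, as in this self-contained sketch (the sample values are invented):

package main

import "fmt"

// rollingAverage mirrors the calculation above: average the last ten samples
// and divide by the number of cores to get a share of total CPU capacity.
func rollingAverage(samples [10]float64, numCPU int) float64 {
	var sum float64
	for _, s := range samples {
		sum += s
	}
	return sum / float64(len(samples)) / float64(numCPU)
}

func main() {
	var ring [10]float64
	for i := range ring {
		ring[i] = 50 // pretend each second used 50% of one core
	}
	fmt.Println(rollingAverage(ring, 4)) // 12.5 — percent of total capacity on 4 cores
}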
|
||||
|
||||
func restGetErrors(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemError(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
guiErrorsMut.Lock()
|
||||
json.NewEncoder(w).Encode(map[string][]guiError{"errors": guiErrors})
|
||||
guiErrorsMut.Unlock()
|
||||
}
|
||||
|
||||
func restPostError(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postSystemError(w http.ResponseWriter, r *http.Request) {
|
||||
bs, _ := ioutil.ReadAll(r.Body)
|
||||
r.Body.Close()
|
||||
showGuiError(0, string(bs))
|
||||
s.showGuiError(0, string(bs))
|
||||
}
|
||||
|
||||
func restClearErrors(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postSystemErrorClear(w http.ResponseWriter, r *http.Request) {
|
||||
guiErrorsMut.Lock()
|
||||
guiErrors = []guiError{}
|
||||
guiErrorsMut.Unlock()
|
||||
}
|
||||
|
||||
func showGuiError(l logger.LogLevel, err string) {
|
||||
func (s *apiSvc) showGuiError(l logger.LogLevel, err string) {
|
||||
guiErrorsMut.Lock()
|
||||
guiErrors = append(guiErrors, guiError{time.Now(), err})
|
||||
if len(guiErrors) > 5 {
|
||||
@@ -524,7 +611,7 @@ func showGuiError(l logger.LogLevel, err string) {
|
||||
guiErrorsMut.Unlock()
|
||||
}
|
||||
|
||||
func restPostDiscoveryHint(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postSystemDiscovery(w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var device = qs.Get("device")
|
||||
var addr = qs.Get("addr")
|
||||
@@ -533,7 +620,7 @@ func restPostDiscoveryHint(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
}
|
||||
|
||||
func restGetDiscovery(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemDiscovery(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
devices := map[string][]discover.CacheEntry{}
|
||||
|
||||
@@ -549,16 +636,16 @@ func restGetDiscovery(w http.ResponseWriter, r *http.Request) {
|
||||
json.NewEncoder(w).Encode(devices)
|
||||
}
|
||||
|
||||
func restGetReport(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getReport(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
json.NewEncoder(w).Encode(reportData(m))
|
||||
json.NewEncoder(w).Encode(reportData(s.model))
|
||||
}
|
||||
|
||||
func restGetIgnores(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getDBIgnores(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
|
||||
ignores, patterns, err := m.GetIgnores(qs.Get("folder"))
|
||||
ignores, patterns, err := s.model.GetIgnores(qs.Get("folder"))
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
return
|
||||
@@ -570,7 +657,7 @@ func restGetIgnores(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
})
|
||||
}
|
||||
|
||||
func restPostIgnores(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postDBIgnores(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
|
||||
var data map[string][]string
|
||||
@@ -582,22 +669,24 @@ func restPostIgnores(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
err = m.SetIgnores(qs.Get("folder"), data["ignore"])
|
||||
err = s.model.SetIgnores(qs.Get("folder"), data["ignore"])
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
return
|
||||
}
|
||||
|
||||
restGetIgnores(m, w, r)
|
||||
s.getDBIgnores(w, r)
|
||||
}
|
||||
|
||||
func restGetEvents(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getEvents(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
sinceStr := qs.Get("since")
|
||||
limitStr := qs.Get("limit")
|
||||
since, _ := strconv.Atoi(sinceStr)
|
||||
limit, _ := strconv.Atoi(limitStr)
|
||||
|
||||
s.fss.gotEventRequest()
|
||||
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
|
||||
// Flush before blocking, to indicate that we've received the request
|
||||
@@ -613,7 +702,7 @@ func restGetEvents(w http.ResponseWriter, r *http.Request) {
|
||||
json.NewEncoder(w).Encode(evs)
|
||||
}
|
||||
|
||||
func restGetUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
if noUpgrade {
|
||||
http.Error(w, upgrade.ErrUpgradeUnsupported.Error(), 500)
|
||||
return
|
||||
@@ -633,7 +722,7 @@ func restGetUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
json.NewEncoder(w).Encode(res)
|
||||
}
|
||||
|
||||
func restGetDeviceID(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getDeviceID(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
idStr := qs.Get("id")
|
||||
id, err := protocol.DeviceIDFromString(idStr)
|
||||
@@ -649,7 +738,7 @@ func restGetDeviceID(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
}
|
||||
|
||||
func restGetLang(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getLang(w http.ResponseWriter, r *http.Request) {
|
||||
lang := r.Header.Get("Accept-Language")
|
||||
var langs []string
|
||||
for _, l := range strings.Split(lang, ",") {
|
||||
@@ -660,7 +749,7 @@ func restGetLang(w http.ResponseWriter, r *http.Request) {
|
||||
json.NewEncoder(w).Encode(langs)
|
||||
}
|
||||
|
||||
func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postSystemUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
rel, err := upgrade.LatestRelease(Version)
|
||||
if err != nil {
|
||||
l.Warnln("getting latest release:", err)
|
||||
@@ -676,23 +765,23 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
flushResponse(`{"ok": "restarting"}`, w)
|
||||
s.flushResponse(`{"ok": "restarting"}`, w)
|
||||
l.Infoln("Upgrading")
|
||||
stop <- exitUpgrading
|
||||
}
|
||||
}
|
||||
|
||||
func restPostScan(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postDBScan(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
folder := qs.Get("folder")
|
||||
if folder != "" {
|
||||
sub := qs.Get("sub")
|
||||
err := m.ScanFolderSub(folder, sub)
|
||||
subs := qs["sub"]
|
||||
err := s.model.ScanFolderSubs(folder, subs)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), 500)
|
||||
}
|
||||
} else {
|
||||
errors := m.ScanFolders()
|
||||
errors := s.model.ScanFolders()
|
||||
if len(errors) > 0 {
|
||||
http.Error(w, "Error scanning folders", 500)
|
||||
json.NewEncoder(w).Encode(errors)
|
||||
@@ -700,15 +789,15 @@ func restPostScan(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
}
|
||||
|
||||
func restPostBump(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) postDBPrio(w http.ResponseWriter, r *http.Request) {
|
||||
qs := r.URL.Query()
|
||||
folder := qs.Get("folder")
|
||||
file := qs.Get("file")
|
||||
m.BringToFront(folder, file)
|
||||
restGetNeed(m, w, r)
|
||||
s.model.BringToFront(folder, file)
|
||||
s.getDBNeed(w, r)
|
||||
}
|
||||
|
||||
func getQR(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getQR(w http.ResponseWriter, r *http.Request) {
|
||||
var qs = r.URL.Query()
|
||||
var text = qs.Get("text")
|
||||
code, err := qr.Encode(text, qr.M)
|
||||
@@ -721,15 +810,15 @@ func getQR(w http.ResponseWriter, r *http.Request) {
|
||||
w.Write(code.PNG())
|
||||
}
|
||||
|
||||
func restGetPeerCompletion(m *model.Model, w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getPeerCompletion(w http.ResponseWriter, r *http.Request) {
|
||||
tot := map[string]float64{}
|
||||
count := map[string]float64{}
|
||||
|
||||
for _, folder := range cfg.Folders() {
|
||||
for _, device := range folder.DeviceIDs() {
|
||||
deviceStr := device.String()
|
||||
if m.ConnectedTo(device) {
|
||||
tot[deviceStr] += m.Completion(device, folder.ID)
|
||||
if s.model.ConnectedTo(device) {
|
||||
tot[deviceStr] += s.model.Completion(device, folder.ID)
|
||||
} else {
|
||||
tot[deviceStr] = 0
|
||||
}
|
||||
@@ -746,7 +835,7 @@ func restGetPeerCompletion(m *model.Model, w http.ResponseWriter, r *http.Reques
|
||||
json.NewEncoder(w).Encode(comp)
|
||||
}
|
||||
|
||||
func restGetAutocompleteDirectory(w http.ResponseWriter, r *http.Request) {
|
||||
func (s *apiSvc) getSystemBrowse(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json; charset=utf-8")
|
||||
qs := r.URL.Query()
|
||||
current := qs.Get("current")
|
||||
@@ -755,7 +844,7 @@ func restGetAutocompleteDirectory(w http.ResponseWriter, r *http.Request) {
|
||||
if strings.HasSuffix(current, pathSeparator) && !strings.HasSuffix(search, pathSeparator) {
|
||||
search = search + pathSeparator
|
||||
}
|
||||
subdirectories, _ := filepath.Glob(search + "*")
|
||||
subdirectories, _ := osutil.Glob(search + "*")
|
||||
ret := make([]string, 0, 10)
|
||||
for _, subdirectory := range subdirectories {
|
||||
info, err := os.Stat(subdirectory)
|
||||
@@ -769,47 +858,57 @@ func restGetAutocompleteDirectory(w http.ResponseWriter, r *http.Request) {
|
||||
json.NewEncoder(w).Encode(ret)
|
||||
}
|
||||
|
||||
func embeddedStatic(assetDir string) http.Handler {
|
||||
assets := auto.Assets()
|
||||
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
file := r.URL.Path
|
||||
|
||||
if file[0] == '/' {
|
||||
file = file[1:]
|
||||
}
|
||||
|
||||
if len(file) == 0 {
|
||||
file = "index.html"
|
||||
}
|
||||
|
||||
if assetDir != "" {
|
||||
p := filepath.Join(assetDir, filepath.FromSlash(file))
|
||||
_, err := os.Stat(p)
|
||||
if err == nil {
|
||||
http.ServeFile(w, r, p)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
bs, ok := assets[file]
|
||||
if !ok {
|
||||
http.NotFound(w, r)
|
||||
return
|
||||
}
|
||||
|
||||
mtype := mimeTypeForFile(file)
|
||||
if len(mtype) != 0 {
|
||||
w.Header().Set("Content-Type", mtype)
|
||||
}
|
||||
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
|
||||
w.Header().Set("Last-Modified", modt)
|
||||
|
||||
w.Write(bs)
|
||||
})
|
||||
type embeddedStatic struct {
assetDir string
assets map[string][]byte
}

func mimeTypeForFile(file string) string {
func (s embeddedStatic) ServeHTTP(w http.ResponseWriter, r *http.Request) {
file := r.URL.Path

if file[0] == '/' {
file = file[1:]
}

if len(file) == 0 {
file = "index.html"
}

if s.assetDir != "" {
p := filepath.Join(s.assetDir, filepath.FromSlash(file))
_, err := os.Stat(p)
if err == nil {
http.ServeFile(w, r, p)
return
}
}

bs, ok := s.assets[file]
if !ok {
http.NotFound(w, r)
return
}

mtype := s.mimeTypeForFile(file)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// decompress if the browser did not send an Accept-Encoding: gzip header
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Header().Set("Last-Modified", auto.AssetsBuildDate)

w.Write(bs)
}
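The embedded assets are stored gzip-compressed; clients that advertise gzip get the bytes verbatim with Content-Encoding: gzip, everyone else gets them decompressed on the fly. A self-contained sketch of that decision (the asset content and helper name here are made up for the example):

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"strings"
)

// serveAsset mirrors the Accept-Encoding branch in embeddedStatic.ServeHTTP:
// send the pre-gzipped body as-is to gzip-capable clients, decompress for the rest.
func serveAsset(w http.ResponseWriter, r *http.Request, gzBody []byte) {
	if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
		w.Header().Set("Content-Encoding", "gzip")
		w.Write(gzBody)
		return
	}
	gr, _ := gzip.NewReader(bytes.NewReader(gzBody))
	plain, _ := ioutil.ReadAll(gr)
	gr.Close()
	w.Write(plain)
}

func main() {
	var buf bytes.Buffer
	gw := gzip.NewWriter(&buf)
	gw.Write([]byte("<html>hi</html>"))
	gw.Close()

	rec := httptest.NewRecorder()
	serveAsset(rec, httptest.NewRequest("GET", "/index.html", nil), buf.Bytes())
	fmt.Println(rec.Body.String()) // "<html>hi</html>" — decompressed for a non-gzip client
}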

func (s embeddedStatic) mimeTypeForFile(file string) string {
// We use a built in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
@@ -836,17 +935,49 @@ func mimeTypeForFile(file string) string {
}
}

func toNeedSlice(fs []db.FileInfoTruncated) []map[string]interface{} {
output := make([]map[string]interface{}, len(fs))
for i, file := range fs {
output[i] = map[string]interface{}{
"Name": file.Name,
"Flags": file.Flags,
"Modified": file.Modified,
"Version": file.Version,
"LocalVersion": file.LocalVersion,
"Size": file.Size(),
}
func (s *apiSvc) toNeedSlice(fs []db.FileInfoTruncated) []jsonDBFileInfo {
res := make([]jsonDBFileInfo, len(fs))
for i, f := range fs {
res[i] = jsonDBFileInfo(f)
}
return output
return res
}

// Type wrappers for nice JSON serialization

type jsonFileInfo protocol.FileInfo

func (f jsonFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": protocol.FileInfo(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"numBlocks": len(f.Blocks),
"version": jsonVersionVector(f.Version),
})
}

type jsonDBFileInfo db.FileInfoTruncated

func (f jsonDBFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": db.FileInfoTruncated(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"version": jsonVersionVector(f.Version),
})
}

type jsonVersionVector protocol.Vector

func (v jsonVersionVector) MarshalJSON() ([]byte, error) {
res := make([]string, len(v))
for i, c := range v {
res[i] = fmt.Sprintf("%d:%d", c.ID, c.Value)
}
return json.Marshal(res)
}
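These wrapper types change how protocol.FileInfo and db.FileInfoTruncated serialize without touching the original types. The same pattern on a stand-in struct, runnable on its own (the struct and its fields are invented for the example):

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type fileInfo struct {
	Name     string
	Flags    uint32
	Modified int64 // Unix seconds
}

// jsonFile is a type wrapper: converting fileInfo to jsonFile swaps in a
// custom MarshalJSON without modifying fileInfo itself.
type jsonFile fileInfo

func (f jsonFile) MarshalJSON() ([]byte, error) {
	return json.Marshal(map[string]interface{}{
		"name":     f.Name,
		"flags":    fmt.Sprintf("%#o", f.Flags),
		"modified": time.Unix(f.Modified, 0).UTC(),
	})
}

func main() {
	f := fileInfo{Name: "docs/readme.txt", Flags: 0644, Modified: 1420070400}
	bs, _ := json.Marshal(jsonFile(f))
	fmt.Println(string(bs))
	// {"flags":"0644","modified":"2015-01-01T00:00:00Z","name":"docs/readme.txt"}
}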
@@ -12,16 +12,16 @@ import (
|
||||
"math/rand"
|
||||
"net/http"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/config"
|
||||
"github.com/syncthing/syncthing/internal/sync"
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
)
|
||||
|
||||
var (
|
||||
sessions = make(map[string]bool)
|
||||
sessionsMut sync.Mutex
|
||||
sessionsMut = sync.NewMutex()
|
||||
)
|
||||
|
||||
func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handler) http.Handler {
|
||||
@@ -42,6 +42,10 @@ func basicAuthAndSessionMiddleware(cfg config.GUIConfiguration, next http.Handle
|
||||
}
|
||||
}
|
||||
|
||||
if debugHTTP {
|
||||
l.Debugln("Sessionless HTTP request with authentication; this is expensive.")
|
||||
}
|
||||
|
||||
error := func() {
|
||||
time.Sleep(time.Duration(rand.Intn(100)+100) * time.Millisecond)
|
||||
w.Header().Set("WWW-Authenticate", "Basic realm=\"Authorization Required\"")
|
||||
|
||||
@@ -11,16 +11,15 @@ import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/osutil"
|
||||
"github.com/syncthing/syncthing/internal/sync"
|
||||
)
|
||||
|
||||
var csrfTokens []string
|
||||
var csrfMut sync.Mutex
|
||||
var csrfMut = sync.NewMutex()
|
||||
|
||||
// Check for CSRF token on /rest/ URLs. If a correct one is not given, reject
|
||||
// the request with 403. For / and /index.html, set a new CSRF cookie if none
|
||||
@@ -92,7 +91,7 @@ func newCsrfToken() string {
|
||||
}
|
||||
|
||||
func saveCsrfTokens() {
|
||||
name := filepath.Join(confDir, "csrftokens.txt")
|
||||
name := locations[locCsrfTokens]
|
||||
tmp := fmt.Sprintf("%s.tmp.%d", name, time.Now().UnixNano())
|
||||
|
||||
f, err := os.OpenFile(tmp, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
|
||||
@@ -117,8 +116,7 @@ func saveCsrfTokens() {
|
||||
}
|
||||
|
||||
func loadCsrfTokens() {
|
||||
name := filepath.Join(confDir, "csrftokens.txt")
|
||||
f, err := os.Open(name)
|
||||
f, err := os.Open(locations[locCsrfTokens])
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
cmd/syncthing/locations.go (new file, 123 lines)
@@ -0,0 +1,123 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.

package main

import (
"os"
"path/filepath"
"runtime"
"strings"
"time"

"github.com/syncthing/syncthing/internal/osutil"
)

type locationEnum string

// Use strings as keys to make printout and serialization of the locations map
// more meaningful.
const (
locConfigFile locationEnum = "config"
locCertFile = "certFile"
locKeyFile = "keyFile"
locHTTPSCertFile = "httpsCertFile"
locHTTPSKeyFile = "httpsKeyFile"
locDatabase = "database"
locLogFile = "logFile"
locCsrfTokens = "csrfTokens"
locPanicLog = "panicLog"
locAuditLog = "auditLog"
locDefFolder = "defFolder"
)

// Platform dependent directories
var baseDirs = map[string]string{
"config": defaultConfigDir(), // Overridden by -home flag
"home": homeDir(), // User's home directory, *not* -home flag
}

// Use the variables from baseDirs here
var locations = map[locationEnum]string{
locConfigFile: "${config}/config.xml",
locCertFile: "${config}/cert.pem",
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.11.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",
locAuditLog: "${config}/audit-${timestamp}.log",
locDefFolder: "${home}/Sync",
}

// expandLocations replaces the variables in the location map with actual
// directory locations.
func expandLocations() error {
for key, dir := range locations {
for varName, value := range baseDirs {
dir = strings.Replace(dir, "${"+varName+"}", value, -1)
}
var err error
dir, err = osutil.ExpandTilde(dir)
if err != nil {
return err
}
locations[key] = dir
}
return nil
}
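expandLocations only does template substitution over the table above. The same substitution shown standalone, with example directories (the paths are made up):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Resolved base directories; in the real code these come from
	// defaultConfigDir() and homeDir().
	baseDirs := map[string]string{
		"config": "/home/alice/.config/syncthing",
		"home":   "/home/alice",
	}
	locations := map[string]string{
		"config":    "${config}/config.xml",
		"defFolder": "${home}/Sync",
	}
	// Replace every ${var} placeholder with its base directory.
	for key, dir := range locations {
		for varName, value := range baseDirs {
			dir = strings.Replace(dir, "${"+varName+"}", value, -1)
		}
		locations[key] = dir
	}
	fmt.Println(locations["config"])    // /home/alice/.config/syncthing/config.xml
	fmt.Println(locations["defFolder"]) // /home/alice/Sync
}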

// defaultConfigDir returns the default configuration directory, as figured
// out by the various environment variables present on each platform, or dies
// trying.
func defaultConfigDir() string {
switch runtime.GOOS {
case "windows":
if p := os.Getenv("LocalAppData"); p != "" {
return filepath.Join(p, "Syncthing")
}
return filepath.Join(os.Getenv("AppData"), "Syncthing")

case "darwin":
dir, err := osutil.ExpandTilde("~/Library/Application Support/Syncthing")
if err != nil {
l.Fatalln(err)
}
return dir

default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing")
}
dir, err := osutil.ExpandTilde("~/.config/syncthing")
if err != nil {
l.Fatalln(err)
}
return dir
}
}

// homeDir returns the user's home directory, or dies trying.
func homeDir() string {
home, err := osutil.ExpandTilde("~")
if err != nil {
l.Fatalln(err)
}
return home
}

func timestampedLoc(key locationEnum) string {
// We take the roundtrip via "${timestamp}" instead of passing the path
// directly through time.Format() to avoid issues when the path we are
// expanding contains numbers; otherwise for example
// /home/user2006/.../panic-20060102-150405.log would get both instances of
// 2006 replaced by 2015...
tpl := locations[key]
now := time.Now().Format("20060102-150405")
return strings.Replace(tpl, "${timestamp}", now, -1)
}
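The comment explains why the timestamp takes a roundtrip through ${timestamp}: running the whole path through time.Format would also rewrite any digits in the path that happen to match Go's reference layout. A runnable demonstration with an invented path:

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	now := time.Date(2015, 3, 14, 9, 26, 53, 0, time.UTC)

	// The ${timestamp} roundtrip only touches the placeholder.
	tpl := "/home/user2006/.config/syncthing/panic-${timestamp}.log"
	good := strings.Replace(tpl, "${timestamp}", now.Format("20060102-150405"), -1)

	// Feeding the whole path to Format treats "2006" in the username as the year.
	bad := now.Format("/home/user2006/.config/syncthing/panic-20060102-150405.log")

	fmt.Println(good) // /home/user2006/.config/syncthing/panic-20150314-092653.log
	fmt.Println(bad)  // /home/user2015/.config/syncthing/panic-20150314-092653.log — "2006" rewritten too
}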
@@ -10,6 +10,7 @@ import (
|
||||
"crypto/tls"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"net"
|
||||
"net/http"
|
||||
@@ -19,7 +20,6 @@ import (
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"runtime/debug"
|
||||
"runtime/pprof"
|
||||
"strconv"
|
||||
"strings"
|
||||
@@ -36,10 +36,10 @@ import (
|
||||
"github.com/syncthing/syncthing/internal/osutil"
|
||||
"github.com/syncthing/syncthing/internal/symlinks"
|
||||
"github.com/syncthing/syncthing/internal/upgrade"
|
||||
"github.com/syncthing/syncthing/internal/upnp"
|
||||
"github.com/syndtr/goleveldb/leveldb"
|
||||
"github.com/syndtr/goleveldb/leveldb/errors"
|
||||
"github.com/syndtr/goleveldb/leveldb/opt"
|
||||
"github.com/thejerf/suture"
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
)
|
||||
|
||||
@@ -63,7 +63,10 @@ const (
|
||||
exitUpgrading = 4
|
||||
)
|
||||
|
||||
const bepProtocolName = "bep/1.0"
|
||||
const (
|
||||
bepProtocolName = "bep/1.0"
|
||||
pingEventInterval = time.Minute
|
||||
)
|
||||
|
||||
var l = logger.DefaultLogger
|
||||
|
||||
@@ -76,12 +79,15 @@ func init() {
|
||||
}
|
||||
}
|
||||
|
||||
// Check for a clean release build.
|
||||
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-beta[\d\.]+)?$`)
|
||||
IsRelease = exp.MatchString(Version)
|
||||
// Check for a clean release build. A release is something like "v0.1.2",
|
||||
// with an optional suffix of letters and dot separated numbers like
|
||||
// "-beta3.47". If there's more stuff, like a plus sign and a commit hash
|
||||
// and so on, then it's not a release. If there's a dash anywhere in
|
||||
// there, it's some kind of beta or prerelease version.
|
||||
|
||||
// Check for a beta build
|
||||
IsBeta = strings.Contains(Version, "-beta")
|
||||
exp := regexp.MustCompile(`^v\d+\.\d+\.\d+(-[a-z]+[\d\.]+)?$`)
|
||||
IsRelease = exp.MatchString(Version)
|
||||
IsBeta = strings.Contains(Version, "-")
|
||||
|
||||
stamp, _ := strconv.Atoi(BuildStamp)
|
||||
BuildDate = time.Unix(int64(stamp), 0)
|
||||
@@ -103,8 +109,6 @@ var (
|
||||
readRateLimit *ratelimit.Bucket
|
||||
stop = make(chan int)
|
||||
discoverer *discover.Discoverer
|
||||
externalPort int
|
||||
igd *upnp.IGD
|
||||
cert tls.Certificate
|
||||
lans []*net.IPNet
|
||||
)
|
||||
@@ -145,6 +149,8 @@ are mostly useful for developers. Use with care.
|
||||
- "discover" (the discover package)
|
||||
- "events" (the events package)
|
||||
- "files" (the files package)
|
||||
- "http" (the main package; HTTP requests)
|
||||
- "locks" (the sync package; trace long held locks)
|
||||
- "net" (the main package; connections & network messages)
|
||||
- "model" (the model package)
|
||||
- "scanner" (the scanner package)
|
||||
@@ -170,7 +176,11 @@ are mostly useful for developers. Use with care.
|
||||
STNOUPGRADE Disable automatic upgrades.
|
||||
|
||||
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
|
||||
available CPU cores.`
|
||||
available CPU cores.
|
||||
|
||||
GOGC Percentage of heap growth at which to trigger GC. Default is
|
||||
100. Lower numbers keep peak memory usage down, at the price
|
||||
of CPU usage (ie. performance).`
|
||||
)
|
||||
|
||||
// Command line and environment options
|
||||
@@ -184,6 +194,8 @@ var (
|
||||
noConsole bool
|
||||
generateDir string
|
||||
logFile string
|
||||
auditEnabled bool
|
||||
verbose bool
|
||||
noRestart = os.Getenv("STNORESTART") != ""
|
||||
noUpgrade = os.Getenv("STNOUPGRADE") != ""
|
||||
guiAddress = os.Getenv("STGUIADDRESS") // legacy
|
||||
@@ -197,17 +209,11 @@ var (
|
||||
)
|
||||
|
||||
func main() {
|
||||
defConfDir, err := getDefaultConfDir()
|
||||
if err != nil {
|
||||
l.Fatalln("home:", err)
|
||||
}
|
||||
|
||||
if runtime.GOOS == "windows" {
|
||||
// On Windows, we use a log file by default. Setting the -logfile flag
|
||||
// to the empty string disables this behavior.
|
||||
// to "-" disables this behavior.
|
||||
|
||||
logFile = filepath.Join(defConfDir, "syncthing.log")
|
||||
flag.StringVar(&logFile, "logfile", logFile, "Log file name (blank for stdout)")
|
||||
flag.StringVar(&logFile, "logfile", "", "Log file name (use \"-\" for stdout)")
|
||||
|
||||
// We also add an option to hide the console window
|
||||
flag.BoolVar(&noConsole, "no-console", false, "Hide console window")
|
||||
@@ -221,29 +227,38 @@ func main() {
|
||||
flag.IntVar(&logFlags, "logflags", logFlags, "Select information in log line prefix")
|
||||
flag.BoolVar(&noBrowser, "no-browser", false, "Do not start browser")
|
||||
flag.BoolVar(&noRestart, "no-restart", noRestart, "Do not restart; just exit")
|
||||
flag.BoolVar(&reset, "reset", false, "Prepare to resync from cluster")
|
||||
flag.BoolVar(&reset, "reset", false, "Reset the database")
|
||||
flag.BoolVar(&doUpgrade, "upgrade", false, "Perform upgrade")
|
||||
flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
|
||||
flag.BoolVar(&showVersion, "version", false, "Show version")
|
||||
flag.StringVar(&upgradeTo, "upgrade-to", upgradeTo, "Force upgrade directly from specified URL")
|
||||
flag.BoolVar(&auditEnabled, "audit", false, "Write events to audit file")
|
||||
flag.BoolVar(&verbose, "verbose", false, "Print verbose log output")
|
||||
|
||||
flag.Usage = usageFor(flag.CommandLine, usage, fmt.Sprintf(extraUsage, defConfDir))
|
||||
flag.Usage = usageFor(flag.CommandLine, usage, fmt.Sprintf(extraUsage, baseDirs["config"]))
|
||||
flag.Parse()
|
||||
|
||||
if noConsole {
|
||||
osutil.HideConsole()
|
||||
}
|
||||
|
||||
if confDir == "" {
|
||||
if confDir != "" {
|
||||
// Not set as default above because the string can be really long.
|
||||
confDir = defConfDir
|
||||
baseDirs["config"] = confDir
|
||||
}
|
||||
|
||||
if confDir != defConfDir && filepath.Dir(logFile) == defConfDir {
|
||||
// The user changed the config dir with -home, but not the log file
|
||||
// location. In this case we assume they meant for the logfile to
|
||||
// still live in it's default location *relative to the config dir*.
|
||||
logFile = filepath.Join(confDir, "syncthing.log")
|
||||
if err := expandLocations(); err != nil {
|
||||
l.Fatalln(err)
|
||||
}
|
||||
|
||||
if runtime.GOOS == "windows" {
|
||||
if logFile == "" {
|
||||
// Use the default log file location
|
||||
logFile = locations[locLogFile]
|
||||
} else if logFile == "-" {
|
||||
// Don't use a logFile
|
||||
logFile = ""
|
||||
}
|
||||
}
|
||||
|
||||
if showVersion {
|
||||
@@ -270,13 +285,13 @@ func main() {
|
||||
}
|
||||
}
|
||||
|
||||
cert, err := loadCert(dir, "")
|
||||
certFile, keyFile := filepath.Join(dir, "cert.pem"), filepath.Join(dir, "key.pem")
|
||||
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
|
||||
if err == nil {
|
||||
l.Warnln("Key exists; will not overwrite.")
|
||||
l.Infoln("Device ID:", protocol.NewDeviceID(cert.Certificate[0]))
|
||||
} else {
|
||||
newCertificate(dir, "", tlsDefaultCommonName)
|
||||
cert, err = loadCert(dir, "")
|
||||
cert, err = newCertificate(certFile, keyFile, tlsDefaultCommonName)
|
||||
myID = protocol.NewDeviceID(cert.Certificate[0])
|
||||
if err != nil {
|
||||
l.Fatalln("load cert:", err)
|
||||
@@ -302,17 +317,12 @@ func main() {
|
||||
return
|
||||
}
|
||||
|
||||
confDir, err := osutil.ExpandTilde(confDir)
|
||||
if err != nil {
|
||||
l.Fatalln("home:", err)
|
||||
}
|
||||
|
||||
if info, err := os.Stat(confDir); err == nil && !info.IsDir() {
|
||||
l.Fatalln("Config directory", confDir, "is not a directory")
|
||||
if info, err := os.Stat(baseDirs["config"]); err == nil && !info.IsDir() {
|
||||
l.Fatalln("Config directory", baseDirs["config"], "is not a directory")
|
||||
}
|
||||
|
||||
// Ensure that our home directory exists.
|
||||
ensureDir(confDir, 0700)
|
||||
ensureDir(baseDirs["config"], 0700)
|
||||
|
||||
if upgradeTo != "" {
|
||||
err := upgrade.ToURL(upgradeTo)
|
||||
@@ -338,9 +348,15 @@ func main() {
|
||||
|
||||
if doUpgrade {
|
||||
// Use leveldb database locks to protect against concurrent upgrades
|
||||
_, err = leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{OpenFilesCacheCapacity: 100})
|
||||
_, err = leveldb.OpenFile(locations[locDatabase], &opt.Options{OpenFilesCacheCapacity: 100})
|
||||
if err != nil {
|
||||
l.Fatalln("Cannot upgrade, database seems to be locked. Is another copy of Syncthing already running?")
|
||||
l.Infoln("Attempting upgrade through running Syncthing...")
|
||||
err = upgradeViaRest()
|
||||
if err != nil {
|
||||
l.Fatalln("Upgrade:", err)
|
||||
}
|
||||
l.Okln("Syncthing upgrading")
|
||||
return
|
||||
}
|
||||
|
||||
err = upgrade.To(rel)
|
||||
@@ -354,7 +370,7 @@ func main() {
|
||||
}
|
||||
|
||||
if reset {
|
||||
resetFolders()
|
||||
resetDB()
|
||||
return
|
||||
}
|
||||
|
||||
@@ -365,24 +381,76 @@ func main() {
|
||||
}
|
||||
}
|
||||
|
||||
func syncthingMain() {
|
||||
var err error
|
||||
func upgradeViaRest() error {
|
||||
cfg, err := config.Load(locations[locConfigFile], protocol.LocalDeviceID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
target := cfg.GUI().Address
|
||||
if cfg.GUI().UseTLS {
|
||||
target = "https://" + target
|
||||
} else {
|
||||
target = "http://" + target
|
||||
}
|
||||
r, _ := http.NewRequest("POST", target+"/rest/system/upgrade", nil)
|
||||
r.Header.Set("X-API-Key", cfg.GUI().APIKey)
|
||||
|
||||
if len(os.Getenv("GOGC")) == 0 {
|
||||
debug.SetGCPercent(25)
|
||||
tr := &http.Transport{
|
||||
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
|
||||
}
|
||||
client := &http.Client{
|
||||
Transport: tr,
|
||||
Timeout: 60 * time.Second,
|
||||
}
|
||||
resp, err := client.Do(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if resp.StatusCode != 200 {
|
||||
bs, err := ioutil.ReadAll(resp.Body)
|
||||
defer resp.Body.Close()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return errors.New(string(bs))
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func syncthingMain() {
|
||||
// Create a main service manager. We'll add things to this as we go along.
|
||||
// We want any logging it does to go through our log system, with INFO
|
||||
// severity.
|
||||
mainSvc := suture.New("main", suture.Spec{
|
||||
Log: func(line string) {
|
||||
l.Infoln(line)
|
||||
},
|
||||
})
|
||||
mainSvc.ServeBackground()
|
||||
|
||||
// Set a log prefix similar to the ID we will have later on, or early log
|
||||
// lines look ugly.
|
||||
l.SetPrefix("[start] ")
|
||||
|
||||
if auditEnabled {
|
||||
startAuditing(mainSvc)
|
||||
}
|
||||
|
||||
if verbose {
|
||||
mainSvc.Add(newVerboseSvc())
|
||||
}
|
||||
|
||||
if len(os.Getenv("GOMAXPROCS")) == 0 {
|
||||
runtime.GOMAXPROCS(runtime.NumCPU())
|
||||
}
|
||||
|
||||
events.Default.Log(events.Starting, map[string]string{"home": confDir})
|
||||
events.Default.Log(events.Starting, map[string]string{"home": baseDirs["config"]})
|
||||
|
||||
// Ensure that that we have a certificate and key.
|
||||
cert, err = loadCert(confDir, "")
|
||||
cert, err := tls.LoadX509KeyPair(locations[locCertFile], locations[locKeyFile])
|
||||
if err != nil {
|
||||
newCertificate(confDir, "", tlsDefaultCommonName)
|
||||
cert, err = loadCert(confDir, "")
|
||||
cert, err = newCertificate(locations[locCertFile], locations[locKeyFile], tlsDefaultCommonName)
|
||||
if err != nil {
|
||||
l.Fatalln("load cert:", err)
|
||||
}
|
||||
@@ -400,7 +468,7 @@ func syncthingMain() {
|
||||
|
||||
// Prepare to be able to save configuration
|
||||
|
||||
cfgFile := filepath.Join(confDir, "config.xml")
|
||||
cfgFile := locations[locConfigFile]
|
||||
|
||||
var myName string
|
||||
|
||||
@@ -439,6 +507,10 @@ func syncthingMain() {
|
||||
cfg.Save()
|
||||
}
|
||||
|
||||
if err := checkShortIDs(cfg); err != nil {
|
||||
l.Fatalln("Short device IDs are in conflict. Unlucky!\n Regenerate the device ID of one if the following:\n ", err)
|
||||
}
|
||||
|
||||
if len(profiler) > 0 {
|
||||
go func() {
|
||||
l.Debugln("Starting profiler on", profiler)
|
||||
@@ -495,11 +567,10 @@ func syncthingMain() {
|
||||
l.Infoln("Local networks:", strings.Join(networks, ", "))
|
||||
}
|
||||
|
||||
dbFile := filepath.Join(confDir, "index")
|
||||
dbOpts := &opt.Options{OpenFilesCacheCapacity: 100}
|
||||
ldb, err := leveldb.OpenFile(dbFile, dbOpts)
|
||||
dbFile := locations[locDatabase]
|
||||
ldb, err := leveldb.OpenFile(dbFile, dbOpts())
|
||||
if err != nil && errors.IsCorrupted(err) {
|
||||
ldb, err = leveldb.RecoverFile(dbFile, dbOpts)
|
||||
ldb, err = leveldb.RecoverFile(dbFile, dbOpts())
|
||||
}
|
||||
if err != nil {
|
||||
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
|
||||
@@ -514,26 +585,31 @@ func syncthingMain() {
|
||||
}
|
||||
}
|
||||
|
||||
m := model.NewModel(cfg, myName, "syncthing", Version, ldb)
|
||||
m := model.NewModel(cfg, myID, myName, "syncthing", Version, ldb)
|
||||
|
||||
sanityCheckFolders(cfg, m)
|
||||
if t := os.Getenv("STDEADLOCKTIMEOUT"); len(t) > 0 {
|
||||
it, err := strconv.Atoi(t)
|
||||
if err == nil {
|
||||
m.StartDeadlockDetector(time.Duration(it) * time.Second)
|
||||
}
|
||||
} else if !IsRelease || IsBeta {
|
||||
m.StartDeadlockDetector(20 * 60 * time.Second)
|
||||
}
|
||||
|
||||
// GUI
|
||||
|
||||
setupGUI(cfg, m)
|
||||
setupGUI(mainSvc, cfg, m)
|
||||
|
||||
// Clear out old indexes for other devices. Otherwise we'll start up and
|
||||
// start needing a bunch of files which are nowhere to be found. This
|
||||
// needs to be changed when we correctly do persistent indexes.
|
||||
for _, folderCfg := range cfg.Folders() {
|
||||
if folderCfg.Invalid != "" {
|
||||
continue
|
||||
}
|
||||
m.AddFolder(folderCfg)
|
||||
for _, device := range folderCfg.DeviceIDs() {
|
||||
if device == myID {
|
||||
continue
|
||||
}
|
||||
m.Index(device, folderCfg.ID, nil)
|
||||
m.Index(device, folderCfg.ID, nil, 0, nil)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -543,24 +619,24 @@ func syncthingMain() {
|
||||
if err != nil {
|
||||
l.Fatalln("Bad listen address:", err)
|
||||
}
|
||||
externalPort = addr.Port
|
||||
|
||||
// UPnP
|
||||
igd = nil
|
||||
// Start discovery
|
||||
|
||||
localPort := addr.Port
|
||||
discoverer = discovery(localPort)
|
||||
|
||||
// Start UPnP. The UPnP service will restart global discovery if the
|
||||
// external port changes.
|
||||
|
||||
if opts.UPnPEnabled {
|
||||
setupUPnP()
|
||||
upnpSvc := newUPnPSvc(cfg, localPort)
|
||||
mainSvc.Add(upnpSvc)
|
||||
}
|
||||
|
||||
// Routine to connect out to configured devices
|
||||
discoverer = discovery(externalPort)
|
||||
go listenConnect(myID, m, tlsCfg)
|
||||
connectionSvc := newConnectionSvc(cfg, myID, m, tlsCfg)
|
||||
mainSvc.Add(connectionSvc)
|
||||
|
||||
for _, folder := range cfg.Folders() {
|
||||
if folder.Invalid != "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Routine to pull blocks from other devices to synchronize the local
|
||||
// folder. Does not run when we are in read only (publish only) mode.
|
||||
if folder.ReadOnly {
|
||||
@@ -626,15 +702,67 @@ func syncthingMain() {
|
||||
}
|
||||
|
||||
events.Default.Log(events.StartupComplete, nil)
|
||||
go generateEvents()
|
||||
go generatePingEvents()
|
||||
|
||||
cleanConfigDirectory()
|
||||
|
||||
code := <-stop
|
||||
|
||||
mainSvc.Stop()
|
||||
|
||||
l.Okln("Exiting")
|
||||
os.Exit(code)
|
||||
}
|
||||
|
||||
func setupGUI(cfg *config.Wrapper, m *model.Model) {
|
||||
func dbOpts() *opt.Options {
|
||||
// Calculate a sutiable database block cache capacity. We start at the
|
||||
// default of 8 MiB and use larger values for machines with more memory.
|
||||
// In reality, the database will use twice the amount we calculate here,
|
||||
// as it also has two write buffers each sized at half the block cache.
|
||||
|
||||
blockCacheCapacity := 8 << 20
|
||||
if bytes, err := memorySize(); err == nil {
|
||||
if bytes > 74<<30 {
|
||||
// At 74 GiB of RAM, we hit a 256 MiB block cache (per the
|
||||
// calculations below). There's probably no point in growing the
|
||||
// cache beyond this point.
|
||||
blockCacheCapacity = 256 << 20
|
||||
} else if bytes > 8<<30 {
|
||||
// Slowly grow from 128 MiB at 8 GiB of RAM up to 256 MiB for a
|
||||
// ~74 GiB RAM machine
|
||||
blockCacheCapacity = int(bytes/512) + 128 - 16
|
||||
} else if bytes > 512<<20 {
|
||||
// Grow from 8 MiB at start to 128 MiB of cache at 8 GiB of RAM.
|
||||
blockCacheCapacity = int(bytes / 64)
|
||||
}
|
||||
l.Infoln("Database block cache capacity", blockCacheCapacity/1024, "KiB")
|
||||
}
|
||||
|
||||
return &opt.Options{
|
||||
OpenFilesCacheCapacity: 100,
|
||||
BlockCacheCapacity: blockCacheCapacity,
|
||||
WriteBuffer: blockCacheCapacity / 2,
|
||||
}
|
||||
}
|
||||
|
||||
func startAuditing(mainSvc *suture.Supervisor) {
|
||||
auditFile := timestampedLoc(locAuditLog)
|
||||
fd, err := os.OpenFile(auditFile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
|
||||
if err != nil {
|
||||
l.Fatalln("Audit:", err)
|
||||
}
|
||||
|
||||
auditSvc := newAuditSvc(fd)
|
||||
mainSvc.Add(auditSvc)
|
||||
|
||||
// We wait for the audit service to fully start before we return, to
|
||||
// ensure we capture all events from the start.
|
||||
auditSvc.WaitForStart()
|
||||
|
||||
l.Infoln("Audit log in", auditFile)
|
||||
}
|
||||
|
||||
func setupGUI(mainSvc *suture.Supervisor, cfg *config.Wrapper, m *model.Model) {
|
||||
opts := cfg.Options()
|
||||
guiCfg := overrideGUIConfig(cfg.GUI(), guiAddress, guiAuthentication, guiAPIKey)
|
||||
|
||||
@@ -663,10 +791,12 @@ func setupGUI(cfg *config.Wrapper, m *model.Model) {
|
||||
|
||||
urlShow := fmt.Sprintf("%s://%s/", proto, net.JoinHostPort(hostShow, strconv.Itoa(addr.Port)))
|
||||
l.Infoln("Starting web GUI on", urlShow)
|
||||
err := startGUI(guiCfg, guiAssets, m)
|
||||
api, err := newAPISvc(guiCfg, guiAssets, m)
|
||||
if err != nil {
|
||||
l.Fatalln("Cannot start GUI:", err)
|
||||
}
|
||||
mainSvc.Add(api)
|
||||
|
||||
if opts.StartBrowser && !noBrowser && !stRestarting {
|
||||
urlOpen := fmt.Sprintf("%s://%s/", proto, net.JoinHostPort(hostOpen, strconv.Itoa(addr.Port)))
|
||||
// Can potentially block if the utility we are invoking doesn't
|
||||
@@ -677,66 +807,12 @@ func setupGUI(cfg *config.Wrapper, m *model.Model) {
|
||||
}
|
||||
}
|
||||
|
||||
func sanityCheckFolders(cfg *config.Wrapper, m *model.Model) {
|
||||
nextFolder:
|
||||
for id, folder := range cfg.Folders() {
|
||||
if folder.Invalid != "" {
|
||||
continue
|
||||
}
|
||||
m.AddFolder(folder)
|
||||
|
||||
fi, err := os.Stat(folder.Path)
|
||||
if m.CurrentLocalVersion(id) > 0 {
|
||||
// Safety check. If the cached index contains files but the
|
||||
// folder doesn't exist, we have a problem. We would assume
|
||||
// that all files have been deleted which might not be the case,
|
||||
// so mark it as invalid instead.
|
||||
if err != nil || !fi.IsDir() {
|
||||
l.Warnf("Stopping folder %q - path does not exist, but has files in index", folder.ID)
|
||||
cfg.InvalidateFolder(id, "folder path missing")
|
||||
continue nextFolder
|
||||
} else if !folder.HasMarker() {
|
||||
l.Warnf("Stopping folder %q - path exists, but folder marker missing, check for mount issues", folder.ID)
|
||||
cfg.InvalidateFolder(id, "folder marker missing")
|
||||
continue nextFolder
|
||||
}
|
||||
} else if os.IsNotExist(err) {
|
||||
// If we don't have any files in the index, and the directory
|
||||
// doesn't exist, try creating it.
|
||||
err = os.MkdirAll(folder.Path, 0700)
|
||||
if err != nil {
|
||||
l.Warnf("Stopping folder %q - %v", folder.ID, err)
|
||||
cfg.InvalidateFolder(id, err.Error())
|
||||
continue nextFolder
|
||||
}
|
||||
err = folder.CreateMarker()
|
||||
} else if !folder.HasMarker() {
|
||||
// If we don't have any files in the index, and the path does exist
|
||||
// but the marker is not there, create it.
|
||||
err = folder.CreateMarker()
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
// If there was another error or we could not create the
|
||||
// path, the folder is invalid.
|
||||
l.Warnf("Stopping folder %q - %v", folder.ID, err)
|
||||
cfg.InvalidateFolder(id, err.Error())
|
||||
continue nextFolder
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func defaultConfig(myName string) config.Configuration {
|
||||
defaultFolder, err := osutil.ExpandTilde("~/Sync")
|
||||
if err != nil {
|
||||
l.Fatalln("home:", err)
|
||||
}
|
||||
|
||||
newCfg := config.New(myID)
|
||||
newCfg.Folders = []config.FolderConfiguration{
|
||||
{
|
||||
ID: "default",
|
||||
Path: defaultFolder,
|
||||
RawPath: locations[locDefFolder],
|
||||
RescanIntervalS: 60,
|
||||
Devices: []config.FolderDeviceConfiguration{{DeviceID: myID}},
|
||||
},
|
||||
@@ -749,7 +825,7 @@ func defaultConfig(myName string) config.Configuration {
|
||||
},
|
||||
}
|
||||
|
||||
port, err := getFreePort("127.0.0.1", 8080)
|
||||
port, err := getFreePort("127.0.0.1", 8384)
|
||||
if err != nil {
|
||||
l.Fatalln("get free port (GUI):", err)
|
||||
}
|
||||
@@ -763,139 +839,15 @@ func defaultConfig(myName string) config.Configuration {
|
||||
return newCfg
|
||||
}
|
||||
|
||||
func generateEvents() {
|
||||
func generatePingEvents() {
|
||||
for {
|
||||
time.Sleep(300 * time.Second)
|
||||
time.Sleep(pingEventInterval)
|
||||
events.Default.Log(events.Ping, nil)
|
||||
}
|
||||
}
|
||||
|
||||
func setupUPnP() {
|
||||
if opts := cfg.Options(); len(opts.ListenAddress) == 1 {
|
||||
_, portStr, err := net.SplitHostPort(opts.ListenAddress[0])
|
||||
if err != nil {
|
||||
l.Warnln("Bad listen address:", err)
|
||||
} else {
|
||||
// Set up incoming port forwarding, if necessary and possible
|
||||
port, _ := strconv.Atoi(portStr)
|
||||
igds := upnp.Discover()
|
||||
if len(igds) > 0 {
|
||||
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
|
||||
// for handling multiple IGDs, which will require changes to the global discovery service
|
||||
igd = &igds[0]
|
||||
|
||||
externalPort = setupExternalPort(igd, port)
|
||||
if externalPort == 0 {
|
||||
l.Warnln("Failed to create UPnP port mapping")
|
||||
} else {
|
||||
l.Infof("Created UPnP port mapping for external port %d on UPnP device %s.", externalPort, igd.FriendlyIdentifier())
|
||||
|
||||
if opts.UPnPRenewal > 0 {
|
||||
go renewUPnP(port)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
l.Warnln("Multiple listening addresses; not attempting UPnP port mapping")
|
||||
}
|
||||
}
|
||||
|
||||
func setupExternalPort(igd *upnp.IGD, port int) int {
|
||||
if igd == nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
for i := 0; i < 10; i++ {
|
||||
r := 1024 + predictableRandom.Intn(65535-1024)
|
||||
err := igd.AddPortMapping(upnp.TCP, r, port, fmt.Sprintf("syncthing-%d", r), cfg.Options().UPnPLease*60)
|
||||
if err == nil {
|
||||
return r
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func renewUPnP(port int) {
|
||||
for {
|
||||
opts := cfg.Options()
|
||||
time.Sleep(time.Duration(opts.UPnPRenewal) * time.Minute)
|
||||
|
||||
// Make sure our IGD reference isn't nil
|
||||
if igd == nil {
|
||||
if debugNet {
|
||||
l.Debugln("Undefined IGD during UPnP port renewal. Re-discovering...")
|
||||
}
|
||||
igds := upnp.Discover()
|
||||
if len(igds) > 0 {
|
||||
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
|
||||
// for handling multiple IGDs, which will require changes to the global discovery service
|
||||
igd = &igds[0]
|
||||
} else {
|
||||
if debugNet {
|
||||
l.Debugln("Failed to discover IGD during UPnP port mapping renewal.")
|
||||
}
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// Just renew the same port that we already have
|
||||
if externalPort != 0 {
|
||||
err := igd.AddPortMapping(upnp.TCP, externalPort, port, "syncthing", opts.UPnPLease*60)
|
||||
if err != nil {
|
||||
l.Warnf("Error renewing UPnP port mapping for external port %d on device %s: %s", externalPort, igd.FriendlyIdentifier(), err.Error())
|
||||
} else if debugNet {
|
||||
l.Debugf("Renewed UPnP port mapping for external port %d on device %s.", externalPort, igd.FriendlyIdentifier())
|
||||
}
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
// Something strange has happened. We didn't have an external port before?
|
||||
// Or perhaps the gateway has changed?
|
||||
// Retry the same port sequence from the beginning.
|
||||
if debugNet {
|
||||
l.Debugln("No UPnP port mapping defined, updating...")
|
||||
}
|
||||
|
||||
forwardedPort := setupExternalPort(igd, port)
|
||||
if forwardedPort != 0 {
|
||||
externalPort = forwardedPort
|
||||
discoverer.StopGlobal()
|
||||
discoverer.StartGlobal(opts.GlobalAnnServers, uint16(forwardedPort))
|
||||
if debugNet {
|
||||
l.Debugf("Updated UPnP port mapping for external port %d on device %s.", forwardedPort, igd.FriendlyIdentifier())
|
||||
}
|
||||
} else {
|
||||
l.Warnf("Failed to update UPnP port mapping for external port on device " + igd.FriendlyIdentifier() + ".")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func resetFolders() {
|
||||
confDir, err := osutil.ExpandTilde(confDir)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
cfgFile := filepath.Join(confDir, "config.xml")
|
||||
cfg, err := config.Load(cfgFile, myID)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
suffix := fmt.Sprintf(".syncthing-reset-%d", time.Now().UnixNano())
|
||||
for _, folder := range cfg.Folders() {
|
||||
if _, err := os.Stat(folder.Path); err == nil {
|
||||
base := filepath.Base(folder.Path)
|
||||
dir := filepath.Dir(filepath.Join(folder.Path, ".."))
|
||||
l.Infof("Reset: Moving %s -> %s", folder.Path, filepath.Join(dir, base+suffix))
|
||||
os.Rename(folder.Path, filepath.Join(dir, base+suffix))
|
||||
}
|
||||
}
|
||||
|
||||
idx := filepath.Join(confDir, "index")
|
||||
os.RemoveAll(idx)
|
||||
func resetDB() error {
|
||||
return os.RemoveAll(locations[locDatabase])
|
||||
}
|
||||
|
||||
func restart() {
|
||||
@@ -941,25 +893,6 @@ func ensureDir(dir string, mode int) {
|
||||
}
|
||||
}
|
||||
|
||||
func getDefaultConfDir() (string, error) {
|
||||
switch runtime.GOOS {
|
||||
case "windows":
|
||||
if p := os.Getenv("LocalAppData"); p != "" {
|
||||
return filepath.Join(p, "Syncthing"), nil
|
||||
}
|
||||
return filepath.Join(os.Getenv("AppData"), "Syncthing"), nil
|
||||
|
||||
case "darwin":
|
||||
return osutil.ExpandTilde("~/Library/Application Support/Syncthing")
|
||||
|
||||
default:
|
||||
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
|
||||
return filepath.Join(xdgCfg, "syncthing"), nil
|
||||
}
|
||||
return osutil.ExpandTilde("~/.config/syncthing")
|
||||
}
|
||||
}
|
||||
|
||||
// getFreePort returns a free TCP port for listening on. The ports given are
// tried in succession and the first to succeed is returned. If none succeed,
// a random high port is returned.
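The comment above describes a simple try-in-order allocation with a kernel-assigned fallback. A minimal standalone sketch of such a helper, using only the standard library (the function name and signature here are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// freePort is an illustrative stand-in for the helper described above: it
// tries the given ports in order and falls back to a kernel-assigned port.
func freePort(host string, ports ...int) (int, error) {
	for _, port := range ports {
		l, err := net.Listen("tcp", net.JoinHostPort(host, strconv.Itoa(port)))
		if err == nil {
			l.Close()
			return port, nil
		}
	}
	// Port 0 asks the OS for any free (high) port.
	l, err := net.Listen("tcp", net.JoinHostPort(host, "0"))
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := freePort("127.0.0.1", 8384, 8080)
	fmt.Println(port, err)
}
```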
|
||||
@@ -1090,3 +1023,56 @@ func autoUpgrade() {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// cleanConfigDirectory removes old, unused configuration and index formats, a
// suitable time after they have gone out of fashion.
func cleanConfigDirectory() {
	patterns := map[string]time.Duration{
		"panic-*.log":    7 * 24 * time.Hour,  // keep panic logs for a week
		"audit-*.log":    7 * 24 * time.Hour,  // keep audit logs for a week
		"index":          14 * 24 * time.Hour, // keep old index format for two weeks
		"config.xml.v*":  30 * 24 * time.Hour, // old config versions for a month
		"*.idx.gz":       30 * 24 * time.Hour, // these should for sure no longer exist
		"backup-of-v0.8": 30 * 24 * time.Hour, // these neither
	}

	for pat, dur := range patterns {
		pat = filepath.Join(baseDirs["config"], pat)
		files, err := osutil.Glob(pat)
		if err != nil {
			l.Infoln("Cleaning:", err)
			continue
		}

		for _, file := range files {
			info, err := osutil.Lstat(file)
			if err != nil {
				l.Infoln("Cleaning:", err)
				continue
			}

			if time.Since(info.ModTime()) > dur {
				if err = os.RemoveAll(file); err != nil {
					l.Infoln("Cleaning:", err)
				} else {
					l.Infoln("Cleaned away old file", filepath.Base(file))
				}
			}
		}
	}
}
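The sweep above is plain glob-plus-mtime retention. A rough standalone equivalent using only the standard library (the directory and second pattern below are made-up placeholders, with filepath.Glob and os.Lstat standing in for the osutil wrappers):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"time"
)

func main() {
	confDir := "/tmp/example-config" // placeholder directory
	patterns := map[string]time.Duration{
		"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
		"*.tmp":       24 * time.Hour,     // made-up second pattern
	}

	for pat, maxAge := range patterns {
		files, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			log.Println("cleaning:", err)
			continue
		}
		for _, file := range files {
			info, err := os.Lstat(file)
			if err != nil {
				log.Println("cleaning:", err)
				continue
			}
			// Anything older than its retention period gets removed.
			if time.Since(info.ModTime()) > maxAge {
				if err := os.RemoveAll(file); err != nil {
					log.Println("cleaning:", err)
				}
			}
		}
	}
}
```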
|
||||
|
||||
// checkShortIDs verifies that the configuration won't result in duplicate
// short IDs; that is, that the devices in the cluster all have unique
// initial 64 bits.
func checkShortIDs(cfg *config.Wrapper) error {
	exists := make(map[uint64]protocol.DeviceID)
	for deviceID := range cfg.Devices() {
		shortID := deviceID.Short()
		if otherID, ok := exists[shortID]; ok {
			return fmt.Errorf("%v in conflict with %v", deviceID, otherID)
		}
		exists[shortID] = deviceID
	}
	return nil
}
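The check works because only the first 64 bits of each device ID take part in the short form, so two long IDs that agree on those bits would be indistinguishable in the short representation. A minimal illustration of the same map-based collision check over raw byte slices (assuming, purely for this sketch, that the short ID is simply the initial 8 bytes interpreted big-endian):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Two made-up 32-byte device IDs that share their first 8 bytes.
	a := make([]byte, 32)
	b := make([]byte, 32)
	copy(a, []byte{8, 16, 24, 32, 40, 48, 56, 64})
	copy(b, []byte{8, 16, 24, 32, 40, 48, 56, 64})
	b[31] = 1 // they differ only near the end

	seen := make(map[uint64][]byte)
	for _, id := range [][]byte{a, b} {
		short := binary.BigEndian.Uint64(id[:8]) // the "short ID": first 64 bits
		if other, ok := seen[short]; ok {
			fmt.Printf("collision: % x and % x share short ID %x\n", other[:8], id[:8], short)
			return
		}
		seen[short] = id
	}
	fmt.Println("no short ID collision")
}
```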
|
||||
|
||||
@@ -19,19 +19,22 @@ import (
|
||||
"github.com/syndtr/goleveldb/leveldb/storage"
|
||||
)
|
||||
|
||||
func TestSanityCheck(t *testing.T) {
|
||||
func TestFolderErrors(t *testing.T) {
|
||||
// This test intentionally avoids starting the folders. If they are
|
||||
// started, they will perform an initial scan, which will create missing
|
||||
// folder markers and race with the stuff we do in the test.
|
||||
|
||||
fcfg := config.FolderConfiguration{
|
||||
ID: "folder",
|
||||
Path: "testdata/testfolder",
|
||||
ID: "folder",
|
||||
RawPath: "testdata/testfolder",
|
||||
}
|
||||
cfg := config.Wrap("/tmp/test", config.Configuration{
|
||||
Folders: []config.FolderConfiguration{fcfg},
|
||||
})
|
||||
|
||||
for _, file := range []string{".stfolder", "testfolder", "testfolder/.stfolder"} {
|
||||
_, err := os.Stat("testdata/" + file)
|
||||
if err == nil {
|
||||
t.Error("Found unexpected file")
|
||||
for _, file := range []string{".stfolder", "testfolder/.stfolder", "testfolder"} {
|
||||
if err := os.Remove("testdata/" + file); err != nil && !os.IsNotExist(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -39,10 +42,10 @@ func TestSanityCheck(t *testing.T) {
|
||||
|
||||
// Case 1 - new folder, directory and marker created
|
||||
|
||||
m := model.NewModel(cfg, "device", "syncthing", "dev", ldb)
|
||||
sanityCheckFolders(cfg, m)
|
||||
m := model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
|
||||
m.AddFolder(fcfg)
|
||||
|
||||
if cfg.Folders()["folder"].Invalid != "" {
|
||||
if err := m.CheckFolderHealth("folder"); err != nil {
|
||||
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
|
||||
}
|
||||
|
||||
@@ -56,20 +59,24 @@ func TestSanityCheck(t *testing.T) {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
os.Remove("testdata/testfolder/.stfolder")
|
||||
os.Remove("testdata/testfolder/")
|
||||
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if err := os.Remove("testdata/testfolder/"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Case 2 - new folder, marker created
|
||||
|
||||
fcfg.Path = "testdata/"
|
||||
fcfg.RawPath = "testdata/"
|
||||
cfg = config.Wrap("/tmp/test", config.Configuration{
|
||||
Folders: []config.FolderConfiguration{fcfg},
|
||||
})
|
||||
|
||||
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
|
||||
sanityCheckFolders(cfg, m)
|
||||
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
|
||||
m.AddFolder(fcfg)
|
||||
|
||||
if cfg.Folders()["folder"].Invalid != "" {
|
||||
if err := m.CheckFolderHealth("folder"); err != nil {
|
||||
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
|
||||
}
|
||||
|
||||
@@ -78,33 +85,96 @@ func TestSanityCheck(t *testing.T) {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
os.Remove("testdata/.stfolder")
|
||||
if err := os.Remove("testdata/.stfolder"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Case 3 - marker missing
|
||||
// Case 3 - Folder marker missing
|
||||
|
||||
set := db.NewFileSet("folder", ldb)
|
||||
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
|
||||
{Name: "dummyfile"},
|
||||
})
|
||||
|
||||
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
|
||||
sanityCheckFolders(cfg, m)
|
||||
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
|
||||
m.AddFolder(fcfg)
|
||||
|
||||
if cfg.Folders()["folder"].Invalid != "folder marker missing" {
|
||||
t.Error("Incorrect error")
|
||||
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
|
||||
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
|
||||
}
|
||||
|
||||
// Case 4 - path missing
|
||||
// Case 3.1 - recover after folder marker missing
|
||||
|
||||
fcfg.Path = "testdata/testfolder"
|
||||
cfg = config.Wrap("/tmp/test", config.Configuration{
|
||||
if err = fcfg.CreateMarker(); err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if err := m.CheckFolderHealth("folder"); err != nil {
|
||||
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
|
||||
}
|
||||
|
||||
// Case 4 - Folder path missing
|
||||
|
||||
if err := os.Remove("testdata/testfolder/.stfolder"); err != nil && !os.IsNotExist(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if err := os.Remove("testdata/testfolder"); err != nil && !os.IsNotExist(err) {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
fcfg.RawPath = "testdata/testfolder"
|
||||
cfg = config.Wrap("testdata/subfolder", config.Configuration{
|
||||
Folders: []config.FolderConfiguration{fcfg},
|
||||
})
|
||||
|
||||
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
|
||||
sanityCheckFolders(cfg, m)
|
||||
m = model.NewModel(cfg, protocol.LocalDeviceID, "device", "syncthing", "dev", ldb)
|
||||
m.AddFolder(fcfg)
|
||||
|
||||
if cfg.Folders()["folder"].Invalid != "folder path missing" {
|
||||
t.Error("Incorrect error")
|
||||
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder path missing" {
|
||||
t.Error("Incorrect error: Folder path missing !=", m.CheckFolderHealth("folder"))
|
||||
}
|
||||
|
||||
// Case 4.1 - recover after folder path missing
|
||||
|
||||
if err := os.Mkdir("testdata/testfolder", 0700); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if err := m.CheckFolderHealth("folder"); err == nil || err.Error() != "folder marker missing" {
|
||||
t.Error("Incorrect error: Folder marker missing !=", m.CheckFolderHealth("folder"))
|
||||
}
|
||||
|
||||
// Case 4.2 - recover after missing marker
|
||||
|
||||
if err = fcfg.CreateMarker(); err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if err := m.CheckFolderHealth("folder"); err != nil {
|
||||
t.Error("Unexpected error", cfg.Folders()["folder"].Invalid)
|
||||
}
|
||||
}
|
||||
|
||||
func TestShortIDCheck(t *testing.T) {
|
||||
cfg := config.Wrap("/tmp/test", config.Configuration{
|
||||
Devices: []config.DeviceConfiguration{
|
||||
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 0, 0}},
|
||||
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 1, 1}}, // first 56 bits same, differ in the first 64 bits
|
||||
},
|
||||
})
|
||||
|
||||
if err := checkShortIDs(cfg); err != nil {
|
||||
t.Error("Unexpected error:", err)
|
||||
}
|
||||
|
||||
cfg = config.Wrap("/tmp/test", config.Configuration{
|
||||
Devices: []config.DeviceConfiguration{
|
||||
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 64, 0}},
|
||||
{DeviceID: protocol.DeviceID{8, 16, 24, 32, 40, 48, 56, 64, 1}}, // first 64 bits same
|
||||
},
|
||||
})
|
||||
|
||||
if err := checkShortIDs(cfg); err == nil {
|
||||
t.Error("Should have gotten an error")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -12,20 +12,19 @@ import (
|
||||
"os"
|
||||
"os/exec"
|
||||
"os/signal"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
"sync"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/osutil"
|
||||
"github.com/syncthing/syncthing/internal/sync"
|
||||
)
|
||||
|
||||
var (
|
||||
stdoutFirstLines []string // The first 10 lines of stdout
|
||||
stdoutLastLines []string // The last 50 lines of stdout
|
||||
stdoutMut sync.Mutex
|
||||
stdoutMut = sync.NewMutex()
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -125,7 +124,7 @@ func monitorMain() {
|
||||
|
||||
case err = <-exit:
|
||||
if err == nil {
|
||||
// Successfull exit indicates an intentional shutdown
|
||||
// Successful exit indicates an intentional shutdown
|
||||
return
|
||||
} else if exiterr, ok := err.(*exec.ExitError); ok {
|
||||
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
|
||||
@@ -164,7 +163,7 @@ func copyStderr(stderr io.ReadCloser, dst io.Writer) {
|
||||
dst.Write([]byte(line))
|
||||
|
||||
if strings.HasPrefix(line, "panic:") || strings.HasPrefix(line, "fatal error:") {
|
||||
panicFd, err = os.Create(filepath.Join(confDir, time.Now().Format("panic-20060102-150405.log")))
|
||||
panicFd, err = os.Create(timestampedLoc(locPanicLog))
|
||||
if err != nil {
|
||||
l.Warnln("Create panic log:", err)
|
||||
continue
|
||||
|
||||
cmd/syncthing/summarysvc.go (new file, 203 lines)
@@ -0,0 +1,203 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
"github.com/syncthing/syncthing/internal/model"
|
||||
"github.com/syncthing/syncthing/internal/sync"
|
||||
"github.com/thejerf/suture"
|
||||
)
|
||||
|
||||
// The folderSummarySvc adds summary information events (FolderSummary and
|
||||
// FolderCompletion) into the event stream at certain intervals.
|
||||
type folderSummarySvc struct {
|
||||
*suture.Supervisor
|
||||
|
||||
model *model.Model
|
||||
stop chan struct{}
|
||||
immediate chan string
|
||||
|
||||
// For keeping track of folders to recalculate for
|
||||
foldersMut sync.Mutex
|
||||
folders map[string]struct{}
|
||||
|
||||
// For keeping track of when the last event request on the API was
|
||||
lastEventReq time.Time
|
||||
lastEventReqMut sync.Mutex
|
||||
}
|
||||
|
||||
func newFolderSummarySvc(m *model.Model) *folderSummarySvc {
|
||||
svc := &folderSummarySvc{
|
||||
Supervisor: suture.NewSimple("folderSummarySvc"),
|
||||
model: m,
|
||||
stop: make(chan struct{}),
|
||||
immediate: make(chan string),
|
||||
folders: make(map[string]struct{}),
|
||||
foldersMut: sync.NewMutex(),
|
||||
lastEventReqMut: sync.NewMutex(),
|
||||
}
|
||||
|
||||
svc.Add(serviceFunc(svc.listenForUpdates))
|
||||
svc.Add(serviceFunc(svc.calculateSummaries))
|
||||
|
||||
return svc
|
||||
}
|
||||
|
||||
func (c *folderSummarySvc) Stop() {
|
||||
c.Supervisor.Stop()
|
||||
close(c.stop)
|
||||
}
|
||||
|
||||
// listenForUpdates subscribes to the event bus and makes note of folders that
|
||||
// need their data recalculated.
|
||||
func (c *folderSummarySvc) listenForUpdates() {
|
||||
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged)
|
||||
defer events.Default.Unsubscribe(sub)
|
||||
|
||||
for {
|
||||
// This loop needs to be fast so we don't miss too many events.
|
||||
|
||||
select {
|
||||
case ev := <-sub.C():
|
||||
// Whenever the local or remote index is updated for a given
|
||||
// folder we make a note of it.
|
||||
|
||||
data := ev.Data.(map[string]interface{})
|
||||
folder := data["folder"].(string)
|
||||
|
||||
switch ev.Type {
|
||||
case events.StateChanged:
|
||||
if data["to"].(string) == "idle" && data["from"].(string) == "syncing" {
|
||||
// The folder changed to idle from syncing. We should do an
|
||||
// immediate refresh to update the GUI. The send to
|
||||
// c.immediate must be nonblocking so that we can continue
|
||||
// handling events.
|
||||
|
||||
select {
|
||||
case c.immediate <- folder:
|
||||
c.foldersMut.Lock()
|
||||
delete(c.folders, folder)
|
||||
c.foldersMut.Unlock()
|
||||
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
default:
|
||||
// This folder needs to be refreshed whenever we do the next
|
||||
// refresh.
|
||||
|
||||
c.foldersMut.Lock()
|
||||
c.folders[folder] = struct{}{}
|
||||
c.foldersMut.Unlock()
|
||||
}
|
||||
|
||||
case <-c.stop:
|
||||
return
|
||||
}
|
||||
}
|
||||
}
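The select with an empty default above is Go's standard non-blocking send: if the summary calculator isn't ready to take the folder name on c.immediate right now, the event loop simply moves on instead of stalling, which keeps event handling fast as the comment at the top of the loop asks for. A tiny illustration of the idiom:

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: a send blocks unless a receiver is ready

	select {
	case ch <- "refresh-now":
		fmt.Println("consumer took the request")
	default:
		// Nobody was ready to receive; drop the request rather than block.
		fmt.Println("dropped the request")
	}
}
```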
|
||||
|
||||
// calculateSummaries periodically recalculates folder summaries and
// completion percentage, and sends the results on the event bus.
func (c *folderSummarySvc) calculateSummaries() {
	const pumpInterval = 2 * time.Second
	pump := time.NewTimer(pumpInterval)

	for {
		select {
		case <-pump.C:
			t0 := time.Now()
			for _, folder := range c.foldersToHandle() {
				c.sendSummary(folder)
			}

			// We don't want to spend all our time calculating summaries. Let's
			// set an arbitrary limit at not spending more than about 30% of
			// our time here...
			wait := 2*time.Since(t0) + pumpInterval
			pump.Reset(wait)

		case folder := <-c.immediate:
			c.sendSummary(folder)

		case <-c.stop:
			return
		}
	}
}
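The pacing above works out as claimed: if one pass over the folders takes d, the timer is re-armed for 2d plus the 2-second base interval, so the fraction of wall-clock time spent calculating is at most d / (d + 2d + 2 s) = d / (3d + 2 s) < 1/3, in line with the "about 30%" target in the comment.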
|
||||
|
||||
// foldersToHandle returns the list of folders needing a summary update, and
|
||||
// clears the list.
|
||||
func (c *folderSummarySvc) foldersToHandle() []string {
|
||||
// We only recalculate summaries if someone is listening to events
|
||||
// (a request to /rest/events has been made within the last
|
||||
// pingEventInterval).
|
||||
|
||||
c.lastEventReqMut.Lock()
|
||||
last := c.lastEventReq
|
||||
c.lastEventReqMut.Unlock()
|
||||
if time.Since(last) > pingEventInterval {
|
||||
return nil
|
||||
}
|
||||
|
||||
c.foldersMut.Lock()
|
||||
res := make([]string, 0, len(c.folders))
|
||||
for folder := range c.folders {
|
||||
res = append(res, folder)
|
||||
delete(c.folders, folder)
|
||||
}
|
||||
c.foldersMut.Unlock()
|
||||
return res
|
||||
}
|
||||
|
||||
// sendSummary sends the summary events for a single folder
|
||||
func (c *folderSummarySvc) sendSummary(folder string) {
|
||||
// The folder summary contains how many bytes, files etc
|
||||
// are in the folder and how in sync we are.
|
||||
data := folderSummary(c.model, folder)
|
||||
events.Default.Log(events.FolderSummary, map[string]interface{}{
|
||||
"folder": folder,
|
||||
"summary": data,
|
||||
})
|
||||
|
||||
for _, devCfg := range cfg.Folders()[folder].Devices {
|
||||
if devCfg.DeviceID.Equals(myID) {
|
||||
// We already know about ourselves.
|
||||
continue
|
||||
}
|
||||
if !c.model.ConnectedTo(devCfg.DeviceID) {
|
||||
// We're not interested in disconnected devices.
|
||||
continue
|
||||
}
|
||||
|
||||
// Get completion percentage of this folder for the
|
||||
// remote device.
|
||||
comp := c.model.Completion(devCfg.DeviceID, folder)
|
||||
events.Default.Log(events.FolderCompletion, map[string]interface{}{
|
||||
"folder": folder,
|
||||
"device": devCfg.DeviceID.String(),
|
||||
"completion": comp,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func (c *folderSummarySvc) gotEventRequest() {
|
||||
c.lastEventReqMut.Lock()
|
||||
c.lastEventReq = time.Now()
|
||||
c.lastEventReqMut.Unlock()
|
||||
}
|
||||
|
||||
// serviceFunc wraps a function to create a suture.Service without stop
// functionality.
type serviceFunc func()

func (f serviceFunc) Serve() { f() }
func (f serviceFunc) Stop()  {}
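serviceFunc is the usual function-to-interface adapter (the same trick as net/http's HandlerFunc): any plain func() can be handed to the supervisor as a service, which is how the constructor above registers listenForUpdates and calculateSummaries. A tiny self-contained sketch of the idea, using a local interface instead of the real suture types:

```go
package main

import "fmt"

// service mirrors the two-method shape expected by the supervisor.
type service interface {
	Serve()
	Stop()
}

// serviceFunc adapts a bare function to the service interface.
type serviceFunc func()

func (f serviceFunc) Serve() { f() }
func (f serviceFunc) Stop()  {}

func main() {
	var s service = serviceFunc(func() { fmt.Println("running") })
	s.Serve()
	s.Stop()
}
```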
|
||||
cmd/syncthing/testdata/.stfolder (new empty file, vendored)
cmd/syncthing/testdata/testfolder/.stfolder (new empty file, vendored)
@@ -19,7 +19,6 @@ import (
|
||||
mr "math/rand"
|
||||
"net"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"time"
|
||||
)
|
||||
|
||||
@@ -28,13 +27,7 @@ const (
|
||||
tlsDefaultCommonName = "syncthing"
|
||||
)
|
||||
|
||||
func loadCert(dir string, prefix string) (tls.Certificate, error) {
|
||||
cf := filepath.Join(dir, prefix+"cert.pem")
|
||||
kf := filepath.Join(dir, prefix+"key.pem")
|
||||
return tls.LoadX509KeyPair(cf, kf)
|
||||
}
|
||||
|
||||
func newCertificate(dir, prefix, name string) {
|
||||
func newCertificate(certFile, keyFile, name string) (tls.Certificate, error) {
|
||||
l.Infof("Generating RSA key and certificate for %s...", name)
|
||||
|
||||
priv, err := rsa.GenerateKey(rand.Reader, tlsRSABits)
|
||||
@@ -63,7 +56,7 @@ func newCertificate(dir, prefix, name string) {
|
||||
l.Fatalln("create cert:", err)
|
||||
}
|
||||
|
||||
certOut, err := os.Create(filepath.Join(dir, prefix+"cert.pem"))
|
||||
certOut, err := os.Create(certFile)
|
||||
if err != nil {
|
||||
l.Fatalln("save cert:", err)
|
||||
}
|
||||
@@ -76,7 +69,7 @@ func newCertificate(dir, prefix, name string) {
|
||||
l.Fatalln("save cert:", err)
|
||||
}
|
||||
|
||||
keyOut, err := os.OpenFile(filepath.Join(dir, prefix+"key.pem"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
|
||||
keyOut, err := os.OpenFile(keyFile, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
|
||||
if err != nil {
|
||||
l.Fatalln("save key:", err)
|
||||
}
|
||||
@@ -88,6 +81,8 @@ func newCertificate(dir, prefix, name string) {
|
||||
if err != nil {
|
||||
l.Fatalln("save key:", err)
|
||||
}
|
||||
|
||||
return tls.LoadX509KeyPair(certFile, keyFile)
|
||||
}
|
||||
|
||||
type DowngradingListener struct {
|
||||
|
||||
cmd/syncthing/upnpsvc.go (new file, 111 lines)
@@ -0,0 +1,111 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/config"
|
||||
"github.com/syncthing/syncthing/internal/upnp"
|
||||
)
|
||||
|
||||
// The UPnP service runs a loop for discovery of IGDs (Internet Gateway
|
||||
// Devices) and setup/renewal of a port mapping.
|
||||
type upnpSvc struct {
|
||||
cfg *config.Wrapper
|
||||
localPort int
|
||||
}
|
||||
|
||||
func newUPnPSvc(cfg *config.Wrapper, localPort int) *upnpSvc {
|
||||
return &upnpSvc{
|
||||
cfg: cfg,
|
||||
localPort: localPort,
|
||||
}
|
||||
}
|
||||
|
||||
func (s *upnpSvc) Serve() {
|
||||
extPort := 0
|
||||
foundIGD := true
|
||||
|
||||
for {
|
||||
igds := upnp.Discover(time.Duration(s.cfg.Options().UPnPTimeoutS) * time.Second)
|
||||
if len(igds) > 0 {
|
||||
foundIGD = true
|
||||
extPort = s.tryIGDs(igds, extPort)
|
||||
} else if foundIGD {
|
||||
// Only print a notice if we've previously found an IGD or this is
|
||||
// the first time around.
|
||||
foundIGD = false
|
||||
l.Infof("No UPnP device detected")
|
||||
}
|
||||
|
||||
d := time.Duration(s.cfg.Options().UPnPRenewalM) * time.Minute
|
||||
if d == 0 {
|
||||
// We always want to do renewal, so let's just pick a nice sane number.
|
||||
d = 30 * time.Minute
|
||||
}
|
||||
time.Sleep(d)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *upnpSvc) Stop() {
|
||||
panic("upnpSvc cannot stop")
|
||||
}
|
||||
|
||||
func (s *upnpSvc) tryIGDs(igds []upnp.IGD, prevExtPort int) int {
|
||||
// Let's try all the IGDs we found and use the first one that works.
|
||||
// TODO: Use all of them, and sort out the resulting mess to the
|
||||
// discovery announcement code...
|
||||
for _, igd := range igds {
|
||||
extPort, err := s.tryIGD(igd, prevExtPort)
|
||||
if err != nil {
|
||||
l.Warnf("Failed to set UPnP port mapping: external port %d on device %s.", extPort, igd.FriendlyIdentifier())
|
||||
continue
|
||||
}
|
||||
|
||||
if extPort != prevExtPort {
|
||||
// External port changed; refresh the discovery announcement.
|
||||
// TODO: Don't reach out to some magic global here?
|
||||
l.Infof("New UPnP port mapping: external port %d to local port %d.", extPort, s.localPort)
|
||||
discoverer.StopGlobal()
|
||||
discoverer.StartGlobal(s.cfg.Options().GlobalAnnServers, uint16(extPort))
|
||||
}
|
||||
if debugNet {
|
||||
l.Debugf("Created/updated UPnP port mapping for external port %d on device %s.", extPort, igd.FriendlyIdentifier())
|
||||
}
|
||||
return extPort
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
func (s *upnpSvc) tryIGD(igd upnp.IGD, suggestedPort int) (int, error) {
|
||||
var err error
|
||||
leaseTime := s.cfg.Options().UPnPLeaseM * 60
|
||||
|
||||
if suggestedPort != 0 {
|
||||
// First try renewing our existing mapping.
|
||||
name := fmt.Sprintf("syncthing-%d", suggestedPort)
|
||||
err = igd.AddPortMapping(upnp.TCP, suggestedPort, s.localPort, name, leaseTime)
|
||||
if err == nil {
|
||||
return suggestedPort, nil
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < 10; i++ {
|
||||
// Then try up to ten random ports.
|
||||
extPort := 1024 + predictableRandom.Intn(65535-1024)
|
||||
name := fmt.Sprintf("syncthing-%d", suggestedPort)
|
||||
err = igd.AddPortMapping(upnp.TCP, extPort, s.localPort, name, leaseTime)
|
||||
if err == nil {
|
||||
return extPort, nil
|
||||
}
|
||||
}
|
||||
|
||||
return 0, err
|
||||
}
|
||||
cmd/syncthing/verbose.go (new file, 126 lines)
@@ -0,0 +1,126 @@
|
||||
// Copyright (C) 2015 The Syncthing Authors.
|
||||
//
|
||||
// This Source Code Form is subject to the terms of the Mozilla Public
|
||||
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
|
||||
// You can obtain one at http://mozilla.org/MPL/2.0/.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/syncthing/syncthing/internal/events"
|
||||
)
|
||||
|
||||
// The verbose logging service subscribes to events and prints these in
|
||||
// verbose format to the console using INFO level.
|
||||
type verboseSvc struct {
|
||||
stop chan struct{} // signals time to stop
|
||||
started chan struct{} // signals startup complete
|
||||
}
|
||||
|
||||
func newVerboseSvc() *verboseSvc {
|
||||
return &verboseSvc{
|
||||
stop: make(chan struct{}),
|
||||
started: make(chan struct{}),
|
||||
}
|
||||
}
|
||||
|
||||
// Serve runs the verbose logging service.
|
||||
func (s *verboseSvc) Serve() {
|
||||
sub := events.Default.Subscribe(events.AllEvents)
|
||||
defer events.Default.Unsubscribe(sub)
|
||||
|
||||
// We're ready to start processing events.
|
||||
close(s.started)
|
||||
|
||||
for {
|
||||
select {
|
||||
case ev := <-sub.C():
|
||||
formatted := s.formatEvent(ev)
|
||||
if formatted != "" {
|
||||
l.Verboseln(formatted)
|
||||
}
|
||||
case <-s.stop:
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Stop stops the verbose logging service.
|
||||
func (s *verboseSvc) Stop() {
|
||||
close(s.stop)
|
||||
}
|
||||
|
||||
// WaitForStart returns once the verbose logging service is ready to receive
// events, or immediately if it's already running.
func (s *verboseSvc) WaitForStart() {
	<-s.started
}
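The started channel gives the usual one-shot readiness signal: Serve closes it once the event subscription is in place, and because receiving from a closed channel never blocks, WaitForStart returns immediately for any later caller as well. A minimal sketch of the idiom with illustrative names:

```go
package main

import (
	"fmt"
	"time"
)

type worker struct {
	started chan struct{}
}

func (w *worker) run() {
	// ... set up whatever must exist before callers proceed ...
	close(w.started) // signal readiness exactly once
	time.Sleep(100 * time.Millisecond)
}

func main() {
	w := &worker{started: make(chan struct{})}
	go w.run()
	<-w.started // blocks until run() has signalled readiness
	fmt.Println("worker is ready")
	<-w.started // a closed channel never blocks, so this returns at once
	fmt.Println("still ready")
}
```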
|
||||
|
||||
func (s *verboseSvc) formatEvent(ev events.Event) string {
|
||||
switch ev.Type {
|
||||
case events.Ping, events.DownloadProgress:
|
||||
// Skip
|
||||
return ""
|
||||
|
||||
case events.Starting:
|
||||
return fmt.Sprintf("Starting up (%s)", ev.Data.(map[string]string)["home"])
|
||||
case events.StartupComplete:
|
||||
return "Startup complete"
|
||||
|
||||
case events.DeviceDiscovered:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Discovered device %v at %v", data["device"], data["addrs"])
|
||||
case events.DeviceConnected:
|
||||
data := ev.Data.(map[string]string)
|
||||
return fmt.Sprintf("Connected to device %v at %v", data["id"], data["addr"])
|
||||
case events.DeviceDisconnected:
|
||||
data := ev.Data.(map[string]string)
|
||||
return fmt.Sprintf("Disconnected from device %v", data["id"])
|
||||
|
||||
case events.StateChanged:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Folder %q is now %v", data["folder"], data["to"])
|
||||
|
||||
case events.RemoteIndexUpdated:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Device %v sent an index update for %q with %d items", data["device"], data["folder"], data["items"])
|
||||
case events.LocalIndexUpdated:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Updated index for folder %q with %v items", data["folder"], data["items"])
|
||||
|
||||
case events.DeviceRejected:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Rejected connection from device %v at %v", data["device"], data["address"])
|
||||
case events.FolderRejected:
|
||||
data := ev.Data.(map[string]string)
|
||||
return fmt.Sprintf("Rejected unshared folder %q from device %v", data["folder"], data["device"])
|
||||
|
||||
case events.ItemStarted:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Started syncing %q / %q (%v %v)", data["folder"], data["item"], data["action"], data["type"])
|
||||
case events.ItemFinished:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
if err := data["err"]; err != nil {
|
||||
return fmt.Sprintf("Finished syncing %q / %q (%v %v): %v", data["folder"], data["item"], data["action"], data["type"], err)
|
||||
}
|
||||
return fmt.Sprintf("Finished syncing %q / %q (%v %v): Success", data["folder"], data["item"], data["action"], data["type"])
|
||||
|
||||
case events.ConfigSaved:
|
||||
return "Configuration was saved"
|
||||
|
||||
case events.FolderCompletion:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
return fmt.Sprintf("Completion for folder %q on device %v is %v%%", data["folder"], data["device"], data["completion"])
|
||||
case events.FolderSummary:
|
||||
data := ev.Data.(map[string]interface{})
|
||||
sum := data["summary"].(map[string]interface{})
|
||||
delete(sum, "invalid")
|
||||
delete(sum, "ignorePatterns")
|
||||
delete(sum, "stateChanged")
|
||||
return fmt.Sprintf("Summary for folder %q is %v", data["folder"], data["summary"])
|
||||
}
|
||||
|
||||
return fmt.Sprintf("%s %#v", ev.Type, ev)
|
||||
}
|
||||
@@ -1,27 +1,8 @@
|
||||
This directory contains a configuration for running syncthing under the
|
||||
# Systemd Configuration
|
||||
|
||||
This directory contains configuration files for running syncthing under the
|
||||
"systemd" service manager on Linux both under either a systemd system service or
|
||||
systemd user service.
|
||||
systemd user service. For further documentation take a look at the [systemd
|
||||
section][1] on the Github Wiki.
|
||||
|
||||
1. Install systemd.
|
||||
|
||||
2. If you are running this as a system level service:
|
||||
|
||||
1. Create the user you will be running the service as (foo in this example).
|
||||
|
||||
2. Copy the syncthing@.service files to /etc/systemd/system
|
||||
|
||||
3. Enable and start the service
|
||||
systemctl enable syncthing@foo.service
|
||||
systemctl start syncthing@foo.service
|
||||
|
||||
3. If you are running this as a user level service:
|
||||
|
||||
1. Log in as the user you will be running the service as
|
||||
|
||||
2. Copy the syncthing.service files to /etc/systemd/user
|
||||
|
||||
3. Enable and start the service
|
||||
systemctl --user enable syncthing.service
|
||||
systemctl --user start syncthing.service
|
||||
|
||||
Log output is sent to the journal.
|
||||
[1]: https://github.com/syncthing/syncthing/wiki/Autostart-syncthing#systemd
|
||||
|
||||
@@ -8,8 +8,7 @@ User=%i
|
||||
Environment=STNORESTART=yes
|
||||
ExecStart=/usr/bin/syncthing -no-browser -logflags=0
|
||||
Restart=on-failure
|
||||
RestartPreventExitStatus=1
|
||||
SuccessExitStatus=2
|
||||
SuccessExitStatus=2 3 4
|
||||
RestartForceExitStatus=3 4
|
||||
|
||||
[Install]
|
||||
|
||||
@@ -7,8 +7,7 @@ After=network.target
|
||||
Environment=STNORESTART=yes
|
||||
ExecStart=/usr/bin/syncthing -no-browser -logflags=0
|
||||
Restart=on-failure
|
||||
RestartPreventExitStatus=1
|
||||
SuccessExitStatus=2
|
||||
SuccessExitStatus=2 3 4
|
||||
RestartForceExitStatus=3 4
|
||||
|
||||
[Install]
|
||||
|
||||
@@ -153,6 +153,45 @@ table.table-condensed td {
|
||||
padding-right: 15px;
|
||||
}
|
||||
|
||||
|
||||
/**
|
||||
* Menu for select language
|
||||
*/
|
||||
|
||||
@media (min-width:480px) {
|
||||
*[language-select] > .dropdown-menu > li {
|
||||
width: 50%;
|
||||
float: left;
|
||||
}
|
||||
*[language-select] > .dropdown-menu {
|
||||
width: 400px;
|
||||
}
|
||||
}
|
||||
|
||||
@media (max-width:479px) {
|
||||
.dropdown-menu {
|
||||
padding-top: 55px;
|
||||
}
|
||||
|
||||
*[language-select] > .dropdown-toggle {
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.dropdown-toggle {
|
||||
float: left;
|
||||
}
|
||||
|
||||
.navbar-brand {
|
||||
padding-left: 0;
|
||||
padding-top: 16px;
|
||||
}
|
||||
|
||||
.navbar-nav .open .dropdown-menu > li > a {
|
||||
padding: 12px 15px 12px 25px;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
.panel-body .table-condensed {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
|
Image file updated (size: 3.7 KiB before, 3.6 KiB after).
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Адрасы",
|
||||
"All Data": "All Data",
|
||||
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Any devices configured on an introducer device will be added to this device as well.",
|
||||
"Automatic upgrades": "Automatic upgrades",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Выкарыстаньне працэсара",
|
||||
"Changelog": "Сьпіс зьменаў",
|
||||
"Close": "Зачыніць",
|
||||
"Command": "Command",
|
||||
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "Сьцісканьне рэкамэндавана ў большасьці выпадкаў.",
|
||||
"Connection Error": "Connection Error",
|
||||
"Copied from elsewhere": "Copied from elsewhere",
|
||||
"Copied from original": "Copied from original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg and the following Contributors:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Выдаліць",
|
||||
"Device ID": "ID прылады",
|
||||
"Device Identification": "Ідэнтыфікацыя прылады",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.",
|
||||
"Enter ignore patterns, one per line.": "Enter ignore patterns, one per line.",
|
||||
"Error": "Error",
|
||||
"External File Versioning": "External File Versioning",
|
||||
"File Versioning": "Вэрсіі файлаў",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "File permission bits are ignored when looking for changes. Use on FAT filesystems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
|
||||
"Folder ID": "ID каталёгу",
|
||||
"Folder Master": "Folder Master",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Глябальнае вызначэньне",
|
||||
"Global Discovery Server": "Сэрвер глябальнага вызначэньня",
|
||||
"Global State": "Глябальны стан",
|
||||
"Idle": "Idle",
|
||||
"Ignore": "Ignore",
|
||||
"Ignore Patterns": "Ігнараваць шаблёны",
|
||||
"Ignore Permissions": "Ігнараваць правы",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
|
||||
"Keep Versions": "Трымаць вэрсій",
|
||||
"Last File Received": "Апошні атрыманы файл",
|
||||
"Last File Synced": "Апошні сынхранізаваны файл",
|
||||
"Last seen": "Апошні раз бачылі",
|
||||
"Later": "Later",
|
||||
"Latest Release": "Latest Release",
|
||||
"Local Discovery": "Лякальнае вызначэньне",
|
||||
"Local State": "Лякальны стан",
|
||||
"Maximum Age": "Maximum Age",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Notice",
|
||||
"OK": "Добра",
|
||||
"Off": "Off",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Несынхранізавана",
|
||||
"Out of Sync Items": "Несынхранізаваныя складнікі",
|
||||
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
|
||||
@@ -114,7 +111,7 @@
|
||||
"Share Folders With Device": "Share Folders With Device",
|
||||
"Share With Devices": "Абагуліць з прыладамі",
|
||||
"Share this folder?": "Абагуліць гэты каталёг ?",
|
||||
"Shared With": "Абагульны з",
|
||||
"Shared With": "Абагулены з",
|
||||
"Short identifier for the folder. Must be the same on all cluster devices.": "Short identifier for the folder. Must be the same on all cluster devices.",
|
||||
"Show ID": "Паказаць ID",
|
||||
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.",
|
||||
@@ -126,24 +123,23 @@
|
||||
"Source Code": "Зыходнікі",
|
||||
"Staggered File Versioning": "Адаптыўнае захоўваньне вэрсій",
|
||||
"Start Browser": "Start Browser",
|
||||
"Stopped": "Stopped",
|
||||
"Stopped": "Спынена",
|
||||
"Support": "Падтрымка",
|
||||
"Support / Forum": "Падтрымка / Форум",
|
||||
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
|
||||
"Synchronization": "Сынхранізацыя",
|
||||
"Syncing": "Сынхранізуецца",
|
||||
"Syncthing has been shut down.": "Syncthing has been shut down.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
|
||||
"Syncthing is restarting.": "Syncthing перастартоўвае.",
|
||||
"Syncthing is upgrading.": "Syncthing is upgrading.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
|
||||
"The device ID cannot be blank.": "The device ID cannot be blank.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "The folder ID cannot be blank.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.",
|
||||
"The folder ID must be unique.": "The folder ID must be unique.",
|
||||
@@ -153,16 +149,15 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
|
||||
"The number of old versions to keep, per file.": "Колькі старых вэрсій трымаць, для кожнага файлу.",
|
||||
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
|
||||
"The path cannot be blank.": "The path cannot be blank.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
|
||||
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
|
||||
"Unknown": "Unknown",
|
||||
"Unknown": "Невядома",
|
||||
"Unshared": "Unshared",
|
||||
"Unused": "Unused",
|
||||
"Up to Date": "Найноўшае",
|
||||
"Upgrade To {%version%}": "Upgrade To {{version}}",
|
||||
"Upgrading": "Абнаўленьне",
|
||||
"Upload Rate": "Хуткасьць запампоўваньня",
|
||||
"Use Compression": "Выкарыстоўваць сьцісканьне",
|
||||
"Use HTTPS for GUI": "Use HTTPS for GUI",
|
||||
"Version": "Вэрсія",
|
||||
"Versions Path": "Versions Path",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Адреси",
|
||||
"All Data": "Всички данни",
|
||||
"Allow Anonymous Usage Reporting?": "Разреши анонимен доклад за ползване на програмата?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Друга команда се занимава с версиите. Тази команда трябва да премахни файла от синхронизираната папка.",
|
||||
"Anonymous Usage Reporting": "Анонимен Доклад",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Устройства настроени на introducer компютъра също ще бъдат добавени към този компютър.",
|
||||
"Automatic upgrades": "Автоматични ъпдейти",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Натоварване на Процесора",
|
||||
"Changelog": "Сипъск с промени",
|
||||
"Close": "Затвори",
|
||||
"Command": "Команда",
|
||||
"Comment, when used at the start of a line": "Коментар, използван в началото на реда",
|
||||
"Compression": "Компресия",
|
||||
"Compression is recommended in most setups.": "Компресията е препоръчителна в повечето конфигурации.",
|
||||
"Connection Error": "Грешка при Свързването",
|
||||
"Copied from elsewhere": "Копиране от някъде другаде",
|
||||
"Copied from original": "Копиран от оригинала",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Правата запазени © 2014 Jakob Borg и следните Сътрудници:",
|
||||
"Copyright © 2015 the following Contributors:": "Правата запазени © 2015 Сътрудници:",
|
||||
"Delete": "Изтрий",
|
||||
"Device ID": "Идентификатор на устройство",
|
||||
"Device Identification": "Идентификация на устройство",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Въведи \"ip:port\" адреси разделени със запетая или \"dynamic\", за да извършиш автоматична връзка на адреси.",
|
||||
"Enter ignore patterns, one per line.": "Добави шаблони за игнориране, по един на ред.",
|
||||
"Error": "Грешка",
|
||||
"External File Versioning": "Външно упраление на версиите",
|
||||
"File Versioning": "Файлови Версии",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други устройства, но промени направени на това устройство ще бъдат синхронизирани с другите устройства.",
|
||||
"Folder ID": "Идентификатор на папка",
|
||||
"Folder Master": "Главна папка",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Глобавно Откриване",
|
||||
"Global Discovery Server": "Сървър за Глобално Откриване",
|
||||
"Global State": "Глобално състояние",
|
||||
"Idle": "Без Работа",
|
||||
"Ignore": "Игнорирай",
|
||||
"Ignore Patterns": "Шаблони за Игнориране",
|
||||
"Ignore Permissions": "Игнорирай Права за Достъп",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Обратното на даденото условие (пр. не изключвай)",
|
||||
"Keep Versions": "Пази Версии",
|
||||
"Last File Received": "Последния получен файл",
|
||||
"Last File Synced": "Последния синхронизиран файл",
|
||||
"Last seen": "Последно видян",
|
||||
"Later": "По-късно",
|
||||
"Latest Release": "Най-новата Версия",
|
||||
"Local Discovery": "Локално Откриване",
|
||||
"Local State": "Локално състояние",
|
||||
"Maximum Age": "Максимална Възраст",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Известие",
|
||||
"OK": "ОК",
|
||||
"Off": "Изключено",
|
||||
"Offline": "Не е на линия",
|
||||
"Online": "На линия",
|
||||
"Out Of Sync": "Не Синхронизиран",
|
||||
"Out of Sync Items": "Несинхронизирани елементи",
|
||||
"Outgoing Rate Limit (KiB/s)": "Лимит на Изходящата Скорост (KiB/s)",
|
||||
@@ -128,9 +125,7 @@
|
||||
"Start Browser": "Стартирай Браузъра",
|
||||
"Stopped": "Спряна",
|
||||
"Support": "Помощ",
|
||||
"Support / Forum": "Помощ / Форум",
|
||||
"Sync Protocol Listen Addresses": "Адрес за слушане на синхронизиращия протокол",
|
||||
"Synchronization": "Синхронизация",
|
||||
"Syncing": "Синхронизиране",
|
||||
"Syncthing has been shut down.": "Syncthing е спрян.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing включва следният софтуер пълно или частично:",
|
||||
@@ -144,6 +139,7 @@
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Идентификатор на устройство за въвеждане тук, може да бъде намерен в \"Промени > Покажи Идентификатора\". Интервалите и тиретата са пожелание(биват прескачани).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Криптираният доклад се изпраща дневно. Използва се, за да следи общи платформи, размери на папки и версии на приложението. Ако събираните данни се променят, ще бъдете информиран с подобен на този диалог.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Въведни идентификатор на устройство не е валиден. Трябва да бъде 52 или 56 символа и да се състои от букви и цифри, като интервалите и тиретата са пожелание.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Първият параметър за командата е пътя до папката, а вторият е релативния път в самата папка.",
|
||||
"The folder ID cannot be blank.": "Полето идентификатор на папка неможе да бъде празно.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Идентификаторът на папка трябва да бъде къс (64 символа или по-малко) състоящ се само от букви, цифри, точка(.), тире(-) и подчерта (_).",
|
||||
"The folder ID must be unique.": "Идентификаторът на папката тряба да бъде уникален.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максималното време да се пазят весрсии (в дни, сложи 0, за да пазиш версии завинаги).",
|
||||
"The number of old versions to keep, per file.": "Броят стари версии, които да бъдат пазени за всеки файл.",
|
||||
"The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
|
||||
"The path cannot be blank.": "Пътят неможе да бъде празен.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Интервала на сканиране трябва да бъде не отрицателно число в секунди.",
|
||||
"The rescan interval must be at least 5 seconds.": "Интервала за повторно сканиране трябва да бъде поне 5 секунди.",
|
||||
"Unknown": "Неясен",
|
||||
"Unshared": "Споделянето прекратено",
|
||||
"Unused": "Неизползван",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Обновен До {{version}}",
|
||||
"Upgrading": "Обновяване",
|
||||
"Upload Rate": "Скорост на Качване",
|
||||
"Use Compression": "Използвай Компресиране",
|
||||
"Use HTTPS for GUI": "Използвай HTTPS за Потребителския Интерфейс",
|
||||
"Version": "Версия",
|
||||
"Versions Path": "Път до Версиите",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adreces",
|
||||
"All Data": "All Data",
|
||||
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Informe anònim d'ús",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Qualsevol dispositiu configurat com a dispositiu introductor s'afegirà a aquest dispositiu tambè.",
|
||||
"Automatic upgrades": "Actualitzacions automàtiques",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Utilització del CPU",
|
||||
"Changelog": "Historial de canvis",
|
||||
"Close": "Tancar",
|
||||
"Command": "Command",
|
||||
"Comment, when used at the start of a line": "Comentari quan és usat al principi d'una línia",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "Generalment, la compressió és recomanada.",
|
||||
"Connection Error": "Error de connexió",
|
||||
"Copied from elsewhere": "Copiat d'un altre lloc",
|
||||
"Copied from original": "Copiat de l'original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Drets d'autor © 2014 Jakob Borg i els següents contribuïdors:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Esborrar",
|
||||
"Device ID": "ID del dispositiu",
|
||||
"Device Identification": "Identificació del dispositiu",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
|
||||
"Enter ignore patterns, one per line.": "Introduïx els patrons d'ignoració, un per línia.",
|
||||
"Error": "Error",
|
||||
"External File Versioning": "External File Versioning",
|
||||
"File Versioning": "Versionat de Fitxers",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Els bits de permisos dels fitxers son ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan son substituïts o esborrats per syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres dispositius, però els canvis fets en aquest dispositiu seran enviats a la resta del cluster.",
|
||||
"Folder ID": "ID de carpeta",
|
||||
"Folder Master": "Carpeta mestre",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Descobriment Global",
|
||||
"Global Discovery Server": "Servidor de Descobriment Global",
|
||||
"Global State": "Estat global",
|
||||
"Idle": "Inactiu",
|
||||
"Ignore": "Ignorar",
|
||||
"Ignore Patterns": "Patrons d'ignoració",
|
||||
"Ignore Permissions": "Ignora Permisos",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversió del patrò introduït",
|
||||
"Keep Versions": "Mantenir Versions",
|
||||
"Last File Received": "Últim fitxer rebut",
|
||||
"Last File Synced": "Últim fitxer sincronitzat",
|
||||
"Last seen": "Vist per última vegada",
|
||||
"Later": "Després",
|
||||
"Latest Release": "Última publicació",
|
||||
"Local Discovery": "Descobriment Local",
|
||||
"Local State": "Estat local",
|
||||
"Maximum Age": "Antiguitat Màxima",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Avís",
|
||||
"OK": "OK",
|
||||
"Off": "Off",
|
||||
"Offline": "Desconnectat",
|
||||
"Online": "Connectat",
|
||||
"Out Of Sync": "Fora de la Sincronització",
|
||||
"Out of Sync Items": "Arxius encara no sincronitzats",
|
||||
"Outgoing Rate Limit (KiB/s)": "Tasca Límit de Sortida (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Arrancar Navegador",
|
||||
"Stopped": "Aturat",
|
||||
"Support": "Suport",
|
||||
"Support / Forum": "Suport / Fòrum",
|
||||
"Sync Protocol Listen Addresses": "Adreça d'escolta del Protocol Sync",
|
||||
"Synchronization": "Sincronització",
|
||||
"Syncing": "Synthing",
|
||||
"Syncthing has been shut down.": "S'ha aturat el synthing.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts dels mateixos:",
|
||||
"Syncthing is restarting.": "Reiniciant syncthing.",
|
||||
"Syncthing is upgrading.": "Actualitzant syncthing.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Synthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing sembla estar experimentant un problema en processar la teva sol·licitud. Si us plau, recarregeu el navegador o reinicieu Syncthing si el problema persisteix.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardar però no s'ha activat. S'ha de reiniciar el synthing per activar la nova configuració.",
|
||||
"The device ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "El ID del dispositiu per introduir ací es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre dispositiu. Els espais i les barres son opcionals (s'ignoren).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de carpetes i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del dispositiu introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números, els espais i les barres son opcionals.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "El ID de la carpeta ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), barra (-) i barra baixa (_).",
|
||||
"The folder ID must be unique.": "El ID de la carpeta ha de ser únic.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
|
||||
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
|
||||
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
|
||||
"The path cannot be blank.": "The path cannot be blank.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "El interval de re-escaneig ha der ser un nombre positiu de segons.",
|
||||
"The rescan interval must be at least 5 seconds.": "El interval de re-escaneig ha de ser com a mínim de 5 segons.",
|
||||
"Unknown": "Desconegut",
|
||||
"Unshared": "No compartit",
|
||||
"Unused": "No usat",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Actualitzar a {{version}}",
|
||||
"Upgrading": "Actualitzant",
|
||||
"Upload Rate": "Tasca de Pujada",
|
||||
"Use Compression": "Utilitza compressió",
|
||||
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
|
||||
"Version": "Versió",
|
||||
"Versions Path": "Carpeta de les Versions",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adresy",
|
||||
"All Data": "Všechna data",
|
||||
"Allow Anonymous Usage Reporting?": "Povolit anonymní hlášení o používání?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Verzování obstarává externí příkaz. Musí odstranit soubor ze sdíleného adresáře.",
|
||||
"Anonymous Usage Reporting": "Anonymní hlášení o používání",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Jakékoliv přístroje nakonfigurované na zavaděči budou přidány také na tento přístroj.",
|
||||
"Automatic upgrades": "Automatický upgrade",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Využití CPU",
|
||||
"Changelog": "Changelog",
|
||||
"Close": "Zavřít",
|
||||
"Command": "Příkaz",
|
||||
"Comment, when used at the start of a line": "Komentář, pokud použito na začátku řádku",
|
||||
"Compression": "Komprese",
|
||||
"Compression is recommended in most setups.": "Komprese je doporučena pro většinu nastavení.",
|
||||
"Connection Error": "Chyba připojení",
|
||||
"Copied from elsewhere": "Zkopírováno odjinud",
|
||||
"Copied from original": "Zkopírováno z originálu",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg a následující přispěvatelé:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 následující přispěvatelé:",
|
||||
"Delete": "Smazat",
|
||||
"Device ID": "ID přístroje",
|
||||
"Device Identification": "Identifikace přístroje",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Vlož čárkou oddělené adresy \"ip:port\" nebo \"dynamic\" (pro automatické zjišťování adres). ",
|
||||
"Enter ignore patterns, one per line.": "Vložit ignorované vzory, jeden na řádek.",
|
||||
"Error": "Chyba",
|
||||
"File Versioning": "Verze souborů",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Bity označující práva souborů jsou při hledání změn ignorovány. Použít pro souborové systémy FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Po nahrazení nebo smazání aplikací Syncthing jsou soubory přesunuty do verzí označených daty v adresáři .stversions.",
|
||||
"External File Versioning": "Externí verzování souborů",
|
||||
"File Versioning": "Verzování souborů",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Bity označující práva souborů jsou při hledání změn ignorovány. Použít pro souborové systémy FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Po nahrazení nebo smazání aplikací Syncthing jsou soubory přesunuty do verzí označených daty v adresáři .stversions.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Soubory jsou chráněny před změnami na ostatních přístrojích, ale změny provedené z tohoto přístroje budou rozeslány na zbytek clusteru.",
|
||||
"Folder ID": "ID adresáře",
|
||||
"Folder Master": "Master adresář",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Globální oznamování",
|
||||
"Global Discovery Server": "Server globálního oznamování",
|
||||
"Global State": "Všeobecný status",
|
||||
"Idle": "Nečinný",
|
||||
"Ignore": "Ignorovat",
|
||||
"Ignore Patterns": "Ignorované vzory",
|
||||
"Ignore Permissions": "Ignorovat oprávnění",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Prohození zadané podmínky (např. nevynechat)",
|
||||
"Keep Versions": "Ponechat verze",
|
||||
"Last File Received": "Poslední přijatý soubor",
|
||||
"Last File Synced": "Poslední synchronizovaný soubor",
|
||||
"Last seen": "Naposledy spatřen",
|
||||
"Later": "Později",
|
||||
"Latest Release": "Poslední vydání",
|
||||
"Local Discovery": "Místní oznamování",
|
||||
"Local State": "Místní status",
|
||||
"Maximum Age": "Maximální časový limit",
|
||||
@@ -80,12 +79,10 @@
|
||||
"New Device": "Nový přístroj",
|
||||
"New Folder": "Nový adresář",
|
||||
"No": "Ne",
|
||||
"No File Versioning": "Bez verzí souborů",
|
||||
"No File Versioning": "Bez verzování souborů",
|
||||
"Notice": "Oznámení",
|
||||
"OK": "OK",
|
||||
"Off": "Vypnuta",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Nesesynchronizováno",
|
||||
"Out of Sync Items": "Nesesynchronizované položky",
|
||||
"Outgoing Rate Limit (KiB/s)": "Omezení odchozí rychlosti (KiB/s)",
|
||||
@@ -121,29 +118,28 @@
|
||||
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Zobrazeno místo ID přístroje na náhledu stavu clusteru. Pokud nebude vyplněno, bude nastaveno na jméno, které přístroj odesílá.",
|
||||
"Shutdown": "Vypnout",
|
||||
"Shutdown Complete": "Vypnutí dokončeno",
|
||||
"Simple File Versioning": "Jednoduché verze souborů",
|
||||
"Simple File Versioning": "Jednoduché verzování souborů",
|
||||
"Single level wildcard (matches within a directory only)": "Jednoúrovňový zástupný znak (shody pouze uvnitř adresáře)",
|
||||
"Source Code": "Zdrojový kód",
|
||||
"Staggered File Versioning": "Vícenásobné verze souborů",
|
||||
"Staggered File Versioning": "Postupné verzování souborů",
|
||||
"Start Browser": "Otevřít prohlížeč",
|
||||
"Stopped": "Pozastaveno",
|
||||
"Support": "Podpora",
|
||||
"Support / Forum": "Podpora / Fórum",
|
||||
"Sync Protocol Listen Addresses": "Adresa naslouchání synchronizačního protokolu",
|
||||
"Synchronization": "Synchronizace",
|
||||
"Syncing": "Synchronizuje se",
|
||||
"Syncthing has been shut down.": "Syncthing byl vypnut.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing obsahuje následující software nebo jejich část:",
|
||||
"Syncthing is restarting.": "Syncthing se restartuje.",
|
||||
"Syncthing is upgrading.": "Syncthing se aktualizuje.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing se zdá být nefunkční, nebo je problém s připojením k Internetu. Opakuji...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing má nejspíše problém s provedením vašeho požadavku. Pokud problém přetrvává, načtěte znovu data v prohlížeči nebo restartujte Syncthing.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing má nejspíše problém s provedením vašeho požadavku. Pokud problém přetrvává, obnovte stránku v prohlížeči nebo restartujte Syncthing.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Souhrnné statistiky jsou veřejně dostupné na {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfigurace byla uložena, ale není aktivována. Pro aktivaci nové konfigurace je třeba restartovat Syncthing.",
|
||||
"The device ID cannot be blank.": "ID přístroje nemůže být prázdné.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "ID přístroje, které je třeba vložit, lze nalézt v dialogu \"Upravit > Zobrazit ID\" na druhém přístroji. Mezery a pomlčky nejsou nutné (budou ignorovány).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Šifrovaná data o využití jsou zasílána denně. Jsou používána pro zjištění nejobvyklejších platforem, velikosti adresářů a verzí aplikace. Pokud se hlášená data změní, budete opět upozorněni tímto dialogem.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Zadané ID přístroje není platné. Mělo by mít 52 nebo 56 znaků a mělo by obsahovat písmena a čísla. Mezery a pomlčky jsou nepovinné.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "První parametr příkazové řádky je cesta k adresáři, druhý je relativní cesta v témže adresáři.",
|
||||
"The folder ID cannot be blank.": "ID adresáře nemůže být prázdné.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "ID adresáře musí být stručný popisek (64 znaků nebo méně) obsahující pouze písmena, čísla, tečku (.), pomlčku (-) a nebo podtržítko (_).",
|
||||
"The folder ID must be unique.": "ID adresáře musí být unikátní.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maximální doba pro zachování verze (dny, zapsáním hodnoty 0 bude ponecháno navždy).",
|
||||
"The number of old versions to keep, per file.": "Počet starších verzí k zachování pro každý soubor.",
|
||||
"The number of versions must be a number and cannot be blank.": "Počet verzí musí být číslo a nemůže být prázdné.",
|
||||
"The path cannot be blank.": "Cesta nesmí být prázdná.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Interval opakování skenování musí být pozitivní číslo.",
|
||||
"The rescan interval must be at least 5 seconds.": "Interval opakování skenování musí být delší než 5 sekund.",
|
||||
"Unknown": "Neznámý",
|
||||
"Unshared": "Nesdílený",
|
||||
"Unused": "Nepoužitý",
|
||||
@@ -162,10 +158,9 @@
|
||||
"Upgrade To {%version%}": "Aktualizovat na {{version}}",
|
||||
"Upgrading": "Aktualizuji",
|
||||
"Upload Rate": "Rychlost odesílání",
|
||||
"Use Compression": "Použít kompresi",
|
||||
"Use HTTPS for GUI": "Použít HTTPS pro grafické rozhraní",
|
||||
"Version": "Verze",
|
||||
"Versions Path": "Cesta verzí",
|
||||
"Versions Path": "Cesta k verzím",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Verze jsou automaticky smazány, pokud jsou starší než maximální časový limit nebo překročí počet souborů povolených pro interval.",
|
||||
"When adding a new device, keep in mind that this device must be added on the other side too.": "Při přidávání nového přístroje mějte na paměti, že je ho třeba také zadat na druhé straně.",
|
||||
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Při přidávání nového adresáře mějte na paměti, že jeho ID je použito ke svázání adresářů napříč přístoji. Rozlišují se malá a velká písmena a musí přesně souhlasit mezi všemi přístroji.",
|
||||
|
||||
@@ -9,20 +9,21 @@
|
||||
"Addresses": "Adressen",
|
||||
"All Data": "Alle Daten",
|
||||
"Allow Anonymous Usage Reporting?": "Übertragung von anonymen Nutzungsberichten erlauben?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Ein externer Programmaufruf handhabt die Versionierung. Es muss die Datei aus dem zu synchronisierendem Ordner entfernen.",
|
||||
"Anonymous Usage Reporting": "Anonymer Nutzungsbericht",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Alle Geräte, die beim Verteiler eingetragen sind, werden auch bei diesem Gerät eingetragen",
|
||||
"Automatic upgrades": "automatische Updates",
|
||||
"Bugs": "Fehler",
|
||||
"CPU Utilization": "Prozessorauslastung",
|
||||
"Changelog": "Versionsinfo",
|
||||
"Changelog": "Änderungsprotokoll",
|
||||
"Close": "Schließen",
|
||||
"Command": "Kommando",
|
||||
"Comment, when used at the start of a line": "Kommentar, wenn am Anfang der Zeile benutzt.",
|
||||
"Compression": "Komprimierung",
|
||||
"Compression is recommended in most setups.": "Datenkomprimierung ist für die meisten Anwendungen empfohlen",
|
||||
"Connection Error": "Verbindungsfehler",
|
||||
"Copied from elsewhere": "Von woanders kopiert",
|
||||
"Copied from original": "Vom Originial kopiert",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg und folgende Unterstützer:",
|
||||
"Copied from original": "Vom Original kopiert",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 die folgenden Unterstützer:",
|
||||
"Delete": "Löschen",
|
||||
"Device ID": "Geräte ID",
|
||||
"Device Identification": "Gerät Identifikation",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Trage durch ein Komma getrennte \"IP:Port\" Adressen oder \"dynamic\" ein um automatische Adresserkennung durchzuführen.",
|
||||
"Enter ignore patterns, one per line.": "Geben Sie Ignoriermuster ein, eines pro Zeile.",
|
||||
"Error": "Fehler",
|
||||
"External File Versioning": "Externe Dateiversionierung",
|
||||
"File Versioning": "Dateiversionierung",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Dateizugriffsrechte beim Suchen nach Veränderungen ignorieren. Bei FAT-Dateisystemen verwenden.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Dateien werden, bevor Syncthing sie löscht oder ersetzt, als datierte Versionen in ein Verzeichnis names .stversions verschoben.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Dateizugriffsrechte beim Suchen nach Veränderungen ignorieren. Bei FAT-Dateisystemen zu verwenden.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Dateien werden, bevor Syncthing sie löscht oder ersetzt, als datierte Versionen in einen Ordner namens .stversions verschoben.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Dateien sind vor Veränderung durch andere Geräte geschützt, auf diesem Gerät durchgeführte Veränderungen werden aber auf den Rest des Verbunds übertragen.",
|
||||
"Folder ID": "Verzeichnis ID",
|
||||
"Folder Master": "Keine Veränderungen zulassen",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Globale Auffindung",
|
||||
"Global Discovery Server": "Globaler Auffindungsserver",
|
||||
"Global State": "Globaler Status",
|
||||
"Idle": "Untätig",
|
||||
"Ignore": "Ignorieren",
|
||||
"Ignore Patterns": "Ignoriermuster",
|
||||
"Ignore Permissions": "Berechtigungen ignorieren",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Umkehrung der angegebenen Bedingung (z.B. schließe nicht aus)",
|
||||
"Keep Versions": "Versionen erhalten",
|
||||
"Last File Received": "Letzte Datei",
|
||||
"Last File Synced": "Letzte Änderung",
|
||||
"Last seen": "Zuletzt online",
|
||||
"Later": "Später",
|
||||
"Latest Release": "Letzte Veröffentlichung",
|
||||
"Local Discovery": "Lokale Auffindung",
|
||||
"Local State": "Lokaler Status",
|
||||
"Maximum Age": "Höchstalter",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Hinweis",
|
||||
"OK": "OK",
|
||||
"Off": "Aus",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Nicht synchronisiert",
|
||||
"Out of Sync Items": "Nicht synchronisierte Objekte",
|
||||
"Outgoing Rate Limit (KiB/s)": "Ausgehendes Datenratelimit (KiB/s)",
|
||||
@@ -128,9 +125,7 @@
|
||||
"Start Browser": "Browser starten",
|
||||
"Stopped": "Gestoppt",
|
||||
"Support": "Support",
|
||||
"Support / Forum": "Support / Forum",
|
||||
"Sync Protocol Listen Addresses": "Adresse(n) für das Synchronisierungsprotokoll",
|
||||
"Synchronization": "Synchronisierung",
|
||||
"Syncing": "Synchronisiere",
|
||||
"Syncthing has been shut down.": "Syncthing wurde heruntergefahren.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing enthält die folgende Software oder Teile davon:",
|
||||
@@ -144,6 +139,7 @@
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Die hier einzutragende Geräte ID kann im \"Bearbeiten > Zeige ID\"-Dialog auf dem anderen Gerät gefunden werden. Leerzeichen und Bindestriche sind optional (werden ignoriert).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Der verschlüsselte Nutzungsbericht wird täglich gesendet. Er wird benutzt um Statistiken über verwendete Betriebssysteme, Verzeichnis-Größen und Programm-Versionen zu erstellen. Sollte der Bericht in Zukunft weitere Daten erfassen, wird dieses Fenster erneut angezeigt.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Die eingegebene Geräte ID scheint nicht gültig zu sein. Es sollte eine 52 oder 56 stellige Zeichenkette aus Buchstaben und Nummern sein. Leerzeichen und Bindestriche sind optional.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Der erste Kommandozeilenparameter ist der Verzeichnis-Pfad und der zweite Parameter ist der relative Pfad in diesem Ordner.",
|
||||
"The folder ID cannot be blank.": "Die Verzeichnis ID darf nicht leer sein.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Die Verzeichnis ID muss eine kurze Kennung (64 Zeichen oder weniger) sein. Sie kann nur aus Buchstaben, Zahlen und dem Punkt- (.), Bindestrich- (-), und Unterstrich- (_) Zeichen bestehen.",
|
||||
"The folder ID must be unique.": "Die Verzeichnis ID muss eindeutig sein.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Die längste Zeit, die alte Versionen vorgehalten werden (in Tagen, 0 bedeutet, alte Versionen für immer zu behalten).",
|
||||
"The number of old versions to keep, per file.": "Anzahl der alten Versionen, die von jeder Datei gespeichert werden sollen.",
|
||||
"The number of versions must be a number and cannot be blank.": "Die Anzahl von Versionen muss eine Zahl und darf nicht leer sein.",
|
||||
"The path cannot be blank.": "Der Pfad darf nicht leer sein",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Das Suchintervall muss eine nicht negative Anzahl von Sekunden sein.",
|
||||
"The rescan interval must be at least 5 seconds.": "Das Suchintervall muss mindestens 5 Sekunden betragen.",
|
||||
"Unknown": "Unbekannt",
|
||||
"Unshared": "Ungeteilt",
|
||||
"Unused": "Ungenutzt",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Update auf {{version}}",
|
||||
"Upgrading": "Wird aktualisiert",
|
||||
"Upload Rate": "Upload",
|
||||
"Use Compression": "Benutze Komprimierung",
|
||||
"Use HTTPS for GUI": "HTTPS für Benutzeroberfläche benutzen",
|
||||
"Version": "Version",
|
||||
"Versions Path": "Versionierungspfad",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Διευθύνσεις",
|
||||
"All Data": "Όλα τα δεδομένα",
|
||||
"Allow Anonymous Usage Reporting?": "Να επιτρέπεται η αποστολή ανώνυμων στοιχείων χρήσης;",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Μια εξωτερική εντολή χειρίζεται την διαχείριση εκδόσεων. Πρέπει να καταργήσετε το αρχείο από το φάκελο συγχρονισμένων.",
|
||||
"Anonymous Usage Reporting": "Ανώνυμα στοιχεία χρήσης",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Αν δηλωθεί σαν «βασικός κόμβος», τότε όλες οι συσκευές που είναι δηλωμένες εκεί θα υπάρχουν και στον τοπικό κόμβο.",
|
||||
"Automatic upgrades": "Αυτόματη αναβάθμιση",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Επιβάρυνση του επεξεργαστή",
|
||||
"Changelog": "Πληροφορίες εκδόσεων",
|
||||
"Close": "Τέλος",
|
||||
"Command": "Εντολή",
|
||||
"Comment, when used at the start of a line": "Σχόλιο, όταν χρησιμοποιείται στην αρχή μιας γραμμής",
|
||||
"Compression": "Συμπίεση",
|
||||
"Compression is recommended in most setups.": "Η συμπίεση προτείνεται στις περισσότερες εγκαταστάσεις",
|
||||
"Connection Error": "Σφάλμα σύνδεσης",
|
||||
"Copied from elsewhere": "Έχει αντιγραφεί από κάπου αλλού",
|
||||
"Copied from original": "Έχει αντιγραφεί από το πρωτότυπο",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 από τον Jakob Borg και τους παρακάτω συνεισφορείς:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 από τους παρακάτω συνεισφορείς:",
|
||||
"Delete": "Διαγραφή",
|
||||
"Device ID": "Ταυτότητα συσκευής",
|
||||
"Device Identification": "Ταυτότητα συσκευής",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Γράψε τις διευθύνσεις IP με τη μορφή «ip:θύρα», διαχωρισμένες με κόμμα. Αλλιώς γράψε «dynamic» για να πραγματοποιηθεί η αυτόματη εύρεση διευθύνσεων.",
|
||||
"Enter ignore patterns, one per line.": "Δώσε τα πρότυπα που θα αγνοηθούν, ένα σε κάθε γραμμή.",
|
||||
"Error": "Σφάλμα",
|
||||
"External File Versioning": "Εξωτερική διαχείριση εκδόσεων Αρχείου",
|
||||
"File Versioning": "Τήρηση εκδόσεων",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Να αγνοούνται τα δικαιώματα των αρχείων όταν γίνεται αναζήτηση για αλλαγές. Χρησιμοποιείται σε συστήματα αρχείων τύπου FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Όταν τα αρχεία αντικατασταθούν ή διαγραφούν από το syncthing, τότε να μεταφέρονται χρονοσημασμένα σε φάκελο με το όνομα .stversions.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Τα bit αδεια αρχεία αγνοούνται όταν ψάχνουν για αλλαγές. Χρήση σε συστήματα αρχείων FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Τα αρχεία που μετακινηθηκαν στην ημερομηνία που αναγράφεται τις εκδόσεις σε ένα φάκελο .stversions όταν αντικατασταθούν ή να διαγραφθούν από το Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Τα αρχεία προστατεύονται από αλλαγές που γίνονται σε άλλες συσκευές, αλλά όποιες αλλαγές γίνουν σε αυτή τη συσκευή θα αποσταλούν σε όλη τη συστάδα συσκευών.",
|
||||
"Folder ID": "Ταυτότητα φακέλου",
|
||||
"Folder Master": "Να μην επιτρέπονται αλλαγές",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Να μπορεί να ανευρεθεί καθολικά (από παντού)",
|
||||
"Global Discovery Server": "Διακομιστής καθολικής ανεύρεσης κόμβου",
|
||||
"Global State": "Καθολική κατάσταση",
|
||||
"Idle": "Ανενεργό",
|
||||
"Ignore": "Αγνόησε",
|
||||
"Ignore Patterns": "Πρότυπο για αγνόηση",
|
||||
"Ignore Permissions": "Αγνόησε τα δικαιώματα",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Αντιστροφή της δοσμένης συνθήκης (π.χ. να μην εξαιρείς) ",
|
||||
"Keep Versions": "Διατήρηση εκδόσεων",
|
||||
"Last File Received": "Πιο πρόσφατο αρχείο",
|
||||
"Last File Synced": "Τελευταία αλλαγή",
|
||||
"Last seen": "Τελευταία φορά συνδεδεμένος",
|
||||
"Later": "Αργότερα",
|
||||
"Latest Release": "Τελευταία έκδοση",
|
||||
"Local Discovery": "Τοπική ανεύρεση",
|
||||
"Local State": "Τοπική κατάσταση",
|
||||
"Maximum Age": "Μέγιστη ηλικία",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Σημείωση",
|
||||
"OK": "OK",
|
||||
"Off": "Απενεργοποιημένο",
|
||||
"Offline": "Εκτός σύνδεσης",
|
||||
"Online": "Συνδεδεμένο",
|
||||
"Out Of Sync": "Μη συγχρονισμένα",
|
||||
"Out of Sync Items": "Μη συγχρονισμένα αντικείμενα",
|
||||
"Outgoing Rate Limit (KiB/s)": "Ρυθμός αποστολής (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Εκκίνηση προγράμματος περιήγησης",
|
||||
"Stopped": "Απενεργοποιημένο",
|
||||
"Support": "Υποστήριξη",
|
||||
"Support / Forum": "Υποστήριξη / Φόρουμ",
|
||||
"Sync Protocol Listen Addresses": "Διευθύνσεις για το πρωτόκολλο συγχρονισμού",
|
||||
"Synchronization": "Συγχρονισμός",
|
||||
"Syncing": "Συγχρονίζω",
|
||||
"Syncthing has been shut down.": "Το Syncthing έχει απενεργοποιηθεί.",
|
||||
"Syncthing includes the following software or portions thereof:": "Το Syncthing περιλαμβάνει τα παρακάτω λογισμικά ή μέρη αυτών:",
|
||||
"Syncthing is restarting.": "Το Syncthing επανεκκινείται.",
|
||||
"Syncthing is upgrading.": "Το Syncthing αναβαθμίζεται.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Το Syncthing φαίνεται πως είναι απενεργοποιημένο ή υπάρχει πρόβλημα στη σύνδεσή σου στο διαδίκτυο. Προσπαθώ πάλι…",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Φαίνεται πως το Syncthing έχει κάποιο πρόβλημα με αυτό που ζήτησες. Αν το πρόβλημα επιμείνει φόρτωσε ξανά τη σελίδα ή επανεκκίνησε το Syncthing.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Το Syncthing φαίνεται να αντιμετωπίζει ένα πρόβλημα κατά την επεξεργασία του αιτήματός σας. Παρακαλούμε ανανεώστε την σελίδα ή επανεκκινήστε το Syncthing εάν το πρόβλημα παραμένει.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Τα στατιστικά που έχουν συλλεγεί είναι δημόσια διαθέσιμα στο {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Οι ρυθμίσεις έχουν αποθηκευτεί αλλά δεν έχουν ενεργοποιηθεί. Πρέπει να επανεκκινήσεις το Syncthing για να ισχύσουν οι νέες ρυθμίσεις.",
|
||||
"The device ID cannot be blank.": "Η ταυτότητα της συσκευής δεν μπορεί να είναι κενή",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Η ταυτότητα της συσκευής που θα μπει εδώ βρίσκεται στο μενού «Επεξεργασία > Εμφάνιση ταυτότητας» στην άλλη συσκευή. Κενοί χαρακτήρες και παύλες είναι προαιρετικοί (απλά θα αγνοηθούν).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Η κρυπτογραφημένη αναφορά χρήσης στέλνεται καθημερινά. Χρησιμοποιείται για να παραχθούν στατιστικές για τα λειτουργικά συστήματα που χρησιμοποιούνται, τα μεγέθη των φακέλων και τις εκδόσεις των προγραμμάτων. Αν στο μέλλον συμπεριληφθούν και άλλα δεδομένα στην αναφορά χρήσης, τότε αυτό το παράθυρο θα εμφανιστεί ξανά.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Η ταυτότητα συσκευής που έδωσες δε φαίνεται έγκυρη. Θα πρέπει να είναι μια σειρά από 52 ή 56 χαρακτήρες (γράμματα και αριθμοί). Τα κενά και οι παύλες είναι προαιρετικά (αδιάφορα).",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Η πρώτη παράμετρος γραμμής εντολών είναι η διαδρομή του φακέλου και η δεύτερη παράμετρος είναι η σχετική διαδρομή στο φάκελο.",
|
||||
"The folder ID cannot be blank.": "Η ταυτότητα του φακέλου δεν μπορεί να είναι κενή.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Η ταυτότητα του φακέλου πρέπει να είναι ένα σύντομο αναγνωριστικό (το πολύ 64 χαρακτήρες). Μπορεί να αποτελείται από γράμματα, αριθμούς, την τελεία (.), την παύλα (-) και την κάτω παύλα (_).",
|
||||
"The folder ID must be unique.": "Η ταυτότητα του φακέλου πρέπει να είναι μοναδική.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Η μέγιστη ηλικία παλιότερων εκδόσεων (σε ημέρες, αν δώσεις 0 οι παλιότερες εκδόσεις θα διατηρούνται για πάντα).",
|
||||
"The number of old versions to keep, per file.": "Πόσες παλιότερες εκδόσεις θα διατηρούνται, ανά αρχείο.",
|
||||
"The number of versions must be a number and cannot be blank.": "Ο αριθμός εκδόσεων πρέπει να είναι αριθμός και σίγουρα όχι κενό.",
|
||||
"The path cannot be blank.": "Το μονοπάτι δεν μπορεί να είναι κενό.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Ο χρόνος επανελέγχου για αλλαγές είναι σε δευτερόλεπτα (δηλ. θετικός αριθμός).",
|
||||
"The rescan interval must be at least 5 seconds.": "Ο χρόνος επανελέγχου πρέπει να είναι πάνω από 5 δευτερόλεπτα.",
|
||||
"Unknown": "Άγνωστο",
|
||||
"Unshared": "Δε μοιράζεται",
|
||||
"Unused": "Δε χρησιμοποιείται",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Αναβάθμιση στην έκδοση {{version}}",
|
||||
"Upgrading": "Αναβάθμιση",
|
||||
"Upload Rate": "Ταχύτητα ανεβάσματος",
|
||||
"Use Compression": "Χρήση συμπίεσης",
|
||||
"Use HTTPS for GUI": "Χρήση HTTPS για τη διεπαφή",
|
||||
"Version": "Έκδοση",
|
||||
"Versions Path": "Φάκελος τήρησης εκδόσεων",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Addresses",
|
||||
"All Data": "All Data",
|
||||
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Any devices configured on an introducer device will be added to this device as well.",
|
||||
"Automatic upgrades": "Automatic upgrades",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "CPU Utilisation",
|
||||
"Changelog": "Changelog",
|
||||
"Close": "Close",
|
||||
"Command": "Command",
|
||||
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "Compression is recommended in most setups.",
|
||||
"Connection Error": "Connection Error",
|
||||
"Copied from elsewhere": "Copied from elsewhere",
|
||||
"Copied from original": "Copied from original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg and the following Contributors:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Delete",
|
||||
"Device ID": "Device ID",
|
||||
"Device Identification": "Device Identification",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.",
|
||||
"Enter ignore patterns, one per line.": "Enter ignore patterns, one per line.",
|
||||
"Error": "Error",
|
||||
"External File Versioning": "External File Versioning",
|
||||
"File Versioning": "File Versioning",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "File permission bits are ignored when looking for changes. Use on FAT filesystems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
|
||||
"Folder ID": "Folder ID",
|
||||
"Folder Master": "Folder Master",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Global Discovery",
|
||||
"Global Discovery Server": "Global Discovery Server",
|
||||
"Global State": "Global State",
|
||||
"Idle": "Idle",
|
||||
"Ignore": "Ignore",
|
||||
"Ignore Patterns": "Ignore Patterns",
|
||||
"Ignore Permissions": "Ignore Permissions",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
|
||||
"Keep Versions": "Keep Versions",
|
||||
"Last File Received": "Last File Received",
|
||||
"Last File Synced": "Last File Synced",
|
||||
"Last seen": "Last seen",
|
||||
"Later": "Later",
|
||||
"Latest Release": "Latest Release",
|
||||
"Local Discovery": "Local Discovery",
|
||||
"Local State": "Local State",
|
||||
"Maximum Age": "Maximum Age",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Notice",
|
||||
"OK": "OK",
|
||||
"Off": "Off",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Out of Sync",
|
||||
"Out of Sync Items": "Out of Sync Items",
|
||||
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Start Browser",
|
||||
"Stopped": "Stopped",
|
||||
"Support": "Support",
|
||||
"Support / Forum": "Support / Forum",
|
||||
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
|
||||
"Synchronization": "Synchronisation",
|
||||
"Syncing": "Syncing",
|
||||
"Syncthing has been shut down.": "Syncthing has been shut down.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
|
||||
"Syncthing is restarting.": "Syncthing is restarting.",
|
||||
"Syncthing is upgrading.": "Syncthing is upgrading.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
|
||||
"The device ID cannot be blank.": "The device ID cannot be blank.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "The folder ID cannot be blank.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.",
|
||||
"The folder ID must be unique.": "The folder ID must be unique.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
|
||||
"The number of old versions to keep, per file.": "The number of old versions to keep, per file.",
|
||||
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
|
||||
"The path cannot be blank.": "The path cannot be blank.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
|
||||
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
|
||||
"Unknown": "Unknown",
|
||||
"Unshared": "Unshared",
|
||||
"Unused": "Unused",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Upgrade to {{version}}",
|
||||
"Upgrading": "Upgrading",
|
||||
"Upload Rate": "Upload Rate",
|
||||
"Use Compression": "Use Compression",
|
||||
"Use HTTPS for GUI": "Use HTTPS for GUI",
|
||||
"Version": "Version",
|
||||
"Versions Path": "Versions Path",
|
||||
|
||||
@@ -10,6 +10,8 @@
|
||||
"Addresses": "Addresses",
|
||||
"All Data": "All Data",
|
||||
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
|
||||
"Alphabetic": "Alphabetic",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Any devices configured on an introducer device will be added to this device as well.",
|
||||
"Automatic upgrades": "Automatic upgrades",
|
||||
@@ -17,13 +19,13 @@
|
||||
"CPU Utilization": "CPU Utilization",
|
||||
"Changelog": "Changelog",
|
||||
"Close": "Close",
|
||||
"Command": "Command",
|
||||
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "Compression is recommended in most setups.",
|
||||
"Connection Error": "Connection Error",
|
||||
"Copied from elsewhere": "Copied from elsewhere",
|
||||
"Copied from original": "Copied from original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg and the following Contributors:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Delete",
|
||||
"Device ID": "Device ID",
|
||||
"Device Identification": "Device Identification",
|
||||
@@ -43,9 +45,11 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.",
|
||||
"Enter ignore patterns, one per line.": "Enter ignore patterns, one per line.",
|
||||
"Error": "Error",
|
||||
"External File Versioning": "External File Versioning",
|
||||
"File Pull Order": "File Pull Order",
|
||||
"File Versioning": "File Versioning",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "File permission bits are ignored when looking for changes. Use on FAT filesystems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
|
||||
"Folder ID": "Folder ID",
|
||||
"Folder Master": "Folder Master",
|
||||
@@ -58,7 +62,6 @@
|
||||
"Global Discovery": "Global Discovery",
|
||||
"Global Discovery Server": "Global Discovery Server",
|
||||
"Global State": "Global State",
|
||||
"Idle": "Idle",
|
||||
"Ignore": "Ignore",
|
||||
"Ignore Patterns": "Ignore Patterns",
|
||||
"Ignore Permissions": "Ignore Permissions",
|
||||
@@ -66,11 +69,10 @@
|
||||
"Introducer": "Introducer",
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
|
||||
"Keep Versions": "Keep Versions",
|
||||
"Largest First": "Largest First",
|
||||
"Last File Received": "Last File Received",
|
||||
"Last File Synced": "Last File Synced",
|
||||
"Last seen": "Last seen",
|
||||
"Later": "Later",
|
||||
"Latest Release": "Latest Release",
|
||||
"Local Discovery": "Local Discovery",
|
||||
"Local State": "Local State",
|
||||
"Major Upgrade": "Major Upgrade",
|
||||
@@ -81,13 +83,13 @@
|
||||
"Never": "Never",
|
||||
"New Device": "New Device",
|
||||
"New Folder": "New Folder",
|
||||
"Newest First": "Newest First",
|
||||
"No": "No",
|
||||
"No File Versioning": "No File Versioning",
|
||||
"Notice": "Notice",
|
||||
"OK": "OK",
|
||||
"Off": "Off",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Oldest First": "Oldest First",
|
||||
"Out Of Sync": "Out Of Sync",
|
||||
"Out of Sync Items": "Out of Sync Items",
|
||||
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
|
||||
@@ -100,6 +102,7 @@
|
||||
"Preview Usage Report": "Preview Usage Report",
|
||||
"Quick guide to supported patterns": "Quick guide to supported patterns",
|
||||
"RAM Utilization": "RAM Utilization",
|
||||
"Random": "Random",
|
||||
"Release Notes": "Release Notes",
|
||||
"Rescan": "Rescan",
|
||||
"Rescan All": "Rescan All",
|
||||
@@ -127,27 +130,27 @@
|
||||
"Shutdown Complete": "Shutdown Complete",
|
||||
"Simple File Versioning": "Simple File Versioning",
|
||||
"Single level wildcard (matches within a directory only)": "Single level wildcard (matches within a directory only)",
|
||||
"Smallest First": "Smallest First",
|
||||
"Source Code": "Source Code",
|
||||
"Staggered File Versioning": "Staggered File Versioning",
|
||||
"Start Browser": "Start Browser",
|
||||
"Stopped": "Stopped",
|
||||
"Support": "Support",
|
||||
"Support / Forum": "Support / Forum",
|
||||
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
|
||||
"Synchronization": "Synchronization",
|
||||
"Syncing": "Syncing",
|
||||
"Syncthing has been shut down.": "Syncthing has been shut down.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
|
||||
"Syncthing is restarting.": "Syncthing is restarting.",
|
||||
"Syncthing is upgrading.": "Syncthing is upgrading.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
|
||||
"The device ID cannot be blank.": "The device ID cannot be blank.",
|
||||
"The device ID to enter here can be found in the \"Edit \u003e Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Edit \u003e Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "The folder ID cannot be blank.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.",
|
||||
"The folder ID must be unique.": "The folder ID must be unique.",
|
||||
@@ -157,8 +160,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
|
||||
"The number of old versions to keep, per file.": "The number of old versions to keep, per file.",
|
||||
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
|
||||
"The path cannot be blank.": "The path cannot be blank.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
|
||||
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
|
||||
"This is a major version upgrade.": "This is a major version upgrade.",
|
||||
"Unknown": "Unknown",
|
||||
"Unshared": "Unshared",
|
||||
@@ -168,7 +171,7 @@
|
||||
"Upgrade To {%version%}": "Upgrade To {{version}}",
|
||||
"Upgrading": "Upgrading",
|
||||
"Upload Rate": "Upload Rate",
|
||||
"Use Compression": "Use Compression",
|
||||
"Uptime": "Uptime",
|
||||
"Use HTTPS for GUI": "Use HTTPS for GUI",
|
||||
"Version": "Version",
|
||||
"Versions Path": "Versions Path",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Direcciones",
|
||||
"All Data": "Todos los datos",
|
||||
"Allow Anonymous Usage Reporting?": "Permitir reporte anónimo de uso?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Un comando exterior maneja el control de versiones. Éste tiene que eliminar el archivo de la carpeta sincronizada.",
|
||||
"Anonymous Usage Reporting": "Reporte anónimo de uso",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Cualquier dispositivo configurado en un dispositivo introductor será también agregado a este dispositivo.",
|
||||
"Automatic upgrades": "Actualizaciones automáticas",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Uso de CPU",
|
||||
"Changelog": "Registro de cambios",
|
||||
"Close": "Cerrar",
|
||||
"Command": "Comando",
|
||||
"Comment, when used at the start of a line": "Comentario, cuando es utilizado al inicio de una línea.",
|
||||
"Compression": "Compresión",
|
||||
"Compression is recommended in most setups.": "La compresión de datos es recomendada para la mayoría de las configuraciones.",
|
||||
"Connection Error": "Error de conexión",
|
||||
"Copied from elsewhere": "Copiado desde otra parte.",
|
||||
"Copied from original": "Copiado del original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Derechos de autor © 2014 Jakob Borg y los siguientes colaboradores:",
|
||||
"Copyright © 2015 the following Contributors:": "Derechos de autor © 2015 los siguientes colaboradores:",
|
||||
"Delete": "Suprimir",
|
||||
"Device ID": "ID del dispositivo",
|
||||
"Device Identification": "Identificación del dispositivo",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Ingrese las direcciones \"ip:puerto\" separadas por coma, o \"dynamic\" para descubrir automáticamente las direcciones.",
|
||||
"Enter ignore patterns, one per line.": "Añadir patrones de exclusión, uno por línea.",
|
||||
"Error": "Error",
|
||||
"External File Versioning": "Control de versiones externo",
|
||||
"File Versioning": "Control de versiones",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Los permisos de archivo son ignorados al buscar cambios. Usar en sistemas de archivos FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Los archivos son movidos al directorio .stversions y renombrados a versiones marcadas por fecha cuando son reemplazados o eliminados por Syncthing,",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Los permisos de archivo son ignorados al buscar cambios. Usar el sistemas de archivos FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Lo archivos son movidos al directorio .stversions y renombrados a versiones marcadas por fecha cuando son reemplazados o eliminados por Syncthing,",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Los archivos están protegidos frente a los cambios realizados en otros dispositivos, peros los cambios realizados en este dispositivo serán envíados al resto del grupo",
|
||||
"Folder ID": "ID del repositorio",
|
||||
"Folder Master": "Repositorio maestro",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Búsqueda en internet",
|
||||
"Global Discovery Server": "Servidor global de identificación",
|
||||
"Global State": "Estado global",
|
||||
"Idle": "Inactivo",
|
||||
"Ignore": "Ignorar",
|
||||
"Ignore Patterns": "Patrones de exclusión",
|
||||
"Ignore Permissions": "Ignorar permisos",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversión de la condición dada (es decir, no excluir)",
|
||||
"Keep Versions": "Conservar versiones",
|
||||
"Last File Received": "Último archivo recibido",
|
||||
"Last File Synced": "Último archivo sincronizado.",
|
||||
"Last seen": "Visto por ultima vez",
|
||||
"Later": "Más tarde",
|
||||
"Latest Release": "Última versión",
|
||||
"Local Discovery": "Búsqueda en red local",
|
||||
"Local State": "Estado local",
|
||||
"Maximum Age": "Edad máxima",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Aviso",
|
||||
"OK": "OK",
|
||||
"Off": "Apagado",
|
||||
"Offline": "Fuera de linea",
|
||||
"Online": "En linea",
|
||||
"Out Of Sync": "Fuera de sincronización",
|
||||
"Out of Sync Items": "Ítems no sincronizados",
|
||||
"Outgoing Rate Limit (KiB/s)": "Tasa máxima de envío (KiB/s)",
|
||||
@@ -128,9 +125,7 @@
|
||||
"Start Browser": "Iniciar navegador",
|
||||
"Stopped": "Parado",
|
||||
"Support": "Soporte",
|
||||
"Support / Forum": "Soporte / Foro",
|
||||
"Sync Protocol Listen Addresses": "Dirección de escucha del protocolo de sincronización",
|
||||
"Synchronization": "Sincronización",
|
||||
"Syncing": "Sincronización",
|
||||
"Syncthing has been shut down.": "La sincronización esta apagada",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing incluye los siguientes softwares o partes de ellos:",
|
||||
@@ -144,6 +139,7 @@
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "La ID del dispositivo a introducir se puede encontrar en la opción de menú \"Edición > Mostrar ID\" en el otro dispositivo. Espacios y guiones son opcionales (ignorados).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "El informe de uso se envía encriptado diariamente. Se utiliza para hacer un seguimiento de plataformas comunes, tamaño de repositorios y versiones de la aplicación. Si el conjunto de datos cambia será notificado mediante este dialogo nuevamente.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "La ID del dispositivo introducida no es válida. Debe ser una cadena de 52 o 56 caracteres consistente en letras y números, con espacios y guiones opcionales.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "El primer argumento es la ruta de la carpeta y el segundo argumento es la ruta relativa de esta carpeta.",
|
||||
"The folder ID cannot be blank.": "La ID del repositorio no puede estar en blanco.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "La ID del repositorio debe ser un identificador corto (64 caracteres o menos) consistente solamente en letras, números, punto (.), guion (-) y guion bajo (_).",
|
||||
"The folder ID must be unique.": "La ID del repositorio debe ser única.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "El tiempo máximo para mantener una versión (en días, establece en 0 para mantener versiones para siempre).",
|
||||
"The number of old versions to keep, per file.": "El numero de versiones anteriores a conservar, por archivo.",
|
||||
"The number of versions must be a number and cannot be blank.": "El número de versiones debe ser un número y no puede estar vacío.",
|
||||
"The path cannot be blank.": "La ruta no puede estar vacía.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "El intervalo de reescaneo debe ser un número no negativo de segundos.",
|
||||
"The rescan interval must be at least 5 seconds.": "El intervalo de reescaneo debe ser al menos de 5 segundos.",
|
||||
"Unknown": "Desconocido",
|
||||
"Unshared": "No compartido",
|
||||
"Unused": "No utilizado",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Actualizar a {{version}}",
|
||||
"Upgrading": "Actualizando",
|
||||
"Upload Rate": "Tasa de subida",
|
||||
"Use Compression": "Usar compresión",
|
||||
"Use HTTPS for GUI": "Usar HTTPS para la GUI",
|
||||
"Version": "Versión",
|
||||
"Versions Path": "Ruta de versiones",
|
||||
|
||||
172
gui/assets/lang/lang-fi.json
Normal file
@@ -0,0 +1,172 @@
|
||||
{
|
||||
"API Key": "API-avain",
|
||||
"About": "Tietoja",
|
||||
"Add": "Lisää",
|
||||
"Add Device": "Lisää laite",
|
||||
"Add Folder": "Lisää kansio",
|
||||
"Add new folder?": "Lisää uusi kansio?",
|
||||
"Address": "Osoite",
|
||||
"Addresses": "Osoitteet",
|
||||
"All Data": "Kaikki data",
|
||||
"Allow Anonymous Usage Reporting?": "Salli anonyymi käyttöraportointi?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Ulkoinen komento hallitsee versionnin. Sen täytyy poistaa tiedosto synkronoidusta kansiosta.",
|
||||
"Anonymous Usage Reporting": "Anonyymi käyttöraportointi",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Kaikki esittelijäksi määritetyn laitteen tuntemat laitteet lisätään myös tähän laitteeseen.",
|
||||
"Automatic upgrades": "Automaattiset päivitykset",
|
||||
"Bugs": "Bugit",
|
||||
"CPU Utilization": "CPU:n käyttö",
|
||||
"Changelog": "Muutoshistoria",
|
||||
"Close": "Sulje",
|
||||
"Command": "Komento",
|
||||
"Comment, when used at the start of a line": "Kommentti, käytettäessä rivin alussa",
|
||||
"Compression": "Pakkaus",
|
||||
"Connection Error": "Yhteysvirhe",
|
||||
"Copied from elsewhere": "Kopioitu muualta",
|
||||
"Copied from original": "Kopioitu alkuperäisestä lähteestä",
|
||||
"Copyright © 2015 the following Contributors:": "Tekijänoikeus © 2015 seuraavat avustajat:",
|
||||
"Delete": "Poista",
|
||||
"Device ID": "Laitteen ID",
|
||||
"Device Identification": "Laitteen tunniste",
|
||||
"Device Name": "Laitteen nimi",
|
||||
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Laite {{device}} ({{address}}) haluaa yhdistää. Lisää uusi laite?",
|
||||
"Devices": "Laitteet",
|
||||
"Disconnected": "Yhteys katkaistu",
|
||||
"Documentation": "Dokumentaatio",
|
||||
"Download Rate": "Latausmäärä",
|
||||
"Downloaded": "Ladattu",
|
||||
"Downloading": "Ladataan",
|
||||
"Edit": "Muokkaa",
|
||||
"Edit Device": "Muokkaa laitetta",
|
||||
"Edit Folder": "Muokkaa kansiota",
|
||||
"Editing": "Muokkaus",
|
||||
"Enable UPnP": "Ota UPnP käyttöön",
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Syötä pilkuin erotettuna osoitteet muodossa \"ip:portti\" tai syötä \"dynamic\" automaattista osoitteiden selvitystä varten.",
|
||||
"Enter ignore patterns, one per line.": "Syötä ohituslausekkeet, yksi riviä kohden.",
|
||||
"Error": "Virhe",
|
||||
"External File Versioning": "Ulkoinen tiedostoversionti",
|
||||
"File Versioning": "Tiedostoversiointi",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Tiedostojen oikeusbitit jätetään huomiotta etsittäessä muutoksia. Käytä FAT-tiedostojärjestelmissä.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Tiedostot siirretään päivämäärällä merkityiksi versioiksi .stversions-kansioon, kun Syncthing korvaa tai poistaa ne.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Tiedostot on suojattu muilla laitteilla tehdyiltä muutoksilta, mutta tällä laitteella tehdyt muutokset lähetetään muuhun ryhmään.",
|
||||
"Folder ID": "Kansion ID",
|
||||
"Folder Master": "Hallitsijakansio",
|
||||
"Folder Path": "Kansion polku",
|
||||
"Folders": "Kansiot",
|
||||
"GUI Authentication Password": "GUI:n salasana",
|
||||
"GUI Authentication User": "GUI:n käyttäjätunnus",
|
||||
"GUI Listen Addresses": "GUI:n kuunteluosoitteet",
|
||||
"Generate": "Generoi",
|
||||
"Global Discovery": "Globaali etsintä",
|
||||
"Global Discovery Server": "Globaali etsintäpalvelin",
|
||||
"Global State": "Globaali tila",
|
||||
"Ignore": "Ohita",
|
||||
"Ignore Patterns": "Ohituslausekkeet",
|
||||
"Ignore Permissions": "Jätä oikeudet huomiotta",
|
||||
"Incoming Rate Limit (KiB/s)": "Sisääntulevan liikenteen rajoitus (KiB/s)",
|
||||
"Introducer": "Esittelijä",
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Käänteinen ehto (t.s. älä ohita)",
|
||||
"Keep Versions": "Säilytä versiot",
|
||||
"Last File Received": "Viimeksi vastaanotettu tiedosto",
|
||||
"Last seen": "Nähty viimeksi",
|
||||
"Later": "Myöhemmin",
|
||||
"Local Discovery": "Paikallinen etsintä",
|
||||
"Local State": "Paikallinen tila",
|
||||
"Maximum Age": "Maksimi-ikä",
|
||||
"Metadata Only": "Vain metadata",
|
||||
"Move to top of queue": "Siirrä jonon alkuun",
|
||||
"Multi level wildcard (matches multiple directory levels)": "Monitasoinen jokerimerkki (vaikuttaa useassa kansiotasossa)",
|
||||
"Never": "Ei koskaan",
|
||||
"New Device": "Uusi laite",
|
||||
"New Folder": "Uusi kansio",
|
||||
"No": "Ei",
|
||||
"No File Versioning": "Ei tiedostoversiointia",
|
||||
"Notice": "Huomautus",
|
||||
"OK": "OK",
|
||||
"Off": "Pois",
|
||||
"Out Of Sync": "Ei ajan tasalla",
|
||||
"Out of Sync Items": "Kohteet, jotka eivät ole ajan tasalla",
|
||||
"Outgoing Rate Limit (KiB/s)": "Uloslähtevän liikenteen rajoitus (KiB/s)",
|
||||
"Override Changes": "Ohita muutokset",
|
||||
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Polku kansioon paikallisella tietokoneella. Kansio luodaan, ellei sitä ole olemassa. Tilde-merkkiä (~) voidaan käyttää oikotienä polulle",
|
||||
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Polku jonne versiot tullaan tallentamaan (jätä tyhjäksi oletusvalintaa .stversions varten).",
|
||||
"Please wait": "Ole hyvä ja odota",
|
||||
"Preview": "Esikatselu",
|
||||
"Preview Usage Report": "Esikatsele käyttöraportti",
|
||||
"Quick guide to supported patterns": "Tuettujen lausekkeiden pikaohje",
|
||||
"RAM Utilization": "RAM:n käyttö",
|
||||
"Rescan": "Skannaa uudelleen",
|
||||
"Rescan All": "Skannaa kaikki uudelleen",
|
||||
"Rescan Interval": "Uudelleenskannauksen aikaväli",
|
||||
"Restart": "Käynnistä uudelleen",
|
||||
"Restart Needed": "Uudelleenkäynnistys tarvitaan",
|
||||
"Restarting": "Käynnistetään uudelleen",
|
||||
"Reused": "Uudelleenkäytetty",
|
||||
"Save": "Tallenna",
|
||||
"Scanning": "Skannataan",
|
||||
"Select the devices to share this folder with.": "Valitse laitteet, joiden kanssa tämä kansio jaetaan.",
|
||||
"Select the folders to share with this device.": "Valitse kansiot jaettavaksi tämän laitteen kanssa.",
|
||||
"Settings": "Asetukset",
|
||||
"Share": "Jaa",
|
||||
"Share Folder": "Jaa kansio",
|
||||
"Share Folders With Device": "Jaa kansioita laitteen kanssa",
|
||||
"Share With Devices": "Jaa laitteiden kanssa",
|
||||
"Share this folder?": "Jaa tämä kansio?",
|
||||
"Shared With": "Jaettu seuraavien kanssa",
|
||||
"Short identifier for the folder. Must be the same on all cluster devices.": "Lyhyt tunniste kansiolle. Tämän tulee olla sama kaikilla ryhmän laitteilla.",
|
||||
"Show ID": "Näytä ID",
|
||||
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Näytetään ryhmän tiedoissa laitteen ID:n sijaan. Ilmoitetaan muille laitteille vaihtoehtoisena oletusnimenä.",
|
||||
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Näytetään ryhmän tiedoissa laitteen ID:n sijaan. Tyhjä nimi päivitetään laitteen ilmoittamaksi nimeksi.",
|
||||
"Shutdown": "Sammuta",
|
||||
"Shutdown Complete": "Sammutus valmis",
|
||||
"Simple File Versioning": "Yksinkertainen tiedostoversiointi",
|
||||
"Single level wildcard (matches within a directory only)": "Yksitasoinen jokerimerkki (vaikuttaa vain kyseisen kansion sisällä)",
|
||||
"Source Code": "Lähdekoodi",
|
||||
"Staggered File Versioning": "Porrastettu tiedostoversiointi",
|
||||
"Start Browser": "Käynnistä selain",
|
||||
"Stopped": "Pysäytetty",
|
||||
"Support": "Tuki",
|
||||
"Sync Protocol Listen Addresses": "Synkronointiprotokollan kuunteluosoite",
|
||||
"Syncing": "Synkronoidaan",
|
||||
"Syncthing has been shut down.": "Syncthing on sammutettu.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing sisältää seuraavat ohjelmistot tai sen osat:",
|
||||
"Syncthing is restarting.": "Syncthing käynnistyy uudelleen.",
|
||||
"Syncthing is upgrading.": "Syncthing päivittyy.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing näyttää olevan alhaalla tai internetyhteydessä on ongelma. Yritetään uudelleen...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing ei pysty käsittelemään pyyntöäsi. Ole hyvä ja päivitä sivu tai käynnistä Syncthing uudelleen, jos ongelma jatkuu.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Yhdistetyt tilastot ovat julkisesti saatavilla osoitteessa {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Asetukset on tallennettu, mutta niitä ei ole otettu käyttöön. Syncthingin täytyy käynnistyä uudelleen, jotta uudet asetukset saadaan käyttöön.",
|
||||
"The device ID cannot be blank.": "Laitteen ID ei voi olla tyhjä.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Tähän kohtaan syötettävän ID:n löytää \"Muokkaa > Näytä ID\" -valikosta toisesta laitteesta. Välit ja viivat ovat valinnaisia (jätetään huomiotta).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Salattu käyttöraportti lähetetään päivittäin. Sitä käytetään yleisimpien alustojen, kansioiden kokojen ja sovellusversioiden seuraamiseen. Jos raportitavan datan luonne muuttuu, sinua tullaan huomauttamaan tällä dialogilla uudelleen.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Syötetty laite-ID ei näytä kelpaavalta. Sen tulisi olla 52 tai 56 merkkiä pitkä, joka koostuu kirjaimista ja numeroista, jossa välit ja viivat ovat valinnaisia.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Ensimmäinen komentoriviparametri on kansion polku, toinen parametri on suhteellinen polku kansion sisällä.",
|
||||
"The folder ID cannot be blank.": "Kansion ID ei voi olla tyhjä.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Kansion ID:n tulee olla lyhyt tunniste (64 merkkiä tai vähemmän), joka koostuu vain kirjaimista, numeroista, pisteistä (.), viivoista (-), ja alaviivoista (_).",
|
||||
"The folder ID must be unique.": "Kansion ID:n tulee olla uniikki.",
|
||||
"The folder path cannot be blank.": "Kansion polku ei voi olla tyhjä.",
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Seuraavat aikavälit ovat käytössä: ensimmäisen tunnin ajalta uusi versio säilytetään joka 30 sekunti, ensimmäisen päivän ajalta uusi versio säilytetään tunneittain ja ensimmäisen 30 päivän aikana uusi versio säilytetään päivittäin. Lopulta uusi versio säilytetään viikoittain, kunnes maksimi-ikä saavutetaan.",
|
||||
"The maximum age must be a number and cannot be blank.": "Maksimi-iän tulee olla numero, eikä se voi olla tyhjä.",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maksimiaika versioiden säilytykseen (päivissä, aseta 0 säilyttääksesi versiot ikuisesti).",
|
||||
"The number of old versions to keep, per file.": "Säilytettävien vanhojen versioiden määrä tiedostoa kohden.",
|
||||
"The number of versions must be a number and cannot be blank.": "Versioiden määrän rulee olla numero, eikä se voi olla tyhjä.",
|
||||
"The path cannot be blank.": "Polku ei voi olla tyhjä.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Uudelleenskannauksen aikavälin tulee olla ei-negatiivinen numero sekunteja.",
|
||||
"Unknown": "Tuntematon",
|
||||
"Unshared": "Jakamaton",
|
||||
"Unused": "Käyttämätön",
|
||||
"Up to Date": "Ajan tasalla",
|
||||
"Upgrade To {%version%}": "Päivitä versioon {{version}}",
|
||||
"Upgrading": "Päivitetään",
|
||||
"Upload Rate": "Lähetysmäärä",
|
||||
"Use HTTPS for GUI": "Käytä HTTPS:ää GUI:n kanssa",
|
||||
"Version": "Versio",
|
||||
"Versions Path": "Versioiden polku",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versiot poistetaan automaattisesti mikäli ne ovat vanhempia kuin maksimi-ikä tai niiden määrä ylittää sallitun määrän tietyllä aikavälillä.",
|
||||
"When adding a new device, keep in mind that this device must be added on the other side too.": "Lisättäessä laitetta, muista että tämä laite tulee myös lisätä toiseen laitteeseen.",
|
||||
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lisättäessä uutta kansiota, muista että kansion ID:tä käytetään solmimaan kansiot yhteen laitteiden välillä. Ne ovat riippuvaisia kirjankoosta ja niiden tulee täsmätä kaikkien laitteiden välillä.",
|
||||
"Yes": "Kyllä",
|
||||
"You must keep at least one version.": "Sinun tulee säilyttää ainakin yksi versio.",
|
||||
"full documentation": "täysi dokumentaatio",
|
||||
"items": "kohteet",
|
||||
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} haluaa jakaa kansion \"{{folder}}\"."
|
||||
}
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adresses",
|
||||
"All Data": "Toutes les données",
|
||||
"Allow Anonymous Usage Reporting?": "Autoriser le rapport anonyme de statistiques d'utilisation ?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le dossier synchronisé.",
|
||||
"Anonymous Usage Reporting": "Rapport anonyme de statistiques d'utilisation",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Toute machine ajoutée depuis une machine initiatrice sera aussi ajoutée sur cette machine.",
|
||||
"Automatic upgrades": "Mises à jour automatiques",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Utilisation du CPU",
|
||||
"Changelog": "Nouveautés",
|
||||
"Close": "Fermer",
|
||||
"Command": "Commande",
|
||||
"Comment, when used at the start of a line": "Commentaire, lorsque utilisé en début de ligne",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "La compression est recommandée pour la plupart des configurations.",
|
||||
"Connection Error": "Erreur de connexion",
|
||||
"Copied from elsewhere": "Copié d'ailleurs",
|
||||
"Copied from original": "Copié de l'original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg et les contributeurs suivants :",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 Les contributeurs suivants:",
|
||||
"Delete": "Supprimer",
|
||||
"Device ID": "ID du périphérique",
|
||||
"Device Identification": "Identification de l'appareil",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Entrer les adresses \"ip:port\" séparées par une virgule ou \"dynamic\" afin d'activer la recherche automatique de l'adresse.",
|
||||
"Enter ignore patterns, one per line.": "Entrer les masques de filtrage, un par ligne.",
|
||||
"Error": "Erreur",
|
||||
"External File Versioning": "Gestion externe des versions de fichiers",
|
||||
"File Versioning": "Versions de fichier",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Les permissions de fichier sont ignorées lors de la recherche de changements. À utiliser sur les systèmes de fichiers de type FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Les fichiers sont datés et déplacés dans le dossier .stversions lors de leur remplacement ou suppression par syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Les bits de permission de fichier sont ignorés lors de la recherche de changements. Utilisé sur les systèmes de fichiers FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés, avec horodatage, dans un dossier .stversions quand ils sont remplacés ou supprimés par Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Les fichiers sont protégés des changements réalisés sur les autres appareils, mais les changements réalisés sur cet appareil seront transférés au reste du groupe.",
|
||||
"Folder ID": "ID du répertoire",
|
||||
"Folder Master": "Répertoire maître",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Recherche globale",
|
||||
"Global Discovery Server": "Serveur global de recherche",
|
||||
"Global State": "État global",
|
||||
"Idle": "Au repos",
|
||||
"Ignore": "Ignorer",
|
||||
"Ignore Patterns": "Modèles à éviter",
|
||||
"Ignore Permissions": "Ignorer les permissions",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
|
||||
"Keep Versions": "Conserver les versions",
|
||||
"Last File Received": "Dernier fichier reçu",
|
||||
"Last File Synced": "Dernier fichier synchronisé",
|
||||
"Last seen": "Dernière apparition",
|
||||
"Later": "Plus tard",
|
||||
"Latest Release": "Dernière version",
|
||||
"Local Discovery": "Recherche locale",
|
||||
"Local State": "État local",
|
||||
"Maximum Age": "Ancienneté maximum",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Notification",
|
||||
"OK": "OK",
|
||||
"Off": "Éteint",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Non synchronisé",
|
||||
"Out of Sync Items": "Objets non synchronisés",
|
||||
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Démarrer le navigateur web",
|
||||
"Stopped": "Arrêté",
|
||||
"Support": "Aide",
|
||||
"Support / Forum": "Aide / Forum",
|
||||
"Sync Protocol Listen Addresses": "Adresse du protocole de synchronisation",
|
||||
"Synchronization": "Synchronisation",
|
||||
"Syncing": "En cours de synchronisation",
|
||||
"Syncthing has been shut down.": "Syncthing a été éteint.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing intègre les logiciels suivants (ou des éléments provenant de ces logiciels) :",
|
||||
"Syncthing is restarting.": "Syncthing est cours de redémarrage.",
|
||||
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble éprouver des difficultés à appliquer votre requête. Si le problème persiste, veuillez rafraîchir votre navigateur ou redémarrer Syncthing.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble avoir un problème pour traiter votre demande. S'il vous plaît, rafraîchissez la page ou redémarrer Syncthing si le problème persiste.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Les statistiques agrégées sont disponibles publiquement à l'adresse {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuration a été sauvée mais pas activée. Syncthing doit redémarrer afin d'activer la nouvelle configuration.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuration a été enregistrée mais pas activée. Syncthing doit redémarrer afin d'activer la nouvelle configuration.",
|
||||
"The device ID cannot be blank.": "L'ID de l'appareil ne peut être vide.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID de l'appareil à renseigner peut être trouvé dans le menu \"Éditer > Montrer l'ID\" des autres nœuds. Les espaces et les tirets sont optionnels (ils seront ignorés).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des répertoires et les versions de l'application. Si les données rapportées sont modifiées cette boite de dialogue vous redemandera votre confirmation.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID de l'appareil inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 caractères comprenant des lettres, des chiffres et potentiellement des espaces et des traits d'union.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Le premier paramètre de ligne de commande est le chemin du dossier, et le second est le chemin relatif dans le dossier.",
|
||||
"The folder ID cannot be blank.": "L'identifiant (ID) du dossier ne peut être vide.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID du dossier doit être un identifiant court (64 caractères ou moins) comprenant uniquement des lettres, nombres, points (.), traits d'union (-) et tirets bas (_).",
|
||||
"The folder ID must be unique.": "L'ID du répertoire doit être unique.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Le temps maximum de conservation d'une version (en jours, mettre à 0 pour conserver les versions pour toujours)",
|
||||
"The number of old versions to keep, per file.": "Le nombre d'anciennes versions à garder, par fichier.",
|
||||
"The number of versions must be a number and cannot be blank.": "Le nombre de versions doit être numérique, et ne peut pas être vide.",
|
||||
"The path cannot be blank.": "Le chemin ne peut pas être vide.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "L'intervalle d'analyse ne doit pas être un nombre négatif de secondes.",
|
||||
"The rescan interval must be at least 5 seconds.": "L'intervalle de scan doit être d'au minimum 5 secondes.",
|
||||
"Unknown": "Inconnu",
|
||||
"Unshared": "Non partagé",
|
||||
"Unused": "Non utilisé",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
|
||||
"Upgrading": "Mise à jour de Syncthing",
|
||||
"Upload Rate": "Débit d'envoi",
|
||||
"Use Compression": "Utiliser la compression",
|
||||
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
|
||||
"Version": "Version",
|
||||
"Versions Path": "Emplacement des versions",
|
||||
|
||||
@@ -7,8 +7,9 @@
|
||||
"Add new folder?": " ",
|
||||
"Address": "Cím",
|
||||
"Addresses": "Címek",
|
||||
"All Data": "All Data",
|
||||
"All Data": "Minden adat",
|
||||
"Allow Anonymous Usage Reporting?": "Engedélyezed a névtelen felhasználási adatok küldését?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Egy külső program kezeli a fájl verziózást. El kell távolítsa a fájlt a szinkronizált mappából.",
|
||||
"Anonymous Usage Reporting": "Névtelen felhasználási adatok küldése",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Minden eszköz ami a bevezető eszközön lett beállítva hozzá lesz adva ehhez az eszközhöz is.",
|
||||
"Automatic upgrades": "Automatikus frissítés",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Processzor használat",
|
||||
"Changelog": "Változások",
|
||||
"Close": "Bezárás",
|
||||
"Command": "Parancs",
|
||||
"Comment, when used at the start of a line": "Megjegyzés, a sor elején használva",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "A tömörítés a a legtöbb esetben ajánlott",
|
||||
"Compression": "Tömörítés",
|
||||
"Connection Error": "Kapcsolódási hiba",
|
||||
"Copied from elsewhere": "Másolva máshonnan",
|
||||
"Copied from original": "Másolva az eredetiről",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg és az alábbi Közreműködők",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Törlés",
|
||||
"Device ID": "Eszköz azonosító",
|
||||
"Device Identification": "Eszköz azonosító",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Add meg a címeket (ip:port formátumban) vesszővel elválasztva vagy add meg a \"dynamic\" szót az a cím automatikus észleléséhez.",
|
||||
"Enter ignore patterns, one per line.": "Figyelmen kívül hagyáshoz ide írhatod a mintákat, soronként egyet",
|
||||
"Error": "Hiba",
|
||||
"External File Versioning": "Külső fájl verziózás",
|
||||
"File Versioning": "Fájl verziózás",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "A fájl jogosultságok figyelmen kívül hagyása. FAT fájlrendszernél használatos.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "A fájlok időpecsételt verziói a .stversions mappában kerülnek áthelyezésre, amikor felülírásra vagy törlésre kerülnek a Syncthing által.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "A fájlok védve vannak a más eszközökön történt változásokkal szemben, de az ezen az eszközön történt változások érvényesek lesznek a többire.",
|
||||
"Folder ID": "Mappa azonosító",
|
||||
"Folder Master": "Központi mappa",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Globális felfedezés",
|
||||
"Global Discovery Server": "Globális felfedező szerver",
|
||||
"Global State": "Globális állapot",
|
||||
"Idle": "Tétlen",
|
||||
"Ignore": "Visszautasítás",
|
||||
"Ignore Patterns": "Figyelmen kívül hagyás",
|
||||
"Ignore Permissions": "Jogosultságok figyelmen kívül hagyása",
|
||||
@@ -66,14 +67,12 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "A feltétel ellentéte (pl. ki nem hagyás)",
|
||||
"Keep Versions": "Megtartott verziók",
|
||||
"Last File Received": "Utolsó beérkezett fájl",
|
||||
"Last File Synced": "Utolsó szinkronizált fájl",
|
||||
"Last seen": "Utoljára látva",
|
||||
"Later": "Később",
|
||||
"Latest Release": "Utolsó kiadás",
|
||||
"Local Discovery": "Helyi felfedezés",
|
||||
"Local State": "Helyi állapot",
|
||||
"Maximum Age": "Maximális kor",
|
||||
"Metadata Only": "Metadata Only",
|
||||
"Metadata Only": "Csak metaadatok",
|
||||
"Move to top of queue": "Sor elejére mozgatás",
|
||||
"Multi level wildcard (matches multiple directory levels)": "Több szintű helyettesítő karakter (több könyvtár szintre érvényesül)",
|
||||
"Never": "Soha",
|
||||
@@ -83,9 +82,7 @@
|
||||
"No File Versioning": "Nincs fájl verziózás",
|
||||
"Notice": "Megjegyzés",
|
||||
"OK": "Rendben",
|
||||
"Off": "Off",
|
||||
"Offline": "Nincs kapcsolat",
|
||||
"Online": "Kapcsolódva",
|
||||
"Off": "Kikapcsolva",
|
||||
"Out Of Sync": "Nincs szinkronban",
|
||||
"Out of Sync Items": "Nem szinkronizált elemek",
|
||||
"Outgoing Rate Limit (KiB/s)": "Kimenő sávszélesség (KiB/mp)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Böngésző indítása",
|
||||
"Stopped": "Leállítva",
|
||||
"Support": "Támogatás",
|
||||
"Support / Forum": "Támogatás / Fórum",
|
||||
"Sync Protocol Listen Addresses": "Szinkronizációs protokoll címe",
|
||||
"Synchronization": "Szinkronizálás",
|
||||
"Syncing": "Szinkronizálás",
|
||||
"Syncthing has been shut down.": "Syncthing leállítva",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing a következő programokat, vagy komponenseket tartalmazza.",
|
||||
"Syncthing is restarting.": "Syncthing újraindul",
|
||||
"Syncthing is upgrading.": "Syncthing frissül",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Úgy tűnik, hogy a Syncthing nem működik, vagy valami probléma van az hálózati kapcsolattal. Újra próbálom...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "A Syncthing hibát észlelt a kérés teljesítése közben. Töltsd újra a böngészőt vagy indítsd újra a Syncthing-et amennyiben a hiba tartósan fennáll.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Az összevont statisztikák nyilvánosan elérhetők a {{url}} címen.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "A beállítások elmentésre kerültek, de nem lettek aktiválva. Indítsd újra a Syncthing-et, hogy aktiváld őket.",
|
||||
"The device ID cannot be blank.": "Az eszköz azonosító nem lehet üres.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Az eszköz azonosító az \"Azonosító mutatása\" ablakban található az eszközökön. A szóközök és kötőjelek opcionálisak (nem lesznek figyelembe véve)",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "A titkosított felhasználási adatok naponta kerülnek küldésre. Arra használjuk őket hogy kövessük a különböző platformokat, mappa méreteket és program verziókat. Amennyiben az elküldésre kerülő adat megváltozik, ez az ablak újra megjelenik.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "A beírt eszköz azonosító nem valódi. Az azonosító 52 vagy 56 karakter hosszú, számokból és betűkből áll, opcionálisan szóközöket és kötőjeleket tartalmaz.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Az első parancssori paraméter a mappa elérési útja, a második a relatív elérési út a mappában.",
|
||||
"The folder ID cannot be blank.": "A mappa azonosító nem lehet üres.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "A mappa azonosító rövid (maximum 64 karakter), betűkből, számokból és a pont (.), kötőjel (-) és alulvonás (_) karakterekből állhat.",
|
||||
"The folder ID must be unique.": "A mappa azonosító egyedi kell legyen",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "A verziók megtartásának maximális ideje (napokban, ha 0-t adsz meg örökre megmaradnak).",
|
||||
"The number of old versions to keep, per file.": "A megtartott régi verziók száma, fájlonként.",
|
||||
"The number of versions must be a number and cannot be blank.": "A megtartott verziók száma nem lehet üres",
|
||||
"The path cannot be blank.": "Elérési út nem lehet üres.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Az átnézési intervallum nullánál nagyobb másodperc érték kell legyen",
|
||||
"The rescan interval must be at least 5 seconds.": "Az átnézési intervallumnak legalább 5 másodpercnek kell lennie.",
|
||||
"Unknown": "Ismeretlen",
|
||||
"Unshared": "Nincs megosztva",
|
||||
"Unused": "Nincs használatban",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Frissítés a {{version}} verzióra",
|
||||
"Upgrading": "Frissítés",
|
||||
"Upload Rate": "Feltöltési sebesség",
|
||||
"Use Compression": "Tömörítés használata",
|
||||
"Use HTTPS for GUI": "HTTPS használata a GUI-hoz",
|
||||
"Version": "Verzió",
|
||||
"Versions Path": "Verziók útvonala",
|
||||
|
||||
@@ -7,8 +7,9 @@
|
||||
"Add new folder?": "Aggiungere una nuova cartella?",
|
||||
"Address": "Indirizzo",
|
||||
"Addresses": "Indirizzi",
|
||||
"All Data": "Tutti i dati",
|
||||
"All Data": "Tutti i Dati",
|
||||
"Allow Anonymous Usage Reporting?": "Abilitare Statistiche Anonime di Utilizzo?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Il controllo versione è gestito da un comando esterno. Quest'ultimo deve rimuovere il file dalla cartella sincronizzata.",
|
||||
"Anonymous Usage Reporting": "Statistiche Anonime di Utilizzo",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Qualsiasi dispositivo configurato in un introduttore verrà aggiunto anche a questo dispositivo.",
|
||||
"Automatic upgrades": "Aggiornamenti automatici",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Utilizzo CPU",
|
||||
"Changelog": "Changelog",
|
||||
"Close": "Chiudi",
|
||||
"Command": "Comando",
|
||||
"Comment, when used at the start of a line": "Per commentare, va inserito all'inizio di una riga",
|
||||
"Compression": "Compressione",
|
||||
"Compression is recommended in most setups.": "La compressione è raccomandata nella maggior parte delle configurazioni.",
|
||||
"Connection Error": "Errore di Connessione",
|
||||
"Copied from elsewhere": "Copiato da qualche altra parte",
|
||||
"Copied from original": "Copiato dall'originale",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg e i seguenti Collaboratori:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 i seguenti Collaboratori:",
|
||||
"Delete": "Elimina",
|
||||
"Device ID": "ID Dispositivo",
|
||||
"Device Identification": "Identificazione Dispositivo",
|
||||
@@ -33,7 +34,7 @@
|
||||
"Documentation": "Documentazione",
|
||||
"Download Rate": "Velocità Download",
|
||||
"Downloaded": "Scaricato",
|
||||
"Downloading": "Sto scaricando",
|
||||
"Downloading": "Scaricamento in corso",
|
||||
"Edit": "Modifica",
|
||||
"Edit Device": "Modifica Dispositivo",
|
||||
"Edit Folder": "Modifica Cartella",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Inserisci gli indirizzi \"ip:porta\" separati da una virgola, altrimenti inserisci \"dynamic\" per effettuare il rilevamento automatico dell'indirizzo.",
|
||||
"Enter ignore patterns, one per line.": "Inserisci gli schemi di esclusione, uno per riga.",
|
||||
"Error": "Errore",
|
||||
"External File Versioning": "Controllo Versione Esterno",
|
||||
"File Versioning": "Controllo Versione dei File",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Il software evita i bit dei permessi dei file durante il controllo delle modifiche. Utilizzato nei filesystem FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "I file sostituiti o eliminati da syncthing vengono datati e spostati in una cartella .stversions .",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Il software evita i bit dei permessi dei file durante il controllo delle modifiche. Utilizzato nei filesystem FAT.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "I file sostituiti o eliminati da Syncthing vengono datati e spostati in una cartella .stversions.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "I file sono protetti dalle modifiche effettuate negli altri dispositivi, ma le modifiche effettuate in questo dispositivo verranno inviate anche al resto del cluster.",
|
||||
"Folder ID": "ID Cartella",
|
||||
"Folder Master": "Cartella Principale",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Individuazione Globale",
|
||||
"Global Discovery Server": "Server di Ricerca Globale",
|
||||
"Global State": "Stato Globale",
|
||||
"Idle": "Inattivo",
|
||||
"Ignore": "Ignora",
|
||||
"Ignore Patterns": "Schemi Esclusione File",
|
||||
"Ignore Permissions": "Ignora Permessi",
|
||||
@@ -66,14 +67,12 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversione della condizione indicata (ad es. non escludere)",
|
||||
"Keep Versions": "Versioni Mantenute",
|
||||
"Last File Received": "Ultimo File Ricevuto",
|
||||
"Last File Synced": "Ultimo File Sincronizzato",
|
||||
"Last seen": "Ultima connessione",
|
||||
"Later": "Più Tardi",
|
||||
"Latest Release": "Ultima Versione",
|
||||
"Local Discovery": "Individuazione Locale",
|
||||
"Local State": "Stato Locale",
|
||||
"Maximum Age": "Durata Massima",
|
||||
"Metadata Only": "Solo i metadati",
|
||||
"Metadata Only": "Solo i Metadati",
|
||||
"Move to top of queue": "Posiziona in cima alla coda",
|
||||
"Multi level wildcard (matches multiple directory levels)": "Metacarattere multi-livello (corrisponde alle cartelle e alle sotto-cartelle)",
|
||||
"Never": "Mai",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Avviso",
|
||||
"OK": "OK",
|
||||
"Off": "Off",
|
||||
"Offline": "Non in linea",
|
||||
"Online": "In linea",
|
||||
"Out Of Sync": "Non Sincronizzati",
|
||||
"Out of Sync Items": "Elementi Non Sincronizzati",
|
||||
"Outgoing Rate Limit (KiB/s)": "Limite Velocità in Uscita (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Avvia Browser",
|
||||
"Stopped": "Fermato",
|
||||
"Support": "Supporto",
|
||||
"Support / Forum": "Supporto / Forum",
|
||||
"Sync Protocol Listen Addresses": "Indirizzi del Protocollo di Sincronizzazione",
|
||||
"Synchronization": "Sincronizzazione",
|
||||
"Syncing": "Sincronizzazione in corso",
|
||||
"Syncthing has been shut down.": "Syncthing è stato arrestato.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing include i seguenti software o porzioni di questi:",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing utilizza i seguenti software o porzioni di questi:",
|
||||
"Syncthing is restarting.": "Riavvio di Syncthing in corso.",
|
||||
"Syncthing is upgrading.": "Aggiornamento di Syncthing in corso.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing sembra inattivo, oppure c'è un problema con la tua connessione a Internet. Nuovo tentativo…",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Sembra che Syncthing non sia stato in grado di processare il tuo comando. Se il problema persiste, prova a ricaricare la pagina nel tuo navigatore, oppure a rilanciare Syncthing.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Sembra che Syncthing non sia in grado di elaborare il tuo comando. Se il problema persiste prova a ricaricare la pagina nel tuo navigatore oppure prova a riavviare Syncthing.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Le statistiche aggregate sono disponibili pubblicamente su {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configurazione è stata salvata ma non attivata. Devi riavviare Syncthing per attivare la nuova configurazione.",
|
||||
"The device ID cannot be blank.": "L'ID del dispositivo non può essere vuoto.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Trova l'ID nella finestra di dialogo \"Modifica > Mostra ID\" dell'altro dispositivo, poi inseriscilo qui. Gli spazi e i trattini sono opzionali (ignorati).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Quotidianamente il software invia le statistiche di utilizzo in forma criptata. Questi dati riguardano i sistemi operativi utilizzati, le dimensioni delle cartelle e le versioni del software. Se i dati riportati sono cambiati, verrà mostrata di nuovo questa finestra di dialogo.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID del dispositivo inserito non sembra valido. Dovrebbe essere una stringa di 52 o 56 caratteri costituita da lettere e numeri, con spazi e trattini opzionali.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Il primo parametro della riga di comando è il percorso della cartella e il secondo parametro è il percorso relativo nella cartella.",
|
||||
"The folder ID cannot be blank.": "L'ID della cartella non può essere vuoto.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID della cartella dev'essere un identificatore breve (64 caratteri o meno) costituito solamente da lettere, numeri, punti (.), trattini (-) e trattini bassi (_).",
|
||||
"The folder ID must be unique.": "L'ID della cartella dev'essere unico.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "La durata massima di una versione (in giorni, imposta a 0 per mantenere le versioni per sempre).",
|
||||
"The number of old versions to keep, per file.": "Il numero di vecchie versioni da mantenere, per file.",
|
||||
"The number of versions must be a number and cannot be blank.": "Il numero di versioni dev'essere un numero e non può essere vuoto.",
|
||||
"The path cannot be blank.": "Il percorso non può essere vuoto.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "L'intervallo di scansione deve essere un numero superiore a zero secondi.",
|
||||
"The rescan interval must be at least 5 seconds.": "L'intervallo di scansione non può essere inferiore a 5 secondi.",
|
||||
"Unknown": "Sconosciuto",
|
||||
"Unshared": "Non Condiviso",
|
||||
"Unused": "Non Utilizzato",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Aggiorna alla {{version}}",
|
||||
"Upgrading": "Aggiornamento",
|
||||
"Upload Rate": "Velocità Upload",
|
||||
"Use Compression": "Utilizza Compressione",
|
||||
"Use HTTPS for GUI": "Utilizza HTTPS per l'interfaccia grafica",
|
||||
"Version": "Versione",
|
||||
"Versions Path": "Percorso Cartella Versioni",
|
||||
|
||||
172
gui/assets/lang/lang-ko-KR.json
Normal file
@@ -0,0 +1,172 @@
|
||||
{
|
||||
"API Key": "API 키",
|
||||
"About": "궁금하죠",
|
||||
"Add": "추가",
|
||||
"Add Device": "장치 ",
|
||||
"Add Folder": "폴더 추가",
|
||||
"Add new folder?": "새로운 폴더를 추가하시겠습니까?",
|
||||
"Address": "주소",
|
||||
"Addresses": "주소",
|
||||
"All Data": "전체 데이터",
|
||||
"Allow Anonymous Usage Reporting?": "버그 리포트를 보낼까요?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "외부 명령은 버젼을 처리합니다. 동기화된 폴더에서 파일을 날려버려요.",
|
||||
"Anonymous Usage Reporting": "에러 를 보고 받습니다.",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "추가되는 장치에 이 구성이 추가되요.",
|
||||
"Automatic upgrades": "자동 업데이트",
|
||||
"Bugs": "버그",
|
||||
"CPU Utilization": "프로세스 사용",
|
||||
"Changelog": "바뀐점",
|
||||
"Close": "닫기",
|
||||
"Command": "명령",
|
||||
"Comment, when used at the start of a line": "명령행에서 시작을 할수 있어요.",
|
||||
"Compression": "압축",
|
||||
"Connection Error": "앗!! 프로그램이 죽었다.",
|
||||
"Copied from elsewhere": "다른데서 복사됨",
|
||||
"Copied from original": "원본에서 복사됨.",
|
||||
"Copyright © 2015 the following Contributors:": "이것은 번역할 필요가 있을까.",
|
||||
"Delete": "삭제",
|
||||
"Device ID": "장치 아이디",
|
||||
"Device Identification": "장치 식별자",
|
||||
"Device Name": "장치 이름",
|
||||
"Device {%device%} ({%address%}) wants to connect. Add new device?": "다른 장치 {{device}} ({{address}}) 에서 접속요구해요..이놈을 추가할까요?\n ",
|
||||
"Devices": "장치들",
|
||||
"Disconnected": "접속끊김",
|
||||
"Documentation": "문서",
|
||||
"Download Rate": "다운로드 속도",
|
||||
"Downloaded": "다운로드됨",
|
||||
"Downloading": "다운로드중",
|
||||
"Edit": "편집",
|
||||
"Edit Device": "노드 편집",
|
||||
"Edit Folder": "노드 편집",
|
||||
"Editing": "편집중",
|
||||
"Enable UPnP": "UPnP 활성화",
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "여러 개의 \"ip:port\"주소는 쉼표를 이용하여 분리 입력하시오. \"dynamic\" 입력 시, 자동으로 발견됨",
|
||||
"Enter ignore patterns, one per line.": "한줄에 한개식 패턴 무시 입력합니당",
|
||||
"Error": "에러",
|
||||
"External File Versioning": "외부 파일 버젼 관리.",
|
||||
"File Versioning": "파일 버젼관리",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "변화를 주고 받을때 파일권한 무시합니다. 주로 FAT 에 적용되기도합니다.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "변경 / 동기화에 의해 삭제시엔 버젼폴더에 날짜 스템프 식으로 이동합니당.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "파일은 다른장치로부터 보호되지만 내가 변경되는것은 다른 장치에 영향을 줘요.내가 삭제되면 다른놈도 삭제.다른놈이 삭제되면 내가 강제로 동기화해요.",
|
||||
"Folder ID": "폴더 아이디",
|
||||
"Folder Master": "읽기전용[내장치]",
|
||||
"Folder Path": "폴더 경로",
|
||||
"Folders": "폴더들",
|
||||
"GUI Authentication Password": "로그인 비밀번호",
|
||||
"GUI Authentication User": "로그인 사용자",
|
||||
"GUI Listen Addresses": "접속대기 주소",
|
||||
"Generate": "생성",
|
||||
"Global Discovery": "전역 노드 검색",
|
||||
"Global Discovery Server": "전역 노드 검색 서버",
|
||||
"Global State": "전역 중계서버 상태",
|
||||
"Ignore": "보류",
|
||||
"Ignore Patterns": "보류 유형",
|
||||
"Ignore Permissions": "권한 무시",
|
||||
"Incoming Rate Limit (KiB/s)": "전송 대역폭 제한(KB/S)",
|
||||
"Introducer": "유도",
|
||||
"Inversion of the given condition (i.e. do not exclude)": "주어진 조건의 반대(전혀 배제하지 않음)",
|
||||
"Keep Versions": "보관 파일 수량",
|
||||
"Last File Received": "마지막 수신된 파일",
|
||||
"Last seen": "마지막 접속일",
|
||||
"Later": "나중에",
|
||||
"Local Discovery": "로컬 노드 검색",
|
||||
"Local State": "로컬 현황",
|
||||
"Maximum Age": "최대 보존 기간",
|
||||
"Metadata Only": "메타 데이타 만.",
|
||||
"Move to top of queue": "상단으로 이 큐를 이동해요.",
|
||||
"Multi level wildcard (matches multiple directory levels)": "다중 레벨 와일드 카드 (여러 폴더 레벨과 일치해요)",
|
||||
"Never": "사용안함",
|
||||
"New Device": "새로운 디바이스",
|
||||
"New Folder": "새 폴더",
|
||||
"No": "아니오",
|
||||
"No File Versioning": "파일 버젼관리 안함",
|
||||
"Notice": "공지",
|
||||
"OK": "확인",
|
||||
"Off": "꺼짐",
|
||||
"Out Of Sync": "동기화 되지 않음",
|
||||
"Out of Sync Items": "동기화 아이템이 끝났어요.",
|
||||
"Outgoing Rate Limit (KiB/s)": "업로드 속도 제한(KiB/s)",
|
||||
"Override Changes": "덮어쓰기",
|
||||
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "로컬컴퓨터 폴더 경로. 존재하지 않을 시, 자동생성됩니다. 문자 (~)는 - 대한 바로 가기로 사용할 수 있습니다",
|
||||
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "버전 보관 경로(빈칸 작성시, 기본 .stversions 폴더로 지정).",
|
||||
"Please wait": "기다려주십시오",
|
||||
"Preview": "미리보기",
|
||||
"Preview Usage Report": "사용 보고서 미리보기",
|
||||
"Quick guide to supported patterns": "지원 패턴에 대한 안내",
|
||||
"RAM Utilization": "메모리사용량",
|
||||
"Rescan": "재 탐색",
|
||||
"Rescan All": "전체 재탐색",
|
||||
"Rescan Interval": "재 탐색 시간 간격[초]",
|
||||
"Restart": "재시작",
|
||||
"Restart Needed": "재시작 필요함",
|
||||
"Restarting": "재시작 중",
|
||||
"Reused": "재개",
|
||||
"Save": "저장",
|
||||
"Scanning": "탐색중",
|
||||
"Select the devices to share this folder with.": "장치와 공유할 폴더를 선택해유.",
|
||||
"Select the folders to share with this device.": "이 장치와 공유할 폴더를 선택합니다.",
|
||||
"Settings": "설정",
|
||||
"Share": "공유",
|
||||
"Share Folder": "폴더 공유",
|
||||
"Share Folders With Device": "접속노드에 폴더공유",
|
||||
"Share With Devices": "공유 할 노드",
|
||||
"Share this folder?": "폴더를 공유하시겠습니까?",
|
||||
"Shared With": "~와 공유",
|
||||
"Short identifier for the folder. Must be the same on all cluster devices.": "폴더에 대한 식별자 아이디 입니다. 공유하는 장치와 같은 아이디이어야 합니다.",
|
||||
"Show ID": "내 장치 아이디 ",
|
||||
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "장치에 대한 아이디로 표시됩니다. 옵션에 얻은 기본이름으로 다른장치에 통보합니다.",
|
||||
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "아이디가 비어있는 경우 기본 값으로 다른 장치에 업데이트 됩니다.",
|
||||
"Shutdown": "종료",
|
||||
"Shutdown Complete": "종료 완료",
|
||||
"Simple File Versioning": "간단한 파일 버젼관리",
|
||||
"Single level wildcard (matches within a directory only)": "단일 레벨 와일드카드(폴더와 일치하는경우)",
|
||||
"Source Code": "소스 코드",
|
||||
"Staggered File Versioning": "시차가 다른 파일 버젼관리",
|
||||
"Start Browser": "시작 브라우저",
|
||||
"Stopped": "동기화 중지됨",
|
||||
"Support": "지원",
|
||||
"Sync Protocol Listen Addresses": "동기화 수신 주소",
|
||||
"Syncing": "동기화중",
|
||||
"Syncthing has been shut down.": "동기화가 종료되었습니다",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing은 다음 소프트웨어에 포함됩니다.",
|
||||
"Syncthing is restarting.": "동기화 재시작 중",
|
||||
"Syncthing is upgrading.": "동기화 업데이트 중",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "동기화가 중지되었습니다. 인터넷/랜 이 활성화 될때까지 재시도합니다.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "동기화중 문제가 발생했습니다. 페이지를 새로 고쳐보거나 그래도 안되면 동기화도구를 다시 시작하세요.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "집계 /통계에서 사용할수 있는 {{URL}}",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "구성이 저장되었지만 활성화되지 않았습니다. 활성화 하려면 다시 시작하세요.",
|
||||
"The device ID cannot be blank.": "장치 아이디는 필수 입력입니다.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "여기에 입력된 장치아이디가 다른 장치에 표시됩니다. 아이디를 확인 편집>아이디보기 해서 해야하며 공백과 -는 무시(선택사항) 입니다.",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "암호화 사용 보고서는 매일 전송할겁니다. 디버그시 감사하게 쓸게요.보고할게 또 생기면 대화상자 다시 보여줄게요.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "입력한 장치아이디가 보이지 않아요. 52글자에서 56글자까지인데 숫자와 점 -으로 구성해야되요.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "첫 인자는 폴더 경로이고 두번째 인자는 폴더의 상대 경로입니다.",
|
||||
"The folder ID cannot be blank.": "폴더 아이디는 빈칸으로 하면 절대 안되용.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "폴더 아이디는 64글자 이하이며 숫자 . 와 - 과 _ 만 허용되요.",
|
||||
"The folder ID must be unique.": "폴더 아이디는 유일해야 합니다",
|
||||
"The folder path cannot be blank.": "이 폴더 경로는 비어있지 않습니다",
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "다음과 같은 간격이 유지됩니다. 첫번째 버젼은 매일 유지/30일 유지/30초마다 유지 최대 버젼을 모두 유지입니다.",
|
||||
"The maximum age must be a number and cannot be blank.": "최대값은 숫자이어야 합니다. 빈값은 허용하지 않습니다.",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "버젼을 유지하는 최대 시간 (일단위 / 버젼을 영원히 유지하려면 0을 입력하세요)",
|
||||
"The number of old versions to keep, per file.": "이전 버젼의 수[보관] 파일단위.",
|
||||
"The number of versions must be a number and cannot be blank.": "버젼의 수는 반드시 있어야 합니다.",
|
||||
"The path cannot be blank.": "경로명이 비어있습니다. 입력하세요.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "재검색 하는 간격입니다.(초단위) 이며 양수로 입력해야합니다.",
|
||||
"Unknown": "알수 없음",
|
||||
"Unshared": "공유되지 않음",
|
||||
"Unused": "사용되지 않음",
|
||||
"Up to Date": "최신 데이터",
|
||||
"Upgrade To {%version%}": "최신 버젼 업데이트",
|
||||
"Upgrading": "업데이트중",
|
||||
"Upload Rate": "전송비율",
|
||||
"Use HTTPS for GUI": "HTTPS 형식 폼 사용",
|
||||
"Version": "버젼",
|
||||
"Versions Path": "파일 버젼 저장 경로",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "보관 기간중에 오래된 파일은 삭제됩니다. [지정한 수가 넘친다면]",
|
||||
"When adding a new device, keep in mind that this device must be added on the other side too.": "새 장치를 추가할때 다른측면에서 추가된다하네요.",
|
||||
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "새 폴더를 추가시 폴더 아이디는 장치와 폴더를 묶어 공유가 가능합니다. 대소문자를 구분하며 공유장치와 아이디는 일치해야합니다.틀리면 안된다네요.",
|
||||
"Yes": "네",
|
||||
"You must keep at least one version.": "최소 한개의 버젼을 유지해야한다",
|
||||
"full documentation": "전체 문서",
|
||||
"items": "아이템",
|
||||
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} 에서 \"{{folder}}\" 폴더 추가들어왔어요."
|
||||
}
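Note on the translated values above: the keys use {%name%} placeholders and each value must reproduce them in {{name}} form; a dropped or re-cased placeholder (for example {{URL}} instead of {{url}}, or a missing {{version}}) silently breaks the GUI's interpolation. Below is a minimal sketch of a consistency check for one language file. It is not part of this changeset; the file path and the {%name%} → {{name}} convention are assumptions inferred from the strings in this diff.

// placeholdercheck.go — hypothetical helper, not part of this changeset.
// It loads one lang-xx.json file and reports key placeholders {%name%}
// whose {{name}} counterpart is missing from the translated value.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches the {%name%} placeholders used in the English keys.
var placeholder = regexp.MustCompile(`\{%([A-Za-z0-9]+)%\}`)

func main() {
	// Assumed path; point this at whichever language file you are reviewing.
	raw, err := os.ReadFile("gui/assets/lang/lang-ko-KR.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	entries := map[string]string{}
	if err := json.Unmarshal(raw, &entries); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for key, val := range entries {
		for _, m := range placeholder.FindAllStringSubmatch(key, -1) {
			// The value is expected to carry the same placeholder as {{name}},
			// so a missing or re-cased {{name}} flags a broken translation.
			if want := "{{" + m[1] + "}}"; !strings.Contains(val, want) {
				fmt.Printf("%q: translation is missing %s\n", key, want)
			}
		}
	}
}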
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adresai",
|
||||
"All Data": "Visiems duomenims",
|
||||
"Allow Anonymous Usage Reporting?": "Siųsti anonimišką vartojimo ataskaitą?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anoniminė vartojimo ataskaita",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Visi supažindintojo įrenginiai bus pridėti prie jūsų įrenginių sąrašo.",
|
||||
"Automatic upgrades": "Automatiniai atnaujinimai",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "Procesoriaus panaudojimas",
|
||||
"Changelog": "Pasikeitimai",
|
||||
"Close": "Uždaryti",
|
||||
"Command": "Command",
|
||||
"Comment, when used at the start of a line": "Komentaras naudojamas naujoje eilutėje",
|
||||
"Compression": "Kompresija",
|
||||
"Compression is recommended in most setups.": "Daugumoje atvejų spaudimas rekomenduojamas.",
|
||||
"Connection Error": "Susijungimo klaida",
|
||||
"Copied from elsewhere": "Nukopijuota iš betkur",
|
||||
"Copied from original": "Nukopijuota iš originalo",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Visos teisės saugomos © 2014 Jakob Borg ir šių bendraautorių:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Trinti",
|
||||
"Device ID": "Įrenginio ID",
|
||||
"Device Identification": "Įrenginio identifikacija",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Įveskite dvitaškiu atskirtą \"ip:port\" adresą arba žodį \"dynamic\" norėdami gauti adresą automatiškai",
|
||||
"Enter ignore patterns, one per line.": "Suveskite nepaisomus šablonus, kiekvieną naujoje eilutėje.",
|
||||
"Error": "Klaida",
|
||||
"External File Versioning": "External File Versioning",
|
||||
"File Versioning": "Versijų valdymas",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Nekreipti dėmesio į failų naudojimosi leidimus.\nTaikyti FAT sistemose.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Syncthing programa talpina senesnes versijas .stversions aplanke.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Failai apsaugoti nuo pakeitimų atliktų kituose įrenginiuose, bet pakeitimai šiame įrenginyje bus nusiųsti kitiems.",
|
||||
"Folder ID": "Aplanko ID",
|
||||
"Folder Master": "Aplanko vadovas",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Visuotinis matomumas",
|
||||
"Global Discovery Server": "Visuotinio matomumo serveris",
|
||||
"Global State": "Visuotinė būsena",
|
||||
"Idle": "Laisvas",
|
||||
"Ignore": "Ignoruoti",
|
||||
"Ignore Patterns": "Nepaisyti šablonų",
|
||||
"Ignore Permissions": "Nepaisyti failų prieigos leidimų",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Apversti sąlygas (pvz.: nenustoti naudoti)",
|
||||
"Keep Versions": "Saugojamų versijų kiekis",
|
||||
"Last File Received": "Paskutinis priimtas failas",
|
||||
"Last File Synced": "Paskutinis gautas failas",
|
||||
"Last seen": "Paskutinį kartą matytas",
|
||||
"Later": "Vėliau",
|
||||
"Latest Release": "Paskutinė versija",
|
||||
"Local Discovery": "Vietinis matomumas",
|
||||
"Local State": "Vietinė būsena",
|
||||
"Maximum Age": "Maksimalus amžius",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Įspėjimas",
|
||||
"OK": "Gerai",
|
||||
"Off": "Netaikoma",
|
||||
"Offline": "Atsijungęs",
|
||||
"Online": "Prisijungęs",
|
||||
"Out Of Sync": "Nesutikrinta",
|
||||
"Out of Sync Items": "Nesutikrinta",
|
||||
"Outgoing Rate Limit (KiB/s)": "Išeinančio srauto maksimalus greitis (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Paleisti naršyklę",
|
||||
"Stopped": "Sustabdyta",
|
||||
"Support": "Pagalba",
|
||||
"Support / Forum": "Palaikymas / Diskusijos",
|
||||
"Sync Protocol Listen Addresses": "Sutapatinimo taisyklių adresas",
|
||||
"Synchronization": "Sutapatinimas",
|
||||
"Syncing": "Sutapatinama",
|
||||
"Syncthing has been shut down.": "Syncthing išjungtas",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing naudoja šias programas ar jų dalis:",
|
||||
"Syncthing is restarting.": "Syncthing perleidžiamas",
|
||||
"Syncthing is upgrading.": "Syncthing atsinaujina.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing išjungta arba problemos su Interneto ryšių. Bandoma iš naujo...",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing turi problemų su jūsų užklausa. Perkraukite puslapį arba perkraukite Syncthing jei problema nedings.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Naudojimosi ataskaitą galite peržiūrėti adresu: {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Nauji nustatymai išsaugoti, bet neaktyvuoti. Perleiskite Syncthing programą iš naujo norėdami įgalinti naujus nustatymus.",
|
||||
"The device ID cannot be blank.": "Įrenginio ID negali būti tuščias.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Įrenginio ID, kurį čia reikia įvesti, gali būti rastas „Redaguoti > Rodyti ID“ dialoge kitame įrenginyje. Tarpai ir brūkšneliai nebūtini (ignoruojami).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Kas dieną siunčiama šifruota naudojimo ataskaita. Ji naudojama sekti, kokios platformos naudojamos, aplankų dydžius ir programų versijas. Jei siunčiamų duomenų turinys pasikeis, šis dialogas bus parodytas iš naujo.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Įvestas neteisingas įrenginio ID. Turi būti 52 ar 56 simbolių eilutė su raidėmis ir skaičiais kuriuos galima atskirti tarpu arba brūkšneliu.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "Aplanko ID negali būti tuščias.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Aplanko vardas negali būti ilgesnis nei 64 simboliai. Galima naudoti tik raides ir skaičius bet tašką (.), brūkšnelį (-) ir pabraukimą (_).",
|
||||
"The folder ID must be unique.": "Aplanko ID turi būti unikalus.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maksimalus laikas kurį bus saugojama versija (dienomis, nustatykite 0 norėdami saugoti amžinai).",
|
||||
"The number of old versions to keep, per file.": "Kiek failo versijų saugoti.",
|
||||
"The number of versions must be a number and cannot be blank.": "Versijų skaičius turi būti skaitmuo ir negali būti tuščias laukelis.",
|
||||
"The path cannot be blank.": "The path cannot be blank.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Nuskaitymo dažnis negali būti neigiamas skaičius.",
|
||||
"The rescan interval must be at least 5 seconds.": "Nuskaityti galima nedažniau nei kas 5 sekundes.",
|
||||
"Unknown": "Nežinoma",
|
||||
"Unshared": "Nesidalinama",
|
||||
"Unused": "Nenaudojamas",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Atnaujinti į {{version}}",
|
||||
"Upgrading": "Atnaujinama",
|
||||
"Upload Rate": "Išsiuntimo greitis",
|
||||
"Use Compression": "Naudoti suspaudimą",
|
||||
"Use HTTPS for GUI": "Valdymo skydeliui naudoti saugų ryšį ",
|
||||
"Version": "Versija",
|
||||
"Versions Path": "Kelias iki versijos",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adresser",
|
||||
"All Data": "Alle data",
|
||||
"Allow Anonymous Usage Reporting?": "Tillat Anonym Innsamling Av Brukerdata?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anonym Innsamling Av Brukerdata",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Enheter konfigurert på en introduksjonsenhet vil også bli lagt til denne enheten.",
|
||||
"Automatic upgrades": "Automatiske oppdateringer",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "CPU-utnyttelse",
|
||||
"Changelog": "Endringslog",
|
||||
"Close": "Lukk",
|
||||
"Command": "Kommando",
|
||||
"Comment, when used at the start of a line": "Kommentar, når det blir brukt i starten av en linje.",
|
||||
"Compression": "Komprimering",
|
||||
"Compression is recommended in most setups.": "Komprimering er anbefalt i de fleste tilfeller.",
|
||||
"Connection Error": "Tilkoblingsfeil",
|
||||
"Copied from elsewhere": "Kopiert fra et annet sted",
|
||||
"Copied from original": "Kopiert fra original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg og følgende Bidragsytere:",
|
||||
"Copyright © 2015 the following Contributors:": "Kopirett © 2015 de følgende bidragsytere:",
|
||||
"Delete": "Slett",
|
||||
"Device ID": "Enhet ID",
|
||||
"Device Identification": "Enhetskjennemerke",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": " Skriv inn kommaseparerte \"ip:port\"-adresser, eller \"dynamic\" for å automatisk finne adressen.",
|
||||
"Enter ignore patterns, one per line.": "Skriv inn mønster som skal utelates, ett per linje.",
|
||||
"Error": "Feilmelding",
|
||||
"External File Versioning": "Ekstern versjonskontroll",
|
||||
"File Versioning": "Versjonskontroll",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Ignorer bit som styrer filtilgang når en ser etter endringer. Bruk på FAT-filsystem.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Filer blir flyttet til tidsstemplede versjoner i en .stversions-mappe når de blir erstattet eller slettet av syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Filer er beskyttet mot endringer som er gjort på andre enheter, men endringer som er gjort på denne enheten blir sendt til resten av gruppen.",
|
||||
"Folder ID": "Mappe ID",
|
||||
"Folder Master": "Styrende Mappe",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Global Søking",
|
||||
"Global Discovery Server": "Global Søkemotor",
|
||||
"Global State": "Global Tilstand",
|
||||
"Idle": "Pause",
|
||||
"Ignore": "Ignorer",
|
||||
"Ignore Patterns": "Utelatelsesmønster",
|
||||
"Ignore Permissions": "Ignorer Tilgangsbit",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Invers av den gitte tilstanden (t.d. ikke ekskluder)",
|
||||
"Keep Versions": "Behold Versjoner",
|
||||
"Last File Received": "Sist Mottatte Fil",
|
||||
"Last File Synced": "Sist Synkroniserte Fil",
|
||||
"Last seen": "Sist sett",
|
||||
"Later": "Senere",
|
||||
"Latest Release": "Nyeste Versjon",
|
||||
"Local Discovery": "Lokal Søking",
|
||||
"Local State": "Lokal Tilstand",
|
||||
"Maximum Age": "Maksimal Levetid",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Merknad",
|
||||
"OK": "OK",
|
||||
"Off": "Av",
|
||||
"Offline": "Frakobla",
|
||||
"Online": "Tilkobla",
|
||||
"Out Of Sync": "Ikke Synkronisert",
|
||||
"Out of Sync Items": "Ikke Synkroniserte Element",
|
||||
"Outgoing Rate Limit (KiB/s)": "Utgående Hastighetsbegrensning (KiB/s)",
|
||||
@@ -128,9 +125,7 @@
|
||||
"Start Browser": "Start Nettleser",
|
||||
"Stopped": "Stoppa",
|
||||
"Support": "Brukerstøtte",
|
||||
"Support / Forum": "Brukerstøtte / Forum",
|
||||
"Sync Protocol Listen Addresses": "Lytteadresse For Synkroniseringsprotokoll",
|
||||
"Synchronization": "Synkronisering",
|
||||
"Syncing": "Synkroniserer",
|
||||
"Syncthing has been shut down.": "Syncthing har blitt slått av.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing inkluderer helt eller delvis følgende programvare:",
|
||||
@@ -144,6 +139,7 @@
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhet-IDen lagt til her kan hentes fram via \"Rediger > Vis ID\"-dialogboksen på den andre enheten. Mellomrom og bindestrek er valgfritt (blir ignorert).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Kryptert informasjon om bruken av programmet blir gjort daglig. Dette blir brukt til å følge med på vanlig brukte systemoppsett, størrelser på mapper, og versjoner av programmet. Om datasettet endrer seg vil denne dialogboksen dukke opp og du vil bli bedt om å godkjenne dette.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "IDen for denne enheten er ikke godkjent. Det bør være 52 eller 56 tegn bestående av bokstaver og tall, valgfritt med mellomrom og bindestrek.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "The first command line parameter is the folder path and the second parameter is the relative path in the folder.",
|
||||
"The folder ID cannot be blank.": "Mappe-ID kan ikke være tom.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Mappe-IDen må være et kort kjennemerke (64 tegn eller mindre) bestående kun av bokstaver, tall og tegnene punktum (.), mellomrom (-) og strek (_).",
|
||||
"The folder ID must be unique.": "Mappe-ID må være unik.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maksimal tid å beholde en versjon (i dager, sett til 0 for å beholde versjoner på ubegrenset tid).",
|
||||
"The number of old versions to keep, per file.": "Antall gamle versjoner å beholde, per fil.",
|
||||
"The number of versions must be a number and cannot be blank.": "Antall versjoner må være et tall og kan ikke være tomt.",
|
||||
"The path cannot be blank.": "Plasseringen kan ikke være tom.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Antall sekund i skanneintervallet kan ikke være negativt.",
|
||||
"The rescan interval must be at least 5 seconds.": "Skanneintervallet må være minst 5 sekund.",
|
||||
"Unknown": "Ukjent",
|
||||
"Unshared": "Ikke delt",
|
||||
"Unused": "Ikke i bruk",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Oppgrader Til {{version}}",
|
||||
"Upgrading": "Oppgraderer",
|
||||
"Upload Rate": "Opplastingsrate",
|
||||
"Use Compression": "Bruk Komprimering",
|
||||
"Use HTTPS for GUI": "Bruk HTTPS for GUI",
|
||||
"Version": "Versjon",
|
||||
"Versions Path": "Plassering Av Versjoner",
|
||||
|
||||
@@ -9,6 +9,7 @@
|
||||
"Addresses": "Adressen",
|
||||
"All Data": "Alle Gegevens",
|
||||
"Allow Anonymous Usage Reporting?": "Bijhouden van anonieme gebruikers statistieken toestaan?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "Versiebeheer gebeurt door een extern commando. Het moet het bestand verwijderen van de gesynchroniseerde map.",
|
||||
"Anonymous Usage Reporting": "Bijhouden anonieme gebruikers statistieken",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Toestellen geconfigureerd op een introductie toestel zullen ook aan dit toestel worden toegevoegd.",
|
||||
"Automatic upgrades": "Automatisch bijwerken",
|
||||
@@ -16,13 +17,13 @@
|
||||
"CPU Utilization": "CPU Gebruik",
|
||||
"Changelog": "Logboek",
|
||||
"Close": "Sluiten",
|
||||
"Command": "Commando",
|
||||
"Comment, when used at the start of a line": "Commentaar, indien gebruikt aan het begin van de lijn",
|
||||
"Compression": "Compressie",
|
||||
"Compression is recommended in most setups.": "Gegevenscompressie is aan te raden in de meeste situaties.",
|
||||
"Connection Error": "Verbindingsfout",
|
||||
"Copied from elsewhere": "Van elders gekopieerd",
|
||||
"Copied from original": "Gekopieerd van het origineel",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg en de onderstaande bijdragers:",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 de volgende Bijdragers:",
|
||||
"Delete": "Verwijderen",
|
||||
"Device ID": "Apparaat ID",
|
||||
"Device Identification": "Apparaat identificatie",
|
||||
@@ -42,9 +43,10 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Geef, gescheiden door komma's, \"ip:port\" adressen of \"dynamic\" voor het automatische vinden van de addressen.",
|
||||
"Enter ignore patterns, one per line.": "Geef te negeren patronen, één per regel.",
|
||||
"Error": "Fout",
|
||||
"External File Versioning": "Extern Bestandsversiebeheer",
|
||||
"File Versioning": "Versiebeheer",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Bestands permissiebits worden genegeerd wanneer naar veranderingen wordt gekeken. Gebruik dit op FAT bestandsystemen",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Bestanden worden naar de .stversions map verplaatst met een tijdsaanduiding, wanneer ze aangepast of verwijderd worden door syncthing.",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Toegangsrechten voor bestanden worden genegeerd bij het zoeken naar wijzigingen. Gebruik voor FAT bestandssystemen.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Bestanden zijn beschermt tegen aanpassingen gemaakt door andere apparaten maar aanpassingen op dit apparaat worden doorgestuurd naar de rest van de cluster.",
|
||||
"Folder ID": "Folder ID",
|
||||
"Folder Master": "Hoofdfolder",
|
||||
@@ -57,7 +59,6 @@
|
||||
"Global Discovery": "Globaal zoeken",
|
||||
"Global Discovery Server": "Globale zoekserver",
|
||||
"Global State": "Globale status",
|
||||
"Idle": "Inactief",
|
||||
"Ignore": "Negeren",
|
||||
"Ignore Patterns": "Te negeren patronen",
|
||||
"Ignore Permissions": "Rechten negeren",
|
||||
@@ -66,10 +67,8 @@
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Inversie van de gegeven voorwaarde (bv. niet uitsluiten)",
|
||||
"Keep Versions": "Versies behouden",
|
||||
"Last File Received": "Laatste ontvangen bestand",
|
||||
"Last File Synced": "Laatste Gesynchroniseerde Bestand",
|
||||
"Last seen": "Laatst gezien op",
|
||||
"Later": "Later",
|
||||
"Latest Release": "Laatste uitgave",
|
||||
"Local Discovery": "Lokaal zoeken",
|
||||
"Local State": "Lokale status",
|
||||
"Maximum Age": "Maximum leeftijd",
|
||||
@@ -84,8 +83,6 @@
|
||||
"Notice": "Notificatie",
|
||||
"OK": "OK",
|
||||
"Off": "Uit",
|
||||
"Offline": "Offline",
|
||||
"Online": "Online",
|
||||
"Out Of Sync": "Niet gesynchroniseerd",
|
||||
"Out of Sync Items": "Niet gesynchroniseerde objecten",
|
||||
"Outgoing Rate Limit (KiB/s)": "Uitgaande snelheidslimiet (KiB/s)",
|
||||
@@ -128,22 +125,21 @@
|
||||
"Start Browser": "Start browser",
|
||||
"Stopped": "Gestopt",
|
||||
"Support": "Support",
|
||||
"Support / Forum": "Support / Forum",
|
||||
"Sync Protocol Listen Addresses": "Synchronisatie protocol luister adres",
|
||||
"Synchronization": "Synchronisatie",
|
||||
"Syncing": "Aan het synchroniseren",
|
||||
"Syncthing has been shut down.": "Syncthing is afgesloten",
|
||||
"Syncthing includes the following software or portions thereof:": "De volgende software of delen daarvan zijn onderdeel van syncthing:",
|
||||
"Syncthing is restarting.": "Syncthing is aan het herstarten.",
|
||||
"Syncthing is upgrading.": "Syncthing is aan het upgraden.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing lijkt afgesloten te zijn, of er is een verbindingsprobleem met het internet. Nieuwe poging....",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing heeft problemen bij het verwerken van je vraag. Gelieve de pagina opnieuw te laden of Syncthing te herstarten wanneer het probleem blijft opduiken.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing heeft een probleem met het verwerken van je verzoek. Gelieve de pagina te vernieuwen of Syncthing te herstarten als het probleem zich blijft voordoen.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "The verzamelde statistieken zijn publiek beschikbaar op {{url}}",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "De configuratie is opslagen maar nog niet actief. Syncthing moet opnieuw opgestart worden om de nieuwe configuratie te activeren.",
|
||||
"The device ID cannot be blank.": "Het toestel ID mag niet leeg zijn.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Het verwachte toestel ID kan teruggevonden worden in het \"Aanpassen > Toon ID\" scherm op het andere toestel. Spaties en streepjes zijn facultatief (worden genegeerd).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Het versleutelde gebruiksrapport wordt dagelijks opgestuurd en wordt gebruikt om de verschillende platformen, folder groottes en versies op te volgen. Als de reeks gegevens wijzigt zal opnieuw toestemming gevraagd worden.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Dit toestel ID lijkt ongeldig. Het toestel ID bestaat uit 52 of 56 letters en nummers met facultatieve spaties en streepjes.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "De eerste parameter is het pad naar de map en de tweede parameter is het relatieve pad binnenin de map.",
|
||||
"The folder ID cannot be blank.": "De folder ID mag niet leeg zijn.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "De folder ID mag maximaal 64 tekens lang zijn en bestaat enkel uit letters, nummers, punten (.), streepjes (-) en onderstrepingstekens (_).",
|
||||
"The folder ID must be unique.": "De folder ID moet uniek zijn.",
|
||||
@@ -153,8 +149,8 @@
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "De maximale tijdsduur om een versie te bewaren (in dagen, gebruik 0 om versies voor altijd te bewaren).",
|
||||
"The number of old versions to keep, per file.": "Het aantal versies dat bewaard moet worden per file.",
|
||||
"The number of versions must be a number and cannot be blank.": "Het aantal nummers moet een getal zijn en mag niet leeg blijven.",
|
||||
"The path cannot be blank.": "U dient een locatie in te voeren.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "De scanfrequentie moet een positief getal in seconden zijn.",
|
||||
"The rescan interval must be at least 5 seconds.": "De scanfrequentie moet minimaal 5 seconden zijn.",
|
||||
"Unknown": "Onbekend",
|
||||
"Unshared": "Niet gedeeld",
|
||||
"Unused": "Ongebruikt",
|
||||
@@ -162,7 +158,6 @@
|
||||
"Upgrade To {%version%}": "Upgrade naar {{version}}",
|
||||
"Upgrading": "Bezig met upgrade",
|
||||
"Upload Rate": "Upload snelheid",
|
||||
"Use Compression": "Compressie gebruiken",
|
||||
"Use HTTPS for GUI": "Gebruik HTTPS voor de GUI",
|
||||
"Version": "Versie",
|
||||
"Versions Path": "Locatie versies",
|
||||
|
||||
@@ -4,34 +4,35 @@
|
||||
"Add": "Legg til",
|
||||
"Add Device": "Legg Til Eining",
|
||||
"Add Folder": "Legg Til Mappe",
|
||||
"Add new folder?": "Legg til ny mappe?",
|
||||
"Add new folder?": "Leggja til ny mappe?",
|
||||
"Address": "Adresse",
|
||||
"Addresses": "Adresser",
|
||||
"All Data": "All Data",
|
||||
"Allow Anonymous Usage Reporting?": "Tillat Anonym Innsamling Av Brukardata?",
|
||||
"Anonymous Usage Reporting": "Tillat Anonym Innsamling Av Brukardata",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Einingar konfigurert på ein introduksjonseining vil òg verte lagt til denne eininga.",
|
||||
"All Data": "Alle dataa",
|
||||
"Allow Anonymous Usage Reporting?": "Tillata anonymisert bruksrapportering?",
|
||||
"An external command handles the versioning. It has to remove the file from the synced folder.": "An external command handles the versioning. It has to remove the file from the synced folder.",
|
||||
"Anonymous Usage Reporting": "Anonymisert bruksrapportering",
|
||||
"Any devices configured on an introducer device will be added to this device as well.": "Einingar konfigurert på ei introduksjonseining vil òg verta lagt til denne eininga.",
|
||||
"Automatic upgrades": "Automatiske oppdateringar",
|
||||
"Bugs": "Programfeil",
|
||||
"CPU Utilization": "CPU-utnytting",
|
||||
"Changelog": "Endringslog",
|
||||
"Changelog": "Endringslogg",
|
||||
"Close": "Lukk",
|
||||
"Comment, when used at the start of a line": "Kommentar, når det vert brukt i starten av ei linje.",
|
||||
"Compression": "Compression",
|
||||
"Compression is recommended in most setups.": "Komprimering er tilrådd i dei fleste høve.",
|
||||
"Command": "Kommando",
|
||||
"Comment, when used at the start of a line": "Kommentar, når brukt i starten av linja",
|
||||
"Compression": "Komprimering",
|
||||
"Connection Error": "Tilkoplingsfeil",
|
||||
"Copied from elsewhere": "Kopiert frå ein annan stad",
|
||||
"Copied from original": "Kopiert frå original",
|
||||
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg og følgjande Bidragsytarar:",
|
||||
"Copied from original": "Kopiert frå originalen",
|
||||
"Copyright © 2015 the following Contributors:": "Copyright © 2015 the following Contributors:",
|
||||
"Delete": "Slett",
|
||||
"Device ID": "Eining ID",
|
||||
"Device Identification": "Einingskjennemerke",
|
||||
"Device Name": "Namn På Eining",
|
||||
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Eining {{device}} ({{address}}) vil kople seg til. Legg til eining?",
|
||||
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Eininga {{device}} ({{address}}) vil kopla seg til. Vil du leggja ho til?",
|
||||
"Devices": "Einingar",
|
||||
"Disconnected": "Fråkopla",
|
||||
"Documentation": "Dokumentasjon",
|
||||
"Download Rate": "Nedlastingsrate",
|
||||
"Download Rate": "Nedlastingsfart",
|
||||
"Downloaded": "Lasta ned",
|
||||
"Downloading": "Lastar ned",
|
||||
"Edit": "Rediger",
|
||||
@@ -42,10 +43,11 @@
|
||||
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Skriv inn \"ip:port\"-adresser med komma mellom kvar adresse, eller \"dynamic\" for å automatisk søkja opp adressa.",
|
||||
"Enter ignore patterns, one per line.": "Skriv inn mønster som skal utelatast, eitt per linje.",
|
||||
"Error": "Feilmelding",
|
||||
"File Versioning": "Versjonskontroll",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Ignorer bit som styrer filtilgang når ein ser etter endringar. Bruk på FAT-filsystem.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Filer vert flytta til tidsstempla versjonar i ei .stversions-mappe når dei vert erstatta eller sletta av syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Filer er beskytta mot endringar gjort på andre einingar, men endringar gjort på denne eininga vert sendt til resten av gruppa.",
|
||||
"External File Versioning": "Ekstern filutgåvehandtering",
|
||||
"File Versioning": "Filutgåvekontroll",
|
||||
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "File permission bits are ignored when looking for changes. Use on FAT file systems.",
|
||||
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.",
|
||||
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Filer er beskytta mot endringar gjort på andre einingar, men endringar gjort på denne eininga vert sende til resten av klyngja.",
|
||||
"Folder ID": "Mappe ID",
|
||||
"Folder Master": "Styrande Mappe",
|
||||
"Folder Path": "Mappeplassering",
|
||||
@@ -54,51 +56,46 @@
|
||||
"GUI Authentication User": "GUI Brukar",
|
||||
"GUI Listen Addresses": "GUI Lytteadresse",
|
||||
"Generate": "Generer",
|
||||
"Global Discovery": "Global Søking",
|
||||
"Global Discovery Server": "Global Søkjemotor",
|
||||
"Global Discovery": "Global søking",
|
||||
"Global Discovery Server": "Global søkjetenar",
|
||||
"Global State": "Global Tilstand",
|
||||
"Idle": "Pause",
|
||||
"Ignore": "Ignorer",
|
||||
"Ignore Patterns": "Utelatingsmønster",
|
||||
"Ignore Permissions": "Ignorer Tilgangsbit",
|
||||
"Incoming Rate Limit (KiB/s)": "Innkomande Hastigheitsgrense (KiB/s)",
|
||||
"Ignore Permissions": "Ignorer tilgangar",
|
||||
"Incoming Rate Limit (KiB/s)": "Innkomande hastigheitsgrense (KiB/s)",
|
||||
"Introducer": "Introduktør",
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Invers av den gitte tilstanden (t.d. ikkje ekskluder)",
|
||||
"Inversion of the given condition (i.e. do not exclude)": "Det motsette av den gitte tilstanden (dvs. ekskluder ikkje)",
|
||||
"Keep Versions": "Behald Versjonar",
|
||||
"Last File Received": "Sist Mottatte Fil",
|
||||
"Last File Synced": "Sist Synkroniserte Fil",
|
||||
"Last File Received": "Siste mottatte fila",
|
||||
"Last seen": "Sist sett",
|
||||
"Later": "Seinare",
|
||||
"Latest Release": "Nyaste Versjon",
|
||||
"Local Discovery": "Lokal Søking",
|
||||
"Local Discovery": "Lokal oppdaging",
|
||||
"Local State": "Lokal Tilstand",
|
||||
"Maximum Age": "Maksimal Levetid",
|
||||
"Metadata Only": "Metadata Only",
|
||||
"Move to top of queue": "Move to top of queue",
|
||||
"Multi level wildcard (matches multiple directory levels)": "Multinivåsøk (søkjer på fleire mappenivå)",
|
||||
"Metadata Only": "Berre metadata",
|
||||
"Move to top of queue": "Flytt øvst i køen",
|
||||
"Multi level wildcard (matches multiple directory levels)": "Fleirnivå-jokerteikn (søkjer på fleire mappenivå)",
|
||||
"Never": "Aldri",
|
||||
"New Device": "Ny Eining",
|
||||
"New Folder": "Ny Mappe",
|
||||
"New Device": "Ny eining",
|
||||
"New Folder": "Ny mappe",
|
||||
"No": "Nei",
|
||||
"No File Versioning": "Ingen Versjonskontroll",
|
||||
"No File Versioning": "Ingen versjonskontroll",
|
||||
"Notice": "Merknad",
|
||||
"OK": "OK",
|
||||
"Off": "Off",
|
||||
"Offline": "Fråkopla",
|
||||
"Online": "Tilkopla",
|
||||
"Out Of Sync": "Ikkje Synkronisert",
|
||||
"Out of Sync Items": "Ikkje Synkroniserte Element",
|
||||
"Outgoing Rate Limit (KiB/s)": "Utgåande Hastigheitsgrense (KiB/s)",
|
||||
"Override Changes": "Overstyr Endringar",
|
||||
"Off": "Av",
|
||||
"Out Of Sync": "Ikkje synkronisert",
|
||||
"Out of Sync Items": "Ikkje-synkroniserte element",
|
||||
"Outgoing Rate Limit (KiB/s)": "Utgåande hastigheitsgrense (KiB/s)",
|
||||
"Override Changes": "Overstyr endringar",
|
||||
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Plasseringa av mappa på datamaskinen. Vert oppretta om ho ikkje finst. Krøllstrekteiknet (~) kan brukast som forkorting for",
|
||||
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Plasseringa for lagra versjonar (la denne vera tom for å bruka standard .stversions-mappa i mappa).",
|
||||
"Please wait": "Ver venleg og vent",
|
||||
"Please wait": "Gjer vel og vent",
|
||||
"Preview": "Førehandsvisning",
|
||||
"Preview Usage Report": "Førehandsvis Datainnsamling",
|
||||
"Preview Usage Report": "Førehandsvis bruksrapporten",
|
||||
"Quick guide to supported patterns": "Kjapp innføring i godkjente mønster",
|
||||
"RAM Utilization": "RAM-utnytting",
|
||||
"RAM Utilization": "Minnebruk",
|
||||
"Rescan": "Skann På Ny",
|
||||
"Rescan All": "Rescan All",
|
||||
"Rescan All": "Skann alle på nytt",
|
||||
"Rescan Interval": "Skanneintervall",
|
||||
"Restart": "Omstart",
|
||||
"Restart Needed": "Omstart Trengs",
|
||||
@@ -110,68 +107,66 @@
|
||||
"Select the folders to share with this device.": "Vel mappene du vil dela med denne eininga.",
|
||||
"Settings": "Innstillingar",
|
||||
"Share": "Del",
|
||||
"Share Folder": "Del Mappe",
|
||||
"Share Folders With Device": "Del Mapper Med Eining",
|
||||
"Share Folder": "Del mappe",
|
||||
"Share Folders With Device": "Del mapper med eininga",
|
||||
"Share With Devices": "Del Med Einingar",
|
||||
"Share this folder?": "Del denne mappa?",
|
||||
"Share this folder?": "Dela denne mappa?",
|
||||
"Shared With": "Delt Med",
|
||||
"Short identifier for the folder. Must be the same on all cluster devices.": "Kort kjennemerke på mappa. Må vera det same på alle einingar i ei gruppe.",
|
||||
"Short identifier for the folder. Must be the same on all cluster devices.": "Kort kjennemerke på mappa. Må vera det same på alle einingane i klyngja.",
|
||||
"Show ID": "Vis ID",
|
||||
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Vis i staden for Eining ID i gruppestatus. Vil verta kringkasta til andre einingar som eit valfritt standardnamn.",
|
||||
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Vist i staden for Mappe-ID i gruppestatus. Vil verta oppdatert til namnet eininga kringkastar dersom tomt.",
|
||||
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Vist i staden for einings-ID-en i klyngjestatusen. Vil verta kringkasta til dei andre einingane som eit valfritt standardnamn.",
|
||||
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Vist i staden for mappe-ID-en i klyngjestatuses. Vil verta oppdatert til namnet eininga kringkastar dersom tomt.",
|
||||
"Shutdown": "Slå Av",
|
||||
"Shutdown Complete": "Shutdown Complete",
|
||||
"Simple File Versioning": "Enkel Versjonskontroll",
|
||||
"Single level wildcard (matches within a directory only)": "Enkeltnivåsøk (søkjer kun i ei mappe)",
|
||||
"Shutdown Complete": "Slått av",
|
||||
"Simple File Versioning": "Enkel versjonskontroll",
|
||||
"Single level wildcard (matches within a directory only)": "Enkeltnivå-jokerteikn (søkjer berre i éi mappe)",
|
||||
"Source Code": "Kildekode",
|
||||
"Staggered File Versioning": "Forskyvd Versjonskontroll",
|
||||
"Staggered File Versioning": "Forskuva versjonskontroll",
|
||||
"Start Browser": "Start Nettlesar",
|
||||
"Stopped": "Stoppa",
|
||||
"Support": "Brukarstøtte",
|
||||
"Support / Forum": "Brukarstøtte / Forum",
|
||||
"Sync Protocol Listen Addresses": "Lytteadresse For Synkroniseringsprotokoll",
|
||||
"Synchronization": "Synkronisering",
|
||||
"Syncing": "Synkroniserer",
|
||||
"Syncthing has been shut down.": "Syncthing har blitt slått av.",
|
||||
"Syncthing includes the following software or portions thereof:": "Syncthing inkluderer føljande programvare heilt eller delvis:",
|
||||
"Syncthing is restarting.": "Syncthing startar på ny.",
|
||||
"Syncthing is upgrading.": "Syncthing oppgraderer.",
|
||||
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing ser ut til å vera nede, eller så er det eit problem med nettilkoplinga di. Prøvar på ny …",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
|
||||
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.",
|
||||
"The aggregated statistics are publicly available at {%url%}.": "Samla statistikk er opent tilgjengeleg på {{url}}.",
|
||||
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Instillingane har blitt lagra men ikkje aktivert. Syncthing må starta på ny for å aktivera dei nye instillingane.",
|
||||
"The device ID cannot be blank.": "Eining ID kan ikkje vera tom.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Eining-IDen lagt til her kan hentast fram via \"Rediger > Vis ID\"-dialogboksen på den andre eininga. Mellomrom og bindestrek er valfritt (vert ignorert).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Kryptert informasjon om bruken av programmet vert gjort dagleg. Dette vert brukt til å følgja med på vanleg brukte systemoppsett, storleikar på mapper, og versjonar av programmet. Om datasettet endrar seg vil denne dialogboksen dukka opp og du vil bli bedt om å godkjenna dette.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "IDen for denne eininga er ikkje godkjend. Det bør vera 52 eller 56 teikn samansett av bokstavar og tal, valfritt med mellomrom og bindestrek.",
|
||||
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Einings-ID-en du skal nytta her finn du i meldingsvindauget \"Rediger > Vis ID\" på den andre eininga. Mellomrom og bindestrek er valfrie (vert ignorert).",
|
||||
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Den krypterte bruksrapporten vert send dagleg. Han vert nytta til å spora vanlege plattformer, mappestorleikar og programutgåvene. Om datasettet endrar seg, vil dette meldingsvindauget dukka opp att og du vil verta beden om å godkjenna det.",
|
||||
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Einings-ID-en er ikkje gyldig. Han må vera på 52 eller 56 teikn og vera samansett av bokstavar og tal med valfrie mellomrom og bindestrekar.",
|
||||
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Den første kommandolinjeparameteren er mappebana og den andre syner den relative bana i mappa.",
|
||||
"The folder ID cannot be blank.": "Mappe ID kan ikkje vera tom.",
|
||||
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Mappe-IDen må vera eit kort kjennemerke (64 teikn eller mindre) samansett kun av bokstavar, tal og teikna punktum (.), mellomrom (-) og strek (_).",
|
||||
"The folder ID must be unique.": "Mappe ID må vera unik.",
|
||||
"The folder path cannot be blank.": "Mappeplasseringa kan ikkje vera tom.",
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Fylgjande intervall vert brukt: den fyrste timen vert ein versjon lagra kvart 30. sekund, den fyrste dagen vert ein versjon lagra kvar time, dei fyrste 30 dagane vert ein versjon lagra kvar dag, og inntil høgaste levetid vert ein versjon lagra kvar uke.",
|
||||
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Desse intervalla vert nytta: den fyrste timen vert ei utgåve lagra kvart 30. sekund, den fyrste dagen vert ei utgåve lagra kvar time, dei fyrste 30 dagane vert ei utgåve lagra kvar dag, og inntil høgaste alderen vert ei utgåve lagra kvar veke.",
|
||||
"The maximum age must be a number and cannot be blank.": "Maksimal levetid må vera eit tal og kan ikkje vera tomt.",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Maksimalt tidsrom å behalda ein versjon (i dagar, set til 0 for å behalda versjonar for ubegrensa tid.)",
|
||||
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Høgaste tidsrom å behalda ei utgåve (i dagar, set til 0 for å behalda versjonane for alltid).",
|
||||
"The number of old versions to keep, per file.": "Tal på gamle versjonar ein skal behalda, per fil.",
|
||||
"The number of versions must be a number and cannot be blank.": "Tal på versjonar må vera eit tal og kan ikkje vera tomt.",
|
||||
"The path cannot be blank.": "Bana kan ikkje vera tom.",
|
||||
"The rescan interval must be a non-negative number of seconds.": "Talet på sekund i skanneintervallet kan ikkje vera negativt.",
|
||||
"The rescan interval must be at least 5 seconds.": "Skanneintervallet må vera minst 5 sekund.",
|
||||
"Unknown": "Ukjent",
|
||||
"Unshared": "Ikkje delt",
|
||||
"Unused": "Ubrukt",
|
||||
"Up to Date": "Oppdatert",
|
||||
"Upgrade To {%version%}": "Oppgrader Til {{version}}",
|
||||
"Upgrading": "Oppgraderer",
|
||||
"Upload Rate": "Opplastingsrate",
|
||||
"Use Compression": "Bruk Komprimering",
|
||||
"Use HTTPS for GUI": "Bruk HTTPS for GUI",
|
||||
"Upload Rate": "Opplastingsfart",
|
||||
"Use HTTPS for GUI": "Bruk HTTPS ved grafisk grensesnitt",
|
||||
"Version": "Versjon",
|
||||
"Versions Path": "Plassering Av Versjonar",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versjonar vert automatisk sletta når maksimal levetid er nådd eller når det tillatte talet på filer er overstige.",
|
||||
"When adding a new device, keep in mind that this device must be added on the other side too.": "Merk at når ein ny eining vert lagt til må denne òg leggast til på andre sida.",
|
||||
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Når ei ny mappe vert lagt til, hugs at Mappe-ID vert brukt til å binda saman mapper mellom einingar. Det er skilnad på store og små bokstavar, så IDane må vera identiske på alle einingane.",
|
||||
"Versions Path": "Utgåvebane",
|
||||
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Utgåver vert automatisk sletta når maksimal levetid er nådd eller når det høgaste tillate talet på filer innan eit intervall vert overskride.",
|
||||
"When adding a new device, keep in mind that this device must be added on the other side too.": "Hugs at når ei ny eining vert lagt til må ho òg leggjast til på andre sida.",
|
||||
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Hugs at når ei ny mappe vert lagt til, vert mappe-ID-en brukt til å binda saman mappene mellom einingane. Det er skilnad på store og små bokstavar, så ID-ane må vera identiske på alle einingane.",
|
||||
"Yes": "Ja",
|
||||
"You must keep at least one version.": "Du må behalda minst ein versjon.",
|
||||
"full documentation": "all dokumentasjon",
|
||||
"items": "element",
|
||||
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vil dela mappa \"{{folder}}\"."
|
||||
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} ønskjer å dela mappa \"{{folder}}\"."
|
||||
}
|
||||