Compare commits


24 Commits

Author SHA1 Message Date
ProactiveServices
c953cdc375 gui: Package attribution and copyright bumps (fixes #3861)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3863
2017-01-10 07:50:11 +00:00
Jakob Borg
c20c17e3c6 gui, man: Update docs & translations 2017-01-10 08:41:14 +01:00
Audrius Butkevicius
1a1e35d998 lib/osutil: Replace IsDir with TraversesSymlink (fixes #3839)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3883
LGTM: calmh
2017-01-10 07:09:31 +00:00
Adel Qalieh
8d2a31e38e lib/model: Remove syncthing-specific files (fixes #3819)
Syncthing adds some hidden files when a folder is added, but there is currently
no equivalent cleanup procedure. This change is conservative so as not to
accidentally cause data loss.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3874
2017-01-07 17:05:30 +00:00
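
A hypothetical sketch of the conservative cleanup described in the commit above, assuming the hidden file in question is the `.stfolder` marker: only that marker is removed, and everything else in the folder is left alone.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// removeFolderMarker deletes only the hidden marker Syncthing itself created
// (assumed here to be ".stfolder") and leaves every other file untouched, so
// a misconfigured path cannot lead to data loss.
func removeFolderMarker(folderPath string) error {
	marker := filepath.Join(folderPath, ".stfolder")
	if err := os.Remove(marker); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := removeFolderMarker("/tmp/example-folder"); err != nil {
		log.Fatal(err)
	}
}
```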
Jakob Borg
fe377a166a authors: Update adelq 2017-01-07 17:56:46 +01:00
Jakob Borg
5770d80bf9 authors: Add adelq 2017-01-07 17:54:41 +01:00
Jakob Borg
7a16dbd31d lib/scanner: Fix previous commit: don't stop scan completely 2017-01-05 15:05:49 +01:00
Jakob Borg
926c88cfc4 lib/scanner: Never, ever descend into symlinks (ref #3857)
On Windows we would descend into SYMLINKD type links when we scanned
them successfully, as we would return nil from the walk function and the
filepath.Walk iterator apparently thought it OK to descend into the
symlinked directory.

With this change we always return filepath.SkipDir no matter what.

Tested on Windows 10 as admin, does what it should.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3875
2017-01-05 13:26:29 +00:00
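
A minimal sketch (not the actual Syncthing walker) of the "never descend into symlinks" rule from the commit above, using a plain recursive listing in which symlinks are reported but never followed.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// scanDir lists a tree but never descends through symlinks: ReadDir reports
// entries as by Lstat, so a symlinked directory shows up as a symlink and is
// skipped rather than walked into.
func scanDir(dir string) error {
	entries, err := ioutil.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, info := range entries {
		path := filepath.Join(dir, info.Name())
		switch {
		case info.Mode()&os.ModeSymlink != 0:
			fmt.Println("symlink (not followed):", path)
		case info.IsDir():
			if err := scanDir(path); err != nil {
				return err
			}
		default:
			fmt.Println("file:", path)
		}
	}
	return nil
}

func main() {
	if err := scanDir("."); err != nil {
		fmt.Println("scan failed:", err)
	}
}
```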
Audrius Butkevicius
29d010ec0e lib/model, lib/weakhash: Hash using adler32, add heuristic in puller
Adler32 is much faster, and the heuristic avoids the obvious cases where it
will not help.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3872
2017-01-04 21:04:13 +00:00
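
A rough sketch (not the Syncthing implementation) of block-wise weak hashing with the standard library's hash/adler32; the heuristic mentioned above would sit in the puller and decide whether computing these hashes is worth the effort at all.

```go
package main

import (
	"fmt"
	"hash/adler32"
)

// weakHashBlocks returns an Adler-32 checksum per fixed-size block of data.
// Matching checksums between the old and new version of a file hint that a
// block may be reusable without transferring it again.
func weakHashBlocks(data []byte, blockSize int) []uint32 {
	var sums []uint32
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		sums = append(sums, adler32.Checksum(data[off:end]))
	}
	return sums
}

func main() {
	data := []byte("hello syncthing weak hashing example")
	fmt.Println(weakHashBlocks(data, 16))
}
```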
Jakob Borg
920274bce4 lib/db: Don't panic on unknown folder in ListFolders (fixes #3584)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3869
2017-01-04 10:34:52 +00:00
Jakob Borg
2ebd6ad77f lib/scanner: Don't stop byte counter ticks before scan is done 2017-01-03 15:03:32 +01:00
Kudalufi
79dd6918f2 cmd/syncthing: Add support for -auditfile= (fixes #3859)
Adds support for -auditfile=, whose value is "-" for stdout, "--" for stderr,
or a filename. It can be left blank (or left out entirely) for the original
behaviour of creating a timestamped filename.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3860
2017-01-03 08:54:28 +00:00
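
A hypothetical sketch of the destination handling explained above; the blank/timestamped-default case is omitted here, and the real change appears in the cmd/syncthing diff further down.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// auditWriter mirrors the -auditfile semantics described above: "-" means
// stdout, "--" means stderr, anything else is a regular file opened for
// appending.
func auditWriter(auditFile string) (io.Writer, error) {
	switch auditFile {
	case "-":
		return os.Stdout, nil
	case "--":
		return os.Stderr, nil
	default:
		return os.OpenFile(auditFile, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0600)
	}
}

func main() {
	w, err := auditWriter("-")
	if err != nil {
		fmt.Println("audit:", err)
		return
	}
	fmt.Fprintln(w, "audit log goes here")
}
```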
Jakob Borg
79f7f50c4d authors: Add Kudalufi 2017-01-03 09:24:29 +01:00
Jakob Borg
987718baf8 vendor: Update github.com/gogo/protobuf
Also tweaks the proto definitions:

 - [packed=false] on the block_indexes field to retain compat with
   v0.14.16 and earlier.

 - Uses the vendored protobuf package in include paths.

And, "build.go setup" will install the vendored protoc-gen-gogofast.
This should ensure that a proto rebuild isn't so dependent on whatever
version of the compiler and package the developer has installed...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3864
2017-01-03 00:16:21 +00:00
Jakob Borg
4fb9c143ac authors: Update ProactiveServices 2017-01-02 15:12:57 +01:00
Jakob Borg
ec62888539 lib/connections: Allow on the fly changes to rate limits (fixes #3846)
Also replaces github.com/juju/ratelimit with golang.org/x/time/rate as
the latter supports changing the rate on the fly.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3862
2017-01-02 11:29:20 +00:00
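
A small sketch of why the swap to golang.org/x/time/rate enables on-the-fly changes: a *rate.Limiter exposes SetLimit and SetBurst, so an existing limiter can be retuned while connections keep using it. The numbers below are purely illustrative.

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Start with a 50 KiB/s limit and a 64 KiB burst (illustrative numbers).
	lim := rate.NewLimiter(rate.Limit(50*1024), 64*1024)
	fmt.Println("initial limit:", lim.Limit())

	// Later, e.g. after a config change, retune the limiter in place instead
	// of tearing down and rebuilding the rate-limited readers and writers.
	lim.SetLimit(rate.Limit(200 * 1024))
	lim.SetBurst(256 * 1024)
	fmt.Println("updated limit:", lim.Limit(), "burst:", lim.Burst())

	// Subsequent Allow/Wait calls observe the new rate immediately.
	fmt.Println("allow 1 KiB now:", lim.AllowN(time.Now(), 1024))
}
```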
Jakob Borg
8c34a76f7a man: refresh.sh requires bash 2017-01-01 20:45:52 +01:00
Jakob Borg
6809d38cde lib/protocol: Revert protobuf encoder changes in v0.14.17 (fixes #3855)
The protobuf encoder now produces packed arrays for things like []int32,
which is actually correct according to the proto3 spec. However
Syncthing v0.14.16 and earlier doesn't support this. This reverts the
encoding change, but keeps the updated decoder so that we are both more
compatible with other proto3 implementations and can move to the updated
encoder in the future.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3856
2017-01-01 17:19:00 +00:00
Mark Pulford
69ae4aa024 cmd/syncthing: Avoid Keepalive/GUI refresh race
This avoids unnecessary browser request failures and retries. For example:
- Browser reuses existing HTTP connection for GUI refresh request
- Server closes connection with request in flight
- Browser retries GET request.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3854
2017-01-01 12:38:31 +00:00
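
An illustrative sketch of the shape of the fix (the actual change to the API service's http.Server appears in the diff below): keep ReadTimeout comfortably longer than the GUI's refresh interval so an idle keep-alive connection is not closed just as the browser fires its next request on it. The interval and address below are assumptions for the example.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const guiRefreshInterval = 10 * time.Second // hypothetical client-side refresh period

	srv := &http.Server{
		Addr:    "127.0.0.1:8384",
		Handler: http.NewServeMux(),
		// Keep ReadTimeout longer than the refresh interval so the server
		// does not tear down an idle keep-alive connection while a refresh
		// request may already be in flight on it.
		ReadTimeout: guiRefreshInterval + 5*time.Second,
	}
	fmt.Println("listening with ReadTimeout", srv.ReadTimeout)
	_ = srv.ListenAndServe()
}
```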
Jakob Borg
8e8b867fba authors: Add mpx 2017-01-01 13:28:33 +01:00
Jakob Borg
0a118d2979 lib/config, lib/model: Temporarily disable bad tests (ref #3834, #3843) 2017-01-01 13:27:18 +01:00
Nathan Morrison
8daaa5d0d2 gui: Populate global changes on load
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3848
2016-12-30 01:33:27 +00:00
Jakob Borg
eb14f85a57 vendor: Update github.com/syndtr/goleveldb 2016-12-28 12:19:14 +01:00
Jakob Borg
c69c3c7c36 lib/sha256: Smoke test the implementation on startup (hello OpenSUSE!) 2016-12-28 12:15:51 +01:00
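
A minimal sketch of what such a startup smoke test can look like, assuming the check is simply "hash a known input and compare against the published test vector"; the digest below is the standard SHA-256 of the empty string.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"log"
)

// Well-known SHA-256 digest of the empty string.
const emptySHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

// smokeTestSHA256 hashes a known input and fails if the selected SHA-256
// implementation produces anything other than the expected digest.
func smokeTestSHA256() error {
	sum := sha256.Sum256(nil)
	if got := hex.EncodeToString(sum[:]); got != emptySHA256 {
		return fmt.Errorf("sha256 self-test failed: got %s", got)
	}
	return nil
}

func main() {
	if err := smokeTestSHA256(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sha256 smoke test passed")
}
```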
725 changed files with 343346 additions and 64793 deletions

.gitattributes

@@ -6,4 +6,3 @@ vendor/** -text=auto
# Diffs on these files are meaningless
*.svg -diff
*.pb.go -diff


@@ -6,7 +6,8 @@
# The NICKS list is auto generated from this file.
Aaron Bieber (qbit) <qbit@deftly.net>
Adam Piggott (simplypeachy) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com>
Adam Piggott (ProactiveServices) <aD@simplypeachy.co.uk> <simplypeachy@users.noreply.github.com> <ProactiveServices@users.noreply.github.com>
Adel Qalieh (adelq) <aqalieh95@gmail.com> <adelq@users.noreply.github.com>
Alessandro G. (alessandro.g89) <alessandro.g89@gmail.com>
Alexander Graf (alex2108) <register-github@alex-graf.de>
Alexandre Viau (aviau) <alexandre@alexandreviau.net> <aviau@debian.org>
@@ -63,6 +64,7 @@ Kelong Cong (kc1212) <kc04bc@gmx.com> <kc1212@users.noreply.github.com>
Ken'ichi Kamada (kamadak) <kamada@nanohz.org>
Kevin Allen (ironmig) <kma1660@gmail.com>
Kevin White, Jr. (kwhite17) <kevinwhite1710@gmail.com>
Kurt Fitzner (Kudalufi) <kurt@va1der.ca>
Lars K.W. Gohlke (lkwg82) <lkwg82@gmx.de>
Laurent Etiemble (letiemble) <laurent.etiemble@gmail.com> <laurent.etiemble@monobjc.net>
Leo Arias (elopio) <yo@elopio.net>
@@ -72,6 +74,7 @@ Majed Abdulaziz (majedev) <majed.alhajry@gmail.com>
Marc Laporte (marclaporte) <marc@marclaporte.com> <marc@laporte.name>
Marc Pujol (kilburn) <kilburn@la3.org>
Marcin Dziadus (marcindziadus) <dziadus.marcin@gmail.com>
Mark Pulford (mpx) <mark@kyne.com.au>
Mateusz Naściszewski (mateon1) <matin1111@wp.pl>
Matt Burke (burkemw3) <mburke@amplify.com> <burkemw3@gmail.com>
Max Schulze (kralo) <max.schulze@online.de> <kralo@users.noreply.github.com>

NICKS

@@ -4,6 +4,8 @@
0x010C <antoine.lamielle@0x010c.fr>
0x010C <gh@0x010c.fr>
acogdev <jake@acogdev.com>
adelq <aqalieh95@gmail.com>
adelq <adelq@users.noreply.github.com>
alessandro.g89 <alessandro.g89@gmail.com>
alex2108 <register-github@alex-graf.de>
andersonvom <andersonvom@gmail.com>
@@ -63,6 +65,7 @@ kozec <kozec@kozec.com>
kralo <max.schulze@online.de>
kralo <kralo@users.noreply.github.com>
krozycki <rozycki.karol@gmail.com>
Kudalufi <kurt@va1der.ca>
kwhite17 <kevinwhite1710@gmail.com>
letiemble <laurent.etiemble@gmail.com>
letiemble <laurent.etiemble@monobjc.net>
@@ -76,6 +79,7 @@ mateon1 <matin1111@wp.pl>
mogwa1 <devriesb@gmail.com>
moshen <moshen.colin@gmail.com>
Moter8 <moter8@gmail.com>
mpx <mark@kyne.com.au>
mvdan <mvdan@mvdan.cc>
norgeous <daniel@harte.me>
norgeous <daniel@danielharte.co.uk>
@@ -89,6 +93,9 @@ philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
plouj <ploujj@gmail.com>
pluby <phill.luby@newredo.com>
ProactiveServices <aD@simplypeachy.co.uk>
ProactiveServices <simplypeachy@users.noreply.github.com>
ProactiveServices <ProactiveServices@users.noreply.github.com>
pyfisch <pyfisch@gmail.com>
qbit <qbit@deftly.net>
ralder <ralder@yandex.ru>
@@ -99,8 +106,6 @@ rumpelsepp <rumpelsepp@sevenbyte.org>
scienmind <scintertech@cryptolab.net>
sciurius <jvromans@squirrel.nl>
seehuhn <voss@seehuhn.de>
simplypeachy <aD@simplypeachy.co.uk>
simplypeachy <simplypeachy@users.noreply.github.com>
Smiley73 <heiko@zuerker.org>
snnd <dw@risu.io>
Stefan-Code <stefan.github@gmail.com>


@@ -400,6 +400,8 @@ func setup() {
fmt.Println(pkg)
runPrint("go", "get", "-u", pkg)
}
runPrint("go", "install", "-v", "./vendor/github.com/gogo/protobuf/protoc-gen-gogofast")
}
func test(pkgs ...string) {


@@ -38,3 +38,7 @@ In all cases, the appropriate tables and indexes will be created at first
startup. If it doesn't exit with an error, you're fine.
See `stdiscosrv -help` for other options.
##### Third-party attribution
[cznic/lldb](https://github.com/cznic/lldb), Copyright (C) 2014 The lldb Authors.


@@ -19,9 +19,9 @@ import (
"time"
"github.com/golang/groupcache/lru"
"github.com/juju/ratelimit"
"github.com/syncthing/syncthing/lib/protocol"
"golang.org/x/net/context"
"golang.org/x/time/rate"
)
type querysrv struct {
@@ -373,14 +373,14 @@ func (s *querysrv) limit(remote net.IP) bool {
bkt, ok := s.limiter.Get(key)
if ok {
bkt := bkt.(*ratelimit.Bucket)
if bkt.TakeAvailable(1) != 1 {
bkt := bkt.(*rate.Limiter)
if !bkt.Allow() {
// Rate limit exceeded; ignore packet
return true
}
} else {
// One packet per ten seconds average rate, burst ten packets
s.limiter.Add(key, ratelimit.NewBucket(10*time.Second/time.Duration(limitAvg), int64(limitBurst)))
// limitAvg is in packets per ten seconds.
s.limiter.Add(key, rate.NewLimiter(rate.Limit(limitAvg)/10, limitBurst))
}
return false


@@ -13,3 +13,9 @@ If you still want to run it, you can run `go get github.com/syncthing/relaypools
from the build server.
See `relaypoolsrv -help` for configuration options.
##### Third-party attributions
[oschwald/geoip2-golang](https://github.com/oschwald/geoip2-golang), [oschwald/maxminddb-golang](https://github.com/oschwald/maxminddb-golang), Copyright (C) 2015 [Gregory J. Oschwald](mailto:oschwald@gmail.com).
[lib/pq](https://github.com/lib/pq)</a>, Copyright (C) 2011-2013 'pq' Contributors Portions Copyright (C) 2011 Blake Mizerany.


@@ -23,14 +23,12 @@ import (
"time"
"github.com/golang/groupcache/lru"
"github.com/juju/ratelimit"
"github.com/oschwald/geoip2-golang"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
"golang.org/x/time/rate"
)
type location struct {
@@ -65,12 +63,12 @@ var (
dir string
evictionTime = time.Hour
debug bool
getLRUSize = 10 << 10
getLimitBurst int64 = 10
getLimitAvg = 1
postLRUSize = 1 << 10
postLimitBurst int64 = 2
postLimitAvg = 1
getLRUSize = 10 << 10
getLimitBurst = 10
getLimitAvg = 1
postLRUSize = 1 << 10
postLimitBurst = 2
postLimitAvg = 1
getLimit time.Duration
postLimit time.Duration
permRelaysFile string
@@ -99,10 +97,10 @@ func main() {
flag.DurationVar(&evictionTime, "eviction", evictionTime, "After how long the relay is evicted")
flag.IntVar(&getLRUSize, "get-limit-cache", getLRUSize, "Get request limiter cache size")
flag.IntVar(&getLimitAvg, "get-limit-avg", 2, "Allowed average get request rate, per 10 s")
flag.Int64Var(&getLimitBurst, "get-limit-burst", getLimitBurst, "Allowed burst get requests")
flag.IntVar(&getLimitBurst, "get-limit-burst", getLimitBurst, "Allowed burst get requests")
flag.IntVar(&postLRUSize, "post-limit-cache", postLRUSize, "Post request limiter cache size")
flag.IntVar(&postLimitAvg, "post-limit-avg", 2, "Allowed average post request rate, per minute")
flag.Int64Var(&postLimitBurst, "post-limit-burst", postLimitBurst, "Allowed burst post requests")
flag.IntVar(&postLimitBurst, "post-limit-burst", postLimitBurst, "Allowed burst post requests")
flag.StringVar(&permRelaysFile, "perm-relays", "", "Path to list of permanent relays")
flag.StringVar(&ipHeader, "ip-header", "", "Name of header which holds clients ip:port. Only meaningful when running behind a reverse proxy.")
flag.StringVar(&geoipPath, "geoip", "GeoLite2-City.mmdb", "Path to GeoLite2-City database")
@@ -446,7 +444,7 @@ func evict(relay relay) func() {
}
}
func limit(addr string, cache *lru.Cache, lock sync.RWMutex, rate time.Duration, burst int64) bool {
func limit(addr string, cache *lru.Cache, lock sync.RWMutex, intv time.Duration, burst int) bool {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return false
@@ -456,14 +454,14 @@ func limit(addr string, cache *lru.Cache, lock sync.RWMutex, rate time.Duration,
bkt, ok := cache.Get(host)
lock.RUnlock()
if ok {
bkt := bkt.(*ratelimit.Bucket)
if bkt.TakeAvailable(1) != 1 {
bkt := bkt.(*rate.Limiter)
if !bkt.Allow() {
// Rate limit
return true
}
} else {
lock.Lock()
cache.Add(host, ratelimit.NewBucket(rate, burst))
cache.Add(host, rate.NewLimiter(rate.Every(intv), burst))
lock.Unlock()
}
return false


@@ -12,17 +12,15 @@ from the build server.
:exclamation:Warnings:exclamation: - Read or regret
-----
By default, all relay servers will join the default public relay pool, which means that the relay server will be availble for public use, and **will consume your bandwidth** helping others to connect.
By default, all relay servers will join to the default public relay pool, which means that the relay server will be availble for public use, and **will consume your bandwidth** helping others to connect.
If you wish to disable this behaviour, please specify `-pools=""` argument.
If you wish to disable this behaviour, please specify the `-pools=""` argument.
Please note that `strelaysrv` is only usable by `syncthing` **version v0.12 and onwards**.
To run `strelaysrv` you need to have port 22067 available to the internet, which means you might need to allow it through your firewall if you **have a public IP, or setup a port-forwarding** (22067 to 22067) if you are behind a router.
To run `strelaysrv` you need to have port 22067 available to the internet, which means you might need to port forward it and/or allow it through your firewall.
Furthermore, **by default strelaysrv will also expose a /status HTTP endpoint on port 22070**, which is used by the pool servers to peek at metrics of the strelaysrv, such as what are the current transfer rates, how many clients are connected, etc, etc. If you wish this information to be available, similarlly you might want to allow it through your firewall, or port-forward it (22070 to 22070) on your NAT device.
This is **not mandatory** for the strelaysrv to function, and is used only to gather metrics and present them in the overview page of the pool server, displaying stats about the specific relay.
Furthermore, by default `strelaysrv` will also expose a /status HTTP endpoint on port 22070, which is used by the pool servers to read metrics of the `strelaysrv`, such as the current transfer rates, how many clients are connected, etc. If you wish this information to be available you may need to port forward and allow it through your firewall. This is not mandatory for the `strelaysrv` to function, and is used only to gather metrics and present them in the overview page of the pool server.
At the point of writing the endpoint output looks as follows:
@@ -66,7 +64,7 @@ If you wish to disable the /status endpoint, provide `-status-srv=""` as one of
Running for public use
----
Make sure you have a public IP with port 22067 open, or make sure you have port-forwarding (22067 to 22067) if you are behind a router.
Make sure you have a public IP with port 22067 open, or have forwarded port 22067 if you are behind a NAT.
Run the `strelaysrv` with no arguments (or `-debug` if you want more output), and that should be enough for the server to join the public relay pool.
You should see a message saying:
@@ -79,37 +77,37 @@ See `strelaysrv -help` for other options, such as rate limits, timeout intervals
Running for private use
-----
Once you've started the `strelaysrv`, it will generate a key pair and print an URI:
Once you've started the `strelaysrv`, it will generate a key pair and print a URI:
```bash
relay://:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
```
This URI contains partial address of the relay server, as well as it's options which in the future may be taken into account when choosing the best suitable relay out of multiple available.
This URI contains a partial address of the relay server, as well as its options which in the future may be taken into account when choosing the most suitable relay.
Because `-listen` option was not used, the `strelaysrv` does not know it's external IP, therefore you should replace the host part of the URI with your public IP address on which the `strelaysrv` will be available:
Because the `-listen` option was not used `strelaysrv` does not know its external IP, therefore you should replace the host part of the URI with your public IP address on which the `strelaysrv` will be available:
```bash
relay://123.123.123.123:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
relay://192.0.2.1:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
```
If you do not care about certificate pinning (improved security) or do not care about passing verbose settings to the clients, you can shorten the URL to just the host part:
```bash
relay://123.123.123.123:22067
relay://192.0.2.1:22067
```
This URI can then be used in `syncthing` as one of the relay servers.
This URI can then be used in `syncthing` clients as one of the relay servers by adding the URI to the "Sync Protocol Listen Address" field, under Actions and Settings.
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Other items available in this repo
----
##### testutil
A test utility which can be used to test connectivity of a relay server.
You need to generate two x509 key pairs (key.pem and cert.pem), one for the client, another one for the server, in separate directories.
A test utility which can be used to test the connectivity of a relay server.
You need to generate two x509 key pairs (key.pem and cert.pem), one for the client and one for the server, in separate directories.
Afterwards, start the client:
```bash
./testutil -relay="relay://uri.of.relay" -keys=certs/client/ -join
./testutil -relay="relay://192.0.2.1:22067" -keys=certs/client/ -join
```
This prints out the client ID:
@@ -120,7 +118,7 @@ This prints out the client ID:
In the other terminal run the following:
```bash
./testutil -relay="relay://uri.of.relay" -keys=certs/server/ -connect=BG2C5ZA-W7XPFDO-LH222Z6-65F3HJX-ADFTGRT-3SBFIGM-KV26O2Q-E5RMRQ2
./testutil -relay="relay://192.0.2.1:22067" -keys=certs/server/ -connect=BG2C5ZA-W7XPFDO-LH222Z6-65F3HJX-ADFTGRT-3SBFIGM-KV26O2Q-E5RMRQ2
```
Which should then give you an interactive prompt, where you can type things in one terminal, and they get relayed to the other terminal.
@@ -137,5 +135,3 @@ Relay related libraries used by this repo
Only used by the testutil.
[Available here](https://github.com/syncthing/syncthing/tree/master/lib/relay/client)


@@ -20,10 +20,10 @@ import (
"syscall"
"time"
"github.com/juju/ratelimit"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/relay/protocol"
"github.com/syncthing/syncthing/lib/tlsutil"
"golang.org/x/time/rate"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/nat"
@@ -68,8 +68,8 @@ var (
globalLimitBps int
overLimit int32
descriptorLimit int64
sessionLimiter *ratelimit.Bucket
globalLimiter *ratelimit.Bucket
sessionLimiter *rate.Limiter
globalLimiter *rate.Limiter
statusAddr string
poolAddrs string
@@ -215,10 +215,10 @@ func main() {
}
if sessionLimitBps > 0 {
sessionLimiter = ratelimit.NewBucketWithRate(float64(sessionLimitBps), int64(2*sessionLimitBps))
sessionLimiter = rate.NewLimiter(rate.Limit(sessionLimitBps), 2*sessionLimitBps)
}
if globalLimitBps > 0 {
globalLimiter = ratelimit.NewBucketWithRate(float64(globalLimitBps), int64(2*globalLimitBps))
globalLimiter = rate.NewLimiter(rate.Limit(globalLimitBps), 2*globalLimitBps)
}
if statusAddr != "" {


@@ -7,15 +7,16 @@ import (
"encoding/hex"
"fmt"
"log"
"math"
"net"
"sync"
"sync/atomic"
"time"
"github.com/juju/ratelimit"
"github.com/syncthing/syncthing/lib/relay/protocol"
"golang.org/x/time/rate"
syncthingprotocol "github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/relay/protocol"
)
var (
@@ -26,7 +27,7 @@ var (
bytesProxied int64
)
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit, globalRateLimit *ratelimit.Bucket) *session {
func newSession(serverid, clientid syncthingprotocol.DeviceID, sessionRateLimit, globalRateLimit *rate.Limiter) *session {
serverkey := make([]byte, 32)
_, err := rand.Read(serverkey)
if err != nil {
@@ -108,7 +109,7 @@ type session struct {
clientkey []byte
clientid syncthingprotocol.DeviceID
rateLimit func(bytes int64)
rateLimit func(bytes int)
connsChan chan net.Conn
conns []net.Conn
@@ -268,7 +269,7 @@ func (s *session) proxy(c1, c2 net.Conn) error {
}
if s.rateLimit != nil {
s.rateLimit(int64(n))
s.rateLimit(n)
}
c2.SetWriteDeadline(time.Now().Add(networkTimeout))
@@ -283,7 +284,7 @@ func (s *session) String() string {
return fmt.Sprintf("<%s/%s>", hex.EncodeToString(s.clientkey)[:5], hex.EncodeToString(s.serverkey)[:5])
}
func makeRateLimitFunc(sessionRateLimit, globalRateLimit *ratelimit.Bucket) func(int64) {
func makeRateLimitFunc(sessionRateLimit, globalRateLimit *rate.Limiter) func(int) {
// This may be a case of super duper premature optimization... We build an
// optimized function to do the rate limiting here based on what we need
// to do and then use it in the loop.
@@ -298,29 +299,55 @@ func makeRateLimitFunc(sessionRateLimit, globalRateLimit *ratelimit.Bucket) func
if sessionRateLimit == nil {
// We only have a global limiter
return func(bytes int64) {
globalRateLimit.Wait(bytes)
return func(bytes int) {
take(bytes, globalRateLimit)
}
}
if globalRateLimit == nil {
// We only have a session limiter
return func(bytes int64) {
sessionRateLimit.Wait(bytes)
return func(bytes int) {
take(bytes, sessionRateLimit)
}
}
// We have both. Queue the bytes on both the global and session specific
// rate limiters. Wait for both in parallell, so that the actual send
// happens when both conditions are satisfied. In practice this just means
// wait the longer of the two times.
return func(bytes int64) {
t0 := sessionRateLimit.Take(bytes)
t1 := globalRateLimit.Take(bytes)
if t0 > t1 {
time.Sleep(t0)
} else {
time.Sleep(t1)
}
// rate limiters.
return func(bytes int) {
take(bytes, sessionRateLimit, globalRateLimit)
}
}
// take is a utility function to consume tokens from a set of rate.Limiters.
// Tokens are consumed in parallel on all limiters, respecting their
// individual burst sizes.
func take(tokens int, ls ...*rate.Limiter) {
// minBurst is the smallest burst size supported by all limiters.
minBurst := int(math.MaxInt32)
for _, l := range ls {
if burst := l.Burst(); burst < minBurst {
minBurst = burst
}
}
for tokens > 0 {
// chunk is how many tokens we can consume at a time
chunk := tokens
if chunk > minBurst {
chunk = minBurst
}
// maxDelay is the longest delay mandated by any of the limiters for
// the chosen chunk size.
var maxDelay time.Duration
for _, l := range ls {
res := l.ReserveN(time.Now(), chunk)
if del := res.Delay(); del > maxDelay {
maxDelay = del
}
}
time.Sleep(maxDelay)
tokens -= chunk
}
}


@@ -326,8 +326,10 @@ func (s *apiService) Serve() {
handler = debugMiddleware(handler)
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
Handler: handler,
// ReadTimeout must be longer than SyncthingController $scope.refresh
// interval to avoid HTTP keepalive/GUI refresh race.
ReadTimeout: 15 * time.Second,
}
s.fss = newFolderSummaryService(s.cfg, s.model)


@@ -12,6 +12,7 @@ import (
"errors"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
@@ -140,11 +141,11 @@ show time only (2).
Development Settings
--------------------
The following environment variables modify syncthing's behavior in ways that
The following environment variables modify Syncthing's behavior in ways that
are mostly useful for developers. Use with care.
STNODEFAULTFOLDER Don't create a default folder when starting for the first
time. This variable will be ignored anytime after the first
time. This variable will be ignored anytime after the first
run.
STGUIASSETS Directory to load GUI assets from. Overrides compiled in
@@ -153,8 +154,8 @@ are mostly useful for developers. Use with care.
STTRACE A comma separated string of facilities to trace. The valid
facility strings listed below.
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start the
profiler with HTTP access.
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start
the profiler with HTTP access.
STCPUPROFILE Write a CPU profile to cpu-$pid.pprof on exit.
@@ -167,6 +168,20 @@ are mostly useful for developers. Use with care.
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
STDEADLOCK Used for debugging internal deadlocks. Use only under
direction of a developer.
STDEADLOCKTIMEOUT Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STDEADLOCKTHRESHOLD Used for debugging internal deadlocks; sets debug
sensitivity. Use only under direction of a developer.
STNORESTART Equivalent to the -no-restart argument. Disable the
Syncthing monitor process which handles restarts for some
configuration changes, upgrades, crashes and also log file
writing (stdout is still written).
STNOUPGRADE Disable automatic upgrades.
STHASHING Select the SHA256 hashing package to use. Possible values
@@ -179,7 +194,7 @@ are mostly useful for developers. Use with care.
GOGC Percentage of heap growth at which to trigger GC. Default is
100. Lower numbers keep peak memory usage down, at the price
of CPU usage (ie. performance).
of CPU usage (i.e. performance).
Debugging Facilities
@@ -211,6 +226,7 @@ type RuntimeOptions struct {
hideConsole bool
logFile string
auditEnabled bool
auditFile string
verbose bool
paused bool
unpaused bool
@@ -260,7 +276,7 @@ func parseCommandLineOptions() RuntimeOptions {
flag.IntVar(&options.logFlags, "logflags", options.logFlags, "Select information in log line prefix (see below)")
flag.BoolVar(&options.noBrowser, "no-browser", false, "Do not start browser")
flag.BoolVar(&options.browserOnly, "browser-only", false, "Open GUI in browser")
flag.BoolVar(&options.noRestart, "no-restart", options.noRestart, "Do not restart; just exit")
flag.BoolVar(&options.noRestart, "no-restart", options.noRestart, "Disable monitor process, managed restarts and log file writing")
flag.BoolVar(&options.resetDatabase, "reset-database", false, "Reset the database, forcing a full rescan and resync")
flag.BoolVar(&options.resetDeltaIdxs, "reset-deltas", false, "Reset delta index IDs, forcing a full index exchange")
flag.BoolVar(&options.doUpgrade, "upgrade", false, "Perform upgrade")
@@ -273,6 +289,7 @@ func parseCommandLineOptions() RuntimeOptions {
flag.BoolVar(&options.paused, "paused", false, "Start with all devices and folders paused")
flag.BoolVar(&options.unpaused, "unpaused", false, "Start with all devices and folders unpaused")
flag.StringVar(&options.logFile, "logfile", options.logFile, "Log file name (use \"-\" for stdout)")
flag.StringVar(&options.auditFile, "auditfile", options.auditFile, "Specify audit file (use \"-\" for stdout, \"--\" for stderr)")
if runtime.GOOS == "windows" {
// Allow user to hide the console window
flag.BoolVar(&options.hideConsole, "no-console", false, "Hide console window")
@@ -545,7 +562,7 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
l.SetPrefix("[start] ")
if runtimeOptions.auditEnabled {
startAuditing(mainService)
startAuditing(mainService, runtimeOptions.auditFile)
}
if runtimeOptions.verbose {
@@ -637,9 +654,6 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
},
}
// If the read or write rate should be limited, set up a rate limiter for it.
// This will be used on connections created in the connect and listen routines.
opts := cfg.Options()
if !opts.SymlinksEnabled {
@@ -931,11 +945,31 @@ func copyFile(src, dst string) error {
return nil
}
func startAuditing(mainService *suture.Supervisor) {
auditFile := timestampedLoc(locAuditLog)
fd, err := os.OpenFile(auditFile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
l.Fatalln("Audit:", err)
func startAuditing(mainService *suture.Supervisor, auditFile string) {
var fd io.Writer
var err error
var auditDest string
var auditFlags int
if auditFile == "-" {
fd = os.Stdout
auditDest = "stdout"
} else if auditFile == "--" {
fd = os.Stderr
auditDest = "stderr"
} else {
if auditFile == "" {
auditFile = timestampedLoc(locAuditLog)
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_EXCL
} else {
auditFlags = os.O_WRONLY | os.O_CREATE | os.O_APPEND
}
fd, err = os.OpenFile(auditFile, auditFlags, 0600)
if err != nil {
l.Fatalln("Audit:", err)
}
auditDest = auditFile
}
auditService := newAuditService(fd)
@@ -945,7 +979,7 @@ func startAuditing(mainService *suture.Supervisor) {
// ensure we capture all events from the start.
auditService.WaitForStart()
l.Infoln("Audit log in", auditFile)
l.Infoln("Audit log in", auditDest)
}
func setupGUI(mainService *suture.Supervisor, cfg *config.Wrapper, m *model.Model, apiSub events.BufferedSubscription, diskSub events.BufferedSubscription, discoverer discover.CachingMux, connectionsService *connections.Service, errors, systemLog logger.Recorder, runtimeOptions RuntimeOptions) {


@@ -154,7 +154,7 @@ func (s *verboseService) formatEvent(ev events.Event) string {
if total > 0 {
pct = 100 * current / total
}
return fmt.Sprintf("Scanning folder %q, %d%% done (%.01f MB/s)", folder, pct, rate)
return fmt.Sprintf("Scanning folder %q, %d%% done (%.01f MiB/s)", folder, pct, rate)
case events.DevicePaused:
data := ev.Data.(map[string]string)


@@ -31,7 +31,7 @@
"Command": "Comando",
"Comment, when used at the start of a line": "Comentar, quant s'utilitza al principi d'una línia",
"Compression": "Compresió",
"Configured": "Configured",
"Configured": "Configurat",
"Connection Error": "Error de connexió",
"Connection Type": "Tipus de connexió",
"Copied from elsewhere": "Copiat de qualsevol lloc",
@@ -45,15 +45,15 @@
"Device Name": "Nom del dispositiu",
"Devices": "Dispositius",
"Disconnected": "Desconnectat",
"Discovered": "Discovered",
"Discovered": "Descobert",
"Discovery": "Descobriment",
"Documentation": "Documentació",
"Download Rate": "Velocitat de descàrrega",
"Downloaded": "Descarregat",
"Downloading": "Descarregant",
"Edit": "Editar",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Editar Dispositiu",
"Edit Folder": "Editar Carpeta",
"Editing": "Editant",
"Enable NAT traversal": "Permetre NAT transversal",
"Enable Relaying": "Permetre Transmissions",
@@ -97,7 +97,7 @@
"Last Scan": "Últim escaneig",
"Last seen": "Vist per última vegada",
"Later": "Més tard",
"Latest Change": "Latest Change",
"Latest Change": "Últim Canvi",
"Listeners": "Escoltants",
"Local Discovery": "Descobriment local",
"Local State": "Estat local",
@@ -138,7 +138,7 @@
"Quick guide to supported patterns": "Guía ràpida de patrons suportats",
"RAM Utilization": "Utilització de la RAM",
"Random": "Aleatori",
"Reduced by ignore patterns": "Reduced by ignore patterns",
"Reduced by ignore patterns": "Reduït ignorant patrons",
"Release Notes": "Notes de la versió",
"Remote Devices": "Dispositius Remots",
"Remove": "Eliminar",
@@ -156,8 +156,8 @@
"Scanning": "Rastrejant",
"Select the devices to share this folder with.": "Selecciona els dispositius amb els que compartir aquesta carpeta.",
"Select the folders to share with this device.": "Selecciona les carpetes per a compartir amb aquest dispositiu.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Enviar i Rebre",
"Send Only": "Enviar Solament",
"Settings": "Ajustos",
"Share": "Compartir",
"Share Folder": "Compartir carpeta",
@@ -236,8 +236,8 @@
"Yes": "Sí",
"You must keep at least one version.": "Es deu mantindre al menys una versió.",
"days": "dies",
"directories": "directories",
"files": "files",
"directories": "directoris",
"files": "arxius",
"full documentation": "Documentació completa",
"items": "Elements",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vol compartit la carpeta \"{{folder}}\".",


@@ -8,7 +8,7 @@
"Add": "Hinzufügen",
"Add Device": "Gerät hinzufügen",
"Add Folder": "Ordner hinzufügen",
"Add Remote Device": "Fern-Gerät hinzufügen",
"Add Remote Device": "Gerät hinzufügen",
"Add new folder?": "Neuen Ordner hinzufügen?",
"Address": "Adresse",
"Addresses": "Adressen",
@@ -165,7 +165,7 @@
"Share With Devices": "Teile mit diesen Geräten",
"Share this folder?": "Diesen Ordner teilen?",
"Shared With": "Geteilt mit",
"Show ID": "Kennung anzeigen",
"Show ID": "Eigene Kennung",
"Show QR": "Zeige QR Code",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Wird anstatt der Gerätekennung im Verbund-Status angezeigt. Wird als optionaler Standardname an andere Geräte bekannt gegeben.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Wird auf diesem Gerät als Gerätename angezeigt und an die anderen Geräte im Geräte-Verbund weitergegeben. Wenn kein Gerätename anegegeben wird, wird der Name des entfernten Gerätes genommen.",


@@ -88,7 +88,7 @@
"Ignore Patterns": "Πρότυπο για αγνόηση",
"Ignore Permissions": "Αγνόησε τα δικαιώματα",
"Incoming Rate Limit (KiB/s)": "Περιορισμός ταχύτητας λήψης (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Με μια εσφαλμένη ρύθμιση μπορεί να προκληθεί ζημιά στα περιεχόμενα των φακέλων και το Syncthing ενδέχεται να σταματήσει να λειτουργεί.",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Με μια εσφαλμένη ρύθμιση μπορεί να προκληθεί ζημιά στα περιεχόμενα των φακέλων και το Syncthing να σταματήσει να λειτουργεί.",
"Introducer": "Βασικός κόμβος",
"Inversion of the given condition (i.e. do not exclude)": "Αντιστροφή της δοσμένης συνθήκης (π.χ. να μην εξαιρείς) ",
"Keep Versions": "Διατήρηση εκδόσεων",


@@ -31,7 +31,7 @@
"Command": "Acción",
"Comment, when used at the start of a line": "Comentar, cuando se usa al comienzo de una línea",
"Compression": "Compresión",
"Configured": "Configured",
"Configured": "Configurado",
"Connection Error": "Error de conexión",
"Connection Type": "Tipo de conexión",
"Copied from elsewhere": "Copiado de otro sitio",
@@ -45,15 +45,15 @@
"Device Name": "Nombre del Dispositivo",
"Devices": "Dispositivos",
"Disconnected": "Desconectado",
"Discovered": "Discovered",
"Discovered": "Descubierto",
"Discovery": "Descubrimiento",
"Documentation": "Documentación",
"Download Rate": "Velocidad de descarga",
"Downloaded": "Descargado",
"Downloading": "Descargando",
"Edit": "Editar",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Editar Dispositivo",
"Edit Folder": "Editar Carpeta",
"Editing": "Editando",
"Enable NAT traversal": "Permitir NAT transversal",
"Enable Relaying": "Habilitar Retransmisión",
@@ -97,7 +97,7 @@
"Last Scan": "Último escaneo",
"Last seen": "Visto por última vez",
"Later": "Más tarde",
"Latest Change": "Latest Change",
"Latest Change": "Último Cambio",
"Listeners": "Oyentes",
"Local Discovery": "Descubrimiento local",
"Local State": "Estado local",
@@ -138,7 +138,7 @@
"Quick guide to supported patterns": "Guía rápida de patrones soportados",
"RAM Utilization": "Uso de RAM",
"Random": "Aleatorio",
"Reduced by ignore patterns": "Reduced by ignore patterns",
"Reduced by ignore patterns": "Reducido por patrones de ignorar",
"Release Notes": "Notas de la versión",
"Remote Devices": "Otros dispositivos",
"Remove": "Eliminar",
@@ -156,8 +156,8 @@
"Scanning": "Analizando",
"Select the devices to share this folder with.": "Selecciona los dispositivos con los que compartir esta carpeta.",
"Select the folders to share with this device.": "Selecciona las carpetas para compartir con este dispositivo.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Enviar y Recibir",
"Send Only": "Solo Enviar",
"Settings": "Ajustes",
"Share": "Compartir",
"Share Folder": "Compartir carpeta",
@@ -236,8 +236,8 @@
"Yes": "Si",
"You must keep at least one version.": "Debes mantener al menos una versión.",
"days": "días",
"directories": "directories",
"files": "files",
"directories": "directorios",
"files": "archivos",
"full documentation": "Documentación completa",
"items": "Elementos",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} quiere compartir la carpeta \"{{folder}}\".",


@@ -52,8 +52,8 @@
"Downloaded": "Ladattu",
"Downloading": "Ladataan",
"Edit": "Muokkaa",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Muokkaa laitetta",
"Edit Folder": "Muokkaa kansiota",
"Editing": "Muokkaus",
"Enable NAT traversal": "Aktivoi osoitteenmuunnoksen kierto",
"Enable Relaying": "Aktivoi yhteyden välitys",
@@ -97,7 +97,7 @@
"Last Scan": "Viimeisin skannaus",
"Last seen": "Nähty viimeksi",
"Later": "Myöhemmin",
"Latest Change": "Latest Change",
"Latest Change": "Viimeisin muutos",
"Listeners": "Kuuntelijat",
"Local Discovery": "Paikallinen etsintä",
"Local State": "Paikallinen tila",
@@ -156,8 +156,8 @@
"Scanning": "Skannataan",
"Select the devices to share this folder with.": "Valitse laitteet, joiden kanssa tämä kansio jaetaan.",
"Select the folders to share with this device.": "Valitse kansiot jaettavaksi tämän laitteen kanssa.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Lähetä & vastaanota",
"Send Only": "Vain lähetys",
"Settings": "Asetukset",
"Share": "Jaa",
"Share Folder": "Jaa kansio",


@@ -49,8 +49,8 @@
"Discovery": "Découverte",
"Documentation": "Documentation",
"Download Rate": "Débit de réception",
"Downloaded": "Téléchargé",
"Downloading": "Téléchargement",
"Downloaded": "Reçu",
"Downloading": "Réception",
"Edit": "Modifier",
"Edit Device": "Modifier l'appareil",
"Edit Folder": "Modifier le partage",
@@ -87,7 +87,7 @@
"Ignore": "Ignorer",
"Ignore Patterns": "Règles d'exclusion",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit entrant (KiB/s)",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Introducer": "Appareil introducteur",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
@@ -124,7 +124,7 @@
"Options": "Options",
"Out of Sync": "Désynchronisé",
"Out of Sync Items": "Éléments non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Outgoing Rate Limit (KiB/s)": "Limite du débit d'envoi (Kio/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin vers le répertoire dans l'appareil local. Il sera créé s'il n'existe pas. Vous pouvez entrer un chemin absolu (p.ex \"/home/moi/Sync/Exemple\") ou relatif à celui du programme (p.ex \"..\\Partages\\Exemple\" - utile pour installation portable). Le caractère tilde (~, ou ~+Espace sous Windows XP+Azerty) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
@@ -157,7 +157,7 @@
"Select the devices to share this folder with.": "Synchroniser avec :",
"Select the folders to share with this device.": "Sélectionner les partages auxquels participe cet appareil.",
"Send & Receive": "Envoi & réception",
"Send Only": "Envoi seulement",
"Send Only": "Envoi (lecture seule)",
"Settings": "Configuration",
"Share": "Partager",
"Share Folder": "Partager",


@@ -48,9 +48,9 @@
"Discovered": "Découvert",
"Discovery": "Découverte",
"Documentation": "Documentation",
"Download Rate": "Vitesse de réception",
"Downloaded": "Téléchargé",
"Downloading": "Téléchargement",
"Download Rate": "Débit de réception",
"Downloaded": "Reçu",
"Downloading": "Réception",
"Edit": "Modifier",
"Edit Device": "Modifier l'appareil",
"Edit Folder": "Modifier le partage",
@@ -87,7 +87,7 @@
"Ignore": "Ignorer",
"Ignore Patterns": "Exclusions ...",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Ko/s)",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Kio/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Introducer": "Appareil introducteur",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
@@ -124,7 +124,7 @@
"Options": "Options",
"Out of Sync": "Désynchronisé",
"Out of Sync Items": "Éléments non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit d'émission (Ko/s)",
"Outgoing Rate Limit (KiB/s)": "Limite du débit d'envoi (Kio/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin vers le répertoire dans l'appareil local. Il sera créé s'il n'existe pas. Vous pouvez entrer un chemin absolu (p.ex \"/home/moi/Sync/Exemple\") ou relatif à celui du programme (p.ex \"..\\Partages\\Exemple\" - utile pour installation portable). Le caractère tilde (~, ou ~+Espace sous Windows XP+Azerty) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les copies doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
@@ -156,8 +156,8 @@
"Scanning": "Analyse en cours",
"Select the devices to share this folder with.": "Synchroniser avec :",
"Select the folders to share with this device.": "Sélectionner les partages auxquels participe cet appareil.",
"Send & Receive": "Envoyer & recevoir",
"Send Only": "Envoyer seulement",
"Send & Receive": "Envoi & réception",
"Send Only": "Envoi (lecture seule)",
"Settings": "Configuration",
"Share": "Partager",
"Share Folder": "Partager",
@@ -224,7 +224,7 @@
"Upgrade": "Mettre à jour",
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
"Upgrading": "Mise à jour de Syncthing",
"Upload Rate": "Vitesse d'émission",
"Upload Rate": "Débit d'envoi",
"Uptime": "Durée de fonctionnement",
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
"Version": "Version",


@@ -52,8 +52,8 @@
"Downloaded": "Ynladen",
"Downloading": "Oan it ynladen",
"Edit": "Bewurkje",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Apparaat Bewurkje",
"Edit Folder": "Map Bewurkje",
"Editing": "Bewurkjen",
"Enable NAT traversal": "NAT-trochkruse ynskeakelje",
"Enable Relaying": "Trochjaan tastean",
@@ -97,7 +97,7 @@
"Last Scan": "Lêst Skent",
"Last seen": "Lêst sjoen",
"Later": "Letter",
"Latest Change": "Latest Change",
"Latest Change": "Meast Resinte Feroarings",
"Listeners": "Harkers",
"Local Discovery": "Lokale ûntdekking",
"Local State": "Lokale tastân",
@@ -156,8 +156,8 @@
"Scanning": "Oan it skennen",
"Select the devices to share this folder with.": "Sykje de apparaten út om dizze map mei te dielen.",
"Select the folders to share with this device.": "Sykje de mappen út om mei dit apparaat te dielen.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Stjoere & Untfange",
"Send Only": "Allinnich Stjoere",
"Settings": "Ynstellings",
"Share": "Diele",
"Share Folder": "Map diele",


@@ -52,8 +52,8 @@
"Downloaded": "Scaricato",
"Downloading": "Scaricamento in corso",
"Edit": "Modifica",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Modifica Dispositivo",
"Edit Folder": "Modifica Cartella",
"Editing": "Modifica di",
"Enable NAT traversal": "Abilita NAT traversal",
"Enable Relaying": "Abilita Reindirizzamento",
@@ -156,8 +156,8 @@
"Scanning": "Scansione in corso",
"Select the devices to share this folder with.": "Seleziona i dispositivi con i quali condividere questa cartella.",
"Select the folders to share with this device.": "Seleziona le cartelle da condividere con questo dispositivo.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Invia & Ricevi",
"Send Only": "Invia Soltanto",
"Settings": "Impostazioni",
"Share": "Condividi",
"Share Folder": "Condividi la Cartella",


@@ -8,7 +8,7 @@
"Add": "追加",
"Add Device": "デバイスを追加",
"Add Folder": "フォルダーを追加",
"Add Remote Device": "他のデバイスを追加",
"Add Remote Device": "接続先デバイスを追加",
"Add new folder?": "新しいフォルダーとして追加しますか?",
"Address": "アドレス",
"Addresses": "アドレス",
@@ -140,7 +140,7 @@
"Random": "ランダム",
"Reduced by ignore patterns": "無視パターン該当分を除く",
"Release Notes": "リリースノート",
"Remote Devices": "他のデバイス",
"Remote Devices": "接続先デバイス",
"Remove": "除去",
"Required identifier for the folder. Must be the same on all cluster devices.": "フォルダーの識別子で、必須です。このフォルダーを共有する全てのデバイス上で同一でなくてはなりません。",
"Rescan": "再スキャン",
@@ -236,8 +236,8 @@
"Yes": "はい",
"You must keep at least one version.": "少なくとも一つのバージョンを保存してください。",
"days": "日",
"directories": "ディレクトリ",
"files": "ファイル",
"directories": "個のディレクトリ",
"files": "個のファイル",
"full documentation": "詳細なマニュアル",
"items": "項目",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} がフォルダー \"{{folder}}\" を共有するよう求めています。",


@@ -52,8 +52,8 @@
"Downloaded": "다운로드됨",
"Downloading": "다운로드 중",
"Edit": "편집",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "기기 편집",
"Edit Folder": "폴더 편집",
"Editing": "편집",
"Enable NAT traversal": "NAT traversal 활성화",
"Enable Relaying": "Relaying 활성화",
@@ -156,8 +156,8 @@
"Scanning": "탐색중",
"Select the devices to share this folder with.": "이 폴더를 공유할 장치를 선택합니다.",
"Select the folders to share with this device.": "이 장치와 공유할 폴더를 선택합니다.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "송신 & 수신",
"Send Only": "송신 전용",
"Settings": "설정",
"Share": "공유",
"Share Folder": "폴더 공유",


@@ -103,7 +103,7 @@
"Local State": "Lokal tilstand",
"Local State (Total)": "Lokal tilstand (Total)",
"Major Upgrade": "Storoppgradering",
"Master": "Styrende",
"Master": "Beskyttet",
"Maximum Age": "Maksimal levetid",
"Metadata Only": "Kun metadata",
"Minimum Free Disk Space": "Nødvendig ledig diskplass",


@@ -156,8 +156,8 @@
"Scanning": "Verificando",
"Select the devices to share this folder with.": "Selecione os dispositivos com os quais esta pasta será compartilhada.",
"Select the folders to share with this device.": "Selecione as pastas a serem compartilhadas com este dispositivo.",
"Send & Receive": "Enviar e receber",
"Send Only": "Somente enviar",
"Send & Receive": "Envia e recebe",
"Send Only": "Somente envia",
"Settings": "Configurações",
"Share": "Compartilhar",
"Share Folder": "Compartilhar pasta",


@@ -31,7 +31,7 @@
"Command": "Команда",
"Comment, when used at the start of a line": "Коментар, якщо використовується на початку рядка",
"Compression": "Стиснення",
"Configured": "Configured",
"Configured": "Налаштовано",
"Connection Error": "Помилка з’єднання",
"Connection Type": "Тип з*єднання",
"Copied from elsewhere": "Скопійовано з іншого місця",
@@ -45,15 +45,15 @@
"Device Name": "Назва пристрою",
"Devices": "Пристрої",
"Disconnected": "З’єднання відсутнє",
"Discovered": "Discovered",
"Discovered": "Виявлено",
"Discovery": "Сервери координації NAT",
"Documentation": "Документація",
"Download Rate": "Швидкість завантаження",
"Downloaded": "Завантажено",
"Downloading": "Завантаження",
"Edit": "Редагувати",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Edit Device": "Налаштування пристрою",
"Edit Folder": "Налаштування директорії",
"Editing": "Редагування",
"Enable NAT traversal": "Увімкнути NAT traversal",
"Enable Relaying": "Увімкнути ретрансляцію (relaying)",
@@ -97,7 +97,7 @@
"Last Scan": "Останнє сканування",
"Last seen": "З’являвся останній раз",
"Later": "Пізніше",
"Latest Change": "Latest Change",
"Latest Change": "Найостанніша зміна",
"Listeners": "Приймачі (TCP & Relay)",
"Local Discovery": "Локальне виявлення (LAN)",
"Local State": "Локальний статус",
@@ -138,7 +138,7 @@
"Quick guide to supported patterns": "Швидкий посібник по шаблонам, що підтримуються",
"RAM Utilization": "Використання RAM",
"Random": "Випадково",
"Reduced by ignore patterns": "Reduced by ignore patterns",
"Reduced by ignore patterns": "Зменшено шаблонами ігнорування",
"Release Notes": "Примітки до випуску",
"Remote Devices": "Віддалені пристрої",
"Remove": "Видалити",
@@ -156,8 +156,8 @@
"Scanning": "Сканування",
"Select the devices to share this folder with.": "Оберіть пристрої, які матимуть доступ до цієї директорії.",
"Select the folders to share with this device.": "Оберіть директорії до яких матиме доступ цей пристрій.",
"Send & Receive": "Send & Receive",
"Send Only": "Send Only",
"Send & Receive": "Відправити та отримати",
"Send Only": "Лише відправити",
"Settings": "Налаштування",
"Share": "Розповсюдити ",
"Share Folder": "Розповсюдити каталог",
@@ -236,8 +236,8 @@
"Yes": "Так",
"You must keep at least one version.": "Ви повинні зберігати щонайменше одну версію.",
"days": "днів",
"directories": "directories",
"files": "files",
"directories": "директорії",
"files": "файли",
"full documentation": "повна документація",
"items": "елементи",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} хоче поділитися директорією \"{{folder}}\".",


@@ -9,10 +9,10 @@
</h1>
<hr/>
<p translate>Copyright &copy; 2014-2016 the following Contributors:</p>
<p translate>Copyright &copy; 2014-2017 the following Contributors:</p>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Alexander Graf, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Aaron Bieber, Adam Piggott, Alessandro G., Alexandre Viau, Andrew Dunham, Andrey D, Antoine Lamielle, Arthur Axel fREW Schmidt, Bart De Vries, Ben Curthoys, Ben Sidhom, Benny Ng, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Colin Kennedy, Daniel Bergmann, Daniel Martí, David Rimmer, Denis A., Dennis Wilson, Dominik Heidler, Elias Jarlebring, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Heiko Zuerker, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jens Diemer, Jochen Voss, Johan Vromans, Karol Różycki, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Laurent Etiemble, Leo Arias, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mateusz Naściszewski, Matt Burke, Max Schulze, Michael Jephcote, Michael Tilli, Nate Morrison, Pascal Jungblut, Peter Hoeg, Phill Luby, Piotr Bejda, Roman Zaynetdinov, Scott Klupfel, Simon Frei, Stefan Kuntz, Tim Abell, Tim Howes, Tobias Nygren, Tomas Cerveny, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, William A. Kennington III, Wulf Weich, Xavier O., Yannic A.
Jakob Borg, Audrius Butkevicius, Alexander Graf, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Aaron Bieber, Adam Piggott, Adel Qalieh, Alessandro G., Alexandre Viau, Andrew Dunham, Andrey D, Antoine Lamielle, Arthur Axel fREW Schmidt, Bart De Vries, Ben Curthoys, Ben Sidhom, Benny Ng, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Colin Kennedy, Daniel Bergmann, Daniel Martí, David Rimmer, Denis A., Dennis Wilson, Dominik Heidler, Elias Jarlebring, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Heiko Zuerker, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jens Diemer, Jochen Voss, Johan Vromans, Karol Różycki, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Kevin White, Jr., Kurt Fitzner, Laurent Etiemble, Leo Arias, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mark Pulford, Mateusz Naściszewski, Matt Burke, Max Schulze, Michael Jephcote, Michael Tilli, Nate Morrison, Pascal Jungblut, Peter Hoeg, Phill Luby, Piotr Bejda, Roman Zaynetdinov, Scott Klupfel, Simon Frei, Stefan Kuntz, Tim Abell, Tim Howes, Tobias Nygren, Tomas Cerveny, Tully Robinson, Tyler Brazier, Unrud, Veeti Paananen, Victor Buinsky, Vil Brekin, William A. Kennington III, Wulf Weich, Xavier O., Yannic A.
</div>
</div>
<hr/>
@@ -20,16 +20,22 @@ Jakob Borg, Audrius Butkevicius, Alexander Graf, Anderson Mesquita, Antony Male,
<p translate>Syncthing includes the following software or portions thereof:</p>
<ul class="list-unstyled two-columns">
<li><a href="https://golang.org">The Go Programming Language</a>, Copyright &copy; 2012 The Go Authors.</li>
<li><a href="https://github.com/sasha-s/go-deadlock">sasha-s/go-deadlock</a>, Copyright &copy; 2016 sasha-s</li>
<li><a href="https://github.com/rcrowley/go-metrics">rcrowley/go-metrics</a>, Copyright &copy; 2012 Richard Crowley.</li>
<li><a href="https://github.com/minio/sha256-simd">minio/sha256-simd</a>, Copyright &copy; 2016 Minio, Inc.</li>
<li><a href="https://github.com/jackpal/gateway">jackpal/gateway</a>, Copyright &copy; 2010 Jack Palevich.</li>
<li><a href="https://github.com/gogo/protobuf">gogo/protobuf</a>, Copyright &copy; 2013 The GoGo Authors.</li>
<li><a href="https://github.com/gobwas/glob">gobwas/glob</a>, Copyright &copy; 2016 Sergey Kamardin.</li>
<li><a href="https://github.com/d4l3k/messagediff">d4l3k/messagediff</a>, Copyright &copy; 2015 Tristan Rice.</li>
<li><a href="https://github.com/bkaradzic/go-lz4">bkaradzic/go-lz4</a>, Copyright &copy; 2011-2012 Branimir Karadzic, 2013 Damian Gryski.</li>
<li><a href="https://github.com/kardianos/osext">kardianos/osext</a>, Copyright &copy; 2012 Daniel Theophanes.</li>
<li><a href="https://github.com/golang/snappy">golang/snappy</a>, Copyright &copy; 2011 The Snappy-Go Authors.</li>
<li><a href="https://github.com/juju/ratelimit">juju/ratelimit</a>, Copyright &copy; 2015 Canonical Ltd.</li>
<li><a href="https://github.com/thejerf/suture">thejerf/suture</a>, Copyright &copy; 2014-2015 Barracuda Networks, Inc.</li>
<li><a href="https://github.com/syndtr/goleveldb">syndtr/goleveldb</a>, Copyright &copy; 2012, Suryandaru Triandana</li>
<li><a href="https://github.com/syndtr/goleveldb">syndtr/goleveldb</a>, Copyright &copy; 2012 Suryandaru Triandana.</li>
<li><a href="https://github.com/vitrun/qart">vitrun/qart</a>, Copyright &copy; The Go Authors.</li>
<li><a href="https://angularjs.org/">AngularJS</a>, Copyright &copy; 2010-2015 Google, Inc.</li>
<li><a href="http://getbootstrap.com/">Bootstrap</a>, Copyright &copy; 2011-2015 Twitter, Inc.</li>
<li><a href="http://fontawesome.io/">Font Awesome</a>, Copyright &copy; 2015 Dave Gandy</li>
<li><a href="https://angularjs.org/">AngularJS</a>, Copyright &copy; 2010-2016 Google, Inc.</li>
<li><a href="http://getbootstrap.com/">Bootstrap</a>, Copyright &copy; 2011-2016 Twitter, Inc.</li>
<li>Font Awesome by Dave Gandy - <a href="http://fontawesome.io/">http://fontawesome.io</a></li>
</ul>
</div>
<div class="modal-footer">


@@ -92,6 +92,7 @@ angular.module('syncthing.core')
refreshConnectionStats();
refreshDeviceStats();
refreshFolderStats();
refreshGlobalChanges();
refreshThemes();
$http.get(urlbase + '/system/version').success(function (data) {
@@ -624,7 +625,7 @@ angular.module('syncthing.core')
}, 2500);
var refreshGlobalChanges = debounce(function () {
$http.get(urlbase + "/events/disk?limit=15").success(function (data) {
$http.get(urlbase + "/events/disk?limit=25").success(function (data) {
data = data.reverse();
$scope.globalChangeEvents = data;


@@ -42,6 +42,8 @@ func (validationError) String() string {
}
func TestReplaceCommit(t *testing.T) {
t.Skip("broken, fails randomly, #3834")
w := Wrap("/dev/null", Configuration{Version: 0})
if w.RawCopy().Version != 0 {
t.Fatal("Config incorrect")


@@ -108,6 +108,7 @@ func TestDeviceConfig(t *testing.T) {
Versioning: VersioningConfiguration{
Params: map[string]string{},
},
WeakHashThresholdPct: 25,
},
}


@@ -40,8 +40,8 @@ type FolderConfiguration struct {
DisableSparseFiles bool `xml:"disableSparseFiles" json:"disableSparseFiles"`
DisableTempIndexes bool `xml:"disableTempIndexes" json:"disableTempIndexes"`
Fsync bool `xml:"fsync" json:"fsync"`
DisableWeakHash bool `xml:"disableWeakHash" json:"disableWeakHash"`
Paused bool `xml:"paused" json:"paused"`
WeakHashThresholdPct int `xml:"weakHashThresholdPct" json:"weakHashThresholdPct"` // Use weak hash if more than X percent of the file has changed. Set to -1 to always use weak hash.
cachedPath string
@@ -146,6 +146,10 @@ func (f *FolderConfiguration) prepare() {
if f.Versioning.Params == nil {
f.Versioning.Params = make(map[string]string)
}
if f.WeakHashThresholdPct == 0 {
f.WeakHashThresholdPct = 25
}
}
func (f *FolderConfiguration) cleanedPath() string {


@@ -1,33 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package connections
import (
"io"
"github.com/juju/ratelimit"
)
type LimitedReader struct {
reader io.Reader
bucket *ratelimit.Bucket
}
func NewReadLimiter(r io.Reader, b *ratelimit.Bucket) *LimitedReader {
return &LimitedReader{
reader: r,
bucket: b,
}
}
func (r *LimitedReader) Read(buf []byte) (int, error) {
n, err := r.reader.Read(buf)
if r.bucket != nil {
r.bucket.Wait(int64(n))
}
return n, err
}


@@ -1,32 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package connections
import (
"io"
"github.com/juju/ratelimit"
)
type LimitedWriter struct {
writer io.Writer
bucket *ratelimit.Bucket
}
func NewWriteLimiter(w io.Writer, b *ratelimit.Bucket) *LimitedWriter {
return &LimitedWriter{
writer: w,
bucket: b,
}
}
func (w *LimitedWriter) Write(buf []byte) (int, error) {
if w.bucket != nil {
w.bucket.Wait(int64(len(buf)))
}
return w.writer.Write(buf)
}

lib/connections/limiter.go (new file, 164 lines)

@@ -0,0 +1,164 @@
// Copyright (C) 2017 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package connections
import (
"fmt"
"io"
"sync/atomic"
"github.com/syncthing/syncthing/lib/config"
"golang.org/x/net/context"
"golang.org/x/time/rate"
)
// limiter manages a read and write rate limit, reacting to config changes
// as appropriate.
type limiter struct {
write *rate.Limiter
read *rate.Limiter
limitsLAN atomicBool
}
const limiterBurstSize = 4 * 128 << 10
func newLimiter(cfg *config.Wrapper) *limiter {
l := &limiter{
write: rate.NewLimiter(rate.Inf, limiterBurstSize),
read: rate.NewLimiter(rate.Inf, limiterBurstSize),
}
cfg.Subscribe(l)
prev := config.Configuration{Options: config.OptionsConfiguration{MaxRecvKbps: -1, MaxSendKbps: -1}}
l.CommitConfiguration(prev, cfg.RawCopy())
return l
}
func (lim *limiter) newReadLimiter(r io.Reader, isLAN bool) io.Reader {
return &limitedReader{reader: r, limiter: lim, isLAN: isLAN}
}
func (lim *limiter) newWriteLimiter(w io.Writer, isLAN bool) io.Writer {
return &limitedWriter{writer: w, limiter: lim, isLAN: isLAN}
}
func (lim *limiter) VerifyConfiguration(from, to config.Configuration) error {
return nil
}
func (lim *limiter) CommitConfiguration(from, to config.Configuration) bool {
if from.Options.MaxRecvKbps == to.Options.MaxRecvKbps &&
from.Options.MaxSendKbps == to.Options.MaxSendKbps &&
from.Options.LimitBandwidthInLan == to.Options.LimitBandwidthInLan {
return true
}
// The rate variables are in KiB/s in the config (despite the camel casing
// of the name). We multiply by 1024 to get bytes/s.
if to.Options.MaxRecvKbps <= 0 {
lim.read.SetLimit(rate.Inf)
} else {
lim.read.SetLimit(1024 * rate.Limit(to.Options.MaxRecvKbps))
}
if to.Options.MaxSendKbps < 0 {
lim.write.SetLimit(rate.Inf)
} else {
lim.write.SetLimit(1024 * rate.Limit(to.Options.MaxSendKbps))
}
lim.limitsLAN.set(to.Options.LimitBandwidthInLan)
sendLimitStr := "is unlimited"
recvLimitStr := "is unlimited"
if to.Options.MaxSendKbps > 0 {
sendLimitStr = fmt.Sprintf("limit is %d KiB/s", to.Options.MaxSendKbps)
}
if to.Options.MaxRecvKbps > 0 {
recvLimitStr = fmt.Sprintf("limit is %d KiB/s", to.Options.MaxRecvKbps)
}
l.Infof("Send rate %s, receive rate %s", sendLimitStr, recvLimitStr)
if to.Options.LimitBandwidthInLan {
l.Infoln("Rate limits apply to LAN connections")
} else {
l.Infoln("Rate limits do not apply to LAN connections")
}
return true
}
func (lim *limiter) String() string {
// required by config.Committer interface
return "connections.limiter"
}
// limitedReader is a rate limited io.Reader
type limitedReader struct {
reader io.Reader
limiter *limiter
isLAN bool
}
func (r *limitedReader) Read(buf []byte) (int, error) {
n, err := r.reader.Read(buf)
if !r.isLAN || r.limiter.limitsLAN.get() {
take(r.limiter.read, n)
}
return n, err
}
// limitedWriter is a rate limited io.Writer
type limitedWriter struct {
writer io.Writer
limiter *limiter
isLAN bool
}
func (w *limitedWriter) Write(buf []byte) (int, error) {
if !w.isLAN || w.limiter.limitsLAN.get() {
take(w.limiter.write, len(buf))
}
return w.writer.Write(buf)
}
// take is a utility function to consume tokens from a rate.Limiter. No call
// to WaitN can be larger than the limiter burst size so we split it up into
// several calls when necessary.
func take(l *rate.Limiter, tokens int) {
if tokens < limiterBurstSize {
// This is the by far more common case so we get it out of the way
// early.
l.WaitN(context.TODO(), tokens)
return
}
for tokens > 0 {
// Consume limiterBurstSize tokens at a time until we're done.
if tokens > limiterBurstSize {
l.WaitN(context.TODO(), limiterBurstSize)
tokens -= limiterBurstSize
} else {
l.WaitN(context.TODO(), tokens)
tokens = 0
}
}
}
type atomicBool int32
func (b *atomicBool) set(v bool) {
if v {
atomic.StoreInt32((*int32)(b), 1)
} else {
atomic.StoreInt32((*int32)(b), 0)
}
}
func (b *atomicBool) get() bool {
return atomic.LoadInt32((*int32)(b)) != 0
}

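Aside, not part of the diff above: the new limiter keeps both directions unlimited until the first CommitConfiguration call, adjusts the rate in place via SetLimit, and caps each WaitN at the burst size. The following stand-alone sketch shows the same golang.org/x/time/rate pattern in isolation; slowReader and the buffer sizes are invented for illustration.

package main

import (
    "bytes"
    "context"
    "fmt"
    "io"

    "golang.org/x/time/rate"
)

const burst = 4 * 128 << 10 // same shape as limiterBurstSize above

// slowReader waits for tokens after each read, like limitedReader does.
// A full implementation would also split requests larger than the burst
// size, as take() does above.
type slowReader struct {
    r   io.Reader
    lim *rate.Limiter
}

func (s *slowReader) Read(p []byte) (int, error) {
    n, err := s.r.Read(p)
    if n > 0 {
        if werr := s.lim.WaitN(context.Background(), n); werr != nil {
            return n, werr
        }
    }
    return n, err
}

func main() {
    lim := rate.NewLimiter(rate.Inf, burst) // unlimited until configured

    // Later, e.g. from a config commit: 100 KiB/s expressed as bytes/s.
    lim.SetLimit(1024 * rate.Limit(100))

    src := bytes.NewReader(make([]byte, 64<<10))
    n, err := io.Copy(io.Discard, &slowReader{r: src, lim: lim})
    fmt.Println(n, err)
}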

@@ -10,12 +10,10 @@ import (
"crypto/tls"
"errors"
"fmt"
"io"
"net"
"net/url"
"time"
"github.com/juju/ratelimit"
"github.com/syncthing/syncthing/lib/config"
"github.com/syncthing/syncthing/lib/discover"
"github.com/syncthing/syncthing/lib/events"
@@ -29,6 +27,7 @@ import (
_ "github.com/syncthing/syncthing/lib/upnp"
"github.com/thejerf/suture"
"golang.org/x/time/rate"
)
var (
@@ -37,7 +36,7 @@ var (
)
const (
perDeviceWarningRate = 1.0 / (15 * 60) // Once per 15 minutes
perDeviceWarningIntv = 15 * time.Minute
tlsHandshakeTimeout = 10 * time.Second
)
@@ -80,8 +79,7 @@ type Service struct {
bepProtocolName string
tlsDefaultCommonName string
lans []*net.IPNet
writeRateLimit *ratelimit.Bucket
readRateLimit *ratelimit.Bucket
limiter *limiter
natService *nat.Service
natServiceToken *suture.ServiceToken
@@ -112,6 +110,7 @@ func NewService(cfg *config.Wrapper, myID protocol.DeviceID, mdl Model, tlsCfg *
bepProtocolName: bepProtocolName,
tlsDefaultCommonName: tlsDefaultCommonName,
lans: lans,
limiter: newLimiter(cfg),
natService: nat.NewService(myID, cfg),
listenersMut: sync.NewRWMutex(),
@@ -135,17 +134,6 @@ func NewService(cfg *config.Wrapper, myID protocol.DeviceID, mdl Model, tlsCfg *
}
cfg.Subscribe(service)
// The rate variables are in KiB/s in the UI (despite the camel casing
// of the name). We multiply by 1024 here to get B/s.
options := service.cfg.Options()
if options.MaxSendKbps > 0 {
service.writeRateLimit = ratelimit.NewBucketWithRate(float64(1024*options.MaxSendKbps), int64(5*1024*options.MaxSendKbps))
}
if options.MaxRecvKbps > 0 {
service.readRateLimit = ratelimit.NewBucketWithRate(float64(1024*options.MaxRecvKbps), int64(5*1024*options.MaxRecvKbps))
}
// There are several moving parts here; one routine per listening address
// (handled in configuration changing) to handle incoming connections,
// one routine to periodically attempt outgoing connections, one routine to
@@ -279,20 +267,12 @@ next:
continue next
}
// If rate limiting is set, and based on the address we should
// limit the connection, then we wrap it in a limiter.
limit := s.shouldLimit(c.RemoteAddr())
wr := io.Writer(c)
if limit && s.writeRateLimit != nil {
wr = NewWriteLimiter(c, s.writeRateLimit)
}
rd := io.Reader(c)
if limit && s.readRateLimit != nil {
rd = NewReadLimiter(c, s.readRateLimit)
}
// Wrap the connection in rate limiters. The limiter itself will
// keep up with config changes to the rate and whether or not LAN
// connections are limited.
isLAN := s.isLAN(c.RemoteAddr())
wr := s.limiter.newWriteLimiter(c, isLAN)
rd := s.limiter.newReadLimiter(c, isLAN)
name := fmt.Sprintf("%s-%s (%s)", c.LocalAddr(), c.RemoteAddr(), c.Type())
protoConn := protocol.NewConnection(remoteID, rd, wr, s.model, name, deviceCfg.Compression)
@@ -434,21 +414,17 @@ func (s *Service) connect() {
}
}
func (s *Service) shouldLimit(addr net.Addr) bool {
if s.cfg.Options().LimitBandwidthInLan {
return true
}
func (s *Service) isLAN(addr net.Addr) bool {
tcpaddr, ok := addr.(*net.TCPAddr)
if !ok {
return true
return false
}
for _, lan := range s.lans {
if lan.Contains(tcpaddr.IP) {
return false
return true
}
}
return !tcpaddr.IP.IsLoopback()
return tcpaddr.IP.IsLoopback()
}
func (s *Service) createListener(factory listenerFactory, uri *url.URL) bool {
@@ -644,7 +620,7 @@ func urlsToStrings(urls []*url.URL) []string {
return strings
}
var warningLimiters = make(map[protocol.DeviceID]*ratelimit.Bucket)
var warningLimiters = make(map[protocol.DeviceID]*rate.Limiter)
var warningLimitersMut = sync.NewMutex()
func warningFor(dev protocol.DeviceID, msg string) {
@@ -652,10 +628,10 @@ func warningFor(dev protocol.DeviceID, msg string) {
defer warningLimitersMut.Unlock()
lim, ok := warningLimiters[dev]
if !ok {
lim = ratelimit.NewBucketWithRate(perDeviceWarningRate, 1)
lim = rate.NewLimiter(rate.Every(perDeviceWarningIntv), 1)
warningLimiters[dev] = lim
}
if lim.TakeAvailable(1) == 1 {
if lim.Allow() {
l.Warnln(msg)
}
}

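Not taken from the diff: rate.Every combined with Allow gives the once-per-interval behaviour that warningFor relies on above. A minimal sketch:

package main

import (
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // One token per 15 minutes with a burst of 1, mirroring the
    // per-device warning limiter above.
    lim := rate.NewLimiter(rate.Every(15*time.Minute), 1)

    for i := 0; i < 3; i++ {
        if lim.Allow() {
            fmt.Println("warning logged") // happens once
        } else {
            fmt.Println("warning suppressed") // happens twice
        }
    }
}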

@@ -526,9 +526,9 @@ func (db *Instance) ListFolders() []string {
folderExists := make(map[string]bool)
for dbi.Next() {
folder := string(db.globalKeyFolder(dbi.Key()))
if !folderExists[folder] {
folderExists[folder] = true
folder, ok := db.globalKeyFolder(dbi.Key())
if ok && !folderExists[string(folder)] {
folderExists[string(folder)] = true
}
}
@@ -558,8 +558,8 @@ func (db *Instance) dropFolder(folder []byte) {
// Remove all items related to the given folder from the global bucket
dbi = t.NewIterator(util.BytesPrefix([]byte{KeyTypeGlobal}), nil)
for dbi.Next() {
itemFolder := db.globalKeyFolder(dbi.Key())
if bytes.Equal(folder, itemFolder) {
itemFolder, ok := db.globalKeyFolder(dbi.Key())
if ok && bytes.Equal(folder, itemFolder) {
db.Delete(dbi.Key(), nil)
}
}
@@ -680,12 +680,8 @@ func (db *Instance) globalKeyName(key []byte) []byte {
}
// globalKeyFolder returns the folder name from the key
func (db *Instance) globalKeyFolder(key []byte) []byte {
folder, ok := db.folderIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen:]))
if !ok {
panic("bug: lookup of nonexistent folder ID")
}
return folder
func (db *Instance) globalKeyFolder(key []byte) ([]byte, bool) {
return db.folderIdx.Val(binary.BigEndian.Uint32(key[keyPrefixLen:]))
}
func (db *Instance) getIndexID(device, folder []byte) protocol.IndexID {

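Illustration only, not from the diff: globalKeyFolder now reports a missing folder ID with a comma-ok result instead of panicking, so iterating callers can simply skip stale keys. The same pattern against a made-up map-backed index:

package main

import "fmt"

// folderIdx stands in for the database's real folder index; the IDs and
// names here are invented.
var folderIdx = map[uint32][]byte{1: []byte("default")}

// lookup has the same shape as the new globalKeyFolder: value plus ok.
func lookup(id uint32) ([]byte, bool) {
    folder, ok := folderIdx[id]
    return folder, ok
}

func main() {
    for _, id := range []uint32{1, 42} {
        if folder, ok := lookup(id); ok {
            fmt.Printf("folder %q\n", folder)
        } else {
            fmt.Printf("unknown folder ID %d, skipped\n", id)
        }
    }
}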

@@ -45,7 +45,10 @@ func TestGlobalKey(t *testing.T) {
key := db.globalKey(fld, name)
fld2 := db.globalKeyFolder(key)
fld2, ok := db.globalKeyFolder(key)
if !ok {
t.Error("should have been found")
}
if !bytes.Equal(fld2, fld) {
t.Errorf("wrong folder %q != %q", fld2, fld)
}
@@ -53,4 +56,9 @@ func TestGlobalKey(t *testing.T) {
if !bytes.Equal(name2, name) {
t.Errorf("wrong name %q != %q", name2, name)
}
_, ok = db.globalKeyFolder([]byte{1, 2, 3, 4, 5})
if ok {
t.Error("should not have been found")
}
}


@@ -5,7 +5,7 @@
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate go run ../../script/protofmt.go structs.proto
//go:generate protoc -I ../../../../../ -I ../../../../gogo/protobuf/protobuf -I . --gogofast_out=. structs.proto
//go:generate protoc -I ../../../../../ -I ../../vendor/ -I ../../vendor/github.com/gogo/protobuf/protobuf -I . --gogofast_out=. structs.proto
package db


@@ -21,6 +21,8 @@ import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import protocol "github.com/syncthing/syncthing/lib/protocol"
import github_com_syncthing_syncthing_lib_protocol "github.com/syncthing/syncthing/lib/protocol"
import io "io"
// Reference imports to suppress errors if they are not otherwise used.
@@ -30,7 +32,9 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type FileVersion struct {
Version protocol.Vector `protobuf:"bytes,1,opt,name=version" json:"version"`
@@ -52,19 +56,19 @@ func (*VersionList) Descriptor() ([]byte, []int) { return fileDescriptorStructs,
// Must be the same as FileInfo but without the blocks field
type FileInfoTruncated struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Type protocol.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=protocol.FileInfoType" json:"type,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
ModifiedBy protocol.ShortID `protobuf:"varint,12,opt,name=modified_by,json=modifiedBy,proto3,customtype=protocol.ShortID" json:"modified_by"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
Invalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
Version protocol.Vector `protobuf:"bytes,9,opt,name=version" json:"version"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
SymlinkTarget string `protobuf:"bytes,17,opt,name=symlink_target,json=symlinkTarget,proto3" json:"symlink_target,omitempty"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Type protocol.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=protocol.FileInfoType" json:"type,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
ModifiedBy github_com_syncthing_syncthing_lib_protocol.ShortID `protobuf:"varint,12,opt,name=modified_by,json=modifiedBy,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.ShortID" json:"modified_by"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
Invalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
Version protocol.Vector `protobuf:"bytes,9,opt,name=version" json:"version"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
SymlinkTarget string `protobuf:"bytes,17,opt,name=symlink_target,json=symlinkTarget,proto3" json:"symlink_target,omitempty"`
}
func (m *FileInfoTruncated) Reset() { *m = FileInfoTruncated{} }
@@ -76,59 +80,59 @@ func init() {
proto.RegisterType((*VersionList)(nil), "db.VersionList")
proto.RegisterType((*FileInfoTruncated)(nil), "db.FileInfoTruncated")
}
func (m *FileVersion) Marshal() (data []byte, err error) {
func (m *FileVersion) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *FileVersion) MarshalTo(data []byte) (int, error) {
func (m *FileVersion) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(m.Version.ProtoSize()))
n1, err := m.Version.MarshalTo(data[i:])
i = encodeVarintStructs(dAtA, i, uint64(m.Version.ProtoSize()))
n1, err := m.Version.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n1
if len(m.Device) > 0 {
data[i] = 0x12
dAtA[i] = 0x12
i++
i = encodeVarintStructs(data, i, uint64(len(m.Device)))
i += copy(data[i:], m.Device)
i = encodeVarintStructs(dAtA, i, uint64(len(m.Device)))
i += copy(dAtA[i:], m.Device)
}
return i, nil
}
func (m *VersionList) Marshal() (data []byte, err error) {
func (m *VersionList) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *VersionList) MarshalTo(data []byte) (int, error) {
func (m *VersionList) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Versions) > 0 {
for _, msg := range m.Versions {
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(msg.ProtoSize()))
n, err := msg.MarshalTo(data[i:])
i = encodeVarintStructs(dAtA, i, uint64(msg.ProtoSize()))
n, err := msg.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
@@ -138,136 +142,136 @@ func (m *VersionList) MarshalTo(data []byte) (int, error) {
return i, nil
}
func (m *FileInfoTruncated) Marshal() (data []byte, err error) {
func (m *FileInfoTruncated) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *FileInfoTruncated) MarshalTo(data []byte) (int, error) {
func (m *FileInfoTruncated) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Name) > 0 {
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(len(m.Name)))
i += copy(data[i:], m.Name)
i = encodeVarintStructs(dAtA, i, uint64(len(m.Name)))
i += copy(dAtA[i:], m.Name)
}
if m.Type != 0 {
data[i] = 0x10
dAtA[i] = 0x10
i++
i = encodeVarintStructs(data, i, uint64(m.Type))
i = encodeVarintStructs(dAtA, i, uint64(m.Type))
}
if m.Size != 0 {
data[i] = 0x18
dAtA[i] = 0x18
i++
i = encodeVarintStructs(data, i, uint64(m.Size))
i = encodeVarintStructs(dAtA, i, uint64(m.Size))
}
if m.Permissions != 0 {
data[i] = 0x20
dAtA[i] = 0x20
i++
i = encodeVarintStructs(data, i, uint64(m.Permissions))
i = encodeVarintStructs(dAtA, i, uint64(m.Permissions))
}
if m.ModifiedS != 0 {
data[i] = 0x28
dAtA[i] = 0x28
i++
i = encodeVarintStructs(data, i, uint64(m.ModifiedS))
i = encodeVarintStructs(dAtA, i, uint64(m.ModifiedS))
}
if m.Deleted {
data[i] = 0x30
dAtA[i] = 0x30
i++
if m.Deleted {
data[i] = 1
dAtA[i] = 1
} else {
data[i] = 0
dAtA[i] = 0
}
i++
}
if m.Invalid {
data[i] = 0x38
dAtA[i] = 0x38
i++
if m.Invalid {
data[i] = 1
dAtA[i] = 1
} else {
data[i] = 0
dAtA[i] = 0
}
i++
}
if m.NoPermissions {
data[i] = 0x40
dAtA[i] = 0x40
i++
if m.NoPermissions {
data[i] = 1
dAtA[i] = 1
} else {
data[i] = 0
dAtA[i] = 0
}
i++
}
data[i] = 0x4a
dAtA[i] = 0x4a
i++
i = encodeVarintStructs(data, i, uint64(m.Version.ProtoSize()))
n2, err := m.Version.MarshalTo(data[i:])
i = encodeVarintStructs(dAtA, i, uint64(m.Version.ProtoSize()))
n2, err := m.Version.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n2
if m.Sequence != 0 {
data[i] = 0x50
dAtA[i] = 0x50
i++
i = encodeVarintStructs(data, i, uint64(m.Sequence))
i = encodeVarintStructs(dAtA, i, uint64(m.Sequence))
}
if m.ModifiedNs != 0 {
data[i] = 0x58
dAtA[i] = 0x58
i++
i = encodeVarintStructs(data, i, uint64(m.ModifiedNs))
i = encodeVarintStructs(dAtA, i, uint64(m.ModifiedNs))
}
if m.ModifiedBy != 0 {
data[i] = 0x60
dAtA[i] = 0x60
i++
i = encodeVarintStructs(data, i, uint64(m.ModifiedBy))
i = encodeVarintStructs(dAtA, i, uint64(m.ModifiedBy))
}
if len(m.SymlinkTarget) > 0 {
data[i] = 0x8a
dAtA[i] = 0x8a
i++
data[i] = 0x1
dAtA[i] = 0x1
i++
i = encodeVarintStructs(data, i, uint64(len(m.SymlinkTarget)))
i += copy(data[i:], m.SymlinkTarget)
i = encodeVarintStructs(dAtA, i, uint64(len(m.SymlinkTarget)))
i += copy(dAtA[i:], m.SymlinkTarget)
}
return i, nil
}
func encodeFixed64Structs(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
func encodeFixed64Structs(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Structs(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
func encodeFixed32Structs(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintStructs(data []byte, offset int, v uint64) int {
func encodeVarintStructs(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
dAtA[offset] = uint8(v)
return offset + 1
}
func (m *FileVersion) ProtoSize() (n int) {
@@ -353,8 +357,8 @@ func sovStructs(x uint64) (n int) {
func sozStructs(x uint64) (n int) {
return sovStructs(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *FileVersion) Unmarshal(data []byte) error {
l := len(data)
func (m *FileVersion) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -366,7 +370,7 @@ func (m *FileVersion) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -394,7 +398,7 @@ func (m *FileVersion) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -408,7 +412,7 @@ func (m *FileVersion) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.Version.Unmarshal(data[iNdEx:postIndex]); err != nil {
if err := m.Version.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -424,7 +428,7 @@ func (m *FileVersion) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -438,14 +442,14 @@ func (m *FileVersion) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Device = append(m.Device[:0], data[iNdEx:postIndex]...)
m.Device = append(m.Device[:0], dAtA[iNdEx:postIndex]...)
if m.Device == nil {
m.Device = []byte{}
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
skippy, err := skipStructs(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -464,8 +468,8 @@ func (m *FileVersion) Unmarshal(data []byte) error {
}
return nil
}
func (m *VersionList) Unmarshal(data []byte) error {
l := len(data)
func (m *VersionList) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -477,7 +481,7 @@ func (m *VersionList) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -505,7 +509,7 @@ func (m *VersionList) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -520,13 +524,13 @@ func (m *VersionList) Unmarshal(data []byte) error {
return io.ErrUnexpectedEOF
}
m.Versions = append(m.Versions, FileVersion{})
if err := m.Versions[len(m.Versions)-1].Unmarshal(data[iNdEx:postIndex]); err != nil {
if err := m.Versions[len(m.Versions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
skippy, err := skipStructs(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -545,8 +549,8 @@ func (m *VersionList) Unmarshal(data []byte) error {
}
return nil
}
func (m *FileInfoTruncated) Unmarshal(data []byte) error {
l := len(data)
func (m *FileInfoTruncated) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -558,7 +562,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -586,7 +590,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -601,7 +605,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = string(data[iNdEx:postIndex])
m.Name = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 0 {
@@ -615,7 +619,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.Type |= (protocol.FileInfoType(b) & 0x7F) << shift
if b < 0x80 {
@@ -634,7 +638,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.Size |= (int64(b) & 0x7F) << shift
if b < 0x80 {
@@ -653,7 +657,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.Permissions |= (uint32(b) & 0x7F) << shift
if b < 0x80 {
@@ -672,7 +676,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.ModifiedS |= (int64(b) & 0x7F) << shift
if b < 0x80 {
@@ -691,7 +695,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -711,7 +715,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -731,7 +735,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -751,7 +755,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -765,7 +769,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.Version.Unmarshal(data[iNdEx:postIndex]); err != nil {
if err := m.Version.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -781,7 +785,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.Sequence |= (int64(b) & 0x7F) << shift
if b < 0x80 {
@@ -800,7 +804,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.ModifiedNs |= (int32(b) & 0x7F) << shift
if b < 0x80 {
@@ -819,9 +823,9 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.ModifiedBy |= (protocol.ShortID(b) & 0x7F) << shift
m.ModifiedBy |= (github_com_syncthing_syncthing_lib_protocol.ShortID(b) & 0x7F) << shift
if b < 0x80 {
break
}
@@ -838,7 +842,7 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -853,11 +857,11 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.SymlinkTarget = string(data[iNdEx:postIndex])
m.SymlinkTarget = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
skippy, err := skipStructs(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -876,8 +880,8 @@ func (m *FileInfoTruncated) Unmarshal(data []byte) error {
}
return nil
}
func skipStructs(data []byte) (n int, err error) {
l := len(data)
func skipStructs(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -888,7 +892,7 @@ func skipStructs(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -906,7 +910,7 @@ func skipStructs(data []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
if dAtA[iNdEx-1] < 0x80 {
break
}
}
@@ -923,7 +927,7 @@ func skipStructs(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -946,7 +950,7 @@ func skipStructs(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -957,7 +961,7 @@ func skipStructs(data []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipStructs(data[start:])
next, err := skipStructs(dAtA[start:])
if err != nil {
return 0, err
}
@@ -981,34 +985,39 @@ var (
ErrIntOverflowStructs = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("structs.proto", fileDescriptorStructs) }
var fileDescriptorStructs = []byte{
// 442 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x51, 0x5d, 0x8b, 0xd3, 0x40,
0x14, 0x6d, 0xb7, 0xd9, 0x7e, 0xdc, 0xd8, 0xd5, 0x1d, 0x64, 0x19, 0x0a, 0xb6, 0x4b, 0x41, 0x10,
0xc1, 0x54, 0x57, 0x7c, 0xf1, 0x71, 0x1f, 0x16, 0x04, 0x11, 0x19, 0x97, 0xf5, 0xb1, 0x34, 0x99,
0xdb, 0xec, 0x60, 0x32, 0x53, 0x33, 0x93, 0x42, 0xfd, 0x25, 0xbe, 0xb9, 0x3f, 0xa7, 0x8f, 0xfe,
0x02, 0xd1, 0xfa, 0x47, 0x9c, 0xce, 0xa4, 0x31, 0x8f, 0xfb, 0x10, 0xb8, 0xe7, 0x9e, 0x73, 0xee,
0x3d, 0x93, 0x0b, 0x43, 0x6d, 0x8a, 0x32, 0x31, 0x3a, 0x5a, 0x15, 0xca, 0x28, 0x72, 0xc4, 0xe3,
0xd1, 0x8b, 0x54, 0x98, 0xdb, 0x32, 0x8e, 0x12, 0x95, 0xcf, 0x52, 0x95, 0xaa, 0x99, 0xa3, 0xe2,
0x72, 0xe9, 0x90, 0x03, 0xae, 0xf2, 0x96, 0xd1, 0x9b, 0x86, 0x5c, 0x6f, 0x64, 0x62, 0x6e, 0x85,
0x4c, 0x1b, 0x55, 0x26, 0x62, 0x3f, 0x21, 0x51, 0xd9, 0x2c, 0xc6, 0x95, 0xb7, 0x4d, 0x3f, 0x43,
0x78, 0x25, 0x32, 0xbc, 0xc1, 0x42, 0x0b, 0x25, 0xc9, 0x4b, 0xe8, 0xad, 0x7d, 0x49, 0xdb, 0xe7,
0xed, 0x67, 0xe1, 0xc5, 0xa3, 0xe8, 0x60, 0x8a, 0x6e, 0x30, 0x31, 0xaa, 0xb8, 0x0c, 0xb6, 0xbf,
0x26, 0x2d, 0x76, 0x90, 0x91, 0x33, 0xe8, 0x72, 0x5c, 0x8b, 0x04, 0xe9, 0x91, 0x35, 0x3c, 0x60,
0x15, 0x9a, 0x5e, 0x41, 0x58, 0x0d, 0x7d, 0x2f, 0xb4, 0x21, 0xaf, 0xa0, 0x5f, 0x39, 0xb4, 0x9d,
0xdc, 0xb1, 0x93, 0x1f, 0x46, 0x3c, 0x8e, 0x1a, 0xbb, 0xab, 0xc1, 0xb5, 0xec, 0x6d, 0xf0, 0xfd,
0x6e, 0xd2, 0x9a, 0xfe, 0xe8, 0xc0, 0xe9, 0x5e, 0xf5, 0x4e, 0x2e, 0xd5, 0x75, 0x51, 0xca, 0x64,
0x61, 0x90, 0x13, 0x02, 0x81, 0x5c, 0xe4, 0xe8, 0x42, 0x0e, 0x98, 0xab, 0xc9, 0x73, 0x08, 0xcc,
0x66, 0xe5, 0x73, 0x9c, 0x5c, 0x9c, 0xfd, 0x0f, 0x5e, 0xdb, 0x2d, 0xcb, 0x9c, 0x66, 0xef, 0xd7,
0xe2, 0x1b, 0xd2, 0x8e, 0xd5, 0x76, 0x98, 0xab, 0xc9, 0x39, 0x84, 0x2b, 0x2c, 0x72, 0xa1, 0x7d,
0xca, 0xc0, 0x52, 0x43, 0xd6, 0x6c, 0x91, 0x27, 0x00, 0xb9, 0xe2, 0x62, 0x29, 0x90, 0xcf, 0x35,
0x3d, 0x76, 0xde, 0xc1, 0xa1, 0xf3, 0x89, 0x50, 0xe8, 0x71, 0xcc, 0xd0, 0xe6, 0xa3, 0x5d, 0xcb,
0xf5, 0xd9, 0x01, 0xee, 0x19, 0x21, 0xd7, 0x8b, 0x4c, 0x70, 0xda, 0xf3, 0x4c, 0x05, 0xc9, 0x53,
0x38, 0x91, 0x6a, 0xde, 0xdc, 0xdb, 0x77, 0x82, 0xa1, 0x54, 0x1f, 0x1b, 0x9b, 0x1b, 0x77, 0x19,
0xdc, 0xef, 0x2e, 0x23, 0xe8, 0x6b, 0xfc, 0x5a, 0xa2, 0xb4, 0x97, 0x01, 0x97, 0xb4, 0xc6, 0x64,
0x02, 0x61, 0xfd, 0x0e, 0xbb, 0x31, 0xb4, 0xf4, 0x31, 0xab, 0x9f, 0xf6, 0x41, 0xef, 0x53, 0xe9,
0x4d, 0x9e, 0x09, 0xf9, 0x65, 0x6e, 0x16, 0x45, 0x8a, 0x86, 0x9e, 0xba, 0x1f, 0x3d, 0xac, 0xba,
0xd7, 0xae, 0xe9, 0x2f, 0x74, 0xf9, 0x78, 0xfb, 0x67, 0xdc, 0xda, 0xee, 0xc6, 0xed, 0x9f, 0xf6,
0xfb, 0xbd, 0x1b, 0xb7, 0xee, 0xfe, 0x8e, 0xdb, 0x71, 0xd7, 0xe5, 0x7b, 0xfd, 0x2f, 0x00, 0x00,
0xff, 0xff, 0xb1, 0x2f, 0x12, 0xb6, 0xda, 0x02, 0x00, 0x00,
// 483 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x52, 0x4f, 0x6b, 0xdb, 0x4e,
0x10, 0xf5, 0xc6, 0x8a, 0xff, 0xac, 0xe2, 0xfc, 0x7e, 0x59, 0x4a, 0x58, 0x0c, 0x95, 0x85, 0xa1,
0x20, 0x0a, 0x95, 0x5b, 0x87, 0x5e, 0xda, 0x9b, 0x29, 0x81, 0x40, 0x29, 0x45, 0x09, 0xe9, 0xa5,
0x60, 0x2c, 0x69, 0x2c, 0x2f, 0x95, 0x76, 0x55, 0xed, 0xca, 0xa0, 0x7e, 0x92, 0x1e, 0xf3, 0x71,
0x7c, 0xec, 0xb9, 0x87, 0xd0, 0xba, 0x9f, 0xa3, 0x50, 0xb4, 0x92, 0x15, 0x1d, 0xdb, 0xdb, 0xbc,
0xd9, 0xf7, 0xe6, 0xbd, 0x61, 0x16, 0x8f, 0xa4, 0xca, 0xf2, 0x40, 0x49, 0x37, 0xcd, 0x84, 0x12,
0xe4, 0x28, 0xf4, 0xc7, 0xcf, 0x22, 0xa6, 0x36, 0xb9, 0xef, 0x06, 0x22, 0x99, 0x45, 0x22, 0x12,
0x33, 0xfd, 0xe4, 0xe7, 0x6b, 0x8d, 0x34, 0xd0, 0x55, 0x25, 0x19, 0xbf, 0x6c, 0xd1, 0x65, 0xc1,
0x03, 0xb5, 0x61, 0x3c, 0x6a, 0x55, 0x31, 0xf3, 0xab, 0x09, 0x81, 0x88, 0x67, 0x3e, 0xa4, 0x95,
0x6c, 0xfa, 0x01, 0x9b, 0x97, 0x2c, 0x86, 0x5b, 0xc8, 0x24, 0x13, 0x9c, 0x3c, 0xc7, 0xfd, 0x6d,
0x55, 0x52, 0x64, 0x23, 0xc7, 0x9c, 0xff, 0xef, 0x1e, 0x44, 0xee, 0x2d, 0x04, 0x4a, 0x64, 0x0b,
0x63, 0x77, 0x3f, 0xe9, 0x78, 0x07, 0x1a, 0x39, 0xc7, 0xbd, 0x10, 0xb6, 0x2c, 0x00, 0x7a, 0x64,
0x23, 0xe7, 0xc4, 0xab, 0xd1, 0xf4, 0x12, 0x9b, 0xf5, 0xd0, 0xb7, 0x4c, 0x2a, 0xf2, 0x02, 0x0f,
0x6a, 0x85, 0xa4, 0xc8, 0xee, 0x3a, 0xe6, 0xfc, 0x3f, 0x37, 0xf4, 0xdd, 0x96, 0x77, 0x3d, 0xb8,
0xa1, 0xbd, 0x32, 0xbe, 0xde, 0x4d, 0x3a, 0xd3, 0xdf, 0x5d, 0x7c, 0x56, 0xb2, 0xae, 0xf8, 0x5a,
0xdc, 0x64, 0x39, 0x0f, 0x56, 0x0a, 0x42, 0x42, 0xb0, 0xc1, 0x57, 0x09, 0xe8, 0x90, 0x43, 0x4f,
0xd7, 0xe4, 0x29, 0x36, 0x54, 0x91, 0x56, 0x39, 0x4e, 0xe7, 0xe7, 0x0f, 0xc1, 0x1b, 0x79, 0x91,
0x82, 0xa7, 0x39, 0xa5, 0x5e, 0xb2, 0x2f, 0x40, 0xbb, 0x36, 0x72, 0xba, 0x9e, 0xae, 0x89, 0x8d,
0xcd, 0x14, 0xb2, 0x84, 0xc9, 0x2a, 0xa5, 0x61, 0x23, 0x67, 0xe4, 0xb5, 0x5b, 0xe4, 0x31, 0xc6,
0x89, 0x08, 0xd9, 0x9a, 0x41, 0xb8, 0x94, 0xf4, 0x58, 0x6b, 0x87, 0x87, 0xce, 0x35, 0xa1, 0xb8,
0x1f, 0x42, 0x0c, 0x0a, 0x42, 0xda, 0xb3, 0x91, 0x33, 0xf0, 0x0e, 0xb0, 0x7c, 0x61, 0x7c, 0xbb,
0x8a, 0x59, 0x48, 0xfb, 0xd5, 0x4b, 0x0d, 0xc9, 0x13, 0x7c, 0xca, 0xc5, 0xb2, 0xed, 0x3b, 0xd0,
0x84, 0x11, 0x17, 0xef, 0x5b, 0xce, 0xad, 0xbb, 0x0c, 0xff, 0xee, 0x2e, 0x63, 0x3c, 0x90, 0xf0,
0x39, 0x07, 0x1e, 0x00, 0xc5, 0x3a, 0x69, 0x83, 0xc9, 0x04, 0x9b, 0xcd, 0x1e, 0x5c, 0x52, 0xd3,
0x46, 0xce, 0xb1, 0xd7, 0xac, 0xf6, 0x4e, 0x92, 0x8f, 0x2d, 0x82, 0x5f, 0xd0, 0x13, 0x1b, 0x39,
0xc6, 0xe2, 0x75, 0x69, 0xf0, 0xfd, 0x7e, 0x72, 0xf1, 0x0f, 0x3f, 0xcd, 0xbd, 0xde, 0x88, 0x4c,
0x5d, 0xbd, 0x79, 0x98, 0xbe, 0x28, 0xca, 0x9d, 0x65, 0x91, 0xc4, 0x8c, 0x7f, 0x5a, 0xaa, 0x55,
0x16, 0x81, 0xa2, 0x67, 0xfa, 0x8c, 0xa3, 0xba, 0x7b, 0xa3, 0x9b, 0xd5, 0xfd, 0x17, 0x8f, 0x76,
0x3f, 0xad, 0xce, 0x6e, 0x6f, 0xa1, 0x6f, 0x7b, 0x0b, 0xfd, 0xd8, 0x5b, 0x9d, 0xbb, 0x5f, 0x16,
0xf2, 0x7b, 0xda, 0xe0, 0xe2, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x9e, 0xcd, 0xe3, 0xfd, 0x38,
0x03, 0x00, 0x00,
}


@@ -28,7 +28,7 @@ message FileInfoTruncated {
uint32 permissions = 4;
int64 modified_s = 5;
int32 modified_ns = 11;
uint64 modified_by = 12 [(gogoproto.customtype) = "protocol.ShortID", (gogoproto.nullable) = false];
uint64 modified_by = 12 [(gogoproto.customtype) = "github.com/syncthing/syncthing/lib/protocol.ShortID", (gogoproto.nullable) = false];
bool deleted = 6;
bool invalid = 7;
bool no_permissions = 8;


@@ -5,7 +5,7 @@
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate go run ../../script/protofmt.go local.proto
//go:generate protoc -I ../../../../../ -I ../../../../gogo/protobuf/protobuf -I . --gogofast_out=. local.proto
//go:generate protoc -I ../../vendor/ -I ../../vendor/github.com/gogo/protobuf/protobuf -I . --gogofast_out=. local.proto
package discover


@@ -29,7 +29,9 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type Announce struct {
ID github_com_syncthing_syncthing_lib_protocol.DeviceID `protobuf:"bytes,1,opt,name=id,proto3,customtype=github.com/syncthing/syncthing/lib/protocol.DeviceID" json:"id"`
@@ -45,77 +47,77 @@ func (*Announce) Descriptor() ([]byte, []int) { return fileDescriptorLocal, []in
func init() {
proto.RegisterType((*Announce)(nil), "discover.Announce")
}
func (m *Announce) Marshal() (data []byte, err error) {
func (m *Announce) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *Announce) MarshalTo(data []byte) (int, error) {
func (m *Announce) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintLocal(data, i, uint64(m.ID.ProtoSize()))
n1, err := m.ID.MarshalTo(data[i:])
i = encodeVarintLocal(dAtA, i, uint64(m.ID.ProtoSize()))
n1, err := m.ID.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n1
if len(m.Addresses) > 0 {
for _, s := range m.Addresses {
data[i] = 0x12
dAtA[i] = 0x12
i++
l = len(s)
for l >= 1<<7 {
data[i] = uint8(uint64(l)&0x7f | 0x80)
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
data[i] = uint8(l)
dAtA[i] = uint8(l)
i++
i += copy(data[i:], s)
i += copy(dAtA[i:], s)
}
}
if m.InstanceID != 0 {
data[i] = 0x18
dAtA[i] = 0x18
i++
i = encodeVarintLocal(data, i, uint64(m.InstanceID))
i = encodeVarintLocal(dAtA, i, uint64(m.InstanceID))
}
return i, nil
}
func encodeFixed64Local(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
func encodeFixed64Local(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Local(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
func encodeFixed32Local(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintLocal(data []byte, offset int, v uint64) int {
func encodeVarintLocal(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
dAtA[offset] = uint8(v)
return offset + 1
}
func (m *Announce) ProtoSize() (n int) {
@@ -148,8 +150,8 @@ func sovLocal(x uint64) (n int) {
func sozLocal(x uint64) (n int) {
return sovLocal(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *Announce) Unmarshal(data []byte) error {
l := len(data)
func (m *Announce) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -161,7 +163,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -189,7 +191,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -203,7 +205,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.ID.Unmarshal(data[iNdEx:postIndex]); err != nil {
if err := m.ID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -219,7 +221,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -234,7 +236,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, string(data[iNdEx:postIndex]))
m.Addresses = append(m.Addresses, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
case 3:
if wireType != 0 {
@@ -248,7 +250,7 @@ func (m *Announce) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
m.InstanceID |= (int64(b) & 0x7F) << shift
if b < 0x80 {
@@ -257,7 +259,7 @@ func (m *Announce) Unmarshal(data []byte) error {
}
default:
iNdEx = preIndex
skippy, err := skipLocal(data[iNdEx:])
skippy, err := skipLocal(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -276,8 +278,8 @@ func (m *Announce) Unmarshal(data []byte) error {
}
return nil
}
func skipLocal(data []byte) (n int, err error) {
l := len(data)
func skipLocal(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -288,7 +290,7 @@ func skipLocal(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -306,7 +308,7 @@ func skipLocal(data []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
if dAtA[iNdEx-1] < 0x80 {
break
}
}
@@ -323,7 +325,7 @@ func skipLocal(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -346,7 +348,7 @@ func skipLocal(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -357,7 +359,7 @@ func skipLocal(data []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipLocal(data[start:])
next, err := skipLocal(dAtA[start:])
if err != nil {
return 0, err
}
@@ -381,21 +383,24 @@ var (
ErrIntOverflowLocal = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("local.proto", fileDescriptorLocal) }
var fileDescriptorLocal = []byte{
// 235 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xce, 0xc9, 0x4f, 0x4e,
0xcc, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x48, 0xc9, 0x2c, 0x4e, 0xce, 0x2f, 0x4b,
0x2d, 0x92, 0xd2, 0x4d, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf,
0x4f, 0xcf, 0xd7, 0x07, 0x2b, 0x48, 0x2a, 0x4d, 0x03, 0xf3, 0xc0, 0x1c, 0x30, 0x0b, 0xa2, 0x51,
0x69, 0x2d, 0x23, 0x17, 0x87, 0x63, 0x5e, 0x5e, 0x7e, 0x69, 0x5e, 0x72, 0xaa, 0x50, 0x10, 0x17,
0x53, 0x66, 0x8a, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x8f, 0x93, 0xd3, 0x89, 0x7b, 0xf2, 0x0c, 0xb7,
0xee, 0xc9, 0x9b, 0x20, 0x99, 0x57, 0x5c, 0x99, 0x97, 0x5c, 0x92, 0x91, 0x99, 0x97, 0x8e, 0xc4,
0xca, 0xc9, 0x4c, 0x82, 0x58, 0x91, 0x9c, 0x9f, 0xa3, 0xe7, 0x92, 0x5a, 0x96, 0x99, 0x9c, 0xea,
0xe9, 0xf2, 0xe8, 0x9e, 0x3c, 0x93, 0xa7, 0x4b, 0x10, 0xd0, 0x34, 0x21, 0x19, 0x2e, 0xce, 0xc4,
0x94, 0x94, 0xa2, 0xd4, 0xe2, 0xe2, 0xd4, 0x62, 0x09, 0x26, 0x05, 0x66, 0x0d, 0xce, 0x20, 0x84,
0x80, 0x90, 0x3e, 0x17, 0x77, 0x66, 0x5e, 0x71, 0x49, 0x22, 0xd0, 0xf6, 0x78, 0xa0, 0xd5, 0xcc,
0x40, 0xab, 0x99, 0x9d, 0xf8, 0x80, 0xda, 0xb9, 0x3c, 0xa1, 0xc2, 0x40, 0x63, 0xb8, 0x60, 0x4a,
0x3c, 0x53, 0x9c, 0x44, 0x4e, 0x3c, 0x94, 0x63, 0x38, 0xf1, 0x48, 0x8e, 0xf1, 0x02, 0x10, 0x3f,
0x78, 0x24, 0xc7, 0xb0, 0xe0, 0xb1, 0x1c, 0x63, 0x12, 0x1b, 0xd8, 0x05, 0xc6, 0x80, 0x00, 0x00,
0x00, 0xff, 0xff, 0xa4, 0x46, 0x4f, 0x13, 0x14, 0x01, 0x00, 0x00,
// 241 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x4c, 0x8e, 0x4f, 0x4e, 0x84, 0x30,
0x14, 0xc6, 0x29, 0x24, 0x66, 0xa6, 0x63, 0x5c, 0x10, 0x17, 0xc4, 0x98, 0x42, 0x5c, 0xb1, 0x11,
0x16, 0x7a, 0x01, 0x09, 0x9b, 0x6e, 0xb9, 0x80, 0x81, 0xb6, 0x32, 0x2f, 0xc1, 0x3e, 0x43, 0x61,
0x12, 0x6f, 0xe3, 0x05, 0xbc, 0x07, 0x4b, 0xd7, 0x2e, 0x1a, 0xad, 0x17, 0x31, 0xe9, 0x68, 0x86,
0xdd, 0xf7, 0xfd, 0xf2, 0x7b, 0x7f, 0xe8, 0x6e, 0x40, 0xd1, 0x0e, 0xc5, 0xcb, 0x88, 0x13, 0xc6,
0x1b, 0x09, 0x46, 0xe0, 0x41, 0x8d, 0x57, 0xb7, 0x3d, 0x4c, 0xfb, 0xb9, 0x2b, 0x04, 0x3e, 0x97,
0x3d, 0xf6, 0x58, 0x7a, 0xa1, 0x9b, 0x9f, 0x7c, 0xf3, 0xc5, 0xa7, 0xe3, 0xe0, 0xcd, 0x3b, 0xa1,
0x9b, 0x07, 0xad, 0x71, 0xd6, 0x42, 0xc5, 0x0d, 0x0d, 0x41, 0x26, 0x24, 0x23, 0xf9, 0x79, 0x55,
0x2d, 0x36, 0x0d, 0x3e, 0x6d, 0x7a, 0xbf, 0xda, 0x67, 0x5e, 0xb5, 0x98, 0xf6, 0xa0, 0xfb, 0x55,
0x1a, 0xa0, 0x3b, 0x9e, 0x10, 0x38, 0x14, 0xb5, 0x3a, 0x80, 0x50, 0xbc, 0x76, 0x36, 0x0d, 0x79,
0xdd, 0x84, 0x20, 0xe3, 0x6b, 0xba, 0x6d, 0xa5, 0x1c, 0x95, 0x31, 0xca, 0x24, 0x61, 0x16, 0xe5,
0xdb, 0xe6, 0x04, 0xe2, 0x92, 0xee, 0x40, 0x9b, 0xa9, 0xd5, 0x42, 0x3d, 0x82, 0x4c, 0xa2, 0x8c,
0xe4, 0x51, 0x75, 0xe1, 0x6c, 0x4a, 0xf9, 0x1f, 0xe6, 0x75, 0x43, 0xff, 0x15, 0x2e, 0xab, 0xcb,
0xe5, 0x9b, 0x05, 0x8b, 0x63, 0xe4, 0xc3, 0x31, 0xf2, 0xe5, 0x58, 0xf0, 0xf6, 0xc3, 0x48, 0x77,
0xe6, 0x3f, 0xb8, 0xfb, 0x0d, 0x00, 0x00, 0xff, 0xff, 0xa4, 0x46, 0x4f, 0x13, 0x14, 0x01, 0x00,
0x00,
}


@@ -121,7 +121,6 @@ var (
errDevicePaused = errors.New("device is paused")
errDeviceIgnored = errors.New("device is ignored")
errNotRelative = errors.New("not a relative path")
errNotDir = errors.New("parent is not a directory")
)
// NewModel creates and starts a new model. The model starts in read-only mode,
@@ -330,6 +329,11 @@ func (m *Model) RemoveFolder(folder string) {
m.fmut.Lock()
m.pmut.Lock()
// Delete syncthing specific files
folderCfg := m.folderCfgs[folder]
folderPath := folderCfg.Path()
os.Remove(filepath.Join(folderPath, ".stfolder"))
m.tearDownFolderLocked(folder)
// Remove it from the database
db.DropFolder(m.db, folder)
@@ -1154,8 +1158,8 @@ func (m *Model) Request(deviceID protocol.DeviceID, folder, name string, offset
return protocol.ErrNoSuchFile
}
if !osutil.IsDir(folderPath, filepath.Dir(name)) {
l.Debugf("%v REQ(in) for file not in dir: %s: %q / %q o=%d s=%d", m, deviceID, folder, name, offset, len(buf))
if err := osutil.TraversesSymlink(folderPath, filepath.Dir(name)); err != nil {
l.Debugf("%v REQ(in) traversal check: %s - %s: %q / %q o=%d s=%d", m, err, deviceID, folder, name, offset, len(buf))
return protocol.ErrNoSuchFile
}
@@ -2438,6 +2442,9 @@ func (m *Model) CommitConfiguration(from, to config.Configuration) bool {
from.Options.ListenAddresses = to.Options.ListenAddresses
from.Options.RelaysEnabled = to.Options.RelaysEnabled
from.Options.UnackedNotificationIDs = to.Options.UnackedNotificationIDs
from.Options.MaxRecvKbps = to.Options.MaxRecvKbps
from.Options.MaxSendKbps = to.Options.MaxSendKbps
from.Options.LimitBandwidthInLan = to.Options.LimitBandwidthInLan
// All of the other generic options require restart. Or at least they may;
// removing this check requires going through those options carefully and
// making sure there are individual services that handle them correctly.

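Not from the diff: the hunk above copies the live-applied rate-limit options onto the old configuration before the rest is compared, which is what lets them change without a restart. A generic sketch of that pattern; the struct, its fields and the DeepEqual comparison are stand-ins, not the code in this file:

package main

import (
    "fmt"
    "reflect"
)

// options is a stand-in for the real options struct; the fields are invented.
type options struct {
    MaxRecvKbps int
    MaxSendKbps int
    DeviceName  string
}

// commit reports whether the new options can be applied without a restart:
// fields that some service already handles on the fly are masked out first.
func commit(from, to options) bool {
    from.MaxRecvKbps = to.MaxRecvKbps
    from.MaxSendKbps = to.MaxSendKbps
    return reflect.DeepEqual(from, to)
}

func main() {
    old := options{DeviceName: "box"}
    fmt.Println(commit(old, options{MaxRecvKbps: 500, DeviceName: "box"}))   // true, no restart
    fmt.Println(commit(old, options{MaxRecvKbps: 500, DeviceName: "other"})) // false, restart
}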

@@ -2199,6 +2199,8 @@ func TestIssue3829(t *testing.T) {
}
func TestNoRequestsFromPausedDevices(t *testing.T) {
t.Skip("broken, fails randomly, #3843")
dbi := db.OpenMemory()
fcfg := config.NewFolderConfiguration("default", "testdata")


@@ -47,6 +47,7 @@ type pullBlockState struct {
type copyBlocksState struct {
*sharedPullerState
blocks []protocol.BlockInfo
have int
}
// Which filemode bits to preserve
@@ -430,8 +431,8 @@ func (f *sendReceiveFolder) pullerIteration(ignores *ignore.Matcher) int {
for _, fi := range processDirectly {
// Verify that the thing we are handling lives inside a directory,
// and not a symlink or empty space.
if !osutil.IsDir(f.dir, filepath.Dir(fi.Name)) {
f.newError(fi.Name, errNotDir)
if err := osutil.TraversesSymlink(f.dir, filepath.Dir(fi.Name)); err != nil {
f.newError(fi.Name, err)
continue
}
@@ -519,8 +520,8 @@ nextFile:
// Verify that the thing we are handling lives inside a directory,
// and not a symlink or empty space.
if !osutil.IsDir(f.dir, filepath.Dir(fi.Name)) {
f.newError(fi.Name, errNotDir)
if err := osutil.TraversesSymlink(f.dir, filepath.Dir(fi.Name)); err != nil {
f.newError(fi.Name, err)
continue
}
@@ -1003,7 +1004,9 @@ func (f *sendReceiveFolder) renameFile(source, target protocol.FileInfo) {
func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, copyChan chan<- copyBlocksState, finisherChan chan<- *sharedPullerState) {
curFile, hasCurFile := f.model.CurrentFolderFile(f.folderID, file.Name)
if hasCurFile && len(curFile.Blocks) == len(file.Blocks) && scanner.BlocksEqual(curFile.Blocks, file.Blocks) {
have, need := scanner.BlockDiff(curFile.Blocks, file.Blocks)
if hasCurFile && len(need) == 0 {
// We are supposed to copy the entire file, and then fetch nothing. We
// are only updating metadata, so we don't actually *need* to make the
// copy.
@@ -1158,6 +1161,7 @@ func (f *sendReceiveFolder) handleFile(file protocol.FileInfo, copyChan chan<- c
cs := copyBlocksState{
sharedPullerState: &s,
blocks: blocks,
have: len(have),
}
copyChan <- cs
}
@@ -1216,7 +1220,12 @@ func (f *sendReceiveFolder) copierRoutine(in <-chan copyBlocksState, pullChan ch
f.model.fmut.RUnlock()
var weakHashFinder *weakhash.Finder
if !f.DisableWeakHash {
blocksPercentChanged := 0
if tot := len(state.file.Blocks); tot > 0 {
blocksPercentChanged = (tot - state.have) * 100 / tot
}
if blocksPercentChanged >= f.WeakHashThresholdPct {
hashesToFind := make([]uint32, 0, len(state.blocks))
for _, block := range state.blocks {
if block.WeakHash != 0 {

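For orientation, not part of the diff: the copier only builds a weak-hash finder when the share of changed blocks reaches WeakHashThresholdPct (default 25, set in the config hunk earlier). A self-contained sketch of the arithmetic with made-up numbers; the sentinel values 101 and -1 are the ones the tests below use:

package main

import "fmt"

// shouldWeakHash mirrors the percentage check in copierRoutine:
// totalBlocks corresponds to len(state.file.Blocks), haveBlocks to state.have.
func shouldWeakHash(totalBlocks, haveBlocks, thresholdPct int) bool {
    changedPct := 0
    if totalBlocks > 0 {
        changedPct = (totalBlocks - haveBlocks) * 100 / totalBlocks
    }
    return changedPct >= thresholdPct
}

func main() {
    fmt.Println(shouldWeakHash(100, 90, 25)) // false: only 10% changed
    fmt.Println(shouldWeakHash(100, 40, 25)) // true: 60% changed

    fmt.Println(shouldWeakHash(100, 90, 101)) // false: 101 disables weak hashing
    fmt.Println(shouldWeakHash(100, 90, -1))  // true: -1 always uses it
}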

@@ -322,7 +322,7 @@ func TestWeakHash(t *testing.T) {
go fo.copierRoutine(copyChan, pullChan, finisherChan)
// Test 1 - no weak hashing, file gets fully repulled (`expectBlocks` pulls).
fo.DisableWeakHash = true
fo.WeakHashThresholdPct = 101
fo.handleFile(desiredFile, copyChan, finisherChan)
var pulls []pullBlockState
@@ -350,7 +350,7 @@ func TestWeakHash(t *testing.T) {
}
// Test 2 - using weak hash, expectPulls blocks pulled.
fo.DisableWeakHash = false
fo.WeakHashThresholdPct = -1
fo.handleFile(desiredFile, copyChan, finisherChan)
pulls = pulls[:0]


@@ -1,49 +0,0 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package osutil
import (
"os"
"path/filepath"
"strings"
)
// IsDir returns true if base and every path component of name up to and
// including filepath.Join(base, name) is a directory (and not a symlink or
// similar). Base and name must both be clean and name must be relative to
// base.
func IsDir(base, name string) bool {
path := base
info, err := Lstat(path)
if err != nil {
return false
}
if !info.IsDir() {
return false
}
if name == "." {
// The result of calling IsDir("some/where", filepath.Dir("foo"))
return true
}
parts := strings.Split(name, string(os.PathSeparator))
for _, part := range parts {
path = filepath.Join(path, part)
info, err := Lstat(path)
if err != nil {
return false
}
if info.Mode()&os.ModeSymlink != 0 {
return false
}
if !info.IsDir() {
return false
}
}
return true
}


@@ -0,0 +1,76 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package osutil
import (
"fmt"
"os"
"path/filepath"
"strings"
)
// TraversesSymlinkError is an error indicating symlink traversal
type TraversesSymlinkError struct {
path string
}
func (e TraversesSymlinkError) Error() string {
return fmt.Sprintf("traverses symlink: %s", e.path)
}
// NotADirectoryError is an error indicating an expected path is not a directory
type NotADirectoryError struct {
path string
}
func (e NotADirectoryError) Error() string {
return fmt.Sprintf("not a directory: %s", e.path)
}
// TraversesSymlink returns an error if base and any path component of name up to and
// including filepath.Join(base, name) traverses a symlink.
// Base and name must both be clean and name must be relative to base.
func TraversesSymlink(base, name string) error {
path := base
info, err := Lstat(path)
if err != nil {
return err
}
if !info.IsDir() {
return &NotADirectoryError{
path: base,
}
}
if name == "." {
// The result of calling TraversesSymlink("some/where", filepath.Dir("foo"))
return nil
}
parts := strings.Split(name, string(os.PathSeparator))
for _, part := range parts {
path = filepath.Join(path, part)
info, err := Lstat(path)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
if info.Mode()&os.ModeSymlink != 0 {
return &TraversesSymlinkError{
path: strings.TrimPrefix(path, base),
}
}
if !info.IsDir() {
return &NotADirectoryError{
path: strings.TrimPrefix(path, base),
}
}
}
return nil
}

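A usage sketch, not taken from the diff, assuming a build against this tree where lib/osutil exports TraversesSymlink; the temporary directory layout is invented. A path whose parent chain crosses a symlink is rejected, a plain directory is not:

package main

import (
    "fmt"
    "io/ioutil"
    "os"
    "path/filepath"

    "github.com/syncthing/syncthing/lib/osutil"
)

func main() {
    base, err := ioutil.TempDir("", "traverse-example")
    if err != nil {
        panic(err)
    }
    defer os.RemoveAll(base)

    // Layout: base/real plus base/link -> base/real (errors elided for brevity).
    os.Mkdir(filepath.Join(base, "real"), 0755)
    os.Symlink(filepath.Join(base, "real"), filepath.Join(base, "link"))

    for _, name := range []string{"real/file.txt", "link/file.txt"} {
        if err := osutil.TraversesSymlink(base, filepath.Dir(name)); err != nil {
            fmt.Printf("%s: rejected: %v\n", name, err)
        } else {
            fmt.Printf("%s: ok\n", name)
        }
    }
}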

@@ -14,7 +14,7 @@ import (
"github.com/syncthing/syncthing/lib/symlinks"
)
func TestIsDir(t *testing.T) {
func TestTraversesSymlink(t *testing.T) {
if !symlinks.Supported {
t.Skip("pointless test")
return
@@ -35,40 +35,42 @@ func TestIsDir(t *testing.T) {
}
cases := []struct {
name string
isDir bool
name string
traverses bool
}{
// Exist
{".", true},
{"a", true},
{"a/b", true},
{"a/b/c", true},
{".", false},
{"a", false},
{"a/b", false},
{"a/b/c", false},
// Don't exist
{"x", false},
{"a/x", false},
{"a/b/x", false},
{"a/x/c", false},
// Symlink or behind symlink
{"a/l", false},
{"a/l/c", false},
{"a/l", true},
{"a/l/c", true},
// Non-existing behind a symlink
{"a/l/x", true},
}
for _, tc := range cases {
if res := osutil.IsDir("testdata", tc.name); res != tc.isDir {
t.Errorf("IsDir(%q) = %v, should be %v", tc.name, res, tc.isDir)
if res := osutil.TraversesSymlink("testdata", tc.name); tc.traverses == (res == nil) {
t.Errorf("TraversesSymlink(%q) = %v, should be %v", tc.name, res, tc.traverses)
}
}
}
var isDirResult bool
var traversesSymlinkResult error
func BenchmarkIsDir(b *testing.B) {
func BenchmarkTraversesSymlink(b *testing.B) {
os.RemoveAll("testdata")
defer os.RemoveAll("testdata")
os.MkdirAll("testdata/a/b/c", 0755)
for i := 0; i < b.N; i++ {
isDirResult = osutil.IsDir("testdata", "a/b/c")
traversesSymlinkResult = osutil.TraversesSymlink("testdata", "a/b/c")
}
b.ReportAllocs()

View File

File diff suppressed because it is too large

View File

@@ -1,5 +1,3 @@
// protoc protoc -I ../../../../../ -I ../../../../gogo/protobuf/protobuf -I . --gogofast_out=. message.proto
syntax = "proto3";
package protocol;
@@ -175,7 +173,7 @@ message FileDownloadProgressUpdate {
FileDownloadProgressUpdateType update_type = 1;
string name = 2;
Vector version = 3 [(gogoproto.nullable) = false];
repeated int32 block_indexes = 4;
repeated int32 block_indexes = 4 [packed=false];
}
enum FileDownloadProgressUpdateType {
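
For illustration (not from the diff): a hand-rolled Go sketch of the two wire encodings this option selects between for field 4. With [packed=false] every element carries its own field-4 varint tag (0x20); the packed form writes a single length-delimited field-4 record (0x22) followed by the varints. No protobuf library is used; the bytes follow the standard proto wire format.

package main

import "fmt"

func appendVarint(b []byte, v uint64) []byte {
    for v >= 0x80 {
        b = append(b, byte(v)|0x80)
        v >>= 7
    }
    return append(b, byte(v))
}

func main() {
    indexes := []uint64{1, 2, 300}

    var unpacked []byte
    for _, v := range indexes {
        unpacked = append(unpacked, 0x20) // field 4, wire type 0 (varint)
        unpacked = appendVarint(unpacked, v)
    }

    var body []byte
    for _, v := range indexes {
        body = appendVarint(body, v)
    }
    packed := append([]byte{0x22}, appendVarint(nil, uint64(len(body)))...) // field 4, wire type 2
    packed = append(packed, body...)

    fmt.Printf("unpacked: % x\n", unpacked) // 20 01 20 02 20 ac 02
    fmt.Printf("packed:   % x\n", packed)   // 22 04 01 02 ac 02
}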

View File

@@ -1,7 +1,7 @@
// Copyright (C) 2014 The Protocol Authors.
//go:generate go run ../../script/protofmt.go bep.proto
//go:generate protoc -I ../../../../../ -I ../../../../gogo/protobuf/protobuf -I . --gogofast_out=. bep.proto
//go:generate protoc -I ../../vendor/ -I ../../vendor/github.com/gogo/protobuf/protobuf -I . --gogofast_out=. bep.proto
package protocol

View File

@@ -1,7 +1,7 @@
// Copyright (C) 2014 The Protocol Authors.
//go:generate go run ../../script/protofmt.go deviceid_test.proto
//go:generate protoc -I ../../../../../ -I ../../../../gogo/protobuf/protobuf -I . --gogofast_out=. deviceid_test.proto
//go:generate protoc -I ../../vendor/ -I ../../vendor/github.com/gogo/protobuf/protobuf -I . --gogofast_out=. deviceid_test.proto
package protocol

View File

@@ -28,7 +28,9 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type TestOldDeviceID struct {
Test []byte `protobuf:"bytes,1,opt,name=test,proto3" json:"test,omitempty"`
@@ -52,49 +54,49 @@ func init() {
proto.RegisterType((*TestOldDeviceID)(nil), "protocol.TestOldDeviceID")
proto.RegisterType((*TestNewDeviceID)(nil), "protocol.TestNewDeviceID")
}
func (m *TestOldDeviceID) Marshal() (data []byte, err error) {
func (m *TestOldDeviceID) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *TestOldDeviceID) MarshalTo(data []byte) (int, error) {
func (m *TestOldDeviceID) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Test) > 0 {
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintDeviceidTest(data, i, uint64(len(m.Test)))
i += copy(data[i:], m.Test)
i = encodeVarintDeviceidTest(dAtA, i, uint64(len(m.Test)))
i += copy(dAtA[i:], m.Test)
}
return i, nil
}
func (m *TestNewDeviceID) Marshal() (data []byte, err error) {
func (m *TestNewDeviceID) Marshal() (dAtA []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *TestNewDeviceID) MarshalTo(data []byte) (int, error) {
func (m *TestNewDeviceID) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintDeviceidTest(data, i, uint64(m.Test.ProtoSize()))
n1, err := m.Test.MarshalTo(data[i:])
i = encodeVarintDeviceidTest(dAtA, i, uint64(m.Test.ProtoSize()))
n1, err := m.Test.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
@@ -102,31 +104,31 @@ func (m *TestNewDeviceID) MarshalTo(data []byte) (int, error) {
return i, nil
}
func encodeFixed64DeviceidTest(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
func encodeFixed64DeviceidTest(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32DeviceidTest(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
func encodeFixed32DeviceidTest(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintDeviceidTest(data []byte, offset int, v uint64) int {
func encodeVarintDeviceidTest(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
dAtA[offset] = uint8(v)
return offset + 1
}
func (m *TestOldDeviceID) ProtoSize() (n int) {
@@ -160,8 +162,8 @@ func sovDeviceidTest(x uint64) (n int) {
func sozDeviceidTest(x uint64) (n int) {
return sovDeviceidTest(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *TestOldDeviceID) Unmarshal(data []byte) error {
l := len(data)
func (m *TestOldDeviceID) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -173,7 +175,7 @@ func (m *TestOldDeviceID) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -201,7 +203,7 @@ func (m *TestOldDeviceID) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -215,14 +217,14 @@ func (m *TestOldDeviceID) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Test = append(m.Test[:0], data[iNdEx:postIndex]...)
m.Test = append(m.Test[:0], dAtA[iNdEx:postIndex]...)
if m.Test == nil {
m.Test = []byte{}
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipDeviceidTest(data[iNdEx:])
skippy, err := skipDeviceidTest(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -241,8 +243,8 @@ func (m *TestOldDeviceID) Unmarshal(data []byte) error {
}
return nil
}
func (m *TestNewDeviceID) Unmarshal(data []byte) error {
l := len(data)
func (m *TestNewDeviceID) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -254,7 +256,7 @@ func (m *TestNewDeviceID) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -282,7 +284,7 @@ func (m *TestNewDeviceID) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -296,13 +298,13 @@ func (m *TestNewDeviceID) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.Test.Unmarshal(data[iNdEx:postIndex]); err != nil {
if err := m.Test.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipDeviceidTest(data[iNdEx:])
skippy, err := skipDeviceidTest(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -321,8 +323,8 @@ func (m *TestNewDeviceID) Unmarshal(data []byte) error {
}
return nil
}
func skipDeviceidTest(data []byte) (n int, err error) {
l := len(data)
func skipDeviceidTest(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -333,7 +335,7 @@ func skipDeviceidTest(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -351,7 +353,7 @@ func skipDeviceidTest(data []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
if dAtA[iNdEx-1] < 0x80 {
break
}
}
@@ -368,7 +370,7 @@ func skipDeviceidTest(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -391,7 +393,7 @@ func skipDeviceidTest(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -402,7 +404,7 @@ func skipDeviceidTest(data []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipDeviceidTest(data[start:])
next, err := skipDeviceidTest(dAtA[start:])
if err != nil {
return 0, err
}
@@ -426,17 +428,19 @@ var (
ErrIntOverflowDeviceidTest = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("deviceid_test.proto", fileDescriptorDeviceidTest) }
var fileDescriptorDeviceidTest = []byte{
// 171 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x12, 0x4e, 0x49, 0x2d, 0xcb,
// 176 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4e, 0x49, 0x2d, 0xcb,
0x4c, 0x4e, 0xcd, 0x4c, 0x89, 0x2f, 0x49, 0x2d, 0x2e, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17,
0xe2, 0x00, 0x53, 0xc9, 0xf9, 0x39, 0x52, 0xba, 0xe9, 0x99, 0x25, 0x19, 0xa5, 0x49, 0x7a, 0xc9,
0xf9, 0xb9, 0xfa, 0xe9, 0xf9, 0xe9, 0xf9, 0xfa, 0x60, 0x99, 0xa4, 0xd2, 0x34, 0x30, 0x0f, 0xcc,
0x01, 0xb3, 0x20, 0x1a, 0x95, 0x54, 0xb9, 0xf8, 0x43, 0x80, 0xc6, 0xf8, 0xe7, 0xa4, 0xb8, 0x80,
0x8d, 0xf5, 0x74, 0x11, 0x12, 0xe2, 0x62, 0x01, 0x99, 0x2c, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0x13,
0x04, 0x66, 0x2b, 0x99, 0x43, 0x94, 0xf9, 0xa5, 0x96, 0xc3, 0x95, 0xa9, 0x20, 0x2b, 0x73, 0x12,
0x38, 0x71, 0x4f, 0x9e, 0xe1, 0xd6, 0x3d, 0x79, 0x0e, 0x98, 0x3c, 0x44, 0xa3, 0x93, 0xcc, 0x89,
0x87, 0x72, 0x0c, 0x17, 0x80, 0xf8, 0xc4, 0x23, 0x39, 0xc6, 0x0b, 0x40, 0xfc, 0xe0, 0x91, 0x1c,
0xc3, 0x0b, 0x20, 0x5e, 0xf0, 0x58, 0x8e, 0x31, 0x89, 0x0d, 0xec, 0x08, 0x63, 0x40, 0x00, 0x00,
0x00, 0xff, 0xff, 0x35, 0x9c, 0x00, 0x78, 0xd4, 0x00, 0x00, 0x00,
0x01, 0xb3, 0x20, 0x1a, 0x95, 0x54, 0xb9, 0xf8, 0x43, 0x52, 0x8b, 0x4b, 0xfc, 0x73, 0x52, 0x5c,
0xc0, 0xc6, 0x7a, 0xba, 0x08, 0x09, 0x71, 0xb1, 0x80, 0x4c, 0x96, 0x60, 0x54, 0x60, 0xd4, 0xe0,
0x09, 0x02, 0xb3, 0x95, 0xcc, 0x21, 0xca, 0xfc, 0x52, 0xcb, 0xe1, 0xca, 0x54, 0x90, 0x95, 0x39,
0x09, 0x9c, 0xb8, 0x27, 0xcf, 0x70, 0xeb, 0x9e, 0x3c, 0x07, 0x4c, 0x1e, 0xa2, 0xd1, 0x49, 0xe6,
0xc4, 0x43, 0x39, 0x86, 0x0b, 0x0f, 0xe5, 0x18, 0x4e, 0x3c, 0x92, 0x63, 0xbc, 0xf0, 0x48, 0x8e,
0xf1, 0xc1, 0x23, 0x39, 0x86, 0x17, 0x8f, 0xe4, 0x18, 0x16, 0x3c, 0x96, 0x63, 0x4c, 0x62, 0x03,
0x3b, 0xc2, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x35, 0x9c, 0x00, 0x78, 0xd4, 0x00, 0x00, 0x00,
}

View File

@@ -13,6 +13,8 @@ import (
"testing/quick"
"time"
"encoding/hex"
"github.com/syncthing/syncthing/lib/rand"
)
@@ -179,6 +181,37 @@ func TestMarshalCloseMessage(t *testing.T) {
}
}
func TestMarshalFDPU(t *testing.T) {
if testing.Short() {
quickCfg.MaxCount = 10
}
f := func(m1 FileDownloadProgressUpdate) bool {
if len(m1.Version.Counters) == 0 {
m1.Version.Counters = nil
}
return testMarshal(t, "close", &m1, &FileDownloadProgressUpdate{})
}
if err := quick.Check(f, quickCfg); err != nil {
t.Error(err)
}
}
func TestUnmarshalFDPUv16v17(t *testing.T) {
var fdpu FileDownloadProgressUpdate
m0, _ := hex.DecodeString("08cda1e2e3011278f3918787f3b89b8af2958887f0aa9389f3a08588f3aa8f96f39aa8a5f48b9188f19286a0f3848da4f3aba799f3beb489f0a285b9f487b684f2a3bda2f48598b4f2938a89f2a28badf187a0a2f2aebdbdf4849494f4808fbbf2b3a2adf2bb95bff0a6ada4f198ab9af29a9c8bf1abb793f3baabb2f188a6ba1a0020bb9390f60220f6d9e42220b0c7e2b2fdffffffff0120fdb2dfcdfbffffffff0120cedab1d50120bd8784c0feffffffff0120ace99591fdffffffff0120eed7d09af9ffffffff01")
if err := fdpu.Unmarshal(m0); err != nil {
t.Fatal("Unmarshalling message from v0.14.16:", err)
}
m1, _ := hex.DecodeString("0880f1969905128401f099b192f0abb1b9f3b280aff19e9aa2f3b89e84f484b39df1a7a6b0f1aea4b1f0adac94f3b39caaf1939281f1928a8af0abb1b0f0a8b3b3f3a88e94f2bd85acf29c97a9f2969da6f0b7a188f1908ea2f09a9c9bf19d86a6f29aada8f389bb95f0bf9d88f1a09d89f1b1a4b5f29b9eabf298a59df1b2a589f2979ebdf0b69880f18986b21a440a1508c7d8fb8897ca93d90910e8c4d8e8f2f8f0ccee010a1508afa8ffd8c085b393c50110e5bdedc3bddefe9b0b0a1408a1bedddba4cac5da3c10b8e5d9958ca7e3ec19225ae2f88cb2f8ffffffff018ceda99cfbffffffff01b9c298a407e295e8e9fcffffffff01f3b9ade5fcffffffff01c08bfea9fdffffffff01a2c2e5e1ffffffffff0186dcc5dafdffffffff01e9ffc7e507c9d89db8fdffffffff01")
if err := fdpu.Unmarshal(m1); err != nil {
t.Fatal("Unmarshalling message from v0.14.16:", err)
}
}
func testMarshal(t *testing.T, prefix string, m1, m2 message) bool {
buf, err := m1.Marshal()
if err != nil {
@@ -194,7 +227,7 @@ func testMarshal(t *testing.T, prefix string, m1, m2 message) bool {
bs2, _ := json.MarshalIndent(m2, "", " ")
if !bytes.Equal(bs1, bs2) {
ioutil.WriteFile(prefix+"-1.txt", bs1, 0644)
ioutil.WriteFile(prefix+"-2.txt", bs1, 0644)
ioutil.WriteFile(prefix+"-2.txt", bs2, 0644)
return false
}

View File

@@ -11,9 +11,9 @@ import (
"fmt"
"io"
"github.com/chmduquesne/rollinghash/adler32"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sha256"
"github.com/syncthing/syncthing/lib/weakhash"
)
var SHA256OfNothing = []uint8{0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55}
@@ -26,7 +26,8 @@ type Counter interface {
func Blocks(r io.Reader, blocksize int, sizehint int64, counter Counter) ([]protocol.BlockInfo, error) {
hf := sha256.New()
hashLength := hf.Size()
whf := weakhash.NewHash(blocksize)
whf := adler32.New()
mhf := io.MultiWriter(hf, whf)
var blocks []protocol.BlockInfo
var hashes, thisHash []byte
@@ -46,7 +47,7 @@ func Blocks(r io.Reader, blocksize int, sizehint int64, counter Counter) ([]prot
var offset int64
for {
lr := io.LimitReader(r, int64(blocksize))
n, err := io.CopyBuffer(hf, io.TeeReader(lr, whf), buf)
n, err := io.CopyBuffer(mhf, lr, buf)
if err != nil {
return nil, err
}
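
Not part of the diff: a self-contained sketch of the io.MultiWriter pattern introduced above, with standard-library hashes (crypto/sha256 and hash/adler32 standing in for the vendored packages) and io.Copy in place of CopyBuffer so it runs on its own. One pass over a size-limited reader feeds both the strong and the weak hash.

package main

import (
    "crypto/sha256"
    "fmt"
    "hash/adler32"
    "io"
    "strings"
)

func main() {
    strong := sha256.New()
    weak := adler32.New()
    both := io.MultiWriter(strong, weak)

    // Read at most one block's worth of data and hash it once.
    block := io.LimitReader(strings.NewReader("some block of data"), 128<<10)
    if _, err := io.Copy(both, block); err != nil {
        panic(err)
    }
    fmt.Printf("sha256:  %x\n", strong.Sum(nil))
    fmt.Printf("adler32: %08x\n", weak.Sum32())
}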

View File

@@ -122,3 +122,25 @@ func TestDiff(t *testing.T) {
}
}
}
func TestDiffEmpty(t *testing.T) {
emptyCases := []struct {
a []protocol.BlockInfo
b []protocol.BlockInfo
need int
have int
}{
{nil, nil, 0, 0},
{[]protocol.BlockInfo{{Offset: 3, Size: 1}}, nil, 0, 0},
{nil, []protocol.BlockInfo{{Offset: 3, Size: 1}}, 1, 0},
}
for _, emptyCase := range emptyCases {
h, n := BlockDiff(emptyCase.a, emptyCase.b)
if len(h) != emptyCase.have {
t.Errorf("incorrect have: %d != %d", len(h), emptyCase.have)
}
if len(n) != emptyCase.need {
t.Errorf("incorrect have: %d != %d", len(h), emptyCase.have)
}
}
}

View File

@@ -169,13 +169,14 @@ func (w *walker) walk() (chan protocol.FileInfo, error) {
realToHashChan := make(chan protocol.FileInfo)
done := make(chan struct{})
progress := newByteCounter()
defer progress.Close()
newParallelHasher(w.Dir, w.BlockSize, w.Hashers, finishedChan, realToHashChan, progress, done, w.Cancel)
// A routine which actually emits the FolderScanProgress events
// every w.ProgressTicker ticks, until the hasher routines terminate.
go func() {
defer progress.Close()
for {
select {
case <-done:
@@ -185,7 +186,7 @@ func (w *walker) walk() (chan protocol.FileInfo, error) {
case <-ticker.C:
current := progress.Total()
rate := progress.Rate()
l.Debugf("Walk %s %s current progress %d/%d at %.01f MB/s (%d%%)", w.Dir, w.Subs, current, total, rate/1024/1024, current*100/total)
l.Debugf("Walk %s %s current progress %d/%d at %.01f MiB/s (%d%%)", w.Dir, w.Subs, current, total, rate/1024/1024, current*100/total)
events.Default.Log(events.FolderScanProgress, map[string]interface{}{
"folder": w.Folder,
"current": current,
@@ -277,11 +278,14 @@ func (w *walker) walkAndHashFiles(fchan, dchan chan protocol.FileInfo) filepath.
switch {
case info.Mode()&os.ModeSymlink == os.ModeSymlink:
var shouldSkip bool
shouldSkip, err = w.walkSymlink(absPath, relPath, dchan)
if err == nil && shouldSkip {
return skip
if err := w.walkSymlink(absPath, relPath, dchan); err != nil {
return err
}
if info.IsDir() {
// under no circumstances shall we descend into a symlink
return filepath.SkipDir
}
return nil
case info.Mode().IsDir():
err = w.walkDir(relPath, info, dchan)
@@ -375,17 +379,14 @@ func (w *walker) walkDir(relPath string, info os.FileInfo, dchan chan protocol.F
return nil
}
// walkSymlinks returns true if the symlink should be skipped, or an error if
// we should stop walking altogether. filepath.Walk isn't supposed to
// transcend into symlinks at all, but there are rumours that this may have
// happened anyway under some circumstances, possibly Windows reparse points
// or something. Hence the "skip" return from this one.
func (w *walker) walkSymlink(absPath, relPath string, dchan chan protocol.FileInfo) (skip bool, err error) {
// walkSymlink returns nil or an error, if the error is of the nature that
// it should stop the entire walk.
func (w *walker) walkSymlink(absPath, relPath string, dchan chan protocol.FileInfo) error {
// If the target is a directory, do NOT descend down there. This will
// cause files to get tracked, and removing the symlink will as a result
// remove files in their real location.
if !symlinks.Supported {
return true, nil
return nil
}
// We always rehash symlinks as they have no modtime or
@@ -396,7 +397,7 @@ func (w *walker) walkSymlink(absPath, relPath string, dchan chan protocol.FileIn
target, targetType, err := symlinks.Read(absPath)
if err != nil {
l.Debugln("readlink error:", absPath, err)
return true, nil
return nil
}
// A symlink is "unchanged", if
@@ -408,7 +409,7 @@ func (w *walker) walkSymlink(absPath, relPath string, dchan chan protocol.FileIn
// - the target was the same
cf, ok := w.CurrentFiler.CurrentFile(relPath)
if ok && !cf.IsDeleted() && cf.IsSymlink() && !cf.IsInvalid() && SymlinkTypeEqual(targetType, cf) && cf.SymlinkTarget == target {
return true, nil
return nil
}
f := protocol.FileInfo{
@@ -424,10 +425,10 @@ func (w *walker) walkSymlink(absPath, relPath string, dchan chan protocol.FileIn
select {
case dchan <- f:
case <-w.Cancel:
return false, errors.New("cancelled")
return errors.New("cancelled")
}
return false, nil
return nil
}
// normalizePath returns the normalized relative path (possibly after fixing

View File

@@ -9,6 +9,7 @@ package sha256
import (
"crypto/rand"
cryptoSha256 "crypto/sha256"
"encoding/hex"
"fmt"
"hash"
"os"
@@ -66,6 +67,8 @@ func SelectAlgo() {
// implementation as it may be disabled for incompatibility reasons.
cryptoPerf = cpuBenchOnce(benchmarkingIterations*benchmarkingDuration, cryptoSha256.New)
}
verifyCorrectness()
}
// Report prints a line with the measured hash performance rates for the
@@ -134,3 +137,24 @@ func formatRate(rate float64) string {
}
return fmt.Sprintf("%.*f MB/s", decimals, rate)
}
func verifyCorrectness() {
// The currently selected algo should in fact perform a SHA256 calculation.
// $ echo "Syncthing Magic Testing Value" | openssl dgst -sha256 -hex
correct := "87f6cfd24131724c6ec43495594c5c22abc7d2b86bcc134bc6f10b7ec3dda4ee"
input := "Syncthing Magic Testing Value\n"
h := New()
h.Write([]byte(input))
sum := hex.EncodeToString(h.Sum(nil))
if sum != correct {
panic("sha256 is broken")
}
arr := Sum256([]byte(input))
sum = hex.EncodeToString(arr[:])
if sum != correct {
panic("sha256 is broken")
}
}
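
The same check can be reproduced outside the package with the standard library alone; a small sketch using the constant and input string above:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func main() {
    const correct = "87f6cfd24131724c6ec43495594c5c22abc7d2b86bcc134bc6f10b7ec3dda4ee"
    sum := sha256.Sum256([]byte("Syncthing Magic Testing Value\n"))
    // Prints true when the selected implementation behaves like crypto/sha256.
    fmt.Println(hex.EncodeToString(sum[:]) == correct)
}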

View File

@@ -9,9 +9,12 @@ package weakhash
import (
"os"
"testing"
"github.com/chmduquesne/rollinghash/adler32"
)
const testFile = "../model/testdata/~syncthing~file.tmp"
const size = 128 << 10
func BenchmarkFind1MFile(b *testing.B) {
b.ReportAllocs()
@@ -21,10 +24,38 @@ func BenchmarkFind1MFile(b *testing.B) {
if err != nil {
b.Fatal(err)
}
_, err = Find(fd, []uint32{0, 1, 2}, 128<<10)
_, err = Find(fd, []uint32{0, 1, 2}, size)
if err != nil {
b.Fatal(err)
}
fd.Close()
}
}
func BenchmarkWeakHashAdler32(b *testing.B) {
data := make([]byte, size)
hf := adler32.New()
for i := 0; i < b.N; i++ {
hf.Write(data)
}
_ = hf.Sum32()
b.SetBytes(size)
}
func BenchmarkWeakHashAdler32Roll(b *testing.B) {
data := make([]byte, size)
hf := adler32.New()
hf.Write(data)
b.ResetTimer()
for i := 0; i < b.N; i++ {
for i := 0; i <= size; i++ {
hf.Roll('a')
}
}
b.SetBytes(size)
}

View File

@@ -8,22 +8,16 @@ package weakhash
import (
"bufio"
"hash"
"io"
"os"
"github.com/chmduquesne/rollinghash/adler32"
)
const (
Size = 4
)
func NewHash(size int) hash.Hash32 {
return &digest{
buf: make([]byte, size),
size: size,
}
}
// Find finds all the blocks of the given size within io.Reader that matches
// the hashes provided, and returns a hash -> slice of offsets within reader
// map, that produces the same weak hash.
@@ -33,7 +27,7 @@ func Find(ir io.Reader, hashesToFind []uint32, size int) (map[uint32][]int64, er
}
r := bufio.NewReader(ir)
hf := NewHash(size)
hf := adler32.New()
n, err := io.CopyN(hf, r, int64(size))
if err == io.EOF {
@@ -66,56 +60,11 @@ func Find(ir io.Reader, hashesToFind []uint32, size int) (map[uint32][]int64, er
} else if err != nil {
return offsets, err
}
hf.Write([]byte{bt})
hf.Roll(bt)
}
return offsets, nil
}
// Using this: http://tutorials.jenkov.com/rsync/checksums.html
// Example implementations: https://gist.github.com/csabahenk/1096262/revisions
// Alternative that could be used is adler32 http://blog.liw.fi/posts/rsync-in-python/#comment-fee8d5e07794fdba3fe2d76aa2706a13
type digest struct {
buf []byte
size int
a uint16
b uint16
j int
}
func (d *digest) Write(data []byte) (int, error) {
for _, c := range data {
// TODO: Use this in Go 1.6
// d.a = d.a - uint16(d.buf[d.j]) + uint16(c)
// d.b = d.b - uint16(d.size)*uint16(d.buf[d.j]) + d.a
d.a -= uint16(d.buf[d.j])
d.a += uint16(c)
d.b -= uint16(d.size) * uint16(d.buf[d.j])
d.b += d.a
d.buf[d.j] = c
d.j = (d.j + 1) % d.size
}
return len(data), nil
}
func (d *digest) Reset() {
for i := range d.buf {
d.buf[i] = 0x0
}
d.a = 0
d.b = 0
d.j = 0
}
func (d *digest) Sum(b []byte) []byte {
r := d.Sum32()
return append(b, byte(r>>24), byte(r>>16), byte(r>>8), byte(r))
}
func (d *digest) Sum32() uint32 { return uint32(d.a) | (uint32(d.b) << 16) }
func (digest) Size() int { return Size }
func (digest) BlockSize() int { return 1 }
func NewFinder(path string, size int, hashesToFind []uint32) (*Finder, error) {
file, err := os.Open(path)
if err != nil {

View File

@@ -18,129 +18,6 @@ import (
)
var payload = []byte("abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz")
var hashes = []uint32{
64225674,
64881038,
65536402,
66191766,
66847130,
67502494,
68157858,
68813222,
69468586,
70123950,
70779314,
71434678,
72090042,
72745406,
73400770,
74056134,
74711498,
75366862,
76022226,
76677590,
77332954,
77988318,
78643682,
77595084,
74842550,
70386080,
64225674,
64881038,
65536402,
66191766,
66847130,
67502494,
68157858,
68813222,
69468586,
70123950,
70779314,
71434678,
72090042,
72745406,
73400770,
74056134,
74711498,
75366862,
76022226,
76677590,
77332954,
77988318,
78643682,
77595084,
74842550,
70386080,
64225674,
64881038,
65536402,
66191766,
66847130,
67502494,
68157858,
68813222,
69468586,
70123950,
70779314,
71434678,
72090042,
72745406,
73400770,
74056134,
74711498,
75366862,
76022226,
76677590,
77332954,
77988318,
78643682,
77595084,
74842550,
70386080,
64225674,
64881038,
65536402,
66191766,
66847130,
67502494,
68157858,
68813222,
69468586,
70123950,
70779314,
71434678,
72090042,
72745406,
73400770,
74056134,
74711498,
75366862,
76022226,
76677590,
77332954,
77988318,
78643682,
71893365,
71893365,
}
// Tested using an alternative C implementation at https://gist.github.com/csabahenk/1096262
func TestHashCorrect(t *testing.T) {
h := NewHash(Size)
pos := 0
for pos < Size {
h.Write([]byte{payload[pos]})
pos++
}
for i := 0; pos < len(payload); i++ {
if h.Sum32() != hashes[i] {
t.Errorf("mismatch at %d", i)
}
h.Write([]byte{payload[pos]})
pos++
}
}
func TestFinder(t *testing.T) {
f, err := ioutil.TempFile("", "")
@@ -154,7 +31,7 @@ func TestFinder(t *testing.T) {
t.Error(err)
}
hashes := []uint32{64881038, 65536402}
hashes := []uint32{65143183, 65798547}
finder, err := NewFinder(f.Name(), 4, hashes)
if err != nil {
t.Error(err)
@@ -162,8 +39,8 @@ func TestFinder(t *testing.T) {
defer finder.Close()
expected := map[uint32][]int64{
64881038: []int64{1, 27, 53, 79},
65536402: []int64{2, 28, 54, 80},
65143183: []int64{1, 27, 53, 79},
65798547: []int64{2, 28, 54, 80},
}
actual := make(map[uint32][]int64)

View File

@@ -1,4 +1,4 @@
#!/bin/sh
#!/bin/bash
base=https://docs.syncthing.net/man/
pages=(

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "STDISCOSRV" "1" "December 22, 2016" "v0.14" "Syncthing"
.TH "STDISCOSRV" "1" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
stdiscosrv \- Syncthing Discovery Server
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "STRELAYSRV" "1" "December 22, 2016" "v0.14" "Syncthing"
.TH "STRELAYSRV" "1" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
strelaysrv \- Syncthing Relay Server
.
@@ -37,8 +37,10 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nf
.ft C
strelaysrv [\-debug] [\-ext\-address=<address>] [\-global\-rate=<bytes/s>] [\-keys=<dir>] [\-listen=<listen addr>]
[\-message\-timeout=<duration>] [\-network\-timeout=<duration>] [\-per\-session\-rate=<bytes/s>]
[\-ping\-interval=<duration>] [\-pools=<pool addresses>] [\-provided\-by=<string>] [\-status\-srv=<listen addr>]
[\-message\-timeout=<duration>] [\-nat] [\-nat\-lease=<duration> [\-nat\-renewal=<duration>]
[\-nat\-timeout=<duration>] [\-network\-timeout=<duration>] [\-per\-session\-rate=<bytes/s>]
[\-ping\-interval=<duration>] [\-pools=<pool addresses>] [\-protocol=<string>] [\-provided\-by=<string>]
[\-status\-srv=<listen addr>]
.ft P
.fi
.UNINDENT
@@ -84,6 +86,32 @@ Maximum amount of time we wait for relevant messages to arrive (default 1m0s).
.UNINDENT
.INDENT 0.0
.TP
.B \-nat
Use UPnP/NAT\-PMP to acquire external port mapping
.UNINDENT
.sp
\&..cmdoption:: \-nat\-lease=<duration>
.INDENT 0.0
.INDENT 3.5
NAT lease length in minutes (default 60)
.UNINDENT
.UNINDENT
.sp
\&..cmdoption:: \-nat\-renewal=<duration>
.INDENT 0.0
.INDENT 3.5
NAT renewal frequency in minutes (default 30)
.UNINDENT
.UNINDENT
.sp
\&..cmdoption:: \-nat\-timeout=<duration>
.INDENT 0.0
.INDENT 3.5
NAT discovery timeout in seconds (default 10)
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B \-network\-timeout=<duration>
Timeout for network operations between the client and the relay. If no data
is received between the client and the relay in this period of time, the
@@ -110,6 +138,11 @@ a pool, thereby remaining a private relay.
.UNINDENT
.INDENT 0.0
.TP
.B \-protocol=<string>
Protocol used for listening. \(aqtcp\(aq for IPv4 and IPv6, \(aqtcp4\(aq for IPv4, \(aqtcp6\(aq for IPv6 (default "tcp").
.UNINDENT
.INDENT 0.0
.TP
.B \-provided\-by=<string>
An optional description about who provides the relay.
.UNINDENT
@@ -147,6 +180,23 @@ global relay pool, unless a \fB\-pools=""\fP argument is given.
.sp
To make the relay server start automatically at boot, use the recommended
procedure for your operating system.
.SS Client configuration
.sp
Syncthing can be configured to use specific relay servers (exclusively of the public pool) by adding the required servers to the Sync Protocol Listen Address field, under Actions and Settings. The format is as follows:
.INDENT 0.0
.INDENT 3.5
relay://<host name|IP>[:port]/?id=<relay device ID>
.UNINDENT
.UNINDENT
.sp
For example:
.INDENT 0.0
.INDENT 3.5
relay://private\-relay\-1.example.com:443/?id=ITZRNXE\-YNROGBZ\-HXTH5P7\-VK5NYE5\-QHRQGE2\-7JQ6VNJ\-KZUEDIU\-5PPR5AM
.UNINDENT
.UNINDENT
.sp
The relay\(aqs device ID is output on start\-up.
.SS Running on port 443 as an unprivileged user
.sp
It is recommended that you run the relay on port 443 (or another port which is

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-BEP" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-BEP" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-bep \- Block Exchange Protocol v1
.
@@ -748,9 +748,9 @@ directions.
.fi
.UNINDENT
.UNINDENT
.SS Read Only
.SS Send Only
.sp
In read only mode, a device does not apply any updates from the cluster, but
In send\-only mode, a device does not apply any updates from the cluster, but
publishes changes of its local folder to the cluster as usual. The local
folder can be seen as a "master copy" that is never affected by the actions
of other cluster devices.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-CONFIG" "5" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-CONFIG" "5" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-config \- Syncthing Configuration
.
@@ -208,7 +208,7 @@ Controls how the folder is handled by Syncthing. Possible values are:
The folder is in default mode. Sending local and accepting remote changes.
.TP
.B readonly
The folder is in "master" mode \-\- it will not be modified by
The folder is in "send\-only" mode \-\- it will not be modified by
Syncthing on this device.
.UNINDENT
.TP
@@ -363,6 +363,15 @@ Disable all compression.
.B introducer
Set to true if this device should be trusted as an introducer, i.e. we
should copy their list of devices per folder when connecting.
.UNINDENT
.sp
\fBSEE ALSO:\fP
.INDENT 0.0
.INDENT 3.5
introducer
.UNINDENT
.UNINDENT
.INDENT 0.0
.TP
.B skipIntroductionRemovals
Set to true if you wish to follow only introductions and not de\-introductions.
@@ -722,14 +731,14 @@ accidentally if you sync your home folder between devices. A common symptom
of syncing configuration files is two devices ending up with the same Device ID.
.sp
If you want to use Syncthing to backup your configuration files, it is recommended
that the files you are backing up are in a folder\-master to prevent other
that the files you are backing up are in a folder\-sendonly to prevent other
devices from overwriting the per device configuration. The folder on the remote
device(s) should not be used as configuration for the remote devices.
.sp
If you\(aqd like to sync your home folder in non\-master mode, you may add the
If you\(aqd like to sync your home folder in non\-send\-only mode, you may add the
folder that stores the configuration files to the ignore list\&.
If you\(aqd also like to backup your configuration files, add another folder in
master mode for just the configuration folder.
send\-only mode for just the configuration folder.
.SH AUTHOR
The Syncthing Authors
.SH COPYRIGHT

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-DEVICE-IDS" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-DEVICE-IDS" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-device-ids \- Understanding Device IDs
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-EVENT-API" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-EVENT-API" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-event-api \- Event API
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-FAQ" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-FAQ" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-faq \- Frequently Asked Questions
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-GLOBALDISCO" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-GLOBALDISCO" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-globaldisco \- Global Discovery Protocol v3
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-LOCALDISCO" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-LOCALDISCO" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-localdisco \- Local Discovery Protocol v4
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-NETWORKING" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-NETWORKING" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-networking \- Firewall Setup
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-RELAY" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-RELAY" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-relay \- Relay Protocol v1
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-REST-API" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-REST-API" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-rest-api \- REST API
.
@@ -776,7 +776,7 @@ This is an expensive call, increasing CPU and RAM usage on the device. Use spari
.UNINDENT
.SS POST /rest/db/override
.sp
Request override of a master folder.
Request override of a send\-only folder.
Takes the mandatory parameter \fIfolder\fP (folder ID).
.INDENT 0.0
.INDENT 3.5

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-SECURITY" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-SECURITY" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-security \- Security Principles
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-STIGNORE" "5" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-STIGNORE" "5" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-stignore \- Prevent files from being synchronized to other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING-VERSIONING" "7" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING-VERSIONING" "7" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing-versioning \- Keep automatic backups of deleted files by other nodes
.

View File

@@ -1,6 +1,6 @@
.\" Man page generated from reStructuredText.
.
.TH "SYNCTHING" "1" "December 22, 2016" "v0.14" "Syncthing"
.TH "SYNCTHING" "1" "January 07, 2017" "v0.14" "Syncthing"
.SH NAME
syncthing \- Syncthing
.
@@ -37,9 +37,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nf
.ft C
syncthing [\-audit] [\-browser\-only] [\-generate=<dir>] [\-gui\-address=<address>] [\-gui\-apikey=<key>]
[\-home=<dir>] [\-logfile=<filename>] [\-logflags=<flags>] [\-no\-browser]
[\-no\-console] [\-no\-restart] [\-paths] [\-paused] [\-reset] [\-upgrade] [\-upgrade\-check]
[\-upgrade\-to=<url>] [\-verbose] [\-version]
[\-home=<dir>] [\-logfile=<filename>] [\-logflags=<flags>] [\-no\-browser] [\-no\-console]
[\-no\-restart] [\-paths] [\-paused] [\-reset\-database] [\-reset\-deltas] [\-upgrade]
[\-upgrade\-check] [\-upgrade\-to=<url>] [\-verbose] [\-version]
.ft P
.fi
.UNINDENT
@@ -123,8 +123,13 @@ Print the paths used for configuration, keys, database, GUI overrides, default s
.UNINDENT
.INDENT 0.0
.TP
.B \-reset
Reset the database.
.B \-reset\-database
Reset the database, forcing a full rescan and resync.
.UNINDENT
.INDENT 0.0
.TP
.B \-reset\-deltas
Reset delta index IDs, forcing a full index exchange.
.UNINDENT
.INDENT 0.0
.TP
@@ -185,7 +190,7 @@ exit. For example, \fB128 + 9 (SIGKILL) = 137\fP\&.
.sp
The following environment variables modify Syncthing\(aqs behavior in ways that
are mostly useful for developers. Use with care.
If you start syncthing from within service managers like systemd or supervisor
If you start Syncthing from within service managers like systemd or supervisor,
path expansion may not be supported.
.INDENT 0.0
.TP
@@ -197,48 +202,87 @@ variable will be ignored anytime after the first run.
Directory to load GUI assets from. Overrides compiled in assets.
.TP
.B STTRACE
A comma separated string of facilities to trace. The valid facility strings
Used to increase the debugging verbosity for specific or all facilities, each generally mapping to a Go package. Enabling any of these also enables microsecond timestamps and file names with line numbers. Enter a comma\-separated string of facilities to trace. \fBsyncthing \-help\fP always outputs an up\-to\-date list. The valid facility strings
are:
.INDENT 7.0
.TP
.B beacon
the beacon package
.B Main and operational facilities:
.INDENT 7.0
.TP
.B discover
the discover package
.TP
.B events
the events package
.TP
.B files
the files package
.TP
.B http
the main package; HTTP requests
.TP
.B locks
the sync package; trace long held locks
.TP
.B net
the main package; connections & network messages
.B main
Main package.
.TP
.B model
the model package
The root hub; the largest chunk of the system. File pulling, index transmission and requests for chunks.
.TP
.B config
Configuration loading and saving.
.TP
.B db
The database layer.
.TP
.B scanner
the scanner package
File change detection and hashing.
.TP
.B stats
the stats package
.B versioner
File versioning.
.UNINDENT
.TP
.B Networking facilities:
.INDENT 7.0
.TP
.B beacon
Multicast and broadcast discovery packets.
.TP
.B connections
Connection handling.
.TP
.B dialer
Dialing connections.
.TP
.B discover
Remote device discovery requests, replies and registration of devices.
.TP
.B relay
Relay interaction.
.TP
.B protocol
The BEP protocol.
.TP
.B nat
NAT discovery and port mapping.
.TP
.B pmp
NAT\-PMP discovery and port mapping.
.TP
.B upnp
the upnp package
UPnP discovery and port mapping.
.UNINDENT
.TP
.B xdr
the xdr package
.B Other facilities:
.INDENT 7.0
.TP
.B events
Event generation and logging.
.TP
.B http
REST API.
.TP
.B sha256
SHA256 hashing package (this facility currently unused).
.TP
.B stats
Persistent device and folder statistics.
.TP
.B sync
Mutexes. Used for debugging race conditions and deadlocks.
.TP
.B upgrade
Binary upgrades.
.TP
.B all
all of the above
All of the above.
.UNINDENT
.UNINDENT
.TP
.B STPROFILER
@@ -259,9 +303,22 @@ Write block profiles to \fBblock\-$pid\-$timestamp.pprof\fP every 20 seconds.
Write running performance statistics to \fBperf\-$pid.csv\fP\&. Not supported on
Windows.
.TP
.B STDEADLOCK
Placeholder
.TP
.B STDEADLOCKTIMEOUT
Placeholder
.TP
.B STDEADLOCKTHRESHOLD
Placeholder
.TP
.B STNOUPGRADE
Disable automatic upgrades.
.TP
.B STHASHING
Specify which hashing package to use. Defaults to automatic selection based on
performance. Specify "minio" (compatibility) or "standard" for the default Go implementation.
.TP
.B GOMAXPROCS
Set the maximum number of CPU cores to use. Defaults to all available CPU
cores.
@@ -269,7 +326,7 @@ cores.
.B GOGC
Percentage of heap growth at which to trigger GC. Default is 100. Lower
numbers keep peak memory usage down, at the price of CPU usage
(ie. performance).
(i.e. performance).
.UNINDENT
.SH SEE ALSO
.sp

View File

@@ -0,0 +1,97 @@
// Package rollinghash/adler32 implements a rolling version of hash/adler32
package adler32
import (
vanilla "hash/adler32"
"github.com/chmduquesne/rollinghash"
)
const (
mod = 65521
)
const Size = 4
type digest struct {
a, b uint32
// window is treated like a circular buffer, where the oldest element
// is indicated by d.oldest
window []byte
oldest int
n uint32
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.a = 1
d.b = 0
d.window = nil
d.oldest = 0
}
// New returns a new rollinghash.Hash32 computing the rolling Adler-32
// checksum. The window is copied from the last Write(). This window is
// only used to determine which is the oldest element (leaving the
// window). The calls to Roll() do not recompute the whole checksum.
func New() rollinghash.Hash32 {
return &digest{a: 1, b: 0, window: nil, oldest: 0}
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int { return Size }
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all
// writes are a multiple of the block size.
func (d *digest) BlockSize() int { return 1 }
// Write (via the embedded io.Writer interface) adds more data to the
// running hash. It never returns an error.
func (d *digest) Write(p []byte) (int, error) {
// Copy the window
d.window = make([]byte, len(p))
copy(d.window, p)
// Piggy-back on the core implementation
h := vanilla.New()
h.Write(p)
s := h.Sum32()
d.a, d.b = s&0xffff, s>>16
d.n = uint32(len(p)) % mod
return len(d.window), nil
}
func (d *digest) Sum32() uint32 {
return d.b<<16 | d.a
}
func (d *digest) Sum(b []byte) []byte {
v := d.Sum32()
return append(b, byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}
// Roll updates the checksum of the window from the leaving byte and the
// entering byte. See
// http://stackoverflow.com/questions/40985080/why-does-my-rolling-adler32-checksum-not-work-in-go-modulo-arithmetic
func (d *digest) Roll(b byte) {
if len(d.window) == 0 {
d.window = make([]byte, 1)
d.window[0] = b
}
// extract the entering/leaving bytes and update the circular buffer.
enter := uint32(b)
leave := uint32(d.window[d.oldest])
d.window[d.oldest] = b
d.oldest += 1
if d.oldest >= len(d.window) {
d.oldest = 0
}
// compute
d.a = (d.a + mod + enter - leave) % mod
d.b = (d.b + (d.n*leave/mod+1)*mod + d.a - (d.n * leave) - 1) % mod
}
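
Not part of the diff: a short sketch of the rolling property this package provides, using the import path already referenced by lib/scanner and lib/weakhash. After a single Write of the window, each Roll slides the window by one byte and yields the same Sum32 as hashing the shifted window from scratch.

package main

import (
    "fmt"

    "github.com/chmduquesne/rollinghash/adler32"
)

func main() {
    data := []byte("abcdefgh")
    const window = 4

    rolling := adler32.New()
    rolling.Write(data[:window]) // checksum of "abcd"

    for i := window; i < len(data); i++ {
        rolling.Roll(data[i]) // window is now data[i-window+1 : i+1]

        fresh := adler32.New()
        fresh.Write(data[i-window+1 : i+1])
        fmt.Println(rolling.Sum32() == fresh.Sum32()) // true on each iteration
    }
}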

View File

@@ -0,0 +1,143 @@
// Package rollinghash/buzhash implements buzhash as described by
// https://en.wikipedia.org/wiki/Rolling_hash#Cyclic_polynomial
package buzhash
import rollinghash "github.com/chmduquesne/rollinghash"
// 256 random integers generated with a dummy python script
var bytehash = [256]uint32{
0xa5659a00, 0x2dbfda02, 0xac29a407, 0xce942c08, 0x48513609,
0x325f158, 0xb54e5e13, 0xa9063618, 0xa5793419, 0x554b081a,
0xe5643dac, 0xfb50e41c, 0x2b31661d, 0x335da61f, 0xe702f7b0,
0xe31c1424, 0x6dfed825, 0xd30cf628, 0xba626a2a, 0x74b9c22b,
0xa5d1942d, 0xf364ae2f, 0x70d2e84c, 0x190ad208, 0x92e3b740,
0xd7e9f435, 0x15763836, 0x930ecab4, 0x641ea65e, 0xc0b2eb0a,
0x2675e03e, 0x1a24c63f, 0xeddbcbb7, 0x3ea42bb2, 0x815f5849,
0xa55c284b, 0xbb30964c, 0x6f7acc4e, 0x74538a50, 0x66df9652,
0x2bae8454, 0xfe9d8055, 0x8c866fd4, 0x82f0a63d, 0x8f26365e,
0xe66c3460, 0x6423266, 0x60696abc, 0xf75de6d, 0xd20c86e,
0x69f8c6f, 0x8ac0f470, 0x273aab68, 0x4e044c74, 0xb2ec7875,
0xf642d676, 0xd719e877, 0xee557e78, 0xdd20be7a, 0xd252707e,
0xfa507a7f, 0xee537683, 0x6aac7684, 0x340e3485, 0x1c291288,
0xab89c8c, 0xbe6e6c8d, 0xf99cf2f7, 0x69c65890, 0xd3757491,
0xfeb63895, 0x67067a96, 0xa0089b19, 0x6c449898, 0x4eca749a,
0x1101229b, 0x6b86d29d, 0x9c21be9e, 0xc5904933, 0xe1e820a3,
0x6bd524a6, 0xd4695ea7, 0xc3d007e0, 0xbed8e4a9, 0x1c49d8af,
0xedbae4b1, 0x1d2af6b4, 0x79526b9, 0xbc1d5abb, 0x6a2eb8bc,
0x611b3695, 0x745c3cc4, 0x81005276, 0x5f442c8, 0x42dc30ca,
0x55e460cb, 0x47648cc, 0x20da7122, 0xc4eedccd, 0xc21c14d0,
0x27b5dfa9, 0x7e961fce, 0x8d0296d6, 0xce3684d7, 0x28e96da,
0xedf7dcdc, 0x6817a0df, 0x51caae0, 0x8f226e1, 0xa1a00ce3,
0xf811c6e5, 0x13e96ee6, 0xd4d4e4d1, 0xab160ee9, 0xb2cf06ea,
0xf4ab6eb, 0x998f56f1, 0x16974cf2, 0xd42438f5, 0xe00ba6f7,
0xbf01b8f8, 0x7a8a00f9, 0xdded6a7f, 0xb0ce58fd, 0xe5d81901,
0xcc823b03, 0xc962e704, 0x2b4aff05, 0x5bcb7181, 0xe7207108,
0xf3c93109, 0x1ffb650a, 0x37a31ad7, 0xfe27322d, 0x15b16d11,
0x51a70512, 0xb579d92e, 0x53658284, 0x91fedb1b, 0x2ef0b122,
0x93966523, 0xfa66af26, 0xa7fac32b, 0x7a81692c, 0x4f8d7f2e,
0xf9875730, 0xa5ab2331, 0x79db8333, 0x8be32937, 0xf900af39,
0xd09d4f3a, 0x9b22053d, 0xd2053e1c, 0xd0deaa35, 0x4a975740,
0xcb3706e0, 0x40aea6cd, 0x769fdd44, 0x7e3e4947, 0xc20ac949,
0x3788c34b, 0x9b23f74c, 0xb33e441d, 0x705d8a8d, 0x6a5e3a84,
0xb4f955e3, 0xf681a155, 0x7dec1b56, 0x7bf5df58, 0xd3fa255a,
0x3797c15c, 0xbf511562, 0xb048d65, 0xcd04f367, 0xae3a8368,
0x769c856d, 0xc7bb9d6f, 0xe43e1f71, 0xa24de03e, 0x7f8cb376,
0x618b778, 0x19e02f33, 0x2f810eea, 0x2b1ce595, 0x4f2f7180,
0x72903140, 0x26a44584, 0x6af97e96, 0xb08acb86, 0x4d25cd41,
0x1d74fd89, 0xe0f5b277, 0xbad158c, 0x5fed3b8d, 0x68b26794,
0xcbe58795, 0xc1180797, 0xa1352399, 0x71dacd9c, 0x42b5549a,
0xbf5371a0, 0x7ed41fa1, 0x6fe29a3, 0xa779fba5, 0x48a095a7,
0xc2cad5a8, 0x7d7f15a9, 0xccd195aa, 0x2a9047ac, 0x3ec66ef2,
0x252743ae, 0xdd8827af, 0x85fc5055, 0xb9d5c7b2, 0x5a224fb4,
0xec26e7b6, 0xe4d8f7b7, 0x6e5aa58d, 0xeff753b9, 0x6c391fbb,
0x989f65bc, 0x2fe4a7c1, 0x9d1d9bc3, 0xa09aadc6, 0x2df33fc8,
0x5ec27933, 0x5e7f41cb, 0xb920f7cd, 0xc1a603ce, 0xf0888fcf,
0xdc4ad1d1, 0x34b3dbd4, 0x170981d5, 0x22e5b5d6, 0x13049bd7,
0xf12a8b95, 0xff7e87d9, 0xabb74b84, 0x215cff4f, 0xaf24f7dc,
0xc87461d, 0x41a55e0, 0xfde9b9e1, 0x1d1956fb, 0x13d60de4,
0x435f93e5, 0xe0ab5de6, 0x5c1d3fe7, 0x411a1fe8, 0x55e102a9,
0x3d9b07eb, 0xdd6b8dee, 0x741293f3, 0xa5b10ca9, 0x5abad5fd,
0x22372f55,
}
// The size of the checksum.
const Size = 4
// digest represents the partial evaluation of a checksum.
type digest struct {
sum uint32
nRotate uint
nRotateComplement uint // redundant, but pre-computed to spare an operation
// window is treated like a circular buffer, where the oldest element
// is indicated by d.oldest
window []byte
oldest int
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.window = nil
d.oldest = 0
d.sum = 0
}
func New() rollinghash.Hash32 {
return &digest{sum: 0, window: nil, oldest: 0}
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int { return Size }
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all
// writes are a multiple of the block size.
func (d *digest) BlockSize() int { return 1 }
// Write (via the embedded io.Writer interface) adds more data to the
// running hash. It never returns an error.
func (d *digest) Write(data []byte) (int, error) {
// Copy the window
d.window = make([]byte, len(data))
copy(d.window, data)
for _, c := range d.window {
d.sum = d.sum<<1 | d.sum>>31
d.sum ^= bytehash[int(c)]
}
d.nRotate = uint(len(d.window)) % 32
d.nRotateComplement = 32 - d.nRotate
return len(d.window), nil
}
func (d *digest) Sum32() uint32 {
return d.sum
}
func (d *digest) Sum(b []byte) []byte {
v := d.Sum32()
return append(b, byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}
// Roll updates the checksum of the window from the leaving byte and the
// entering byte.
func (d *digest) Roll(c byte) {
if len(d.window) == 0 {
d.window = make([]byte, 1)
d.window[0] = c
}
// extract the entering/leaving bytes and update the circular buffer.
hn := bytehash[int(c)]
h0 := bytehash[int(d.window[d.oldest])]
d.window[d.oldest] = c
l := len(d.window)
d.oldest += 1
if d.oldest >= l {
d.oldest = 0
}
d.sum = (d.sum<<1 | d.sum>>31) ^ (h0<<d.nRotate | h0>>d.nRotateComplement) ^ hn
}

View File

@@ -0,0 +1,89 @@
// Package rollinghash/rabinkarp32 implements a particular case of
// rabin-karp where the modulus is 0xffffffff (32 bits of '1')
package rabinkarp32
import rollinghash "github.com/chmduquesne/rollinghash"
// The size of a rabinkarp32 checksum.
const Size = 4
// digest represents the partial evaluation of a checksum.
type digest struct {
a uint32
h uint32
aPowerN uint32
// window is treated like a circular buffer, where the oldest element
// is indicated by d.oldest
window []byte
oldest int
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.h = 0
d.aPowerN = 1
d.window = nil
d.oldest = 0
}
func NewFromInt(a uint32) rollinghash.Hash32 {
return &digest{a: a, h: 0, aPowerN: 1, window: nil, oldest: 0}
}
func New() rollinghash.Hash32 {
return NewFromInt(65521) // largest prime fitting in 16 bits
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int { return Size }
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all
// writes are a multiple of the block size.
func (d *digest) BlockSize() int { return 1 }
// Write (via the embedded io.Writer interface) adds more data to the
// running hash. It never returns an error.
func (d *digest) Write(data []byte) (int, error) {
// Copy the window
d.window = make([]byte, len(data))
copy(d.window, data)
for _, c := range d.window {
d.h *= d.a
d.h += uint32(c)
d.aPowerN *= d.a
}
return len(d.window), nil
}
func (d *digest) Sum32() uint32 {
return d.h
}
func (d *digest) Sum(b []byte) []byte {
v := d.Sum32()
return append(b, byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}
// Roll updates the checksum of the window from the leaving byte and the
// entering byte.
func (d *digest) Roll(c byte) {
if len(d.window) == 0 {
d.window = make([]byte, 1)
d.window[0] = c
}
// extract the entering/leaving bytes and update the circular buffer.
enter := uint32(c)
leave := uint32(d.window[d.oldest])
d.window[d.oldest] = c
l := len(d.window)
d.oldest += 1
if d.oldest >= l {
d.oldest = 0
}
d.h = d.h*d.a + enter - leave*d.aPowerN
}

View File

@@ -0,0 +1,40 @@
/*
Package rollinghash implements rolling versions of some hashes
*/
package rollinghash
import "hash"
type Roller interface {
// Roll updates the hash of a rolling window from the entering byte.
// A copy of the window is internally kept from the last Write().
// This copy is updated along with the internal state of the checksum
// in order to determine the new hash very quickly.
Roll(b byte)
}
// rollinghash.Hash extends hash.Hash by adding the method Roll. A
// rollinghash.Hash can be updated byte by byte, by specifying which byte
// enters the window.
type Hash interface {
hash.Hash
Roller
}
// rollinghash.Hash32 extends hash.Hash by adding the method Roll. A
// rollinghash.Hash32 can be updated byte by byte, by specifying which
// byte enters the window.
type Hash32 interface {
hash.Hash32
Roller
}
// rollinghash.Hash64 extends hash.Hash by adding the method Roll. A
// rollinghash.Hash64 can be updated byte by byte, by specifying which
// byte enters the window.
type Hash64 interface {
hash.Hash64
Roller
}
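
Not part of the diff: a sketch of why these interfaces matter to the callers above. Code written against rollinghash.Hash32 works with any of the vendored digests (the adler32, buzhash and rabinkarp32 constructors all return this type), so the concrete weak hash can be swapped without touching the call sites. The import alias follows the style used by the packages above.

package main

import (
    "fmt"

    rollinghash "github.com/chmduquesne/rollinghash"
    "github.com/chmduquesne/rollinghash/adler32"
)

// slide prints the checksum of every window-sized slice of data, using one
// Write for the first window and Roll for each subsequent byte.
func slide(h rollinghash.Hash32, data []byte, window int) {
    h.Write(data[:window])
    fmt.Printf("%08x", h.Sum32())
    for _, b := range data[window:] {
        h.Roll(b)
        fmt.Printf(" %08x", h.Sum32())
    }
    fmt.Println()
}

func main() {
    slide(adler32.New(), []byte("the quick brown fox"), 8)
}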

3
vendor/github.com/gogo/protobuf/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,3 @@
._*
*.js
*.js.map

8
vendor/github.com/gogo/protobuf/.mailmap generated vendored Normal file
View File

@@ -0,0 +1,8 @@
Walter Schulze <awalterschulze@gmail.com> Walter Schulze <walter@vastech.co.za>
Walter Schulze <awalterschulze@gmail.com> <walter@vastech.co.za>
Walter Schulze <awalterschulze@gmail.com> awalterschulze <awalterschulze@gmail.com>
Walter Schulze <awalterschulze@gmail.com> awalterschulze@gmail.com <awalterschulze@gmail.com>
John Tuley <john@tuley.org> <jtuley@pivotal.io>
Anton Povarov <anton.povarov@gmail.com> <antoxa@corp.badoo.com>
Denis Smirnov <denis.smirnov.91@gmail.com> dennwc
DongYun Kang <ceram1000@gmail.com> <ceram1000@gmail.com>

25
vendor/github.com/gogo/protobuf/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,25 @@
env:
- PROTOBUF_VERSION=2.6.1
- PROTOBUF_VERSION=3.0.2
- PROTOBUF_VERSION=3.1.0
before_install:
- ./install-protobuf.sh
- PATH=/home/travis/bin:$PATH protoc --version
script:
- PATH=/home/travis/bin:$PATH make buildserverall
- echo $TRAVIS_GO_VERSION
- if [ "$TRAVIS_GO_VERSION" == 1.7.1 ] && [[ "$PROTOBUF_VERSION" == 3.1.0 ]]; then ! git status --porcelain | read || (git status; git diff; exit 1); fi
language: go
go:
- 1.5.4
- 1.6.3
- 1.7.1
matrix:
allow_failures:
- go: 1.5.4
- go: 1.6.3

14
vendor/github.com/gogo/protobuf/AUTHORS generated vendored Normal file
View File

@@ -0,0 +1,14 @@
# This is the official list of GoGo authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS file, which
# lists people. For example, employees are listed in CONTRIBUTORS,
# but not in AUTHORS, because the employer holds the copyright.
# Names should be added to this file as one of
# Organization's name
# Individual's name <submission email address>
# Individual's name <submission email address> <email2> <emailN>
# Please keep the list sorted.
Vastech SA (PTY) LTD
Walter Schulze <awalterschulze@gmail.com>

16
vendor/github.com/gogo/protobuf/CONTRIBUTORS generated vendored Normal file
View File

@@ -0,0 +1,16 @@
Anton Povarov <anton.povarov@gmail.com>
Clayton Coleman <ccoleman@redhat.com>
Denis Smirnov <denis.smirnov.91@gmail.com>
DongYun Kang <ceram1000@gmail.com>
Dwayne Schultz <dschultz@pivotal.io>
Georg Apitz <gapitz@pivotal.io>
Gustav Paul <gustav.paul@gmail.com>
Johan Brandhorst <johan.brandhorst@gmail.com>
John Tuley <john@tuley.org>
Laurent <laurent@adyoulike.com>
Patrick Lee <patrick@dropbox.com>
Stephen J Day <stephen.day@docker.com>
Tamir Duberstein <tamird@gmail.com>
Todd Eisenberger <teisenberger@dropbox.com>
Tormod Erevik Lea <tormodlea@gmail.com>
Walter Schulze <awalterschulze@gmail.com>

5
vendor/github.com/gogo/protobuf/GOLANG_CONTRIBUTORS generated vendored Normal file
View File

@@ -0,0 +1,5 @@
The contributors to the Go protobuf repository:
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.

View File

@@ -1,7 +1,7 @@
Extensions for Protocol Buffers to create more go like structures.
Protocol Buffers for Go with Gadgets
Copyright (c) 2013, Vastech SA (PTY) LTD. All rights reserved.
http://github.com/gogo/protobuf/gogoproto
Copyright (c) 2013, The GoGo Authors. All rights reserved.
http://github.com/gogo/protobuf
Go support for Protocol Buffers - Google's data interchange format

141
vendor/github.com/gogo/protobuf/Makefile generated vendored Normal file
View File

@@ -0,0 +1,141 @@
# Protocol Buffers for Go with Gadgets
#
# Copyright (c) 2013, The GoGo Authors. All rights reserved.
# http://github.com/gogo/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.PHONY: nuke regenerate tests clean install gofmt vet contributors
all: clean install regenerate install tests errcheck vet
buildserverall: clean install regenerate install tests vet js
install:
go install ./proto
go install ./gogoproto
go install ./jsonpb
go install ./protoc-gen-gogo
go install ./protoc-gen-gofast
go install ./protoc-gen-gogofast
go install ./protoc-gen-gogofaster
go install ./protoc-gen-gogoslick
go install ./protoc-gen-gostring
go install ./protoc-min-version
go install ./protoc-gen-combo
go install ./gogoreplace
clean:
go clean ./...
nuke:
go clean -i ./...
gofmt:
gofmt -l -s -w .
regenerate:
make -C protoc-gen-gogo/descriptor regenerate
make -C protoc-gen-gogo/plugin regenerate
make -C protoc-gen-gogo/testdata regenerate
make -C gogoproto regenerate
make -C proto/testdata regenerate
make -C jsonpb/jsonpb_test_proto regenerate
make -C _conformance regenerate
make -C types regenerate
make -C test regenerate
make -C test/example regenerate
make -C test/unrecognized regenerate
make -C test/group regenerate
make -C test/unrecognizedgroup regenerate
make -C test/enumstringer regenerate
make -C test/unmarshalmerge regenerate
make -C test/moredefaults regenerate
make -C test/issue8 regenerate
make -C test/enumprefix regenerate
make -C test/enumcustomname regenerate
make -C test/packed regenerate
make -C test/protosize regenerate
make -C test/tags regenerate
make -C test/oneof regenerate
make -C test/oneof3 regenerate
make -C test/theproto3 regenerate
make -C test/mapsproto2 regenerate
make -C test/issue42order regenerate
make -C proto generate-test-pbs
make -C test/importdedup regenerate
make -C test/custombytesnonstruct regenerate
make -C test/required regenerate
make -C test/casttype regenerate
make -C test/castvalue regenerate
make -C vanity/test regenerate
make -C test/sizeunderscore regenerate
make -C test/issue34 regenerate
make -C test/empty-issue70 regenerate
make -C test/indeximport-issue72 regenerate
make -C test/fuzztests regenerate
make -C test/oneofembed regenerate
make -C test/asymetric-issue125 regenerate
make -C test/filedotname regenerate
make -C test/nopackage regenerate
make -C test/types regenerate
make -C test/proto3extension regenerate
make -C test/stdtypes regenerate
make -C test/data regenerate
make gofmt
tests:
go build ./test/enumprefix
go test ./...
vet:
go vet ./...
go tool vet --shadow .
errcheck:
go get github.com/kisielk/errcheck
errcheck ./test/...
drone:
sudo apt-get install protobuf-compiler
(cd $(GOPATH)/src/github.com/gogo/protobuf && make buildserverall)
testall:
make -C protoc-gen-gogo/testdata test
make -C vanity/test test
make tests
bench:
(cd test/mixbench && go build .)
(cd test/mixbench && ./mixbench)
contributors:
git log --format='%aN <%aE>' | sort -fu > CONTRIBUTORS
js:
go get github.com/gopherjs/gopherjs
gopherjs build github.com/gogo/protobuf/protoc-gen-gogo
update:
(cd protobuf && make update)

258
vendor/github.com/gogo/protobuf/README generated vendored Normal file

@@ -0,0 +1,258 @@
GoGoProtobuf http://github.com/gogo/protobuf extends
GoProtobuf http://github.com/golang/protobuf
# Go support for Protocol Buffers
Google's data interchange format.
Copyright 2010 The Go Authors.
https://github.com/golang/protobuf
This package and the code it generates require at least Go 1.4.
This software implements Go bindings for protocol buffers. For
information about protocol buffers themselves, see
https://developers.google.com/protocol-buffers/
## Installation ##
To use this software, you must:
- Install the standard C++ implementation of protocol buffers from
https://developers.google.com/protocol-buffers/
- Of course, install the Go compiler and tools from
https://golang.org/
See
https://golang.org/doc/install
for details or, if you are using gccgo, follow the instructions at
https://golang.org/doc/install/gccgo
- Grab the code from the repository and install the proto package.
The simplest way is to run `go get -u github.com/golang/protobuf/{proto,protoc-gen-go}`.
The compiler plugin, protoc-gen-go, will be installed in $GOBIN,
defaulting to $GOPATH/bin. It must be in your $PATH for the protocol
compiler, protoc, to find it.
This software has two parts: a 'protocol compiler plugin' that
generates Go source files that, once compiled, can access and manage
protocol buffers; and a library that implements run-time support for
encoding (marshaling), decoding (unmarshaling), and accessing protocol
buffers.
There is support for gRPC in Go using protocol buffers.
See the note at the bottom of this file for details.
There are no insertion points in the plugin.
GoGoProtobuf provides extensions for protocol buffers and GoProtobuf;
see http://github.com/gogo/protobuf/gogoproto/doc.go
## Using protocol buffers with Go ##
Once the software is installed, there are two steps to using it.
First you must compile the protocol buffer definitions and then import
them, with the support library, into your program.
To compile the protocol buffer definition, run protoc with the --gogo_out
parameter set to the directory you want to output the Go code to.
protoc --gogo_out=. *.proto
The generated files will be suffixed .pb.go. See the Test code below
for an example using such a file.
The package comment for the proto library contains text describing
the interface provided in Go for protocol buffers. Here is an edited
version.
If you are using any gogo.proto extensions you will need to specify the
proto_path to include the descriptor.proto and gogo.proto.
gogo.proto is located in github.com/gogo/protobuf/gogoproto.
This should be fine, since your import is the same.
descriptor.proto is located in either github.com/gogo/protobuf/protobuf
or code.google.com/p/protobuf/trunk/src/.
Its import is google/protobuf/descriptor.proto, so it might need some help.
protoc --gogo_out=. -I=.:github.com/gogo/protobuf/protobuf *.proto
==========
The proto package converts data structures to and from the
wire format of protocol buffers. It works in concert with the
Go source code generated for .proto files by the protocol compiler.
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
Helpers for getting values are superseded by the
GetFoo methods and their use is deprecated.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed with the enum's type name. Enum types have
a String method, and an Enum method to assist in message construction.
- Nested groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Oneof field sets are given a single field in their message,
with distinguished wrapper types for each possible field value.
- Marshal and Unmarshal are functions to encode and decode the wire format.
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Getters are only generated for message and oneof fields.
- Enum types do not get an Enum method.
Consider file test.proto, containing
```proto
package example;
enum FOO { X = 17; };
message Test {
  required string label = 1;
  optional int32 type = 2 [default=77];
  repeated int64 reps = 3;
  optional group OptionalGroup = 4 {
    required string RequiredField = 5;
  }
}
```
To create and play with a Test object from the example package,
```go
package main

import (
	"log"
	"github.com/gogo/protobuf/proto"
	"path/to/example"
)

func main() {
	test := &example.Test{
		Label: proto.String("hello"),
		Type:  proto.Int32(17),
		Reps:  []int64{1, 2, 3},
		Optionalgroup: &example.Test_OptionalGroup{
			RequiredField: proto.String("good bye"),
		},
	}
	data, err := proto.Marshal(test)
	if err != nil {
		log.Fatal("marshaling error: ", err)
	}
	newTest := &example.Test{}
	err = proto.Unmarshal(data, newTest)
	if err != nil {
		log.Fatal("unmarshaling error: ", err)
	}
	// Now test and newTest contain the same data.
	if test.GetLabel() != newTest.GetLabel() {
		log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
	}
	// etc.
}
```
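The oneof wrapper types described above can be seen in the conformance messages vendored later in this diff. The following is an editor's sketch, not part of the upstream README, assuming the generated names used in _conformance/conformance.go:
```go
package main

import (
	"fmt"

	pb "github.com/gogo/protobuf/_conformance/conformance_proto"
)

func main() {
	// Each oneof alternative has its own wrapper struct; assigning one of
	// them to the single Payload field selects that alternative.
	req := &pb.ConformanceRequest{
		Payload: &pb.ConformanceRequest_JsonPayload{
			JsonPayload: `{"optionalInt32": 1}`,
		},
		RequestedOutputFormat: pb.WireFormat_JSON,
	}

	// Reading the oneof back is a type switch over the wrapper types.
	switch p := req.Payload.(type) {
	case *pb.ConformanceRequest_ProtobufPayload:
		fmt.Println("protobuf payload of", len(p.ProtobufPayload), "bytes")
	case *pb.ConformanceRequest_JsonPayload:
		fmt.Println("JSON payload:", p.JsonPayload)
	}
}
```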
## Parameters ##
To pass extra parameters to the plugin, use a comma-separated
parameter list separated from the output directory by a colon:
protoc --gogo_out=plugins=grpc,import_path=mypackage:. *.proto
- `import_prefix=xxx` - a prefix that is added onto the beginning of
all imports. Useful for things like generating protos in a
subdirectory, or regenerating vendored protobufs in-place.
- `import_path=foo/bar` - used as the package if no input files
declare `go_package`. If it contains slashes, everything up to the
rightmost slash is ignored.
- `plugins=plugin1+plugin2` - specifies the list of sub-plugins to
load. The only plugin in this repo is `grpc`.
- `Mfoo/bar.proto=quux/shme` - declares that foo/bar.proto is
associated with Go package quux/shme. This is subject to the
import_prefix parameter.
## gRPC Support ##
If a proto file specifies RPC services, the code generator can be instructed to
generate code compatible with gRPC (http://www.grpc.io/). To do this, pass
the `plugins` parameter to the generator; the usual way is to insert it into
the --gogo_out argument to protoc:
protoc --gogo_out=plugins=grpc:. *.proto
## Compatibility ##
The library and the generated code are expected to be stable over time.
However, we reserve the right to make breaking changes without notice for the
following reasons:
- Security. A security issue in the specification or implementation may come to
light whose resolution requires breaking compatibility. We reserve the right
to address such security issues.
- Unspecified behavior. There are some aspects of the Protocol Buffers
specification that are undefined. Programs that depend on such unspecified
behavior may break in future releases.
- Specification errors or changes. If it becomes necessary to address an
inconsistency, incompleteness, or change in the Protocol Buffers
specification, resolving the issue could affect the meaning or legality of
existing programs. We reserve the right to address such issues, including
updating the implementations.
- Bugs. If the library has a bug that violates the specification, a program
that depends on the buggy behavior may break if the bug is fixed. We reserve
the right to fix such bugs.
- Adding methods or fields to generated structs. These may conflict with field
names that already exist in a schema, causing applications to break. When the
code generator encounters a field in the schema that would collide with a
generated field or method name, the code generator will append an underscore
to the generated field or method name.
- Adding, removing, or changing methods or fields in generated structs that
start with `XXX`. These parts of the generated code are exported out of
necessity, but should not be considered part of the public API.
- Adding, removing, or changing unexported symbols in generated code.
Any breaking changes outside of these will be announced 6 months in advance to
protobuf@googlegroups.com.
You should, whenever possible, use generated code created by the `protoc-gen-go`
tool built at the same commit as the `proto` package. The `proto` package
declares package-level constants in the form `ProtoPackageIsVersionX`.
Application code and generated code may depend on one of these constants to
ensure that compilation will fail if the available version of the proto library
is too old. Whenever we make a change to the generated code that requires newer
library support, in the same commit we will increment the version number of the
generated code and declare a new package-level constant whose name incorporates
the latest version number. Removing a compatibility constant is considered a
breaking change and would be subject to the announcement policy stated above.
## Plugins ##
The `protoc-gen-go/generator` package exposes a plugin interface,
which is used by the gRPC code generation. This interface is not
supported and is subject to incompatible changes without notice.

116
vendor/github.com/gogo/protobuf/Readme.md generated vendored Normal file

@@ -0,0 +1,116 @@
# Protocol Buffers for Go with Gadgets
[![Build Status](https://travis-ci.org/gogo/protobuf.svg?branch=master)](https://travis-ci.org/gogo/protobuf)
gogoprotobuf is a fork of <a href="https://github.com/golang/protobuf">golang/protobuf</a> with extra code generation features.
This code generation is used to achieve:
- fast marshalling and unmarshalling
- more canonical Go structures
- goprotobuf compatibility
- less typing by optionally generating extra helper code
- peace of mind by optionally generating test and benchmark code
- other serialization formats
Keeping track of how up to date gogoprotobuf is relative to golang/protobuf is done in this
<a href="https://github.com/gogo/protobuf/issues/191">issue</a>
## Users
These projects use gogoprotobuf:
- <a href="http://godoc.org/github.com/coreos/etcd">etcd</a> - <a href="https://blog.gopheracademy.com/advent-2015/etcd-distributed-key-value-store-with-grpc-http2/">blog</a> - <a href="https://github.com/coreos/etcd/blob/master/etcdserver/etcdserverpb/etcdserver.proto">sample proto file</a>
- <a href="https://www.spacemonkey.com/">spacemonkey</a> - <a href="https://www.spacemonkey.com/blog/posts/go-space-monkey">blog</a>
- <a href="http://badoo.com">badoo</a> - <a href="https://github.com/badoo/lsd/blob/32061f501c5eca9c76c596d790b450501ba27b2f/proto/lsd.proto">sample proto file</a>
- <a href="https://github.com/mesos/mesos-go">mesos-go</a> - <a href="https://github.com/mesos/mesos-go/blob/master/mesosproto/mesos.proto">sample proto file</a>
- <a href="https://github.com/mozilla-services/heka">heka</a> - <a href="https://github.com/mozilla-services/heka/commit/eb72fbf7d2d28249fbaf8d8dc6607f4eb6f03351">the switch from golang/protobuf to gogo/protobuf when it was still on code.google.com</a>
- <a href="https://github.com/cockroachdb/cockroach">cockroachdb</a> - <a href="https://github.com/cockroachdb/cockroach/blob/651d54d393e391a30154e9117ab4b18d9ee6d845/roachpb/metadata.proto">sample proto file</a>
- <a href="https://github.com/jbenet/go-ipfs">go-ipfs</a> - <a href="https://github.com/ipfs/go-ipfs/blob/2b6da0c024f28abeb16947fb452787196a6b56a2/merkledag/pb/merkledag.proto">sample proto file</a>
- <a href="https://github.com/philhofer/rkive">rkive-go</a> - <a href="https://github.com/philhofer/rkive/blob/e5dd884d3ea07b341321073882ae28aa16dd11be/rpbc/riak_dt.proto">sample proto file</a>
- <a href="https://www.dropbox.com">dropbox</a>
- <a href="https://srclib.org/">srclib</a> - <a href="https://github.com/sourcegraph/srclib/blob/6538858f0c410cac5c63440317b8d009e889d3fb/graph/def.proto">sample proto file</a>
- <a href="http://www.adyoulike.com/">adyoulike</a>
- <a href="http://www.cloudfoundry.org/">cloudfoundry</a> - <a href="https://github.com/cloudfoundry/bbs/blob/d673710b8c4211037805129944ee4c5373d6588a/models/events.proto">sample proto file</a>
- <a href="http://kubernetes.io/">kubernetes</a> - <a href="https://github.com/kubernetes/kubernetes/tree/88d8628137f94ee816aaa6606ae8cd045dee0bff/cmd/libs/go2idl">go2idl built on top of gogoprotobuf</a>
- <a href="https://dgraph.io/">dgraph</a> - <a href="https://github.com/dgraph-io/dgraph/releases/tag/v0.4.3">release notes</a> - <a href="https://discuss.dgraph.io/t/gogoprotobuf-is-extremely-fast/639">benchmarks</a></a>
- <a href="https://github.com/centrifugal/centrifugo">centrifugo</a> - <a href="https://forum.golangbridge.org/t/centrifugo-real-time-messaging-websocket-or-sockjs-server-v1-5-0-released/2861">release notes</a> - <a href="https://medium.com/@fzambia/centrifugo-protobuf-inside-json-outside-21d39bdabd68#.o3icmgjqd">blog</a>
- <a href="https://github.com/docker/swarmkit">docker swarmkit</a> - <a href="https://github.com/docker/swarmkit/blob/63600e01af3b8da2a0ed1c9fa6e1ae4299d75edb/api/objects.proto">sample proto file</a>
- <a href="https://nats.io/">nats.io</a> - <a href="https://github.com/nats-io/go-nats-streaming/blob/master/pb/protocol.proto">go-nats-streaming</a>
- <a href="https://github.com/pingcap/tidb">tidb</a> - Communication between <a href="https://github.com/pingcap/tipb/blob/master/generate-go.sh#L4">tidb</a> and <a href="https://github.com/pingcap/kvproto/blob/master/generate_go.sh#L3">tikv</a>
Please let us know if you are using gogoprotobuf by posting on our <a href="https://groups.google.com/forum/#!topic/gogoprotobuf/Brw76BxmFpQ">GoogleGroup</a>.
### Mentioned
- <a href="http://www.slideshare.net/albertstrasheim/serialization-in-go">Cloudflare - go serialization talk - Albert Strasheim</a>
- <a href="http://gophercon.sourcegraph.com/post/83747547505/writing-a-high-performance-database-in-go">gophercon</a>
- <a href="https://github.com/alecthomas/go_serialization_benchmarks">alecthomas' go serialization benchmarks</a>
## Getting Started
There are several ways to use gogoprotobuf, but for all of them you need to install Go and protoc.
After that you can choose:
- Speed
- More Speed and more generated code
- Most Speed and most customization
### Installation
To install it, you must first have Go (at least version 1.3.3) installed (see [http://golang.org/doc/install](http://golang.org/doc/install)). Go 1.7.1 is continuously tested.
Next, install the standard protocol buffer implementation from [https://github.com/google/protobuf](https://github.com/google/protobuf).
Most versions from 2.3.1 should not give any problems, but 2.6.1, 3.0.2 and 3.1.0 are continuously tested.
### Speed
Install the protoc-gen-gofast binary
go get github.com/gogo/protobuf/protoc-gen-gofast
Use it to generate faster marshaling and unmarshaling go code for your protocol buffers.
protoc --gofast_out=. myproto.proto
This does not allow you to use any of the other gogoprotobuf [extensions](https://github.com/gogo/protobuf/blob/master/extensions.md).
### More Speed and more generated code
Fields without pointers mean less work for the garbage collector.
More code generation results in more convenient methods.
Other binaries are also included:
protoc-gen-gogofast (same as gofast, but imports gogoprotobuf)
protoc-gen-gogofaster (same as gogofast, without XXX_unrecognized, less pointer fields)
protoc-gen-gogoslick (same as gogofaster, but with generated string, gostring and equal methods)
Installing any of these binaries is easy. Simply run:
go get github.com/gogo/protobuf/proto
go get github.com/gogo/protobuf/{binary}
go get github.com/gogo/protobuf/gogoproto
These binaries allow you to use gogoprotobuf [extensions](https://github.com/gogo/protobuf/blob/master/extensions.md).
### Most Speed and most customization
Customizing the fields of the messages to be the fields that you actually want to use removes the need to copy between the structs you use and structs you use to serialize.
gogoprotobuf also offers more serialization formats and generation of tests and even more methods.
Please visit the [extensions](https://github.com/gogo/protobuf/blob/master/extensions.md) page for more documentation.
Install protoc-gen-gogo:
go get github.com/gogo/protobuf/proto
go get github.com/gogo/protobuf/jsonpb
go get github.com/gogo/protobuf/protoc-gen-gogo
go get github.com/gogo/protobuf/gogoproto
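The jsonpb package installed above marshals generated messages to and from JSON. Below is a minimal editor's sketch (not upstream documentation), reusing the conformance message type vendored later in this diff and assuming its generated field names:
```go
package main

import (
	"fmt"
	"log"

	pb "github.com/gogo/protobuf/_conformance/conformance_proto"
	"github.com/gogo/protobuf/jsonpb"
)

func main() {
	// TestAllTypes and OptionalString are assumed from the standard code
	// generation for _conformance/conformance.proto.
	msg := &pb.TestAllTypes{OptionalString: "hello"}

	// Marshal to JSON, keeping the original proto field names.
	m := jsonpb.Marshaler{OrigName: true}
	s, err := m.MarshalToString(msg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s) // e.g. {"optional_string":"hello"}

	// Unmarshal back into a fresh message.
	var out pb.TestAllTypes
	if err := jsonpb.UnmarshalString(s, &out); err != nil {
		log.Fatal(err)
	}
}
```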
## GRPC
It works the same as golang/protobuf; simply specify the plugin.
Here is an example using gofast:
protoc --gofast_out=plugins=grpc:. my.proto

40
vendor/github.com/gogo/protobuf/_conformance/Makefile generated vendored Normal file

@@ -0,0 +1,40 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2016 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
regenerate:
protoc-min-version --version="3.0.0" --proto_path=$(GOPATH)/src:$(GOPATH)/src/github.com/gogo/protobuf/protobuf:. --gogo_out=\
Mgoogle/protobuf/any.proto=github.com/gogo/protobuf/types,\
Mgoogle/protobuf/duration.proto=github.com/gogo/protobuf/types,\
Mgoogle/protobuf/struct.proto=github.com/gogo/protobuf/types,\
Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,\
Mgoogle/protobuf/wrappers.proto=github.com/gogo/protobuf/types,\
Mgoogle/protobuf/field_mask.proto=github.com/gogo/protobuf/types\
:. conformance_proto/conformance.proto


@@ -0,0 +1,161 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// conformance implements the conformance test subprocess protocol as
// documented in conformance.proto.
package main
import (
"encoding/binary"
"fmt"
"io"
"os"
pb "github.com/gogo/protobuf/_conformance/conformance_proto"
"github.com/gogo/protobuf/jsonpb"
"github.com/gogo/protobuf/proto"
)
func main() {
var sizeBuf [4]byte
inbuf := make([]byte, 0, 4096)
outbuf := proto.NewBuffer(nil)
for {
if _, err := io.ReadFull(os.Stdin, sizeBuf[:]); err == io.EOF {
break
} else if err != nil {
fmt.Fprintln(os.Stderr, "go conformance: read request:", err)
os.Exit(1)
}
size := binary.LittleEndian.Uint32(sizeBuf[:])
if int(size) > cap(inbuf) {
inbuf = make([]byte, size)
}
inbuf = inbuf[:size]
if _, err := io.ReadFull(os.Stdin, inbuf); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: read request:", err)
os.Exit(1)
}
req := new(pb.ConformanceRequest)
if err := proto.Unmarshal(inbuf, req); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: parse request:", err)
os.Exit(1)
}
res := handle(req)
if err := outbuf.Marshal(res); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: marshal response:", err)
os.Exit(1)
}
binary.LittleEndian.PutUint32(sizeBuf[:], uint32(len(outbuf.Bytes())))
if _, err := os.Stdout.Write(sizeBuf[:]); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: write response:", err)
os.Exit(1)
}
if _, err := os.Stdout.Write(outbuf.Bytes()); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: write response:", err)
os.Exit(1)
}
outbuf.Reset()
}
}
var jsonMarshaler = jsonpb.Marshaler{
OrigName: true,
}
func handle(req *pb.ConformanceRequest) *pb.ConformanceResponse {
var err error
var msg pb.TestAllTypes
switch p := req.Payload.(type) {
case *pb.ConformanceRequest_ProtobufPayload:
err = proto.Unmarshal(p.ProtobufPayload, &msg)
case *pb.ConformanceRequest_JsonPayload:
err = jsonpb.UnmarshalString(p.JsonPayload, &msg)
if err != nil && err.Error() == "unmarshaling Any not supported yet" {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_Skipped{
Skipped: err.Error(),
},
}
}
default:
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_RuntimeError{
RuntimeError: "unknown request payload type",
},
}
}
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_ParseError{
ParseError: err.Error(),
},
}
}
switch req.RequestedOutputFormat {
case pb.WireFormat_PROTOBUF:
p, err := proto.Marshal(&msg)
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_SerializeError{
SerializeError: err.Error(),
},
}
}
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_ProtobufPayload{
ProtobufPayload: p,
},
}
case pb.WireFormat_JSON:
p, err := jsonMarshaler.MarshalToString(&msg)
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_SerializeError{
SerializeError: err.Error(),
},
}
}
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_JsonPayload{
JsonPayload: p,
},
}
default:
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_RuntimeError{
RuntimeError: "unknown output format",
},
}
}
}
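// ---------------------------------------------------------------------------
// Editor's sketch (not part of the vendored file): the runner side of the
// framing implemented by main() above. Each message crossing the pipe is a
// 4-byte little-endian length prefix followed by a marshaled
// ConformanceRequest or ConformanceResponse. Package and function names here
// are illustrative only.
package conformancerunner

import (
	"encoding/binary"
	"io"

	pb "github.com/gogo/protobuf/_conformance/conformance_proto"
	"github.com/gogo/protobuf/proto"
)

// RoundTrip writes one length-prefixed request to w (the testee's stdin) and
// reads one length-prefixed response from r (the testee's stdout).
func RoundTrip(w io.Writer, r io.Reader, req *pb.ConformanceRequest) (*pb.ConformanceResponse, error) {
	payload, err := proto.Marshal(req)
	if err != nil {
		return nil, err
	}
	var size [4]byte
	binary.LittleEndian.PutUint32(size[:], uint32(len(payload)))
	if _, err := w.Write(size[:]); err != nil {
		return nil, err
	}
	if _, err := w.Write(payload); err != nil {
		return nil, err
	}

	if _, err := io.ReadFull(r, size[:]); err != nil {
		return nil, err
	}
	buf := make([]byte, int(binary.LittleEndian.Uint32(size[:])))
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	res := new(pb.ConformanceResponse)
	if err := proto.Unmarshal(buf, res); err != nil {
		return nil, err
	}
	return res, nil
}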


File diff suppressed because it is too large


@@ -0,0 +1,285 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto3";
package conformance;
option java_package = "com.google.protobuf.conformance";
import "google/protobuf/any.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/field_mask.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
import "google/protobuf/wrappers.proto";
// This defines the conformance testing protocol. This protocol exists between
// the conformance test suite itself and the code being tested. For each test,
// the suite will send a ConformanceRequest message and expect a
// ConformanceResponse message.
//
// You can run the tests in two different ways:
//
// 1. in-process (using the interface in conformance_test.h).
//
// 2. as a sub-process communicating over a pipe. Information about how to
// do this is in conformance_test_runner.cc.
//
// Pros/cons of the two approaches:
//
// - running as a sub-process is much simpler for languages other than C/C++.
//
// - running as a sub-process may be more tricky in unusual environments like
// iOS apps, where fork/stdin/stdout are not available.
enum WireFormat {
UNSPECIFIED = 0;
PROTOBUF = 1;
JSON = 2;
}
// Represents a single test case's input. The testee should:
//
// 1. parse this proto (which should always succeed)
// 2. parse the protobuf or JSON payload in "payload" (which may fail)
// 3. if the parse succeeded, serialize the message in the requested format.
message ConformanceRequest {
// The payload (whether protobuf or JSON) is always for a TestAllTypes proto
// (see below).
oneof payload {
bytes protobuf_payload = 1;
string json_payload = 2;
}
// Which format should the testee serialize its message to?
WireFormat requested_output_format = 3;
}
// Represents a single test case's output.
message ConformanceResponse {
oneof result {
// This string should be set to indicate parsing failed. The string can
// provide more information about the parse error if it is available.
//
// Setting this string does not necessarily mean the testee failed the
// test. Some of the test cases are intentionally invalid input.
string parse_error = 1;
// If the input was successfully parsed but errors occurred when
// serializing it to the requested output format, set the error message in
// this field.
string serialize_error = 6;
// This should be set if some other error occurred. This will always
// indicate that the test failed. The string can provide more information
// about the failure.
string runtime_error = 2;
// If the input was successfully parsed and the requested output was
// protobuf, serialize it to protobuf and set it in this field.
bytes protobuf_payload = 3;
// If the input was successfully parsed and the requested output was JSON,
// serialize to JSON and set it in this field.
string json_payload = 4;
// For when the testee skipped the test, likely because a certain feature
// wasn't supported, like JSON input/output.
string skipped = 5;
}
}
// This proto includes every type of field in both singular and repeated
// forms.
message TestAllTypes {
message NestedMessage {
int32 a = 1;
TestAllTypes corecursive = 2;
}
enum NestedEnum {
FOO = 0;
BAR = 1;
BAZ = 2;
NEG = -1; // Intentionally negative.
}
// Singular
int32 optional_int32 = 1;
int64 optional_int64 = 2;
uint32 optional_uint32 = 3;
uint64 optional_uint64 = 4;
sint32 optional_sint32 = 5;
sint64 optional_sint64 = 6;
fixed32 optional_fixed32 = 7;
fixed64 optional_fixed64 = 8;
sfixed32 optional_sfixed32 = 9;
sfixed64 optional_sfixed64 = 10;
float optional_float = 11;
double optional_double = 12;
bool optional_bool = 13;
string optional_string = 14;
bytes optional_bytes = 15;
NestedMessage optional_nested_message = 18;
ForeignMessage optional_foreign_message = 19;
NestedEnum optional_nested_enum = 21;
ForeignEnum optional_foreign_enum = 22;
string optional_string_piece = 24 [ctype=STRING_PIECE];
string optional_cord = 25 [ctype=CORD];
TestAllTypes recursive_message = 27;
// Repeated
repeated int32 repeated_int32 = 31;
repeated int64 repeated_int64 = 32;
repeated uint32 repeated_uint32 = 33;
repeated uint64 repeated_uint64 = 34;
repeated sint32 repeated_sint32 = 35;
repeated sint64 repeated_sint64 = 36;
repeated fixed32 repeated_fixed32 = 37;
repeated fixed64 repeated_fixed64 = 38;
repeated sfixed32 repeated_sfixed32 = 39;
repeated sfixed64 repeated_sfixed64 = 40;
repeated float repeated_float = 41;
repeated double repeated_double = 42;
repeated bool repeated_bool = 43;
repeated string repeated_string = 44;
repeated bytes repeated_bytes = 45;
repeated NestedMessage repeated_nested_message = 48;
repeated ForeignMessage repeated_foreign_message = 49;
repeated NestedEnum repeated_nested_enum = 51;
repeated ForeignEnum repeated_foreign_enum = 52;
repeated string repeated_string_piece = 54 [ctype=STRING_PIECE];
repeated string repeated_cord = 55 [ctype=CORD];
// Map
map < int32, int32> map_int32_int32 = 56;
map < int64, int64> map_int64_int64 = 57;
map < uint32, uint32> map_uint32_uint32 = 58;
map < uint64, uint64> map_uint64_uint64 = 59;
map < sint32, sint32> map_sint32_sint32 = 60;
map < sint64, sint64> map_sint64_sint64 = 61;
map < fixed32, fixed32> map_fixed32_fixed32 = 62;
map < fixed64, fixed64> map_fixed64_fixed64 = 63;
map <sfixed32, sfixed32> map_sfixed32_sfixed32 = 64;
map <sfixed64, sfixed64> map_sfixed64_sfixed64 = 65;
map < int32, float> map_int32_float = 66;
map < int32, double> map_int32_double = 67;
map < bool, bool> map_bool_bool = 68;
map < string, string> map_string_string = 69;
map < string, bytes> map_string_bytes = 70;
map < string, NestedMessage> map_string_nested_message = 71;
map < string, ForeignMessage> map_string_foreign_message = 72;
map < string, NestedEnum> map_string_nested_enum = 73;
map < string, ForeignEnum> map_string_foreign_enum = 74;
oneof oneof_field {
uint32 oneof_uint32 = 111;
NestedMessage oneof_nested_message = 112;
string oneof_string = 113;
bytes oneof_bytes = 114;
bool oneof_bool = 115;
uint64 oneof_uint64 = 116;
float oneof_float = 117;
double oneof_double = 118;
NestedEnum oneof_enum = 119;
}
// Well-known types
google.protobuf.BoolValue optional_bool_wrapper = 201;
google.protobuf.Int32Value optional_int32_wrapper = 202;
google.protobuf.Int64Value optional_int64_wrapper = 203;
google.protobuf.UInt32Value optional_uint32_wrapper = 204;
google.protobuf.UInt64Value optional_uint64_wrapper = 205;
google.protobuf.FloatValue optional_float_wrapper = 206;
google.protobuf.DoubleValue optional_double_wrapper = 207;
google.protobuf.StringValue optional_string_wrapper = 208;
google.protobuf.BytesValue optional_bytes_wrapper = 209;
repeated google.protobuf.BoolValue repeated_bool_wrapper = 211;
repeated google.protobuf.Int32Value repeated_int32_wrapper = 212;
repeated google.protobuf.Int64Value repeated_int64_wrapper = 213;
repeated google.protobuf.UInt32Value repeated_uint32_wrapper = 214;
repeated google.protobuf.UInt64Value repeated_uint64_wrapper = 215;
repeated google.protobuf.FloatValue repeated_float_wrapper = 216;
repeated google.protobuf.DoubleValue repeated_double_wrapper = 217;
repeated google.protobuf.StringValue repeated_string_wrapper = 218;
repeated google.protobuf.BytesValue repeated_bytes_wrapper = 219;
google.protobuf.Duration optional_duration = 301;
google.protobuf.Timestamp optional_timestamp = 302;
google.protobuf.FieldMask optional_field_mask = 303;
google.protobuf.Struct optional_struct = 304;
google.protobuf.Any optional_any = 305;
google.protobuf.Value optional_value = 306;
repeated google.protobuf.Duration repeated_duration = 311;
repeated google.protobuf.Timestamp repeated_timestamp = 312;
repeated google.protobuf.FieldMask repeated_fieldmask = 313;
repeated google.protobuf.Struct repeated_struct = 324;
repeated google.protobuf.Any repeated_any = 315;
repeated google.protobuf.Value repeated_value = 316;
// Test field-name-to-JSON-name convention.
// (protobuf says names can be any valid C/C++ identifier.)
int32 fieldname1 = 401;
int32 field_name2 = 402;
int32 _field_name3 = 403;
int32 field__name4_ = 404;
int32 field0name5 = 405;
int32 field_0_name6 = 406;
int32 fieldName7 = 407;
int32 FieldName8 = 408;
int32 field_Name9 = 409;
int32 Field_Name10 = 410;
int32 FIELD_NAME11 = 411;
int32 FIELD_name12 = 412;
int32 __field_name13 = 413;
int32 __Field_name14 = 414;
int32 field__name15 = 415;
int32 field__Name16 = 416;
int32 field_name17__ = 417;
int32 Field_name18__ = 418;
}
message ForeignMessage {
int32 c = 1;
}
enum ForeignEnum {
FOREIGN_FOO = 0;
FOREIGN_BAR = 1;
FOREIGN_BAZ = 2;
}

190
vendor/github.com/gogo/protobuf/bench.md generated vendored Normal file

@@ -0,0 +1,190 @@
# Benchmarks
## How to reproduce
For a comparison run:
make bench
followed by [benchcmp](http://code.google.com/p/go/source/browse/misc/benchcmp) on the resulting files:
$GOROOT/misc/benchcmp $GOPATH/src/github.com/gogo/protobuf/test/mixbench/marshal.txt $GOPATH/src/github.com/gogo/protobuf/test/mixbench/marshaler.txt
$GOROOT/misc/benchcmp $GOPATH/src/github.com/gogo/protobuf/test/mixbench/unmarshal.txt $GOPATH/src/github.com/gogo/protobuf/test/mixbench/unmarshaler.txt
Benchmarks ran on Revision: 11c56be39364
June 2013
Processor 2,66 GHz Intel Core i7
Memory 8 GB 1067 MHz DDR3
## Marshaler
<table>
<tr><td>benchmark</td><td>old ns/op</td><td>new ns/op</td><td>delta</td></tr>
<tr><td>BenchmarkNidOptNativeProtoMarshal</td><td>2656</td><td>889</td><td>-66.53%</td></tr>
<tr><td>BenchmarkNinOptNativeProtoMarshal</td><td>2651</td><td>1015</td><td>-61.71%</td></tr>
<tr><td>BenchmarkNidRepNativeProtoMarshal</td><td>42661</td><td>12519</td><td>-70.65%</td></tr>
<tr><td>BenchmarkNinRepNativeProtoMarshal</td><td>42306</td><td>12354</td><td>-70.80%</td></tr>
<tr><td>BenchmarkNidRepPackedNativeProtoMarshal</td><td>34148</td><td>11902</td><td>-65.15%</td></tr>
<tr><td>BenchmarkNinRepPackedNativeProtoMarshal</td><td>33375</td><td>11969</td><td>-64.14%</td></tr>
<tr><td>BenchmarkNidOptStructProtoMarshal</td><td>7148</td><td>3727</td><td>-47.86%</td></tr>
<tr><td>BenchmarkNinOptStructProtoMarshal</td><td>6956</td><td>3481</td><td>-49.96%</td></tr>
<tr><td>BenchmarkNidRepStructProtoMarshal</td><td>46551</td><td>19492</td><td>-58.13%</td></tr>
<tr><td>BenchmarkNinRepStructProtoMarshal</td><td>46715</td><td>19043</td><td>-59.24%</td></tr>
<tr><td>BenchmarkNidEmbeddedStructProtoMarshal</td><td>5231</td><td>2050</td><td>-60.81%</td></tr>
<tr><td>BenchmarkNinEmbeddedStructProtoMarshal</td><td>4665</td><td>2000</td><td>-57.13%</td></tr>
<tr><td>BenchmarkNidNestedStructProtoMarshal</td><td>181106</td><td>103604</td><td>-42.79%</td></tr>
<tr><td>BenchmarkNinNestedStructProtoMarshal</td><td>182053</td><td>102069</td><td>-43.93%</td></tr>
<tr><td>BenchmarkNidOptCustomProtoMarshal</td><td>1209</td><td>310</td><td>-74.36%</td></tr>
<tr><td>BenchmarkNinOptCustomProtoMarshal</td><td>1435</td><td>277</td><td>-80.70%</td></tr>
<tr><td>BenchmarkNidRepCustomProtoMarshal</td><td>4126</td><td>763</td><td>-81.51%</td></tr>
<tr><td>BenchmarkNinRepCustomProtoMarshal</td><td>3972</td><td>769</td><td>-80.64%</td></tr>
<tr><td>BenchmarkNinOptNativeUnionProtoMarshal</td><td>973</td><td>303</td><td>-68.86%</td></tr>
<tr><td>BenchmarkNinOptStructUnionProtoMarshal</td><td>1536</td><td>521</td><td>-66.08%</td></tr>
<tr><td>BenchmarkNinEmbeddedStructUnionProtoMarshal</td><td>2327</td><td>884</td><td>-62.01%</td></tr>
<tr><td>BenchmarkNinNestedStructUnionProtoMarshal</td><td>2070</td><td>743</td><td>-64.11%</td></tr>
<tr><td>BenchmarkTreeProtoMarshal</td><td>1554</td><td>838</td><td>-46.07%</td></tr>
<tr><td>BenchmarkOrBranchProtoMarshal</td><td>3156</td><td>2012</td><td>-36.25%</td></tr>
<tr><td>BenchmarkAndBranchProtoMarshal</td><td>3183</td><td>1996</td><td>-37.29%</td></tr>
<tr><td>BenchmarkLeafProtoMarshal</td><td>965</td><td>606</td><td>-37.20%</td></tr>
<tr><td>BenchmarkDeepTreeProtoMarshal</td><td>2316</td><td>1283</td><td>-44.60%</td></tr>
<tr><td>BenchmarkADeepBranchProtoMarshal</td><td>2719</td><td>1492</td><td>-45.13%</td></tr>
<tr><td>BenchmarkAndDeepBranchProtoMarshal</td><td>4663</td><td>2922</td><td>-37.34%</td></tr>
<tr><td>BenchmarkDeepLeafProtoMarshal</td><td>1849</td><td>1016</td><td>-45.05%</td></tr>
<tr><td>BenchmarkNilProtoMarshal</td><td>439</td><td>76</td><td>-82.53%</td></tr>
<tr><td>BenchmarkNidOptEnumProtoMarshal</td><td>514</td><td>152</td><td>-70.43%</td></tr>
<tr><td>BenchmarkNinOptEnumProtoMarshal</td><td>550</td><td>158</td><td>-71.27%</td></tr>
<tr><td>BenchmarkNidRepEnumProtoMarshal</td><td>647</td><td>207</td><td>-68.01%</td></tr>
<tr><td>BenchmarkNinRepEnumProtoMarshal</td><td>662</td><td>213</td><td>-67.82%</td></tr>
<tr><td>BenchmarkTimerProtoMarshal</td><td>934</td><td>271</td><td>-70.99%</td></tr>
<tr><td>BenchmarkMyExtendableProtoMarshal</td><td>608</td><td>185</td><td>-69.57%</td></tr>
<tr><td>BenchmarkOtherExtenableProtoMarshal</td><td>1112</td><td>332</td><td>-70.14%</td></tr>
</table>
<table>
<tr><td>benchmark</td><td>old MB/s</td><td>new MB/s</td><td>speedup</td></tr>
<tr><td>BenchmarkNidOptNativeProtoMarshal</td><td>126.86</td><td>378.86</td><td>2.99x</td></tr>
<tr><td>BenchmarkNinOptNativeProtoMarshal</td><td>114.27</td><td>298.42</td><td>2.61x</td></tr>
<tr><td>BenchmarkNidRepNativeProtoMarshal</td><td>164.25</td><td>561.20</td><td>3.42x</td></tr>
<tr><td>BenchmarkNinRepNativeProtoMarshal</td><td>166.10</td><td>568.23</td><td>3.42x</td></tr>
<tr><td>BenchmarkNidRepPackedNativeProtoMarshal</td><td>99.10</td><td>283.97</td><td>2.87x</td></tr>
<tr><td>BenchmarkNinRepPackedNativeProtoMarshal</td><td>101.30</td><td>282.31</td><td>2.79x</td></tr>
<tr><td>BenchmarkNidOptStructProtoMarshal</td><td>176.83</td><td>339.07</td><td>1.92x</td></tr>
<tr><td>BenchmarkNinOptStructProtoMarshal</td><td>163.59</td><td>326.57</td><td>2.00x</td></tr>
<tr><td>BenchmarkNidRepStructProtoMarshal</td><td>178.84</td><td>427.49</td><td>2.39x</td></tr>
<tr><td>BenchmarkNinRepStructProtoMarshal</td><td>178.70</td><td>437.69</td><td>2.45x</td></tr>
<tr><td>BenchmarkNidEmbeddedStructProtoMarshal</td><td>124.24</td><td>317.56</td><td>2.56x</td></tr>
<tr><td>BenchmarkNinEmbeddedStructProtoMarshal</td><td>132.03</td><td>307.99</td><td>2.33x</td></tr>
<tr><td>BenchmarkNidNestedStructProtoMarshal</td><td>192.91</td><td>337.86</td><td>1.75x</td></tr>
<tr><td>BenchmarkNinNestedStructProtoMarshal</td><td>192.44</td><td>344.45</td><td>1.79x</td></tr>
<tr><td>BenchmarkNidOptCustomProtoMarshal</td><td>29.77</td><td>116.03</td><td>3.90x</td></tr>
<tr><td>BenchmarkNinOptCustomProtoMarshal</td><td>22.29</td><td>115.38</td><td>5.18x</td></tr>
<tr><td>BenchmarkNidRepCustomProtoMarshal</td><td>35.14</td><td>189.80</td><td>5.40x</td></tr>
<tr><td>BenchmarkNinRepCustomProtoMarshal</td><td>36.50</td><td>188.40</td><td>5.16x</td></tr>
<tr><td>BenchmarkNinOptNativeUnionProtoMarshal</td><td>32.87</td><td>105.39</td><td>3.21x</td></tr>
<tr><td>BenchmarkNinOptStructUnionProtoMarshal</td><td>66.40</td><td>195.76</td><td>2.95x</td></tr>
<tr><td>BenchmarkNinEmbeddedStructUnionProtoMarshal</td><td>93.24</td><td>245.26</td><td>2.63x</td></tr>
<tr><td>BenchmarkNinNestedStructUnionProtoMarshal</td><td>57.49</td><td>160.06</td><td>2.78x</td></tr>
<tr><td>BenchmarkTreeProtoMarshal</td><td>137.64</td><td>255.12</td><td>1.85x</td></tr>
<tr><td>BenchmarkOrBranchProtoMarshal</td><td>137.80</td><td>216.10</td><td>1.57x</td></tr>
<tr><td>BenchmarkAndBranchProtoMarshal</td><td>136.64</td><td>217.89</td><td>1.59x</td></tr>
<tr><td>BenchmarkLeafProtoMarshal</td><td>214.48</td><td>341.53</td><td>1.59x</td></tr>
<tr><td>BenchmarkDeepTreeProtoMarshal</td><td>95.85</td><td>173.03</td><td>1.81x</td></tr>
<tr><td>BenchmarkADeepBranchProtoMarshal</td><td>82.73</td><td>150.78</td><td>1.82x</td></tr>
<tr><td>BenchmarkAndDeepBranchProtoMarshal</td><td>96.72</td><td>153.98</td><td>1.59x</td></tr>
<tr><td>BenchmarkDeepLeafProtoMarshal</td><td>117.34</td><td>213.41</td><td>1.82x</td></tr>
<tr><td>BenchmarkNidOptEnumProtoMarshal</td><td>3.89</td><td>13.16</td><td>3.38x</td></tr>
<tr><td>BenchmarkNinOptEnumProtoMarshal</td><td>1.82</td><td>6.30</td><td>3.46x</td></tr>
<tr><td>BenchmarkNidRepEnumProtoMarshal</td><td>12.36</td><td>38.50</td><td>3.11x</td></tr>
<tr><td>BenchmarkNinRepEnumProtoMarshal</td><td>12.08</td><td>37.53</td><td>3.11x</td></tr>
<tr><td>BenchmarkTimerProtoMarshal</td><td>73.81</td><td>253.87</td><td>3.44x</td></tr>
<tr><td>BenchmarkMyExtendableProtoMarshal</td><td>13.15</td><td>43.08</td><td>3.28x</td></tr>
<tr><td>BenchmarkOtherExtenableProtoMarshal</td><td>24.28</td><td>81.09</td><td>3.34x</td></tr>
</table>
## Unmarshaler
<table>
<tr><td>benchmark</td><td>old ns/op</td><td>new ns/op</td><td>delta</td></tr>
<tr><td>BenchmarkNidOptNativeProtoUnmarshal</td><td>2521</td><td>1006</td><td>-60.10%</td></tr>
<tr><td>BenchmarkNinOptNativeProtoUnmarshal</td><td>2529</td><td>1750</td><td>-30.80%</td></tr>
<tr><td>BenchmarkNidRepNativeProtoUnmarshal</td><td>49067</td><td>35299</td><td>-28.06%</td></tr>
<tr><td>BenchmarkNinRepNativeProtoUnmarshal</td><td>47990</td><td>35456</td><td>-26.12%</td></tr>
<tr><td>BenchmarkNidRepPackedNativeProtoUnmarshal</td><td>26456</td><td>23950</td><td>-9.47%</td></tr>
<tr><td>BenchmarkNinRepPackedNativeProtoUnmarshal</td><td>26499</td><td>24037</td><td>-9.29%</td></tr>
<tr><td>BenchmarkNidOptStructProtoUnmarshal</td><td>6803</td><td>3873</td><td>-43.07%</td></tr>
<tr><td>BenchmarkNinOptStructProtoUnmarshal</td><td>6786</td><td>4154</td><td>-38.79%</td></tr>
<tr><td>BenchmarkNidRepStructProtoUnmarshal</td><td>56276</td><td>31970</td><td>-43.19%</td></tr>
<tr><td>BenchmarkNinRepStructProtoUnmarshal</td><td>48750</td><td>31832</td><td>-34.70%</td></tr>
<tr><td>BenchmarkNidEmbeddedStructProtoUnmarshal</td><td>4556</td><td>1973</td><td>-56.69%</td></tr>
<tr><td>BenchmarkNinEmbeddedStructProtoUnmarshal</td><td>4485</td><td>1975</td><td>-55.96%</td></tr>
<tr><td>BenchmarkNidNestedStructProtoUnmarshal</td><td>223395</td><td>135844</td><td>-39.19%</td></tr>
<tr><td>BenchmarkNinNestedStructProtoUnmarshal</td><td>226446</td><td>134022</td><td>-40.82%</td></tr>
<tr><td>BenchmarkNidOptCustomProtoUnmarshal</td><td>1859</td><td>300</td><td>-83.86%</td></tr>
<tr><td>BenchmarkNinOptCustomProtoUnmarshal</td><td>1486</td><td>402</td><td>-72.95%</td></tr>
<tr><td>BenchmarkNidRepCustomProtoUnmarshal</td><td>8229</td><td>1669</td><td>-79.72%</td></tr>
<tr><td>BenchmarkNinRepCustomProtoUnmarshal</td><td>8253</td><td>1649</td><td>-80.02%</td></tr>
<tr><td>BenchmarkNinOptNativeUnionProtoUnmarshal</td><td>840</td><td>307</td><td>-63.45%</td></tr>
<tr><td>BenchmarkNinOptStructUnionProtoUnmarshal</td><td>1395</td><td>639</td><td>-54.19%</td></tr>
<tr><td>BenchmarkNinEmbeddedStructUnionProtoUnmarshal</td><td>2297</td><td>1167</td><td>-49.19%</td></tr>
<tr><td>BenchmarkNinNestedStructUnionProtoUnmarshal</td><td>1820</td><td>889</td><td>-51.15%</td></tr>
<tr><td>BenchmarkTreeProtoUnmarshal</td><td>1521</td><td>720</td><td>-52.66%</td></tr>
<tr><td>BenchmarkOrBranchProtoUnmarshal</td><td>2669</td><td>1385</td><td>-48.11%</td></tr>
<tr><td>BenchmarkAndBranchProtoUnmarshal</td><td>2667</td><td>1420</td><td>-46.76%</td></tr>
<tr><td>BenchmarkLeafProtoUnmarshal</td><td>1171</td><td>584</td><td>-50.13%</td></tr>
<tr><td>BenchmarkDeepTreeProtoUnmarshal</td><td>2065</td><td>1081</td><td>-47.65%</td></tr>
<tr><td>BenchmarkADeepBranchProtoUnmarshal</td><td>2695</td><td>1178</td><td>-56.29%</td></tr>
<tr><td>BenchmarkAndDeepBranchProtoUnmarshal</td><td>4055</td><td>1918</td><td>-52.70%</td></tr>
<tr><td>BenchmarkDeepLeafProtoUnmarshal</td><td>1758</td><td>865</td><td>-50.80%</td></tr>
<tr><td>BenchmarkNilProtoUnmarshal</td><td>564</td><td>63</td><td>-88.79%</td></tr>
<tr><td>BenchmarkNidOptEnumProtoUnmarshal</td><td>762</td><td>73</td><td>-90.34%</td></tr>
<tr><td>BenchmarkNinOptEnumProtoUnmarshal</td><td>764</td><td>163</td><td>-78.66%</td></tr>
<tr><td>BenchmarkNidRepEnumProtoUnmarshal</td><td>1078</td><td>447</td><td>-58.53%</td></tr>
<tr><td>BenchmarkNinRepEnumProtoUnmarshal</td><td>1071</td><td>479</td><td>-55.28%</td></tr>
<tr><td>BenchmarkTimerProtoUnmarshal</td><td>1128</td><td>362</td><td>-67.91%</td></tr>
<tr><td>BenchmarkMyExtendableProtoUnmarshal</td><td>808</td><td>217</td><td>-73.14%</td></tr>
<tr><td>BenchmarkOtherExtenableProtoUnmarshal</td><td>1233</td><td>517</td><td>-58.07%</td></tr>
</table>
<table>
<tr><td>benchmark</td><td>old MB/s</td><td>new MB/s</td><td>speedup</td></tr>
<tr><td>BenchmarkNidOptNativeProtoUnmarshal</td><td>133.67</td><td>334.98</td><td>2.51x</td></tr>
<tr><td>BenchmarkNinOptNativeProtoUnmarshal</td><td>119.77</td><td>173.08</td><td>1.45x</td></tr>
<tr><td>BenchmarkNidRepNativeProtoUnmarshal</td><td>143.23</td><td>199.12</td><td>1.39x</td></tr>
<tr><td>BenchmarkNinRepNativeProtoUnmarshal</td><td>146.07</td><td>198.16</td><td>1.36x</td></tr>
<tr><td>BenchmarkNidRepPackedNativeProtoUnmarshal</td><td>127.80</td><td>141.04</td><td>1.10x</td></tr>
<tr><td>BenchmarkNinRepPackedNativeProtoUnmarshal</td><td>127.55</td><td>140.78</td><td>1.10x</td></tr>
<tr><td>BenchmarkNidOptStructProtoUnmarshal</td><td>185.79</td><td>326.31</td><td>1.76x</td></tr>
<tr><td>BenchmarkNinOptStructProtoUnmarshal</td><td>167.68</td><td>273.66</td><td>1.63x</td></tr>
<tr><td>BenchmarkNidRepStructProtoUnmarshal</td><td>147.88</td><td>260.39</td><td>1.76x</td></tr>
<tr><td>BenchmarkNinRepStructProtoUnmarshal</td><td>171.20</td><td>261.97</td><td>1.53x</td></tr>
<tr><td>BenchmarkNidEmbeddedStructProtoUnmarshal</td><td>142.86</td><td>329.42</td><td>2.31x</td></tr>
<tr><td>BenchmarkNinEmbeddedStructProtoUnmarshal</td><td>137.33</td><td>311.83</td><td>2.27x</td></tr>
<tr><td>BenchmarkNidNestedStructProtoUnmarshal</td><td>154.97</td><td>259.47</td><td>1.67x</td></tr>
<tr><td>BenchmarkNinNestedStructProtoUnmarshal</td><td>154.32</td><td>258.42</td><td>1.67x</td></tr>
<tr><td>BenchmarkNidOptCustomProtoUnmarshal</td><td>19.36</td><td>119.66</td><td>6.18x</td></tr>
<tr><td>BenchmarkNinOptCustomProtoUnmarshal</td><td>21.52</td><td>79.50</td><td>3.69x</td></tr>
<tr><td>BenchmarkNidRepCustomProtoUnmarshal</td><td>17.62</td><td>86.86</td><td>4.93x</td></tr>
<tr><td>BenchmarkNinRepCustomProtoUnmarshal</td><td>17.57</td><td>87.92</td><td>5.00x</td></tr>
<tr><td>BenchmarkNinOptNativeUnionProtoUnmarshal</td><td>38.07</td><td>104.12</td><td>2.73x</td></tr>
<tr><td>BenchmarkNinOptStructUnionProtoUnmarshal</td><td>73.08</td><td>159.54</td><td>2.18x</td></tr>
<tr><td>BenchmarkNinEmbeddedStructUnionProtoUnmarshal</td><td>94.00</td><td>185.92</td><td>1.98x</td></tr>
<tr><td>BenchmarkNinNestedStructUnionProtoUnmarshal</td><td>65.35</td><td>133.75</td><td>2.05x</td></tr>
<tr><td>BenchmarkTreeProtoUnmarshal</td><td>141.28</td><td>297.13</td><td>2.10x</td></tr>
<tr><td>BenchmarkOrBranchProtoUnmarshal</td><td>162.56</td><td>313.96</td><td>1.93x</td></tr>
<tr><td>BenchmarkAndBranchProtoUnmarshal</td><td>163.06</td><td>306.15</td><td>1.88x</td></tr>
<tr><td>BenchmarkLeafProtoUnmarshal</td><td>176.72</td><td>354.19</td><td>2.00x</td></tr>
<tr><td>BenchmarkDeepTreeProtoUnmarshal</td><td>107.50</td><td>205.30</td><td>1.91x</td></tr>
<tr><td>BenchmarkADeepBranchProtoUnmarshal</td><td>83.48</td><td>190.88</td><td>2.29x</td></tr>
<tr><td>BenchmarkAndDeepBranchProtoUnmarshal</td><td>110.97</td><td>234.60</td><td>2.11x</td></tr>
<tr><td>BenchmarkDeepLeafProtoUnmarshal</td><td>123.40</td><td>250.73</td><td>2.03x</td></tr>
<tr><td>BenchmarkNidOptEnumProtoUnmarshal</td><td>2.62</td><td>27.16</td><td>10.37x</td></tr>
<tr><td>BenchmarkNinOptEnumProtoUnmarshal</td><td>1.31</td><td>6.11</td><td>4.66x</td></tr>
<tr><td>BenchmarkNidRepEnumProtoUnmarshal</td><td>7.42</td><td>17.88</td><td>2.41x</td></tr>
<tr><td>BenchmarkNinRepEnumProtoUnmarshal</td><td>7.47</td><td>16.69</td><td>2.23x</td></tr>
<tr><td>BenchmarkTimerProtoUnmarshal</td><td>61.12</td><td>190.34</td><td>3.11x</td></tr>
<tr><td>BenchmarkMyExtendableProtoUnmarshal</td><td>9.90</td><td>36.71</td><td>3.71x</td></tr>
<tr><td>BenchmarkOtherExtenableProtoUnmarshal</td><td>21.90</td><td>52.13</td><td>2.38x</td></tr>
</table>

Some files were not shown because too many files have changed in this diff.