Compare commits

...

100 Commits

Author SHA1 Message Date
Jakob Borg
7114cacb85 gui, man: Update docs & translations 2016-08-10 11:41:46 +02:00
Jakob Borg
e52be3d83e lib/connections, lib/model: Refactor connection close handling (fixes #3466)
So there were some issues here. The main problem was that
model.Close(deviceID) was overloaded to mean "the connection was closed
by the protocol layer" and "I want to close this connection". That meant
it could get called twice - once *to* close the connection and then once
more when the connection *was* closed.

After this refactor there is instead a Closed(conn) method that is the
callback. I didn't need to change the parameter in the end, but I think
it's clearer what it means when it takes the connection that was closed
instead of a device ID. To close a connection, the new close(deviceID)
method is used instead, which only closes the underlying connection and
leaves the cleanup to the Closed() callback.

I also changed how we do connection switching. Instead of the connection
service calling close and then adding the connection, it just adds the
new connection. The model knows that it already has a connection and
makes sure to close and clean out that one before adding the new
connection.

To make sure this is sequenced properly, I added a new map of channels
that get created on connection add and closed by Closed(), so that
AddConnection() can do the close and wait for the cleanup to happen
before proceeding.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3490
2016-08-10 09:37:32 +00:00
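For illustration, a minimal Go sketch of the close/cleanup sequencing described above: close() only tears down the transport, Closed() does the cleanup and signals a channel, and AddConnection() waits on that channel before installing a replacement. The package, type and field names are assumptions for the sketch, not the actual lib/model code.

    package connclose

    import (
        "io"
        "sync"
    )

    // model sketches the handling described in the commit message: close() only
    // closes the underlying connection; Closed() is the single cleanup callback.
    type model struct {
        mut    sync.Mutex
        conn   map[string]io.Closer     // device ID -> current connection
        closed map[string]chan struct{} // device ID -> closed when cleanup is done
    }

    // AddConnection replaces any existing connection for the device, waiting for
    // the Closed() cleanup of the old one before installing the new one.
    func (m *model) AddConnection(dev string, c io.Closer) {
        m.mut.Lock()
        for {
            old, ok := m.conn[dev]
            if !ok {
                break
            }
            wait := m.closed[dev]
            m.mut.Unlock()
            old.Close() // close the underlying connection only
            <-wait      // wait for Closed() to finish the cleanup
            m.mut.Lock()
        }
        m.conn[dev] = c
        m.closed[dev] = make(chan struct{})
        m.mut.Unlock()
    }

    // Closed is the callback invoked once by the protocol layer when a
    // connection has actually gone away.
    func (m *model) Closed(dev string) {
        m.mut.Lock()
        defer m.mut.Unlock()
        delete(m.conn, dev)
        if ch, ok := m.closed[dev]; ok {
            close(ch) // unblock any AddConnection waiting to swap in a new connection
            delete(m.closed, dev)
        }
    }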
Antoine Lamielle
c9cf01e0b6 gui: weighting % of devices according to folder size (fixes #1300)
The completion of remote devices was based only on the average of the percentages of all folders, which gives a misleading figure when the folders have very different sizes.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3481
LGTM: calmh, AudriusButkevicius
2016-08-09 19:58:44 +00:00
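A minimal sketch of the weighted calculation this implies, assuming per-folder global and needed byte counts are available; the function and parameter names are hypothetical.

    package completion

    // DeviceCompletion returns a completion percentage weighted by folder size
    // instead of a plain average of per-folder percentages. globalBytes holds the
    // total size of each shared folder, needBytes what the remote device still needs.
    func DeviceCompletion(globalBytes, needBytes map[string]int64) float64 {
        var total, need int64
        for folder, size := range globalBytes {
            total += size
            need += needBytes[folder]
        }
        if total == 0 {
            return 100 // nothing shared, nothing missing
        }
        return 100 * (1 - float64(need)/float64(total))
    }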
Jakob Borg
dcbf68e104 lib/versioner: Hack to make test coverage stable from run to run
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3485
2016-08-08 18:27:55 +00:00
Jakob Borg
c2d8c07137 lib/events: Hack to make test coverage stable from run to run
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3484
2016-08-08 18:09:40 +00:00
Jakob Borg
a4ed50ca85 build, lib: Correct total test coverage calculation
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3483
2016-08-08 16:29:32 +00:00
Jakob Borg
b3788c8ea0 authors: Fixup 0x010C 2016-08-08 08:34:36 +02:00
Jakob Borg
946c074a41 authors: Add 0x010C 2016-08-08 08:19:02 +02:00
Jakob Borg
19f79afb0f build: Setup should install golint 2016-08-07 21:58:27 +02:00
Audrius Butkevicius
af3b6f9c83 lib/model, lib/config: Support "live" device removal, folder unsharing and folder configuration changes
Furthermore:
1. Cleans configs received, migrates them as we receive them.
2. Clears indexes of devices we no longer share the folder with

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3478
2016-08-07 16:21:59 +00:00
Jakob Borg
fbe42c156d gui: Move "ignore patterns" away from "remove" in folder edit dialog 2016-08-07 14:26:32 +02:00
Jakob Borg
a1f6cbd354 lib/protocol: Clean away outdated files 2016-08-07 14:24:25 +02:00
Audrius Butkevicius
a4f052ad31 lib/connections: Fix connection switching
It seems that it would be impossible to drop down to a relay after establishing a direct connection.
Also, we should not drop the existing connection until after we've passed the validation steps,
and it seems it's currently being dropped in two places unnecessarily.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3480
2016-08-07 12:20:37 +00:00
Jakob Borg
ea87bcefd6 lib/protocol, lib/model: Implement high precision time stamps (fixes #3305)
This adds a new nanoseconds field to the FileInfo, populates it during
scans and sets the non-truncated time in Chtimes calls.

The actual file modification time is defined as modified_s seconds +
modified_ns nanoseconds. It's expected that the modified_ns field is less
than 1e9 (that is, all whole seconds should go in the modified_s field), but
this is not really enforced. Given that it's an int32, the timestamp can be
adjusted by roughly ±2.1 seconds via the modified_ns field...

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3431
2016-08-06 13:05:59 +00:00
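A minimal sketch of how the split timestamp recombines and gets applied, assuming the two fields are available as plain integers; this is an illustration, not the FileInfo code itself.

    package mtimesketch

    import (
        "os"
        "time"
    )

    // ModTime reassembles the modification time from whole seconds plus the
    // nanosecond remainder carried in the new field.
    func ModTime(modifiedS int64, modifiedNS int32) time.Time {
        return time.Unix(modifiedS, int64(modifiedNS))
    }

    // SetModTime applies the non-truncated time to a file on disk, as the
    // Chtimes call mentioned above does.
    func SetModTime(path string, modifiedS int64, modifiedNS int32) error {
        t := ModTime(modifiedS, modifiedNS)
        return os.Chtimes(path, t, t)
    }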
Jakob Borg
0655991a19 lib/db, lib/fs, lib/model: Introduce fs.MtimeFS, remove VirtualMtimeRepo
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3479
2016-08-05 17:45:45 +00:00
Jakob Borg
f368d2278f lib/config, lib/connections: Refactor handling of ignored devices (fixes #3470)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3471
2016-08-05 09:29:49 +00:00
Jakob Borg
1eb6db6ca8 cmd/syncthing, lib/...: Correctly handle ignores & invalid file names (fixes #3012, fixes #3457, fixes #3458)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3464
2016-08-05 07:13:52 +00:00
Jakob Borg
a25b63e2df cmd/syncthing: Delete old format indexes after a while (fixes #3468)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3469
2016-08-02 15:44:09 +00:00
Jakob Borg
ffe7a2fcd7 cmd/syncthing, lib/config: Enable HTTP CPU/heap profile collection for users
This adds a config option to enable debug functions on the API server; it is
disabled by default. When enabled, the /rest/debug endpoints become
available, and do so without requiring a CSRF token (although
authentication is required if configured).

We also add a new endpoint /rest/debug/cpuprof?duration=15s (with the
duration being configurable, defaulting to 30s). This runs a CPU profile
for the duration and returns it as a file. It sets headers so that a
browser will save the file with an informative name.

The same is done for heap profiles, /rest/debug/heapprof, which does not
take any parameters.

The purpose of this is that any user can enable debugging under
advanced, then point their browser to the endpoint above and get a file
that contains a CPU or heap profile we can use, with the filename
telling us what version and architecture the profile is from.

On the command line, this becomes

    curl -O -J http://localhost:8082/rest/debug/cpuprof?duration=5s
    curl: Saved to filename
    'syncthing-cpu-darwin-amd64-v0.14.3+4-g935bcc0-110307.pprof'

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3467
2016-08-02 11:06:45 +00:00
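A minimal sketch of what such an endpoint can look like, using the standard runtime/pprof package; the handler name, default duration and filename here are assumptions, and the real endpoint also encodes version and architecture in the filename.

    package debugapi

    import (
        "net/http"
        "runtime/pprof"
        "time"
    )

    // cpuProfHandler runs a CPU profile for ?duration=... (default 30s) and
    // streams the result back as a downloadable file.
    func cpuProfHandler(w http.ResponseWriter, r *http.Request) {
        duration := 30 * time.Second
        if d, err := time.ParseDuration(r.URL.Query().Get("duration")); err == nil {
            duration = d
        }
        w.Header().Set("Content-Type", "application/octet-stream")
        w.Header().Set("Content-Disposition", `attachment; filename="syncthing-cpu.pprof"`)
        if err := pprof.StartCPUProfile(w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        time.Sleep(duration)
        pprof.StopCPUProfile()
    }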
Audrius Butkevicius
08b5a7908f gui: Add one-off notifications that need to be acked
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3254
2016-08-02 08:07:30 +00:00
derekriemer
a8cd9d0154 gui: Improve accessibility (fixes #3297)
skip-check: authors

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3463
2016-07-31 22:59:44 +00:00
Jakob Borg
297240facf all: Rename LocalVersion to Sequence (fixes #3461)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3462
2016-07-29 19:54:24 +00:00
Jakob Borg
a022b0cfff gui, man: Update docs & translations 2016-07-28 13:15:14 +02:00
Jakob Borg
72026db599 lib/db, lib/model: Create temp sorting database in config dir (fixes #3449)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3454
2016-07-27 21:38:43 +00:00
Jakob Borg
aafc96f58f lib/model, lib/protocol: Sequence ClusterConfig properly (fixes #3448)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3452
2016-07-27 21:36:25 +00:00
Jakob Borg
7c7e8648ff lib/model: Trigger a puller iteration on connection (fixes #3451)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3453
2016-07-27 21:35:41 +00:00
Jakob Borg
24e2ce0764 build: Allow easy influencing build user and build host
To facilitate reproducible builds.
2016-07-27 23:27:47 +02:00
aviau
d7cb4d407b man: Include stdiscosrv and strelaysrv manpages
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3450
2016-07-27 15:00:10 +00:00
Jakob Borg
66a506e72b lib/scanner: Correctly scan symlinks (fixes #3445)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3446
2016-07-26 11:55:25 +00:00
Jakob Borg
25a7b0a6f8 gui, man: Update docs & translations 2016-07-26 10:53:00 +02:00
Jakob Borg
7aaa1dd8a3 lib/scanner: Recheck file size and modification time after hashing (ref #3440)
To catch the case where the file changed. Also make sure we never let a
size-vs-blocklist mismatch slip through.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3443
2016-07-26 08:51:39 +00:00
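A minimal sketch of the recheck (the helper name is hypothetical): Lstat the file again once hashing is done and discard the result if the size or mtime moved in the meantime.

    package scansketch

    import "os"

    // unchangedSinceStart reports whether the file still has the size and
    // modification time recorded before hashing; if not, the block list just
    // computed cannot be trusted and the file should be rescanned.
    func unchangedSinceStart(path string, before os.FileInfo) (bool, error) {
        after, err := os.Lstat(path)
        if err != nil {
            return false, err
        }
        return after.Size() == before.Size() &&
            after.ModTime().Equal(before.ModTime()), nil
    }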
Jakob Borg
2a6f164923 lib/scanner: When scanning a file, stick to the size given by Lstat (fixes #3440)
Otherwise, if the file grows during scanning, the block list will be out of
sync with the stated size and things get confused. We could fix up the size
afterwards based on the block list, but then we might see other
inconsistencies, as the mtime should have changed to reflect the new size
and so on. Better to stick to the original state and let the next scan pick
up the change.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3442
2016-07-25 19:16:49 +00:00
Jakob Borg
0f28626bb4 cmd/syncthing: Generate FolderCompletion events for folders shared with a connecting device (fixes #3436)
This used to happen by itself as the connecting device always sent an
Index message and we triggered on that. Nowadays there's no guarantee
of that, but we still need to send out one event to let listeners know
the state of folders shared with the device.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3438
2016-07-25 10:42:17 +00:00
Jakob Borg
6ed22d0885 lib/model: Stricter temporary file permissions
We could have a file to sync with permissions rw------- but we'd create
the temp file with rw-rw-rw- minus umask, usually rw-r--r--. This
potentially exposes private data while the file is being synced.

Similarly, when ignorePerms was set and we were reusing a temp file, we
would set the permissions to rw-r--r-- explicitly, potentially
overriding a strict umask that would otherwise have had the file be
rw-------.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3437
2016-07-25 10:18:05 +00:00
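A minimal sketch of the tightened temp file creation (names are illustrative): open the temporary file with owner-only permissions and let the final mode be applied once the file is complete.

    package pullsketch

    import "os"

    // createTempFile opens the temporary file with owner-only permissions so its
    // contents are not readable by other users while blocks are being assembled.
    // The destination file's real permissions are applied after the copy finishes.
    func createTempFile(path string) (*os.File, error) {
        return os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
    }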
Jakob Borg
6715b91a6c build: Remove unused docker-* commands 2016-07-25 08:10:46 +02:00
Jakob Borg
694da60659 lib/db: Reinstate database update locking
The previous commit loosened the locking around database updates.
Apparently that was not fine - what happens is that parallel updates
to the same file for different devices stomp on each other's updates to
the global index, leaving it missing one of the two devices.
2016-07-23 20:32:15 +02:00
Jakob Borg
47fa4b0a2c cmd/syncthing, lib/db, lib/model, lib/protocol: Implement delta indexes (fixes #438)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3427
2016-07-23 12:46:31 +00:00
Jakob Borg
8ab6b60778 lib/model: Sort outgoing index updates by LocalVersion
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3411
2016-07-21 17:21:15 +00:00
Jakob Borg
e1a4f81e50 gui, man: Update docs & translations 2016-07-17 23:45:22 +02:00
Jakob Borg
7b7e35d339 lib/protocol: Hello message length is an int16
It used to be an int32, but that's unnecessary and the spec now says
int16. Also relaxes the size requirement to that which fits in a signed
int16 instead of limiting to 1024 bytes, to allow for future growth.

As reported in
https://forum.syncthing.net/t/difference-between-documented-and-implemented-protocol/7798

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3406
2016-07-17 21:41:20 +00:00
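A minimal sketch of the framing this implies, a 16-bit big-endian length prefix in front of the marshalled Hello; the helper is an illustration, not the lib/protocol code.

    package hellosketch

    import (
        "encoding/binary"
        "errors"
        "math"
    )

    // frameHello prefixes the marshalled Hello with its length as a signed
    // 16-bit value, rejecting payloads that do not fit in the field.
    func frameHello(payload []byte) ([]byte, error) {
        if len(payload) > math.MaxInt16 {
            return nil, errors.New("hello message too large for int16 length field")
        }
        buf := make([]byte, 2+len(payload))
        binary.BigEndian.PutUint16(buf, uint16(len(payload)))
        copy(buf[2:], payload)
        return buf, nil
    }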
Jakob Borg
3176629410 cmd, lib: Fix ineffectual assignments (ineffasign) and comment spelling
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3405
2016-07-15 14:23:20 +00:00
Cedric Staniewski
e3ccc45d19 gui: Fix usage statistics URL in report usage preview (fixes #3397)
This applies the fix from 9d75652 to the usage report preview.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3402
2016-07-12 22:31:11 +00:00
Jakob Borg
beec9e834e gui, man: Update docs & translations 2016-07-10 09:23:58 +02:00
Audrius Butkevicius
f6f0486ff9 repo: Add message about voting
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3398
2016-07-09 15:58:55 +00:00
Jakob Borg
518f446d31 cmd/strelaypoolsrv: Fix vet warnings about type inference
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3393
2016-07-08 06:40:46 +00:00
Jakob Borg
fbbd510088 vendor: Update to latest github.com/syndtr/goleveldb 2016-07-06 09:57:15 +02:00
Jakob Borg
e440d30028 lib/protocol: Allow unknown message types
This lets us add message types in the future, for authentication or
other purposes, without completely breaking old clients. I see this as
similar behavior to adding fields to messages - newer clients must
simply be aware that older ones may ignore the message and act
accordingly.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3390
2016-07-05 09:29:28 +00:00
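The behaviour this implies, as a minimal sketch: when a message header carries a type we do not recognise, consume its payload and carry on instead of tearing the connection down. The reader plumbing here is assumed, not the actual message loop.

    package bepsketch

    import "io"

    // skipUnknown discards the payload of a message whose type we do not
    // recognise, so newer peers can introduce message types without breaking us.
    func skipUnknown(r io.Reader, payloadLen int) error {
        _, err := io.CopyN(io.Discard, r, int64(payloadLen))
        return err
    }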
Jakob Borg
44d30c83bf lib/config, cmd/syncthing: Handle committing configuration better (fixes #3077)
This slightly changes the interface used for committing configuration
changes. The two parts are now:

 - VerifyConfiguration, which runs synchronously and locked, and can
   abort the config change. These callbacks shouldn't *do* anything
   apart from looking at the config changes and saying yes or no. No
   change from previously.

 - CommitConfiguration, which runs asynchronously (one goroutine per
   call) *after* replacing the config and releasing any locks. Returning
   false from these methods sets the "requires restart" flag, which now
   lives in the config.Wrapper.

This should be deadlock free as the CommitConfiguration calls can take
as long as they like and can wait for locks to be released when they
need to tweak things. I think this should be safe compared to before as
the CommitConfiguration calls were always made from a random background
goroutine (typically one from the HTTP server), so it was always
concurrent with everything else anyway.

Hence the CommitResponse type is gone; instead you get an error back on
verification failure only, and need to explicitly check
w.RequiresRestart() afterwards if you care.

As an added bonus this fixes a bug where we would reset the "requires
restart" indicator if a config that did not require restart was saved,
even if we already were in the requires-restart state.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3386
2016-07-04 20:32:34 +00:00
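A minimal sketch of the two-phase interface described above; the method names follow the commit message, while the wrapper calls shown in the comment are an assumption about how a caller checks the restart flag.

    package configsketch

    // Configuration stands in for the real config type in this sketch.
    type Configuration struct{}

    // Committer is the shape described above: VerifyConfiguration runs
    // synchronously and may veto the change; CommitConfiguration runs in its own
    // goroutine after the config has been swapped in, and returning false marks
    // the "requires restart" flag on the wrapper.
    type Committer interface {
        VerifyConfiguration(from, to Configuration) error
        CommitConfiguration(from, to Configuration) (ok bool)
    }

    // Caller side, after saving a new config:
    //
    //     if err := wrapper.Replace(newCfg); err != nil {
    //         return err // verification failed, config unchanged
    //     }
    //     if wrapper.RequiresRestart() {
    //         // tell the user a restart is needed
    //     }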
Jakob Borg
7ff7b55732 cmd/strelaypoolsrv: Remove unused var (metalint) 2016-07-04 21:22:53 +02:00
Jakob Borg
44346b3a5a cmd/strelaypoolsrv: Fixup import in main 2016-07-04 14:58:29 +02:00
Jakob Borg
23a538d61a script: Copyright in protofmt.go 2016-07-04 14:55:17 +02:00
Jakob Borg
dcb5026f33 script, lib/discover: Fixup copyright checks 2016-07-04 14:53:11 +02:00
Jakob Borg
778ff9daa9 script: Fixup check-authors after strelaypoolsrv merge 2016-07-04 14:46:24 +02:00
Jakob Borg
ce9dc809bc build, cmd/strelaypoolsrv: Build assets using standard script 2016-07-04 13:34:44 +02:00
Jakob Borg
59370588dd vendor: Add dependencies for strelaypoolsrv 2016-07-04 13:34:34 +02:00
Jakob Borg
7d434aa9c4 build: Add strelaypoolsrv target 2016-07-04 13:34:28 +02:00
Jakob Borg
59ce7c0424 cmd/strelaypoolsrv: Merge relaypoolsrv repo into main
* relaypoolsrv/master: (32 commits)
  Fetch deps of deps X_x
  Here we go with gvt bugs
  Screw godep
  Add solaris support back in
  Add font awesome
  No value is less than zero
  Screw solaris
  Godeps
  Refactor javascript, always show table, add sorting
  Add local geoip
  Update dependencies
  Hey look, had to check all code out on linux to fix the deps
  Update godeps, reduce amount of time spent testing a relay. Goddamit godeps.
  Add timeouts, deal with overlapping markers, add a table, increase circle radiuses
  Fix a couple of issues with the relays map (geoip, 'data unavailable')
  Rate infos are in kbps, not kBps
  Add support for header holding IP address
  Update relay parameters even if it already exists (fixes #3)
  Add missing space
  Add homepage
  ...
2016-07-04 13:33:57 +02:00
Jakob Borg
9a0e5a7c18 lib/discover: Add instance ID to local discovery (fixes #3278)
A random "instance ID" is generated on each start of the local discovery
service. The instance ID is included in the announcement. When we see a
new instance ID we treat it as a new device and respond with an
announcement of our own. Hence devices get to know each other quickly on
restart.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3385
2016-07-04 11:16:48 +00:00
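A minimal sketch of the instance ID handling (field and cache names are assumptions): generate a random ID at startup, include it in every announcement, and reply immediately whenever a device shows up with an instance ID we have not seen from it before.

    package instsketch

    import (
        "crypto/rand"
        "encoding/binary"
    )

    // newInstanceID returns the random per-process instance ID included in each
    // local discovery announcement.
    func newInstanceID() int64 {
        var buf [8]byte
        if _, err := rand.Read(buf[:]); err != nil {
            panic("no randomness available: " + err.Error())
        }
        return int64(binary.BigEndian.Uint64(buf[:]))
    }

    // shouldReply reports whether an incoming announcement deserves an immediate
    // answer: an instance ID we have not seen from this device before means the
    // peer has (re)started and should learn about us right away.
    func shouldReply(seen map[string]int64, device string, instanceID int64) bool {
        if prev, ok := seen[device]; ok && prev == instanceID {
            return false
        }
        seen[device] = instanceID
        return true
    }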
Jakob Borg
8d0019595f cmd/syncthing: Update code name for v0.14
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3384
2016-07-04 10:58:45 +00:00
aviau
6ff74cfcab build, cmd/stdiscosrv, cmd/strelaysrv: Rename binaries to add "st" prefix
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3371
2016-07-04 10:51:22 +00:00
Jakob Borg
aa50ef4069 lib/model: Invalidate files with trailing white space on Windows (fixes #3227)
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3383
2016-07-04 10:44:30 +00:00
Jakob Borg
fa0101bd60 lib/protocol, lib/discover, lib/db: Use protocol buffer serialization (fixes #3080)
This changes the BEP protocol to use protocol buffer serialization
instead of XDR, and therefore also the database format. The local
discovery protocol is also updated to be protocol buffer format.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3276
LGTM: AudriusButkevicius
2016-07-04 10:40:29 +00:00
Cedric Staniewski
21f5b16e47 gui: Sort device folder lists by label
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3381
2016-07-03 21:11:39 +00:00
Cedric Staniewski
223a835f33 lib/discover: Respect the listen address scheme (fixes #3346)
This is a supplementary patch to commit a58f69b, which only fixed global
discovery. This patch adds the missing parts for the local discovery.

If the listen address scheme is set to tcp4:// or tcp6:// and no
explicit host is specified, an address should not be considered if the
source address does not match this scheme.

This prevents invalid URIs like tcp4://<IPv6 address>:<port> or tcp6://<IPv4
address>:<port> for local discovery.

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/3380
2016-07-03 20:43:26 +00:00
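A minimal sketch of the filter described, assuming the announced address scheme and the packet's source host are available as strings.

    package discfiltersketch

    import (
        "net"
        "strings"
    )

    // schemeMatchesSource reports whether a discovered address may be used:
    // tcp4:// addresses must come from an IPv4 source, tcp6:// from an IPv6
    // source, and plain tcp:// accepts either.
    func schemeMatchesSource(scheme, sourceHost string) bool {
        ip := net.ParseIP(sourceHost)
        if ip == nil {
            return false
        }
        switch {
        case strings.HasPrefix(scheme, "tcp4"):
            return ip.To4() != nil
        case strings.HasPrefix(scheme, "tcp6"):
            return ip.To4() == nil
        default:
            return true
        }
    }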
Audrius Butkevicius
e9063c639a Fetch deps of deps X_x 2016-04-17 15:03:02 +01:00
Audrius Butkevicius
8d6dedc15b Here we go with gvt bugs 2016-04-17 14:57:31 +01:00
Audrius Butkevicius
1bc4c1a8ac Screw godep 2016-04-17 14:49:00 +01:00
AudriusButkevicius
1a35c440e8 Add solaris support back in 2016-04-14 19:28:06 -04:00
Audrius Butkevicius
2c6c84ac61 Add font awesome 2016-04-14 22:31:56 +01:00
Audrius Butkevicius
bd666daf82 No value is less than zero 2016-04-14 22:26:31 +01:00
AudriusButkevicius
ca3831c4f5 Screw solaris 2016-04-14 17:21:44 -04:00
AudriusButkevicius
bbe0d34f43 Godeps 2016-04-14 17:19:56 -04:00
Audrius Butkevicius
dd364c962f Refactor javascript, always show table, add sorting 2016-04-14 22:01:25 +01:00
Audrius Butkevicius
50068b0b0f Add local geoip 2016-04-13 21:34:11 +01:00
Jakob Borg
175769b53e Update dependencies 2015-12-04 15:27:55 +01:00
Audrius Butkevicius
07722dc33d Hey look, had to check all code out on linux to fix the deps 2015-11-27 21:02:19 +00:00
Audrius Butkevicius
f39f816a98 Update godeps, reduce amount of time spent testing a relay. Goddamit godeps. 2015-11-23 21:33:22 +00:00
Audrius Butkevicius
845f31b98f Add timeouts, deal with overlapping markers, add a table, increase circle radiuses 2015-11-22 22:47:48 +00:00
Audrius Butkevicius
89b6c32cee Merge pull request #6 from canton7/feature/fix-map
Fix a couple of issues with the relays map (geoip, 'data unavailable')
2015-11-22 14:27:58 +00:00
Antony Male
6ee36fe361 Fix a couple of issues with the relays map (geoip, 'data unavailable')
- Move to ipinfo.io for geoip, rather than Telize. Telize has been closed
   down. ipinfo.io has apparently got decent availability, and allows
   1,000 requests per day on the free tier. Since requests are made by the
   client, this should be more than enough (and the total across all clients
   should still be less than this).

 - Fix issue where one nonresponsive relay would cause 'data unavailable'
   to be shown for many relays. This was caused by the relay status
   promise not being correctly added to the list of things being waited
   for before the map was rendered. Any delayed relay status requests
   would therefore occur after the map was rendered, which was too late.
2015-11-22 14:10:29 +00:00
Audrius Butkevicius
b61d7c2428 Merge pull request #5 from andyleap/patch-1
Rate infos are in kbps, not kBps
2015-11-10 10:05:54 -05:00
andyleap
bcc5d7c00f Rate infos are in kbps, not kBps 2015-11-10 09:52:07 -05:00
Audrius Butkevicius
925f60d9c3 Add support for header holding IP address 2015-11-03 21:23:35 +00:00
Audrius Butkevicius
8b3f5fda07 Update relay parameters even if it already exists (fixes #3) 2015-10-31 17:27:43 +00:00
Audrius Butkevicius
ac17b2c584 Add missing space 2015-10-29 19:42:42 +00:00
Jakob Borg
c67c861dc6 Merge pull request #2 from syncthing/homepage
Add homepage
2015-10-29 17:45:21 +01:00
Audrius Butkevicius
09ba9e6259 Add homepage 2015-10-24 00:06:02 +01:00
Audrius Butkevicius
0e167f5c24 Add CORS headers 2015-10-22 21:44:50 +01:00
Audrius Butkevicius
c885903ff2 Change endpoint URL, as we might want to run some stats pages 2015-10-17 00:05:44 +01:00
Audrius Butkevicius
a91a836224 Merge pull request #1 from syncthing/deps
Use vendored dependencies, new relay/client location
2015-09-22 19:18:21 +01:00
Jakob Borg
8450ab8dab Use vendored dependencies, new relay/client location 2015-09-22 19:51:40 +02:00
Jakob Borg
168889d999 Option for perm relay file, keep test cert in temp dir 2015-09-22 09:02:18 +02:00
Jakob Borg
e1339628d9 Default values tweak 2015-09-22 08:55:06 +02:00
Audrius Butkevicius
425f61cf34 Division by zero not good 2015-09-21 21:51:12 +00:00
Audrius Butkevicius
7d9df5abc6 Update README.md 2015-09-21 22:06:12 +01:00
Audrius Butkevicius
118cba4d9b Add build file 2015-09-21 20:53:01 +00:00
AudriusButkevicius
3cacb48f3c Add IP based rate limiting, check if client IP matches advertised relay, reorder stuff 2015-09-07 18:13:50 +01:00
AudriusButkevicius
6965812d79 Relays are matched by ip:port pairs 2015-09-07 09:14:14 +01:00
AudriusButkevicius
78fb7fe9f9 Implementation 2015-09-06 20:52:31 +01:00
Audrius Butkevicius
d7c8075862 Initial commit 2015-09-06 17:29:14 +01:00
657 changed files with 601457 additions and 7194 deletions

.gitattributes vendored (1 line changed)

@@ -6,3 +6,4 @@ vendor/** -text=auto
# Diffs on these files are meaningless
*.svg -diff
*.pb.go -diff

.gitignore vendored (4 lines changed)

@@ -1,7 +1,7 @@
/syncthing
/discosrv
/stdiscosrv
syncthing.exe
discosrv.exe
stdiscosrv.exe
*.tar.gz
*.zip
*.asc


@@ -13,6 +13,7 @@ Alexandre Viau (aviau) <alexandre@alexandreviau.net> <aviau@debian.org>
Anderson Mesquita (andersonvom) <andersonvom@gmail.com>
Andrew Dunham (andrew-d) <andrew@du.nham.ca>
Andrey D (scienmind) <scintertech@cryptolab.net>
Antoine Lamielle (0x010C) <antoine.lamielle@0x010c.fr> <gh@0x010c.fr>
Antony Male (canton7) <antony.male@gmail.com>
Arthur Axel fREW Schmidt (frioux) <frew@afoolishmanifesto.com> <frioux@gmail.com>
Audrius Butkevicius (AudriusButkevicius) <audrius.butkevicius@gmail.com>

NICKS (2 lines changed)

@@ -1,6 +1,8 @@
# This file maps email addresses used in commits to nicks used the changelog.
# It is auto generated from the AUTHORS file by script/authors.go.
0x010C <antoine.lamielle@0x010c.fr>
0x010C <gh@0x010c.fr>
acogdev <jake@acogdev.com>
alessandro.g89 <alessandro.g89@gmail.com>
alex2108 <register-github@alex-graf.de>


@@ -27,6 +27,11 @@ There are a few examples for keeping Syncthing running in the background
on your system in [the etc directory][3]. There are also several [GUI
implementations][11] for Windows, Mac and Linux.
## Vote on features/bugs
We'd like to encourage you to [vote][12] on issues that matter to you.
This helps the team understand what are the biggest pain points for our users, and could potentially influence what is being worked on next.
## Getting in Touch
The first and best point of contact is the [Forum][8]. There is also an IRC
@@ -66,3 +71,4 @@ All code is licensed under the [MPLv2 License][7].
[9]: https://kiwiirc.com/client/irc.freenode.net/#syncthing
[10]: https://github.com/syncthing/syncthing/issues
[11]: http://docs.syncthing.net/users/contrib.html#gui-wrappers
[12]: https://www.bountysource.com/teams/syncthing/issues

build.go (121 lines changed)

@@ -96,38 +96,57 @@ var targets = map[string]target{
{src: "etc/linux-systemd/user/syncthing.service", dst: "deb/usr/lib/systemd/user/syncthing.service", perm: 0644},
},
},
"discosrv": {
name: "discosrv",
buildPkg: "./cmd/discosrv",
binaryName: "discosrv", // .exe will be added automatically for Windows builds
"stdiscosrv": {
name: "stdiscosrv",
buildPkg: "./cmd/stdiscosrv",
binaryName: "stdiscosrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/discosrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/discosrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "cmd/stdiscosrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/stdiscosrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
debianFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/discosrv/README.md", dst: "deb/usr/share/doc/discosrv/README.txt", perm: 0644},
{src: "cmd/discosrv/LICENSE", dst: "deb/usr/share/doc/discosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/discosrv/AUTHORS.txt", perm: 0644},
{src: "cmd/stdiscosrv/README.md", dst: "deb/usr/share/doc/stdiscosrv/README.txt", perm: 0644},
{src: "cmd/stdiscosrv/LICENSE", dst: "deb/usr/share/doc/stdiscosrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/stdiscosrv/AUTHORS.txt", perm: 0644},
{src: "man/stdiscosrv.1", dst: "deb/usr/share/man/man1/stdiscosrv.1", perm: 0644},
},
tags: []string{"purego"},
},
"relaysrv": {
name: "relaysrv",
buildPkg: "./cmd/relaysrv",
binaryName: "relaysrv", // .exe will be added automatically for Windows builds
"strelaysrv": {
name: "strelaysrv",
buildPkg: "./cmd/strelaysrv",
binaryName: "strelaysrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/relaysrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/relaysrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "cmd/strelaysrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
debianFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/relaysrv/README.md", dst: "deb/usr/share/doc/relaysrv/README.txt", perm: 0644},
{src: "cmd/relaysrv/LICENSE", dst: "deb/usr/share/doc/relaysrv/LICENSE.txt", perm: 0644},
{src: "cmd/strelaysrv/README.md", dst: "deb/usr/share/doc/strelaysrv/README.txt", perm: 0644},
{src: "cmd/strelaysrv/LICENSE", dst: "deb/usr/share/doc/strelaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/strelaysrv/AUTHORS.txt", perm: 0644},
{src: "man/strelaysrv.1", dst: "deb/usr/share/man/man1/strelaysrv.1", perm: 0644},
},
},
"strelaypoolsrv": {
name: "strelaypoolsrv",
buildPkg: "./cmd/strelaypoolsrv",
binaryName: "strelaypoolsrv", // .exe will be added automatically for Windows builds
archiveFiles: []archiveFile{
{src: "{{binary}}", dst: "{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "AUTHORS.txt", perm: 0644},
},
debianFiles: []archiveFile{
{src: "{{binary}}", dst: "deb/usr/bin/{{binary}}", perm: 0755},
{src: "cmd/strelaypoolsrv/README.md", dst: "deb/usr/share/doc/relaysrv/README.txt", perm: 0644},
{src: "cmd/strelaypoolsrv/LICENSE", dst: "deb/usr/share/doc/relaysrv/LICENSE.txt", perm: 0644},
{src: "AUTHORS", dst: "deb/usr/share/doc/relaysrv/AUTHORS.txt", perm: 0644},
},
},
@@ -238,8 +257,8 @@ func runCommand(cmd string, target target) {
case "assets":
rebuildAssets()
case "xdr":
xdr()
case "proto":
proto()
case "translate":
translate()
@@ -271,9 +290,12 @@ func runCommand(cmd string, target target) {
case "metalint":
if isGometalinterInstalled() {
dirs := []string{".", "./cmd/...", "./lib/..."}
gometalinter("deadcode", dirs, "test/util.go")
gometalinter("structcheck", dirs)
gometalinter("varcheck", dirs)
ok := gometalinter("deadcode", dirs, "test/util.go")
ok = gometalinter("structcheck", dirs) && ok
ok = gometalinter("varcheck", dirs) && ok
if !ok {
os.Exit(1)
}
}
default:
@@ -326,6 +348,7 @@ func checkRequiredGoVersion() (float64, bool) {
}
func setup() {
runPrint("go", "get", "-v", "github.com/golang/lint/golint")
runPrint("go", "get", "-v", "golang.org/x/tools/cmd/cover")
runPrint("go", "get", "-v", "golang.org/x/net/html")
runPrint("go", "get", "-v", "github.com/FiloSottile/gvt")
@@ -543,16 +566,17 @@ func listFiles(dir string) []string {
func rebuildAssets() {
runPipe("lib/auto/gui.files.go", "go", "run", "script/genassets.go", "gui")
runPipe("cmd/strelaypoolsrv/auto/gui.go", "go", "run", "script/genassets.go", "cmd/strelaypoolsrv/gui")
}
func lazyRebuildAssets() {
if shouldRebuildAssets() {
if shouldRebuildAssets("lib/auto/gui.files.go", "gui") || shouldRebuildAssets("cmd/strelaypoolsrv/auto/gui.go", "cmd/strelaypoolsrv/auto/gui") {
rebuildAssets()
}
}
func shouldRebuildAssets() bool {
info, err := os.Stat("lib/auto/gui.files.go")
func shouldRebuildAssets(target, srcdir string) bool {
info, err := os.Stat(target)
if err != nil {
// If the file doesn't exist, we must rebuild it
return true
@@ -562,7 +586,7 @@ func shouldRebuildAssets() bool {
// so we should rebuild it.
currentBuild := info.ModTime()
assetsAreNewer := false
filepath.Walk("gui", func(path string, info os.FileInfo, err error) error {
filepath.Walk(srcdir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
@@ -576,8 +600,8 @@ func shouldRebuildAssets() bool {
return assetsAreNewer
}
func xdr() {
runPrint("go", "generate", "./lib/discover", "./lib/db", "./lib/protocol", "./lib/relay/protocol")
func proto() {
runPrint("go", "generate", "./lib/...")
}
func translate() {
@@ -734,6 +758,10 @@ func buildStamp() int64 {
}
func buildUser() string {
if v := os.Getenv("BUILD_USER"); v != "" {
return v
}
u, err := user.Current()
if err != nil {
return "unknown-user"
@@ -742,6 +770,10 @@ func buildUser() string {
}
func buildHost() string {
if v := os.Getenv("BUILD_HOST"); v != "" {
return v
}
h, err := os.Hostname()
if err != nil {
return "unknown-host"
@@ -956,13 +988,17 @@ func lint(pkg string) {
}
analCommentPolicy := regexp.MustCompile(`exported (function|method|const|type|var) [^\s]+ should have comment`)
for _, line := range bytes.Split(bs, []byte("\n")) {
if analCommentPolicy.Match(line) {
for _, line := range strings.Split(string(bs), "\n") {
if line == "" {
continue
}
if len(line) > 0 {
log.Printf("%s", line)
if analCommentPolicy.MatchString(line) {
continue
}
if strings.Contains(line, ".pb.go:") {
continue
}
log.Println(line)
}
}
@@ -1003,7 +1039,7 @@ func isGometalinterInstalled() bool {
return true
}
func gometalinter(linter string, dirs []string, excludes ...string) {
func gometalinter(linter string, dirs []string, excludes ...string) bool {
params := []string{"--disable-all"}
params = append(params, fmt.Sprintf("--deadline=%ds", 60))
params = append(params, "--enable="+linter)
@@ -1016,12 +1052,19 @@ func gometalinter(linter string, dirs []string, excludes ...string) {
params = append(params, dir)
}
bs, err := runError("gometalinter", params...)
bs, _ := runError("gometalinter", params...)
if len(bs) > 0 {
log.Printf("%s", bs)
}
if err != nil {
log.Printf("%v", err)
nerr := 0
for _, line := range strings.Split(string(bs), "\n") {
if line == "" {
continue
}
if strings.Contains(line, ".pb.go:") {
continue
}
log.Println(line)
nerr++
}
return nerr == 0
}


@@ -104,7 +104,7 @@ case "${1:-default}" in
# For every package in the repo
for dir in $(go list ./lib/... ./cmd/...) ; do
# run the tests
GOPATH="$(pwd)/Godeps/_workspace:$GOPATH" go test -race -coverprofile=profile.out $dir
GOPATH="$(pwd)/Godeps/_workspace:$GOPATH" go test -coverprofile=profile.out $dir
if [ -f profile.out ] ; then
# and if there was test output, append it to coverage.out
grep -v "mode: " profile.out >> coverage.out
@@ -112,6 +112,11 @@ case "${1:-default}" in
fi
done
notCovered=$(egrep -c '\s0$' coverage.out)
total=$(wc -l coverage.out | awk '{print $1}')
coverPct=$(awk "BEGIN{print (1 - $notCovered / $total) * 100}")
echo "Total coverage is $coverPct%"
gocov convert coverage.out | gocov-xml > coverage.xml
# This is usually run from within Jenkins. If it is, we need to
@@ -131,55 +136,6 @@ case "${1:-default}" in
go2xunit -output tests.xml -fail < tests.out
;;
docker-all)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -c './build.sh clean \
&& ./build.sh test-cov \
&& ./build.sh bench \
&& ./build.sh all'
;;
docker-test)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc './build.sh clean \
&& go run build.go -race \
&& export GOPATH=$(pwd)/Godeps/_workspace:$GOPATH \
&& cd test \
&& go test -tags integration -v -timeout 90m -short \
&& git clean -fxd .'
;;
docker-lint)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc 'go run build.go lint'
;;
docker-vet)
img=${DOCKERIMG:-syncthing/build:latest}
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
"$img" \
sh -euxc 'go run build.go vet'
;;
*)
echo "Unknown build command $1"
;;


@@ -9,6 +9,7 @@ package main
import (
"bytes"
"crypto/rand"
"encoding/binary"
"flag"
"log"
"strings"
@@ -66,24 +67,25 @@ func recv(bc beacon.Interface) {
seen := make(map[string]bool)
for {
data, src := bc.Recv()
var ann discover.Announce
ann.UnmarshalXDR(data)
if m := binary.BigEndian.Uint32(data); m != discover.Magic {
log.Printf("Incorrect magic %x in announcement from %v", m, src)
continue
}
if bytes.Equal(ann.This.ID, myID) {
var ann discover.Announce
ann.Unmarshal(data[4:])
if bytes.Equal(ann.ID, myID) {
// This is one of our own fake packets, don't print it.
continue
}
// Print announcement details for the first packet from a given
// device ID and source address, or if -all was given.
key := string(ann.This.ID) + src.String()
key := string(ann.ID) + src.String()
if all || !seen[key] {
log.Printf("Announcement from %v\n", src)
log.Printf(" %v at %s\n", protocol.DeviceIDFromBytes(ann.This.ID), strings.Join(addrStrs(ann.This), ", "))
for _, dev := range ann.Extra {
log.Printf(" %v at %s\n", protocol.DeviceIDFromBytes(dev.ID), strings.Join(addrStrs(dev), ", "))
}
log.Printf(" %v at %s\n", protocol.DeviceIDFromBytes(ann.ID), strings.Join(ann.Addresses, ", "))
seen[key] = true
}
}
@@ -92,15 +94,10 @@ func recv(bc beacon.Interface) {
// sends fake discovery announcements once every second
func send(bc beacon.Interface) {
ann := discover.Announce{
Magic: discover.AnnouncementMagic,
This: discover.Device{
ID: myID,
Addresses: []discover.Address{
{URL: "tcp://fake.example.com:12345"},
},
},
ID: myID,
Addresses: []string{"tcp://fake.example.com:12345"},
}
bs, _ := ann.MarshalXDR()
bs, _ := ann.Marshal()
for {
bc.Send(bs)
@@ -108,15 +105,6 @@ func send(bc beacon.Interface) {
}
}
// returns the list of address URLs
func addrStrs(dev discover.Device) []string {
ss := make([]string, len(dev.Addresses))
for i, addr := range dev.Addresses {
ss[i] = addr.URL
}
return ss
}
// returns a random but recognizable device ID
func randomDeviceID() []byte {
var id [32]byte


@@ -1,12 +1,12 @@
discosrv
========
stdiscosrv
==========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/discosrv.svg?style=flat-square)](http://build.syncthing.net/job/discosrv/lastBuild/)
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/stdiscosrv.svg?style=flat-square)](http://build.syncthing.net/job/stdiscosrv/lastBuild/)
This is the global discovery server for the `syncthing` project.
To get it, run `go get github.com/syncthing/discosrv` or download the
[latest build](http://build.syncthing.net/job/discosrv/lastSuccessfulBuild/artifact/)
To get it, run `go get github.com/syncthing/stdiscosrv` or download the
[latest build](http://build.syncthing.net/job/stdiscosrv/lastSuccessfulBuild/artifact/)
from the build server.
Usage
@@ -19,15 +19,15 @@ By default it will use in-memory `ql` backend. If you wish to persist the
information on disk between restarts in `ql`, specify a file DSN:
```bash
$ discosrv -db-dsn="file:///var/run/discosrv.db"
$ stdiscosrv -db-dsn="file:///var/run/stdiscosrv.db"
```
For `postgres`, you will need to create a database and a user with permissions
to create tables in it, then start the discosrv as follows:
to create tables in it, then start the stdiscosrv as follows:
```bash
$ export DISCOSRV_DB_DSN="postgres://user:password@localhost/databasename"
$ discosrv -db-backend="postgres"
$ export STDISCOSRV_DB_DSN="postgres://user:password@localhost/databasename"
$ stdiscosrv -db-backend="postgres"
```
You can pass the DSN as command line option, but the value what you pass in will
@@ -37,4 +37,4 @@ to other users.
In all cases, the appropriate tables and indexes will be created at first
startup. If it doesn't exit with an error, you're fine.
See `discosrv -help` for other options.
See `stdiscosrv -help` for other options.



@@ -38,7 +38,7 @@ func init() {
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`discosrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
LongVersion = fmt.Sprintf(`stdiscosrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
@@ -48,7 +48,7 @@ var (
globalStats stats
statsFile string
backend = "ql"
dsn = getEnvDefault("DISCOSRV_DB_DSN", "memory://discosrv")
dsn = getEnvDefault("STDISCOSRV_DB_DSN", "memory://stdiscosrv")
certFile = "cert.pem"
keyFile = "key.pem"
debug = false


@@ -28,7 +28,7 @@ func postgresSetup(db *sql.DB) error {
}
row := db.QueryRow(`SELECT 'DevicesDeviceIDIndex'::regclass`)
if err := row.Scan(nil); err != nil {
if err = row.Scan(nil); err != nil {
_, err = db.Exec(`CREATE INDEX DevicesDeviceIDIndex ON Devices (DeviceID)`)
}
if err != nil {
@@ -36,7 +36,7 @@ func postgresSetup(db *sql.DB) error {
}
row = db.QueryRow(`SELECT 'DevicesSeenIndex'::regclass`)
if err := row.Scan(nil); err != nil {
if err = row.Scan(nil); err != nil {
_, err = db.Exec(`CREATE INDEX DevicesSeenIndex ON Devices (Seen)`)
}
if err != nil {
@@ -53,7 +53,7 @@ func postgresSetup(db *sql.DB) error {
}
row = db.QueryRow(`SELECT 'AddressesDeviceIDSeenIndex'::regclass`)
if err := row.Scan(nil); err != nil {
if err = row.Scan(nil); err != nil {
_, err = db.Exec(`CREATE INDEX AddressesDeviceIDSeenIndex ON Addresses (DeviceID, Seen)`)
}
if err != nil {
@@ -61,7 +61,7 @@ func postgresSetup(db *sql.DB) error {
}
row = db.QueryRow(`SELECT 'AddressesDeviceIDAddressIndex'::regclass`)
if err := row.Scan(nil); err != nil {
if err = row.Scan(nil); err != nil {
_, err = db.Exec(`CREATE INDEX AddressesDeviceIDAddressIndex ON Addresses (DeviceID, Address)`)
}
if err != nil {



@@ -40,7 +40,8 @@ func main() {
log.Println("Lstat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Printf(" Time: %v", fi.ModTime())
log.Printf(" %d.%09d", fi.ModTime().Unix(), fi.ModTime().Nanosecond())
log.Println()
if !fi.Mode().IsDir() && !fi.Mode().IsRegular() {
@@ -52,7 +53,8 @@ func main() {
log.Println("Stat:")
log.Printf(" Size: %d bytes", fi.Size())
log.Printf(" Mode: 0%o", fi.Mode())
log.Printf(" Time: %v (%d)", fi.ModTime(), fi.ModTime().Unix())
log.Printf(" Time: %v", fi.ModTime())
log.Printf(" %d.%09d", fi.ModTime().Unix(), fi.ModTime().Nanosecond())
log.Println()
}


@@ -10,53 +10,71 @@ import (
"encoding/binary"
"fmt"
"log"
"time"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
)
func dump(ldb *leveldb.DB) {
func dump(ldb *db.Instance) {
it := ldb.NewIterator(nil, nil)
var dev protocol.DeviceID
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := nulString(key[1 : 1+64])
devBytes := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
copy(dev[:], devBytes)
fmt.Printf("[device] F:%q N:%q D:%v\n", folder, name, dev)
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
fmt.Printf("[device] F:%d D:%d N:%q", folder, device, name)
var f protocol.FileInfo
err := f.UnmarshalXDR(it.Value())
err := f.Unmarshal(it.Value())
if err != nil {
log.Fatal(err)
}
fmt.Printf(" N:%q\n F:%#o\n M:%d\n V:%v\n S:%d\n B:%d\n", f.Name, f.Flags, f.Modified, f.Version, f.Size(), len(f.Blocks))
fmt.Printf(" V:%v\n", f)
case db.KeyTypeGlobal:
folder := nulString(key[1 : 1+64])
name := nulString(key[1+64:])
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
var flv db.VersionList
flv.UnmarshalXDR(it.Value())
fmt.Printf("[global] F:%q N:%q V: %s\n", folder, name, flv)
flv.Unmarshal(it.Value())
fmt.Printf("[global] F:%d N:%q V:%s\n", folder, name, flv)
case db.KeyTypeBlock:
folder := nulString(key[1 : 1+64])
hash := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
fmt.Printf("[block] F:%q H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
fmt.Printf("[block] F:%d H:%x N:%q I:%d\n", folder, hash, name, binary.BigEndian.Uint32(it.Value()))
case db.KeyTypeDeviceStatistic:
fmt.Printf("[dstat]\n %x\n %x\n", it.Key(), it.Value())
fmt.Printf("[dstat] K:%x V:%x\n", it.Key(), it.Value())
case db.KeyTypeFolderStatistic:
fmt.Printf("[fstat]\n %x\n %x\n", it.Key(), it.Value())
fmt.Printf("[fstat] K:%x V:%x\n", it.Key(), it.Value())
case db.KeyTypeVirtualMtime:
fmt.Printf("[mtime]\n %x\n %x\n", it.Key(), it.Value())
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
val := it.Value()
var real, virt time.Time
real.UnmarshalBinary(val[:len(val)/2])
virt.UnmarshalBinary(val[len(val)/2:])
fmt.Printf("[mtime] F:%d N:%q R:%v V:%v\n", folder, name, real, virt)
case db.KeyTypeFolderIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
fmt.Printf("[folderidx] K:%d V:%q\n", key, it.Value())
case db.KeyTypeDeviceIdx:
key := binary.BigEndian.Uint32(it.Key()[1:])
val := it.Value()
if len(val) == 0 {
fmt.Printf("[deviceidx] K:%d V:<nil>\n", key)
} else {
dev := protocol.DeviceIDFromBytes(val)
fmt.Printf("[deviceidx] K:%d V:%s\n", key, dev)
}
default:
fmt.Printf("[???]\n %x\n %x\n", it.Key(), it.Value())


@@ -8,11 +8,10 @@ package main
import (
"container/heap"
"encoding/binary"
"fmt"
"github.com/syncthing/syncthing/lib/db"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syndtr/goleveldb/leveldb"
)
type SizedElement struct {
@@ -38,33 +37,31 @@ func (h *ElementHeap) Pop() interface{} {
return x
}
func dumpsize(ldb *leveldb.DB) {
func dumpsize(ldb *db.Instance) {
h := &ElementHeap{}
heap.Init(h)
it := ldb.NewIterator(nil, nil)
var dev protocol.DeviceID
var ele SizedElement
for it.Next() {
key := it.Key()
switch key[0] {
case db.KeyTypeDevice:
folder := nulString(key[1 : 1+64])
devBytes := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
copy(dev[:], devBytes)
ele.key = fmt.Sprintf("DEVICE:%s:%s:%s", dev, folder, name)
folder := binary.BigEndian.Uint32(key[1:])
device := binary.BigEndian.Uint32(key[1+4:])
name := nulString(key[1+4+4:])
ele.key = fmt.Sprintf("DEVICE:%d:%d:%s", folder, device, name)
case db.KeyTypeGlobal:
folder := nulString(key[1 : 1+64])
name := nulString(key[1+64:])
ele.key = fmt.Sprintf("GLOBAL:%s:%s", folder, name)
folder := binary.BigEndian.Uint32(key[1:])
name := nulString(key[1+4:])
ele.key = fmt.Sprintf("GLOBAL:%d:%s", folder, name)
case db.KeyTypeBlock:
folder := nulString(key[1 : 1+64])
hash := key[1+64 : 1+64+32]
name := nulString(key[1+64+32:])
ele.key = fmt.Sprintf("BLOCK:%s:%x:%s", folder, hash, name)
folder := binary.BigEndian.Uint32(key[1:])
hash := key[1+4 : 1+4+32]
name := nulString(key[1+4+32:])
ele.key = fmt.Sprintf("BLOCK:%d:%x:%s", folder, hash, name)
case db.KeyTypeDeviceStatistic:
ele.key = fmt.Sprintf("DEVICESTATS:%s", key[1:])
@@ -75,6 +72,14 @@ func dumpsize(ldb *leveldb.DB) {
case db.KeyTypeVirtualMtime:
ele.key = fmt.Sprintf("MTIME:%s", key[1:])
case db.KeyTypeFolderIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("FOLDERIDX:%d", id)
case db.KeyTypeDeviceIdx:
id := binary.BigEndian.Uint32(key[1:])
ele.key = fmt.Sprintf("DEVICEIDX:%d", id)
default:
ele.key = fmt.Sprintf("UNKNOWN:%x", key)
}


@@ -13,8 +13,7 @@ import (
"os"
"path/filepath"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syncthing/syncthing/lib/db"
)
func main() {
@@ -28,16 +27,12 @@ func main() {
path := flag.Arg(0)
if path == "" {
path = filepath.Join(defaultConfigDir(), "index-v0.11.0.db")
path = filepath.Join(defaultConfigDir(), "index-v0.14.0.db")
}
fmt.Println("Path:", path)
ldb, err := leveldb.OpenFile(path, &opt.Options{
ErrorIfMissing: true,
Strict: opt.StrictAll,
OpenFilesCacheCapacity: 100,
})
ldb, err := db.Open(path)
if err != nil {
log.Fatal(err)
}


@@ -0,0 +1,15 @@
# relaypoolsrv
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/relaypoolsrv.svg?style=flat-square)](http://build.syncthing.net/job/relaypoolsrv/lastBuild/)
This is the relay pool server for the `syncthing` project, which allows community hosted [relaysrv](https://github.com/syncthing/relaysrv)'s to join the public pool.
Servers that join the pool are then advertised to users of `syncthing` as potential connection points for those who are unable to connect directly due to NAT or firewall issues.
There is very little reason why you'd want to run this yourself, as `relaypoolsrv` is just used for announcement and lookup of public relay servers. If you are looking to setup a private or a public relay, please check the documentation for [relaysrv](https://github.com/syncthing/relaysrv), which also explains how to join the default public pool.
If you still want to run it, you can run `go get github.com/syncthing/relaypoolsrv` download it or download the
[latest build](http://build.syncthing.net/job/relaypoolsrv/lastSuccessfulBuild/artifact/)
from the build server.
See `relaypoolsrv -help` for configuration options.

cmd/strelaypoolsrv/auto/.gitignore (new file, 1 line)

@@ -0,0 +1 @@
gui.go


@@ -0,0 +1,395 @@
<!DOCTYPE html>
<html lang="en" ng-app="syncthing" ng-controller="relayDataController">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="author" content="">
<title>Relay stats</title>
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet">
<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.6.1/css/font-awesome.min.css">
<style>
#map {
height: 600px;
}
.ng-cloak {
display: none;
}
table {
font-size: 11px !important;
width: 100%;
border: 1px;
}
td {
padding: 0px !important;
}
tfoot td {
font-weight: bold;
}
</style>
</head>
<body class="ng-cloak">
<div class="container">
<h1>Relay Pool Data</h2>
<div ng-if="relays === undefined" class="text-center">
<img src="//cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif"/>
<p>Please wait while we gather data</p>
</div>
<div>
<div ng-show="relays !== undefined" class="ng-hide">
<p>
Currently {{ relays.length }} relays online ({{ totals.goMaxProcs }} cores in total).
</p>
</div>
<div id="map"></div> <!-- Can't hide the map, otherwise it freaks out -->
<p>The circle size represents how much bytes the relay transfered relative to other relays</p>
</div>
<div>
<table class="table table-striped table-condensed table">
<thead>
<tr>
<th rowspan="2">Address</td>
<th rowspan="2">
<a ng-click="sortType = 'status.numActiveSessions || -1'; sortReverse = !sortReverse">
Sessions
<span ng-show="sortType == 'status.numActiveSessions || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.numActiveSessions || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.numConnections || -1'; sortReverse = !sortReverse">
Connections
<span ng-show="sortType == 'status.numConnections || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.numConnections || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.bytesProxied || -1'; sortReverse = !sortReverse">
Data relayed
<span ng-show="sortType == 'status.bytesProxied || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.bytesProxied || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th colspan="6" class="text-center">Transfer rate in the last period</th>
<th rowspan="2">
<a ng-click="sortType = 'status.uptimeSeconds || -1'; sortReverse = !sortReverse">
Uptime hours
<span ng-show="sortType == 'status.uptimeSeconds || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.uptimeSeconds || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th rowspan="2">
<a ng-click="sortType = 'status.options[\'provided-by\'] || \'\''; sortReverse = !sortReverse">
Provided by
<span ng-show="sortType == 'status.options[\'provided-by\'] || \'\'' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.options[\'provided-by\'] || \'\'' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
</tr>
<tr>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[0] || -1'; sortReverse = !sortReverse">
10s
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[0] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[0] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[1] || -1'; sortReverse = !sortReverse">
1m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[1] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[1] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[2] || -1'; sortReverse = !sortReverse">
5m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[2] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[2] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[3] || -1'; sortReverse = !sortReverse">
15m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[3] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[3] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[4] || -1'; sortReverse = !sortReverse">
30m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[4] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[4] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
<th>
<a ng-click="sortType = 'status.kbps10s1m5m15m30m60m[5] || -1'; sortReverse = !sortReverse">
60m
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[5] || -1' && !sortReverse" class="fa fa-caret-down"></span>
<span ng-show="sortType == 'status.kbps10s1m5m15m30m60m[5] || -1' && sortReverse" class="fa fa-caret-up"></span>
</a>
</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="relay in relays | orderBy:sortType:sortReverse ">
<td>{{ relay.address }}</td>
<td ng-if="relay.status === undefined" colspan="11" class="text-center">Looking up...</td>
<td ng-if-start="relay.status !== undefined">{{ relay.status.numActiveSessions }}</td>
<td>{{ relay.status.numConnections }}</td>
<td>{{ relay.status.bytesProxied | bytes }}</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[0] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[1] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[2] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[3] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[4] * 128 | bytes }}/s</td>
<td>{{ relay.status.kbps10s1m5m15m30m60m[5] * 128 | bytes }}/s</td>
<td ng-if="relay.status.uptimeSeconds != undefined">{{ relay.status.uptimeSeconds/60/60 | number:0 }}</td>
<td ng-if="relay.status.uptimeSeconds == undefined"></td>
<td title="{{ relay.status.options['provided-by'] || '' }}" ng-if-end>
{{ relay.status.options['provided-by'] || '' | limitTo:50 }}
<span ng-if="(relay.status.options['provided-by'] || '').length > 50">&hellip;
</td>
</tr>
</tbody>
<tfoot>
<tr>
<td>Totals</td>
<td>{{ totals.numActiveSessions }}</td>
<td>{{ totals.numConnections }}</td>
<td>{{ totals.bytesProxied | bytes }}</td>
<td>{{ totals.kbps10s1m5m15m30m60m[0] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[1] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[2] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[3] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[4] * 128 | bytes }}/s</td>
<td>{{ totals.kbps10s1m5m15m30m60m[5] * 128 | bytes }}/s</td>
<td>{{ totals.uptimeSeconds/60/60 | number:0 }} hours</td>
<td>{{ relays.length }} relays</td>
</tr>
</tfoor>
</table>
</div>
<hr>
<p>
This product includes GeoLite2 data created by MaxMind, available from
<a href="http://www.maxmind.com">http://www.maxmind.com</a>.
</p>
</div>
<script src="//code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.min.js"></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script src="//maps.googleapis.com/maps/api/js"></script>
</body>
<script>
angular.module('syncthing', [
])
.config(function($httpProvider) {
$httpProvider.defaults.timeout = 5000;
})
.filter('bytes', function() {
return function(bytes, precision) {
if (isNaN(parseFloat(bytes)) || !isFinite(bytes)) return '-';
if (typeof precision === 'undefined') precision = 1;
var units = ['bytes', 'kB', 'MB', 'GB', 'TB', 'PB'],
number = Math.floor(Math.log(bytes) / Math.log(1024));
var value = (bytes / Math.pow(1000, Math.floor(number)));
if (!isFinite(value)) {
value = 0;
precision = 0;
}
if (!isFinite(number)) {
units = 'bytes';
} else {
units = units[number];
}
return value.toFixed(precision) + ' ' + units;
}
})
.controller('relayDataController', ['$scope', '$rootScope', '$http', '$q', '$compile', '$timeout', function($scope, $rootScope, $http, $q, $compile, $timeout) {
$scope.totals = {
bytesProxied: 0,
goMaxProcs: 0,
kbps10s1m5m15m30m60m: [0, 0, 0, 0, 0, 0],
numActiveSessions: 0,
numConnections: 0,
numPendingSessionKeys: 0,
numProxies: 0,
uptimeSeconds: 0,
};
$scope.map = new google.maps.Map(document.getElementById('map'), {
zoom: 1,
mapTypeId: google.maps.MapTypeId.ROADMAP
});
$scope.mapBounds = new google.maps.LatLngBounds();
$scope.tooltipTemplate = $('#infoTemplate').html();
$scope.usedLocations = {};
$scope.sortType = 'status.numActiveSessions || -1';
$scope.sortReverse = true;
$http.get("/endpoint").then(function(response) {
$scope.relays = response.data.relays;
var promises = [];
angular.forEach($scope.relays, function(relay) {
relay.uri = constructURI(relay.url);
relay.address = relay.url.split('/')[2];
addMarkerToMap(relay);
promises.push(getRelayStatus(relay));
});
// Can only add circles once we know the totals for transfers, which means
// we need to resolve all statuses.
$q.all(promises).then(function() {
angular.forEach($scope.relays, function(relay) {
if (relay.status) {
addCircleToMap(relay);
}
});
});
$scope.map.fitBounds($scope.mapBounds);
if ($scope.relays.length == 1) {
$scope.map.setZoom(13);
}
});
function addMarkerToMap(relay) {
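// Place a marker for the relay and attach an info window compiled from the tooltip template.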
var loc = relay.location.latitude + "," + relay.location.longitude;
// Deal with overlapping markers
while (loc in $scope.usedLocations) {
var locParts = loc.split(',');
locParts = [parseFloat(locParts[0]), parseFloat(locParts[1])];
locParts[Math.round(Math.random())] += 0.5 * (Math.random() >= 0.5 ? 1 : -1);
loc = locParts.join(',');
}
$scope.usedLocations[loc] = true;
var locParts = loc.split(',');
relay.marker = new google.maps.Marker({
map: $scope.map,
position: new google.maps.LatLng(locParts[0], locParts[1]),
title: relay.url,
});
var scope = $rootScope.$new(true);
scope.relay = relay;
relay.marker.info = new google.maps.InfoWindow({
content: $compile($scope.tooltipTemplate)(scope)[0],
});
relay.marker.addListener('mouseover', function() {
relay.marker.info.open($scope.map, relay.marker);
});
relay.marker.addListener('mouseout', function() {
relay.marker.info.close();
});
$scope.mapBounds.extend(relay.marker.position);
}
function addCircleToMap(relay) {
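// Draw a circle whose radius is proportional to this relay's share of the total proxied bytes.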
relay.marker.circle = new google.maps.Circle({
strokeColor: '#FF0000',
strokeOpacity: 0.8,
strokeWeight: 2,
fillColor: '#FF0000',
fillOpacity: 0.35,
map: $scope.map,
center: relay.marker.position,
radius: ((relay.status.bytesProxied * 100) / $scope.totals.bytesProxied) * 10000
});
}
function getRelayStatus(relay) {
// Normal timeout doesn't deal with relays which accept the TCP connection
// but don't respond (some firewalls do that), so deal with it this way.
var timeoutRequest = $q.defer();
var resolveStatus = $q.defer();
$http.get("http://" + relay.uri.hostname + (relay.uri.args.statusAddr || ":22070") + "/status", { timeout: timeoutRequest.promise }).then(function (response) {
relay.status = response.data;
resolveStatus.resolve();
angular.forEach($scope.totals, function(value, key) {
if (typeof $scope.totals[key] == 'number') {
$scope.totals[key] += response.data[key];
} else if (typeof $scope.totals[key] == 'object' && $scope.totals[key] instanceof Array) {
angular.forEach($scope.totals[key], function(value, index) {
$scope.totals[key][index] += response.data[key][index];
});
}
});
}, function() {
relay.status = null;
resolveStatus.resolve();
});
$timeout(function() {
timeoutRequest.resolve();
}, 5000);
return resolveStatus.promise;
}
function constructURI(url) {
var uri = document.createElement('a');
// Hack: the anchor element only parses http(s) URLs, so temporarily swap the scheme
uri.href = url.replace('relay://', 'http://');
// Convert query string to object
uri.args = {};
angular.forEach(uri.search.replace(/^\?/, '').split('&'), function(query) {
var split = query.split('=');
uri.args[split[0]] = split[1];
});
return uri;
}
}]);
</script>
<script type="text/template" id="infoTemplate">
<div>
<p><b>{{ relay.uri.hostname }}</b> <span ng-if="relay.status.options['provided-by']">provided by <u>{{ relay.status.options['provided-by'] }}</u></span></p>
<div ng-if="relay.status">
<span ng-if="relay.status.startTime">Start time: {{ relay.status.startTime | date:"medium" }}</br></span>
<span ng-if="relay.status.bytesProxied != undefined">Proxied: {{ relay.status.bytesProxied | bytes }}</br></span>
<span ng-if="relay.status.numActiveSessions != undefined">Sessions: {{ relay.status.numActiveSessions }}</br></span>
<span ng-if="relay.status.numConnections != undefined">Clients: {{ relay.status.numConnections }}</br></span>
<span ng-if="relay.status.options.pools">Pools: {{ relay.status.options.pools.join(', ') }}</br></span>
<span ng-if="relay.status.options['global-rate'] != undefined">
<span ng-if="relay.status.options['global-rate'] > 0">Global rate limit: {{ relay.status.options['global-rate'] | bytes }}/s</span>
<span ng-if="relay.status.options['global-rate'] == 0">Global rate limit: unlimited</span>
<br/>
</span>
<span ng-if="relay.status.options['per-session-rate'] != undefined">
<span ng-if="relay.status.options['per-session-rate'] > 0">Session rate limit: {{ relay.status.options['per-session-rate'] | bytes }}/s</span>
<span ng-if="relay.status.options['per-session-rate'] == 0">Session rate limit: unlimited</span>
<br/>
</span>
</div>
<div ng-if="!relay.status">
Data unavailable.
</div>
</div>
</script>
</html>

cmd/strelaypoolsrv/main.go

@@ -0,0 +1,536 @@
// Copyright (C) 2015 Audrius Butkevicius and Contributors (see the CONTRIBUTORS file).
//go:generate go run genassets.go gui auto/gui.go
package main
import (
"bytes"
"compress/gzip"
"crypto/tls"
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"math/rand"
"mime"
"net"
"net/http"
"net/url"
"path/filepath"
"strings"
"time"
"github.com/golang/groupcache/lru"
"github.com/juju/ratelimit"
"github.com/oschwald/geoip2-golang"
"github.com/syncthing/syncthing/cmd/strelaypoolsrv/auto"
"github.com/syncthing/syncthing/lib/relay/client"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syncthing/syncthing/lib/tlsutil"
)
type location struct {
Latitude float64 `json:"latitude"`
Longitude float64 `json:"longitude"`
}
type relay struct {
URL string `json:"url"`
Location location `json:"location"`
uri *url.URL
}
func (r relay) String() string {
return r.URL
}
type request struct {
relay relay
uri *url.URL
result chan result
}
type result struct {
err error
eviction time.Duration
}
var (
testCert tls.Certificate
listen = ":80"
dir string
evictionTime = time.Hour
debug bool
getLRUSize = 10 << 10
getLimitBurst int64 = 10
getLimitAvg = 1
postLRUSize = 1 << 10
postLimitBurst int64 = 2
postLimitAvg = 1
getLimit time.Duration
postLimit time.Duration
permRelaysFile string
ipHeader string
geoipPath string
getMut = sync.NewRWMutex()
getLRUCache *lru.Cache
postMut = sync.NewRWMutex()
postLRUCache *lru.Cache
requests = make(chan request, 10)
mut = sync.NewRWMutex()
knownRelays = make([]relay, 0)
permanentRelays = make([]relay, 0)
evictionTimers = make(map[string]*time.Timer)
)
func main() {
flag.StringVar(&listen, "listen", listen, "Listen address")
flag.StringVar(&dir, "keys", dir, "Directory where http-cert.pem and http-key.pem is stored for TLS listening")
flag.BoolVar(&debug, "debug", debug, "Enable debug output")
flag.DurationVar(&evictionTime, "eviction", evictionTime, "After how long the relay is evicted")
flag.IntVar(&getLRUSize, "get-limit-cache", getLRUSize, "Get request limiter cache size")
flag.IntVar(&getLimitAvg, "get-limit-avg", 2, "Allowed average get request rate, per 10 s")
flag.Int64Var(&getLimitBurst, "get-limit-burst", getLimitBurst, "Allowed burst get requests")
flag.IntVar(&postLRUSize, "post-limit-cache", postLRUSize, "Post request limiter cache size")
flag.IntVar(&postLimitAvg, "post-limit-avg", 2, "Allowed average post request rate, per minute")
flag.Int64Var(&postLimitBurst, "post-limit-burst", postLimitBurst, "Allowed burst post requests")
flag.StringVar(&permRelaysFile, "perm-relays", "", "Path to list of permanent relays")
flag.StringVar(&ipHeader, "ip-header", "", "Name of the header which holds the client's ip:port. Only meaningful when running behind a reverse proxy.")
flag.StringVar(&geoipPath, "geoip", "GeoLite2-City.mmdb", "Path to GeoLite2-City database")
flag.Parse()
getLimit = 10 * time.Second / time.Duration(getLimitAvg)
postLimit = time.Minute / time.Duration(postLimitAvg)
getLRUCache = lru.New(getLRUSize)
postLRUCache = lru.New(postLRUSize)
var listener net.Listener
var err error
if permRelaysFile != "" {
loadPermanentRelays(permRelaysFile)
}
testCert = createTestCertificate()
go requestProcessor()
if dir != "" {
if debug {
log.Println("Starting TLS listener on", listen)
}
certFile, keyFile := filepath.Join(dir, "http-cert.pem"), filepath.Join(dir, "http-key.pem")
var cert tls.Certificate
cert, err = tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Fatalln("Failed to load HTTP X509 key pair:", err)
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
MinVersion: tls.VersionTLS10, // No SSLv3
CipherSuites: []uint16{
// No RC4
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
},
}
listener, err = tls.Listen("tcp", listen, tlsCfg)
} else {
if debug {
log.Println("Starting plain listener on", listen)
}
listener, err = net.Listen("tcp", listen)
}
if err != nil {
log.Fatalln("listen:", err)
}
handler := http.NewServeMux()
handler.HandleFunc("/", handleAssets)
handler.HandleFunc("/endpoint", handleRequest)
srv := http.Server{
Handler: handler,
ReadTimeout: 10 * time.Second,
}
err = srv.Serve(listener)
if err != nil {
log.Fatalln("serve:", err)
}
}
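// handleAssets serves the embedded GUI assets. The assets are stored gzip
// compressed and are sent as-is to clients that accept gzip, otherwise they
// are decompressed on the fly.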
func handleAssets(w http.ResponseWriter, r *http.Request) {
assets := auto.Assets()
path := r.URL.Path[1:]
if path == "" {
path = "index.html"
}
bs, ok := assets[path]
if !ok {
w.WriteHeader(http.StatusNotFound)
return
}
mtype := mimeTypeForFile(path)
if len(mtype) != 0 {
w.Header().Set("Content-Type", mtype)
}
if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
w.Header().Set("Content-Encoding", "gzip")
} else {
// Decompress for clients that did not send an Accept-Encoding: gzip header
var gr *gzip.Reader
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
gr.Close()
}
w.Header().Set("Content-Length", fmt.Sprintf("%d", len(bs)))
w.Write(bs)
}
func mimeTypeForFile(file string) string {
// We use a built-in table of the common types since the system
// TypeByExtension might be unreliable. But if we don't know, we delegate
// to the system.
ext := filepath.Ext(file)
switch ext {
case ".htm", ".html":
return "text/html"
case ".css":
return "text/css"
case ".js":
return "application/javascript"
case ".json":
return "application/json"
case ".png":
return "image/png"
case ".ttf":
return "application/x-font-ttf"
case ".woff":
return "application/x-font-woff"
case ".svg":
return "image/svg+xml"
default:
return mime.TypeByExtension(ext)
}
}
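// handleRequest applies per-method rate limiting and dispatches GET (list
// relays) and POST (announce a relay) requests for the pool endpoint.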
func handleRequest(w http.ResponseWriter, r *http.Request) {
if ipHeader != "" {
r.RemoteAddr = r.Header.Get(ipHeader)
}
w.Header().Set("Access-Control-Allow-Origin", "*")
switch r.Method {
case "GET":
if limit(r.RemoteAddr, getLRUCache, getMut, getLimit, int64(getLimitBurst)) {
w.WriteHeader(429)
return
}
handleGetRequest(w, r)
case "POST":
if limit(r.RemoteAddr, postLRUCache, postMut, postLimit, int64(postLimitBurst)) {
w.WriteHeader(429)
return
}
handlePostRequest(w, r)
default:
if debug {
log.Println("Unhandled HTTP method", r.Method)
}
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
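// handleGetRequest returns the permanent and currently announced relays in random order.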
func handleGetRequest(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
mut.RLock()
relays := append(permanentRelays, knownRelays...)
mut.RUnlock()
// Shuffle
for i := range relays {
j := rand.Intn(i + 1)
relays[i], relays[j] = relays[j], relays[i]
}
json.NewEncoder(w).Encode(map[string][]relay{
"relays": relays,
})
}
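// handlePostRequest validates a relay announcement: the advertised host must
// match the client's IP (or be empty, in which case the client's IP is used)
// and must not clash with a permanent relay, before it is queued for testing.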
func handlePostRequest(w http.ResponseWriter, r *http.Request) {
var newRelay relay
err := json.NewDecoder(r.Body).Decode(&newRelay)
r.Body.Close()
if err != nil {
if debug {
log.Println("Failed to parse payload")
}
http.Error(w, err.Error(), 500)
return
}
uri, err := url.Parse(newRelay.URL)
if err != nil {
if debug {
log.Println("Failed to parse URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
return
}
host, port, err := net.SplitHostPort(uri.Host)
if err != nil {
if debug {
log.Println("Failed to split URI", newRelay.URL)
}
http.Error(w, err.Error(), 500)
return
}
// Get the IP address of the client
rhost, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
if debug {
log.Println("Failed to split remote address", r.RemoteAddr)
}
http.Error(w, err.Error(), 500)
return
}
// The advertised URL has no host part; fall back to the connecting client's IP address.
if host == "" {
uri.Host = net.JoinHostPort(rhost, port)
newRelay.URL = uri.String()
} else if host != rhost {
if debug {
log.Println("IP address advertised does not match client IP address", r.RemoteAddr, uri)
}
http.Error(w, "IP address does not match client IP", http.StatusUnauthorized)
return
}
newRelay.uri = uri
newRelay.Location = getLocation(uri.Host)
for _, current := range permanentRelays {
if current.uri.Host == newRelay.uri.Host {
if debug {
log.Println("Asked to add a relay", newRelay, "which exists in permanent list")
}
http.Error(w, "Invalid request", 500)
return
}
}
reschan := make(chan result)
select {
case requests <- request{newRelay, uri, reschan}:
result := <-reschan
if result.err != nil {
http.Error(w, result.err.Error(), 500)
return
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(map[string]time.Duration{
"evictionIn": result.eviction,
})
default:
if debug {
log.Println("Dropping request")
}
w.WriteHeader(429)
}
}
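// requestProcessor serially tests announced relays and, on success, (re)adds
// them to the known relay list and (re)arms their eviction timers.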
func requestProcessor() {
for request := range requests {
if debug {
log.Println("Request for", request.relay)
}
if !client.TestRelay(request.uri, []tls.Certificate{testCert}, time.Second, 2*time.Second, 3) {
if debug {
log.Println("Test for relay", request.relay, "failed")
}
request.result <- result{fmt.Errorf("test failed"), 0}
continue
}
mut.Lock()
timer, ok := evictionTimers[request.relay.uri.Host]
if ok {
if debug {
log.Println("Stopping existing timer for", request.relay)
}
timer.Stop()
}
for i, current := range knownRelays {
if current.uri.Host == request.relay.uri.Host {
if debug {
log.Println("Relay", request.relay, "already exists")
}
// Evict the old entry anyway, as configuration might have changed.
last := len(knownRelays) - 1
knownRelays[i] = knownRelays[last]
knownRelays = knownRelays[:last]
goto found
}
}
if debug {
log.Println("Adding new relay", request.relay)
}
found:
knownRelays = append(knownRelays, request.relay)
evictionTimers[request.relay.uri.Host] = time.AfterFunc(evictionTime, evict(request.relay))
mut.Unlock()
request.result <- result{nil, evictionTime}
}
}
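// evict returns the timer callback that removes the relay from the known list
// once the eviction time has passed without a renewed announcement.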
func evict(relay relay) func() {
return func() {
mut.Lock()
defer mut.Unlock()
if debug {
log.Println("Evicting", relay)
}
for i, current := range knownRelays {
if current.uri.Host == relay.uri.Host {
if debug {
log.Println("Evicted", relay)
}
last := len(knownRelays) - 1
knownRelays[i] = knownRelays[last]
knownRelays = knownRelays[:last]
}
}
delete(evictionTimers, relay.uri.Host)
}
}
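// limit reports whether the host part of addr has exhausted its token bucket.
// Hosts seen for the first time get a fresh bucket and are allowed through.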
func limit(addr string, cache *lru.Cache, lock sync.RWMutex, rate time.Duration, burst int64) bool {
host, _, err := net.SplitHostPort(addr)
if err != nil {
return false
}
lock.RLock()
bkt, ok := cache.Get(host)
lock.RUnlock()
if ok {
bkt := bkt.(*ratelimit.Bucket)
if bkt.TakeAvailable(1) != 1 {
// Rate limit
return true
}
} else {
lock.Lock()
cache.Add(host, ratelimit.NewBucket(rate, burst))
lock.Unlock()
}
return false
}
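// loadPermanentRelays reads one relay URL per line from the given file into the permanent relay list.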
func loadPermanentRelays(file string) {
content, err := ioutil.ReadFile(file)
if err != nil {
log.Fatal(err)
}
for _, line := range strings.Split(string(content), "\n") {
if len(line) == 0 {
continue
}
uri, err := url.Parse(line)
if err != nil {
if debug {
log.Println("Skipping permanent relay", line, "due to parse error", err)
}
continue
}
permanentRelays = append(permanentRelays, relay{
URL: line,
Location: getLocation(uri.Host),
uri: uri,
})
if debug {
log.Println("Adding permanent relay", line)
}
}
}
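// createTestCertificate generates a throwaway certificate used only for test connections to announced relays.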
func createTestCertificate() tls.Certificate {
tmpDir, err := ioutil.TempDir("", "relaypoolsrv")
if err != nil {
log.Fatal(err)
}
certFile, keyFile := filepath.Join(tmpDir, "cert.pem"), filepath.Join(tmpDir, "key.pem")
cert, err := tlsutil.NewCertificate(certFile, keyFile, "relaypoolsrv", 3072)
if err != nil {
log.Fatalln("Failed to create test X509 key pair:", err)
}
return cert
}
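// getLocation looks up the approximate coordinates of the host in the
// GeoLite2 database; on any failure the zero location is returned.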
func getLocation(host string) location {
db, err := geoip2.Open(geoipPath)
if err != nil {
return location{}
}
defer db.Close()
addr, err := net.ResolveTCPAddr("tcp", host)
if err != nil {
return location{}
}
city, err := db.City(addr.IP)
if err != nil {
return location{}
}
return location{
Latitude: city.Location.Latitude,
Longitude: city.Location.Longitude,
}
}

cmd/strelaysrv/LICENSE

@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 The Syncthing Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,12 +1,12 @@
relaysrv
========
strelaysrv
==========
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/relaysrv.svg?style=flat-square)](http://build.syncthing.net/job/relaysrv/lastBuild/)
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/strelaysrv.svg?style=flat-square)](http://build.syncthing.net/job/strelaysrv/lastBuild/)
This is the relay server for the `syncthing` project.
To get it, run `go get github.com/syncthing/relaysrv` or download the
[latest build](http://build.syncthing.net/job/relaysrv/lastSuccessfulBuild/artifact/)
To get it, run `go get github.com/syncthing/strelaysrv` or download the
[latest build](http://build.syncthing.net/job/strelaysrv/lastSuccessfulBuild/artifact/)
from the build server.
:exclamation:Warnings:exclamation: - Read or regret
@@ -16,13 +16,13 @@ By default, all relay servers will join the default public relay pool, which mea
If you wish to disable this behaviour, please specify `-pools=""` argument.
Please note that `relaysrv` is only usable by `syncthing` **version v0.12 and onwards**.
Please note that `strelaysrv` is only usable by `syncthing` **version v0.12 and onwards**.
To run `relaysrv` you need to have port 22067 available to the internet, which means you might need to allow it through your firewall if you **have a public IP, or setup a port-forwarding** (22067 to 22067) if you are behind a router.
To run `strelaysrv` you need to have port 22067 available to the internet, which means you might need to allow it through your firewall if you **have a public IP, or setup a port-forwarding** (22067 to 22067) if you are behind a router.
Furthermore, **by default relaysrv will also expose a /status HTTP endpoint on port 22070**, which is used by the pool servers to peek at metrics of the relaysrv, such as what are the current transfer rates, how many clients are connected, etc, etc. If you wish this information to be available, similarly you might want to allow it through your firewall, or port-forward it (22070 to 22070) on your NAT device.
Furthermore, **by default strelaysrv will also expose a /status HTTP endpoint on port 22070**, which is used by the pool servers to peek at metrics of the strelaysrv, such as what are the current transfer rates, how many clients are connected, etc, etc. If you wish this information to be available, similarly you might want to allow it through your firewall, or port-forward it (22070 to 22070) on your NAT device.
This is **not mandatory** for the relaysrv to function, and is used only to gather metrics and present them in the overview page of the pool server, displaying stats about the specific relay.
This is **not mandatory** for the strelaysrv to function, and is used only to gather metrics and present them in the overview page of the pool server, displaying stats about the specific relay.
At the time of writing, the endpoint output looks as follows:
@@ -62,31 +62,31 @@ At the point of writing the endpoint output looks as follows:
}
```
If you wish to disable the /status endpoint, provide `-status-srv=""` as one of the arguments when starting the relaysrv.
If you wish to disable the /status endpoint, provide `-status-srv=""` as one of the arguments when starting the strelaysrv.
Running for public use
----
Make sure you have a public IP with port 22067 open, or make sure you have port-forwarding (22067 to 22067) if you are behind a router.
Run the `relaysrv` with no arguments (or `-debug` if you want more output), and that should be enough for the server to join the public relay pool.
Run the `strelaysrv` with no arguments (or `-debug` if you want more output), and that should be enough for the server to join the public relay pool.
You should see a message saying:
```
2015/09/21 22:45:46 pool.go:60: Joined https://relays.syncthing.net/endpoint rejoining in 48m0s
```
See `relaysrv -help` for other options, such as rate limits, timeout intervals, etc.
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Running for private use
-----
Once you've started the `relaysrv`, it will generate a key pair and print a URI:
Once you've started the `strelaysrv`, it will generate a key pair and print a URI:
```bash
relay://:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
```
This URI contains the partial address of the relay server, as well as its options, which in the future may be taken into account when choosing the most suitable relay out of multiple available ones.
Because the `-listen` option was not used, the `relaysrv` does not know its external IP, so you should replace the host part of the URI with your public IP address on which the `relaysrv` will be available:
Because the `-listen` option was not used, the `strelaysrv` does not know its external IP, so you should replace the host part of the URI with your public IP address on which the `strelaysrv` will be available:
```bash
relay://123.123.123.123:22067/?id=EZQOIDM-6DDD4ZI-DJ65NSM-4OQWRAT-EIKSMJO-OZ552BO-WQZEGYY-STS5RQM&pingInterval=1m0s&networkTimeout=2m0s&sessionLimitBps=0&globalLimitBps=0&statusAddr=:22070
@@ -100,7 +100,7 @@ relay://123.123.123.123:22067
This URI can then be used in `syncthing` as one of the relay servers.
See `relaysrv -help` for other options, such as rate limits, timeout intervals, etc.
See `strelaysrv -help` for other options, such as rate limits, timeout intervals, etc.
Other items available in this repo
----


@@ -3,10 +3,10 @@ Description=Syncthing relay server
After=network.target
[Service]
User=syncthing-relaysrv
Group=syncthing-relaysrv
ExecStart=/usr/bin/relaysrv
WorkingDirectory=/var/lib/syncthing-relaysrv
User=strelaysrv
Group=strelaysrv
ExecStart=/usr/bin/strelaysrv
WorkingDirectory=/var/lib/strelaysrv
PrivateTmp=true
ProtectSystem=full


@@ -42,7 +42,7 @@ func init() {
BuildDate = time.Unix(int64(stamp), 0)
date := BuildDate.UTC().Format("2006-01-02 15:04:05 MST")
LongVersion = fmt.Sprintf(`relaysrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
LongVersion = fmt.Sprintf(`strelaysrv %s (%s %s-%s) %s@%s %s`, Version, runtime.Version(), runtime.GOOS, runtime.GOARCH, BuildUser, BuildHost, date)
}
var (
@@ -121,7 +121,7 @@ func main() {
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
log.Println("Failed to load keypair. Generating one, this might take a while...")
cert, err = tlsutil.NewCertificate(certFile, keyFile, "relaysrv", 3072)
cert, err = tlsutil.NewCertificate(certFile, keyFile, "strelaysrv", 3072)
if err != nil {
log.Fatalln("Failed to generate X509 key pair:", err)
}


@@ -17,6 +17,7 @@ import (
"path/filepath"
"reflect"
"runtime"
"runtime/pprof"
"sort"
"strconv"
"strings"
@@ -41,8 +42,7 @@ import (
)
var (
configInSync = true
startTime = time.Now()
startTime = time.Now()
)
type apiService struct {
@@ -91,8 +91,8 @@ type modelIntf interface {
ConnectedTo(deviceID protocol.DeviceID) bool
GlobalSize(folder string) (nfiles, deleted int, bytes int64)
LocalSize(folder string) (nfiles, deleted int, bytes int64)
CurrentLocalVersion(folder string) (int64, bool)
RemoteLocalVersion(folder string) (int64, bool)
CurrentSequence(folder string) (int64, bool)
RemoteSequence(folder string) (int64, bool)
State(folder string) (string, time.Time, error)
}
@@ -100,12 +100,13 @@ type configIntf interface {
GUI() config.GUIConfiguration
Raw() config.Configuration
Options() config.OptionsConfiguration
Replace(cfg config.Configuration) config.CommitResponse
Replace(cfg config.Configuration) error
Subscribe(c config.Committer)
Folders() map[string]config.FolderConfiguration
Devices() map[protocol.DeviceID]config.DeviceConfiguration
Save() error
ListenAddresses() []string
RequiresRestart() bool
}
type connectionsIntf interface {
@@ -268,8 +269,12 @@ func (s *apiService) Serve() {
postRestMux.HandleFunc("/rest/system/debug", s.postSystemDebug) // [enable] [disable]
// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", s.getPeerCompletion)
getRestMux.HandleFunc("/rest/debug/httpmetrics", s.getSystemHTTPMetrics)
debugMux := http.NewServeMux()
debugMux.HandleFunc("/rest/debug/peerCompletion", s.getPeerCompletion)
debugMux.HandleFunc("/rest/debug/httpmetrics", s.getSystemHTTPMetrics)
debugMux.HandleFunc("/rest/debug/cpuprof", s.getCPUProf) // duration
debugMux.HandleFunc("/rest/debug/heapprof", s.getHeapProf)
getRestMux.Handle("/rest/debug/", s.whenDebugging(debugMux))
// A handler that splits requests between the two above and disables
// caching
@@ -364,6 +369,9 @@ func (s *apiService) VerifyConfiguration(from, to config.Configuration) error {
}
func (s *apiService) CommitConfiguration(from, to config.Configuration) bool {
// No action required when this changes, so mask the fact that it changed at all.
from.GUI.Debugging = to.GUI.Debugging
if to.GUI == from.GUI {
return true
}
@@ -487,6 +495,18 @@ func withDetailsMiddleware(id protocol.DeviceID, h http.Handler) http.Handler {
})
}
func (s *apiService) whenDebugging(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if s.cfg.GUI().Debugging {
h.ServeHTTP(w, r)
return
}
http.Error(w, "Debugging disabled", http.StatusBadRequest)
return
})
}
func (s *apiService) restPing(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]string{"ping": "pong"})
}
@@ -596,10 +616,11 @@ func folderSummary(cfg configIntf, m modelIntf, folder string) map[string]interf
res["error"] = err.Error()
}
lv, _ := m.CurrentLocalVersion(folder)
rv, _ := m.RemoteLocalVersion(folder)
ourSeq, _ := m.CurrentSequence(folder)
remoteSeq, _ := m.RemoteSequence(folder)
res["version"] = lv + rv
res["version"] = ourSeq + remoteSeq // legacy
res["sequence"] = ourSeq + remoteSeq // new name
ignorePatterns, _, _ := m.GetIgnores(folder)
res["ignorePatterns"] = false
@@ -722,13 +743,19 @@ func (s *apiService) postSystemConfig(w http.ResponseWriter, r *http.Request) {
// Activate and save
resp := s.cfg.Replace(to)
configInSync = !resp.RequiresRestart
s.cfg.Save()
if err := s.cfg.Replace(to); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if err := s.cfg.Save(); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (s *apiService) getSystemConfigInsync(w http.ResponseWriter, r *http.Request) {
sendJSON(w, map[string]bool{"configInSync": configInSync})
sendJSON(w, map[string]bool{"configInSync": !s.cfg.RequiresRestart()})
}
func (s *apiService) postSystemRestart(w http.ResponseWriter, r *http.Request) {
@@ -1159,6 +1186,32 @@ func (s *apiService) getSystemBrowse(w http.ResponseWriter, r *http.Request) {
sendJSON(w, ret)
}
func (s *apiService) getCPUProf(w http.ResponseWriter, r *http.Request) {
duration, err := time.ParseDuration(r.FormValue("duration"))
if err != nil {
duration = 30 * time.Second
}
filename := fmt.Sprintf("syncthing-cpu-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)
pprof.StartCPUProfile(w)
time.Sleep(duration)
pprof.StopCPUProfile()
}
func (s *apiService) getHeapProf(w http.ResponseWriter, r *http.Request) {
filename := fmt.Sprintf("syncthing-heap-%s-%s-%s-%s.pprof", runtime.GOOS, runtime.GOARCH, Version, time.Now().Format("150405")) // hhmmss
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename="+filename)
runtime.GC()
pprof.WriteHeapProfile(w)
}
func (s *apiService) toNeedSlice(fs []db.FileInfoTruncated) []jsonDBFileInfo {
res := make([]jsonDBFileInfo, len(fs))
for i, f := range fs {
@@ -1173,13 +1226,17 @@ type jsonFileInfo protocol.FileInfo
func (f jsonFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": protocol.FileInfo(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"numBlocks": len(f.Blocks),
"version": jsonVersionVector(f.Version),
"name": f.Name,
"type": f.Type,
"size": f.Size,
"permissions": fmt.Sprintf("%#o", f.Permissions),
"deleted": f.Deleted,
"invalid": f.Invalid,
"noPermissions": f.NoPermissions,
"modified": protocol.FileInfo(f).ModTime(),
"sequence": f.Sequence,
"numBlocks": len(f.Blocks),
"version": jsonVersionVector(f.Version),
})
}
@@ -1187,20 +1244,23 @@ type jsonDBFileInfo db.FileInfoTruncated
func (f jsonDBFileInfo) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"name": f.Name,
"size": db.FileInfoTruncated(f).Size(),
"flags": fmt.Sprintf("%#o", f.Flags),
"modified": time.Unix(f.Modified, 0),
"localVersion": f.LocalVersion,
"version": jsonVersionVector(f.Version),
"name": f.Name,
"type": f.Type,
"size": f.Size,
"permissions": fmt.Sprintf("%#o", f.Permissions),
"deleted": f.Deleted,
"invalid": f.Invalid,
"noPermissions": f.NoPermissions,
"modified": db.FileInfoTruncated(f).ModTime(),
"sequence": f.Sequence,
})
}
type jsonVersionVector protocol.Vector
func (v jsonVersionVector) MarshalJSON() ([]byte, error) {
res := make([]string, len(v))
for i, c := range v {
res := make([]string, len(v.Counters))
for i, c := range v.Counters {
res[i] = fmt.Sprintf("%v:%d", c.ID, c.Value)
}
return json.Marshal(res)


@@ -41,6 +41,13 @@ func csrfMiddleware(unique string, prefix string, cfg config.GUIConfiguration, n
return
}
if strings.HasPrefix(r.URL.Path, "/rest/debug") {
// Debugging functions are only available when explicitly
// enabled, and can be accessed without a CSRF token
next.ServeHTTP(w, r)
return
}
// Allow requests for anything not under the protected path prefix,
// and set a CSRF cookie if there isn't already a valid one.
if !strings.HasPrefix(r.URL.Path, prefix) {


@@ -48,7 +48,7 @@ var locations = map[locationEnum]string{
locKeyFile: "${config}/key.pem",
locHTTPSCertFile: "${config}/https-cert.pem",
locHTTPSKeyFile: "${config}/https-key.pem",
locDatabase: "${config}/index-v0.13.0.db",
locDatabase: "${config}/index-v0.14.0.db",
locLogFile: "${config}/syncthing.log", // -logfile on Windows
locCsrfTokens: "${config}/csrftokens.txt",
locPanicLog: "${config}/panic-${timestamp}.log",


@@ -16,7 +16,6 @@ import (
"log"
"net"
"net/http"
_ "net/http/pprof"
"net/url"
"os"
"os/signal"
@@ -51,7 +50,7 @@ import (
var (
Version = "unknown-dev"
Codename = "Copper Cockroach"
Codename = "Dysprosium Dragonfly"
BuildStamp = "0"
BuildDate time.Time
BuildHost = "unknown"
@@ -539,8 +538,9 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
errors := logger.NewRecorder(l, logger.LevelWarn, maxSystemErrors, 0)
systemLog := logger.NewRecorder(l, logger.LevelDebug, maxSystemLog, initialSystemLog)
// Event subscription for the API; must start early to catch the early events. The LocalChangeDetected
// event might overwhelm the event reciever in some situations so we will not subscribe to it here.
// Event subscription for the API; must start early to catch the early
// events. The LocalChangeDetected event might overwhelm the event
// receiver in some situations so we will not subscribe to it here.
apiSub := events.NewBufferedSubscription(events.Default.Subscribe(events.AllEvents&^events.LocalChangeDetected), 1000)
if len(os.Getenv("GOMAXPROCS")) == 0 {
@@ -663,8 +663,14 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
}
}
if cfg.Raw().OriginalVersion == 15 {
// The config version 15->16 migration is about handling ignores and
// delta indexes and requires that we drop existing indexes that
// have been incorrectly ignore filtered.
ldb.DropDeltaIndexIDs()
}
m := model.NewModel(cfg, myID, myDeviceName(cfg), "syncthing", Version, ldb, protectedFiles)
cfg.Subscribe(m)
if t := os.Getenv("STDEADLOCKTIMEOUT"); len(t) > 0 {
it, err := strconv.Atoi(t)
@@ -681,17 +687,9 @@ func syncthingMain(runtimeOptions RuntimeOptions) {
}
}
// Clear out old indexes for other devices. Otherwise we'll start up and
// start needing a bunch of files which are nowhere to be found. This
// needs to be changed when we correctly do persistent indexes.
// Add and start folders
for _, folderCfg := range cfg.Folders() {
m.AddFolder(folderCfg)
for _, device := range folderCfg.DeviceIDs() {
if device == myID {
continue
}
m.Index(device, folderCfg.ID, nil, 0, nil)
}
m.StartFolder(folderCfg.ID)
}
@@ -1129,6 +1127,8 @@ func cleanConfigDirectory() {
"panic-*.log": 7 * 24 * time.Hour, // keep panic logs for a week
"audit-*.log": 7 * 24 * time.Hour, // keep audit logs for a week
"index": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.11.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index-v0.13.0.db": 14 * 24 * time.Hour, // keep old index format for two weeks
"index*.converted": 14 * 24 * time.Hour, // keep old converted indexes for two weeks
"config.xml.v*": 30 * 24 * time.Hour, // old config versions for a month
"*.idx.gz": 30 * 24 * time.Hour, // these should for sure no longer exist


@@ -31,8 +31,8 @@ func (c *mockedConfig) Options() config.OptionsConfiguration {
return config.OptionsConfiguration{}
}
func (c *mockedConfig) Replace(cfg config.Configuration) config.CommitResponse {
return config.CommitResponse{}
func (c *mockedConfig) Replace(cfg config.Configuration) error {
return nil
}
func (c *mockedConfig) Subscribe(cm config.Committer) {}
@@ -48,3 +48,7 @@ func (c *mockedConfig) Devices() map[protocol.DeviceID]config.DeviceConfiguratio
func (c *mockedConfig) Save() error {
return nil
}
func (c *mockedConfig) RequiresRestart() bool {
return false
}


@@ -103,11 +103,11 @@ func (m *mockedModel) LocalSize(folder string) (nfiles, deleted int, bytes int64
return 0, 0, 0
}
func (m *mockedModel) CurrentLocalVersion(folder string) (int64, bool) {
func (m *mockedModel) CurrentSequence(folder string) (int64, bool) {
return 0, false
}
func (m *mockedModel) RemoteLocalVersion(folder string) (int64, bool) {
func (m *mockedModel) RemoteSequence(folder string) (int64, bool) {
return 0, false
}


@@ -10,6 +10,7 @@ import (
"time"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/thejerf/suture"
)
@@ -59,7 +60,7 @@ func (c *folderSummaryService) Stop() {
// listenForUpdates subscribes to the event bus and makes note of folders that
// need their data recalculated.
func (c *folderSummaryService) listenForUpdates() {
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress)
sub := events.Default.Subscribe(events.LocalIndexUpdated | events.RemoteIndexUpdated | events.StateChanged | events.RemoteDownloadProgress | events.DeviceConnected)
defer events.Default.Unsubscribe(sub)
for {
@@ -67,8 +68,31 @@ func (c *folderSummaryService) listenForUpdates() {
select {
case ev := <-sub.C():
// Whenever the local or remote index is updated for a given
// folder we make a note of it.
if ev.Type == events.DeviceConnected {
// When a device connects we schedule a refresh of all
// folders shared with that device.
data := ev.Data.(map[string]string)
deviceID, _ := protocol.DeviceIDFromString(data["id"])
c.foldersMut.Lock()
nextFolder:
for _, folder := range c.cfg.Folders() {
for _, dev := range folder.Devices {
if dev.DeviceID == deviceID {
c.folders[folder.ID] = struct{}{}
continue nextFolder
}
}
}
c.foldersMut.Unlock()
continue
}
// The other events all have a "folder" attribute that they
// affect. Whenever the local or remote index is updated for a
// given folder we make a note of it.
data := ev.Data.(map[string]interface{})
folder := data["folder"].(string)


@@ -8,13 +8,13 @@
"Add": "Προσθήκη",
"Add Device": "Προσθήκη συσκευής",
"Add Folder": "Προσθήκη φακέλου",
"Add Remote Device": "Add Remote Device",
"Add Remote Device": "Προσθήκη Απομακρυσμένης Συσκευής",
"Add new folder?": "Προσθήκη νέου φακέλου;",
"Address": "Διεύθυνση",
"Addresses": "Διευθύνσεις",
"Advanced": "Προχωρημένες",
"Advanced Configuration": "Προχωρημένες ρυθμίσεις",
"Advanced settings": "Advanced settings",
"Advanced settings": "Προχωρημένες ρυθμίσεις",
"All Data": "Όλα τα δεδομένα",
"Allow Anonymous Usage Reporting?": "Να επιτρέπεται η αποστολή ανώνυμων στοιχείων χρήσης;",
"Alphabetic": "Αλφαβητικά",
@@ -32,15 +32,15 @@
"Comment, when used at the start of a line": "Σχόλιο, όταν χρησιμοποιείται στην αρχή μιας γραμμής",
"Compression": "Συμπίεση",
"Connection Error": "Σφάλμα σύνδεσης",
"Connection Type": "Connection Type",
"Connection Type": "Τύπος Σύνδεσης",
"Copied from elsewhere": "Έχει αντιγραφεί από κάπου αλλού",
"Copied from original": "Έχει αντιγραφεί από το πρωτότυπο",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 the following Contributors:",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 οι παρακάτω Συνεισφέροντες:",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 από τους παρακάτω συνεισφορείς:",
"Danger!": "Προσοχή!",
"Delete": "Διαγραφή",
"Deleted": "Διαγραμμένα",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Device \"{{name}}\" ({{device}} at {{address}}) wants to connect. Add new device?",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Η συσκευή \"{{name}}\" ({{device}} στη {{address}}) επιθυμεί να συνδεθεί. Προσθήκη της νέας συσκευής?",
"Device ID": "Ταυτότητα συσκευής",
"Device Identification": "Ταυτότητα συσκευής",
"Device Name": "Όνομα συσκευής",
@@ -98,7 +98,7 @@
"Keep Versions": "Διατήρηση εκδόσεων",
"Largest First": "Το μεγαλύτερο πρώτα",
"Last File Received": "Πιο πρόσφατο αρχείο",
"Last Scan": "Last Scan",
"Last Scan": "Τελευταία Σάρωση",
"Last seen": "Τελευταία φορά συνδεδεμένος",
"Later": "Αργότερα",
"Listeners": "Listeners",
@@ -193,7 +193,7 @@
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Το Syncthing φαίνεται πως είναι απενεργοποιημένο ή υπάρχει πρόβλημα στη σύνδεσή σου στο διαδίκτυο. Προσπαθώ πάλι…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Το Syncthing φαίνεται να αντιμετωπίζει ένα πρόβλημα με την επεξεργασία του αιτήματός σου. Παρακαλούμε, αν το πρόβλημα συνεχίζει, ανανέωσε την σελίδα ή επανεκκίνησε το Syncthing.",
"The Syncthing admin interface is configured to allow remote access without a password.": "Η διεπαφή διαχείρισης του Syncthing είναι ρυθμισμένη να επιτρέπει την πρόσβαση χωρίς κωδικό.",
"The aggregated statistics are publicly available at the URL below.": "The aggregated statistics are publicly available at the URL below.",
"The aggregated statistics are publicly available at the URL below.": "Τα στατιστικά που έχουν συλλεγεί είναι δημόσια διαθέσιμα στη παρακάτω διεύθυνση.",
"The aggregated statistics are publicly available at {%url%}.": "Τα στατιστικά που έχουν συλλεγεί είναι δημόσια διαθέσιμα στο {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Οι ρυθμίσεις έχουν αποθηκευτεί αλλά δεν έχουν ενεργοποιηθεί. Πρέπει να επανεκκινήσεις το Syncthing για να ισχύσουν οι νέες ρυθμίσεις.",
"The device ID cannot be blank.": "Η ταυτότητα της συσκευής δεν μπορεί να είναι κενή",


@@ -32,7 +32,7 @@
"Comment, when used at the start of a line": "Comentario, cuando es utilizado al inicio de una línea.",
"Compression": "Compresión",
"Connection Error": "Error de conexión",
"Connection Type": "Connection Type",
"Connection Type": "Tipo de conexión",
"Copied from elsewhere": "Copiado desde otra parte.",
"Copied from original": "Copiado del original",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 los siguientes contribuidores:",
@@ -40,7 +40,7 @@
"Danger!": "Peligro!",
"Delete": "Suprimir",
"Deleted": "Suprimido",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Device \"{{name}}\" ({{device}} at {{address}}) wants to connect. Add new device?",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Dispositivo \"{{name}}\" ({{device}} en {{address}}) quiere conectar. ¿Añadir nuevo dispositivo?",
"Device ID": "ID del dispositivo",
"Device Identification": "Identificación del dispositivo",
"Device Name": "Nombre del dispositivo",
@@ -56,10 +56,10 @@
"Edit Device": "Editar dispositivo",
"Edit Folder": "Editar repositorio",
"Editing": "Editando",
"Enable NAT traversal": "Enable NAT traversal",
"Enable Relaying": "Enable Relaying",
"Enable NAT traversal": "Habilitar NAT trasversal",
"Enable Relaying": "Habilitar Retransmisión",
"Enable UPnP": "Permitir UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduce las direcciones (\"tcp://ip:port\", \"tcp://host:port\") separadas por comas o \"dynamic\" para ejecutar un descubrimiento automático de la dirección. ",
"Enter ignore patterns, one per line.": "Añadir patrones de exclusión, uno por línea.",
"Error": "Error",
"External File Versioning": "Control de versiones externo",
@@ -72,10 +72,10 @@
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Los archivos están protegidos frente a los cambios realizados en otros dispositivos, peros los cambios realizados en este dispositivo serán envíados al resto del grupo",
"Folder": "Carpeta",
"Folder ID": "ID del repositorio",
"Folder Label": "Folder Label",
"Folder Label": "Etiqueta de Carpeta",
"Folder Master": "Repositorio maestro",
"Folder Path": "Ruta del repositorio",
"Folder Type": "Folder Type",
"Folder Type": "Tipo de Carpeta",
"Folders": "Repositorios",
"GUI": "GUI",
"GUI Authentication Password": "Contraseña de autenticación de la GUI",
@@ -84,7 +84,7 @@
"Generate": "Generar",
"Global Discovery": "Búsqueda en internet",
"Global Discovery Server": "Servidor global de identificación",
"Global Discovery Servers": "Global Discovery Servers",
"Global Discovery Servers": "Servidores globales de identificación",
"Global State": "Estado global",
"Help": "Ayuda",
"Home page": "Pagina de inicio",
@@ -98,15 +98,15 @@
"Keep Versions": "Conservar versiones",
"Largest First": "Más grande primero",
"Last File Received": "Último archivo recibido",
"Last Scan": "Last Scan",
"Last Scan": "Último escaneo",
"Last seen": "Visto por ultima vez",
"Later": "Más tarde",
"Listeners": "Listeners",
"Listeners": "Receptor",
"Local Discovery": "Búsqueda en red local",
"Local State": "Estado local",
"Local State (Total)": "Estado local (total)",
"Major Upgrade": "Actualización mayor",
"Master": "Master",
"Master": "Maestro",
"Maximum Age": "Edad máxima",
"Metadata Only": "Sólo metadatos",
"Minimum Free Disk Space": "Espacio mínimo libre en disco",
@@ -123,7 +123,7 @@
"OK": "OK",
"Off": "Apagado",
"Oldest First": "Antiguo primero",
"Optional descriptive label for the folder. Can be different on each device.": "Optional descriptive label for the folder. Can be different on each device.",
"Optional descriptive label for the folder. Can be different on each device.": "Etiqueta descriptiva opcional para la carpeta. Puede ser diferente en cada dispositivo.",
"Options": "Opciones",
"Out of Sync": "Fuera de sincronización",
"Out of Sync Items": "Ítems no sincronizados",
@@ -134,20 +134,20 @@
"Pause": "Pausa",
"Paused": "En pausa",
"Please consult the release notes before performing a major upgrade.": "Por favor consulta las notas de lanzamiento antes de realizar una actualizacón mayor.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Please set a GUI Authentication User and Password in the Settings dialog.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Por favor, establece un Usuario y Contraseña de Autenticación en la GUI en el diálogo de Configuración",
"Please wait": "Aguarde por favor",
"Preview": "Vista previa",
"Preview Usage Report": "Ver reporte de uso",
"Quick guide to supported patterns": "Guía rápida sobre los patrones soportados",
"RAM Utilization": "Utilización de RAM",
"Random": "Aleatorio",
"Relay Servers": "Relay Servers",
"Relay Servers": "Servidores de Retransmisión",
"Relayed via": "retransmitida vía",
"Relays": "Retransmisores",
"Release Notes": "Notas de lanzamiento",
"Remote Devices": "Dispositivos Remotos",
"Remove": "Eliminar",
"Required identifier for the folder. Must be the same on all cluster devices.": "Required identifier for the folder. Must be the same on all cluster devices.",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identificación requerida para la carpeta. Debe ser la misma en todos los dispositivos del grupo.",
"Rescan": "Reescanear",
"Rescan All": "Reescanear todo",
"Rescan Interval": "Intervalo de reescaneo",
@@ -193,11 +193,11 @@
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing parece estar apagado, o hay un problema con su conexión de Internet. Reintentando...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing parece estar experimentando un problema al procesar su solicitud. Por favor, recargue el navegador o reinicie Syncthing si el problema persiste.",
"The Syncthing admin interface is configured to allow remote access without a password.": "La interfaz administrativa del Syncthing está configurada para permitir acceso remoto sin una contraseña.",
"The aggregated statistics are publicly available at the URL below.": "The aggregated statistics are publicly available at the URL below.",
"The aggregated statistics are publicly available at the URL below.": "Las estadísticas agregadas están disponibles públicamente en la dirección de abajo.",
"The aggregated statistics are publicly available at {%url%}.": "Las estadísticas acumuladas están disponibles públicamente en {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuración ha sido guardada pero no activada.\nSyncthing debe reiniciarse para activar la nueva configuración.",
"The device ID cannot be blank.": "La ID del dispositivo no puede estar en blanco.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "La ID de dispositivo a introducir ser puede encontrar en el menú \"Acciones > Mostrar ID\" en el otro dispositivo. Espacios y guiones son opcionales (ignorados). ",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "La ID del dispositivo a introducir se puede encontrar en la opción de menú \"Edición > Mostrar ID\" en el otro dispositivo. Espacios y guiones son opcionales (ignorados).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "El informe de uso se envía encriptado diariamente. Se utiliza para hacer un seguimiento de plataformas comunes, tamaño de repositorios y versiones de la aplicación. Si el conjunto de datos cambia será notificado mediante este dialogo nuevamente.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "La ID del dispositivo introducida no es válida. Debe ser una cadena de 52 o 56 caracteres consistente en letras y números, con espacios y guiones opcionales.",
@@ -210,7 +210,7 @@
"The following items could not be synchronized.": "Los siguientes artículos no pueden ser sincronizados.",
"The maximum age must be a number and cannot be blank.": "La edad máxima debe ser un número y no puede estar en blanco.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "El tiempo máximo para mantener una versión (en días, establece en 0 para mantener versiones para siempre).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "El porcentaje de espacio libre en disco mínimo debe ser un número no negativo entre 0 y 100 (incluidos).",
"The number of days must be a number and cannot be blank.": "El número de días debe ser un número y no puede estar vacío.",
"The number of days to keep files in the trash can. Zero means forever.": "El tiempo máximo para mantener un archivo en el cubo de basura (en días, establece en 0 para mantener versiones para siempre).",
"The number of old versions to keep, per file.": "El numero de versiones anteriores a conservar, por archivo.",
@@ -247,5 +247,5 @@
"items": "ítems",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} quiere compartir repositorio \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} qiuere compartir el repositorio \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} wants to share folder \"{{folderlabel}}\" ({{folder}})."
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} quiere compartir la carpeta \"{{folderlabel}}\" ({{folder}})."
}


@@ -1,63 +1,63 @@
{
"A device with that ID is already added.": "A device with that ID is already added.",
"A negative number of days doesn't make sense.": "Un nombre négatif de jours n'a pas de sens.",
"A device with that ID is already added.": "L'appareil portant cette ID est déjà présent.",
"A negative number of days doesn't make sense.": "Ce champ n'accepte qu'un entier positif ou nul.",
"A new major version may not be compatible with previous versions.": "Une nouvelle version majeure peut présenter des incompatibilités avec les versions antérieures.",
"API Key": "Clé API",
"About": "À propos",
"Actions": "Actions",
"Add": "Ajouter",
"Add Device": "Ajouter un périphérique",
"Add Folder": "Ajouter un répertoire",
"Add Remote Device": "Add Remote Device",
"Add new folder?": "Ajouter un nouveau dossier ?",
"Add Device": "Ajouter un appareil",
"Add Folder": "Ajouter un partage",
"Add Remote Device": "Ajouter un appareil distant",
"Add new folder?": "Ajouter un nouveau partage ?",
"Address": "Adresse",
"Addresses": "Adresses",
"Advanced": "Avancé",
"Advanced Configuration": "Configuration avancée",
"Advanced settings": "Advanced settings",
"Advanced settings": "Réglages experts",
"All Data": "Toutes les données",
"Allow Anonymous Usage Reporting?": "Autoriser le rapport anonyme de statistiques d'utilisation ?",
"Allow Anonymous Usage Reporting?": "Autoriser l'envoi de statistiques d'utilisation anonymisées ?",
"Alphabetic": "Alphabétique",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le dossier synchronisé.",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le répertoire synchronisé.",
"Anonymous Usage Reporting": "Rapport anonyme de statistiques d'utilisation",
"Any devices configured on an introducer device will be added to this device as well.": "Toute machine ajoutée depuis une machine introductrice sera aussi ajoutée sur cette machine.",
"Any devices configured on an introducer device will be added to this device as well.": "Tout appareil ajouté depuis un appareil introducteur sera aussi ajouté sur votre appareil.",
"Automatic upgrades": "Mises à jour automatiques",
"Be careful!": "Faites attention !",
"Bugs": "Bugs",
"CPU Utilization": "Utilisation du CPU",
"Changelog": "Historique des changements",
"Clean out after": "Nettoyer après",
"Clean out after": "Purger après :",
"Close": "Fermer",
"Command": "Commande",
"Comment, when used at the start of a line": "Commentaire lorsque utilisé en début de ligne",
"Compression": "Compression",
"Connection Error": "Erreur de connexion",
"Connection Type": "Connection Type",
"Connection Type": "Type de connexion",
"Copied from elsewhere": "Copié d'ailleurs",
"Copied from original": "Copié depuis l'original",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 the following Contributors:",
"Copyright © 2014-2016 the following Contributors:": "Les contributeurs suivants, Copyright © 2014-2016 :",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 Les contributeurs suivants:",
"Danger!": "Danger!",
"Danger!": "Attention !",
"Delete": "Supprimer",
"Deleted": "Supprimé",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Device \"{{name}}\" ({{device}} at {{address}}) wants to connect. Add new device?",
"Device ID": "ID du périphérique",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "L'appareil \"{{name}}\" ({{device}} à l'IP {{address}}) souhaite se connecter. L'acceptez-vous ?",
"Device ID": "ID de l'appareil",
"Device Identification": "Identification de l'appareil",
"Device Name": "Nom du périphérique",
"Device Name": "Nom de l'appareil",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "L'appareil {{device}} ({{address}}) veut se connecter. Voulez-vous ajouter cette appareil ?",
"Devices": "Appareil",
"Disconnected": "Déconnecté",
"Discovery": "Discovery",
"Discovery": "Découverte",
"Documentation": "Documentation",
"Download Rate": "Débit de réception",
"Downloaded": "Téléchargé",
"Downloading": "En cours de téléchargement",
"Edit": "Éditer",
"Edit Device": "Éditer le périphérique",
"Edit Folder": "Éditer le répertoire",
"Edit": "Modifier",
"Edit Device": "Modifier l'appareil",
"Edit Folder": "Modifier le partage",
"Editing": "Édition",
"Enable NAT traversal": "Enable NAT traversal",
"Enable Relaying": "Enable Relaying",
"Enable NAT traversal": "Activer transfert d'adresses NAT",
"Enable Relaying": "Activer le relayage",
"Enable UPnP": "Activer l'UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Entrer les adresses (\"tcp://ip:port\" ou \"tcp://host:port\") séparées par une virgule ou \"dynamic\" afin d'activer la recherche automatique de l'adresse.",
"Enter ignore patterns, one per line.": "Entrer les masques de filtrage, un par ligne.",
@@ -67,16 +67,16 @@
"File Pull Order": "Ordre de récupération de fichier",
"File Versioning": "Versions de fichier",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Les bits de permission de fichier sont ignorés lors de la recherche de changements. Utilisé sur les systèmes de fichiers FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés vers le dossier .stversions quand ils sont remplacés ou effacés par Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés, avec horodatage, dans le dossier .stversions quand ils sont remplacés ou supprimés par Syncthing.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés vers le répertoire .stversions quand ils sont remplacés ou effacés par Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés, avec horodatage, dans le répertoire .stversions quand ils sont remplacés ou supprimés par Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Les fichiers sont protégés des changements réalisés sur les autres appareils, mais les changements réalisés sur cet appareil seront transférés aux autres appareils.",
"Folder": "Dossier",
"Folder ID": "ID du répertoire",
"Folder Label": "Folder Label",
"Folder Master": "Répertoire maître",
"Folder Path": "Chemin du répertoire",
"Folder Type": "Folder Type",
"Folders": "Dossiers",
"Folder": "Partage",
"Folder ID": "ID du partage",
"Folder Label": "Étiquette du partage",
"Folder Master": "Partage maître",
"Folder Path": "Chemin racine du partage",
"Folder Type": "Type de partage",
"Folders": "Partages",
"GUI": "GUI",
"GUI Authentication Password": "Mot de passe d'authentification GUI",
"GUI Authentication User": "Utilisateur autorisé GUI",
@@ -84,29 +84,29 @@
"Generate": "Générer",
"Global Discovery": "Recherche globale",
"Global Discovery Server": "Serveur global de recherche",
"Global Discovery Servers": "Global Discovery Servers",
"Global Discovery Servers": "Serveurs de découverte globale",
"Global State": "État global",
"Help": "Aide",
"Home page": "Page d'accueil",
"Ignore": "Ignorer",
"Ignore Patterns": "Modèles à éviter",
"Ignore Patterns": "Règles d'exclusion",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit entrant (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos dossiers et mettre hors-service Syncthing",
"Introducer": "Initiateur",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Introducer": "Introducteur",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
"Keep Versions": "Conserver les versions",
"Largest First": "Les plus volumineux en premier",
"Last File Received": "Dernier fichier reçu",
"Last Scan": "Last Scan",
"Last Scan": "Dernière analyse",
"Last seen": "Dernière apparition",
"Later": "Plus tard",
"Listeners": "Listeners",
"Listeners": "Systèmes à l'écoute",
"Local Discovery": "Recherche locale",
"Local State": "État local",
"Local State (Total)": "État local (Total)",
"Major Upgrade": "Mise à jour majeure",
"Master": "Master",
"Master": "Maître",
"Maximum Age": "Ancienneté maximum",
"Metadata Only": "Métadonnées uniquement",
"Minimum Free Disk Space": "Espace disque libre minimum",
@@ -114,7 +114,7 @@
"Multi level wildcard (matches multiple directory levels)": "Astérisque à plusieurs niveaux (correspond aux répertoires et sous-répertoires)",
"Never": "Jamais",
"New Device": "Nouvel appareil",
"New Folder": "Nouveau dossier",
"New Folder": "Nouveau partage",
"Newest First": "Les plus récents en premier",
"No": "Non",
"No File Versioning": "Pas de version de fichier",
@@ -123,60 +123,60 @@
"OK": "OK",
"Off": "Éteint",
"Oldest First": "Les plus anciens en premier",
"Optional descriptive label for the folder. Can be different on each device.": "Optional descriptive label for the folder. Can be different on each device.",
"Optional descriptive label for the folder. Can be different on each device.": "Étiquette conviviale pour le partage. Elle peut être différente sur chaque appareil.",
"Options": "Options",
"Out of Sync": "Désynchronisé",
"Out of Sync Items": "Objets non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Le chemin du dossier sur l'ordinateur local sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Le chemin du répertoire sur l'ordinateur local sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
"Pause": "Pause",
"Paused": "En pause",
"Please consult the release notes before performing a major upgrade.": "Veuillez consulter les notes de version avant de réaliser une mise à jour majeure.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Please set a GUI Authentication User and Password in the Settings dialog.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Veuillez définir un nom d'utilisateur et un mot de passe dans les réglages.",
"Please wait": "Merci de patienter",
"Preview": "Aperçu",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
"Quick guide to supported patterns": "Guide rapide des masques supportés",
"RAM Utilization": "Utilisation de la RAM",
"Random": "Aléatoire",
"Relay Servers": "Relay Servers",
"Relay Servers": "Serveurs relais",
"Relayed via": "Relayée par",
"Relays": "Relais",
"Release Notes": "Notes de version",
"Remote Devices": "Remote Devices",
"Remote Devices": "Appareils distants",
"Remove": "Enlever",
"Required identifier for the folder. Must be the same on all cluster devices.": "Required identifier for the folder. Must be the same on all cluster devices.",
"Rescan": "Rescanner",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant obligatoire du partage. Il doit être identique chez chaque participant.",
"Rescan": "Réanalyser",
"Rescan All": "Réanalyser tout",
"Rescan Interval": "Intervalle de scan",
"Rescan Interval": "Intervalle d'analyse",
"Restart": "Redémarrer",
"Restart Needed": "Redémarrage nécessaire",
"Restarting": "Redémarrage",
"Resume": "Résumer",
"Resume": "Reprise",
"Reused": "Réutilisé",
"Save": "Sauver",
"Scan Time Remaining": "Scan Time Remaining",
"Scanning": "En cours de scan",
"Select the devices to share this folder with.": "Sélectionner les appareils avec qui partager ce dossier.",
"Select the folders to share with this device.": "Sélectionner les dossiers à partager avec cet appareil.",
"Scan Time Remaining": "Temps d'analyse restant",
"Scanning": "Analyse en cours",
"Select the devices to share this folder with.": "Sélectionner les appareils invités à ce partage.",
"Select the folders to share with this device.": "Sélectionner les partages auxquels participe cet appareil.",
"Settings": "Configuration",
"Share": "Partager",
"Share Folder": "Partager le dossier",
"Share Folders With Device": "Partager des dossiers avec des appareils",
"Share Folder": "Partager",
"Share Folders With Device": "Partages avec cet appareil",
"Share With Devices": "Partage avec des appareils",
"Share this folder?": "Voulez-vous partager ce dossier ?",
"Share this folder?": "Acceptez-vous ce partage ?",
"Shared With": "Partagé avec",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identifiant court du dossier. Il doit être le même sur l'ensemble des appareils du groupe.",
"Show ID": "Montrer l'ID",
"Show QR": "Show QR",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identifiant court du partage. Il sera le même sur l'ensemble des appareils du groupe.",
"Show ID": "Afficher mon ID",
"Show QR": "Afficher l'image QR",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de l'appareil dans le groupe. Sera proposé aux autres appareils comme nom optionnel par défaut.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Affiché à la place de l'ID de l'appareil dans le groupe. Si laissé vide, il sera mis à jour par le nom proposé par l'appareil distant.",
"Shutdown": "Éteindre",
"Shutdown Complete": "Extinction terminée",
"Shutdown": "Arrêter",
"Shutdown Complete": "Arrêté !",
"Simple File Versioning": "Suivi simple des versions de fichier",
"Single level wildcard (matches within a directory only)": "Astérisque à un seul niveau (correspond uniquement à lintérieur du dossier)",
"Single level wildcard (matches within a directory only)": "Astérisque à un seul niveau (correspond uniquement à lintérieur du répertoire)",
"Smallest First": "Les plus petits en premier",
"Source Code": "Code source",
"Staggered File Versioning": "Versions échelonnées de fichier",
@@ -192,20 +192,20 @@
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble avoir un problème pour traiter votre demande. S'il vous plait, rafraichissez la page ou redémarrer Syncthing si le problème persiste.",
"The Syncthing admin interface is configured to allow remote access without a password.": "The Syncthing admin interface is configured to allow remote access without a password.",
"The aggregated statistics are publicly available at the URL below.": "The aggregated statistics are publicly available at the URL below.",
"The Syncthing admin interface is configured to allow remote access without a password.": "L'interface d'administration de Syncthing est configuré pour accepter l'accès distant sans mot de passe !",
"The aggregated statistics are publicly available at the URL below.": "Les statistiques aggrégées sont publiquement disponibles à l'adresse ci-dessous.",
"The aggregated statistics are publicly available at {%url%}.": "Les statistiques agrégées sont disponibles publiquement à l'adresse {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuration a été enregistrée mais pas activée. Syncthing doit redémarrer afin d'activer la nouvelle configuration.",
"The device ID cannot be blank.": "L'ID de l'appareil ne peut être vide.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID de l'appareil à entrer peut être trouvé dans le menu \"Éditer > Montrer l'ID\" des autres appareils. Les espaces et les tirets sont optionnels (ils seront ignorés).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des dossiers et les versions de l'application. Si les données rapportées sont modifiées cette boite de dialogue vous redemandera votre confirmation.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID d'appareil à saisir ici se trouve dans le menu \"Actions > Afficher mon ID\" sur l'appareil distant. Les tirets et espaces sont optionnels (et ignorés).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID de machine à saisir ici se trouve dans le menu \"Actions > Afficher mon ID\" de l'appareil distant. Les tirets et espaces sont optionnels (et ignorés).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des partages et les versions de l'application. Si les données rapportées sont modifiées cette boite de dialogue vous redemandera votre confirmation.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID de l'appareil inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 caractères comprenant des lettres, des chiffres et potentiellement des espaces et des traits d'union.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Le premier paramètre de ligne de commande est le chemin du dossier, et le second est le chemin relatif dans le dossier.",
"The folder ID cannot be blank.": "L'identifiant (ID) du dossier ne peut être vide.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID du dossier doit être un identifiant court (64 caractères ou moins) comprenant uniquement des lettres, chiffre, points (.), traits d'union (-) et tirets bas (_).",
"The folder ID must be unique.": "L'ID du répertoire doit être unique.",
"The folder path cannot be blank.": "Le chemin du répertoire ne peut pas être vide.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Le premier paramètre de ligne de commande est le chemin du répertoire partagé, et le second est le chemin relatif dans le répertoire.",
"The folder ID cannot be blank.": "L'identifiant du partage ne peut être vide.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID du partage doit être un identifiant court (64 caractères ou moins) comprenant uniquement des lettres, chiffre, points (.), traits d'union (-) et tirets bas (_).",
"The folder ID must be unique.": "L'ID du partage doit être unique.",
"The folder path cannot be blank.": "Le chemin vers le répertoire ne peut pas être vide.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Les intervalles suivant sont utilisés: la première heure une version est conservée chaque 30 secondes, le premier jour une version est conservée chaque heure, les premiers 30 jours une version est conservée chaque jour, jusqu'à la limite d'âge maximum une version est conservée chaque semaine.",
"The following items could not be synchronized.": "Les éléments suivants ne peuvent pas être synchronisés.",
"The maximum age must be a number and cannot be blank.": "L'âge maximum doit être un nombre et ne peut être vide.",
@@ -219,8 +219,8 @@
"The rate limit must be a non-negative number (0: no limit)": "La limite de débit ne doit pas être négative (0: Aucune limite)",
"The rescan interval must be a non-negative number of seconds.": "L'intervalle d'analyse ne doit pas être un nombre négatif de secondes.",
"They are retried automatically and will be synced when the error is resolved.": "Ils seront réessayés automatiquement et synchronisés quand l'erreur sera résolue.",
"This Device": "This Device",
"This can easily give hackers access to read and change any files on your computer.": "This can easily give hackers access to read and change any files on your computer.",
"This Device": "Cet appareil",
"This can easily give hackers access to read and change any files on your computer.": "Ceci peut aisément permettre à un intrus de lire et modifier n'importe quel fichier de votre ordinateur. ",
"This is a major version upgrade.": "Ceci est une mise à jour majeure.",
"Trash Can File Versioning": "Gestion des versions de fichier style poubelle.",
"Unknown": "Inconnu",
@@ -237,15 +237,15 @@
"Version": "Version",
"Versions Path": "Emplacement des versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions seront supprimées automatiquement, si elles dépassent la durée maximum de conservation, ou si leur nombre est supérieur à la valeur autorisée dans l'intervalle.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Warning, this path is a subdirectory of an existing folder \"{{otherFolder}}\".",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Attention, ce chemin est un sous-répertoire du partage existant \"{{otherFolder}}\". Ceci peut causer des problèmes tels que duplications de fichiers ou suppressions intempestives sur les autres machines.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Lorsqu'un appareil est ajouté, gardez à l'esprit que cet appareil doit aussi être ajouté de l'autre coté.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lorsqu'un nouveau répertoire est ajouté, gardez à l'esprit que son ID est utilisé pour lier les répertoires à travers les appareils. Les ID sont sensibles à la casse et doivent être identiques à travers tous les nœuds.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lorsqu'un nouveau partage est ajouté, gardez à l'esprit que son ID est utilisée pour lier les répertoires à travers les appareils. L'ID est sensible à la casse et sera forcément la même sur toutes les appareils participant à ce partage.",
"Yes": "Oui",
"You must keep at least one version.": "Vous devez garder au minimum une version.",
"days": "Jours",
"full documentation": "documentation complète",
"items": "éléments",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} veut partager le dossier \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} wants to share folder \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} wants to share folder \"{{folderlabel}}\" ({{folder}})."
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vous invite au partage \"{{folderLabel}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} vous invite au partage \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} vous invite au partage \"{{folderLabel}}\" ({{folder}})."
}


@@ -1,32 +1,32 @@
{
"A device with that ID is already added.": "Une machine avec cet ID a déjà été ajoutée.",
"A negative number of days doesn't make sense.": "Un nombre négatif de jours n'a pas de sens.",
"A device with that ID is already added.": "L'appareil portant cette ID est déjà présent.",
"A negative number of days doesn't make sense.": "Ce champ n'accepte qu'un entier positif ou nul.",
"A new major version may not be compatible with previous versions.": "Une nouvelle version majeure peut présenter des incompatibilités avec les versions antérieures.",
"API Key": "Clé API",
"About": "À propos",
"Actions": "Actions",
"Add": "Ajouter",
"Add Device": "Ajouter une machine",
"Add Folder": "Ajouter un dossier",
"Add Remote Device": "Ajouter une machine distante",
"Add new folder?": "Ajouter un nouveau dossier ?",
"Add Device": "Ajouter un appareil",
"Add Folder": "Ajouter un partage",
"Add Remote Device": "Ajouter un appareil distant",
"Add new folder?": "Ajouter un nouveau partage ?",
"Address": "Adresse",
"Addresses": "Adresses",
"Advanced": "Avancé",
"Advanced Configuration": "Configuration avancée",
"Advanced settings": "Configuration avancée",
"All Data": "Toutes les données",
"Allow Anonymous Usage Reporting?": "Autoriser le rapport anonyme de statistiques d'utilisation ?",
"Allow Anonymous Usage Reporting?": "Autoriser l'envoi de statistiques d'utilisation anonymisées ?",
"Alphabetic": "Alphabétique",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le dossier synchronisé.",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Une commande externe gère les versions de fichiers. Elle supprime les fichiers dans le répertoire synchronisé.",
"Anonymous Usage Reporting": "Rapport anonyme de statistiques d'utilisation",
"Any devices configured on an introducer device will be added to this device as well.": "Toute machine ajoutée depuis une machine initiatrice sera aussi ajoutée sur cette machine.",
"Any devices configured on an introducer device will be added to this device as well.": "Tout appareil ajouté depuis un appareil introducteur sera aussi ajouté sur cet appareil.",
"Automatic upgrades": "Mises à jour automatiques",
"Be careful!": "Faites attention !",
"Bugs": "Bugs",
"CPU Utilization": "Utilisation du CPU",
"Changelog": "Historique des changements",
"Clean out after": "Nettoyer après",
"Clean out after": "Purger après :",
"Close": "Fermer",
"Command": "Commande",
"Comment, when used at the start of a line": "Commentaire lorsque utilisé en début de ligne",
@@ -40,60 +40,60 @@
"Danger!": "Attention !",
"Delete": "Supprimer",
"Deleted": "Supprimé",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "La machine \"{{name}}\" ({{device}} sur {{address}}) veut se connecter. Ajouter cette nouvelle machine ?",
"Device ID": "ID de la machine",
"Device Identification": "Identifiant de la machine",
"Device Name": "Nom de la machine",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "La machine {{device}} ({{address}}) veut se connecter. Voulez-vous ajouter cette machine ?",
"Devices": "Machines",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "L'appareil \"{{name}}\" ({{device}} à {{address}}) veut se connecter. L'acceptez-vous ?",
"Device ID": "ID de l'appareil",
"Device Identification": "Identifiant de l'appareil",
"Device Name": "Nom de l'appareil",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "L'appareil {{device}} ({{address}}) veut se connecter. Voulez-vous l'ajouter ?",
"Devices": "Appareils",
"Disconnected": "Déconnecté",
"Discovery": "Découverte",
"Documentation": "Documentation",
"Download Rate": "Vitesse de réception",
"Downloaded": "Téléchargé",
"Downloading": "En cours de téléchargement",
"Edit": "Éditer",
"Edit Device": "Éditer la machine",
"Edit Folder": "Éditer le dossier",
"Editing": "Édition",
"Enable NAT traversal": "Activer transfert d'adresses (NAT)",
"Enable Relaying": "Activer le relayage",
"Edit": "Modifier",
"Edit Device": "Modifier l'appareil",
"Edit Folder": "Modifier le partage",
"Editing": "Modifications",
"Enable NAT traversal": "Activer la translation d'adresses (NAT)",
"Enable Relaying": "Relayage possible",
"Enable UPnP": "Activer UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Entrer les adresses (\"tcp://ip:port\" ou \"tcp://host:port\") séparées par une virgule ou \"dynamic\" afin d'activer la recherche automatique de l'adresse.",
"Enter ignore patterns, one per line.": "Entrer les masques de filtrage, un par ligne.",
"Enter ignore patterns, one per line.": "Entrez les masques d'exclusion, un par ligne.",
"Error": "Erreur",
"External File Versioning": "Gestion externe des versions de fichiers",
"Failed Items": "Fichiers en échec",
"File Pull Order": "Ordre de récupération des fichiers",
"File Versioning": "Versions de fichier",
"File Versioning": "Méthode de préservation des fichiers",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Les bits de permission de fichier sont ignorés lors de la recherche de changements. Utilisé sur les systèmes de fichiers FAT.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés vers le dossier .stversions quand ils sont remplacés ou effacés par Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés, avec horodatage, dans le dossier .stversions quand ils sont remplacés ou supprimés par Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Les fichiers sont protégés des changements réalisés sur les autres machines, mais les changements réalisés sur celle-ci seront transférés aux autres machines.",
"Folder": "Dossier",
"Folder ID": "ID du dossier",
"Folder Label": "Étiquette du dossier",
"Folder Master": "Dossier maître",
"Folder Path": "Chemin du dossier",
"Folder Type": "Type de répertoire",
"Folders": "Dossiers",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés dans le sous-répertoire .stversions quand ils sont remplacés ou supprimés par Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Les fichiers sont déplacés et horodatés dans le sous-répertoire .stversions quand ils sont remplacés ou supprimés par Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Les fichiers sont protégés des changements réalisés sur les autres appareils, mais les changements réalisés sur celui-ci seront transférés aux autres.",
"Folder": "Partage",
"Folder ID": "ID du partage",
"Folder Label": "Étiquette du partage",
"Folder Master": "Partage maître",
"Folder Path": "Chemin racine du partage",
"Folder Type": "Type de partage",
"Folders": "Partages",
"GUI": "GUI",
"GUI Authentication Password": "Mot de passe d'authentification GUI",
"GUI Authentication User": "Utilisateur autorisé GUI",
"GUI Listen Addresses": "Adresse du GUI",
"Generate": "Générer",
"Global Discovery": "Recherche globale",
"Global Discovery Server": "Serveur global de recherche",
"Global Discovery Server": "Serveur de découverte globale",
"Global Discovery Servers": "Serveurs de découverte globale",
"Global State": "État global",
"Help": "Aide",
"Home page": "Page d'accueil",
"Ignore": "Ignorer",
"Ignore Patterns": "Modèles à éviter",
"Ignore Patterns": "Exclusions ...",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit de réception (Ko/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos dossiers et mettre hors-service Syncthing",
"Introducer": "Initiateur",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Une configuration incorrecte peut créer des dommages dans vos répertoires et mettre Syncthing hors-service.",
"Introducer": "introductrice",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
"Keep Versions": "Conserver les versions",
"Largest First": "Les plus volumineux en premier",
@@ -102,39 +102,39 @@
"Last seen": "Dernière apparition",
"Later": "Plus tard",
"Listeners": "Systèmes en écoute",
"Local Discovery": "Recherche locale",
"Local Discovery": "Découverte locale",
"Local State": "État local",
"Local State (Total)": "État local (Total)",
"Major Upgrade": "Mise à jour majeure",
"Master": "Maitre",
"Master": "Maître",
"Maximum Age": "Ancienneté maximum",
"Metadata Only": "Métadonnées uniquement",
"Minimum Free Disk Space": "Espace disque libre minimum",
"Move to top of queue": "Déplacer en haut de la file",
"Multi level wildcard (matches multiple directory levels)": "Astérisque à plusieurs niveaux (correspond aux répertoires et sous-répertoires)",
"Never": "Jamais",
"New Device": "Nouvelle machine",
"New Folder": "Nouveau dossier",
"New Device": "Nouvel appareil",
"New Folder": "Nouveau partage",
"Newest First": "Les plus récents en premier",
"No": "Non",
"No File Versioning": "Pas de version de fichier",
"No File Versioning": "Pas de préservation des fichiers",
"Normal": "Normal",
"Notice": "Notification",
"OK": "OK",
"Off": "Éteint",
"Off": "Désactivé(e)",
"Oldest First": "Les plus anciens en premier",
"Optional descriptive label for the folder. Can be different on each device.": "Étiquette optionnelle pour le dossier. Peut être différente pour chaque machine.",
"Optional descriptive label for the folder. Can be different on each device.": "Étiquette optionnelle pour le partage. Peut être différente sur chaque appareil.",
"Options": "Options",
"Out of Sync": "Désynchronisé",
"Out of Sync Items": "Fichiers non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit d'émission (Ko/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Le chemin du dossier sur l'ordinateur local sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les versions doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin vers le répertoire dans l'appareil local. Il sera créé s'il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Chemin où les copies doivent être conservées (laisser vide pour le chemin par défaut de .stversions dans le répertoire)",
"Pause": "Pause",
"Paused": "En pause",
"Please consult the release notes before performing a major upgrade.": "Veuillez consulter les notes de version avant de réaliser une mise à jour majeure.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "SVP, mettez un nom d'utilisateur et un mot de passe dans la fenêtre de paramétrage.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Veuillez définir un nom d'utilisateur et un mot de passe dans la fenêtre de Configuration.",
"Please wait": "Merci de patienter",
"Preview": "Aperçu",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
@@ -145,38 +145,38 @@
"Relayed via": "Relayée par",
"Relays": "Relais",
"Release Notes": "Notes de version",
"Remote Devices": "Machines distantes",
"Remote Devices": "Appareils distants",
"Remove": "Enlever",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant pour le dossier. Doit être le même sur l'ensemble des machines du cluster.",
"Rescan": "Réanalyse",
"Required identifier for the folder. Must be the same on all cluster devices.": "Identifiant du partage. Doit être le même sur l'ensemble des appareils concernés.",
"Rescan": "Réanalyser",
"Rescan All": "Réanalyser tout",
"Rescan Interval": "Intervalle d'analyse",
"Restart": "Redémarrer",
"Restart Needed": "Redémarrage nécessaire",
"Restarting": "Redémarrage",
"Resume": "Résumer",
"Resume": "Reprise",
"Reused": "Réutilisé",
"Save": "Sauver",
"Scan Time Remaining": "Intervalle entre chaque analyse",
"Save": "Enregistrer",
"Scan Time Remaining": "Temps d'analyse restant",
"Scanning": "Analyse en cours",
"Select the devices to share this folder with.": "Sélectionner les machines avec qui partager ce dossier.",
"Select the folders to share with this device.": "Sélectionner les dossiers à partager avec cette machine.",
"Select the devices to share this folder with.": "Sélectionner les appareils invités à ce partage.",
"Select the folders to share with this device.": "Sélectionner les partages auxquels cet appareil participe.",
"Settings": "Configuration",
"Share": "Partager",
"Share Folder": "Partager le dossier",
"Share Folders With Device": "Partager des dossiers avec des machines",
"Share Folder": "Partager",
"Share Folders With Device": "Partages avec cet appareil",
"Share With Devices": "Partage avec des appareils",
"Share this folder?": "Voulez-vous partager ce dossier ?",
"Share this folder?": "Acceptez-vous ce partage ?",
"Shared With": "Partagé avec",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identifiant court du dossier. Il doit être le même sur l'ensemble des machines du groupe.",
"Show ID": "Afficher l'ID",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identifiant court du partage. Il sera le même sur l'ensemble des appareils du groupe.",
"Show ID": "Afficher mon ID",
"Show QR": "Afficher le QR",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de la machine dans le groupe. Sera proposé aux autres machines comme nom optionnel par défaut.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Affiché à la place de l'ID de la machine dans le groupe. Si laissé vide, il sera mis à jour par le nom proposé par la machine distante.",
"Shutdown": "Éteindre",
"Shutdown Complete": "Extinction terminée",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de l'appareil dans le groupe. Sera proposé aux autres appareils comme nom convivial par défaut.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Affiché à la place de l'ID de l'appareil dans le groupe. Si laissé vide, il sera mis à jour par le nom proposé par l'appareil distant.",
"Shutdown": "Arrêter",
"Shutdown Complete": "Arrêté !",
"Simple File Versioning": "Suivi simple des versions de fichier",
"Single level wildcard (matches within a directory only)": "Astérisque à un seul niveau (correspond uniquement à lintérieur du dossier)",
"Single level wildcard (matches within a directory only)": "Astérisque à un seul niveau (correspond uniquement à lintérieur du répertoire)",
"Smallest First": "Les plus petits en premier",
"Source Code": "Code source",
"Staggered File Versioning": "Versions échelonnées de fichier",
@@ -186,26 +186,26 @@
"Support": "Aide",
"Sync Protocol Listen Addresses": "Adresse d'écoute du protocole de synchronisation",
"Syncing": "Synchronisation en cours",
"Syncthing has been shut down.": "Syncthing a été éteint.",
"Syncthing has been shut down.": "Syncthing a été arrêté.",
"Syncthing includes the following software or portions thereof:": "Syncthing intègre les logiciels suivants (ou des éléments provenant de ces logiciels) :",
"Syncthing is restarting.": "Syncthing est cours de redémarrage.",
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble avoir un problème pour traiter votre demande. S'il vous plait, rafraichissez la page ou redémarrer Syncthing si le problème persiste.",
"The Syncthing admin interface is configured to allow remote access without a password.": "L'interface d'administration de Syncthing est paramétrée pour autoriser les accès à distance sans mot de passe.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être arrêté, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble avoir un problème pour traiter votre demande. S'il vous plaît, rafraichissez la page ou redémarrez Syncthing si le problème persiste.",
"The Syncthing admin interface is configured to allow remote access without a password.": "L'interface d'administration de Syncthing est paramétrée pour autoriser les accès à distance sans mot de passe !!!",
"The aggregated statistics are publicly available at the URL below.": "Les statistiques agrégées sont disponibles publiquement à l'adresse ci-dessous.",
"The aggregated statistics are publicly available at {%url%}.": "Les statistiques agrégées sont disponibles publiquement à l'adresse {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuration a été enregistrée mais pas activée. Syncthing doit redémarrer afin d'activer la nouvelle configuration.",
"The device ID cannot be blank.": "L'ID de la machine ne peut être vide.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID de la machine à indiquer ici se trouve dans \"Actions > Afficher ID\" sur l'autre machine. Espaces et tirets sont optionnels (ignorés).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID de la machine à entrer peut être trouvé dans le menu \"Éditer > Montrer l'ID\" des autres machines. Les espaces et les tirets sont optionnels (ils seront ignorés).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des dossiers et les versions de l'application. Si les données rapportées sont modifiées cette boite de dialogue vous redemandera votre confirmation.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID de la machine inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 caractères comprenant des lettres, des chiffres et potentiellement des espaces et des traits d'union.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Le premier paramètre de ligne de commande est le chemin du dossier, et le second est le chemin relatif dans le dossier.",
"The folder ID cannot be blank.": "L'identifiant (ID) du dossier ne peut être vide.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID du dossier doit être un identifiant court (64 caractères ou moins) comprenant uniquement des lettres, chiffre, points (.), traits d'union (-) et tirets bas (_).",
"The folder ID must be unique.": "L'ID du dossier doit être unique.",
"The folder path cannot be blank.": "Le chemin du dossier ne peut pas être vide.",
"The device ID cannot be blank.": "L'ID de l'appareil ne peut être vide.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID d'appareil à saisir ici se trouve dans le menu \"Actions > Afficher mon ID\" de l'appareil distant. Espaces et tirets sont optionnels (ignorés).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "L'ID d'appareil à saisir ici se trouve dans le menu \"Modifications > Afficher mon ID\" de l'appareil distant. Les espaces et les tirets sont optionnels (ils seront ignorés).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Le rapport d'utilisation chiffré est envoyé quotidiennement. Il sert à répertorier les plateformes utilisées, la taille des partages et les versions de l'application. Si le jeu de données rapportées devait être changé, il vous sera demandé de le valider de nouveau via ce message.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "L'ID de l'appareil inséré ne semble pas être valide. Il devrait ressembler à une chaîne de 52 ou 56 caractères comprenant des lettres, des chiffres et potentiellement des espaces et des traits d'union.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Le premier paramètre de ligne de commande est le chemin du répertoire partagé, et le second est le chemin relatif dans le répertoire.",
"The folder ID cannot be blank.": "L'ID du partage ne peut être vide.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "L'ID du partage doit être un identifiant court (64 caractères ou moins), constitué de lettres, chiffres, points (.), tirets milieu (-) et tirets bas (_) seulement.",
"The folder ID must be unique.": "L'ID du partage doit être unique.",
"The folder path cannot be blank.": "Le chemin vers le répertoire ne peut pas être vide.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Les intervalles suivant sont utilisés: la première heure une version est conservée chaque 30 secondes, le premier jour une version est conservée chaque heure, les premiers 30 jours une version est conservée chaque jour, jusqu'à la limite d'âge maximum une version est conservée chaque semaine.",
"The following items could not be synchronized.": "Les fichiers suivants ne peuvent pas être synchronisés.",
"The maximum age must be a number and cannot be blank.": "L'âge maximum doit être un nombre et ne peut être vide.",
@@ -219,10 +219,10 @@
"The rate limit must be a non-negative number (0: no limit)": "La limite de débit ne doit pas être négative (0: Aucune limite)",
"The rescan interval must be a non-negative number of seconds.": "L'intervalle d'analyse ne doit pas être un nombre négatif de secondes.",
"They are retried automatically and will be synced when the error is resolved.": "Ils seront réessayés automatiquement et synchronisés quand l'erreur sera résolue.",
"This Device": "Cette machine",
"This can easily give hackers access to read and change any files on your computer.": "Cela permet facilement aux pirates de lire et modifier n'importe quel fichier de votre machine.",
"This Device": "Cet appareil",
"This can easily give hackers access to read and change any files on your computer.": "Ceci peut aisément permettre à un intrus de lire et modifier n'importe quel fichier de votre ordinateur.",
"This is a major version upgrade.": "Ceci est une mise à jour majeure.",
"Trash Can File Versioning": "Gestion des versions de fichier style poubelle.",
"Trash Can File Versioning": "Préservation style poubelle",
"Unknown": "Inconnu",
"Unshared": "Non partagé",
"Unused": "Non utilisé",
@@ -237,15 +237,15 @@
"Version": "Version",
"Versions Path": "Emplacement des versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions seront supprimées automatiquement, si elles dépassent la durée maximum de conservation, ou si leur nombre est supérieur à la valeur autorisée dans l'intervalle.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Attention, ce chemin est un sous-répertoire du dossier existant \"{{otherFolder}}\".",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Lorsqu'une machine est ajoutée, gardez à l'esprit que cette machine doit aussi être ajoutée de l'autre coté.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lorsqu'un nouveau dossier est ajouté, gardez à l'esprit que son ID est utilisé pour lier les dossiers à travers les machines. Les ID sont sensibles à la casse et doivent être identiques à travers tous les nœuds.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Attention, ce chemin est un sous-répertoire du partage existant \"{{otherFolder}}\". Ceci peut causer des problèmes tels que duplications de fichiers ou suppressions intempestives sur les autres appareils.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Lorsqu'un appareil est ajouté, gardez à l'esprit que votre appareil doit aussi être ajoutée de l'autre coté.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lorsqu'un nouveau partage est ajouté, gardez à l'esprit que son ID est utilisée pour lier les répertoires à travers les appareils. L'ID est sensible à la casse et sera forcément la même sur toutes les appareils participant à ce partage.",
"Yes": "Oui",
"You must keep at least one version.": "Vous devez garder au minimum une version.",
"days": "Jours",
"full documentation": "documentation complète",
"items": "fichiers",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} veut partager le dossier \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} veut partager le dossier \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} veut partager le dossier \"{{folderLabel}}\" ({{folder}})."
"items": "éléments",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vous invite au partage \"{{folderLabel}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} vous invite au partage \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} vous invite au partage \"{{folderLabel}}\" ({{folder}})."
}


@@ -33,8 +33,8 @@
"Compression": "Tömörítés",
"Connection Error": "Kapcsolódási hiba",
"Connection Type": "Kapcsolat típus",
"Copied from elsewhere": "Másolva máshonnan",
"Copied from original": "Másolva az eredetiről",
"Copied from elsewhere": "Máshonnan másolva",
"Copied from original": "Eredetiről másolva",
"Copyright © 2014-2016 the following Contributors:": "Szerzői jog © 2014-2016 az alábbi közreműködők:",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 az alábbi közreműködők:",
"Danger!": "Veszély!",
@@ -210,7 +210,7 @@
"The following items could not be synchronized.": "A következő elemek nem szinkronizálhatóak.",
"The maximum age must be a number and cannot be blank.": "A maximális kornak számnak kell lenni és nem lehet üres",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "A verziók megtartásának maximális ideje (napokban, ha 0-t adsz meg örökre megmaradnak).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "A minimális szabad terület százalékos. nem-negatív érték 0 és 100 között",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "A minimális szabad terület százalékos, nem-negatív értéke 0 és 100 között",
"The number of days must be a number and cannot be blank.": "A napok száma szám kell legyen és nem lehet üres.",
"The number of days to keep files in the trash can. Zero means forever.": "A napok száma ameddig a fájlok meg lesznek tartva a szemetesben. A 0 azt jelenti örökre.",
"The number of old versions to keep, per file.": "A megtartott régi verziók száma, fájlonként.",
@@ -220,7 +220,7 @@
"The rescan interval must be a non-negative number of seconds.": "Az átnézési intervallum nullánál nagyobb másodperc érték kell legyen",
"They are retried automatically and will be synced when the error is resolved.": "A hiba javítása után automatikusan újra megpróbálja a szinkronizálást.",
"This Device": "Ez az eszköz",
"This can easily give hackers access to read and change any files on your computer.": "Így a hekkerek könnyedén hozzáférhetnek a számítógépen található fájlokhoz. ",
"This can easily give hackers access to read and change any files on your computer.": "Így a hekkerek könnyedén hozzáférést szerezhetnek a gépeden tárolt fájlok olvasásához és módosításához.",
"This is a major version upgrade.": "Ez egy főverzió frissítés.",
"Trash Can File Versioning": "Szemetes fájl verziókövetés",
"Unknown": "Ismeretlen",
@@ -241,7 +241,7 @@
"When adding a new device, keep in mind that this device must be added on the other side too.": "Amikor új eszközt adsz hozzá, tartsd észben, hogy a másik oldalon ezt az eszközt is hozzá kell adni.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Amikor új mappát adsz hozzá, tartsd észben, hogy a mappa azonosító arra való hogy összekösd a mappákat az eszközeiden. Az azonosító kisbetű-nagybetű érzékeny és pontosan egyeznie kell az eszközökön.",
"Yes": "Igen",
"You must keep at least one version.": "Legalább egy verziót meg kell tartanod",
"You must keep at least one version.": "Legalább egy verziót meg kell tartanod.",
"days": "nap",
"full documentation": "teljes dokumentáció",
"items": "elem",


@@ -3,7 +3,7 @@
"A negative number of days doesn't make sense.": "음수로는 지정할 수 없습니다.",
"A new major version may not be compatible with previous versions.": "새로운 메이저 버전은 이전 버전과 호환되지 않을 수 있습니다.",
"API Key": "API 키",
"About": " 정보",
"About": "정보",
"Actions": "동작",
"Add": "추가",
"Add Device": "기기 추가",


@@ -60,7 +60,7 @@
"Enable Relaying": "Habilitar retransmissão",
"Enable UPnP": "Habilitar UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Insira endereços (\"tcp://ip:porta\", \"tcp://host:porta\") separados por vírgula ou \"dynamic\" para executar a descoberta automática do endereço.",
"Enter ignore patterns, one per line.": "Insira os padrões de exclusão, um por linha.",
"Enter ignore patterns, one per line.": "Insira os filtros, um por linha.",
"Error": "Erro",
"External File Versioning": "Versionamento externo de arquivo",
"Failed Items": "Itens com falha",
@@ -89,7 +89,7 @@
"Help": "Ajuda",
"Home page": "Página inicial",
"Ignore": "Ignorar",
"Ignore Patterns": "Padrões de exclusão",
"Ignore Patterns": "Filtros",
"Ignore Permissions": "Ignorar permissões",
"Incoming Rate Limit (KiB/s)": "Limite de velocidade de recepção (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "A configuração incorreta poderá causar danos aos seus dados e tornar o Syncthing inoperante.",


@@ -1,6 +1,6 @@
{
"A device with that ID is already added.": "En enhet med det ID är redan tillagt.",
"A negative number of days doesn't make sense.": "Ett negativt antal dagar är inte troligt.",
"A device with that ID is already added.": "En enhet med det ID:t är redan tillagt.",
"A negative number of days doesn't make sense.": "Ett negativt antal dagar är inte rimligt.",
"A new major version may not be compatible with previous versions.": "En ny huvudversion kan eventuellt vara inkompatibel med tidigare versioner.",
"API Key": "API-nyckel",
"About": "Om",
@@ -18,8 +18,8 @@
"All Data": "All data",
"Allow Anonymous Usage Reporting?": "Tillåt anonym användarstatistiksrapportering?",
"Alphabetic": "Alfabetisk",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Ett externt kommando sköter versionshanteringen. Det måste ta bort filen från den synkroniserade mappen.",
"Anonymous Usage Reporting": "Anonym användarstatistik",
"An external command handles the versioning. It has to remove the file from the synced folder.": "Ett externt kommando sköter versionshanteringen. Den behöver ta bort filen från den synkroniserade mappen.",
"Anonymous Usage Reporting": "Anonym användarstatistik rapportering",
"Any devices configured on an introducer device will be added to this device as well.": "Enheter konfigurerade på en introduktörsenhet kommer också att läggas till den här enheten.",
"Automatic upgrades": "Automatiska uppgraderingar",
"Be careful!": "Var aktsam!",
@@ -29,87 +29,87 @@
"Clean out after": "Rensa efteråt",
"Close": "Stäng",
"Command": "Kommando",
"Comment, when used at the start of a line": "Kommentar, vid början av en rad.",
"Comment, when used at the start of a line": "Kommentar, vid användning i början av en rad.",
"Compression": "Komprimering",
"Connection Error": "Anslutningsproblem",
"Connection Type": "Anslutningstyp",
"Copied from elsewhere": "Kopierat utifrån",
"Copied from original": "Oförändrat",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 följande bidragande:",
"Copied from original": "Kopierat från original",
"Copyright © 2014-2016 the following Contributors:": "Copyright © 2014-2016 följande bidragare:",
"Copyright © 2015 the following Contributors:": "Copyright © 2015 följande medverkande:",
"Danger!": "Fara!",
"Delete": "Ta bort",
"Deleted": "Borttaget",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "Enhet \"{{name}}\" ({{device}} på {{address}}) vill ansluta. Lägg till ny enhet?",
"Device ID": "Enhets-ID",
"Device ID": "Enhets ID",
"Device Identification": "Enhetsidentifikation",
"Device Name": "Enhetsnamn",
"Device Name": "Enhetens namn",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Enheten {{device}} ({{address}}) vill ansluta. Lägg till ny enhet?",
"Devices": "Enheter",
"Disconnected": "Ej ansluten",
"Discovery": "Uppslagning",
"Disconnected": "Frånkopplad",
"Discovery": "Upptäckt",
"Documentation": "Dokumentation",
"Download Rate": "Nedladdningshastighet",
"Downloaded": "Nerladdat",
"Downloading": "Laddar ner",
"Downloaded": "Hämtat",
"Downloading": "Hämtar",
"Edit": "Redigera",
"Edit Device": "Redigera enhet",
"Edit Folder": "Redigera katalog",
"Editing": "Redigerar",
"Enable NAT traversal": "Aktivera NAT traversering",
"Enable Relaying": "Aktivera reläa",
"Enable UPnP": "Använd UPnP",
"Enable UPnP": "Aktivera UPnP",
"Enter comma separated (\"tcp://ip:port\", \"tcp://host:port\") addresses or \"dynamic\" to perform automatic discovery of the address.": "Ange kommaseparerade (\"tcp://ip:port\", \"tcp://host:port\")-adresser eller ordet \"dynamic\" för att använda automatisk uppslagning.",
"Enter ignore patterns, one per line.": "Ange filmönster, ett per rad.",
"Enter ignore patterns, one per line.": "Ange ignorera mönster, en per rad.",
"Error": "Fel",
"External File Versioning": "Extern versionshantering",
"Failed Items": "Misslyckade filer",
"File Pull Order": "Hämtningsprioritering av filer",
"File Versioning": "Versionshantering",
"External File Versioning": "Extern filversionshantering",
"Failed Items": "Misslyckade objekt",
"File Pull Order": "Filhämtningsprioritering",
"File Versioning": "Filversionshantering",
"File permission bits are ignored when looking for changes. Use on FAT file systems.": "Filrättigheter ignoreras vid sökning efter förändringar. Används på FAT-filsystem.",
"Files are moved to .stversions folder when replaced or deleted by Syncthing.": "Filer flyttas till katalogen .stversions om de ersätts eller raderas av Syncthing.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "Filer flyttas till datummärkta versioner i en .stversions-mapp när de ersatts eller raderats av Syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Filer skyddas från ändringar gjorda på andra enheter, men ändringar som görs på den här noden skickas till de andra klustermedlemmarna.",
"Folder": "Katalog",
"Folder ID": "Katalog-ID",
"Folder Label": "Katalog etikett",
"Folder Master": "Huvudlagring",
"Folder Path": "Sökväg",
"Folder Label": "Katalog-etikett",
"Folder Master": "Huvudkatalog",
"Folder Path": "Katalog-sökväg",
"Folder Type": "Katalogtyp",
"Folders": "Kataloger",
"GUI": "GUI",
"GUI Authentication Password": "GUI-lösenord",
"GUI Authentication User": "GUI-användare",
"GUI Listen Addresses": "GUI-adress",
"Generate": "Skapa",
"Global Discovery": "Global uppslagning",
"Global Discovery Server": "Global uppslagningsserver",
"Global Discovery Servers": "Globala uppslagningsservrar",
"GUI": "Grafiskt gränssnitt",
"GUI Authentication Password": "Lösenord för GUI",
"GUI Authentication User": "Användare för GUI",
"GUI Listen Addresses": "Lyssningsadresser för GUI",
"Generate": "Generera",
"Global Discovery": "Global upptäckt",
"Global Discovery Server": "Global upptäcktsserver",
"Global Discovery Servers": "Globala upptäcktsservrar",
"Global State": "Global status",
"Help": "Hjälp",
"Home page": "Hemsida",
"Ignore": "Ignorera",
"Ignore Patterns": "Ignorerade filmönster",
"Ignore Permissions": "Ignorera filrättigheter",
"Incoming Rate Limit (KiB/s)": "Max nedladdningshastighet (KiB/s)",
"Ignore Patterns": "Ignorera mönster",
"Ignore Permissions": "Ignorera rättigheter",
"Incoming Rate Limit (KiB/s)": "Nedladdningshastighetsgräns (KiB/s)",
"Incorrect configuration may damage your folder contents and render Syncthing inoperable.": "Inkorrekt konfiguration kan skada innehållet i katalogen and få Syncthing att sluta fungera.",
"Introducer": "introduktör",
"Inversion of the given condition (i.e. do not exclude)": "Vänder på villkoret, d.v.s. exkluderar inte.",
"Keep Versions": "Behåll versioner",
"Largest First": "Störst först",
"Last File Received": "Senast Mottagna Fil",
"Last Scan": "Senast skanning",
"Last seen": "Senast online",
"Largest First": "Största först",
"Last File Received": "Senaste fil mottagen",
"Last Scan": "Senaste skanning",
"Last seen": "Senast sedd",
"Later": "Senare",
"Listeners": "Lyssnare",
"Local Discovery": "Lokal uppslagning",
"Local Discovery": "Lokal upptäckt",
"Local State": "Lokal status",
"Local State (Total)": "Lokal status (Total)",
"Local State (Total)": "Lokal status (totalt)",
"Major Upgrade": "Stor uppgradering",
"Master": "Huvud",
"Maximum Age": "Högsta ålder",
"Maximum Age": "Maximum ålder",
"Metadata Only": "Endast metadata",
"Minimum Free Disk Space": "Minimum ledigt diskutrymme",
"Minimum Free Disk Space": "Minsta ledigt diskutrymme",
"Move to top of queue": "Flytta till överst i kön",
"Multi level wildcard (matches multiple directory levels)": "Jokertecken som representerar noll eller fler godtyckliga tecken, även över kataloggränser.",
"Never": "Aldrig",
@@ -117,7 +117,7 @@
"New Folder": "Ny katalog",
"Newest First": "Nyast först",
"No": "Nej",
"No File Versioning": "Ingen versionshantering",
"No File Versioning": "Ingen filversionshantering",
"Normal": "Normal",
"Notice": "Observera",
"OK": "OK",
@@ -126,39 +126,39 @@
"Optional descriptive label for the folder. Can be different on each device.": "Valfri beskrivande etikett för katalogen. Kan vara olika på varje enhet.",
"Options": "Alternativ",
"Out of Sync": "Osynkroniserad",
"Out of Sync Items": "Osynkroniserade poster",
"Outgoing Rate Limit (KiB/s)": "Max uppladdningshastighet (KiB/s)",
"Override Changes": "Skriv över ändringar",
"Out of Sync Items": "Osynkroniserade objekt",
"Outgoing Rate Limit (KiB/s)": "Uppladdningshastighetsgräns (KiB/s)",
"Override Changes": "Åsidosätt förändringar",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Sökväg till katalogen på din dator. Kommer att skapas om det inte finns. Tecknet tilde (~) kan användas som en genväg för",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Sökväg där versioner sparas (lämna tomt för att använda .stversions i den ordinarie katalogen).",
"Pause": "Paus",
"Paused": "Pausad",
"Please consult the release notes before performing a major upgrade.": "Läs igenom versionsnyheterna innan den stora uppgraderingen.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Ställ in ett grafiskt användarautentisering och lösenord i dialogrutan Inställningar.",
"Please set a GUI Authentication User and Password in the Settings dialog.": "Ställ in ett grafiska gränssnittets användarautentisering och lösenord i inställningsdialogrutan.",
"Please wait": "Var god vänta",
"Preview": "Förhandsgranska",
"Preview Usage Report": "Förhandsgranska statistik",
"Quick guide to supported patterns": "Snabb guide till filmönster som stöds",
"RAM Utilization": "Minnesanvändning",
"Quick guide to supported patterns": "Snabb handledning till mönster som stöds",
"RAM Utilization": "RAM-användning",
"Random": "Slumpmässig",
"Relay Servers": "Reläservrar",
"Relayed via": "Vidarbefordras via",
"Relays": "Vidarbefordringar",
"Relays": "Reläservrar",
"Release Notes": "Versionsanteckningar",
"Remote Devices": "Fjärrenheter",
"Remove": "Ta bort",
"Required identifier for the folder. Must be the same on all cluster devices.": "Krävs identifierare för katalogen. Måste vara densamma på alla kluster enheter.",
"Rescan": "Uppdatera",
"Rescan All": "Uppdatera alla",
"Rescan Interval": "Uppdateringsintervall",
"Rescan": "Skanna om",
"Rescan All": "Skanna om alla",
"Rescan Interval": "Omskanningsintervall",
"Restart": "Starta om",
"Restart Needed": "Omstart behövs",
"Restarting": "Startar om",
"Resume": "Återuppta",
"Reused": "Återanvänt",
"Reused": "Återanvänd",
"Save": "Spara",
"Scan Time Remaining": "Granska återstående tid",
"Scanning": "Uppdaterar",
"Scan Time Remaining": "Återstående skanningstid",
"Scanning": "Skannar",
"Select the devices to share this folder with.": "Ange enheterna att dela den här katalogen med.",
"Select the folders to share with this device.": "Välj kataloger att dela med den här enheten.",
"Settings": "Inställningar",
@@ -166,67 +166,67 @@
"Share Folder": "Dela katalog",
"Share Folders With Device": "Dela kataloger med enhet",
"Share With Devices": "Dela med enheter",
"Share this folder?": "Dela den här katalogen?",
"Share this folder?": "Dela denna katalog?",
"Shared With": "Delad med",
"Short identifier for the folder. Must be the same on all cluster devices.": "Kort identifieringssträng för katalogen. Måste vara samma på alla enheter i klustret.",
"Show ID": "Visa ID",
"Show QR": "Visa QR",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Visas i stället för enhets-ID. Skickas till andra enheter som namn på denna enhet.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Visas i stället för enhets-ID. Sätts till namnet på den andra enheten vid första anslutning om det lämnas tomt.",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Visas i stället för enhetens ID. Skickas till andra enheter som namn på denna enhet.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Visas i stället för enhetens ID. Sätts till namnet på den andra enheten vid första anslutning om det lämnas tomt.",
"Shutdown": "Stäng av",
"Shutdown Complete": "Avstängning klar",
"Simple File Versioning": "Enkel versionshantering",
"Simple File Versioning": "Enkel filversionshantering",
"Single level wildcard (matches within a directory only)": "Jokertecken som representerar noll eller fler godtyckliga tecken i ett filnamn.",
"Smallest First": "Minst först",
"Source Code": "Källkod",
"Staggered File Versioning": "Versionshantering i intervall",
"Start Browser": "Starta browser",
"Staggered File Versioning": "Filversionshantering i intervall",
"Start Browser": "Starta webbläsare",
"Statistics": "Statistik",
"Stopped": "Stoppad",
"Support": "Support",
"Sync Protocol Listen Addresses": "Address för inkommande anslutningar",
"Sync Protocol Listen Addresses": "Synkroniseringsprotokollets lyssningsadresser",
"Syncing": "Synkroniserar",
"Syncthing has been shut down.": "Syncthing har stängts.",
"Syncthing includes the following software or portions thereof:": "Syncthing innehåller följande mjukvarupaket eller delar av dem:",
"Syncthing is restarting.": "Syncthing startar om.",
"Syncthing is upgrading.": "Syncthing uppgraderas.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing verkar avstängd, eller finns det problem med din Internetanslutning. Försöker igen...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing verkar ha drabbats av ett problem. Uppdatera sidan eller starta om Syncthing om problemet kvarstår.",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing verkar ha drabbats av ett problem med behandlingen av din begäran. Uppdatera sidan eller starta om Syncthing om problemet kvarstår.",
"The Syncthing admin interface is configured to allow remote access without a password.": "Syncthing administratör gränssnittet är konfigurerat för att tillåta fjärrtillträde utan ett lösenord.",
"The aggregated statistics are publicly available at the URL below.": "Den aggregerade statistiken är offentligt tillgängliga på webbadressen nedan.",
"The aggregated statistics are publicly available at {%url%}.": "Sammanställd statistik finns publikt tillgänglig på {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfigurationen har sparats men inte aktiverats. Syncthing måste startas om för att aktivera den nya konfigurationen.",
"The device ID cannot be blank.": "Enhets-ID kan inte vara tomt.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhets-ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhets-ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The device ID cannot be blank.": "Enhetens ID kan inte vara tomt.",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhetens ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Enhetens ID som behövs här kan du hitta i \"Redigera > Visa ID\"-dialogen på den andra enheten. Mellanrum och bindestreck är valfria (ignoreras).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Den krypterade användarstatistiken skickas dagligen. Den används för att spåra vanliga plattformar, katalogstorlekar och versioner. Om datan som rapporteras ändras så kommer du att bli tillfrågad igen.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Det inmatade enhets-ID:t verkar inte korrekt. Det ska vara en 52 eller 56 teckens sträng bestående av siffror och bokstäver, eventuellt med mellanrum och bindestreck.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Det inmatade enhetens ID verkar inte korrekt. Det ska vara en 52 eller 56 teckens sträng bestående av siffror och bokstäver, eventuellt med mellanrum och bindestreck.",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "Den första kommandoparametern är sökvägen till mappen och den andra parametern är den relativa sökvägen i mappen.",
"The folder ID cannot be blank.": "Ange ett enhets-ID.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Katalog-ID:t måste vara en kort sträng (64 tecken eller mindre), bestående av endast bokstäver, siffror, punkt (.), bindestreck (-) och understreck (_).",
"The folder ID must be unique.": "Katalog-ID:t måste vara unikt.",
"The folder path cannot be blank.": "Ange en sökväg.",
"The folder ID cannot be blank.": "Katalogens ID får inte vara tomt.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Katalogens ID måste vara en kort sträng (64 tecken eller mindre), bestående av endast bokstäver, siffror, punkt (.), bindestreck (-) och understreck (_).",
"The folder ID must be unique.": "Katalogens ID måste vara unikt.",
"The folder path cannot be blank.": "Katalogsökvägen får inte vara tom.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "De följande intervallen används: varje 30 sekunder under den första timmen; varje timme under den första dagen; varje dag för de första 30 dagarna; varje vecka tills den maximala åldersgränsen uppnås.",
"The following items could not be synchronized.": "Följande filer kunde inte synkroniseras.",
"The following items could not be synchronized.": "Följande objekt kunde inte synkroniseras.",
"The maximum age must be a number and cannot be blank.": "Åldersgränsen måste vara ett tal och kan inte lämnas tomt.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Den längsta tiden att behålla en version (i dagar, sätt till 0 för att behålla versioner för evigt).",
"The minimum free disk space percentage must be a non-negative number between 0 and 100 (inclusive).": "Minimum ledigt diskutrymme i procent måste vara en icke negativ siffra mellan 0 och 100 (inklusive).",
"The number of days must be a number and cannot be blank.": "Antalet dagar måste vara en siffra och får inte vara tomt.",
"The number of days to keep files in the trash can. Zero means forever.": "Antal dagar som filer ligger kvar i papperskorgen. Noll betyder för alltid.",
"The number of days to keep files in the trash can. Zero means forever.": "Antalet dagar som filer ligger kvar i papperskorgen. Noll betyder för alltid.",
"The number of old versions to keep, per file.": "Antalet gamla versioner som ska behållas, per fil.",
"The number of versions must be a number and cannot be blank.": "Antalet versioner måste vara ett nummer och kan inte lämnas tomt.",
"The path cannot be blank.": "Ange en sökväg",
"The path cannot be blank.": "Sökvägen kan inte vara tom.",
"The rate limit must be a non-negative number (0: no limit)": "Frekvensgränsen måste vara ett icke-negativt tal (0: ingen gräns)",
"The rescan interval must be a non-negative number of seconds.": "Förnyelseintervallet måste vara ett positivt antal sekunder",
"They are retried automatically and will be synced when the error is resolved.": "De omprövas automatiskt och kommer att synkroniseras när felet är löst.",
"This Device": "Denna enhet",
"This can easily give hackers access to read and change any files on your computer.": "Detta kan lätt ge hackare tillgång till att läsa och ändra några filer på datorn.",
"This is a major version upgrade.": "Det här är en stor uppgradering.",
"Trash Can File Versioning": "Versionshantering på filer i papperskorgen",
"Unknown": "Okänt",
"Trash Can File Versioning": "Papperskorgs filversionshantering",
"Unknown": "Okänd",
"Unshared": "Inte delad",
"Unused": "Oanvänd",
"Up to Date": "Helt uppdaterad",
"Up to Date": "Uppdaterad",
"Updated": "Uppdaterad",
"Upgrade": "Uppgradering",
"Upgrade To {%version%}": "Uppgradera till {{version}}",
@@ -235,16 +235,16 @@
"Uptime": "Tid sedan start",
"Use HTTPS for GUI": "Använd HTTPS för GUI",
"Version": "Version",
"Versions Path": "Katalog för versioner",
"Versions Path": "Sökväg för versioner",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versioner tas bort automatiskt när de är äldre än den maximala åldersgränsen eller överstiger frekvensen i sitt interval.",
"Warning, this path is a subdirectory of an existing folder \"{%otherFolder%}\".": "Varning, denna sökväg är en underkatalog till en befintlig katalog \"{{otherFolder}}\".",
"When adding a new device, keep in mind that this device must be added on the other side too.": "När du lägger till en ny enhet, kom ihåg att den här enheten måste läggas till på den andra enheten också.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "När du lägger till ny katalog, tänk på att katalog-ID:t knyter ihop katalogen mellan olika noder. De måste vara exakt desamma mellan noder och stora eller små bokstäver har betydelse.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "När du lägger till ny katalog, tänk på att katalogens ID knyter ihop katalogen mellan olika noder. De måste vara exakt desamma mellan noder och stora eller små bokstäver har betydelse.",
"Yes": "Ja",
"You must keep at least one version.": "Du måste behålla åtminstone en version.",
"days": "dagar",
"full documentation": "fullständig dokumentation",
"items": "poster",
"items": "objekt",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vill dela katalogen \"{{folder}}\".",
"{%device%} wants to share folder \"{%folderLabel%}\" ({%folder%}).": "{{device}} vill dela katalogen \"{{folderLabel}}\" ({{folder}}).",
"{%device%} wants to share folder \"{%folderlabel%}\" ({%folder%}).": "{{device}} vill dela katalogen \"{{folderlabel}}\" ({{folder}})."

View File

@@ -41,7 +41,7 @@
"Delete": "删除",
"Deleted": "已删除",
"Device \"{%name%}\" ({%device%} at {%address%}) wants to connect. Add new device?": "设备 \"{{name}}\"(位于 {{address}} 的 {{device}})请求连接。是否添加新设备?",
"Device ID": "设备标识",
"Device ID": "设备 ID",
"Device Identification": "设备标识",
"Device Name": "设备名",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "设备:{{device}} 地址:({{address}}) 请求连接。是否添加新设备?",
@@ -71,7 +71,7 @@
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by Syncthing.": "当某个文件在其他设备被替换或删除时,本设备将会在 .stversions 文件夹中保留该文件的备份,并在文件名中加入时间戳信息。",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "在其它设备中对该文件夹内文件的修改并不会被同步到本机,但是在本机上对其的修改,则会被同步到其它设备中。",
"Folder": "文件夹",
"Folder ID": "文件夹标识",
"Folder ID": "文件夹 ID",
"Folder Label": "文件夹标签",
"Folder Master": "主文件夹",
"Folder Path": "文件夹路径",
@@ -127,7 +127,7 @@
"Options": "选项",
"Out of Sync": "未同步",
"Out of Sync Items": "未同步的项目",
"Outgoing Rate Limit (KiB/s)": "上传速度限制(千字节/秒)",
"Outgoing Rate Limit (KiB/s)": "上传速度限制 (KiB/s)",
"Override Changes": "撤销改变",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "文件夹在本地的路径。如果不存在,则会被创建。波浪线符号(~)是如下路径的缩略符:",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "用来存储历史版本的文件夹(留空则将默认会存储在.stversions文件夹中",
@@ -169,7 +169,7 @@
"Share this folder?": "是否共享该文件夹?",
"Shared With": "共享给",
"Short identifier for the folder. Must be the same on all cluster devices.": "文件夹的别名。必须在所有设备上保持一致。",
"Show ID": "显示设备标识",
"Show ID": "显示 ID",
"Show QR": "显示 QR 码",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "在设备丛中,显示该名称,而不是设备 ID。亦会作为一个可选的默认名称被发送到其他设备。",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "在设备丛中,将会显示本名称,而不是设备 ID。如果设置为空则会使用目标设备提供的默认名称。",
@@ -190,21 +190,21 @@
"Syncthing includes the following software or portions thereof:": "Syncthing 使用了下列软件或其中的一部分",
"Syncthing is restarting.": "Syncthing 正在重启",
"Syncthing is upgrading.": "Syncthing 正在升级",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing 似乎关闭了,或者您的网络连接存在故障。重试中...",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing 似乎关闭了,或者您的网络连接存在故障。重试中",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing 在处理您的请求时似乎遇到了问题。如果问题持续,请刷新页面,或重启 Syncthing。",
"The Syncthing admin interface is configured to allow remote access without a password.": "当前配置允许在不使用密码的情况下远程访问 Syncthing 管理界面",
"The aggregated statistics are publicly available at the URL below.": "全局统计数据公布于以下 URL。",
"The aggregated statistics are publicly available at {%url%}.": "全局统计公布于 {{url}}",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "设置已经保存但是还未生效。Syncthing 需要重启以启用新的设置。",
"The device ID cannot be blank.": "设备标识不能为空",
"The device ID cannot be blank.": "设备 ID 不能为空",
"The device ID to enter here can be found in the \"Actions > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "在这里所需要输入的设备 ID可以在目标设备的“操作->显示 ID”中看到。空格和横线可选将会被忽略。",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "在这里所需要输入的设备标识,可以在目标设备的“选项->显示设备标识”中看到。空格和横线可选(将会被忽略)。",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "在这里所需要输入的设备 ID,可以在目标设备的“选项->显示 ID”中看到。空格和横线可选(将会被忽略)。",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "经过加密的使用报告会每天发送。它用来跟踪统计使用本软件的平台,文件夹大小,以及本软件的版本。如果报告的内容有任何变化,本对话框会再次弹出提示您。",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "输入的设备标识似乎无效。设备标识长度必须为52或56的字母和数字,空格和横线不在内。",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "输入的设备 ID 似乎无效。设备 ID 包含字母和数字,长度为 52 或 56,空格和横线不在内。",
"The first command line parameter is the folder path and the second parameter is the relative path in the folder.": "命令行的第一个参数是文件夹的路径,第二个参数是文件夹内的相对路径。",
"The folder ID cannot be blank.": "文件夹标识不能为空。",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "文件夹标识不得长于 64 字符,且仅能包含字母、数字、半角句号(.)、横线(-和下划线_。",
"The folder ID must be unique.": "文件夹标识不得重复",
"The folder ID cannot be blank.": "文件夹 ID 不能为空。",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "文件夹 ID 不得长于 64 字符,且仅能包含字母、数字、半角句号(.)、横线(-和下划线_。",
"The folder ID must be unique.": "文件夹 ID 不得重复",
"The folder path cannot be blank.": "文件夹路径不能为空",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "保留的历史版本会遵循以下条件:最近一小时内的历史版本,更新间隔小于三十秒的仅保留一份。最近一天内的历史版本,更新间隔小于一小时的仅保留一份。最近一个月内的历史版本,更新间隔小于一天的仅保留一份。距离现在超过一个月且小于最长保留时间的,更新间隔小于一周的仅保留一份。",
"The following items could not be synchronized.": "下列项目无法被同步。",

View File

@@ -32,7 +32,7 @@
<nav class="navbar navbar-top navbar-default" role="navigation">
<div class="container">
<span class="navbar-brand">
<span class="navbar-brand" aria-hidden="true">
<img class="logo hidden-xs" src="assets/img/logo-horizontal.svg" height="32" width="117"/>
<img class="logo hidden visible-xs" src="assets/img/favicon-default.png" height="32"/>
</span>
@@ -58,7 +58,7 @@
</a>
</li>
<li class="dropdown action-menu">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" aria-expanded="false">
<span class="fa fa-cog"></span>
<span class="hidden-xs" translate>Actions</span>
<span class="caret"></span>
@@ -66,17 +66,17 @@
<ul class="dropdown-menu">
<li><a href="" ng-click="editSettings()"><span class="fa fa-fw fa-cog"></span>&nbsp;<span translate>Settings</span></a></li>
<li><a href="" data-toggle="modal" data-target="#idqr" ng-click="currentDevice=thisDevice()"><span class="fa fa-fw fa-qrcode"></span>&nbsp;<span translate>Show ID</span></a></li>
<li class="divider"></li>
<li class="divider" aria-hidden="true"></li>
<li><a href="" ng-click="shutdown()"><span class="fa fa-fw fa-power-off"></span>&nbsp;<span translate>Shutdown</span></a></li>
<li><a href="" ng-click="restart()"><span class="fa fa-fw fa-refresh"></span>&nbsp;<span translate>Restart</span></a></li>
<li class="divider"></li>
<li class="divider" aria-hidden="true"></li>
<li class="visible-xs">
<a href="https://docs.syncthing.net/intro/gui.html" target="_blank">
<span class="fa fa-fw fa-book"></span>&nbsp;<span translate>Help</span>
</a>
</li>
<li><a href="" data-toggle="modal" data-target="#about"><span class="fa fa-fw fa-heart-o"></span>&nbsp;<span translate>About</span></a></li>
<li class="divider"></li>
<li class="divider" aria-hidden="true"></li>
<li><a href="" ng-click="advanced()"><span class="fa fa-fw fa-cogs"></span>&nbsp;<span translate>Advanced</span></a></li>
</ul>
</li>
@@ -248,17 +248,19 @@
</div>
</div>
<div ng-if="config && config.options && config.options.unackedNotificationIDs" ng-include="'syncthing/core/notifications.html'"></div>
<!-- First regular row -->
<div class="row">
<!-- Folder list (top left) -->
<div class="col-md-6">
<h3 translate>Folders</h3>
<div class="col-md-6" aria-labelledby="folder_list" role="region" >
<h3 id="folder_list" translate>Folders</h3>
<div class="panel-group" id="folders">
<div class="panel panel-default" ng-repeat="folder in folderList()">
<button class="btn panel-heading" data-toggle="collapse" data-parent="#folders" data-target="#folder-{{$index}}">
<button class="btn panel-heading" data-toggle="collapse" data-parent="#folders" data-target="#folder-{{$index}}" aria-expanded="false">
<div class="panel-progress" ng-show="folderStatus(folder) == 'syncing'" ng-attr-style="width: {{syncPercentage(folder.id)}}%"></div>
<div class="panel-progress" ng-show="folderStatus(folder) == 'scanning' && scanProgress[folder.id] != undefined" ng-attr-style="width: {{scanPercentage(folder.id)}}%"></div>
<h4 class="panel-title">
@@ -437,10 +439,10 @@
<!-- This device -->
<div class="col-md-6">
<div class="col-md-6" aria-label="{{'Devices' | translate}}" role="region">
<h3 translate>This Device</h3>
<div class="panel panel-default" ng-repeat="deviceCfg in [thisDevice()]">
<button class="btn panel-heading" data-toggle="collapse" data-target="#device-this">
<button class="btn panel-heading" data-toggle="collapse" data-target="#device-this" aria-expanded="true">
<h4 class="panel-title">
<identicon class="panel-icon" data-value="deviceCfg.deviceID"></identicon>
<div class="panel-title-text">{{deviceName(deviceCfg)}}</div>
@@ -516,7 +518,7 @@
<h3 translate>Remote Devices</h3>
<div class="panel-group" id="devices">
<div class="panel panel-default" ng-repeat="deviceCfg in otherDevices()">
<button class="btn panel-heading" data-toggle="collapse" data-parent="#devices" data-target="#device-{{$index}}">
<button class="btn panel-heading" data-toggle="collapse" data-parent="#devices" data-target="#device-{{$index}}" aria-expanded="false">
<div class="panel-progress" ng-show="deviceStatus(deviceCfg) == 'syncing'" ng-attr-style="width: {{completion[deviceCfg.deviceID]._total | number:0}}%"></div>
<h4 class="panel-title">
<identicon class="panel-icon" data-value="deviceCfg.deviceID"></identicon>
@@ -672,6 +674,7 @@
<script src="syncthing/core/localeService.js"></script>
<script src="syncthing/core/modalDirective.js"></script>
<script src="syncthing/core/naturalFilter.js"></script>
<script src="syncthing/core/notificationDirective.js"></script>
<script src="syncthing/core/pathIsSubDirDirective.js"></script>
<script src="syncthing/core/popoverDirective.js"></script>
<script src="syncthing/core/selectOnClickDirective.js"></script>

View File

@@ -12,7 +12,7 @@
<p translate>Copyright &copy; 2014-2016 the following Contributors:</p>
<div class="row">
<div class="col-md-12" id="contributor-list">
Jakob Borg, Audrius Butkevicius, Alexander Graf, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Aaron Bieber, Adam Piggott, Alessandro G., Alexandre Viau, Andrew Dunham, Andrey D, Arthur Axel fREW Schmidt, Bart De Vries, Ben Curthoys, Ben Sidhom, Benny Ng, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Colin Kennedy, Daniel Bergmann, Daniel Martí, David Rimmer, Denis A., Dennis Wilson, Dominik Heidler, Elias Jarlebring, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jens Diemer, Jochen Voss, Johan Vromans, Karol Różycki, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Laurent Etiemble, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mateusz Naściszewski, Matt Burke, Max Schulze, Michael Jephcote, Michael Tilli, Nate Morrison, Pascal Jungblut, Peter Hoeg, Phill Luby, Piotr Bejda, Scott Klupfel, Stefan Kuntz, Tim Abell, Tobias Nygren, Tomas Cerveny, Tully Robinson, Tyler Brazier, Veeti Paananen, Victor Buinsky, Vil Brekin, William A. Kennington III, Wulf Weich, Yannic A.
Jakob Borg, Audrius Butkevicius, Alexander Graf, Anderson Mesquita, Antony Male, Ben Schulz, Caleb Callaway, Daniel Harte, Lars K.W. Gohlke, Lode Hoste, Michael Ploujnikov, Philippe Schommers, Ryan Sullivan, Sergey Mishin, Stefan Tatschner, Aaron Bieber, Adam Piggott, Alessandro G., Alexandre Viau, Andrew Dunham, Andrey D, Antoine Lamielle, Arthur Axel fREW Schmidt, Bart De Vries, Ben Curthoys, Ben Sidhom, Benny Ng, Brandon Philips, Brendan Long, Brian R. Becker, Carsten Hagemann, Cathryne Linenweaver, Cedric Staniewski, Chris Howie, Chris Joel, Colin Kennedy, Daniel Bergmann, Daniel Martí, David Rimmer, Denis A., Dennis Wilson, Dominik Heidler, Elias Jarlebring, Emil Hessman, Erik Meitner, Federico Castagnini, Felix Ableitner, Felix Unterpaintner, Francois-Xavier Gsell, Frank Isemann, Gilli Sigurdsson, Jaakko Hannikainen, Jacek Szafarkiewicz, Jake Peterson, James Patterson, Jaroslav Malec, Jens Diemer, Jochen Voss, Johan Vromans, Karol Różycki, Kelong Cong, Ken'ichi Kamada, Kevin Allen, Laurent Etiemble, Lord Landon Agahnim, Majed Abdulaziz, Marc Laporte, Marc Pujol, Marcin Dziadus, Mateusz Naściszewski, Matt Burke, Max Schulze, Michael Jephcote, Michael Tilli, Nate Morrison, Pascal Jungblut, Peter Hoeg, Phill Luby, Piotr Bejda, Scott Klupfel, Stefan Kuntz, Tim Abell, Tobias Nygren, Tomas Cerveny, Tully Robinson, Tyler Brazier, Veeti Paananen, Victor Buinsky, Vil Brekin, William A. Kennington III, Wulf Weich, Yannic A.
</div>
</div>
<hr/>

View File

@@ -4,7 +4,7 @@ angular.module('syncthing.core')
return {
restrict: 'EA',
template:
'<a ng-if="visible" href="#" class="dropdown-toggle" data-toggle="dropdown" aria-expanded="true"><span class="fa fa-globe"></span><span class="hidden-xs">&nbsp;{{localesNames[currentLocale] || "English"}}</span> <span class="caret"></span></a>'+
'<a ng-if="visible" href="#" class="dropdown-toggle" data-toggle="dropdown" aria-expanded="false"><span class="fa fa-globe"></span><span class="hidden-xs">&nbsp;{{localesNames[currentLocale] || "English"}}</span> <span class="caret"></span></a>'+
'<ul ng-if="visible" class="dropdown-menu">'+
'<li ng-repeat="(i,name) in localesNames" ng-class="{active: i==currentLocale}">'+
'<a href="#" data-ng-click="changeLanguage(i)">{{name}}</a>'+

View File

@@ -0,0 +1,21 @@
angular.module('syncthing.core')
.directive('notification', function () {
return {
restrict: 'E',
scope: true,
transclude: true,
template: '<div class="row" ng-if="visible()"><div class="col-md-12" ng-transclude></div></div>',
link: function (scope, elm, attrs) {
scope.visible = function () {
return scope.config.options.unackedNotificationIDs.indexOf(attrs.id) > -1;
}
scope.dismiss = function () {
var idx = scope.config.options.unackedNotificationIDs.indexOf(attrs.id);
if (idx > -1) {
scope.config.options.unackedNotificationIDs.splice(idx, 1);
scope.saveConfig();
}
}
}
};
});

View File

@@ -0,0 +1,16 @@
<!--
<notification id='exampleNotification'>
<div class="panel panel-warning">
<div class="panel-heading"><h3 class="panel-title"><span class="fa fa-exclamation-circle"></span><span translate>Notice</span></h3></div>
<div class="panel-body">
<p translate>This is an example notification. ID of the notification should be appended to Options.UnackedNotificationIDs of the config.</p>
</div>
<div class="panel-footer">
<button type="button" class="btn btn-sm btn-default pull-right" ng-click="dismiss()">
<span class="fa fa-check"></span>&nbsp;<span translate>OK</span>
</button>
<div class="clearfix"></div>
</div>
</div>
</notification>
-->

View File

@@ -312,15 +312,25 @@ angular.module('syncthing.core')
$scope.completion[data.device][data.folder] = data.completion;
var tot = 0,
cnt = 0;
cnt = 0,
isComplete = true;
for (var cmp in $scope.completion[data.device]) {
if (cmp === "_total") {
continue;
}
tot += $scope.completion[data.device][cmp];
cnt += 1;
tot += $scope.completion[data.device][cmp] * $scope.model[cmp].globalBytes;
cnt += $scope.model[cmp].globalBytes;
if ($scope.completion[data.device][cmp] != 100) {
isComplete = false;
}
}
//To be sure that we won't get any rounding errors resulting in non-100% status when it should be
if (isComplete) {
$scope.completion[data.device]._total = 100;
}
else {
$scope.completion[data.device]._total = tot / cnt;
}
$scope.completion[data.device]._total = tot / cnt;
});
$scope.$on(Events.FOLDER_ERRORS, function (event, arg) {
@@ -460,15 +470,25 @@ angular.module('syncthing.core')
$scope.completion[device][folder] = data.completion;
var tot = 0,
cnt = 0;
cnt = 0,
isComplete = true;
for (var cmp in $scope.completion[device]) {
if (cmp === "_total") {
continue;
}
tot += $scope.completion[device][cmp];
cnt += 1;
tot += $scope.completion[device][cmp] * $scope.model[cmp].globalBytes;
cnt += $scope.model[cmp].globalBytes;
if ($scope.completion[device][cmp] != 100) {
isComplete = false;
}
}
//To be sure that we won't get any rounding errors resulting in non-100% status when it should be
if (isComplete) {
$scope.completion[device]._total = 100;
}
else {
$scope.completion[device]._total = tot / cnt;
}
$scope.completion[device]._total = tot / cnt;
console.log("refreshCompletion", device, folder, $scope.completion[device]);
}).error($scope.emitHTTPError);
@@ -1453,7 +1473,7 @@ angular.module('syncthing.core')
}
}
folders.sort();
folders.sort(folderCompare);
return folders;
};
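A minimal Go sketch of the weighting applied in the completion hunks above may help: instead of averaging folder percentages, each folder's completion is weighted by its global size, so one large out-of-sync folder dominates the total. The folder names and byte counts below are invented for illustration; only the arithmetic mirrors the controller code.

package main

import "fmt"

func main() {
	// completion % per folder and each folder's global size in bytes (made-up values)
	completion := map[string]float64{"docs": 100, "media": 50}
	globalBytes := map[string]float64{"docs": 1e6, "media": 1e9}

	var tot, cnt float64
	for folder, pct := range completion {
		tot += pct * globalBytes[folder] // weight each percentage by folder size
		cnt += globalBytes[folder]
	}
	// A plain average would report 75%; weighting by size gives about 50.05%,
	// which better reflects how much data actually remains to sync.
	fmt.Printf("weighted completion: %.2f%%\n", tot/cnt)
}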

View File

@@ -181,14 +181,14 @@
<button type="button" class="btn btn-primary btn-sm" ng-click="saveFolder()" ng-disabled="folderEditor.$invalid">
<span class="fa fa-check"></span>&nbsp;<span translate>Save</span>
</button>
<button type="button" class="btn btn-default btn-sm" id="editIgnoresButton" ng-click="editIgnores()" ng-if="editingExisting">
<span class="fa fa-eye-slash"></span>&nbsp;<span translate>Ignore Patterns</span>
</button>
<button type="button" class="btn btn-default btn-sm" data-dismiss="modal">
<span class="fa fa-times"></span>&nbsp;<span translate>Close</span>
</button>
<button type="button" class="btn btn-warning pull-left btn-sm" ng-click="deleteFolder(currentFolder.id)" ng-if="editingExisting">
<span class="fa fa-minus-circle"></span>&nbsp;<span translate>Remove</span>
</button>
<button type="button" class="btn btn-default pull-left btn-sm" id="editIgnoresButton" ng-click="editIgnores()" ng-if="editingExisting">
<span class="fa fa-eye-slash"></span>&nbsp;<span translate>Ignore Patterns</span>
</button>
</div>
</modal>

View File

@@ -9,8 +9,8 @@
<div class="panel-group" id="advancedAccordion" role="tablist" aria-multiselectable="true">
<div class="panel panel-default">
<div class="panel-heading" role="tab" id="guiHeading" data-toggle="collapse" data-parent="#advancedAccordion" href="#guiConfig" aria-expanded="true" aria-controls="guiConfig" style="cursor: pointer">
<h4 class="panel-title" translate>GUI</h4>
<div class="panel-heading" role="tab" id="guiHeading" data-toggle="collapse" data-parent="#advancedAccordion" href="#guiConfig" aria-expanded="true" aria-controls="guiConfig" style="cursor: pointer;">
<h4 class="panel-title" translate tabindex="0">GUI</h4>
</div>
<div id="guiConfig" class="panel-collapse collapse in" role="tabpanel" aria-labelledby="guiHeading">
<div class="panel-body">
@@ -27,8 +27,8 @@
</div>
<div class="panel panel-default">
<div class="panel-heading" role="tab" id="optionsHeading" data-toggle="collapse" data-parent="#advancedAccordion" href="#optionsConfig" aria-expanded="true" aria-controls="optionsConfig" style="cursor: pointer">
<h4 class="panel-title" translate>Options</h4>
<div class="panel-heading" role="tab" id="optionsHeading" data-toggle="collapse" data-parent="#advancedAccordion" href="#optionsConfig" aria-expanded="false" aria-controls="optionsConfig" style="cursor: pointer;">
<h4 class="panel-title" tabindex="0" translate>Options</h4>
</div>
<div id="optionsConfig" class="panel-collapse collapse" role="tabpanel" aria-labelledby="optionsHeading">
<div class="panel-body">
@@ -45,11 +45,11 @@
</div>
<div class="panel panel-default" ng-repeat="folder in advancedConfig.folders">
<div class="panel-heading" role="tab" id="folder{{$index}}Heading" data-toggle="collapse" data-parent="#advancedAccordion" href="#folder{{$index}}Config" aria-expanded="true" aria-controls="folder{{$index}}Config" style="cursor: pointer">
<h4 ng-if="folder.label.length == 0" class="panel-title">
<div class="panel-heading" role="tab" id="folder{{$index}}Heading" data-toggle="collapse" data-parent="#advancedAccordion" href="#folder{{$index}}Config" aria-expanded="false" aria-controls="folder{{$index}}Config" style="cursor: pointer;">
<h4 ng-if="folder.label.length == 0" class="panel-title" tabindex="0">
<span translate>Folder</span> "{{folder.id}}"
</h4>
<h4 ng-if="folder.label.length != 0" class="panel-title">
<h4 ng-if="folder.label.length != 0" class="panel-title" tabindex="0">
<span translate>Folder</span> "{{folder.label}}" ({{folder.id}})
</h4>
</div>

View File

@@ -3,9 +3,8 @@
<p translate>
The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.
</p>
<p translate translate-value-url="<a href=&quot;https://data.syncthing.net&quot; target=&quot;_blank&quot;>https://data.syncthing.net</a>">
The aggregated statistics are publicly available at {%url%}.
</p>
<p translate>The aggregated statistics are publicly available at the URL below.</p>
<p><a href="https://data.syncthing.net/" target="_blank">https://data.syncthing.net/</a></p>
<form>
<textarea class="form-control" rows="20">{{reportData | json}}</textarea>
</form>

View File

@@ -11,12 +11,18 @@ import (
"testing"
)
type requiresRestart struct{}
type requiresRestart struct {
committed chan struct{}
}
func (requiresRestart) VerifyConfiguration(_, _ Configuration) error {
return nil
}
func (requiresRestart) CommitConfiguration(_, _ Configuration) bool {
func (c requiresRestart) CommitConfiguration(_, _ Configuration) bool {
select {
case c.committed <- struct{}{}:
default:
}
return false
}
func (requiresRestart) String() string {
@@ -28,7 +34,7 @@ type validationError struct{}
func (validationError) VerifyConfiguration(_, _ Configuration) error {
return errors.New("some error")
}
func (validationError) CommitConfiguration(_, _ Configuration) bool {
func (c validationError) CommitConfiguration(_, _ Configuration) bool {
return true
}
func (validationError) String() string {
@@ -44,30 +50,33 @@ func TestReplaceCommit(t *testing.T) {
// Replace config. We should get back a clean response and the config
// should change.
resp := w.Replace(Configuration{Version: 1})
if resp.ValidationError != nil {
t.Fatal("Should not have a validation error")
err := w.Replace(Configuration{Version: 1})
if err != nil {
t.Fatal("Should not have a validation error:", err)
}
if resp.RequiresRestart {
if w.RequiresRestart() {
t.Fatal("Should not require restart")
}
if w.Raw().Version != 1 {
if w.Raw().Version != CurrentVersion {
t.Fatal("Config should have changed")
}
// Now with a subscriber requiring restart. We should get a clean response
// but with the restart flag set, and the config should change.
w.Subscribe(requiresRestart{})
sub0 := requiresRestart{committed: make(chan struct{}, 1)}
w.Subscribe(sub0)
resp = w.Replace(Configuration{Version: 2})
if resp.ValidationError != nil {
t.Fatal("Should not have a validation error")
err = w.Replace(Configuration{Version: 2})
if err != nil {
t.Fatal("Should not have a validation error:", err)
}
if !resp.RequiresRestart {
<-sub0.committed
if !w.RequiresRestart() {
t.Fatal("Should require restart")
}
if w.Raw().Version != 2 {
if w.Raw().Version != CurrentVersion {
t.Fatal("Config should have changed")
}
@@ -76,14 +85,14 @@ func TestReplaceCommit(t *testing.T) {
w.Subscribe(validationError{})
resp = w.Replace(Configuration{Version: 3})
if resp.ValidationError == nil {
err = w.Replace(Configuration{Version: 3})
if err == nil {
t.Fatal("Should have a validation error")
}
if resp.RequiresRestart {
t.Fatal("Should not require restart")
if !w.RequiresRestart() {
t.Fatal("Should still require restart")
}
if w.Raw().Version != 2 {
if w.Raw().Version != CurrentVersion {
t.Fatal("Config should not have changed")
}
}
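A hedged sketch of the synchronization idea behind the reworked test above: the subscriber signals on a buffered channel with a non-blocking send, so it can never stall the now-asynchronous commit path, and the test waits on that signal before asserting the restart flag. The types and names here are illustrative stand-ins, not the actual lib/config API.

package main

import "fmt"

type subscriber struct {
	committed chan struct{}
}

func (s subscriber) commit() bool {
	select {
	case s.committed <- struct{}{}: // signal if anyone is listening
	default: // never block if the buffer is full or nobody waits
	}
	return false // pretend this subscriber requires a restart
}

func main() {
	sub := subscriber{committed: make(chan struct{}, 1)}
	go sub.commit() // commits run concurrently, like the notify goroutines
	<-sub.committed // the test blocks here until the commit has happened
	fmt.Println("commit observed; safe to assert RequiresRestart()")
}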

View File

@@ -26,7 +26,7 @@ import (
const (
OldestHandledVersion = 10
CurrentVersion = 15
CurrentVersion = 16
MaxRescanIntervalS = 365 * 24 * 60 * 60
)
@@ -70,7 +70,10 @@ func New(myID protocol.DeviceID) Configuration {
util.SetDefaults(&cfg.Options)
util.SetDefaults(&cfg.GUI)
cfg.prepare(myID)
// Can't happen.
if err := cfg.prepare(myID); err != nil {
panic("bug: error in preparing new folder: " + err.Error())
}
return cfg
}
@@ -105,7 +108,9 @@ func ReadJSON(r io.Reader, myID protocol.DeviceID) (Configuration, error) {
return Configuration{}, err
}
err = json.Unmarshal(bs, &cfg)
if err := json.Unmarshal(bs, &cfg); err != nil {
return Configuration{}, err
}
cfg.OriginalVersion = cfg.Version
if err := cfg.prepare(myID); err != nil {
@@ -162,6 +167,36 @@ func (cfg *Configuration) WriteXML(w io.Writer) error {
}
func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
var myName string
// Ensure this device is present in the config
for _, device := range cfg.Devices {
if device.DeviceID == myID {
goto found
}
}
myName, _ = os.Hostname()
cfg.Devices = append(cfg.Devices, DeviceConfiguration{
DeviceID: myID,
Name: myName,
})
found:
if err := cfg.clean(); err != nil {
return err
}
// Ensure that we are part of the devices
for i := range cfg.Folders {
cfg.Folders[i].Devices = ensureDevicePresent(cfg.Folders[i].Devices, myID)
}
return nil
}
func (cfg *Configuration) clean() error {
util.FillNilSlices(&cfg.Options)
// Initialize any empty slices
@@ -174,6 +209,9 @@ func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
if cfg.Options.AlwaysLocalNets == nil {
cfg.Options.AlwaysLocalNets = []string{}
}
if cfg.Options.UnackedNotificationIDs == nil {
cfg.Options.UnackedNotificationIDs = []string{}
}
// Prepare folders and check for duplicates. Duplicates are bad and
// dangerous, can't currently be resolved in the GUI, and shouldn't
@@ -213,6 +251,9 @@ func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
if cfg.Version == 14 {
convertV14V15(cfg)
}
if cfg.Version == 15 {
convertV15V16(cfg)
}
// Build a list of available devices
existingDevices := make(map[protocol.DeviceID]bool)
@@ -220,26 +261,14 @@ func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
existingDevices[device.DeviceID] = true
}
// Ensure this device is present in the config
if !existingDevices[myID] {
myName, _ := os.Hostname()
cfg.Devices = append(cfg.Devices, DeviceConfiguration{
DeviceID: myID,
Name: myName,
})
existingDevices[myID] = true
}
// Ensure that the device list is free from duplicates
cfg.Devices = ensureNoDuplicateDevices(cfg.Devices)
sort.Sort(DeviceConfigurationList(cfg.Devices))
// Ensure that any loose devices are not present in the wrong places
// Ensure that there are no duplicate devices
// Ensure that puller settings are sane
// Ensure that the versioning configuration parameter map is not nil
for i := range cfg.Folders {
cfg.Folders[i].Devices = ensureDevicePresent(cfg.Folders[i].Devices, myID)
cfg.Folders[i].Devices = ensureExistingDevices(cfg.Folders[i].Devices, existingDevices)
cfg.Folders[i].Devices = ensureNoDuplicateFolderDevices(cfg.Folders[i].Devices)
if cfg.Folders[i].Versioning.Params == nil {
@@ -265,6 +294,16 @@ func (cfg *Configuration) prepare(myID protocol.DeviceID) error {
cfg.GUI.APIKey = rand.String(32)
}
// The list of ignored devices should not contain any devices that have
// been manually added to the config.
newIgnoredDevices := []protocol.DeviceID{}
for _, dev := range cfg.IgnoredDevices {
if !existingDevices[dev] {
newIgnoredDevices = append(newIgnoredDevices, dev)
}
}
cfg.IgnoredDevices = newIgnoredDevices
return nil
}
@@ -283,6 +322,11 @@ func convertV14V15(cfg *Configuration) {
cfg.Version = 15
}
func convertV15V16(cfg *Configuration) {
// Triggers a database tweak
cfg.Version = 16
}
func convertV13V14(cfg *Configuration) {
// Not using the ignore cache is the new default. Disable it on existing
// configurations.
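The migration logic in prepare() chains one-step converters, so a config loaded at any older version walks through every bump until it reaches the current one. A simplified sketch of that pattern, with a stand-in struct and converters rather than the real ones:

package main

import "fmt"

const currentVersion = 16

type config struct{ Version int }

func convertV14V15(c *config) { c.Version = 15 /* plus the real v15 changes */ }
func convertV15V16(c *config) { c.Version = 16 /* triggers a database tweak */ }

func prepare(c *config) {
	// Each converter handles exactly one version step.
	if c.Version == 14 {
		convertV14V15(c)
	}
	if c.Version == 15 {
		convertV15V16(c)
	}
}

func main() {
	c := &config{Version: 14}
	prepare(c)
	fmt.Println("migrated to version", c.Version, "of", currentVersion)
}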

View File

@@ -549,6 +549,9 @@ func TestPullOrder(t *testing.T) {
t.Logf("%s", buf.Bytes())
cfg, err = ReadXML(buf, device1)
if err != nil {
t.Fatal(err)
}
wrapper = Wrap("testdata/pullorder.xml", cfg)
folders = wrapper.Folders()
@@ -694,3 +697,73 @@ func TestV14ListenAddressesMigration(t *testing.T) {
}
}
}
func TestIgnoredDevices(t *testing.T) {
// Verify that ignored devices that are also present in the
// configuration are not in fact ignored.
wrapper, err := Load("testdata/ignoreddevices.xml", device1)
if err != nil {
t.Fatal(err)
}
if wrapper.IgnoredDevice(device1) {
t.Errorf("Device %v should not be ignored", device1)
}
if !wrapper.IgnoredDevice(device3) {
t.Errorf("Device %v should be ignored", device3)
}
}
func TestGetDevice(t *testing.T) {
// Verify that the Device() call does the right thing
wrapper, err := Load("testdata/ignoreddevices.xml", device1)
if err != nil {
t.Fatal(err)
}
// device1 is mentioned in the config
device, ok := wrapper.Device(device1)
if !ok {
t.Error(device1, "should exist")
}
if device.DeviceID != device1 {
t.Error("Should have returned", device1, "not", device.DeviceID)
}
// device3 is not
device, ok = wrapper.Device(device3)
if ok {
t.Error(device3, "should not exist")
}
if device.DeviceID == device3 {
t.Error("Should not returned ID", device3)
}
}
func TestSharesRemovedOnDeviceRemoval(t *testing.T) {
wrapper, err := Load("testdata/example.xml", device1)
if err != nil {
t.Errorf("Failed: %s", err)
}
raw := wrapper.Raw()
raw.Devices = raw.Devices[:len(raw.Devices)-1]
if len(raw.Folders[0].Devices) <= len(raw.Devices) {
t.Error("Should have less devices")
}
err = wrapper.Replace(raw)
if err != nil {
t.Errorf("Failed: %s", err)
}
raw = wrapper.Raw()
if len(raw.Folders[0].Devices) > len(raw.Devices) {
t.Error("Unexpected extra device")
}
}
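A short sketch of what TestSharesRemovedOnDeviceRemoval exercises: when a device disappears from the top-level device list, the cleanup pass also drops it from every folder's share list (the ensureExistingDevices step in prepare()). The types and helper below are simplified stand-ins, not the real configuration structs.

package main

import "fmt"

type folder struct{ Devices []string }
type config struct {
	Devices []string
	Folders []folder
}

// keep only folder devices that still exist in the top-level device list
func ensureExistingDevices(folderDevs []string, existing map[string]bool) []string {
	out := folderDevs[:0]
	for _, d := range folderDevs {
		if existing[d] {
			out = append(out, d)
		}
	}
	return out
}

func clean(c *config) {
	existing := map[string]bool{}
	for _, d := range c.Devices {
		existing[d] = true
	}
	for i := range c.Folders {
		c.Folders[i].Devices = ensureExistingDevices(c.Folders[i].Devices, existing)
	}
}

func main() {
	c := config{
		Devices: []string{"device1"}, // device2 was just removed
		Folders: []folder{{Devices: []string{"device1", "device2"}}},
	}
	clean(&c)
	fmt.Println(c.Folders[0].Devices) // [device1]
}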

View File

@@ -21,6 +21,7 @@ type GUIConfiguration struct {
APIKey string `xml:"apikey,omitempty" json:"apiKey"`
InsecureAdminAccess bool `xml:"insecureAdminAccess,omitempty" json:"insecureAdminAccess"`
Theme string `xml:"theme" json:"theme" default:"default"`
Debugging bool `xml:"debugging,attr" json:"debugging"`
}
func (c GUIConfiguration) Address() string {

View File

@@ -40,6 +40,7 @@ type OptionsConfiguration struct {
AlwaysLocalNets []string `xml:"alwaysLocalNet" json:"alwaysLocalNets"`
OverwriteRemoteDevNames bool `xml:"overwriteRemoteDeviceNamesOnConnect" json:"overwriteRemoteDeviceNamesOnConnect" default:"false"`
TempIndexMinBlocks int `xml:"tempIndexMinBlocks" json:"tempIndexMinBlocks" default:"10"`
UnackedNotificationIDs []string `xml:"unackedNotificationID" json:"unackedNotificationIDs"`
DeprecatedUPnPEnabled bool `xml:"upnpEnabled,omitempty" json:"-"`
DeprecatedUPnPLeaseM int `xml:"upnpLeaseMinutes,omitempty" json:"-"`
@@ -56,5 +57,7 @@ func (orig OptionsConfiguration) Copy() OptionsConfiguration {
copy(c.GlobalAnnServers, orig.GlobalAnnServers)
c.AlwaysLocalNets = make([]string, len(orig.AlwaysLocalNets))
copy(c.AlwaysLocalNets, orig.AlwaysLocalNets)
c.UnackedNotificationIDs = make([]string, len(orig.UnackedNotificationIDs))
copy(c.UnackedNotificationIDs, orig.UnackedNotificationIDs)
return c
}
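The Copy() change above allocates a fresh slice for the new UnackedNotificationIDs field for the same reason the other slice fields are copied: a plain struct copy in Go shares the slice's backing array, so mutating the "copy" would mutate the original. A small illustrative sketch (simplified types, reusing the exampleNotification placeholder ID from the template earlier):

package main

import "fmt"

type options struct {
	UnackedNotificationIDs []string
}

func (o options) Copy() options {
	c := o // copies the struct, but the slice header still aliases o's array
	c.UnackedNotificationIDs = make([]string, len(o.UnackedNotificationIDs))
	copy(c.UnackedNotificationIDs, o.UnackedNotificationIDs)
	return c
}

func main() {
	orig := options{UnackedNotificationIDs: []string{"exampleNotification"}}
	cp := orig.Copy()
	cp.UnackedNotificationIDs[0] = "changed"
	fmt.Println(orig.UnackedNotificationIDs[0]) // still "exampleNotification"
}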

lib/config/testdata/ignoreddevices.xml (vendored, new file, 10 lines)
View File

@@ -0,0 +1,10 @@
<configuration version="15">
<device id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR">
<address>dynamic</address>
</device>
<device id="GYRZZQB-IRNPV4Z-T7TC52W-EQYJ3TT-FDQW6MW-DFLMU42-SSSU6EM-FBK2VAY">
<address>dynamic</address>
</device>
<ignoredDevice>AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR</ignoredDevice>
<ignoredDevice>LGFPDIT-7SKNNJL-VJZA4FC-7QNCRKA-CE753K7-2BW5QDK-2FOZ7FR-FEP57QJ</ignoredDevice>
</configuration>

lib/config/testdata/v16.xml (vendored, new file, 14 lines)
View File

@@ -0,0 +1,14 @@
<configuration version="16">
<folder id="test" path="testdata" type="readonly" ignorePerms="false" rescanIntervalS="600" autoNormalize="true">
<device id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR"></device>
<device id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2"></device>
<minDiskFreePct>1</minDiskFreePct>
<maxConflicts>-1</maxConflicts>
</folder>
<device id="AIR6LPZ-7K4PTTV-UXQSMUU-CPQ5YWH-OEDFIIQ-JUG777G-2YQXXR5-YD6AWQR" name="node one" compression="metadata">
<address>tcp://a</address>
</device>
<device id="P56IOI7-MZJNU2Y-IQGDREY-DM2MGTI-MGL3BXN-PQ6W5BM-TBBZ4TJ-XZWICQ2" name="node two" compression="metadata">
<address>tcp://b</address>
</device>
</configuration>

View File

@@ -8,6 +8,7 @@ package config
import (
"os"
"sync/atomic"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/osutil"
@@ -41,11 +42,6 @@ type Committer interface {
String() string
}
type CommitResponse struct {
ValidationError error
RequiresRestart bool
}
// A wrapper around a Configuration that manages loads, saves and published
// notifications of changes to registered Handlers
@@ -58,6 +54,8 @@ type Wrapper struct {
replaces chan Configuration
subs []Committer
mut sync.Mutex
requiresRestart uint32 // an atomic bool
}
// Wrap wraps an existing Configuration structure and ties it to a file on
@@ -128,32 +126,25 @@ func (w *Wrapper) Raw() Configuration {
}
// Replace swaps the current configuration object for the given one.
func (w *Wrapper) Replace(cfg Configuration) CommitResponse {
func (w *Wrapper) Replace(cfg Configuration) error {
w.mut.Lock()
defer w.mut.Unlock()
return w.replaceLocked(cfg)
}
func (w *Wrapper) replaceLocked(to Configuration) CommitResponse {
func (w *Wrapper) replaceLocked(to Configuration) error {
from := w.cfg
if err := to.clean(); err != nil {
return err
}
for _, sub := range w.subs {
l.Debugln(sub, "verifying configuration")
if err := sub.VerifyConfiguration(from, to); err != nil {
l.Debugln(sub, "rejected config:", err)
return CommitResponse{
ValidationError: err,
}
}
}
allOk := true
for _, sub := range w.subs {
l.Debugln(sub, "committing configuration")
ok := sub.CommitConfiguration(from, to)
if !ok {
l.Debugln(sub, "requires restart")
allOk = false
return err
}
}
@@ -161,8 +152,22 @@ func (w *Wrapper) replaceLocked(to Configuration) CommitResponse {
w.deviceMap = nil
w.folderMap = nil
return CommitResponse{
RequiresRestart: !allOk,
w.notifyListeners(from, to)
return nil
}
func (w *Wrapper) notifyListeners(from, to Configuration) {
for _, sub := range w.subs {
go w.notifyListener(sub, from, to)
}
}
func (w *Wrapper) notifyListener(sub Committer, from, to Configuration) {
l.Debugln(sub, "committing configuration")
if !sub.CommitConfiguration(from, to) {
l.Debugln(sub, "requires restart")
w.setRequiresRestart()
}
}
@@ -182,7 +187,7 @@ func (w *Wrapper) Devices() map[protocol.DeviceID]DeviceConfiguration {
// SetDevice adds a new device to the configuration, or overwrites an existing
// device with the same ID.
func (w *Wrapper) SetDevice(dev DeviceConfiguration) CommitResponse {
func (w *Wrapper) SetDevice(dev DeviceConfiguration) error {
w.mut.Lock()
defer w.mut.Unlock()
@@ -218,7 +223,7 @@ func (w *Wrapper) Folders() map[string]FolderConfiguration {
// SetFolder adds a new folder to the configuration, or overwrites an existing
// folder with the same ID.
func (w *Wrapper) SetFolder(fld FolderConfiguration) CommitResponse {
func (w *Wrapper) SetFolder(fld FolderConfiguration) error {
w.mut.Lock()
defer w.mut.Unlock()
@@ -246,7 +251,7 @@ func (w *Wrapper) Options() OptionsConfiguration {
}
// SetOptions replaces the current options configuration object.
func (w *Wrapper) SetOptions(opts OptionsConfiguration) CommitResponse {
func (w *Wrapper) SetOptions(opts OptionsConfiguration) error {
w.mut.Lock()
defer w.mut.Unlock()
newCfg := w.cfg.Copy()
@@ -262,7 +267,7 @@ func (w *Wrapper) GUI() GUIConfiguration {
}
// SetGUI replaces the current GUI configuration object.
func (w *Wrapper) SetGUI(gui GUIConfiguration) CommitResponse {
func (w *Wrapper) SetGUI(gui GUIConfiguration) error {
w.mut.Lock()
defer w.mut.Unlock()
newCfg := w.cfg.Copy()
@@ -283,6 +288,18 @@ func (w *Wrapper) IgnoredDevice(id protocol.DeviceID) bool {
return false
}
// Device returns the configuration for the given device and an "ok" bool.
func (w *Wrapper) Device(id protocol.DeviceID) (DeviceConfiguration, bool) {
w.mut.Lock()
defer w.mut.Unlock()
for _, device := range w.cfg.Devices {
if device.DeviceID == id {
return device, true
}
}
return DeviceConfiguration{}, false
}
// Save writes the configuration to disk, and generates a ConfigSaved event.
func (w *Wrapper) Save() error {
fd, err := osutil.CreateAtomic(w.path, 0600)
@@ -332,3 +349,11 @@ func (w *Wrapper) ListenAddresses() []string {
}
return util.UniqueStrings(addresses)
}
func (w *Wrapper) RequiresRestart() bool {
return atomic.LoadUint32(&w.requiresRestart) != 0
}
func (w *Wrapper) setRequiresRestart() {
atomic.StoreUint32(&w.requiresRestart, 1)
}
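A minimal usage sketch (hypothetical caller code, not part of this change) of the new error-returning Replace together with the RequiresRestart flag. Note that CommitConfiguration now runs asynchronously in the notifyListener goroutines, so the flag may only become true after the subscribers have had a chance to run.

	// Hypothetical caller, assuming w is a *config.Wrapper and newCfg a Configuration.
	if err := w.Replace(newCfg); err != nil {
		// A subscriber's VerifyConfiguration rejected the change, or the
		// configuration failed to clean; nothing was applied.
		return err
	}
	if err := w.Save(); err != nil {
		return err
	}
	if w.RequiresRestart() {
		// Some CommitConfiguration returned false; the change only takes
		// full effect after a restart.
		l.Infoln("Configuration changed; restart required")
	}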

View File

@@ -0,0 +1,10 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
// The existence of this file means we get 0% test coverage rather than no
// test coverage at all. Remove when implementing an actual test.
package connections

View File

@@ -162,8 +162,17 @@ next:
if err != nil {
if protocol.IsVersionMismatch(err) {
// The error will be a relatively user friendly description
// of what's wrong with the version compatibility
msg := fmt.Sprintf("Connecting to %s (%s): %s", remoteID, c.RemoteAddr(), err)
// of what's wrong with the version compatibility. By
// default identify the other side by device ID and IP.
remote := fmt.Sprintf("%v (%v)", remoteID, c.RemoteAddr())
if hello.DeviceName != "" {
// If the name was set in the hello return, use that to
// give the user more info about which device is the
// affected one. It probably says more than the remote
// IP.
remote = fmt.Sprintf("%q (%s %s, %v)", hello.DeviceName, hello.ClientName, hello.ClientVersion, remoteID)
}
msg := fmt.Sprintf("Connecting to %s: %s", remote, err)
warningFor(remoteID, msg)
} else {
// It's something else - connection reset or whatever
@@ -174,19 +183,26 @@ next:
}
c.SetDeadline(time.Time{})
s.model.OnHello(remoteID, c.RemoteAddr(), hello)
// The Model will return an error for devices that we don't want to
// have a connection with for whatever reason, for example unknown devices.
if err := s.model.OnHello(remoteID, c.RemoteAddr(), hello); err != nil {
l.Infof("Connection from %s at %s (%s) rejected: %v", remoteID, c.RemoteAddr(), c.Type, err)
c.Close()
continue
}
// If we have a relay connection, and the new incoming connection is
// not a relay connection, we should drop that, and prefer this one.
connected := s.model.ConnectedTo(remoteID)
s.curConMut.Lock()
ct, ok := s.currentConnection[remoteID]
s.curConMut.Unlock()
priorityKnown := ok && connected
// Lower priority is better, just like nice etc.
if ok && ct.Priority > c.Priority {
if priorityKnown && ct.Priority > c.Priority {
l.Debugln("Switching connections", remoteID)
s.model.Close(remoteID, protocol.ErrSwitchingConnections)
} else if s.model.ConnectedTo(remoteID) {
} else if connected {
// We should not already be connected to the other party. TODO: This
// could use some better handling. If the old connection is dead but
// hasn't timed out yet we may want to drop *that* connection and keep
@@ -196,63 +212,56 @@ next:
l.Infof("Connected to already connected device (%s)", remoteID)
c.Close()
continue
} else if s.model.IsPaused(remoteID) {
l.Infof("Connection from paused device (%s)", remoteID)
}
deviceCfg, ok := s.cfg.Device(remoteID)
if !ok {
panic("bug: unknown device should already have been rejected")
}
// Verify the name on the certificate. By default we set it to
// "syncthing" when generating, but the user may have replaced
// the certificate and used another name.
certName := deviceCfg.CertName
if certName == "" {
certName = s.tlsDefaultCommonName
}
if err := remoteCert.VerifyHostname(certName); err != nil {
// Incorrect certificate name is something the user most
// likely wants to know about, since it's an advanced
// config. Warn instead of Info.
l.Warnf("Bad certificate from %s (%v): %v", remoteID, c.RemoteAddr(), err)
c.Close()
continue
continue next
}
for deviceID, deviceCfg := range s.cfg.Devices() {
if deviceID == remoteID {
// Verify the name on the certificate. By default we set it to
// "syncthing" when generating, but the user may have replaced
// the certificate and used another name.
certName := deviceCfg.CertName
if certName == "" {
certName = s.tlsDefaultCommonName
}
err := remoteCert.VerifyHostname(certName)
if err != nil {
// Incorrect certificate name is something the user most
// likely wants to know about, since it's an advanced
// config. Warn instead of Info.
l.Warnf("Bad certificate from %s (%v): %v", remoteID, c.RemoteAddr(), err)
c.Close()
continue next
}
// If rate limiting is set, and based on the address we should
// limit the connection, then we wrap it in a limiter.
// If rate limiting is set, and based on the address we should
// limit the connection, then we wrap it in a limiter.
limit := s.shouldLimit(c.RemoteAddr())
limit := s.shouldLimit(c.RemoteAddr())
wr := io.Writer(c)
if limit && s.writeRateLimit != nil {
wr = NewWriteLimiter(c, s.writeRateLimit)
}
rd := io.Reader(c)
if limit && s.readRateLimit != nil {
rd = NewReadLimiter(c, s.readRateLimit)
}
name := fmt.Sprintf("%s-%s (%s)", c.LocalAddr(), c.RemoteAddr(), c.Type)
protoConn := protocol.NewConnection(remoteID, rd, wr, s.model, name, deviceCfg.Compression)
modelConn := Connection{c, protoConn}
l.Infof("Established secure connection to %s at %s", remoteID, name)
l.Debugf("cipher suite: %04X in lan: %t", c.ConnectionState().CipherSuite, !limit)
s.model.AddConnection(modelConn, hello)
s.curConMut.Lock()
s.currentConnection[remoteID] = modelConn
s.curConMut.Unlock()
continue next
}
wr := io.Writer(c)
if limit && s.writeRateLimit != nil {
wr = NewWriteLimiter(c, s.writeRateLimit)
}
l.Infof("Connection from %s (%s) with ignored device ID %s", c.RemoteAddr(), c.Type, remoteID)
c.Close()
rd := io.Reader(c)
if limit && s.readRateLimit != nil {
rd = NewReadLimiter(c, s.readRateLimit)
}
name := fmt.Sprintf("%s-%s (%s)", c.LocalAddr(), c.RemoteAddr(), c.Type)
protoConn := protocol.NewConnection(remoteID, rd, wr, s.model, name, deviceCfg.Compression)
modelConn := Connection{c, protoConn}
l.Infof("Established secure connection to %s at %s", remoteID, name)
l.Debugf("cipher suite: %04X in lan: %t", c.ConnectionState().CipherSuite, !limit)
s.model.AddConnection(modelConn, hello)
s.curConMut.Lock()
s.currentConnection[remoteID] = modelConn
s.curConMut.Unlock()
continue next
}
}
@@ -298,10 +307,11 @@ func (s *Service) connect() {
connected := s.model.ConnectedTo(deviceID)
s.curConMut.Lock()
ct := s.currentConnection[deviceID]
ct, ok := s.currentConnection[deviceID]
s.curConMut.Unlock()
priorityKnown := ok && connected
if connected && ct.Priority == bestDialerPrio {
if priorityKnown && ct.Priority == bestDialerPrio {
// Things are already as good as they can get.
continue
}
@@ -349,7 +359,7 @@ func (s *Service) connect() {
continue
}
if connected && dialerFactory.Priority() >= ct.Priority {
if priorityKnown && dialerFactory.Priority() >= ct.Priority {
l.Debugf("Not dialing using %s as priority is less than current connection (%d >= %d)", dialerFactory, dialerFactory.Priority(), ct.Priority)
continue
}
@@ -364,10 +374,6 @@ func (s *Service) connect() {
continue
}
if connected {
s.model.Close(deviceID, protocol.ErrSwitchingConnections)
}
s.conns <- conn
continue nextDevice
}
@@ -428,10 +434,6 @@ func (s *Service) VerifyConfiguration(from, to config.Configuration) error {
}
func (s *Service) CommitConfiguration(from, to config.Configuration) bool {
// We require a restart if a device has been removed.
restart := false
newDevices := make(map[protocol.DeviceID]bool, len(to.Devices))
for _, dev := range to.Devices {
newDevices[dev.DeviceID] = true
@@ -439,7 +441,12 @@ func (s *Service) CommitConfiguration(from, to config.Configuration) bool {
for _, dev := range from.Devices {
if !newDevices[dev.DeviceID] {
restart = true
s.curConMut.Lock()
delete(s.currentConnection, dev.DeviceID)
s.curConMut.Unlock()
warningLimitersMut.Lock()
delete(warningLimiters, dev.DeviceID)
warningLimitersMut.Unlock()
}
}
@@ -491,7 +498,7 @@ func (s *Service) CommitConfiguration(from, to config.Configuration) bool {
s.natServiceToken = nil
}
return !restart
return true
}
func (s *Service) AllAddresses() []string {

View File

@@ -8,6 +8,7 @@ package connections
import (
"crypto/tls"
"fmt"
"net"
"net/url"
"time"
@@ -28,6 +29,10 @@ type Connection struct {
protocol.Connection
}
func (c Connection) String() string {
return fmt.Sprintf("%s-%s/%s", c.LocalAddr(), c.RemoteAddr(), c.Type)
}
type dialerFactory interface {
New(*config.Wrapper, *tls.Config) genericDialer
Priority() int
@@ -69,8 +74,8 @@ type Model interface {
AddConnection(conn Connection, hello protocol.HelloResult)
ConnectedTo(remoteID protocol.DeviceID) bool
IsPaused(remoteID protocol.DeviceID) bool
OnHello(protocol.DeviceID, net.Addr, protocol.HelloResult)
GetHello(protocol.DeviceID) protocol.Version13HelloMessage
OnHello(protocol.DeviceID, net.Addr, protocol.HelloResult) error
GetHello(protocol.DeviceID) protocol.HelloIntf
}
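A hedged sketch (illustration only, not the real model) of an OnHello implementation that satisfies the new error-returning contract; returning a non-nil error makes the connection service above log the rejection and close the socket. Assumes the usual errors, net, config and protocol imports.

	// errDeviceUnknown is a hypothetical sentinel used for illustration.
	var errDeviceUnknown = errors.New("unknown device")

	type exampleModel struct {
		cfg *config.Wrapper
	}

	func (m *exampleModel) OnHello(remoteID protocol.DeviceID, addr net.Addr, hello protocol.HelloResult) error {
		if _, ok := m.cfg.Device(remoteID); !ok {
			return errDeviceUnknown
		}
		return nil
	}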
// serviceFunc wraps a function to create a suture.Service without stop

View File

@@ -73,7 +73,7 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
m := NewBlockMap(db, db.folderIdx.ID([]byte("folder1")))
f3.Flags |= protocol.FlagDirectory
f3.Type = protocol.FileInfoTypeDirectory
err := m.Add([]protocol.FileInfo{f1, f2, f3})
if err != nil {
@@ -99,9 +99,11 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
return true
})
f3.Flags = f1.Flags
f1.Flags |= protocol.FlagDeleted
f2.Flags |= protocol.FlagInvalid
f3.Permissions = f1.Permissions
f3.Deleted = f1.Deleted
f3.Invalid = f1.Invalid
f1.Deleted = true
f2.Invalid = true
// Should remove
err = m.Update([]protocol.FileInfo{f1, f2, f3})
@@ -145,9 +147,15 @@ func TestBlockMapAddUpdateWipe(t *testing.T) {
t.Fatal("db not empty")
}
f1.Flags = 0
f2.Flags = 0
f3.Flags = 0
f1.Deleted = false
f1.Invalid = false
f1.Permissions = 0
f2.Deleted = false
f2.Invalid = false
f2.Permissions = 0
f3.Deleted = false
f3.Invalid = false
f3.Permissions = 0
}
func TestBlockFinderLookup(t *testing.T) {
@@ -187,7 +195,7 @@ func TestBlockFinderLookup(t *testing.T) {
t.Fatal("Incorrect count", counter)
}
f1.Flags |= protocol.FlagDeleted
f1.Deleted = true
err = m1.Update([]protocol.FileInfo{f1})
if err != nil {
@@ -212,7 +220,7 @@ func TestBlockFinderLookup(t *testing.T) {
t.Fatal("Incorrect count")
}
f1.Flags = 0
f1.Deleted = false
}
func TestBlockFinderFix(t *testing.T) {

View File

@@ -4,9 +4,6 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate -command genxdr go run ../../vendor/github.com/calmh/xdr/cmd/genxdr/main.go
//go:generate genxdr -o leveldb_xdr.go leveldb.go
package db
import (
@@ -14,27 +11,10 @@ import (
"fmt"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
)
var (
clockTick int64
clockMut = sync.NewMutex()
)
func clock(v int64) int64 {
clockMut.Lock()
defer clockMut.Unlock()
if v > clockTick {
clockTick = v + 1
} else {
clockTick++
}
return clockTick
}
const (
KeyTypeDevice = iota
KeyTypeGlobal
@@ -44,27 +24,19 @@ const (
KeyTypeVirtualMtime
KeyTypeFolderIdx
KeyTypeDeviceIdx
KeyTypeIndexID
)
type fileVersion struct {
version protocol.Vector
device []byte
}
type VersionList struct {
versions []fileVersion
}
func (l VersionList) String() string {
var b bytes.Buffer
var id protocol.DeviceID
b.WriteString("{")
for i, v := range l.versions {
for i, v := range l.Versions {
if i > 0 {
b.WriteString(", ")
}
copy(id[:], v.device)
fmt.Fprintf(&b, "{%d, %v}", v.version, id)
copy(id[:], v.Device)
fmt.Fprintf(&b, "{%d, %v}", v.Version, id)
}
b.WriteString("}")
return b.String()
@@ -101,7 +73,7 @@ func getFile(db dbReader, key []byte) (protocol.FileInfo, bool) {
}
var f protocol.FileInfo
err = f.UnmarshalXDR(bs)
err = f.Unmarshal(bs)
if err != nil {
panic(err)
}

View File

@@ -1,114 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"bytes"
"github.com/syndtr/goleveldb/leveldb"
)
// convertKeyFormat converts from the v0.12 to the v0.13 database format, to
// avoid having to do rescan. The change is in the key format for folder
// labels, so we basically just iterate over the database rewriting keys as
// necessary.
func convertKeyFormat(from, to *leveldb.DB) error {
l.Infoln("Converting database key format")
blocks, files, globals, unchanged := 0, 0, 0, 0
dbi := newDBInstance(to)
i := from.NewIterator(nil, nil)
for i.Next() {
key := i.Key()
switch key[0] {
case KeyTypeBlock:
folder, file := oldFromBlockKey(key)
folderIdx := dbi.folderIdx.ID([]byte(folder))
hash := key[1+64:]
newKey := blockKeyInto(nil, hash, folderIdx, file)
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
blocks++
case KeyTypeDevice:
newKey := dbi.deviceKey(oldDeviceKeyFolder(key), oldDeviceKeyDevice(key), oldDeviceKeyName(key))
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
files++
case KeyTypeGlobal:
newKey := dbi.globalKey(oldGlobalKeyFolder(key), oldGlobalKeyName(key))
if err := to.Put(newKey, i.Value(), nil); err != nil {
return err
}
globals++
case KeyTypeVirtualMtime:
// Cannot be converted, we drop it instead :(
default:
if err := to.Put(key, i.Value(), nil); err != nil {
return err
}
unchanged++
}
}
l.Infof("Converted %d blocks, %d files, %d globals (%d unchanged).", blocks, files, globals, unchanged)
return nil
}
func oldDeviceKeyFolder(key []byte) []byte {
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder[:izero]
}
func oldDeviceKeyDevice(key []byte) []byte {
return key[1+64 : 1+64+32]
}
func oldDeviceKeyName(key []byte) []byte {
return key[1+64+32:]
}
func oldGlobalKeyName(key []byte) []byte {
return key[1+64:]
}
func oldGlobalKeyFolder(key []byte) []byte {
folder := key[1 : 1+64]
izero := bytes.IndexByte(folder, 0)
if izero < 0 {
return folder
}
return folder[:izero]
}
func oldFromBlockKey(data []byte) (string, string) {
if len(data) < 1+64+32+1 {
panic("Incorrect key length")
}
if data[0] != KeyTypeBlock {
panic("Incorrect key type")
}
file := string(data[1+64+32:])
slice := data[1 : 1+64]
izero := bytes.IndexByte(slice, 0)
if izero > -1 {
return string(slice[:izero]), file
}
return string(slice), file
}

View File

@@ -1,136 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"archive/zip"
"io"
"os"
"path/filepath"
"testing"
"github.com/syndtr/goleveldb/leveldb"
)
func TestLabelConversion(t *testing.T) {
os.RemoveAll("testdata/oldformat.db")
defer os.RemoveAll("testdata/oldformat.db")
os.RemoveAll("testdata/newformat.db")
defer os.RemoveAll("testdata/newformat.db")
if err := unzip("testdata/oldformat.db.zip", "testdata"); err != nil {
t.Fatal(err)
}
odb, err := leveldb.OpenFile("testdata/oldformat.db", nil)
if err != nil {
t.Fatal(err)
}
ldb, err := leveldb.OpenFile("testdata/newformat.db", nil)
if err != nil {
t.Fatal(err)
}
if err = convertKeyFormat(odb, ldb); err != nil {
t.Fatal(err)
}
ldb.Close()
odb.Close()
inst, err := Open("testdata/newformat.db")
if err != nil {
t.Fatal(err)
}
fs := NewFileSet("default", inst)
files, deleted, _ := fs.GlobalSize()
if files+deleted != 953 {
// Expected number of global entries determined by
// ../../bin/stindex testdata/oldformat.db/ | grep global | grep -c default
t.Errorf("Conversion error, global list differs (%d != 953)", files+deleted)
}
files, deleted, _ = fs.LocalSize()
if files+deleted != 953 {
t.Errorf("Conversion error, device list differs (%d != 953)", files+deleted)
}
f := NewBlockFinder(inst)
// [block] F:"default" H:1c25dea9003cc16216e2a22900be1ec1cc5aaf270442904e2f9812c314e929d8 N:"f/f2/f25f1b3e6e029231b933531b2138796d" I:3
h := []byte{0x1c, 0x25, 0xde, 0xa9, 0x00, 0x3c, 0xc1, 0x62, 0x16, 0xe2, 0xa2, 0x29, 0x00, 0xbe, 0x1e, 0xc1, 0xcc, 0x5a, 0xaf, 0x27, 0x04, 0x42, 0x90, 0x4e, 0x2f, 0x98, 0x12, 0xc3, 0x14, 0xe9, 0x29, 0xd8}
found := 0
f.Iterate([]string{"default"}, h, func(folder, file string, idx int32) bool {
if folder == "default" && file == filepath.FromSlash("f/f2/f25f1b3e6e029231b933531b2138796d") && idx == 3 {
found++
}
return true
})
if found != 1 {
t.Errorf("Found %d blocks instead of expected 1", found)
}
inst.Close()
}
func unzip(src, dest string) error {
r, err := zip.OpenReader(src)
if err != nil {
return err
}
defer func() {
if err := r.Close(); err != nil {
panic(err)
}
}()
os.MkdirAll(dest, 0755)
// Closure to address file descriptors issue with all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
if f.FileInfo().IsDir() {
os.MkdirAll(path, f.Mode())
} else {
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range r.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}

View File

@@ -10,12 +10,10 @@ import (
"bytes"
"encoding/binary"
"os"
"path/filepath"
"sort"
"strings"
"sync/atomic"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
"github.com/syndtr/goleveldb/leveldb"
@@ -26,11 +24,12 @@ import (
"github.com/syndtr/goleveldb/leveldb/util"
)
type deletionHandler func(t readWriteTransaction, folder, device, name []byte, dbi iterator.Iterator) int64
type deletionHandler func(t readWriteTransaction, folder, device, name []byte, dbi iterator.Iterator)
type Instance struct {
committed int64 // this must be the first attribute in the struct to ensure 64 bit alignment on 32 bit platforms
*leveldb.DB
location string
folderIdx *smallIndex
deviceIdx *smallIndex
}
@@ -48,16 +47,6 @@ func Open(file string) (*Instance, error) {
WriteBuffer: 4 << 20,
}
if _, err := os.Stat(file); os.IsNotExist(err) {
// The file we are looking to open does not exist. This may be the
// first launch so we should look for an old version and try to
// convert it.
if err := checkConvertDatabase(file); err != nil {
l.Infoln("Converting old database:", err)
l.Infoln("Will rescan from scratch.")
}
}
db, err := leveldb.OpenFile(file, opts)
if leveldbIsCorrupted(err) {
db, err = leveldb.RecoverFile(file, opts)
@@ -76,17 +65,18 @@ func Open(file string) (*Instance, error) {
return nil, err
}
return newDBInstance(db), nil
return newDBInstance(db, file), nil
}
func OpenMemory() *Instance {
db, _ := leveldb.Open(storage.NewMemStorage(), nil)
return newDBInstance(db)
return newDBInstance(db, "<memory>")
}
func newDBInstance(db *leveldb.DB) *Instance {
func newDBInstance(db *leveldb.DB, location string) *Instance {
i := &Instance{
DB: db,
DB: db,
location: location,
}
i.folderIdx = newSmallIndex(i, []byte{KeyTypeFolderIdx})
i.deviceIdx = newSmallIndex(i, []byte{KeyTypeDeviceIdx})
@@ -98,7 +88,12 @@ func (db *Instance) Committed() int64 {
return atomic.LoadInt64(&db.committed)
}
func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker, deleteFn deletionHandler) int64 {
// Location returns the filesystem path where the database is stored
func (db *Instance) Location() string {
return db.location
}
func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker, deleteFn deletionHandler) {
sort.Sort(fileList(fs)) // sort list on name, same as in the database
t := db.newReadWriteTransaction()
@@ -109,7 +104,6 @@ func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo
moreDb := dbi.Next()
fsi := 0
var maxLocalVer int64
isLocalDevice := bytes.Equal(device, protocol.LocalDeviceID[:])
for {
@@ -136,9 +130,7 @@ func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo
case moreFs && (!moreDb || cmp == -1):
l.Debugln("generic replace; missing - insert")
// Database is missing this file. Insert it.
if lv := t.insertFile(folder, device, fs[fsi]); lv > maxLocalVer {
maxLocalVer = lv
}
t.insertFile(folder, device, fs[fsi])
if isLocalDevice {
localSize.addFile(fs[fsi])
}
@@ -151,16 +143,14 @@ func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo
case moreFs && moreDb && cmp == 0:
// File exists on both sides - compare versions. We might get an
// update with the same version and different flags if a device has
// marked a file as invalid, so handle that too.
// update with the same version if a device has marked a file as
// invalid, so handle that too.
l.Debugln("generic replace; exists - compare")
var ef FileInfoTruncated
ef.UnmarshalXDR(dbi.Value())
if !fs[fsi].Version.Equal(ef.Version) || fs[fsi].Flags != ef.Flags {
ef.Unmarshal(dbi.Value())
if !fs[fsi].Version.Equal(ef.Version) || fs[fsi].Invalid != ef.Invalid {
l.Debugln("generic replace; differs - insert")
if lv := t.insertFile(folder, device, fs[fsi]); lv > maxLocalVer {
maxLocalVer = lv
}
t.insertFile(folder, device, fs[fsi])
if isLocalDevice {
localSize.removeFile(ef)
localSize.addFile(fs[fsi])
@@ -179,9 +169,7 @@ func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo
case moreDb && (!moreFs || cmp == 1):
l.Debugln("generic replace; exists - remove")
if lv := deleteFn(t, folder, device, oldName, dbi); lv > maxLocalVer {
maxLocalVer = lv
}
deleteFn(t, folder, device, oldName, dbi)
moreDb = dbi.Next()
}
@@ -189,26 +177,21 @@ func (db *Instance) genericReplace(folder, device []byte, fs []protocol.FileInfo
// growing too large and thus allocating unnecessarily much memory.
t.checkFlush()
}
return maxLocalVer
}
func (db *Instance) replace(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker) int64 {
// TODO: Return the remaining maxLocalVer?
return db.genericReplace(folder, device, fs, localSize, globalSize, func(t readWriteTransaction, folder, device, name []byte, dbi iterator.Iterator) int64 {
func (db *Instance) replace(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker) {
db.genericReplace(folder, device, fs, localSize, globalSize, func(t readWriteTransaction, folder, device, name []byte, dbi iterator.Iterator) {
// Database has a file that we are missing. Remove it.
l.Debugf("delete; folder=%q device=%v name=%q", folder, protocol.DeviceIDFromBytes(device), name)
t.removeFromGlobal(folder, device, name, globalSize)
t.Delete(dbi.Key())
return 0
})
}
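For orientation, genericReplace above is essentially a merge over two name-sorted sequences: the incoming file list and the existing database entries. A simplified sketch of that pattern, with hypothetical insert/remove/changed helpers standing in for the transaction calls:

	// Simplified two-pointer merge, not the real implementation.
	i, j := 0, 0
	for i < len(incoming) || j < len(existing) {
		switch {
		case j == len(existing) || (i < len(incoming) && incoming[i].Name < existing[j].Name):
			insert(incoming[i]) // only in the new list: insert into the database
			i++
		case i == len(incoming) || existing[j].Name < incoming[i].Name:
			remove(existing[j]) // only in the database: delete
			j++
		default:
			if changed(incoming[i], existing[j]) {
				insert(incoming[i]) // in both, but version or invalid flag differs: update
			}
			i++
			j++
		}
	}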
func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker) int64 {
func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, localSize, globalSize *sizeTracker) {
t := db.newReadWriteTransaction()
defer t.close()
var maxLocalVer int64
var fk []byte
isLocalDevice := bytes.Equal(device, protocol.LocalDeviceID[:])
for _, f := range fs {
@@ -220,9 +203,7 @@ func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, l
localSize.addFile(f)
}
if lv := t.insertFile(folder, device, f); lv > maxLocalVer {
maxLocalVer = lv
}
t.insertFile(folder, device, f)
if f.IsInvalid() {
t.removeFromGlobal(folder, device, name, globalSize)
} else {
@@ -232,21 +213,18 @@ func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, l
}
var ef FileInfoTruncated
err = ef.UnmarshalXDR(bs)
err = ef.Unmarshal(bs)
if err != nil {
panic(err)
}
// Flags might change without the version being bumped when we set the
// invalid flag on an existing file.
if !ef.Version.Equal(f.Version) || ef.Flags != f.Flags {
// The Invalid flag might change without the version being bumped.
if !ef.Version.Equal(f.Version) || ef.Invalid != f.Invalid {
if isLocalDevice {
localSize.removeFile(ef)
localSize.addFile(f)
}
if lv := t.insertFile(folder, device, f); lv > maxLocalVer {
maxLocalVer = lv
}
t.insertFile(folder, device, f)
if f.IsInvalid() {
t.removeFromGlobal(folder, device, name, globalSize)
} else {
@@ -258,8 +236,6 @@ func (db *Instance) updateFiles(folder, device []byte, fs []protocol.FileInfo, l
// growing too large and thus allocating unnecessarily much memory.
t.checkFlush()
}
return maxLocalVer
}
func (db *Instance) withHave(folder, device, prefix []byte, truncate bool, fn Iterator) {
@@ -308,7 +284,7 @@ func (db *Instance) withAllFolderTruncated(folder []byte, fn func(device []byte,
// struct, which in turn references the buffer it was unmarshalled
// from. dbi.Value() just returns an internal slice that it reuses, so
// we need to copy it.
err := f.UnmarshalXDR(append([]byte{}, dbi.Value()...))
err := f.Unmarshal(append([]byte{}, dbi.Value()...))
if err != nil {
panic(err)
}
@@ -347,16 +323,16 @@ func (db *Instance) getGlobal(folder, file []byte, truncate bool) (FileIntf, boo
}
var vl VersionList
err = vl.UnmarshalXDR(bs)
err = vl.Unmarshal(bs)
if err != nil {
panic(err)
}
if len(vl.versions) == 0 {
if len(vl.Versions) == 0 {
l.Debugln(k)
panic("no versions?")
}
k = db.deviceKey(folder, vl.versions[0].device, file)
k = db.deviceKey(folder, vl.Versions[0].Device, file)
bs, err = t.Get(k, nil)
if err != nil {
panic(err)
@@ -384,11 +360,11 @@ func (db *Instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
var fk []byte
for dbi.Next() {
var vl VersionList
err := vl.UnmarshalXDR(dbi.Value())
err := vl.Unmarshal(dbi.Value())
if err != nil {
panic(err)
}
if len(vl.versions) == 0 {
if len(vl.Versions) == 0 {
l.Debugln(dbi.Key())
panic("no versions?")
}
@@ -398,13 +374,13 @@ func (db *Instance) withGlobal(folder, prefix []byte, truncate bool, fn Iterator
return
}
fk = db.deviceKeyInto(fk[:cap(fk)], folder, vl.versions[0].device, name)
fk = db.deviceKeyInto(fk[:cap(fk)], folder, vl.Versions[0].Device, name)
bs, err := t.Get(fk, nil)
if err != nil {
l.Debugf("folder: %q (%x)", folder, folder)
l.Debugf("key: %q (%x)", dbi.Key(), dbi.Key())
l.Debugf("vl: %v", vl)
l.Debugf("vl.versions[0].device: %x", vl.versions[0].device)
l.Debugf("vl.Versions[0].Device: %x", vl.Versions[0].Device)
l.Debugf("name: %q (%x)", name, name)
l.Debugf("fk: %q", fk)
l.Debugf("fk: %x %x %x",
@@ -436,17 +412,17 @@ func (db *Instance) availability(folder, file []byte) []protocol.DeviceID {
}
var vl VersionList
err = vl.UnmarshalXDR(bs)
err = vl.Unmarshal(bs)
if err != nil {
panic(err)
}
var devices []protocol.DeviceID
for _, v := range vl.versions {
if !v.version.Equal(vl.versions[0].version) {
for _, v := range vl.Versions {
if !v.Version.Equal(vl.Versions[0].Version) {
break
}
n := protocol.DeviceIDFromBytes(v.device)
n := protocol.DeviceIDFromBytes(v.Device)
devices = append(devices, n)
}
@@ -464,11 +440,11 @@ func (db *Instance) withNeed(folder, device []byte, truncate bool, fn Iterator)
nextFile:
for dbi.Next() {
var vl VersionList
err := vl.UnmarshalXDR(dbi.Value())
err := vl.Unmarshal(dbi.Value())
if err != nil {
panic(err)
}
if len(vl.versions) == 0 {
if len(vl.Versions) == 0 {
l.Debugln(dbi.Key())
panic("no versions?")
}
@@ -476,29 +452,29 @@ nextFile:
have := false // If we have the file, any version
need := false // If we have a lower version of the file
var haveVersion protocol.Vector
for _, v := range vl.versions {
if bytes.Equal(v.device, device) {
for _, v := range vl.Versions {
if bytes.Equal(v.Device, device) {
have = true
haveVersion = v.version
haveVersion = v.Version
// XXX: This marks Concurrent (i.e. conflicting) changes as
// needs. Maybe we should do that, but it needs special
// handling in the puller.
need = !v.version.GreaterEqual(vl.versions[0].version)
need = !v.Version.GreaterEqual(vl.Versions[0].Version)
break
}
}
if need || !have {
name := db.globalKeyName(dbi.Key())
needVersion := vl.versions[0].version
needVersion := vl.Versions[0].Version
nextVersion:
for i := range vl.versions {
if !vl.versions[i].version.Equal(needVersion) {
for i := range vl.Versions {
if !vl.Versions[i].Version.Equal(needVersion) {
// We haven't found a valid copy of the file with the needed version.
continue nextFile
}
fk = db.deviceKeyInto(fk[:cap(fk)], folder, vl.versions[i].device, name)
fk = db.deviceKeyInto(fk[:cap(fk)], folder, vl.Versions[i].Device, name)
bs, err := t.Get(fk, nil)
if err != nil {
var id protocol.DeviceID
@@ -528,7 +504,7 @@ nextFile:
continue nextFile
}
l.Debugf("need folder=%q device=%v name=%q need=%v have=%v haveV=%d globalV=%d", folder, protocol.DeviceIDFromBytes(device), name, need, have, haveVersion, vl.versions[0].version)
l.Debugf("need folder=%q device=%v name=%q need=%v have=%v haveV=%d globalV=%d", folder, protocol.DeviceIDFromBytes(device), name, need, have, haveVersion, vl.Versions[0].Version)
if cont := fn(gf); !cont {
return
@@ -601,7 +577,7 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
for dbi.Next() {
gk := dbi.Key()
var vl VersionList
err := vl.UnmarshalXDR(dbi.Value())
err := vl.Unmarshal(dbi.Value())
if err != nil {
panic(err)
}
@@ -613,8 +589,8 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
name := db.globalKeyName(gk)
var newVL VersionList
for i, version := range vl.versions {
fk = db.deviceKeyInto(fk[:cap(fk)], folder, version.device, name)
for i, version := range vl.Versions {
fk = db.deviceKeyInto(fk[:cap(fk)], folder, version.Device, name)
_, err := t.Get(fk, nil)
if err == leveldb.ErrNotFound {
@@ -623,10 +599,10 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
if err != nil {
panic(err)
}
newVL.versions = append(newVL.versions, version)
newVL.Versions = append(newVL.Versions, version)
if i == 0 {
fi, ok := t.getFile(folder, version.device, name)
fi, ok := t.getFile(folder, version.Device, name)
if !ok {
panic("nonexistent global master file")
}
@@ -634,8 +610,8 @@ func (db *Instance) checkGlobals(folder []byte, globalSize *sizeTracker) {
}
}
if len(newVL.versions) != len(vl.versions) {
t.Put(dbi.Key(), newVL.MustMarshalXDR())
if len(newVL.Versions) != len(vl.Versions) {
t.Put(dbi.Key(), mustMarshal(&newVL))
t.checkFlush()
}
}
@@ -712,15 +688,75 @@ func (db *Instance) globalKeyFolder(key []byte) []byte {
return folder
}
func (db *Instance) getIndexID(device, folder []byte) protocol.IndexID {
key := db.indexIDKey(device, folder)
cur, err := db.Get(key, nil)
if err != nil {
return 0
}
var id protocol.IndexID
if err := id.Unmarshal(cur); err != nil {
return 0
}
return id
}
func (db *Instance) setIndexID(device, folder []byte, id protocol.IndexID) {
key := db.indexIDKey(device, folder)
bs, _ := id.Marshal() // marshalling can't fail
if err := db.Put(key, bs, nil); err != nil {
panic("storing index ID: " + err.Error())
}
}
func (db *Instance) indexIDKey(device, folder []byte) []byte {
k := make([]byte, keyPrefixLen+keyDeviceLen+keyFolderLen)
k[0] = KeyTypeIndexID
binary.BigEndian.PutUint32(k[keyPrefixLen:], db.deviceIdx.ID(device))
binary.BigEndian.PutUint32(k[keyPrefixLen+keyDeviceLen:], db.folderIdx.ID(folder))
return k
}
func (db *Instance) mtimesKey(folder []byte) []byte {
prefix := make([]byte, 5) // key type + 4 bytes folder idx number
prefix[0] = KeyTypeVirtualMtime
binary.BigEndian.PutUint32(prefix[1:], db.folderIdx.ID(folder))
return prefix
}
// DropDeltaIndexIDs removes all index IDs from the database. This will
// cause a full index transmission on the next connection.
func (db *Instance) DropDeltaIndexIDs() {
db.dropPrefix([]byte{KeyTypeIndexID})
}
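A minimal sketch (illustration only, written as if from inside the db package; announcedID and the send* helpers are assumptions) of how a stored index ID could be compared against the one a peer announces, to choose between delta and full index exchange:

	stored := db.getIndexID(device, folder)
	if stored == 0 || stored != announcedID {
		// Nothing on record, or the peer's database was reset: remember
		// the new ID and fall back to a full index transmission.
		db.setIndexID(device, folder, announcedID)
		sendFullIndex()
	} else {
		sendDeltaIndex()
	}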
func (db *Instance) dropMtimes(folder []byte) {
db.dropPrefix(db.mtimesKey(folder))
}
func (db *Instance) dropPrefix(prefix []byte) {
t := db.newReadWriteTransaction()
defer t.close()
dbi := t.NewIterator(util.BytesPrefix(prefix), nil)
defer dbi.Release()
for dbi.Next() {
t.Delete(dbi.Key())
}
}
func unmarshalTrunc(bs []byte, truncate bool) (FileIntf, error) {
if truncate {
var tf FileInfoTruncated
err := tf.UnmarshalXDR(bs)
err := tf.Unmarshal(bs)
return tf, err
}
var tf protocol.FileInfo
err := tf.UnmarshalXDR(bs)
err := tf.Unmarshal(bs)
return tf, err
}
@@ -740,50 +776,6 @@ func leveldbIsCorrupted(err error) bool {
return false
}
// checkConvertDatabase tries to convert an existing old (v0.11) database to
// new (v0.13) format.
func checkConvertDatabase(dbFile string) error {
oldLoc := filepath.Join(filepath.Dir(dbFile), "index-v0.11.0.db")
if _, err := os.Stat(oldLoc); os.IsNotExist(err) {
// The old database file does not exist; that's ok, continue as if
// everything succeeded.
return nil
} else if err != nil {
// Any other error is weird.
return err
}
// There exists a database in the old format. We run a one time
// conversion from old to new.
fromDb, err := leveldb.OpenFile(oldLoc, nil)
if err != nil {
return err
}
toDb, err := leveldb.OpenFile(dbFile, nil)
if err != nil {
return err
}
err = convertKeyFormat(fromDb, toDb)
if err != nil {
return err
}
err = toDb.Close()
if err != nil {
return err
}
// We've done this one, we don't want to do it again (if the user runs
// -reset or so). We don't care too much about errors any more at this stage.
fromDb.Close()
osutil.Rename(oldLoc, oldLoc+".converted")
return nil
}
// A smallIndex is an in memory bidirectional []byte to uint32 map. It gives
// fast lookups in both directions and persists to the database. Don't use for
// storing more items than fit comfortably in RAM.
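To make the idea concrete, a stripped-down, in-memory-only sketch of such a bidirectional map (the real smallIndex additionally persists its entries to the database):

	type toyIndex struct {
		val2id map[string]uint32
		id2val map[uint32]string
		next   uint32
	}

	// ID returns the numeric ID for val, allocating a new one if needed.
	func (i *toyIndex) ID(val []byte) uint32 {
		if id, ok := i.val2id[string(val)]; ok {
			return id
		}
		id := i.next
		i.next++
		i.val2id[string(val)] = id
		i.id2val[id] = string(val)
		return id
	}

	// Val returns the value previously registered for id.
	func (i *toyIndex) Val(id uint32) ([]byte, bool) {
		v, ok := i.id2val[id]
		return []byte(v), ok
	}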

View File

@@ -74,18 +74,12 @@ func (t readWriteTransaction) flush() {
atomic.AddInt64(&t.db.committed, int64(t.Batch.Len()))
}
func (t readWriteTransaction) insertFile(folder, device []byte, file protocol.FileInfo) int64 {
func (t readWriteTransaction) insertFile(folder, device []byte, file protocol.FileInfo) {
l.Debugf("insert; folder=%q device=%v %v", folder, protocol.DeviceIDFromBytes(device), file)
if file.LocalVersion == 0 {
file.LocalVersion = clock(0)
}
name := []byte(file.Name)
nk := t.db.deviceKey(folder, device, name)
t.Put(nk, file.MustMarshalXDR())
return file.LocalVersion
t.Put(nk, mustMarshal(&file))
}
// updateGlobal adds this device+version to the version list for the given
@@ -105,14 +99,14 @@ func (t readWriteTransaction) updateGlobal(folder, device []byte, file protocol.
var hasOldFile bool
// Remove the device from the current version list
if len(svl) != 0 {
err = fl.UnmarshalXDR(svl)
err = fl.Unmarshal(svl)
if err != nil {
panic(err)
}
for i := range fl.versions {
if bytes.Equal(fl.versions[i].device, device) {
if fl.versions[i].version.Equal(file.Version) {
for i := range fl.Versions {
if bytes.Equal(fl.Versions[i].Device, device) {
if fl.Versions[i].Version.Equal(file.Version) {
// No need to do anything
return false
}
@@ -120,29 +114,29 @@ func (t readWriteTransaction) updateGlobal(folder, device []byte, file protocol.
if i == 0 {
// Keep the current newest file around so we can subtract it from
// the globalSize if we replace it.
oldFile, hasOldFile = t.getFile(folder, fl.versions[0].device, name)
oldFile, hasOldFile = t.getFile(folder, fl.Versions[0].Device, name)
}
fl.versions = append(fl.versions[:i], fl.versions[i+1:]...)
fl.Versions = append(fl.Versions[:i], fl.Versions[i+1:]...)
break
}
}
}
nv := fileVersion{
device: device,
version: file.Version,
nv := FileVersion{
Device: device,
Version: file.Version,
}
insertedAt := -1
// Find a position in the list to insert this file. The file at the front
// of the list is the newer, the "global".
for i := range fl.versions {
switch fl.versions[i].version.Compare(file.Version) {
for i := range fl.Versions {
switch fl.Versions[i].Version.Compare(file.Version) {
case protocol.Equal, protocol.Lesser:
// The version at this point in the list is equal to or lesser
// ("older") than us. We insert ourselves in front of it.
fl.versions = insertVersion(fl.versions, i, nv)
fl.Versions = insertVersion(fl.Versions, i, nv)
insertedAt = i
goto done
@@ -153,12 +147,12 @@ func (t readWriteTransaction) updateGlobal(folder, device []byte, file protocol.
// "Greater" in the condition above is just based on the device
// IDs in the version vector, which is not the only thing we use
// to determine the winner.)
of, ok := t.getFile(folder, fl.versions[i].device, name)
of, ok := t.getFile(folder, fl.Versions[i].Device, name)
if !ok {
panic("file referenced in version list does not exist")
}
if file.WinsConflict(of) {
fl.versions = insertVersion(fl.versions, i, nv)
fl.Versions = insertVersion(fl.Versions, i, nv)
insertedAt = i
goto done
}
@@ -166,8 +160,8 @@ func (t readWriteTransaction) updateGlobal(folder, device []byte, file protocol.
}
// We didn't find a position for an insert above, so append to the end.
fl.versions = append(fl.versions, nv)
insertedAt = len(fl.versions) - 1
fl.Versions = append(fl.Versions, nv)
insertedAt = len(fl.Versions) - 1
done:
if insertedAt == 0 {
@@ -178,9 +172,9 @@ done:
if hasOldFile {
// We have the old file that was removed at the head of the list.
globalSize.removeFile(oldFile)
} else if len(fl.versions) > 1 {
} else if len(fl.Versions) > 1 {
// The previous newest version is now at index 1, grab it from there.
oldFile, ok := t.getFile(folder, fl.versions[1].device, name)
oldFile, ok := t.getFile(folder, fl.Versions[1].Device, name)
if !ok {
panic("file referenced in version list does not exist")
}
@@ -190,7 +184,7 @@ done:
}
l.Debugf("new global after update: %v", fl)
t.Put(gk, fl.MustMarshalXDR())
t.Put(gk, mustMarshal(&fl))
return true
}
@@ -210,14 +204,14 @@ func (t readWriteTransaction) removeFromGlobal(folder, device, file []byte, glob
}
var fl VersionList
err = fl.UnmarshalXDR(svl)
err = fl.Unmarshal(svl)
if err != nil {
panic(err)
}
removed := false
for i := range fl.versions {
if bytes.Equal(fl.versions[i].device, device) {
for i := range fl.Versions {
if bytes.Equal(fl.Versions[i].Device, device) {
if i == 0 && globalSize != nil {
f, ok := t.getFile(folder, device, file)
if !ok {
@@ -226,18 +220,18 @@ func (t readWriteTransaction) removeFromGlobal(folder, device, file []byte, glob
globalSize.removeFile(f)
removed = true
}
fl.versions = append(fl.versions[:i], fl.versions[i+1:]...)
fl.Versions = append(fl.Versions[:i], fl.Versions[i+1:]...)
break
}
}
if len(fl.versions) == 0 {
if len(fl.Versions) == 0 {
t.Delete(gk)
} else {
l.Debugf("new global after remove: %v", fl)
t.Put(gk, fl.MustMarshalXDR())
t.Put(gk, mustMarshal(&fl))
if removed {
f, ok := t.getFile(folder, fl.versions[0].device, file)
f, ok := t.getFile(folder, fl.Versions[0].Device, file)
if !ok {
panic("new global is nonexistent file")
}
@@ -246,9 +240,21 @@ func (t readWriteTransaction) removeFromGlobal(folder, device, file []byte, glob
}
}
func insertVersion(vl []fileVersion, i int, v fileVersion) []fileVersion {
t := append(vl, fileVersion{})
func insertVersion(vl []FileVersion, i int, v FileVersion) []FileVersion {
t := append(vl, FileVersion{})
copy(t[i+1:], t[i:])
t[i] = v
return t
}
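A quick worked example (hypothetical values, inside package db) of insertVersion: inserting at index 1 shifts the existing tail one step to the right.

	a := FileVersion{Device: []byte{1}}
	b := FileVersion{Device: []byte{2}}
	c := FileVersion{Device: []byte{3}}
	vl := []FileVersion{a, c}
	vl = insertVersion(vl, 1, b) // vl is now [a, b, c]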
type marshaller interface {
Marshal() ([]byte, error)
}
func mustMarshal(f marshaller) []byte {
bs, err := f.Marshal()
if err != nil {
panic(err)
}
return bs
}
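mustMarshal lets one helper serialize any of the generated types used here; anything with a Marshal() ([]byte, error) method satisfies marshaller. A small illustration:

	var vl VersionList
	bs := mustMarshal(&vl) // panics only if Marshal itself fails
	// bs can now be stored with t.Put(key, bs), as in updateGlobal above.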

View File

@@ -1,142 +0,0 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package db
import (
"github.com/calmh/xdr"
)
/*
fileVersion Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Vector Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ device (length + padded data) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct fileVersion {
Vector version;
opaque device<>;
}
*/
func (o fileVersion) XDRSize() int {
return o.version.XDRSize() +
4 + len(o.device) + xdr.Padding(len(o.device))
}
func (o fileVersion) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
}
func (o fileVersion) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o fileVersion) MarshalXDRInto(m *xdr.Marshaller) error {
if err := o.version.MarshalXDRInto(m); err != nil {
return err
}
m.MarshalBytes(o.device)
return m.Error
}
func (o *fileVersion) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *fileVersion) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
(&o.version).UnmarshalXDRFrom(u)
o.device = u.UnmarshalBytes()
return u.Error
}
/*
VersionList Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of versions |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more fileVersion Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct VersionList {
fileVersion versions<>;
}
*/
func (o VersionList) XDRSize() int {
return 4 + xdr.SizeOfSlice(o.versions)
}
func (o VersionList) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
}
func (o VersionList) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o VersionList) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(uint32(len(o.versions)))
for i := range o.versions {
if err := o.versions[i].MarshalXDRInto(m); err != nil {
return err
}
}
return m.Error
}
func (o *VersionList) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *VersionList) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
_versionsSize := int(u.UnmarshalUint32())
if _versionsSize < 0 {
return xdr.ElementSizeExceeded("versions", _versionsSize, 0)
} else if _versionsSize == 0 {
o.versions = nil
} else {
if _versionsSize <= len(o.versions) {
o.versions = o.versions[:_versionsSize]
} else {
o.versions = make([]fileVersion, _versionsSize)
}
for i := range o.versions {
(&o.versions[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
}

View File

@@ -14,26 +14,31 @@ package db
import (
stdsync "sync"
"sync/atomic"
"github.com/syncthing/syncthing/lib/fs"
"github.com/syncthing/syncthing/lib/osutil"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/sync"
)
type FileSet struct {
localVersion map[protocol.DeviceID]int64
mutex sync.Mutex
folder string
db *Instance
blockmap *BlockMap
localSize sizeTracker
globalSize sizeTracker
sequence int64 // Our local sequence number
folder string
db *Instance
blockmap *BlockMap
localSize sizeTracker
globalSize sizeTracker
remoteSequence map[protocol.DeviceID]int64 // Highest seen sequence numbers for other devices
updateMutex sync.Mutex // protects remoteSequence and database updates
}
// FileIntf is the set of methods implemented by both protocol.FileInfo and
// protocol.FileInfoTruncated.
// FileInfoTruncated.
type FileIntf interface {
Size() int64
FileSize() int64
FileName() string
IsDeleted() bool
IsInvalid() bool
IsDirectory() bool
@@ -42,7 +47,7 @@ type FileIntf interface {
}
// The Iterator is called with either a protocol.FileInfo or a
// protocol.FileInfoTruncated (depending on the method) and returns true to
// FileInfoTruncated (depending on the method) and returns true to
// continue iteration, false to stop.
type Iterator func(f FileIntf) bool
@@ -64,7 +69,7 @@ func (s *sizeTracker) addFile(f FileIntf) {
} else {
s.files++
}
s.bytes += f.Size()
s.bytes += f.FileSize()
s.mut.Unlock()
}
@@ -79,7 +84,7 @@ func (s *sizeTracker) removeFile(f FileIntf) {
} else {
s.files--
}
s.bytes -= f.Size()
s.bytes -= f.FileSize()
if s.deleted < 0 || s.files < 0 {
panic("bug: removed more than added")
}
@@ -94,11 +99,11 @@ func (s *sizeTracker) Size() (files, deleted int, bytes int64) {
func NewFileSet(folder string, db *Instance) *FileSet {
var s = FileSet{
localVersion: make(map[protocol.DeviceID]int64),
folder: folder,
db: db,
blockmap: NewBlockMap(db, db.folderIdx.ID([]byte(folder))),
mutex: sync.NewMutex(),
remoteSequence: make(map[protocol.DeviceID]int64),
folder: folder,
db: db,
blockmap: NewBlockMap(db, db.folderIdx.ID([]byte(folder))),
updateMutex: sync.NewMutex(),
}
s.db.checkGlobals([]byte(folder), &s.globalSize)
@@ -106,16 +111,17 @@ func NewFileSet(folder string, db *Instance) *FileSet {
var deviceID protocol.DeviceID
s.db.withAllFolderTruncated([]byte(folder), func(device []byte, f FileInfoTruncated) bool {
copy(deviceID[:], device)
if f.LocalVersion > s.localVersion[deviceID] {
s.localVersion[deviceID] = f.LocalVersion
}
if deviceID == protocol.LocalDeviceID {
if f.Sequence > s.sequence {
s.sequence = f.Sequence
}
s.localSize.addFile(f)
} else if f.Sequence > s.remoteSequence[deviceID] {
s.remoteSequence[deviceID] = f.Sequence
}
return true
})
l.Debugf("loaded localVersion for %q: %#v", folder, s.localVersion)
clock(s.localVersion[protocol.LocalDeviceID])
l.Debugf("loaded sequence for %q: %#v", folder, s.sequence)
return &s
}
@@ -123,13 +129,25 @@ func NewFileSet(folder string, db *Instance) *FileSet {
func (s *FileSet) Replace(device protocol.DeviceID, fs []protocol.FileInfo) {
l.Debugf("%s Replace(%v, [%d])", s.folder, device, len(fs))
normalizeFilenames(fs)
s.mutex.Lock()
defer s.mutex.Unlock()
s.localVersion[device] = s.db.replace([]byte(s.folder), device[:], fs, &s.localSize, &s.globalSize)
if len(fs) == 0 {
// Reset the local version if all files were removed.
s.localVersion[device] = 0
s.updateMutex.Lock()
defer s.updateMutex.Unlock()
if device == protocol.LocalDeviceID {
if len(fs) == 0 {
s.sequence = 0
} else {
// Always overwrite Sequence on updated files to ensure
// correct ordering. The caller is supposed to leave it set to
// zero anyhow.
for i := range fs {
fs[i].Sequence = atomic.AddInt64(&s.sequence, 1)
}
}
} else {
s.remoteSequence[device] = maxSequence(fs)
}
s.db.replace([]byte(s.folder), device[:], fs, &s.localSize, &s.globalSize)
if device == protocol.LocalDeviceID {
s.blockmap.Drop()
s.blockmap.Add(fs)
@@ -139,12 +157,15 @@ func (s *FileSet) Replace(device protocol.DeviceID, fs []protocol.FileInfo) {
func (s *FileSet) Update(device protocol.DeviceID, fs []protocol.FileInfo) {
l.Debugf("%s Update(%v, [%d])", s.folder, device, len(fs))
normalizeFilenames(fs)
s.mutex.Lock()
defer s.mutex.Unlock()
s.updateMutex.Lock()
defer s.updateMutex.Unlock()
if device == protocol.LocalDeviceID {
discards := make([]protocol.FileInfo, 0, len(fs))
updates := make([]protocol.FileInfo, 0, len(fs))
for _, newFile := range fs {
for i, newFile := range fs {
fs[i].Sequence = atomic.AddInt64(&s.sequence, 1)
existingFile, ok := s.db.getFile([]byte(s.folder), device[:], []byte(newFile.Name))
if !ok || !existingFile.Version.Equal(newFile.Version) {
discards = append(discards, existingFile)
@@ -153,10 +174,10 @@ func (s *FileSet) Update(device protocol.DeviceID, fs []protocol.FileInfo) {
}
s.blockmap.Discard(discards)
s.blockmap.Update(updates)
} else {
s.remoteSequence[device] = maxSequence(fs)
}
if lv := s.db.updateFiles([]byte(s.folder), device[:], fs, &s.localSize, &s.globalSize); lv > s.localVersion[device] {
s.localVersion[device] = lv
}
s.db.updateFiles([]byte(s.folder), device[:], fs, &s.localSize, &s.globalSize)
}
func (s *FileSet) WithNeed(device protocol.DeviceID, fn Iterator) {
@@ -228,10 +249,14 @@ func (s *FileSet) Availability(file string) []protocol.DeviceID {
return s.db.availability([]byte(s.folder), []byte(osutil.NormalizedFilename(file)))
}
func (s *FileSet) LocalVersion(device protocol.DeviceID) int64 {
s.mutex.Lock()
defer s.mutex.Unlock()
return s.localVersion[device]
func (s *FileSet) Sequence(device protocol.DeviceID) int64 {
if device == protocol.LocalDeviceID {
return atomic.LoadInt64(&s.sequence)
}
s.updateMutex.Lock()
defer s.updateMutex.Unlock()
return s.remoteSequence[device]
}
func (s *FileSet) LocalSize() (files, deleted int, bytes int64) {
@@ -242,16 +267,65 @@ func (s *FileSet) GlobalSize() (files, deleted int, bytes int64) {
return s.globalSize.Size()
}
func (s *FileSet) IndexID(device protocol.DeviceID) protocol.IndexID {
id := s.db.getIndexID(device[:], []byte(s.folder))
if id == 0 && device == protocol.LocalDeviceID {
// No index ID set yet. We create one now.
id = protocol.NewIndexID()
s.db.setIndexID(device[:], []byte(s.folder), id)
}
return id
}
func (s *FileSet) SetIndexID(device protocol.DeviceID, id protocol.IndexID) {
if device == protocol.LocalDeviceID {
panic("do not explicitly set index ID for local device")
}
s.db.setIndexID(device[:], []byte(s.folder), id)
}
func (s *FileSet) MtimeFS() *fs.MtimeFS {
prefix := s.db.mtimesKey([]byte(s.folder))
kv := NewNamespacedKV(s.db, string(prefix))
return fs.NewMtimeFS(kv)
}
func (s *FileSet) ListDevices() []protocol.DeviceID {
s.updateMutex.Lock()
devices := make([]protocol.DeviceID, 0, len(s.remoteSequence))
for id, seq := range s.remoteSequence {
if seq > 0 {
devices = append(devices, id)
}
}
s.updateMutex.Unlock()
return devices
}
// maxSequence returns the highest of the Sequence numbers found in
// the given slice of FileInfos. This should really be the Sequence of
// the last item, but Syncthing v0.14.0 and other implementations may not
// implement update sorting....
func maxSequence(fs []protocol.FileInfo) int64 {
var max int64
for _, f := range fs {
if f.Sequence > max {
max = f.Sequence
}
}
return max
}
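As a usage note (illustrative values only): since a remote may not send updates in sequence order, the highest value wins; this is what Replace and Update above store in remoteSequence.

	fs := []protocol.FileInfo{{Sequence: 3}, {Sequence: 7}, {Sequence: 5}}
	seq := maxSequence(fs) // seq == 7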
// DropFolder clears out all information related to the given folder from the
// database.
func DropFolder(db *Instance, folder string) {
db.dropFolder([]byte(folder))
db.dropMtimes([]byte(folder))
bm := &BlockMap{
db: db,
folder: db.folderIdx.ID([]byte(folder)),
}
bm.Drop()
NewVirtualMtimeRepo(db, folder).Drop()
}
func normalizeFilenames(fs []protocol.FileInfo) {

View File

@@ -88,7 +88,7 @@ func (l fileList) String() string {
var b bytes.Buffer
b.WriteString("[]protocol.FileList{\n")
for _, f := range l {
fmt.Fprintf(&b, " %q: #%d, %d bytes, %d blocks, flags=%o\n", f.Name, f.Version, f.Size(), len(f.Blocks), f.Flags)
fmt.Fprintf(&b, " %q: #%d, %d bytes, %d blocks, perms=%o\n", f.Name, f.Version, f.Size, len(f.Blocks), f.Permissions)
}
b.WriteString("}")
return b.String()
@@ -100,35 +100,35 @@ func TestGlobalSet(t *testing.T) {
m := db.NewFileSet("test", ldb)
local0 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "z", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(8)},
protocol.FileInfo{Name: "a", Sequence: 1, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Sequence: 2, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Sequence: 3, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Sequence: 4, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "z", Sequence: 5, Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(8)},
}
local1 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "z", Version: protocol.Vector{{ID: myID, Value: 1001}}, Flags: protocol.FlagDeleted},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "z", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Deleted: true},
}
localTot := fileList{
local0[0],
local0[1],
local0[2],
local0[3],
protocol.FileInfo{Name: "z", Version: protocol.Vector{{ID: myID, Value: 1001}}, Flags: protocol.FlagDeleted},
protocol.FileInfo{Name: "z", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Deleted: true},
}
remote0 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(5)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(5)},
}
remote1 := fileList{
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(6)},
protocol.FileInfo{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(6)},
protocol.FileInfo{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(7)},
}
remoteTot := fileList{
remote0[0],
@@ -178,7 +178,7 @@ func TestGlobalSet(t *testing.T) {
} else {
globalFiles++
}
globalBytes += f.Size()
globalBytes += f.FileSize()
}
gsFiles, gsDeleted, gsBytes := m.GlobalSize()
if gsFiles != globalFiles {
@@ -208,7 +208,7 @@ func TestGlobalSet(t *testing.T) {
} else {
haveFiles++
}
haveBytes += f.Size()
haveBytes += f.FileSize()
}
lsFiles, lsDeleted, lsBytes := m.LocalSize()
if lsFiles != haveFiles {
@@ -303,23 +303,23 @@ func TestNeedWithInvalid(t *testing.T) {
s := db.NewFileSet("test", ldb)
localHave := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
}
remote0Have := fileList{
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(5), Invalid: true},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(7)},
}
remote1Have := fileList{
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1004}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(5), Invalid: true},
protocol.FileInfo{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1004}}}, Blocks: genBlocks(5), Invalid: true},
}
expectedNeed := fileList{
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(7)},
}
s.Replace(protocol.LocalDeviceID, localHave)
@@ -340,10 +340,10 @@ func TestUpdateToInvalid(t *testing.T) {
s := db.NewFileSet("test", ldb)
localHave := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(5), Invalid: true},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(7)},
}
s.Replace(protocol.LocalDeviceID, localHave)
@@ -355,7 +355,7 @@ func TestUpdateToInvalid(t *testing.T) {
t.Errorf("Have incorrect before invalidation;\n A: %v !=\n E: %v", have, localHave)
}
localHave[1] = protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}, Flags: protocol.FlagInvalid}
localHave[1] = protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Invalid: true}
s.Update(protocol.LocalDeviceID, localHave[1:2])
have = fileList(haveList(s, protocol.LocalDeviceID))
@@ -372,16 +372,16 @@ func TestInvalidAvailability(t *testing.T) {
s := db.NewFileSet("test", ldb)
remote0Have := fileList{
protocol.FileInfo{Name: "both", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "r0only", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "none", Version: protocol.Vector{{ID: myID, Value: 1004}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "both", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(5), Invalid: true},
protocol.FileInfo{Name: "r0only", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "none", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1004}}}, Blocks: genBlocks(5), Invalid: true},
}
remote1Have := fileList{
protocol.FileInfo{Name: "both", Version: protocol.Vector{{ID: myID, Value: 1001}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "r0only", Version: protocol.Vector{{ID: myID, Value: 1003}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "none", Version: protocol.Vector{{ID: myID, Value: 1004}}, Blocks: genBlocks(5), Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "both", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "r1only", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(7)},
protocol.FileInfo{Name: "r0only", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1003}}}, Blocks: genBlocks(5), Invalid: true},
protocol.FileInfo{Name: "none", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1004}}}, Blocks: genBlocks(5), Invalid: true},
}
s.Replace(remoteDevice0, remote0Have)
@@ -410,17 +410,17 @@ func TestGlobalReset(t *testing.T) {
m := db.NewFileSet("test", ldb)
local := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
remote := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
m.Replace(protocol.LocalDeviceID, local)
@@ -448,23 +448,23 @@ func TestNeed(t *testing.T) {
m := db.NewFileSet("test", ldb)
local := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
remote := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
shouldNeed := []protocol.FileInfo{
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1001}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1001}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
m.Replace(protocol.LocalDeviceID, local)
@@ -480,31 +480,31 @@ func TestNeed(t *testing.T) {
}
}
func TestLocalVersion(t *testing.T) {
func TestSequence(t *testing.T) {
ldb := db.OpenMemory()
m := db.NewFileSet("test", ldb)
local1 := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
local2 := []protocol.FileInfo{
local1[0],
// [1] deleted
local1[2],
{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
m.Replace(protocol.LocalDeviceID, local1)
c0 := m.LocalVersion(protocol.LocalDeviceID)
c0 := m.Sequence(protocol.LocalDeviceID)
m.Replace(protocol.LocalDeviceID, local2)
c1 := m.LocalVersion(protocol.LocalDeviceID)
c1 := m.Sequence(protocol.LocalDeviceID)
if !(c1 > c0) {
t.Fatal("Local version number should have incremented")
}
@@ -515,17 +515,17 @@ func TestListDropFolder(t *testing.T) {
s0 := db.NewFileSet("test0", ldb)
local1 := []protocol.FileInfo{
{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
s0.Replace(protocol.LocalDeviceID, local1)
s1 := db.NewFileSet("test1", ldb)
local2 := []protocol.FileInfo{
{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "e", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "f", Version: protocol.Vector{{ID: myID, Value: 1002}}},
{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "e", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
{Name: "f", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}},
}
s1.Replace(remoteDevice0, local2)
@@ -566,24 +566,24 @@ func TestGlobalNeedWithInvalid(t *testing.T) {
s := db.NewFileSet("test1", ldb)
rem0 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1002}}, Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Invalid: true},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
}
s.Replace(remoteDevice0, rem0)
rem1 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Flags: protocol.FlagInvalid},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Invalid: true},
}
s.Replace(remoteDevice1, rem1)
total := fileList{
// There's a valid copy of each file, so it should be merged
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1002}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1002}}}, Blocks: genBlocks(4)},
}
need := fileList(needList(s, protocol.LocalDeviceID))
@@ -609,7 +609,7 @@ func TestLongPath(t *testing.T) {
name := b.String() // 5000 characters
local := []protocol.FileInfo{
{Name: string(name), Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: string(name), Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
s.Replace(protocol.LocalDeviceID, local)
@@ -633,7 +633,7 @@ func TestCommitted(t *testing.T) {
s := db.NewFileSet("test", ldb)
local := []protocol.FileInfo{
{Name: string("file"), Version: protocol.Vector{{ID: myID, Value: 1000}}},
{Name: string("file"), Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}},
}
// Adding a file should increase the counter
@@ -659,12 +659,12 @@ func TestCommitted(t *testing.T) {
func BenchmarkUpdateOneFile(b *testing.B) {
local0 := fileList{
protocol.FileInfo{Name: "a", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(4)},
protocol.FileInfo{Name: "a", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(1)},
protocol.FileInfo{Name: "b", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(2)},
protocol.FileInfo{Name: "c", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(3)},
protocol.FileInfo{Name: "d", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(4)},
// A longer name is more realistic and causes more allocations
protocol.FileInfo{Name: "zajksdhaskjdh/askjdhaskjdashkajshd/kasjdhaskjdhaskdjhaskdjash/dkjashdaksjdhaskdjahskdjh", Version: protocol.Vector{{ID: myID, Value: 1000}}, Blocks: genBlocks(8)},
protocol.FileInfo{Name: "zajksdhaskjdh/askjdhaskjdashkajshd/kasjdhaskjdhaskdjhaskdjash/dkjashdaksjdhaskdjahskdjh", Version: protocol.Vector{Counters: []protocol.Counter{{ID: myID, Value: 1000}}}, Blocks: genBlocks(8)},
}
ldb, err := db.Open("testdata/benchmarkupdate.db")
@@ -687,3 +687,35 @@ func BenchmarkUpdateOneFile(b *testing.B) {
b.ReportAllocs()
}
func TestIndexID(t *testing.T) {
ldb := db.OpenMemory()
s := db.NewFileSet("test", ldb)
// The Index ID for some random device is zero by default.
id := s.IndexID(remoteDevice0)
if id != 0 {
t.Errorf("index ID for remote device should default to zero, not %d", id)
}
// The Index ID for someone else should be settable
s.SetIndexID(remoteDevice0, 42)
id = s.IndexID(remoteDevice0)
if id != 42 {
t.Errorf("index ID for remote device should be remembered; got %d, expected %d", id, 42)
}
// Our own index ID should be generated randomly.
id = s.IndexID(protocol.LocalDeviceID)
if id == 0 {
t.Errorf("index ID for local device should be random, not zero")
}
t.Logf("random index ID is 0x%016x", id)
// But of course always the same after that.
again := s.IndexID(protocol.LocalDeviceID)
if again != id {
t.Errorf("index ID changed; %d != %d", again, id)
}
}

lib/db/structs.go Normal file (62 lines added)

@@ -0,0 +1,62 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate go run ../../script/protofmt.go structs.proto
//go:generate protoc --proto_path=../../../../../:../../../../gogo/protobuf/protobuf:. --gogofast_out=. structs.proto
package db
import (
"fmt"
"time"
"github.com/syncthing/syncthing/lib/protocol"
)
func (f FileInfoTruncated) String() string {
return fmt.Sprintf("File{Name:%q, Permissions:0%o, Modified:%v, Version:%v, Length:%d, Deleted:%v, Invalid:%v, NoPermissions:%v}",
f.Name, f.Permissions, f.ModTime(), f.Version, f.Size, f.Deleted, f.Invalid, f.NoPermissions)
}
func (f FileInfoTruncated) IsDeleted() bool {
return f.Deleted
}
func (f FileInfoTruncated) IsInvalid() bool {
return f.Invalid
}
func (f FileInfoTruncated) IsDirectory() bool {
return f.Type == protocol.FileInfoTypeDirectory
}
func (f FileInfoTruncated) IsSymlink() bool {
switch f.Type {
case protocol.FileInfoTypeSymlinkDirectory, protocol.FileInfoTypeSymlinkFile, protocol.FileInfoTypeSymlinkUnknown:
return true
default:
return false
}
}
func (f FileInfoTruncated) HasPermissionBits() bool {
return !f.NoPermissions
}
func (f FileInfoTruncated) FileSize() int64 {
if f.IsDirectory() || f.IsDeleted() {
return 128
}
return f.Size
}
func (f FileInfoTruncated) FileName() string {
return f.Name
}
func (f FileInfoTruncated) ModTime() time.Time {
return time.Unix(f.ModifiedS, int64(f.ModifiedNs))
}
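The tests earlier in this diff switched from f.Size() to f.FileSize() because FileSize() reports a nominal 128 bytes for directories and deleted entries instead of the raw Size field. A minimal sketch of how a size accumulation would use it (the totalBytes helper is hypothetical, not part of this change):
func totalBytes(files []FileInfoTruncated) int64 {
	var total int64
	for _, f := range files {
		// 128 for directories and deletions, the stored Size otherwise.
		total += f.FileSize()
	}
	return total
}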

lib/db/structs.pb.go Normal file (943 lines added)

@@ -0,0 +1,943 @@
// Code generated by protoc-gen-gogo.
// source: structs.proto
// DO NOT EDIT!
/*
Package db is a generated protocol buffer package.
It is generated from these files:
structs.proto
It has these top-level messages:
FileVersion
VersionList
FileInfoTruncated
*/
package db
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import protocol "github.com/syncthing/syncthing/lib/protocol"
import io "io"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
type FileVersion struct {
Version protocol.Vector `protobuf:"bytes,1,opt,name=version" json:"version"`
Device []byte `protobuf:"bytes,2,opt,name=device,proto3" json:"device,omitempty"`
}
func (m *FileVersion) Reset() { *m = FileVersion{} }
func (m *FileVersion) String() string { return proto.CompactTextString(m) }
func (*FileVersion) ProtoMessage() {}
func (*FileVersion) Descriptor() ([]byte, []int) { return fileDescriptorStructs, []int{0} }
type VersionList struct {
Versions []FileVersion `protobuf:"bytes,1,rep,name=versions" json:"versions"`
}
func (m *VersionList) Reset() { *m = VersionList{} }
func (*VersionList) ProtoMessage() {}
func (*VersionList) Descriptor() ([]byte, []int) { return fileDescriptorStructs, []int{1} }
// Must be the same as FileInfo but without the blocks field
type FileInfoTruncated struct {
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Type protocol.FileInfoType `protobuf:"varint,2,opt,name=type,proto3,enum=protocol.FileInfoType" json:"type,omitempty"`
Size int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
Permissions uint32 `protobuf:"varint,4,opt,name=permissions,proto3" json:"permissions,omitempty"`
ModifiedS int64 `protobuf:"varint,5,opt,name=modified_s,json=modifiedS,proto3" json:"modified_s,omitempty"`
ModifiedNs int32 `protobuf:"varint,11,opt,name=modified_ns,json=modifiedNs,proto3" json:"modified_ns,omitempty"`
Deleted bool `protobuf:"varint,6,opt,name=deleted,proto3" json:"deleted,omitempty"`
Invalid bool `protobuf:"varint,7,opt,name=invalid,proto3" json:"invalid,omitempty"`
NoPermissions bool `protobuf:"varint,8,opt,name=no_permissions,json=noPermissions,proto3" json:"no_permissions,omitempty"`
Version protocol.Vector `protobuf:"bytes,9,opt,name=version" json:"version"`
Sequence int64 `protobuf:"varint,10,opt,name=sequence,proto3" json:"sequence,omitempty"`
}
func (m *FileInfoTruncated) Reset() { *m = FileInfoTruncated{} }
func (*FileInfoTruncated) ProtoMessage() {}
func (*FileInfoTruncated) Descriptor() ([]byte, []int) { return fileDescriptorStructs, []int{2} }
func init() {
proto.RegisterType((*FileVersion)(nil), "db.FileVersion")
proto.RegisterType((*VersionList)(nil), "db.VersionList")
proto.RegisterType((*FileInfoTruncated)(nil), "db.FileInfoTruncated")
}
func (m *FileVersion) Marshal() (data []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return data[:n], nil
}
func (m *FileVersion) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
data[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(m.Version.ProtoSize()))
n1, err := m.Version.MarshalTo(data[i:])
if err != nil {
return 0, err
}
i += n1
if len(m.Device) > 0 {
data[i] = 0x12
i++
i = encodeVarintStructs(data, i, uint64(len(m.Device)))
i += copy(data[i:], m.Device)
}
return i, nil
}
func (m *VersionList) Marshal() (data []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return data[:n], nil
}
func (m *VersionList) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Versions) > 0 {
for _, msg := range m.Versions {
data[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(msg.ProtoSize()))
n, err := msg.MarshalTo(data[i:])
if err != nil {
return 0, err
}
i += n
}
}
return i, nil
}
func (m *FileInfoTruncated) Marshal() (data []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return data[:n], nil
}
func (m *FileInfoTruncated) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.Name) > 0 {
data[i] = 0xa
i++
i = encodeVarintStructs(data, i, uint64(len(m.Name)))
i += copy(data[i:], m.Name)
}
if m.Type != 0 {
data[i] = 0x10
i++
i = encodeVarintStructs(data, i, uint64(m.Type))
}
if m.Size != 0 {
data[i] = 0x18
i++
i = encodeVarintStructs(data, i, uint64(m.Size))
}
if m.Permissions != 0 {
data[i] = 0x20
i++
i = encodeVarintStructs(data, i, uint64(m.Permissions))
}
if m.ModifiedS != 0 {
data[i] = 0x28
i++
i = encodeVarintStructs(data, i, uint64(m.ModifiedS))
}
if m.Deleted {
data[i] = 0x30
i++
if m.Deleted {
data[i] = 1
} else {
data[i] = 0
}
i++
}
if m.Invalid {
data[i] = 0x38
i++
if m.Invalid {
data[i] = 1
} else {
data[i] = 0
}
i++
}
if m.NoPermissions {
data[i] = 0x40
i++
if m.NoPermissions {
data[i] = 1
} else {
data[i] = 0
}
i++
}
data[i] = 0x4a
i++
i = encodeVarintStructs(data, i, uint64(m.Version.ProtoSize()))
n2, err := m.Version.MarshalTo(data[i:])
if err != nil {
return 0, err
}
i += n2
if m.Sequence != 0 {
data[i] = 0x50
i++
i = encodeVarintStructs(data, i, uint64(m.Sequence))
}
if m.ModifiedNs != 0 {
data[i] = 0x58
i++
i = encodeVarintStructs(data, i, uint64(m.ModifiedNs))
}
return i, nil
}
func encodeFixed64Structs(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Structs(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintStructs(data []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
return offset + 1
}
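encodeVarintStructs emits the standard protobuf varint: seven payload bits per byte, least significant group first, with the high bit set on every byte except the last. A standalone sketch of the same encoding (hypothetical helper, for illustration only):
// varintExample encodes 300, which needs two bytes: 0xAC 0x02.
// 300 = 0b1_0010_1100; the low seven bits 0x2C plus the continuation bit give
// 0xAC, and the remaining bits (300 >> 7 == 2) give 0x02.
func varintExample() []byte {
	buf := make([]byte, 0, 2)
	v := uint64(300)
	for v >= 1<<7 {
		buf = append(buf, byte(v&0x7f|0x80))
		v >>= 7
	}
	return append(buf, byte(v))
}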
func (m *FileVersion) ProtoSize() (n int) {
var l int
_ = l
l = m.Version.ProtoSize()
n += 1 + l + sovStructs(uint64(l))
l = len(m.Device)
if l > 0 {
n += 1 + l + sovStructs(uint64(l))
}
return n
}
func (m *VersionList) ProtoSize() (n int) {
var l int
_ = l
if len(m.Versions) > 0 {
for _, e := range m.Versions {
l = e.ProtoSize()
n += 1 + l + sovStructs(uint64(l))
}
}
return n
}
func (m *FileInfoTruncated) ProtoSize() (n int) {
var l int
_ = l
l = len(m.Name)
if l > 0 {
n += 1 + l + sovStructs(uint64(l))
}
if m.Type != 0 {
n += 1 + sovStructs(uint64(m.Type))
}
if m.Size != 0 {
n += 1 + sovStructs(uint64(m.Size))
}
if m.Permissions != 0 {
n += 1 + sovStructs(uint64(m.Permissions))
}
if m.ModifiedS != 0 {
n += 1 + sovStructs(uint64(m.ModifiedS))
}
if m.Deleted {
n += 2
}
if m.Invalid {
n += 2
}
if m.NoPermissions {
n += 2
}
l = m.Version.ProtoSize()
n += 1 + l + sovStructs(uint64(l))
if m.Sequence != 0 {
n += 1 + sovStructs(uint64(m.Sequence))
}
if m.ModifiedNs != 0 {
n += 1 + sovStructs(uint64(m.ModifiedNs))
}
return n
}
func sovStructs(x uint64) (n int) {
for {
n++
x >>= 7
if x == 0 {
break
}
}
return n
}
func sozStructs(x uint64) (n int) {
return sovStructs(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
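sozStructs sizes a signed value after ZigZag-mapping it, so that small negative numbers still encode as short varints: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3. A sketch of the mapping on its own (the zigzag helper name is made up):
func zigzag(x int64) uint64 {
	// Arithmetic shift: x >> 63 is all ones for negative x, all zeros otherwise.
	return uint64((x << 1) ^ (x >> 63))
}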
func (m *FileVersion) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: FileVersion: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: FileVersion: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthStructs
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.Version.Unmarshal(data[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthStructs
}
postIndex := iNdEx + byteLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Device = append(m.Device[:0], data[iNdEx:postIndex]...)
if m.Device == nil {
m.Device = []byte{}
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *VersionList) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: VersionList: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: VersionList: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Versions", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthStructs
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Versions = append(m.Versions, FileVersion{})
if err := m.Versions[len(m.Versions)-1].Unmarshal(data[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *FileInfoTruncated) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: FileInfoTruncated: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: FileInfoTruncated: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthStructs
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Name = string(data[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
}
m.Type = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.Type |= (protocol.FileInfoType(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Size", wireType)
}
m.Size = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.Size |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
case 4:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Permissions", wireType)
}
m.Permissions = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.Permissions |= (uint32(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
case 5:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field ModifiedS", wireType)
}
m.ModifiedS = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.ModifiedS |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
case 6:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Deleted", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.Deleted = bool(v != 0)
case 7:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Invalid", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.Invalid = bool(v != 0)
case 8:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field NoPermissions", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.NoPermissions = bool(v != 0)
case 9:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Version", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthStructs
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.Version.Unmarshal(data[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 10:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Sequence", wireType)
}
m.Sequence = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.Sequence |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
case 11:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field ModifiedNs", wireType)
}
m.ModifiedNs = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowStructs
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.ModifiedNs |= (int32(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipStructs(data[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthStructs
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipStructs(data []byte) (n int, err error) {
l := len(data)
iNdEx := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowStructs
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowStructs
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
break
}
}
return iNdEx, nil
case 1:
iNdEx += 8
return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowStructs
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthStructs
}
return iNdEx, nil
case 3:
for {
var innerWire uint64
var start int = iNdEx
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowStructs
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
innerWireType := int(innerWire & 0x7)
if innerWireType == 4 {
break
}
next, err := skipStructs(data[start:])
if err != nil {
return 0, err
}
iNdEx = start + next
}
return iNdEx, nil
case 4:
return iNdEx, nil
case 5:
iNdEx += 4
return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
}
panic("unreachable")
}
var (
ErrInvalidLengthStructs = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowStructs = fmt.Errorf("proto: integer overflow")
)
var fileDescriptorStructs = []byte{
// 419 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x51, 0xcd, 0xaa, 0xd3, 0x40,
0x18, 0x4d, 0xda, 0xdc, 0x36, 0xfd, 0x62, 0xaf, 0x3a, 0xc8, 0x25, 0x14, 0x4c, 0x2f, 0x05, 0x41,
0x04, 0x53, 0xbd, 0xe2, 0xc6, 0x65, 0x17, 0x05, 0x41, 0x44, 0x46, 0xa9, 0xcb, 0xd2, 0x64, 0xa6,
0xe9, 0x40, 0x32, 0x13, 0x33, 0x93, 0x42, 0x7d, 0x12, 0x97, 0x7d, 0x9c, 0x2e, 0x7d, 0x02, 0xd1,
0xfa, 0x12, 0x2e, 0x9d, 0x4e, 0x7e, 0xcc, 0xd2, 0x45, 0xe0, 0x3b, 0x73, 0xce, 0xf9, 0xce, 0x99,
0x0c, 0x8c, 0xa5, 0x2a, 0xca, 0x58, 0xc9, 0x30, 0x2f, 0x84, 0x12, 0xa8, 0x47, 0xa2, 0xc9, 0xf3,
0x84, 0xa9, 0x5d, 0x19, 0x85, 0xb1, 0xc8, 0xe6, 0x89, 0x48, 0xc4, 0xdc, 0x50, 0x51, 0xb9, 0x35,
0xc8, 0x00, 0x33, 0x55, 0x96, 0xc9, 0xeb, 0x8e, 0x5c, 0x1e, 0x78, 0xac, 0x76, 0x8c, 0x27, 0x9d,
0x29, 0x65, 0x51, 0xb5, 0x21, 0x16, 0xe9, 0x3c, 0xa2, 0x79, 0x65, 0x9b, 0x7d, 0x06, 0x6f, 0xc9,
0x52, 0xba, 0xa2, 0x85, 0x64, 0x82, 0xa3, 0x17, 0x30, 0xdc, 0x57, 0xa3, 0x6f, 0xdf, 0xda, 0x4f,
0xbd, 0xbb, 0x07, 0x61, 0x63, 0x0a, 0x57, 0x34, 0x56, 0xa2, 0x58, 0x38, 0xa7, 0x1f, 0x53, 0x0b,
0x37, 0x32, 0x74, 0x03, 0x03, 0x42, 0xf7, 0x2c, 0xa6, 0x7e, 0x4f, 0x1b, 0xee, 0xe1, 0x1a, 0xcd,
0x96, 0xe0, 0xd5, 0x4b, 0xdf, 0x31, 0xa9, 0xd0, 0x4b, 0x70, 0x6b, 0x87, 0xd4, 0x9b, 0xfb, 0x7a,
0xf3, 0xfd, 0x90, 0x44, 0x61, 0x27, 0xbb, 0x5e, 0xdc, 0xca, 0xde, 0x38, 0xdf, 0x8e, 0x53, 0x6b,
0xf6, 0xa7, 0x07, 0x0f, 0x2f, 0xaa, 0xb7, 0x7c, 0x2b, 0x3e, 0x15, 0x25, 0x8f, 0x37, 0x8a, 0x12,
0x84, 0xc0, 0xe1, 0x9b, 0x8c, 0x9a, 0x92, 0x23, 0x6c, 0x66, 0xf4, 0x0c, 0x1c, 0x75, 0xc8, 0xab,
0x1e, 0xd7, 0x77, 0x37, 0xff, 0x8a, 0xb7, 0x76, 0xcd, 0x62, 0xa3, 0xb9, 0xf8, 0x25, 0xfb, 0x4a,
0xfd, 0xbe, 0xd6, 0xf6, 0xb1, 0x99, 0xd1, 0x2d, 0x78, 0x39, 0x2d, 0x32, 0x26, 0xab, 0x96, 0x8e,
0xa6, 0xc6, 0xb8, 0x7b, 0x84, 0x1e, 0x03, 0x64, 0x82, 0xb0, 0x2d, 0xa3, 0x64, 0x2d, 0xfd, 0x2b,
0xe3, 0x1d, 0x35, 0x27, 0x1f, 0x91, 0x0f, 0x43, 0x42, 0x53, 0xaa, 0xfb, 0xf9, 0x03, 0xcd, 0xb9,
0xb8, 0x81, 0x17, 0x86, 0xf1, 0xfd, 0x26, 0x65, 0xc4, 0x1f, 0x56, 0x4c, 0x0d, 0xd1, 0x13, 0xb8,
0xe6, 0x62, 0xdd, 0xcd, 0x75, 0x8d, 0x60, 0xcc, 0xc5, 0x87, 0x4e, 0x72, 0xe7, 0x5d, 0x46, 0xff,
0xf7, 0x2e, 0x13, 0x70, 0x25, 0xfd, 0x52, 0x52, 0xae, 0x5f, 0x06, 0x4c, 0xd3, 0x16, 0xa3, 0x29,
0x78, 0xed, 0x3d, 0x74, 0xa2, 0xa7, 0xe9, 0x2b, 0xdc, 0x5e, 0xed, 0x7d, 0xfd, 0xeb, 0x17, 0x8f,
0x4e, 0xbf, 0x02, 0xeb, 0x74, 0x0e, 0xec, 0xef, 0xfa, 0xfb, 0x79, 0x0e, 0xac, 0xe3, 0xef, 0xc0,
0x8e, 0x06, 0x26, 0xf8, 0xd5, 0xdf, 0x00, 0x00, 0x00, 0xff, 0xff, 0x2a, 0xae, 0x24, 0x77, 0xb3,
0x02, 0x00, 0x00,
}

lib/db/structs.proto Normal file (36 lines added)

@@ -0,0 +1,36 @@
syntax = "proto3";
package db;
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
import "github.com/syncthing/syncthing/lib/protocol/bep.proto";
option (gogoproto.goproto_getters_all) = false;
option (gogoproto.sizer_all) = false;
option (gogoproto.protosizer_all) = true;
message FileVersion {
protocol.Vector version = 1 [(gogoproto.nullable) = false];
bytes device = 2;
}
message VersionList {
option (gogoproto.goproto_stringer) = false;
repeated FileVersion versions = 1 [(gogoproto.nullable) = false];
}
// Must be the same as FileInfo but without the blocks field
message FileInfoTruncated {
option (gogoproto.goproto_stringer) = false;
string name = 1;
protocol.FileInfoType type = 2;
int64 size = 3;
uint32 permissions = 4;
int64 modified_s = 5;
int32 modified_ns = 11;
bool deleted = 6;
bool invalid = 7;
bool no_permissions = 8;
protocol.Vector version = 9 [(gogoproto.nullable) = false];
int64 sequence = 10;
}
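FileVersion pairs a version vector with the raw device ID bytes that announced it, and VersionList collects several of them. A hedged sketch of building and marshalling one with the generated types above (values and the helper name are made up):
func versionListExample(dev []byte, ver protocol.Vector) ([]byte, error) {
	vl := VersionList{
		Versions: []FileVersion{{Version: ver, Device: dev}},
	}
	// Marshal uses the generated ProtoSize/MarshalTo pair shown earlier.
	return vl.Marshal()
}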


Binary file not shown.


@@ -1,52 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"github.com/calmh/xdr"
"github.com/syncthing/syncthing/lib/protocol"
)
type FileInfoTruncated struct {
protocol.FileInfo
}
func (o *FileInfoTruncated) UnmarshalXDR(bs []byte) error {
return o.UnmarshalXDRFrom(&xdr.Unmarshaller{Data: bs})
}
func (o *FileInfoTruncated) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Name = u.UnmarshalStringMax(8192)
o.Flags = u.UnmarshalUint32()
o.Modified = int64(u.UnmarshalUint64())
(&o.Version).UnmarshalXDRFrom(u)
o.LocalVersion = int64(u.UnmarshalUint64())
_BlocksSize := int(u.UnmarshalUint32())
if _BlocksSize < 0 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 10000000)
} else if _BlocksSize == 0 {
o.Blocks = nil
} else {
if _BlocksSize > 10000000 {
return xdr.ElementSizeExceeded("Blocks", _BlocksSize, 10000000)
}
for i := 0; i < _BlocksSize; i++ {
size := int64(u.UnmarshalUint32())
o.CachedSize += size
u.UnmarshalBytes()
}
}
return u.Error
}
func BlocksToSize(num int) int64 {
if num < 2 {
return protocol.BlockSize / 2
}
return int64(num-1)*protocol.BlockSize + protocol.BlockSize/2
}


@@ -1,79 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"encoding/binary"
"fmt"
"time"
)
// This type encapsulates a repository of mtimes for platforms where file mtimes
// can't be set to arbitrary values. For this to work, we need to store both
// the mtime we tried to set (the "actual" mtime) as well as the mtime the file
// has when we're done touching it (the "disk" mtime) so that we can tell if it
// was changed. So in GetMtime(), it's not sufficient that the record exists --
// the argument must also equal the "disk" mtime in the record, otherwise it's
// been touched locally and the "disk" mtime is actually correct.
type VirtualMtimeRepo struct {
ns *NamespacedKV
}
func NewVirtualMtimeRepo(ldb *Instance, folder string) *VirtualMtimeRepo {
var prefix [5]byte // key type + 4 bytes folder idx number
prefix[0] = KeyTypeVirtualMtime
binary.BigEndian.PutUint32(prefix[1:], ldb.folderIdx.ID([]byte(folder)))
return &VirtualMtimeRepo{
ns: NewNamespacedKV(ldb, string(prefix[:])),
}
}
func (r *VirtualMtimeRepo) UpdateMtime(path string, diskMtime, actualMtime time.Time) {
l.Debugf("virtual mtime: storing values for path:%s disk:%v actual:%v", path, diskMtime, actualMtime)
diskBytes, _ := diskMtime.MarshalBinary()
actualBytes, _ := actualMtime.MarshalBinary()
data := append(diskBytes, actualBytes...)
r.ns.PutBytes(path, data)
}
func (r *VirtualMtimeRepo) GetMtime(path string, diskMtime time.Time) time.Time {
data, exists := r.ns.Bytes(path)
if !exists {
// Absence of debug print is significant enough in itself here
return diskMtime
}
var mtime time.Time
if err := mtime.UnmarshalBinary(data[:len(data)/2]); err != nil {
panic(fmt.Sprintf("Can't unmarshal stored mtime at path %s: %v", path, err))
}
if mtime.Equal(diskMtime) {
if err := mtime.UnmarshalBinary(data[len(data)/2:]); err != nil {
panic(fmt.Sprintf("Can't unmarshal stored mtime at path %s: %v", path, err))
}
l.Debugf("virtual mtime: return %v instead of %v for path: %s", mtime, diskMtime, path)
return mtime
}
l.Debugf("virtual mtime: record exists, but mismatch inDisk: %v dbDisk: %v for path: %s", diskMtime, mtime, path)
return diskMtime
}
func (r *VirtualMtimeRepo) DeleteMtime(path string) {
r.ns.Delete(path)
}
func (r *VirtualMtimeRepo) Drop() {
r.ns.Reset()
}


@@ -1,74 +0,0 @@
// Copyright (C) 2015 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package db
import (
"testing"
"time"
)
func TestVirtualMtimeRepo(t *testing.T) {
ldb := OpenMemory()
// A few repos so we can ensure they don't pollute each other
repo1 := NewVirtualMtimeRepo(ldb, "folder1")
repo2 := NewVirtualMtimeRepo(ldb, "folder2")
// Since GetMtime() returns its argument if the key isn't found or is outdated, we need a dummy to test with.
dummyTime := time.Date(2001, time.February, 3, 4, 5, 6, 0, time.UTC)
// Some times to test with
time1 := time.Date(2001, time.February, 3, 4, 5, 7, 0, time.UTC)
time2 := time.Date(2010, time.February, 3, 4, 5, 6, 0, time.UTC)
file1 := "file1.txt"
// Files are not present at the start
if v := repo1.GetMtime(file1, dummyTime); !v.Equal(dummyTime) {
t.Errorf("Mtime should be missing (%v) from repo 1 but it's %v", dummyTime, v)
}
if v := repo2.GetMtime(file1, dummyTime); !v.Equal(dummyTime) {
t.Errorf("Mtime should be missing (%v) from repo 2 but it's %v", dummyTime, v)
}
repo1.UpdateMtime(file1, time1, time2)
// Now it should return time2 only when time1 is passed as the argument
if v := repo1.GetMtime(file1, time1); !v.Equal(time2) {
t.Errorf("Mtime should be %v for disk time %v but we got %v", time2, time1, v)
}
if v := repo1.GetMtime(file1, dummyTime); !v.Equal(dummyTime) {
t.Errorf("Mtime should be %v for disk time %v but we got %v", dummyTime, dummyTime, v)
}
// repo2 shouldn't know about this file
if v := repo2.GetMtime(file1, time1); !v.Equal(time1) {
t.Errorf("Mtime should be %v for disk time %v in repo 2 but we got %v", time1, time1, v)
}
repo1.DeleteMtime(file1)
// Now it should be gone
if v := repo1.GetMtime(file1, time1); !v.Equal(time1) {
t.Errorf("Mtime should be %v for disk time %v but we got %v", time1, time1, v)
}
// Try again but with Drop()
repo1.UpdateMtime(file1, time1, time2)
repo1.Drop()
if v := repo1.GetMtime(file1, time1); !v.Equal(time1) {
t.Errorf("Mtime should be %v for disk time %v but we got %v", time1, time1, v)
}
}

lib/dialer/empty_test.go Normal file (10 lines added)

@@ -0,0 +1,10 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
// The existence of this file means we get 0% test coverage rather than no
// test coverage at all. Remove when implementing an actual test.
package dialer


@@ -26,6 +26,7 @@ type CacheEntry struct {
when time.Time // When did we get the result
found bool // Is it a success (cacheTime applies) or a failure (negCacheTime applies)?
validUntil time.Time // Validity time, overrides normal calculation
instanceID int64 // for local discovery, the instance ID (random on each restart)
}
// A FinderService is a Finder that has background activity and must be run as


@@ -4,10 +4,14 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate go run ../../script/protofmt.go local.proto
//go:generate protoc --proto_path=../../../../../:../../../../gogo/protobuf/protobuf:. --gogofast_out=. local.proto
package discover
import (
"bytes"
"encoding/binary"
"encoding/hex"
"io"
"net"
@@ -18,6 +22,7 @@ import (
"github.com/syncthing/syncthing/lib/beacon"
"github.com/syncthing/syncthing/lib/events"
"github.com/syncthing/syncthing/lib/protocol"
"github.com/syncthing/syncthing/lib/rand"
"github.com/thejerf/suture"
)
@@ -38,6 +43,8 @@ type localClient struct {
const (
BroadcastInterval = 30 * time.Second
CacheLifeTime = 3 * BroadcastInterval
Magic = uint32(0x2EA7D90B) // same as in BEP
v13Magic = uint32(0x7D79BC40) // previous version
)
func NewLocal(id protocol.DeviceID, addr string, addrList AddressLister) (FinderService, error) {
@@ -107,25 +114,20 @@ func (c *localClient) Error() error {
}
func (c *localClient) announcementPkt() Announce {
var addrs []Address
for _, addr := range c.addrList.AllAddresses() {
addrs = append(addrs, Address{
URL: addr,
})
}
return Announce{
Magic: AnnouncementMagic,
This: Device{
ID: c.myID[:],
Addresses: addrs,
},
ID: c.myID[:],
Addresses: c.addrList.AllAddresses(),
InstanceID: rand.Int63(),
}
}
func (c *localClient) sendLocalAnnouncements() {
msg := make([]byte, 4)
binary.BigEndian.PutUint32(msg, Magic)
var pkt = c.announcementPkt()
msg := pkt.MustMarshalXDR()
bs, _ := pkt.Marshal()
msg = append(msg, bs...)
for {
c.beacon.Send(msg)
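The new framing is a four-byte big-endian magic followed by the protobuf-encoded Announce; the receive path below verifies the magic and unmarshals buf[4:]. A minimal sketch of composing such a packet (hypothetical helper, assuming the Magic constant and Announce type from this package):
func buildAnnouncement(id []byte, addrs []string, instance int64) ([]byte, error) {
	msg := make([]byte, 4)
	binary.BigEndian.PutUint32(msg, Magic)
	pkt := Announce{ID: id, Addresses: addrs, InstanceID: instance}
	bs, err := pkt.Marshal()
	if err != nil {
		return nil, err
	}
	return append(msg, bs...), nil
}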
@@ -138,26 +140,44 @@ func (c *localClient) sendLocalAnnouncements() {
}
func (c *localClient) recvAnnouncements(b beacon.Interface) {
warnedAbout := make(map[string]bool)
for {
buf, addr := b.Recv()
if len(buf) < 4 {
l.Debugf("discover: short packet from %s")
continue
}
magic := binary.BigEndian.Uint32(buf)
switch magic {
case Magic:
// All good
case v13Magic:
// Old version
if !warnedAbout[addr.String()] {
l.Warnf("Incompatible (v0.13) local discovery packet from %v - upgrade that device to connect", addr)
warnedAbout[addr.String()] = true
}
continue
default:
l.Debugf("discover: Incorrect magic %x from %s", magic, addr)
continue
}
var pkt Announce
err := pkt.UnmarshalXDR(buf)
err := pkt.Unmarshal(buf[4:])
if err != nil && err != io.EOF {
l.Debugf("discover: Failed to unmarshal local announcement from %s:\n%s", addr, hex.Dump(buf))
continue
}
if pkt.Magic != AnnouncementMagic {
l.Debugf("discover: Incorrect magic from %s: %s != %s", addr, pkt.Magic, AnnouncementMagic)
continue
}
l.Debugf("discover: Received local announcement from %s for %s", addr, protocol.DeviceIDFromBytes(pkt.This.ID))
l.Debugf("discover: Received local announcement from %s for %s", addr, protocol.DeviceIDFromBytes(pkt.ID))
var newDevice bool
if !bytes.Equal(pkt.This.ID, c.myID[:]) {
newDevice = c.registerDevice(addr, pkt.This)
if !bytes.Equal(pkt.ID, c.myID[:]) {
newDevice = c.registerDevice(addr, pkt)
}
if newDevice {
@@ -171,14 +191,16 @@ func (c *localClient) recvAnnouncements(b beacon.Interface) {
}
}
func (c *localClient) registerDevice(src net.Addr, device Device) bool {
func (c *localClient) registerDevice(src net.Addr, device Announce) bool {
var id protocol.DeviceID
copy(id[:], device.ID)
// Remember whether we already had a valid cache entry for this device.
// If the instance ID has changed the remote device has restarted since
// we last heard from it, so we should treat it as a new device.
ce, existsAlready := c.Get(id)
isNewDevice := !existsAlready || time.Since(ce.when) > CacheLifeTime
isNewDevice := !existsAlready || time.Since(ce.when) > CacheLifeTime || ce.instanceID != device.InstanceID
// Any empty or unspecified addresses should be set to the source address
// of the announcement. We also skip any addresses we can't parse.
@@ -186,7 +208,7 @@ func (c *localClient) registerDevice(src net.Addr, device Device) bool {
l.Debugln("discover: Registering addresses for", id)
var validAddresses []string
for _, addr := range device.Addresses {
u, err := url.Parse(addr.URL)
u, err := url.Parse(addr)
if err != nil {
continue
}
@@ -197,6 +219,21 @@ func (c *localClient) registerDevice(src net.Addr, device Device) bool {
}
if len(tcpAddr.IP) == 0 || tcpAddr.IP.IsUnspecified() {
srcAddr, err := net.ResolveTCPAddr("tcp", src.String())
if err != nil {
continue
}
// Do not use IPv6 source address if requested scheme is tcp4
if u.Scheme == "tcp4" && srcAddr.IP.To4() == nil {
continue
}
// Do not use IPv4 source address if requested scheme is tcp6
if u.Scheme == "tcp6" && srcAddr.IP.To4() != nil {
continue
}
host, _, err := net.SplitHostPort(src.String())
if err != nil {
continue
@@ -204,17 +241,18 @@ func (c *localClient) registerDevice(src net.Addr, device Device) bool {
u.Host = net.JoinHostPort(host, strconv.Itoa(tcpAddr.Port))
l.Debugf("discover: Reconstructed URL is %#v", u)
validAddresses = append(validAddresses, u.String())
l.Debugf("discover: Replaced address %v in %s to get %s", tcpAddr.IP, addr.URL, u.String())
l.Debugf("discover: Replaced address %v in %s to get %s", tcpAddr.IP, addr, u.String())
} else {
validAddresses = append(validAddresses, addr.URL)
l.Debugf("discover: Accepted address %s verbatim", addr.URL)
validAddresses = append(validAddresses, addr)
l.Debugf("discover: Accepted address %s verbatim", addr)
}
}
c.Set(id, CacheEntry{
Addresses: validAddresses,
when: time.Now(),
found: true,
Addresses: validAddresses,
when: time.Now(),
found: true,
instanceID: device.InstanceID,
})
if isNewDevice {

lib/discover/local.pb.go Normal file (398 lines added)

@@ -0,0 +1,398 @@
// Code generated by protoc-gen-gogo.
// source: local.proto
// DO NOT EDIT!
/*
Package discover is a generated protocol buffer package.
It is generated from these files:
local.proto
It has these top-level messages:
Announce
*/
package discover
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import io "io"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
type Announce struct {
ID []byte `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
Addresses []string `protobuf:"bytes,2,rep,name=addresses" json:"addresses,omitempty"`
InstanceID int64 `protobuf:"varint,3,opt,name=instance_id,json=instanceId,proto3" json:"instance_id,omitempty"`
}
func (m *Announce) Reset() { *m = Announce{} }
func (m *Announce) String() string { return proto.CompactTextString(m) }
func (*Announce) ProtoMessage() {}
func (*Announce) Descriptor() ([]byte, []int) { return fileDescriptorLocal, []int{0} }
func init() {
proto.RegisterType((*Announce)(nil), "discover.Announce")
}
func (m *Announce) Marshal() (data []byte, err error) {
size := m.ProtoSize()
data = make([]byte, size)
n, err := m.MarshalTo(data)
if err != nil {
return nil, err
}
return data[:n], nil
}
func (m *Announce) MarshalTo(data []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.ID) > 0 {
data[i] = 0xa
i++
i = encodeVarintLocal(data, i, uint64(len(m.ID)))
i += copy(data[i:], m.ID)
}
if len(m.Addresses) > 0 {
for _, s := range m.Addresses {
data[i] = 0x12
i++
l = len(s)
for l >= 1<<7 {
data[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
data[i] = uint8(l)
i++
i += copy(data[i:], s)
}
}
if m.InstanceID != 0 {
data[i] = 0x18
i++
i = encodeVarintLocal(data, i, uint64(m.InstanceID))
}
return i, nil
}
func encodeFixed64Local(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Local(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintLocal(data []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
return offset + 1
}
func (m *Announce) ProtoSize() (n int) {
var l int
_ = l
l = len(m.ID)
if l > 0 {
n += 1 + l + sovLocal(uint64(l))
}
if len(m.Addresses) > 0 {
for _, s := range m.Addresses {
l = len(s)
n += 1 + l + sovLocal(uint64(l))
}
}
if m.InstanceID != 0 {
n += 1 + sovLocal(uint64(m.InstanceID))
}
return n
}
func sovLocal(x uint64) (n int) {
for {
n++
x >>= 7
if x == 0 {
break
}
}
return n
}
func sozLocal(x uint64) (n int) {
return sovLocal(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *Announce) Unmarshal(data []byte) error {
l := len(data)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Announce: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Announce: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
}
var byteLen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
byteLen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if byteLen < 0 {
return ErrInvalidLengthLocal
}
postIndex := iNdEx + byteLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ID = append(m.ID[:0], data[iNdEx:postIndex]...)
if m.ID == nil {
m.ID = []byte{}
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthLocal
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addresses = append(m.Addresses, string(data[iNdEx:postIndex]))
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field InstanceID", wireType)
}
m.InstanceID = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowLocal
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
m.InstanceID |= (int64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipLocal(data[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthLocal
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipLocal(data []byte) (n int, err error) {
l := len(data)
iNdEx := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
wireType := int(wire & 0x7)
switch wireType {
case 0:
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
break
}
}
return iNdEx, nil
case 1:
iNdEx += 8
return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthLocal
}
return iNdEx, nil
case 3:
for {
var innerWire uint64
var start int = iNdEx
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return 0, ErrIntOverflowLocal
}
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
innerWireType := int(innerWire & 0x7)
if innerWireType == 4 {
break
}
next, err := skipLocal(data[start:])
if err != nil {
return 0, err
}
iNdEx = start + next
}
return iNdEx, nil
case 4:
return iNdEx, nil
case 5:
iNdEx += 4
return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
}
panic("unreachable")
}
var (
ErrInvalidLengthLocal = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowLocal = fmt.Errorf("proto: integer overflow")
)
var fileDescriptorLocal = []byte{
// 194 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xce, 0xc9, 0x4f, 0x4e,
0xcc, 0xd1, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x48, 0xc9, 0x2c, 0x4e, 0xce, 0x2f, 0x4b,
0x2d, 0x92, 0xd2, 0x4d, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf,
0x4f, 0xcf, 0xd7, 0x07, 0x2b, 0x48, 0x2a, 0x4d, 0x03, 0xf3, 0xc0, 0x1c, 0x30, 0x0b, 0xa2, 0x51,
0xa9, 0x90, 0x8b, 0xc3, 0x31, 0x2f, 0x2f, 0xbf, 0x34, 0x2f, 0x39, 0x55, 0x48, 0x8c, 0x8b, 0x29,
0x33, 0x45, 0x82, 0x51, 0x81, 0x51, 0x83, 0xc7, 0x89, 0xed, 0xd1, 0x3d, 0x79, 0x26, 0x4f, 0x97,
0x20, 0xa0, 0x88, 0x90, 0x0c, 0x17, 0x67, 0x62, 0x4a, 0x4a, 0x51, 0x6a, 0x71, 0x71, 0x6a, 0xb1,
0x04, 0x93, 0x02, 0xb3, 0x06, 0x67, 0x10, 0x42, 0x40, 0x48, 0x9f, 0x8b, 0x3b, 0x33, 0xaf, 0xb8,
0x24, 0x11, 0x68, 0x42, 0x3c, 0x50, 0x3b, 0x33, 0x50, 0x3b, 0xb3, 0x13, 0x1f, 0x50, 0x3b, 0x97,
0x27, 0x54, 0x18, 0x68, 0x0c, 0x17, 0x4c, 0x89, 0x67, 0x8a, 0x93, 0xc8, 0x89, 0x87, 0x72, 0x0c,
0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x00, 0xe2, 0x07, 0x8f, 0xe4, 0x18, 0x16, 0x3c, 0x96, 0x63, 0x4c,
0x62, 0x03, 0xbb, 0xc7, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x91, 0x3f, 0x96, 0x25, 0xd7, 0x00,
0x00, 0x00,
}
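The varint helpers above (encodeVarintLocal, sovLocal) implement the standard protobuf base-128 encoding: seven payload bits per byte, least-significant group first, with the high bit set on every byte except the last. A minimal sketch of what that means for a concrete value, written as a test that would sit in a _test.go file inside package discover with the usual testing import (the function name is illustrative, not part of this change):

func TestVarintEncodingSketch(t *testing.T) {
	// 300 = 0b1_0010_1100: the low seven bits 0x2c with the continuation bit
	// set, then the remaining 0b10, so the encoding is [0xac, 0x02].
	buf := make([]byte, sovLocal(300))
	if n := encodeVarintLocal(buf, 0, 300); n != 2 || buf[0] != 0xac || buf[1] != 0x02 {
		t.Fatalf("unexpected varint encoding: % x", buf[:n])
	}
}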

lib/discover/local.proto

@@ -0,0 +1,15 @@
syntax = "proto3";
package discover;
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
option (gogoproto.goproto_getters_all) = false;
option (gogoproto.sizer_all) = false;
option (gogoproto.protosizer_all) = true;
message Announce {
bytes id = 1 [(gogoproto.customname) = "ID"];
repeated string addresses = 2;
int64 instance_id = 3 [(gogoproto.customname) = "InstanceID"];
}
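The gogoproto options above are what shape the generated code: protosizer_all gives the ProtoSize() method seen in local.pb.go, and the customname options produce the Go field names ID and InstanceID. A minimal round-trip sketch, assuming it lives next to the generated code in package discover (the function name and sample values are illustrative):

func TestAnnounceRoundTripSketch(t *testing.T) {
	pkt := Announce{
		ID:         []byte{1, 2, 3},                 // field 1, emitted with tag byte 0x0a
		Addresses:  []string{"tcp://0.0.0.0:22000"}, // field 2, tag byte 0x12
		InstanceID: 42,                              // field 3, tag byte 0x18
	}
	bs, err := pkt.Marshal()
	if err != nil {
		t.Fatal(err)
	}
	if len(bs) != pkt.ProtoSize() {
		t.Fatalf("marshalled %d bytes, ProtoSize said %d", len(bs), pkt.ProtoSize())
	}
	var back Announce
	if err := back.Unmarshal(bs); err != nil {
		t.Fatal(err)
	}
	if back.InstanceID != pkt.InstanceID || len(back.Addresses) != 1 {
		t.Fatal("round trip lost data")
	}
}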


@@ -0,0 +1,81 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package discover
import (
"net"
"testing"
"github.com/syncthing/syncthing/lib/protocol"
)
func TestRandomLocalInstanceID(t *testing.T) {
c, err := NewLocal(protocol.LocalDeviceID, ":0", &fakeAddressLister{})
if err != nil {
t.Fatal(err)
}
go c.Serve()
defer c.Stop()
lc := c.(*localClient)
p0 := lc.announcementPkt()
p1 := lc.announcementPkt()
if p0.InstanceID == p1.InstanceID {
t.Error("each generated packet should have a new instance id")
}
}
func TestLocalInstanceIDShouldTriggerNew(t *testing.T) {
c, err := NewLocal(protocol.LocalDeviceID, ":0", &fakeAddressLister{})
if err != nil {
t.Fatal(err)
}
lc := c.(*localClient)
src := &net.UDPAddr{IP: []byte{10, 20, 30, 40}, Port: 50}
new := lc.registerDevice(src, Announce{
ID: []byte{10, 20, 30, 40, 50, 60, 70, 80, 90},
Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890,
})
if !new {
t.Fatal("first register should be new")
}
new = lc.registerDevice(src, Announce{
ID: []byte{10, 20, 30, 40, 50, 60, 70, 80, 90},
Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890,
})
if new {
t.Fatal("second register should not be new")
}
new = lc.registerDevice(src, Announce{
ID: []byte{42, 10, 20, 30, 40, 50, 60, 70, 80, 90},
Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 1234567890,
})
if !new {
t.Fatal("new device ID should be new")
}
new = lc.registerDevice(src, Announce{
ID: []byte{10, 20, 30, 40, 50, 60, 70, 80, 90},
Addresses: []string{"tcp://0.0.0.0:22000"},
InstanceID: 91234567890,
})
if !new {
t.Fatal("new instance ID should be new")
}
}


@@ -1,29 +0,0 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
//go:generate -command genxdr go run ../../vendor/github.com/calmh/xdr/cmd/genxdr/main.go
//go:generate genxdr -o localpackets_xdr.go localpackets.go
package discover
const (
AnnouncementMagic = 0x7D79BC40
)
type Announce struct {
Magic uint32
This Device
Extra []Device // max:16
}
type Device struct {
ID []byte // max:32
Addresses []Address // max:16
}
type Address struct {
URL string // max:2083
}


@@ -1,246 +0,0 @@
// ************************************************************
// This file is automatically generated by genxdr. Do not edit.
// ************************************************************
package discover
import (
"github.com/calmh/xdr"
)
/*
Announce Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Magic |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Device Structure \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Extra |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Device Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct Announce {
unsigned int Magic;
Device This;
Device Extra<16>;
}
*/
func (o Announce) XDRSize() int {
return 4 +
o.This.XDRSize() +
4 + xdr.SizeOfSlice(o.Extra)
}
func (o Announce) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
}
func (o Announce) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o Announce) MarshalXDRInto(m *xdr.Marshaller) error {
m.MarshalUint32(o.Magic)
if err := o.This.MarshalXDRInto(m); err != nil {
return err
}
if l := len(o.Extra); l > 16 {
return xdr.ElementSizeExceeded("Extra", l, 16)
}
m.MarshalUint32(uint32(len(o.Extra)))
for i := range o.Extra {
if err := o.Extra[i].MarshalXDRInto(m); err != nil {
return err
}
}
return m.Error
}
func (o *Announce) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *Announce) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.Magic = u.UnmarshalUint32()
(&o.This).UnmarshalXDRFrom(u)
_ExtraSize := int(u.UnmarshalUint32())
if _ExtraSize < 0 {
return xdr.ElementSizeExceeded("Extra", _ExtraSize, 16)
} else if _ExtraSize == 0 {
o.Extra = nil
} else {
if _ExtraSize > 16 {
return xdr.ElementSizeExceeded("Extra", _ExtraSize, 16)
}
if _ExtraSize <= len(o.Extra) {
o.Extra = o.Extra[:_ExtraSize]
} else {
o.Extra = make([]Device, _ExtraSize)
}
for i := range o.Extra {
(&o.Extra[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
}
/*
Device Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ ID (length + padded data) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Addresses |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Address Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct Device {
opaque ID<32>;
Address Addresses<16>;
}
*/
func (o Device) XDRSize() int {
return 4 + len(o.ID) + xdr.Padding(len(o.ID)) +
4 + xdr.SizeOfSlice(o.Addresses)
}
func (o Device) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
}
func (o Device) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o Device) MarshalXDRInto(m *xdr.Marshaller) error {
if l := len(o.ID); l > 32 {
return xdr.ElementSizeExceeded("ID", l, 32)
}
m.MarshalBytes(o.ID)
if l := len(o.Addresses); l > 16 {
return xdr.ElementSizeExceeded("Addresses", l, 16)
}
m.MarshalUint32(uint32(len(o.Addresses)))
for i := range o.Addresses {
if err := o.Addresses[i].MarshalXDRInto(m); err != nil {
return err
}
}
return m.Error
}
func (o *Device) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *Device) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.ID = u.UnmarshalBytesMax(32)
_AddressesSize := int(u.UnmarshalUint32())
if _AddressesSize < 0 {
return xdr.ElementSizeExceeded("Addresses", _AddressesSize, 16)
} else if _AddressesSize == 0 {
o.Addresses = nil
} else {
if _AddressesSize > 16 {
return xdr.ElementSizeExceeded("Addresses", _AddressesSize, 16)
}
if _AddressesSize <= len(o.Addresses) {
o.Addresses = o.Addresses[:_AddressesSize]
} else {
o.Addresses = make([]Address, _AddressesSize)
}
for i := range o.Addresses {
(&o.Addresses[i]).UnmarshalXDRFrom(u)
}
}
return u.Error
}
/*
Address Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ URL (length + padded data) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct Address {
string URL<2083>;
}
*/
func (o Address) XDRSize() int {
return 4 + len(o.URL) + xdr.Padding(len(o.URL))
}
func (o Address) MarshalXDR() ([]byte, error) {
buf := make([]byte, o.XDRSize())
m := &xdr.Marshaller{Data: buf}
return buf, o.MarshalXDRInto(m)
}
func (o Address) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o Address) MarshalXDRInto(m *xdr.Marshaller) error {
if l := len(o.URL); l > 2083 {
return xdr.ElementSizeExceeded("URL", l, 2083)
}
m.MarshalString(o.URL)
return m.Error
}
func (o *Address) UnmarshalXDR(bs []byte) error {
u := &xdr.Unmarshaller{Data: bs}
return o.UnmarshalXDRFrom(u)
}
func (o *Address) UnmarshalXDRFrom(u *xdr.Unmarshaller) error {
o.URL = u.UnmarshalStringMax(2083)
return u.Error
}
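For contrast with the new protobuf message, a sketch of how the old XDR-based Announce (the types being deleted here) round-tripped; it assumes the removed definitions above are still in scope in package discover, and the sample values are illustrative:

func TestOldXDRAnnounceSketch(t *testing.T) {
	pkt := Announce{
		Magic: AnnouncementMagic,
		This: Device{
			ID:        []byte{1, 2, 3, 4},
			Addresses: []Address{{URL: "tcp://0.0.0.0:22000"}},
		},
	}
	bs := pkt.MustMarshalXDR()
	var back Announce
	if err := back.UnmarshalXDR(bs); err != nil {
		t.Fatal(err)
	}
	if back.Magic != pkt.Magic || back.This.Addresses[0].URL != pkt.This.Addresses[0].URL {
		t.Fatal("round trip lost data")
	}
}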


@@ -9,6 +9,7 @@ package events
import (
"errors"
"runtime"
stdsync "sync"
"time"
@@ -47,6 +48,8 @@ const (
AllEvents = (1 << iota) - 1
)
var runningTests = false
func (t EventType) String() string {
switch t {
case Ping:
@@ -186,6 +189,13 @@ func (l *Logger) Subscribe(mask EventType) *Subscription {
// We need to create the timeout timer in the stopped, non-fired state so
// that Subscription.Poll() can safely reset it and select on the timeout
// channel. This ensures the timer is stopped and the channel drained.
if runningTests {
// Make the behavior stable when running tests to avoid randomly
// varying test coverage. This ensures, in practice if not in
// theory, that the timer fires and we take the true branch of the
// next if.
runtime.Gosched()
}
if !s.timeout.Stop() {
<-s.timeout.C
}
@@ -231,6 +241,14 @@ func (s *Subscription) Poll(timeout time.Duration) (Event, error) {
if !ok {
return e, ErrClosed
}
if runningTests {
// Make the behavior stable when running tests to avoid randomly
// varying test coverage. This ensures, in practice if not in
// theory, that the timer fires and we take the true branch of
// the next if.
s.timeout.Reset(0)
runtime.Gosched()
}
if !s.timeout.Stop() {
// The timeout must be stopped and possibly drained to be ready
// for reuse in the next call.
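The stop-and-drain dance these comments describe is the standard way to reuse a single time.Timer across Poll calls: Reset is only safe on a timer that is known to be stopped and whose channel is empty. A standalone sketch of the pattern, outside the events package (poll, ch, and the values are illustrative, not part of this change):

package main

import (
	"errors"
	"fmt"
	"time"
)

// poll mirrors Subscription.Poll's timer handling: one timer is reused across
// calls, and it is stopped and drained before every Reset.
func poll(ch <-chan string, tm *time.Timer, d time.Duration) (string, error) {
	tm.Reset(d)
	select {
	case v := <-ch:
		// A value arrived before the deadline. Stop the timer; if it fired in
		// the meantime, drain its channel so the next Reset starts clean.
		if !tm.Stop() {
			<-tm.C
		}
		return v, nil
	case <-tm.C:
		// The select consumed the fired timer's value, so the timer is
		// already drained and ready for the next Reset.
		return "", errors.New("timeout")
	}
}

func main() {
	// Create the timer in the stopped, drained state, as Subscribe() does.
	tm := time.NewTimer(0)
	if !tm.Stop() {
		<-tm.C
	}

	ch := make(chan string, 1)
	ch <- "hello"
	fmt.Println(poll(ch, tm, 100*time.Millisecond)) // hello <nil>
	fmt.Println(poll(ch, tm, 100*time.Millisecond)) // times out after ~100ms
}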


@@ -4,27 +4,29 @@
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package events_test
package events
import (
"fmt"
"testing"
"time"
"github.com/syncthing/syncthing/lib/events"
)
const timeout = 100 * time.Millisecond
func init() {
runningTests = true
}
func TestNewLogger(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
if l == nil {
t.Fatal("Unexpected nil Logger")
}
}
func TestSubscriber(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(0)
defer l.Unsubscribe(s)
if s == nil {
@@ -33,41 +35,41 @@ func TestSubscriber(t *testing.T) {
}
func TestTimeout(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(0)
defer l.Unsubscribe(s)
_, err := s.Poll(timeout)
if err != events.ErrTimeout {
if err != ErrTimeout {
t.Fatal("Unexpected non-Timeout error:", err)
}
}
func TestEventBeforeSubscribe(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
l.Log(events.DeviceConnected, "foo")
l.Log(DeviceConnected, "foo")
s := l.Subscribe(0)
defer l.Unsubscribe(s)
_, err := s.Poll(timeout)
if err != events.ErrTimeout {
if err != ErrTimeout {
t.Fatal("Unexpected non-Timeout error:", err)
}
}
func TestEventAfterSubscribe(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
l.Log(events.DeviceConnected, "foo")
l.Log(DeviceConnected, "foo")
ev, err := s.Poll(timeout)
if err != nil {
t.Fatal("Unexpected error:", err)
}
if ev.Type != events.DeviceConnected {
if ev.Type != DeviceConnected {
t.Error("Incorrect event type", ev.Type)
}
switch v := ev.Data.(type) {
@@ -81,27 +83,27 @@ func TestEventAfterSubscribe(t *testing.T) {
}
func TestEventAfterSubscribeIgnoreMask(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.DeviceDisconnected)
s := l.Subscribe(DeviceDisconnected)
defer l.Unsubscribe(s)
l.Log(events.DeviceConnected, "foo")
l.Log(DeviceConnected, "foo")
_, err := s.Poll(timeout)
if err != events.ErrTimeout {
if err != ErrTimeout {
t.Fatal("Unexpected non-Timeout error:", err)
}
}
func TestBufferOverflow(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
t0 := time.Now()
for i := 0; i < events.BufferSize*2; i++ {
l.Log(events.DeviceConnected, "foo")
for i := 0; i < BufferSize*2; i++ {
l.Log(DeviceConnected, "foo")
}
if time.Since(t0) > timeout {
t.Fatalf("Logging took too long")
@@ -109,10 +111,10 @@ func TestBufferOverflow(t *testing.T) {
}
func TestUnsubscribe(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
l.Log(events.DeviceConnected, "foo")
s := l.Subscribe(AllEvents)
l.Log(DeviceConnected, "foo")
_, err := s.Poll(timeout)
if err != nil {
@@ -120,22 +122,22 @@ func TestUnsubscribe(t *testing.T) {
}
l.Unsubscribe(s)
l.Log(events.DeviceConnected, "foo")
l.Log(DeviceConnected, "foo")
_, err = s.Poll(timeout)
if err != events.ErrClosed {
if err != ErrClosed {
t.Fatal("Unexpected non-Closed error:", err)
}
}
func TestGlobalIDs(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
l.Log(events.DeviceConnected, "foo")
_ = l.Subscribe(events.AllEvents)
l.Log(events.DeviceConnected, "bar")
l.Log(DeviceConnected, "foo")
_ = l.Subscribe(AllEvents)
l.Log(DeviceConnected, "bar")
ev, err := s.Poll(timeout)
if err != nil {
@@ -159,15 +161,15 @@ func TestGlobalIDs(t *testing.T) {
}
func TestSubscriptionIDs(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.DeviceConnected)
s := l.Subscribe(DeviceConnected)
defer l.Unsubscribe(s)
l.Log(events.DeviceDisconnected, "a")
l.Log(events.DeviceConnected, "b")
l.Log(events.DeviceConnected, "c")
l.Log(events.DeviceDisconnected, "d")
l.Log(DeviceDisconnected, "a")
l.Log(DeviceConnected, "b")
l.Log(DeviceConnected, "c")
l.Log(DeviceDisconnected, "d")
ev, err := s.Poll(timeout)
if err != nil {
@@ -193,21 +195,21 @@ func TestSubscriptionIDs(t *testing.T) {
}
ev, err = s.Poll(timeout)
if err != events.ErrTimeout {
if err != ErrTimeout {
t.Fatal("Unexpected error:", err)
}
}
func TestBufferedSub(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
bs := events.NewBufferedSubscription(s, 10*events.BufferSize)
bs := NewBufferedSubscription(s, 10*BufferSize)
go func() {
for i := 0; i < 10*events.BufferSize; i++ {
l.Log(events.DeviceConnected, fmt.Sprintf("event-%d", i))
for i := 0; i < 10*BufferSize; i++ {
l.Log(DeviceConnected, fmt.Sprintf("event-%d", i))
if i%30 == 0 {
// Give the buffer routine time to pick up the events
time.Sleep(20 * time.Millisecond)
@@ -216,7 +218,7 @@ func TestBufferedSub(t *testing.T) {
}()
recv := 0
for recv < 10*events.BufferSize {
for recv < 10*BufferSize {
evs := bs.Since(recv, nil)
for _, ev := range evs {
if ev.GlobalID != recv+1 {
@@ -228,12 +230,12 @@ func TestBufferedSub(t *testing.T) {
}
func BenchmarkBufferedSub(b *testing.B) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.AllEvents)
s := l.Subscribe(AllEvents)
defer l.Unsubscribe(s)
bufferSize := events.BufferSize
bs := events.NewBufferedSubscription(s, bufferSize)
bufferSize := BufferSize
bs := NewBufferedSubscription(s, bufferSize)
// The coord channel paces the sender according to the receiver,
// ensuring that no events are dropped. The benchmark measures sending +
@@ -249,7 +251,7 @@ func BenchmarkBufferedSub(b *testing.B) {
go func() {
defer close(done)
recv := 0
var evs []events.Event
var evs []Event
for i := 0; i < b.N; {
evs = bs.Since(recv, evs[:0])
for _, ev := range evs {
@@ -270,7 +272,7 @@ func BenchmarkBufferedSub(b *testing.B) {
"and": "something else",
}
for i := 0; i < b.N; i++ {
l.Log(events.DeviceConnected, eventData)
l.Log(DeviceConnected, eventData)
<-coord
}
@@ -279,16 +281,16 @@ func BenchmarkBufferedSub(b *testing.B) {
}
func TestSinceUsesSubscriptionId(t *testing.T) {
l := events.NewLogger()
l := NewLogger()
s := l.Subscribe(events.DeviceConnected)
s := l.Subscribe(DeviceConnected)
defer l.Unsubscribe(s)
bs := events.NewBufferedSubscription(s, 10*events.BufferSize)
bs := NewBufferedSubscription(s, 10*BufferSize)
l.Log(events.DeviceConnected, "a") // SubscriptionID = 1
l.Log(events.DeviceDisconnected, "b")
l.Log(events.DeviceDisconnected, "c")
l.Log(events.DeviceConnected, "d") // SubscriptionID = 2
l.Log(DeviceConnected, "a") // SubscriptionID = 1
l.Log(DeviceDisconnected, "b")
l.Log(DeviceDisconnected, "c")
l.Log(DeviceConnected, "d") // SubscriptionID = 2
// We need to loop for the events, as they may not all have been
// delivered to the buffered subscription when we get here.

lib/fs/mtimefs.go

@@ -0,0 +1,136 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package fs
import (
"os"
"time"
"github.com/syncthing/syncthing/lib/osutil"
)
// The database is where we store the virtual mtimes
type database interface {
Bytes(key string) (data []byte, ok bool)
PutBytes(key string, data []byte)
Delete(key string)
}
// variable so that we can mock it for testing
var osChtimes = os.Chtimes
// The MtimeFS is a filesystem with nanosecond mtime precision, regardless
// of what shenanigans the underlying filesystem gets up to.
type MtimeFS struct {
db database
}
func NewMtimeFS(db database) *MtimeFS {
return &MtimeFS{
db: db,
}
}
func (f *MtimeFS) Chtimes(name string, atime, mtime time.Time) error {
// Do a normal Chtimes call, don't care if it succeeds or not.
osChtimes(name, atime, mtime)
// Stat the file to see what happened. Here we *do* return an error,
// because it might be "does not exist" or similar. osutil.Lstat is the
// souped up version to account for Android breakage.
info, err := osutil.Lstat(name)
if err != nil {
return err
}
f.save(name, info.ModTime(), mtime)
return nil
}
func (f *MtimeFS) Lstat(name string) (os.FileInfo, error) {
info, err := osutil.Lstat(name)
if err != nil {
return nil, err
}
real, virtual := f.load(name)
if real == info.ModTime() {
info = mtimeFileInfo{
FileInfo: info,
mtime: virtual,
}
}
return info, nil
}
// "real" is the on disk timestamp
// "virtual" is what want the timestamp to be
func (f *MtimeFS) save(name string, real, virtual time.Time) {
if real.Equal(virtual) {
// If the virtual time and the real on disk time are equal we don't
// need to store anything.
f.db.Delete(name)
return
}
mtime := dbMtime{
real: real,
virtual: virtual,
}
bs, _ := mtime.Marshal() // Can't fail
f.db.PutBytes(name, bs)
}
func (f *MtimeFS) load(name string) (real, virtual time.Time) {
data, exists := f.db.Bytes(name)
if !exists {
return
}
var mtime dbMtime
if err := mtime.Unmarshal(data); err != nil {
return
}
return mtime.real, mtime.virtual
}
// The mtimeFileInfo is an os.FileInfo that lies about the ModTime().
type mtimeFileInfo struct {
os.FileInfo
mtime time.Time
}
func (m mtimeFileInfo) ModTime() time.Time {
return m.mtime
}
// The dbMtime is our database representation
type dbMtime struct {
real time.Time
virtual time.Time
}
func (t *dbMtime) Marshal() ([]byte, error) {
bs0, _ := t.real.MarshalBinary()
bs1, _ := t.virtual.MarshalBinary()
return append(bs0, bs1...), nil
}
func (t *dbMtime) Unmarshal(bs []byte) error {
if err := t.real.UnmarshalBinary(bs[:len(bs)/2]); err != nil {
return err
}
if err := t.virtual.UnmarshalBinary(bs[len(bs)/2:]); err != nil {
return err
}
return nil
}
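A note on the Marshal/Unmarshal pair above: splitting the stored value at len(bs)/2 works because both halves come from time.Time.MarshalBinary, which produces output of the same length for both timestamps. A minimal round-trip sketch, assuming it sits in a _test.go file in package fs next to this code (the function name and times are illustrative):

func TestDbMtimeRoundTripSketch(t *testing.T) {
	in := dbMtime{
		real:    time.Unix(1234567890, 0),
		virtual: time.Unix(1234567890, 123456789),
	}
	bs, _ := in.Marshal() // documented above as unable to fail

	var out dbMtime
	if err := out.Unmarshal(bs); err != nil {
		t.Fatal(err)
	}
	if !out.real.Equal(in.real) || !out.virtual.Equal(in.virtual) {
		t.Error("round trip lost the stored timestamps")
	}
}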

lib/fs/mtimefs_test.go

@@ -0,0 +1,111 @@
// Copyright (C) 2016 The Syncthing Authors.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package fs
import (
"errors"
"io/ioutil"
"os"
"testing"
"time"
"github.com/syncthing/syncthing/lib/osutil"
)
func TestMtimeFS(t *testing.T) {
osutil.RemoveAll("testdata")
defer osutil.RemoveAll("testdata")
os.Mkdir("testdata", 0755)
ioutil.WriteFile("testdata/exists0", []byte("hello"), 0644)
ioutil.WriteFile("testdata/exists1", []byte("hello"), 0644)
ioutil.WriteFile("testdata/exists2", []byte("hello"), 0644)
// a random time with nanosecond precision
testTime := time.Unix(1234567890, 123456789)
mtimefs := NewMtimeFS(make(mapStore))
// Do one Chtimes call that will go through to the normal filesystem
osChtimes = os.Chtimes
if err := mtimefs.Chtimes("testdata/exists0", testTime, testTime); err != nil {
t.Error("Should not have failed:", err)
}
// Do one call that gets an error back from the underlying Chtimes
osChtimes = failChtimes
if err := mtimefs.Chtimes("testdata/exists1", testTime, testTime); err != nil {
t.Error("Should not have failed:", err)
}
// Do one call that gets struck by an exceptionally evil Chtimes
osChtimes = evilChtimes
if err := mtimefs.Chtimes("testdata/exists2", testTime, testTime); err != nil {
t.Error("Should not have failed:", err)
}
// All of the calls were successful, so an Lstat on them should return
// the test timestamp.
for _, file := range []string{"testdata/exists0", "testdata/exists1", "testdata/exists2"} {
if info, err := mtimefs.Lstat(file); err != nil {
t.Error("Lstat shouldn't fail:", err)
} else if !info.ModTime().Equal(testTime) {
t.Errorf("Time mismatch; %v != expected %v", info.ModTime(), testTime)
}
}
// The two last files should certainly not have the correct timestamp
// when looking directly on disk though.
for _, file := range []string{"testdata/exists1", "testdata/exists2"} {
if info, err := os.Lstat(file); err != nil {
t.Error("Lstat shouldn't fail:", err)
} else if info.ModTime().Equal(testTime) {
t.Errorf("Unexpected time match; %v == %v", info.ModTime(), testTime)
}
}
// Changing the timestamp on disk should be reflected in a new Lstat
// call. Choose a time that is likely to be representable on all reasonable
// filesystems.
testTime = time.Now().Add(5 * time.Hour).Truncate(time.Minute)
os.Chtimes("testdata/exists0", testTime, testTime)
if info, err := mtimefs.Lstat("testdata/exists0"); err != nil {
t.Error("Lstat shouldn't fail:", err)
} else if !info.ModTime().Equal(testTime) {
t.Errorf("Time mismatch; %v != expected %v", info.ModTime(), testTime)
}
}
// The mapStore is a simple database
type mapStore map[string][]byte
func (s mapStore) PutBytes(key string, data []byte) {
s[key] = data
}
func (s mapStore) Bytes(key string) (data []byte, ok bool) {
data, ok = s[key]
return
}
func (s mapStore) Delete(key string) {
delete(s, key)
}
// failChtimes does nothing, and fails
func failChtimes(name string, mtime, atime time.Time) error {
return errors.New("no")
}
// evilChtimes will set an mtime that's 300 hours in the future of what was
// asked for, and truncate the time to the hour.
func evilChtimes(name string, mtime, atime time.Time) error {
return os.Chtimes(name, mtime.Add(300*time.Hour).Truncate(time.Hour), atime.Add(300*time.Hour).Truncate(time.Hour))
}

Some files were not shown because too many files have changed in this diff.