Compare commits

...

268 Commits

Author SHA1 Message Date
Jakob Borg
efd6a29909 Translation update 2015-02-15 16:55:44 +01:00
Jakob Borg
e7dbb8ccdc Fix test for unknown flags 2015-02-15 09:51:39 +01:00
Jakob Borg
fe2a743c8d Protocol update 2015-02-15 09:41:32 +01:00
Audrius Butkevicius
1f5c124ac4 Merge pull request #1359 from krozycki/master
Typo fix. Fixes #1358
2015-02-14 15:17:45 +00:00
Karol Różycki
64a5bc038a Typo fix. Fixes #1358 2015-02-14 16:10:43 +01:00
Jakob Borg
5d9396334c Merge pull request #1311 from krozycki/master
Button to rescan all folders (fixes #1151)
2015-02-13 08:36:17 +01:00
Karol Różycki
ec160f1f0a Button to rescan all folders, fixes #1151 2015-02-12 21:03:35 +01:00
Jakob Borg
bbaeca96eb Merge pull request #1326 from rumpelsepp/systemd
Some systemd tweaks
2015-02-12 16:58:54 +01:00
Jakob Borg
3ab779895f Add tnn2 2015-02-12 12:14:56 +01:00
Jakob Borg
203c7360e7 Merge remote-tracking branch 'syncthing/pr/1332'
* syncthing/pr/1332:
  Implement memorySize() for NetBSD
2015-02-12 12:13:35 +01:00
Jakob Borg
a831f174ef Add marclaporte 2015-02-12 12:08:49 +01:00
Stefan Tatschner
153091f52f Some systemd tweaks
- Removed environment file to keep the service file minimal.
  "systemctl edit syncthing.service" does the job if somebody wants
  to customize the service.
- Changed "cmdline.target" to "default.target" as "cmdline.target"
  does not exist in systemd.special:
  http://www.freedesktop.org/software/systemd/man/systemd.special.html
- Added a missing "After=network.target".
- Added a documentation hint, thx @jaystrictor
2015-02-12 09:23:12 +01:00
Jakob Borg
35d3af5039 Use syncthing/build:latest for building 2015-02-11 10:17:15 +01:00
Jakob Borg
57e8cd6eab Regenerate assets 2015-02-11 09:35:08 +01:00
Jakob Borg
fc123a71af Merge pull request #1341 from AudriusButkevicius/configtest
Fix tests on Windows
2015-02-11 08:13:54 +01:00
Audrius Butkevicius
f14836cf02 Merge pull request #1346 from uok/match-icons
Match icons for ignore patterns
2015-02-10 21:42:53 +00:00
Ben Schulz
4178feb65f Match icons for ignore patterns 2015-02-10 21:38:56 +01:00
Audrius Butkevicius
2edaf22590 Merge pull request #1345 from calmh/smaller-batches
Reduce memory usage by writing smaller batches
2015-02-10 19:53:53 +00:00
Audrius Butkevicius
acd3dab957 Fix tests on Windows 2015-02-10 19:52:14 +00:00
Jakob Borg
6bbd74adcd Merge pull request #1321 from AudriusButkevicius/bitcheck
Refuse files with unknown bits set (fixes #1276)
2015-02-10 20:27:14 +01:00
Jakob Borg
9bb928bb38 Reduce memory usage by writing smaller batches 2015-02-10 20:24:25 +01:00
Jakob Borg
c87a6c5969 Use single filename for heap profiles 2015-02-10 19:50:27 +01:00
Jakob Borg
c482c13dcb Configurable heap profiling rate 2015-02-10 19:45:32 +01:00
Audrius Butkevicius
b87ed97402 Refuse files with unknown bits set (fixes #1276) 2015-02-09 23:32:33 +00:00
Jakob Borg
ee000dabfd Translation update 2015-02-09 23:03:31 +01:00
Jakob Borg
a73a011ee0 Merge pull request #1323 from AudriusButkevicius/finished
Add ItemFinished event (fixes #1258)
2015-02-09 15:24:10 +01:00
Jakob Borg
2a8e5e2c14 Merge pull request #1304 from AudriusButkevicius/pprof
Add STBLOCKPROFILE
2015-02-09 15:18:42 +01:00
Jakob Borg
5d9a41f712 Merge pull request #1319 from AudriusButkevicius/renames
Fix issues with renames
2015-02-09 15:14:47 +01:00
Jakob Borg
ebcf4b60f6 Merge pull request #1320 from AudriusButkevicius/cache
Remove fd cache (ref #1308)
2015-02-09 15:10:42 +01:00
Jakob Borg
f976b78917 Merge pull request #1322 from AudriusButkevicius/browser
Opening a browser happens in it's own routine (fixes #1273)
2015-02-09 15:06:56 +01:00
Tobias Nygren
078790bd0f Implement memorySize() for NetBSD 2015-02-05 20:31:25 +01:00
Audrius Butkevicius
b88c5a89a8 Add rumpelsepp 2015-02-03 00:15:57 +00:00
Audrius Butkevicius
57028e3acc Merge pull request #1325 from rumpelsepp/master
Added exit code definitions to systemd service files (fixes #1324)
2015-02-02 08:04:27 +00:00
Stefan Tatschner
c586a17926 Added exit code definitions to systemd service files (fixes #1324) 2015-02-01 23:55:54 +01:00
Audrius Butkevicius
9d078bac54 Add STBLOCKPROFILE 2015-02-01 19:00:24 +00:00
Audrius Butkevicius
38eaefcabd Add ItemFinished event (fixes #1258) 2015-02-01 18:59:29 +00:00
Audrius Butkevicius
ba8cadc2f1 Opening a browser happens in it's own routine (fixes #1273) 2015-02-01 18:59:28 +00:00
Audrius Butkevicius
380d5dfa6d Remove fd cache (ref #1308) 2015-02-01 18:59:24 +00:00
Audrius Butkevicius
32af626630 Fix issues with renames (fixes #1302)
Extra comments explain current issues.
2015-02-01 18:58:27 +00:00
Audrius Butkevicius
8358fedaf4 Fix failing integration tests 2015-02-01 18:57:46 +00:00
Audrius Butkevicius
ec82b0c648 Merge pull request #1309 from uok/fix-override
Fix button "override changes" line break (fixes #1144)
2015-01-29 10:40:14 +00:00
Ben Schulz
81a87f873f Fix button "override changes" line break 2015-01-29 11:28:59 +01:00
Audrius Butkevicius
d91b8ac444 Merge pull request #1299 from krozycki/master
Show information in folder panel if ignore patterns are active fixes #1279
2015-01-27 19:54:09 +00:00
Karol Różycki
952e51ac75 Show information in folder panel if ignore patterns are active, fixes #1279 2015-01-27 15:27:44 +01:00
Audrius Butkevicius
11267cd44f Merge pull request #1298 from uok/link-usage
Make links to usage report clickable
2015-01-26 20:59:20 +00:00
Ben Schulz
0e59e0aebd Make links to usage report clickable 2015-01-26 17:20:55 +01:00
Audrius Butkevicius
ae1d3b3dd3 Merge pull request #1293 from uok/move-panels
Move panels to top of page (ref #1270)
2015-01-25 09:53:03 +00:00
Ben Schulz
1a91dbee5f Move panels to top of page
- Move panels (new device, new folder, notice) to top of page
- Add icons to panel headers (restart, new folder, notice)
2015-01-23 16:28:30 +01:00
Jakob Borg
fd507e3e41 Merge pull request #1290 from krozycki/master
Ensuring path separator at the end of the folder path. (fixes #1262)
2015-01-22 15:36:32 -08:00
Jakob Borg
9c1a67cf47 Add krozycki 2015-01-22 15:35:58 -08:00
Jakob Borg
69e3824840 Add test for #1262 2015-01-22 15:34:22 -08:00
Karol Różycki
fcb1a98129 Ensuring path separator at the end of the folder path. (fixes #1262) 2015-01-23 00:22:30 +01:00
Audrius Butkevicius
6d942635af Merge pull request #1269 from uok/master
Small improvements for job queue
2015-01-22 22:58:33 +00:00
Audrius Butkevicius
cda2c5d459 Fix integration tests 2015-01-22 22:42:39 +00:00
Jakob Borg
969bb5a742 Fix protocol dependency hash 2015-01-22 13:29:54 -08:00
Jakob Borg
4bccc611c3 Update dependencies 2015-01-22 12:53:10 -08:00
Jakob Borg
d18c4ece0c Merge pull request #1282 from syncthing/integ
Improvements to integration tests
2015-01-22 08:22:46 -08:00
Audrius Butkevicius
25c664b13a Improvements to integration tests 2015-01-22 00:18:08 +00:00
Audrius Butkevicius
7c680c955f Merge pull request #1274 from calmh/minor-tweaks
Integer type policy
2015-01-19 20:40:36 +00:00
Ben Schulz
f037d1b6ca improve job queue 2015-01-19 20:49:19 +01:00
Jakob Borg
2c8b627008 Integer type policy
Integers are for numbers, enabling arithmetic like subtractions and for
loops without getting shot in the foot. Unsigneds are for bitfields.

- "int" for numbers that will always be laughably smaller than four
  billion, and where we don't care about the serialization format.

- "int32" for numbers that will always be laughably smaller than four
  billion, and will be serialized to four bytes.

- "int64" for numbers that may approach four billion or will be
  serialized to eight bytes.

- "uint32" and "uint64" for bitfields, depending on required number of
  bits and serialization format. Likewise "uint8" and "uint16", although
  rare in this project since they don't exist in XDR.

- "int8", "int16" and plain "uint" are almost never useful.
2015-01-19 10:34:36 -08:00
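A minimal Go sketch of how the policy above plays out in practice; the names here are illustrative only and are not taken from the syncthing source:

package example

// Bitfields use unsigned types.
const (
	FlagDeleted uint32 = 1 << 0
	FlagInvalid uint32 = 1 << 1
	FlagNoPerm  uint32 = 1 << 2
)

type fileRecord struct {
	NumBlocks int    // in-memory count, never serialized: plain int
	BlockSize int32  // serialized to four bytes: int32
	Size      int64  // may approach four billion, serialized to eight bytes: int64
	Flags     uint32 // bitfield: uint32
}

func totalSize(files []fileRecord) int64 {
	var total int64
	for i := 0; i < len(files); i++ { // signed ints keep loop arithmetic safe
		total += files[i].Size
	}
	return total
}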
Jakob Borg
221e3eddd5 Remove leveldb panic workaround
Haven't seen this triggered for a long time...
2015-01-19 10:23:00 -08:00
Jakob Borg
74c39c677b Actually remove test file after test run 2015-01-19 10:23:00 -08:00
Jakob Borg
e6558832bf check-contrib 2015-01-19 10:18:28 -08:00
Jakob Borg
9c50625c55 Merge pull request #1267 from rumpelsepp/doc
Fix documentation link
2015-01-19 10:16:43 -08:00
Jakob Borg
d372435e92 Merge pull request #1264 from AudriusButkevicius/tempclean
Put temporary files in the OS temp directory (fixes #1239)
2015-01-19 10:12:32 -08:00
Audrius Butkevicius
a53facf709 Cleanup temporary files (fixes #1239) 2015-01-18 20:34:47 +00:00
Stefan Tatschner
ffc39dfbcb Fix documentation link 2015-01-18 18:17:55 +01:00
Audrius Butkevicius
cba38b15a9 Check for deleted files 2015-01-18 13:44:10 +00:00
Audrius Butkevicius
5ac7564bfe Merge pull request #1259 from calmh/check-folder-shared
Verify folder<->device permission in Request
2015-01-16 12:07:37 +00:00
Jakob Borg
53cd289b90 Verify folder<->device permission in Request
Requests from valid devices for valid folders should be rejected if the
folder is not shared with that device.
2015-01-16 12:50:51 +01:00
Jakob Borg
8dc13bcf1a go1.4.1 2015-01-16 10:18:54 +01:00
Audrius Butkevicius
261825a89b Merge pull request #1256 from calmh/fix-963
Adds File Versioning to folder info (fixes #963)
2015-01-15 17:25:28 +00:00
Jakob Borg
f47a5a309d Add File Versioning to folder info (fixes #963)
Could potentially use shorter value strings ("Simple" vs "Simple File
Versioning"), but this avoid introducing unnecessary strings to
translate - can always be changed in the future.
2015-01-15 15:50:49 +01:00
Jakob Borg
a40f2b9fa0 Merge pull request #1237 from syncthing/renamer
Efficient renames (fixes #1217)
2015-01-14 00:28:58 +01:00
Jakob Borg
cfcd3892f7 More concise changelog 2015-01-14 00:27:29 +01:00
Jakob Borg
703987f61c Changelog should read in chronological order from the top 2015-01-13 23:11:46 +01:00
Audrius Butkevicius
e50a8917ec Add renames to integration tests 2015-01-13 22:07:14 +00:00
Audrius Butkevicius
74d7c8e625 Efficient renames (fixes #1217) 2015-01-13 22:06:13 +00:00
Jakob Borg
a5d1383fe8 Translation update 2015-01-13 17:27:47 +01:00
Jakob Borg
bf2bcf515c Merge pull request #1247 from Rewt0r/master
Change Bottom Bar links to a new tab/window
2015-01-13 17:26:48 +01:00
Jakob Borg
4c5e94c64b Add Rewt0r 2015-01-13 17:25:51 +01:00
Jakob Borg
b4043216b6 Use bytes.Reader instead of bytes.Buffer for compiled in assets 2015-01-13 16:05:03 +01:00
Michael Jephcote
4371014667 Change Bottom Bar links to a new tab/window
Was getting kinda annoying clicking links in the footer and it navigating away from the page, I've added "_blank" as the target to those links.
2015-01-13 14:24:08 +00:00
Jakob Borg
fbb3222d29 Update dependencies 2015-01-13 13:55:35 +01:00
Audrius Butkevicius
4ca3889bed Merge pull request #1246 from syncthing/break-out-proto
Refactor out protocol and luhn (protocol dependency) packages
2015-01-13 12:37:46 +00:00
Jakob Borg
eef1aebe8c Refactor out protocol and luhn (protocol dependency) packages 2015-01-13 13:22:56 +01:00
Audrius Butkevicius
48382c4b59 Merge pull request #1245 from syncthing/indexclean-v2
Also filter out some other obviously invalid filenames (ref #1243)
2015-01-13 11:35:04 +00:00
Jakob Borg
9a45f0b31c Also filter out some other obviously invalid filenames (ref #1243) 2015-01-13 12:28:35 +01:00
Audrius Butkevicius
25c26e2f81 Merge pull request #1244 from syncthing/indexclean
Remove nil filenames from database and indexes (fixes #1243)
2015-01-13 08:36:49 +00:00
Jakob Borg
e4837f14b1 Remove nil filenames from database and indexes (fixes #1243) 2015-01-13 09:20:14 +01:00
Audrius Butkevicius
ce86131d12 Merge pull request #1242 from syncthing/rename-set
Renaming of package internal/files and type files.Set
2015-01-12 21:10:27 +00:00
Jakob Borg
e6c9baf6ef Rename db.Set to db.FileSet 2015-01-12 20:57:39 +01:00
Jakob Borg
8d6db7be31 Rename package internal/files to internal/db 2015-01-12 20:57:22 +01:00
Audrius Butkevicius
a2548b1fd0 Merge pull request #1241 from syncthing/fix-1143
Don't start a new refresh() loop on each UIOnline (fixes #1143)
2015-01-12 11:29:53 +00:00
Jakob Borg
e4658bb99d Don't start a new refresh() loop on each UIOnline (fixes #1143)
Separate out the stuff that should run on each UIOnline from the stuff
that should only run on init.
2015-01-12 12:15:58 +01:00
Jakob Borg
bf2e4a561a Changelog prints output on two lines per commit 2015-01-11 21:58:19 +01:00
Jakob Borg
1816320124 One more translation update 2015-01-11 21:19:42 +01:00
Jakob Borg
f09bfe293d Translation update 2015-01-11 20:31:30 +01:00
Jakob Borg
7b4e8fda4b Modal dialog titles must be manually translated 2015-01-11 14:48:40 +01:00
Audrius Butkevicius
c95812353f Merge pull request #1236 from syncthing/flag-safe
Reject Index and Request messages with unexpected flags
2015-01-11 12:40:40 +00:00
Jakob Borg
b622ec7a28 Reject Index and Request messages with unexpected flags 2015-01-11 13:29:01 +01:00
Jakob Borg
d8fbe7b77f Refactor readerLoop to switch on message type directly 2015-01-11 13:24:56 +01:00
Jakob Borg
dbcac37d91 Can run integration tests between different versions 2015-01-11 09:55:44 +01:00
Jakob Borg
d4d391b34f Change integration test "log" field to "instance" 2015-01-11 09:55:17 +01:00
Jakob Borg
571cf7d490 Merge pull request #1182 from AudriusButkevicius/autoauto
Connecting to a newer node triggers autoupgrade check (fixes #1177)
2015-01-11 09:16:12 +01:00
Jakob Borg
e18b19ca5a Translation base & assets update 2015-01-10 18:15:08 +01:00
Jakob Borg
48651bf482 Merge pull request #1234 from uok/master
Small improvements for footer navbar (ref #1154)
2015-01-10 18:14:47 +01:00
Audrius Butkevicius
5034a41c08 Connecting to a newer node triggers autoupgrade check (fixes #1177) 2015-01-10 17:05:19 +00:00
Ben Schulz
6795173e77 Small improvements for footer navbar 2015-01-10 18:02:27 +01:00
Jakob Borg
219ef996f5 Merge pull request #1226 from syncthing/deregister-fix
All roads lead to Finisher (fixes #1201)
2015-01-10 17:53:01 +01:00
Jakob Borg
00af1db275 Translation base & assets update 2015-01-10 17:51:18 +01:00
Jakob Borg
5935ea896f Merge pull request #1232 from facastagnini/master
"Quick guide to supported patterns" link updated
2015-01-10 17:50:23 +01:00
Jakob Borg
459983c05e Add facastagnini 2015-01-10 17:47:41 +01:00
Jakob Borg
ebf4f029ac Merge pull request #1229 from AudriusButkevicius/cfg-hasher
Make parallel hasher configurable, remove finisher setting (fixes #1199)
2015-01-10 17:45:15 +01:00
Jakob Borg
0eec945df1 Merge pull request #1230 from AudriusButkevicius/separator
Expose and use path separator (fixes #1163)
2015-01-10 17:43:41 +01:00
Jakob Borg
8824b9d68f Merge pull request #1231 from AudriusButkevicius/text
Rename "Last File Synced" to "Last File Received" (fixes #1145)
2015-01-10 17:43:06 +01:00
Jakob Borg
d2862814c5 Merge pull request #1233 from AudriusButkevicius/disco-log
Make discovery logging a bit better (fixes #1188)
2015-01-10 17:42:13 +01:00
Audrius Butkevicius
25fece2d50 Make discovery logging a bit better (fixes #1188) 2015-01-10 16:15:16 +00:00
Federico Castagnini
beb4239d1b The "Quick guide to supported patterns" link now points to the wiki article and will open in a new page/tab to avoid disrupting the settings page. 2015-01-10 10:42:45 -05:00
Audrius Butkevicius
2b78e37d92 Rename "Last File Synced" to "Last File Received" (fixes #1145)
IMHO this is more clear
2015-01-10 14:55:08 +00:00
Audrius Butkevicius
a2070d9ce4 Expose and use path separator (fixes #1163) 2015-01-10 14:51:29 +00:00
Audrius Butkevicius
5827a686b8 Make parallel hasher configurable, remove finisher setting (fixes #1199) 2015-01-10 14:32:20 +00:00
Audrius Butkevicius
dec479532e All roads lead to Finisher (fixes #1201) 2015-01-10 13:45:48 +00:00
Jakob Borg
5d173168cc Merge pull request #1214 from bigbear2nd/master
Added colored status indicator for narrow screens. (Fixes #1084)
2015-01-09 16:23:24 +01:00
bigbear2nd
2aac1cde04 Added colored status indicator for narrow screens. (Fixes #1084) 2015-01-09 20:54:00 +09:00
Audrius Butkevicius
3676f0268f Merge pull request #1220 from syncthing/arm-build
Only build ARMv5 (fixes #1218)
2015-01-09 10:25:50 +00:00
Audrius Butkevicius
a7b75a54bb Merge pull request #1219 from syncthing/refactor-truncated
Refactor stuff around FileInfoTruncated
2015-01-09 10:12:48 +00:00
Jakob Borg
961a87b743 Only build ARMv5 (fixes #1218)
With this change, the build system only builds one ARM variant - ARMv5.
We call the build architecture simply "arm", as this is what
runtime.GOARCH says.
2015-01-09 10:45:15 +01:00
Jakob Borg
e03d59e381 The protocol specs moved again 2015-01-09 08:54:19 +01:00
Jakob Borg
d46ce5003c Implement GetGlobalTruncated 2015-01-09 08:41:02 +01:00
Jakob Borg
4c4143d9be Move FileInfoTruncated to files package
This is where it's used, and it clarifies that it's never used over the
wire.
2015-01-09 08:28:24 +01:00
Jakob Borg
8bc7d259f4 Move FileIntf to files package, expose Iterator type
This is where FileIntf is used, so it should be defined here (it's not
a protocol thing, really).
2015-01-09 08:18:42 +01:00
Jakob Borg
2d047fa428 Remove unused types 2015-01-09 08:14:02 +01:00
Audrius Butkevicius
735d420d40 Merge pull request #1215 from syncthing/new-proto-bc
Add some new protocol fields
2015-01-08 21:54:01 +00:00
Jakob Borg
bc9fc1aece Actually close connection based on unknown protocol version 2015-01-08 22:11:26 +01:00
Jakob Borg
b88e3c99c1 Add fields for future extensibility
This adds a number of fields to the end of existing messages. This is a
backwards compatible change.
2015-01-08 22:11:26 +01:00
Jakob Borg
ce3e6e084c Ensure backwards compatibility before modifying protocol
This change makes sure that things work smoothly when "we" are a newer
version than our peer and have more fields in our messages than they do.
Missing fields will be left at zero/nil.

(The other side will ignore our extra fields, for the same effect.)
2015-01-08 14:25:11 +01:00
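A hedged sketch of the idea described in that commit (not the real BEP/XDR decoder; message and field names are assumptions): fields appended to the end of a message are simply left at their zero value when an older peer's message runs out of bytes.

package compat

import (
	"encoding/binary"
	"io"
)

// messageV2 has one field more than the old message format.
type messageV2 struct {
	MaxVersion int64  // present in the old format
	Flags      uint32 // appended later; older peers never send it
}

// decode tolerates a short (older) message: the missing trailing field
// stays at its zero value instead of causing an error.
func decode(r io.Reader) (messageV2, error) {
	var m messageV2
	if err := binary.Read(r, binary.BigEndian, &m.MaxVersion); err != nil {
		return m, err
	}
	err := binary.Read(r, binary.BigEndian, &m.Flags)
	if err == io.EOF || err == io.ErrUnexpectedEOF {
		return m, nil // older peer: Flags stays zero
	}
	return m, err
}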
Jakob Borg
2a58ca7697 Merge pull request #1212 from cqcallaw/upnp
Properly handle absolute URLs when parsing UPnP service control URLs
2015-01-08 08:50:30 +01:00
Caleb Callaway
af96f7a0cd Properly handle absolute URLs when parsing UPnP service control URLs
Fixes #1187
2015-01-07 21:23:20 -08:00
Jakob Borg
7d39d1a925 Make it possible to include extra external files into binary packages 2015-01-07 16:15:50 +01:00
Jakob Borg
1b6c700e18 Merge remote-tracking branch 'origin/pr/1198'
* origin/pr/1198:
  Fix rendering issue on firefox when zoomed (fixes #1197)
2015-01-07 14:24:49 +01:00
Tim Abell
6304bd60ee Fix rendering issue on firefox when zoomed (fixes #1197)
issue #1197
2015-01-07 09:27:17 +00:00
Jakob Borg
4ad4417740 Add timabell 2015-01-07 08:36:54 +01:00
Jakob Borg
6a4c259a73 Merge pull request #1196 from AudriusButkevicius/finddevice
Add device finder utility
2015-01-07 08:31:54 +01:00
Audrius Butkevicius
12eabb220d Add device finder utility 2015-01-06 23:12:12 +00:00
Jakob Borg
d68ce2d68c Translation update 2015-01-06 23:12:40 +01:00
Jakob Borg
8e02c040eb Update key ID for signed releases in README (fixes #1180) 2015-01-06 23:06:16 +01:00
Jakob Borg
a7a317c284 The predictableRandom test can only run once successfully (fixes #1184) 2015-01-06 23:03:35 +01:00
Audrius Butkevicius
9d6ef24660 Merge pull request #1194 from syncthing/fix-1186
Use comma-ok idiom to signal files missing in database (fixes #1186)
2015-01-06 21:54:13 +00:00
Jakob Borg
14014408fb Merge pull request #1181 from kozec/stnoupgrade-disable-button
Return HTTP/500 from /rest/upgrade if STNOUPGRADE is defined
2015-01-06 22:54:08 +01:00
kozec
b933e9666a /rest/upgrade returns HTTP/500 if STNOUPGRADE is defined 2015-01-06 22:50:56 +01:00
Jakob Borg
7aff59bcce Add brendanlong 2015-01-06 22:48:01 +01:00
Jakob Borg
8e2760cb3d Merge pull request #1183 from brendanlong/fix-tests-on-go-1.3
Don't use Go 1.4 range syntax in queue_test.go
2015-01-06 22:47:23 +01:00
Brendan Long
7a9fc6dbd3 Don't use Go 1.4 range syntax in queue_test.go, since the listed requirement is Go 1.3. 2015-01-06 15:45:58 -06:00
Jakob Borg
75d0dc251e Use comma-ok idiom to signal files missing in database (fixes #1186)
Prevents us from doing stupid things to the folder root (empty file
path) when nodes disconnect...
2015-01-06 22:40:20 +01:00
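The comma-ok shape referred to above, sketched with hypothetical types (the real database accessor differs): the boolean tells a genuinely missing entry apart from a zero-valued one, so an empty file path is never acted on.

package example

type fileInfo struct{ Name string }

type fileSet struct{ files map[string]fileInfo }

// Get uses the comma-ok idiom: the second return value says whether the
// entry exists at all, so a zero fileInfo is never mistaken for real data.
func (s *fileSet) Get(name string) (fileInfo, bool) {
	fi, ok := s.files[name]
	return fi, ok
}

func handleIfPresent(s *fileSet, name string) {
	if fi, ok := s.Get(name); ok {
		_ = fi // act on the file only when it is actually in the database
	}
}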
Jakob Borg
9a50c4d93f Don't unnecessarily chmod directories when renaming 2015-01-06 22:10:44 +01:00
Audrius Butkevicius
010d5a0192 Merge pull request #1179 from syncthing/httperror
Handle HTTP errors on non-event requests (fixes #1120)
2015-01-05 17:45:18 +00:00
Jakob Borg
cf1594829a Handle HTTP errors on non-event requests (fixes #1120, fixes #807) 2015-01-05 16:03:00 +01:00
Jakob Borg
854d720ce0 Merge pr/988
* commit 'b9817ac':
  add README
  on-failure instead of always as we cannot otherwise kill the service
  systemd units for system/user
2015-01-05 15:14:33 +01:00
Jakob Borg
2f43c74ece Add peterhoeg 2015-01-05 15:14:22 +01:00
Peter Hoeg
b9817ac6b4 add README 2015-01-05 18:29:13 +08:00
Peter Hoeg
1e8da0d494 on-failure instead of always as we cannot otherwise kill the service 2015-01-05 18:29:13 +08:00
Peter Hoeg
c47be7b415 systemd units for system/user 2015-01-05 18:29:13 +08:00
Jakob Borg
d3f6cb860f Translation update 2015-01-04 20:18:14 +01:00
Audrius Butkevicius
83d25f09a3 Fix broken upgrades (fixes #1175) 2015-01-04 18:19:00 +00:00
Audrius Butkevicius
ed747a2d3d Add identicons to device prompts 2015-01-03 23:34:15 +00:00
Jakob Borg
3a8ee4ce2e Merge pull request #1169 from syncthing/pullhash
Hash blocks after receipt, try multiple peers (fixes #1166)
2015-01-04 00:24:07 +01:00
Audrius Butkevicius
5ac01a3af4 Hash blocks after receipt, try multiple peers (fixes #1166) 2015-01-03 23:21:57 +00:00
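A sketch of the verification step named in that commit title, with assumed names rather than the actual puller code: each received block is hashed and compared against the expected block hash before being written, and a mismatch lets the caller retry the request with a different peer.

package pull

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

var errBlockHashMismatch = errors.New("block hash mismatch")

// verifyBlock returns an error when the data pulled from a peer does not
// match the expected hash, so the caller can fall back to another peer.
func verifyBlock(data, wantHash []byte) error {
	sum := sha256.Sum256(data)
	if !bytes.Equal(sum[:], wantHash) {
		return errBlockHashMismatch
	}
	return nil
}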
Jakob Borg
46343f2f9e Merge pull request #1174 from AudriusButkevicius/intro
New device, folder prompts (fixes #120, fixes #330)
2015-01-04 00:16:10 +01:00
Audrius Butkevicius
56ccb5b2ab New device, folder prompts (fixes #120, fixes #330) 2015-01-03 23:06:41 +00:00
Jakob Borg
9a946eed80 Discourse -> Wiki for docs 2015-01-03 16:44:13 +01:00
Audrius Butkevicius
9c6cb0f630 Merge pull request #1172 from syncthing/random-scanintv
Add a random perturbation to the scan interval (fixes #1150)
2015-01-02 15:25:22 +00:00
Audrius Butkevicius
1b066d6965 Merge pull request #1171 from syncthing/jobqueue
Add job queue (replaces #1060)
2015-01-02 15:18:50 +00:00
Jakob Borg
54c3caad53 Add a random perturbation to the scan interval (fixes #1150) 2015-01-02 16:16:16 +01:00
Jakob Borg
9b5e8aaf83 Repair buggy BringToFront 2015-01-02 15:54:04 +01:00
Jakob Borg
5143c09bcf Refactor / cleanup 2015-01-02 15:54:04 +01:00
Jakob Borg
2496185629 Only buffer file names, not full &FileInfo 2015-01-02 15:33:39 +01:00
Jakob Borg
34deb82aea Use slice instead of list, no map
benchmark                           old ns/op     new ns/op     delta
BenchmarkJobQueueBump               345           154498        +44682.03%
BenchmarkJobQueuePushPopDone10k     9437373       3258204       -65.48%

benchmark                           old allocs     new allocs     delta
BenchmarkJobQueueBump               0              0              +0.00%
BenchmarkJobQueuePushPopDone10k     10565          22             -99.79%

benchmark                           old bytes     new bytes     delta
BenchmarkJobQueueBump               0             0             +0.00%
BenchmarkJobQueuePushPopDone10k     1452498       385869        -73.43%
2015-01-02 15:33:39 +01:00
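The gist of the slice-based approach benchmarked above, as an illustrative sketch (field and method names are assumptions, not the real internal/model queue): file names live in a plain slice, and BringToFront shifts the prefix instead of maintaining a list plus map, which trades a linear scan for far fewer allocations.

package jobqueue

import "sync"

type JobQueue struct {
	mut        sync.Mutex
	queued     []string // file names only, not full FileInfo structs
	inProgress []string
}

func (q *JobQueue) Push(file string) {
	q.mut.Lock()
	q.queued = append(q.queued, file)
	q.mut.Unlock()
}

// Pop moves the head of the queue into the in-progress set.
func (q *JobQueue) Pop() (string, bool) {
	q.mut.Lock()
	defer q.mut.Unlock()
	if len(q.queued) == 0 {
		return "", false
	}
	f := q.queued[0]
	q.queued = q.queued[1:]
	q.inProgress = append(q.inProgress, f)
	return f, true
}

// BringToFront shifts the prefix right by one and puts the named file first.
func (q *JobQueue) BringToFront(file string) {
	q.mut.Lock()
	defer q.mut.Unlock()
	for i, f := range q.queued {
		if f == file {
			copy(q.queued[1:i+1], q.queued[:i])
			q.queued[0] = file
			return
		}
	}
}

// Done removes a finished file from the in-progress set.
func (q *JobQueue) Done(file string) {
	q.mut.Lock()
	defer q.mut.Unlock()
	for i, f := range q.inProgress {
		if f == file {
			q.inProgress = append(q.inProgress[:i], q.inProgress[i+1:]...)
			return
		}
	}
}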
Jakob Borg
8f72ae9da2 Add some benchmarks 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
b753f01ac1 Add tests 2015-01-02 15:33:39 +01:00
Audrius Butkevicius
fd0a147ae6 Add job queue (fixes #629)
Request to terminate currently ongoing downloads and jump to the bumped file
incoming in 3, 2, 1.

Also, has a slightly strange effect where we pop a job off the queue, but
the copyChannel is still busy and blocks, though it gets moved to the
progress slice in the jobqueue, and looks like it's in progress which it isn't
as it's waiting to be picked up from the copyChan.

As a result, the progress emitter doesn't register on the task, and hence the file
doesn't have a progress bar, but cannot be replaced by a bump.

I guess I can fix progress bar issue by moving the progressEmiter.Register just
before passing the file to the copyChan, but then we are back to the initial
problem of a file with a progress bar, but no progress happening as it's stuck
 on write to copyChan

I checked if there is a way to check for channel writeability (before popping)
but got struck by lightning just for bringing the idea up in #go-nuts.

My ideal scenario would be to check if copyChan is writeable, pop job from the
queue and shove it down handleFile. This way jobs would stay in the queue while
they cannot be handled, meaning that the `Bump` could bring your file up higher.
2015-01-02 15:33:39 +01:00
Audrius Butkevicius
e94bd90782 Merge pull request #1164 from syncthing/ro-tempfiles
Handle read only temp files after crash/restart
2014-12-31 12:08:37 +00:00
Jakob Borg
ce4b897d0e Handle read only temp files after crash/restart 2014-12-31 13:06:28 +01:00
Jakob Borg
a7694029e2 Make sure to stop processes when exiting integration test 2014-12-31 13:04:06 +01:00
Jakob Borg
1e9110b763 Add debugging utility for manual directory comparison 2014-12-31 13:04:06 +01:00
Jakob Borg
6f3fbbbe49 Improve error checking in integration tests 2014-12-31 13:04:04 +01:00
Jakob Borg
d346ec7bfe Merge pull request #1160 from AudriusButkevicius/upnp
Use unique names for UPnP mappings (fixes #1100, fixes #1128)
2014-12-31 12:56:47 +01:00
Jakob Borg
26a3613397 Merge pull request #1162 from AudriusButkevicius/silence
Silence versioner warnings for unmatched files (fixes #1117)
2014-12-31 12:54:23 +01:00
Jakob Borg
e6318bddf3 Merge pull request #1161 from AudriusButkevicius/upnp2
Use ListenMulticastUDP for multicast sockets (potentially fixes #1113)
2014-12-31 12:53:56 +01:00
Audrius Butkevicius
514bb0beda Silence versioner warnings for unmatched files (fixes #1117) 2014-12-30 22:43:07 +00:00
Audrius Butkevicius
41b1bd2f05 Use ListenMulticastUDP for multicast sockets (potentially fixes #1113) 2014-12-30 22:27:47 +00:00
Audrius Butkevicius
bf40dadf04 Use unique names for UPnP mappings (fixes #1100, fixes #1128) 2014-12-30 21:47:12 +00:00
Jakob Borg
cb1678ebec Clean up folders after -reset test 2014-12-30 11:02:49 +01:00
Jakob Borg
0c1ac568b5 Fix tests with newer goleveldb 2014-12-29 14:50:24 +01:00
Audrius Butkevicius
0f9550c747 Merge pull request #1149 from syncthing/fix-1058
Also check file size when determining if file is unchanged (fixes #1058)
2014-12-29 13:29:00 +00:00
Audrius Butkevicius
b13ae17a47 Merge pull request #1147 from syncthing/fix-1118
Generate a random API key on initial setup (fixes #1118)
2014-12-29 13:28:38 +00:00
Jakob Borg
f762a12d18 Also check file size when determining if file is unchanged (fixes #1058) 2014-12-29 14:24:12 +01:00
Jakob Borg
20d30a80be Generate a random API key on initial setup (fixes #1118)
Also makes the javascript implementation use the same algorithm for
generating random strings.
2014-12-29 13:48:26 +01:00
Audrius Butkevicius
229b218203 Merge pull request #1146 from syncthing/fix-1047
Make auto upgrade careful about breaking changes (fixes #1047)
2014-12-29 11:41:10 +00:00
Jakob Borg
4b668aaca8 Make auto upgrade careful about breaking changes (fixes #1047) 2014-12-29 12:35:06 +01:00
Jakob Borg
8c7f1421c6 Update goleveldb 2014-12-29 12:23:07 +01:00
Jakob Borg
d90b2c1d52 Translation update 2014-12-29 09:42:17 +01:00
Jakob Borg
22f39be197 Exit before attempting to use nil variables on scanning nonexistent folder 2014-12-23 14:14:05 +01:00
Audrius Butkevicius
2fa45436c2 Merge pull request #1140 from syncthing/fix-1133
Refactor ignore handling to fix #1133
2014-12-23 13:01:56 +02:00
Jakob Borg
cadbb6bbce Move ignore handling from index recv to puller (fixes #1133)
With this change we accept updates for ignored files from other devices,
and check the ignore patterns at pull time. When we detect that the
ignore patterns have changed we do a full check of files that we might
now need to pull.
2014-12-23 10:46:02 +01:00
Jakob Borg
2c89f04be7 Refactor ignore handling (...)
This uses persistent Matcher objects that can reload their content and
provide a hash string that can be used to check if it's changed. The
cache is local to each Matcher object instead of kept globally.
2014-12-23 10:46:02 +01:00
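A rough sketch of how such a hash can be used at pull time (hypothetical API, not the real ignore package): the puller remembers the matcher's hash and, when it changes, schedules a full check of files it previously skipped.

package puller

// ignoreMatcher stands in for the persistent Matcher described above; only
// the Hash method matters for this sketch.
type ignoreMatcher interface {
	Hash() string
}

type folderPuller struct {
	ignores        ignoreMatcher
	prevIgnoreHash string
	fullCheck      func() // re-examine files that may no longer be ignored
}

// maybeRescan runs the full check only when the ignore patterns have changed
// since the last pull, instead of filtering at index receive time.
func (p *folderPuller) maybeRescan() {
	if h := p.ignores.Hash(); h != p.prevIgnoreHash {
		p.prevIgnoreHash = h
		p.fullCheck()
	}
}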
Jakob Borg
597011e3a9 Disregard change to removed doc 2014-12-23 10:23:36 +01:00
Audrius Butkevicius
0d433b58ba Merge pull request #1139 from syncthing/check-upgrade-md5
Check upgrade md5
2014-12-22 15:33:19 +02:00
Jakob Borg
cde8ef56e5 Implement manual -upgrade-to option 2014-12-22 12:18:10 +01:00
Jakob Borg
110816c7aa Consolidate Windows/Unix upgrading and check MD5 (fixes #1138) 2014-12-22 12:13:31 +01:00
Jakob Borg
fbb1e168f7 Include MD5 sums in archives 2014-12-22 12:12:34 +01:00
Jakob Borg
23085eb5ae Must verify success of from-network copy during upgrade (ref #1138) 2014-12-22 10:42:47 +01:00
Jakob Borg
7344a6205f Move protocol specs to a separate repo 2014-12-22 09:55:58 +01:00
marco-m
4b76ec40c0 Update DISCOVERY.md
Correct DISCOVERY.md with the changes proposed in the forum (https://discourse.syncthing.net/t/questions-about-the-discovery-protocol/1586)
2014-12-21 22:47:47 +01:00
Audrius Butkevicius
90101d0269 Merge pull request #1134 from syncthing/fix-816
Don't ignore ignored items forever (fixes #816)
2014-12-21 16:18:24 +02:00
Jakob Borg
7ac84c0660 Don't ignore ignored items forever (fixes #816) 2014-12-21 13:55:50 +01:00
Jakob Borg
2090530bbb Improve and clean up integration tests, benchmark. 2014-12-19 12:43:48 +01:00
Jakob Borg
b6cb7ddbaf There is no Legend string right now 2014-12-19 10:18:51 +01:00
Jakob Borg
3422d9335c ... and in NICKS (I should go to bed) 2014-12-18 22:55:04 +01:00
Jakob Borg
e91f9a944e Revert "Update bootstrap" (fixes #1121)
This reverts commit 51cdd38c3e.

Conflicts:
	internal/auto/gui.files.go
2014-12-18 22:32:03 +01:00
Jakob Borg
e7ddc7cf0f ... also in index.html 2014-12-18 22:02:45 +01:00
Jakob Borg
40dfa48756 Rebuild assets 2014-12-18 22:01:38 +01:00
Jakob Borg
579f92cf5f Merge branch 'pr-1115'
* pr-1115:
  Make progress indicators less animated
  put legend above list of needed files
2014-12-18 22:01:27 +01:00
Jakob Borg
4565125da9 Add Cathryne 2014-12-18 21:59:54 +01:00
Jakob Borg
ce13a01e65 Clarify authorship requirements in contribution guidelines 2014-12-18 21:56:52 +01:00
Jakob Borg
618a8682b7 golint style tweaks 2014-12-16 23:33:56 +01:00
Jakob Borg
963077f918 Translation update 2014-12-16 23:20:59 +01:00
Jakob Borg
3704d2d86b Don't exit after creating HTTPS certs (fixes #1103) 2014-12-16 22:55:44 +01:00
Jakob Borg
fc6a029311 gofmt 2014-12-16 22:40:04 +01:00
Jakob Borg
7c7b1e6c2d Merge branch 'update-bootstrap'
* update-bootstrap:
  Fix checkbox breakage in Settings dialog
  Update bootstrap
2014-12-15 09:13:05 +01:00
Jakob Borg
892920039d Fix checkbox breakage in Settings dialog 2014-12-15 09:12:59 +01:00
Jakob Borg
51cdd38c3e Update bootstrap 2014-12-15 08:54:29 +01:00
Jakob Borg
80977bd4c0 Make progress indicators less animated 2014-12-15 00:34:03 +01:00
Cathryne
d8022f94ef put legend above list of needed files 2014-12-13 18:33:20 +01:00
Jakob Borg
1c43587d7d Patch Go for issue #9102 in build env (fixes #1112) 2014-12-13 10:38:05 +01:00
Jakob Borg
b2ed32b118 Command -generate should work on non-existent dir 2014-12-12 21:39:03 +01:00
Jakob Borg
0cc815d816 Need config available for -reset (fixes #1111) 2014-12-12 21:29:57 +01:00
Jakob Borg
d452b7593f Merge branch 'pr-1094'
* pr-1094:
  GUI tweaks for last file synced
  Display last received file and time (fixes #292, fixes #801)
2014-12-12 14:25:12 +01:00
Jakob Borg
5346bdc683 GUI tweaks for last file synced 2014-12-12 14:24:36 +01:00
Jakob Borg
dc5c1e2002 Use Go 1.4 for builds 2014-12-11 12:48:40 +01:00
Jakob Borg
2e48e298a2 Merge pull request #1107 from AudriusButkevicius/cleanup
Remove temporaries during scan (fixes #1092)
2014-12-10 09:19:34 +01:00
Audrius Butkevicius
7a1aaaf5c4 Remove temporaries during scan (fixes #1092) 2014-12-09 23:58:58 +00:00
Audrius Butkevicius
bde92d5cfe Display last received file and time (fixes #292, fixes #801) 2014-12-09 20:24:48 +00:00
Audrius Butkevicius
691f0f4845 Merge pull request #1102 from syncthing/gui-poodle
Protect GUI HTTPS from some attacks
2014-12-09 09:52:21 +00:00
Jakob Borg
fdd458d2fe Protect GUI HTTPS from some attacks
- Disable SSLv3 against POODLE
 - Disable RC4 as a weak cipher
 - Set the CommonName to the system host name
2014-12-09 10:49:58 +01:00
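The three points above map to standard crypto/tls settings; a hedged sketch of what such a config might look like (not the actual cmd/syncthing code):

package guitls

import "crypto/tls"

// hardenedTLSConfig reflects the bullets above: no SSLv3 (POODLE) and no RC4.
// Setting the certificate CommonName to the system host name happens at
// certificate generation time and is not shown here.
func hardenedTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS10, // SSLv3 and below refused
		CipherSuites: []uint16{ // explicit list with no RC4 suites
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
			tls.TLS_RSA_WITH_AES_128_CBC_SHA,
			tls.TLS_RSA_WITH_AES_256_CBC_SHA,
		},
	}
}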
Jakob Borg
d2c0b8374a Fix integration tests for Windows native 2014-12-08 22:15:10 +01:00
Jakob Borg
c96c78892d Include error in randomness failure panic 2014-12-08 19:40:38 +01:00
Jakob Borg
957643f523 crypto/rand.Reader may not return all entropy immediately 2014-12-08 19:36:08 +01:00
Audrius Butkevicius
749bbec566 Merge pull request #1099 from syncthing/vet-and-lint
Various changes for vet and lint
2014-12-08 17:08:18 +00:00
Jakob Borg
25e363c5fb Style tweaks and some *IDG->IGD in UPnP code 2014-12-08 17:07:55 +01:00
Jakob Borg
febeed3277 config.ConfigWrapper -> config.Wrapper 2014-12-08 16:39:11 +01:00
Jakob Borg
9d07aa006d Various style fixes 2014-12-08 16:36:15 +01:00
Jakob Borg
12d69e25dd Fixes for go vet 2014-12-08 16:19:08 +01:00
Jakob Borg
0c9f1efc75 Run vet and lint during build 2014-12-08 16:12:53 +01:00
Jakob Borg
665b4506e7 Correct check-contrib.sh 2014-12-08 15:44:55 +01:00
Jakob Borg
cb5548ceb8 Fit better in with Jenkins 2014-12-08 15:42:53 +01:00
Jakob Borg
c9492e54f7 Copyright notice 2014-12-08 15:42:53 +01:00
Jakob Borg
12490eafff Script to fail build on missing authors and copyrights 2014-12-08 15:42:53 +01:00
Jakob Borg
6e83d11d5f Translation update 2014-12-08 13:25:27 +01:00
Jakob Borg
9c6aedc91b Merge remote-tracking branch 'origin/pr/1097'
* origin/pr/1097:
  Revert "Cache file descriptors" (fixes #1096)
2014-12-08 13:23:06 +01:00
Audrius Butkevicius
a9339d0627 Revert "Cache file descriptors" (fixes #1096)
This reverts commit 992ad97ad5.

Causes issues on Windows which uses file locking.
Meaning we cannot archive or modify the file while it's open.
2014-12-08 11:56:14 +00:00
Jakob Borg
4d9aa10532 Merge pull request #1093 from syncthing/random
Refactor random string stuff and seeding
2014-12-08 09:40:30 +01:00
Audrius Butkevicius
b00264b594 Copy compression setting while introducing 2014-12-07 22:43:30 +00:00
Jakob Borg
e329c7015e Refactor random string stuff and seeding
Make sure we have a good random seed on the default RNG, that the
predictable RNG is clearly marked as such, that random strings are
actually the length requested, and that they contain a restricted set of
characters only.
2014-12-07 16:47:24 +01:00
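A sketch of the non-predictable variant described above (illustrative alphabet and names, not the real code): characters are drawn from crypto/rand and the result is always exactly the requested length.

package randutil

import (
	"crypto/rand"
	"math/big"
)

// A restricted set of characters, chosen here purely for illustration.
const alphabet = "abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789"

// String returns n random characters from the alphabet, using the
// cryptographically secure source so the output is not predictable.
func String(n int) (string, error) {
	buf := make([]byte, n)
	max := big.NewInt(int64(len(alphabet)))
	for i := range buf {
		idx, err := rand.Int(rand.Reader, max)
		if err != nil {
			return "", err
		}
		buf[i] = alphabet[idx.Int64()]
	}
	return string(buf), nil
}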
Jakob Borg
1392cfc72d Actually commit and use new random UR ID 2014-12-07 15:49:17 +01:00
Jakob Borg
c6688d8f89 Include ref#, show author nickname in release notes 2014-12-07 12:52:18 +01:00
Jakob Borg
87abea0ba3 Script for generating the change log 2014-12-07 09:07:13 +01:00
233 changed files with 9332 additions and 7535 deletions

4
.gitignore vendored

@@ -1,4 +1,4 @@
syncthing
./syncthing
syncthing.exe
*.tar.gz
*.zip
@@ -13,3 +13,5 @@ perfstats*.csv
coverage.xml
!gui/scripts/syncthing
.DS_Store
syncthing.md5
syncthing.exe.md5

1
.mailmap Symbolic link

@@ -0,0 +1 @@
NICKS

12
AUTHORS

@@ -5,15 +5,18 @@ Alexander Graf <register-github@alex-graf.de>
Andrew Dunham <andrew@du.nham.ca>
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Arthur Axel fREW Schmidt <frew@afoolishmanifesto.com> <frioux@gmail.com>
Ben Schulz <ueomkail@gmail.com>
Ben Schulz <ueomkail@gmail.com> <uok@users.noreply.github.com>
Ben Sidhom <bsidhom@gmail.com>
Brandon Philips <brandon@ifup.org>
Brendan Long <self@brendanlong.com>
Caleb Callaway <enlightened.despot@gmail.com>
Cathryne Linenweaver <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
Chris Joel <chris@scriptolo.gy>
Daniel Martí <mvdan@mvdan.cc>
Dennis Wilson <dw@risu.io>
Dominik Heidler <dominik@heidler.eu>
Emil Hessman <emil@hessman.se>
Federico Castagnini <federico.castagnini@gmail.com>
Felix Ableitner <me@nutomic.com>
Felix Unterpaintner <bigbear2nd@gmail.com>
Gilli Sigurdsson <gilli@vx.is>
@@ -21,13 +24,20 @@ Jakob Borg <jakob@nym.se>
James Patterson <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
Jens Diemer <github.com@jensdiemer.de> <git@jensdiemer.de>
Jochen Voss <voss@seehuhn.de>
Karol Różycki <rozycki.karol@gmail.com>
Lode Hoste <zillode@zillode.be>
Marcin Dziadus <dziadus.marcin@gmail.com>
Marc Laporte <marc@marclaporte.com>
Michael Jephcote <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Michael Tilli <pyfisch@gmail.com>
Peter Hoeg <peter@speartail.com>
Philippe Schommers <philippe@schommers.be>
Phill Luby <phill.luby@newredo.com>
Piotr Bejda <piotrb10@gmail.com>
Ryan Sullivan <kayoticsully@gmail.com>
Stefan Tatschner <stefan@sevenbyte.org>
Tim Abell <tim@timwise.co.uk>
Tobias Nygren <tnn@nygren.pp.se>
Tomas Cerveny <kozec@kozec.com>
Tully Robinson <tully@tojr.org>
Veeti Paananen <veeti.paananen@rojekti.fi>


@@ -33,8 +33,8 @@ latest info on Transifex.
Every contribution is welcome. If you want to contribute but are unsure
where to start, any open issues are fair game! Be prepared for a
[certain amount of review](https://discourse.syncthing.net/t/733); it's
all in the name of quality. :) Following the points below will make this
[certain amount of review](https://github.com/syncthing/syncthing/wiki/FAQ#why-are-you-being-so-hard-on-my-pull-request);
it's all in the name of quality. :) Following the points below will make this
a smoother process.
Individuals making significant and valuable contributions are given
@@ -46,6 +46,20 @@ All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
### Core Team
The Syncthing core team currently consists of the following members;
@@ -91,8 +105,8 @@ add yourself as a separate commit in your first pull request.
## Building
[See the documentation](http://discourse.syncthing.net/t/44) on how to
get started with a build environment.
[See the documentation](https://github.com/syncthing/syncthing/wiki/Building)
on how to get started with a build environment.
## Branches
@@ -119,7 +133,7 @@ Yes please!
## Documentation
[Over here!](http://discourse.syncthing.net/category/documentation)
[Over here!](https://github.com/syncthing/syncthing/wiki)
## License

34
Godeps/Godeps.json generated

@@ -1,14 +1,10 @@
{
"ImportPath": "github.com/syncthing/syncthing",
"GoVersion": "go1.4rc1",
"GoVersion": "go1.4",
"Packages": [
"./cmd/..."
],
"Deps": [
{
"ImportPath": "github.com/AudriusButkevicius/lrufdcache",
"Rev": "9bddff8f67224ab3e7d80525a6ae9bcf1ce10769"
},
{
"ImportPath": "github.com/bkaradzic/go-lz4",
"Rev": "93a831dcee242be64a9cc9803dda84af25932de7"
@@ -17,25 +13,29 @@
"ImportPath": "github.com/calmh/logger",
"Rev": "f50d32b313bec2933a3e1049f7416a29f3413d29"
},
{
"ImportPath": "github.com/calmh/luhn",
"Rev": "0c8388ff95fa92d4094011e5a04fc99dea3d1632"
},
{
"ImportPath": "github.com/calmh/osext",
"Rev": "9bf61584e5f1f172e8766ddc9022d9c401faaa5e"
},
{
"ImportPath": "github.com/calmh/xdr",
"Rev": "45c46b7db7ff83b8b9ee09bbd95f36ab50043ece"
},
{
"ImportPath": "github.com/golang/groupcache/lru",
"Rev": "f391194b967ae0d21deadc861ea87120d9687447"
"Rev": "ff948d7666c5e0fd18d398f6278881724d36a90b"
},
{
"ImportPath": "github.com/juju/ratelimit",
"Rev": "f9f36d11773655c0485207f0ad30dc2655f69d56"
},
{
"ImportPath": "github.com/syncthing/protocol",
"Rev": "2e2d479103df8fb721d55d59b0198d6c24f4865b"
},
{
"ImportPath": "github.com/syndtr/goleveldb/leveldb",
"Rev": "97e257099d2ab9578151ba85e2641e2cd14d3ca8"
"Rev": "63c9e642efad852f49e20a6f90194cae112fd2ac"
},
{
"ImportPath": "github.com/syndtr/gosnappy/snappy",
@@ -55,23 +55,19 @@
},
{
"ImportPath": "golang.org/x/crypto/bcrypt",
"Comment": "null-236",
"Rev": "69e2a90ed92d03812364aeb947b7068dc42e561e"
"Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
},
{
"ImportPath": "golang.org/x/crypto/blowfish",
"Comment": "null-236",
"Rev": "69e2a90ed92d03812364aeb947b7068dc42e561e"
"Rev": "4ed45ec682102c643324fae5dff8dab085b6c300"
},
{
"ImportPath": "golang.org/x/text/transform",
"Comment": "null-112",
"Rev": "2f707e0ad64637ca1318279be7201f5ed19c4050"
"Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Comment": "null-112",
"Rev": "2f707e0ad64637ca1318279be7201f5ed19c4050"
"Rev": "c980adc4a823548817b9c47d38c6ca6b7d7d8b6a"
}
]
}


@@ -1,24 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof


@@ -1,25 +0,0 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org>


@@ -1,4 +0,0 @@
lrufdcache
==========
A LRU file descriptor cache


@@ -1,88 +0,0 @@
// Package logger implements a LRU file descriptor cache for concurrent ReadAt
// calls.
package lrufdcache

import (
	"os"
	"sync"

	"github.com/golang/groupcache/lru"
)

// A wrapper around *os.File which counts references
type CachedFile struct {
	file *os.File
	wg   sync.WaitGroup
	// Locking between file.Close and file.ReadAt
	// (just to please the race detector...)
	flock sync.RWMutex
}

// Tells the cache that we are done using the file, but it's up to the cache
// to decide when this file will really be closed. The error, if any, will be
// lost.
func (f *CachedFile) Close() error {
	f.wg.Done()
	return nil
}

// Read the file at the given offset.
func (f *CachedFile) ReadAt(buf []byte, at int64) (int, error) {
	f.flock.RLock()
	defer f.flock.RUnlock()
	return f.file.ReadAt(buf, at)
}

type FileCache struct {
	cache *lru.Cache
	mut   sync.Mutex
}

// Create a new cache with the number of entries to hold.
func NewCache(entries int) *FileCache {
	c := FileCache{
		cache: lru.New(entries),
	}
	c.cache.OnEvicted = func(key lru.Key, fdi interface{}) {
		// The file might not have been closed by all openers yet, therefore
		// spawn a routine which waits for that to happen and then closes the
		// file.
		go func(item *CachedFile) {
			item.wg.Wait()
			item.flock.Lock()
			item.file.Close()
			item.flock.Unlock()
		}(fdi.(*CachedFile))
	}
	return &c
}

// Open and cache a file descriptor or use an existing cached descriptor for
// the given path.
func (c *FileCache) Open(path string) (*CachedFile, error) {
	// Evictions can only happen during c.cache.Add, and there is a potential
	// race between c.cache.Get and cfd.wg.Add where if not guarded by a mutex
	// could result in cfd getting closed before the counter is incremented if
	// a concurrent routine does a c.cache.Add
	c.mut.Lock()
	defer c.mut.Unlock()

	fdi, ok := c.cache.Get(path)
	if ok {
		cfd := fdi.(*CachedFile)
		cfd.wg.Add(1)
		return cfd, nil
	}

	fd, err := os.Open(path)
	if err != nil {
		return nil, err
	}

	cfd := &CachedFile{
		file: fd,
		wg:   sync.WaitGroup{},
	}
	cfd.wg.Add(1)
	c.cache.Add(path, cfd)
	return cfd, nil
}


@@ -1,195 +0,0 @@
package lrufdcache
import (
"io/ioutil"
"os"
"sync"
"time"
"testing"
)
func TestNoopReadFailsOnClosed(t *testing.T) {
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.WriteString("test")
fd.Close()
buf := make([]byte, 4)
defer os.Remove(fd.Name())
_, err = fd.ReadAt(buf, 0)
if err == nil {
t.Fatal("Expected error")
}
}
func TestSingleFileEviction(t *testing.T) {
c := NewCache(1)
wg := sync.WaitGroup{}
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.WriteString("test")
fd.Close()
buf := make([]byte, 4)
defer os.Remove(fd.Name())
for k := 0; k < 100; k++ {
wg.Add(1)
go func() {
defer wg.Done()
cfd, err := c.Open(fd.Name())
if err != nil {
t.Fatal(err)
return
}
defer cfd.Close()
_, err = cfd.ReadAt(buf, 0)
if err != nil {
t.Fatal(err)
}
}()
}
wg.Wait()
}
func TestMultifileEviction(t *testing.T) {
c := NewCache(1)
wg := sync.WaitGroup{}
for k := 0; k < 100; k++ {
wg.Add(1)
go func() {
defer wg.Done()
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.WriteString("test")
fd.Close()
buf := make([]byte, 4)
defer os.Remove(fd.Name())
cfd, err := c.Open(fd.Name())
if err != nil {
t.Fatal(err)
return
}
defer cfd.Close()
_, err = cfd.ReadAt(buf, 0)
if err != nil {
t.Fatal(err)
}
}()
}
wg.Wait()
}
func TestMixedEviction(t *testing.T) {
c := NewCache(1)
wg := sync.WaitGroup{}
wg2 := sync.WaitGroup{}
for i := 0; i < 100; i++ {
wg2.Add(1)
go func() {
defer wg2.Done()
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.WriteString("test")
fd.Close()
buf := make([]byte, 4)
for k := 0; k < 100; k++ {
wg.Add(1)
go func() {
defer wg.Done()
cfd, err := c.Open(fd.Name())
if err != nil {
t.Fatal(err)
return
}
defer cfd.Close()
_, err = cfd.ReadAt(buf, 0)
if err != nil {
t.Fatal(err)
}
}()
}
}()
}
wg2.Wait()
wg.Wait()
}
func TestLimit(t *testing.T) {
testcase := 50
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.Close()
defer os.Remove(fd.Name())
c := NewCache(testcase)
fds := make([]*CachedFile, testcase*2)
for i := 0; i < testcase*2; i++ {
fd, err := ioutil.TempFile("", "fdcache")
if err != nil {
t.Fatal(err)
return
}
fd.WriteString("test")
fd.Close()
defer os.Remove(fd.Name())
nfd, err := c.Open(fd.Name())
if err != nil {
t.Fatal(err)
return
}
fds = append(fds, nfd)
nfd.Close()
}
// Allow closes to happen
time.Sleep(time.Millisecond * 100)
buf := make([]byte, 4)
ok := 0
for _, fd := range fds {
if fd == nil {
continue
}
_, err := fd.ReadAt(buf, 0)
if err == nil {
ok++
}
}
if ok > testcase {
t.Fatal("More than", testcase, "fds open")
}
}


@@ -1,194 +1,194 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt indicates the input was corrupt
ErrCorrupt = errors.New("corrupt input")
)
const (
mlBits = 4
mlMask = (1 << mlBits) - 1
runBits = 8 - mlBits
runMask = (1 << runBits) - 1
)
type decoder struct {
src []byte
dst []byte
spos uint32
dpos uint32
ref uint32
}
func (d *decoder) readByte() (uint8, error) {
if int(d.spos) == len(d.src) {
return 0, io.EOF
}
b := d.src[d.spos]
d.spos++
return b, nil
}
func (d *decoder) getLen() (uint32, error) {
length := uint32(0)
ln, err := d.readByte()
if err != nil {
return 0, ErrCorrupt
}
for ln == 255 {
length += 255
ln, err = d.readByte()
if err != nil {
return 0, ErrCorrupt
}
}
length += uint32(ln)
return length, nil
}
func (d *decoder) cp(length, decr uint32) {
if int(d.ref+length) < int(d.dpos) {
copy(d.dst[d.dpos:], d.dst[d.ref:d.ref+length])
} else {
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.dst[d.ref+ii]
}
}
d.dpos += length
d.ref += length - decr
}
func (d *decoder) finish(err error) error {
if err == io.EOF {
return nil
}
return err
}
// Decode returns the decoded form of src. The returned slice may be a
// subslice of dst if it was large enough to hold the entire decoded block.
func Decode(dst, src []byte) ([]byte, error) {
if len(src) < 4 {
return nil, ErrCorrupt
}
uncompressedLen := binary.LittleEndian.Uint32(src)
if uncompressedLen == 0 {
return nil, nil
}
if uncompressedLen > MaxInputSize {
return nil, ErrTooLarge
}
if dst == nil || len(dst) < int(uncompressedLen) {
dst = make([]byte, uncompressedLen)
}
d := decoder{src: src, dst: dst[:uncompressedLen], spos: 4}
decr := []uint32{0, 3, 2, 3}
for {
code, err := d.readByte()
if err != nil {
return d.dst, d.finish(err)
}
length := uint32(code >> mlBits)
if length == runMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
if int(d.spos+length) > len(d.src) {
return nil, ErrCorrupt
}
for ii := uint32(0); ii < length; ii++ {
d.dst[d.dpos+ii] = d.src[d.spos+ii]
}
d.spos += length
d.dpos += length
if int(d.spos) == len(d.src) {
return d.dst, nil
}
if int(d.spos+2) >= len(d.src) {
return nil, ErrCorrupt
}
back := uint32(d.src[d.spos]) | uint32(d.src[d.spos+1])<<8
if back > d.dpos {
return nil, ErrCorrupt
}
d.spos += 2
d.ref = d.dpos - back
length = uint32(code & mlMask)
if length == mlMask {
ln, err := d.getLen()
if err != nil {
return nil, ErrCorrupt
}
length += ln
}
literal := d.dpos - d.ref
if literal < 4 {
d.cp(4, decr[literal])
} else {
length += 4
}
if d.dpos+length > uncompressedLen {
return nil, ErrCorrupt
}
d.cp(length, 0)
}
}


@@ -1,188 +1,188 @@
/*
* Copyright 2011-2012 Branimir Karadzic. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice, this
* list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
* SHALL COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
* OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
package lz4
import "encoding/binary"
import "errors"
const (
minMatch = 4
hashLog = 17
hashTableSize = 1 << hashLog
hashShift = (minMatch * 8) - hashLog
incompressible uint32 = 128
uninitHash = 0x88888888
// MaxInputSize is the largest buffer than can be compressed in a single block
MaxInputSize = 0x7E000000
)
var (
// ErrTooLarge indicates the input buffer was too large
ErrTooLarge = errors.New("input too large")
)
type encoder struct {
src []byte
dst []byte
hashTable []uint32
pos uint32
anchor uint32
dpos uint32
}
// CompressBound returns the maximum length of an lz4 block, given its uncompressed length
func CompressBound(isize int) int {
if isize > MaxInputSize {
return 0
}
return isize + ((isize) / 255) + 16 + 4
}
func (e *encoder) writeLiterals(length, mlLen, pos uint32) {
ln := length
var code byte
if ln > runMask-1 {
code = runMask
} else {
code = byte(ln)
}
if mlLen > mlMask-1 {
e.dst[e.dpos] = (code << mlBits) + byte(mlMask)
} else {
e.dst[e.dpos] = (code << mlBits) + byte(mlLen)
}
e.dpos++
if code == runMask {
ln -= runMask
for ; ln > 254; ln -= 255 {
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(ln)
e.dpos++
}
for ii := uint32(0); ii < length; ii++ {
e.dst[e.dpos+ii] = e.src[pos+ii]
}
e.dpos += length
}
// Encode returns the encoded form of src. The returned slice may be a
// sub-slice of dst if it was large enough to hold the entire output.
func Encode(dst, src []byte) ([]byte, error) {
if len(src) >= MaxInputSize {
return nil, ErrTooLarge
}
if n := CompressBound(len(src)); len(dst) < n {
dst = make([]byte, n)
}
e := encoder{src: src, dst: dst, hashTable: make([]uint32, hashTableSize)}
binary.LittleEndian.PutUint32(dst, uint32(len(src)))
e.dpos = 4
var (
step uint32 = 1
limit = incompressible
)
for {
if int(e.pos)+12 >= len(e.src) {
e.writeLiterals(uint32(len(e.src))-e.anchor, 0, e.anchor)
return e.dst[:e.dpos], nil
}
sequence := uint32(e.src[e.pos+3])<<24 | uint32(e.src[e.pos+2])<<16 | uint32(e.src[e.pos+1])<<8 | uint32(e.src[e.pos+0])
hash := (sequence * 2654435761) >> hashShift
ref := e.hashTable[hash] + uninitHash
e.hashTable[hash] = e.pos - uninitHash
if ((e.pos-ref)>>16) != 0 || uint32(e.src[ref+3])<<24|uint32(e.src[ref+2])<<16|uint32(e.src[ref+1])<<8|uint32(e.src[ref+0]) != sequence {
if e.pos-e.anchor > limit {
limit <<= 1
step += 1 + (step >> 2)
}
e.pos += step
continue
}
if step > 1 {
e.hashTable[hash] = ref - uninitHash
e.pos -= step - 1
step = 1
continue
}
limit = incompressible
ln := e.pos - e.anchor
back := e.pos - ref
anchor := e.anchor
e.pos += minMatch
ref += minMatch
e.anchor = e.pos
for int(e.pos) < len(e.src)-5 && e.src[e.pos] == e.src[ref] {
e.pos++
ref++
}
mlLen := e.pos - e.anchor
e.writeLiterals(ln, mlLen, anchor)
e.dst[e.dpos] = uint8(back)
e.dst[e.dpos+1] = uint8(back >> 8)
e.dpos += 2
if mlLen > mlMask-1 {
mlLen -= mlMask
for mlLen > 254 {
mlLen -= 255
e.dst[e.dpos] = 255
e.dpos++
}
e.dst[e.dpos] = byte(mlLen)
e.dpos++
}
e.anchor = e.pos
}
}
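Because Encode only allocates when the provided dst is smaller than CompressBound, a caller can size a buffer once and keep reusing it. The helper below is my own illustration of that pattern, not code from this changeset; for a 1000-byte input the bound works out to 1000 + 1000/255 + 16 + 4 = 1023 bytes.
// compressReusing is a hypothetical helper showing the intended buffer-reuse
// pattern: size dst once with CompressBound, then let Encode return a
// sub-slice of it on every call.
func compressReusing(dst, src []byte) (out, reusable []byte, err error) {
	if need := CompressBound(len(src)); len(dst) < need {
		dst = make([]byte, need)
	}
	out, err = Encode(dst, src) // out aliases dst when dst was large enough
	return out, dst, err
}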

Godeps/_workspace/src/github.com/calmh/luhn/LICENSE
View File

@@ -0,0 +1,19 @@
Copyright (C) 2014 Jakob Borg
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 Jakob Borg
// Package luhn generates and validates Luhn mod N check digits.
package luhn
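For background, the Luhn mod N scheme walks the string right to left, doubling every second code point, folding each product back into the alphabet's base, and choosing the check character that brings the sum to a multiple of N. The sketch below is a generic, self-contained illustration of that algorithm; the package name, function name, signature, and error text are my own and do not reflect the calmh/luhn API.
package luhnsketch

import (
	"fmt"
	"strings"
)

// luhnModN returns the Luhn mod N check character for s over alphabet.
// Illustrative only; not the calmh/luhn API.
func luhnModN(alphabet, s string) (byte, error) {
	n := len(alphabet)
	factor := 2
	sum := 0
	for i := len(s) - 1; i >= 0; i-- {
		codePoint := strings.IndexByte(alphabet, s[i])
		if codePoint < 0 {
			return 0, fmt.Errorf("character %q is not in the alphabet", s[i])
		}
		addend := factor * codePoint
		if factor == 2 {
			factor = 1
		} else {
			factor = 2
		}
		sum += addend/n + addend%n
	}
	check := (n - sum%n) % n
	return alphabet[check], nil
}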

View File

@@ -1,24 +1,11 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 Jakob Borg
package luhn_test
import (
"testing"
"github.com/syncthing/syncthing/internal/luhn"
"github.com/calmh/luhn"
)
func TestGenerate(t *testing.T) {

View File

@@ -321,10 +321,10 @@ func generateDiagram(output io.Writer, s structInfo) {
case "bool":
fmt.Fprintf(output, "| %s |V|\n", center(name+" (V=0 or 1)", 59))
fmt.Fprintln(output, line)
case "uint16":
case "int16", "uint16":
fmt.Fprintf(output, "| %s | %s |\n", center("0x0000", 29), center(name, 29))
fmt.Fprintln(output, line)
case "uint32":
case "int32", "uint32":
fmt.Fprintf(output, "| %s |\n", center(name, 61))
fmt.Fprintln(output, line)
case "int64", "uint64":
@@ -374,6 +374,8 @@ func generateXdr(output io.Writer, s structInfo) {
}
switch tn {
case "int16", "int32":
fmt.Fprintf(output, "\tint %s%s;\n", fn, suf)
case "uint16", "uint32":
fmt.Fprintf(output, "\tunsigned int %s%s;\n", fn, suf)
case "int64":

View File

@@ -154,6 +154,10 @@ func (e XDRError) Error() string {
return "xdr " + e.op + ": " + e.err.Error()
}
func (e XDRError) IsEOF() bool {
return e.err == io.EOF
}
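The new IsEOF accessor exists so that decoders can treat "the message ended before all expected fields" as acceptable, which is how the protocol package later in this changeset stays backward compatible with shorter messages. A hypothetical helper to make the pattern concrete (decodeLenient is not a real function in either package):
// decodeLenient accepts a message that simply ends early, leaving the missing
// trailing fields at their zero values, while still reporting real errors.
func decodeLenient(msg interface{ UnmarshalXDR([]byte) error }, buf []byte) error {
	err := msg.UnmarshalXDR(buf)
	if xdrErr, ok := err.(interface{ IsEOF() bool }); ok && xdrErr.IsEOF() {
		return nil
	}
	return err
}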
func (r *Reader) Error() error {
if r.err == nil {
return nil

View File

@@ -1,121 +0,0 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package lru implements an LRU cache.
package lru
import "container/list"
// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
// MaxEntries is the maximum number of cache entries before
// an item is evicted. Zero means no limit.
MaxEntries int
// OnEvicted optionally specificies a callback function to be
// executed when an entry is purged from the cache.
OnEvicted func(key Key, value interface{})
ll *list.List
cache map[interface{}]*list.Element
}
// A Key may be any value that is comparable. See http://golang.org/ref/spec#Comparison_operators
type Key interface{}
type entry struct {
key Key
value interface{}
}
// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
return &Cache{
MaxEntries: maxEntries,
ll: list.New(),
cache: make(map[interface{}]*list.Element),
}
}
// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
if c.cache == nil {
c.cache = make(map[interface{}]*list.Element)
c.ll = list.New()
}
if ee, ok := c.cache[key]; ok {
c.ll.MoveToFront(ee)
ee.Value.(*entry).value = value
return
}
ele := c.ll.PushFront(&entry{key, value})
c.cache[key] = ele
if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
c.RemoveOldest()
}
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.ll.MoveToFront(ele)
return ele.Value.(*entry).value, true
}
return
}
// Remove removes the provided key from the cache.
func (c *Cache) Remove(key Key) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.removeElement(ele)
}
}
// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
if c.cache == nil {
return
}
ele := c.ll.Back()
if ele != nil {
c.removeElement(ele)
}
}
func (c *Cache) removeElement(e *list.Element) {
c.ll.Remove(e)
kv := e.Value.(*entry)
delete(c.cache, kv.key)
if c.OnEvicted != nil {
c.OnEvicted(kv.key, kv.value)
}
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
if c.cache == nil {
return 0
}
return c.ll.Len()
}
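For reference, since this changeset drops the package together with the fd cache, typical use of the lru API shown above looks like the following; the import path is an assumption on my part, and only calls that appear in this file are used.
package main

import (
	"fmt"

	"github.com/golang/groupcache/lru" // path assumed; the vendored copy is what is being removed here
)

func main() {
	c := lru.New(2) // keep at most two entries
	c.OnEvicted = func(key lru.Key, value interface{}) {
		fmt.Println("evicted", key)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // exceeds MaxEntries, so the least recently used entry ("a") is evicted
	if v, ok := c.Get("b"); ok {
		fmt.Println("b =", v) // b = 2
	}
	fmt.Println("len =", c.Len()) // len = 2
}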

View File

@@ -1,73 +0,0 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package lru
import (
"testing"
)
type simpleStruct struct {
int
string
}
type complexStruct struct {
int
simpleStruct
}
var getTests = []struct {
name string
keyToAdd interface{}
keyToGet interface{}
expectedOk bool
}{
{"string_hit", "myKey", "myKey", true},
{"string_miss", "myKey", "nonsense", false},
{"simple_struct_hit", simpleStruct{1, "two"}, simpleStruct{1, "two"}, true},
{"simeple_struct_miss", simpleStruct{1, "two"}, simpleStruct{0, "noway"}, false},
{"complex_struct_hit", complexStruct{1, simpleStruct{2, "three"}},
complexStruct{1, simpleStruct{2, "three"}}, true},
}
func TestGet(t *testing.T) {
for _, tt := range getTests {
lru := New(0)
lru.Add(tt.keyToAdd, 1234)
val, ok := lru.Get(tt.keyToGet)
if ok != tt.expectedOk {
t.Fatalf("%s: cache hit = %v; want %v", tt.name, ok, !ok)
} else if ok && val != 1234 {
t.Fatalf("%s expected get to return 1234 but got %v", tt.name, val)
}
}
}
func TestRemove(t *testing.T) {
lru := New(0)
lru.Add("myKey", 1234)
if val, ok := lru.Get("myKey"); !ok {
t.Fatal("TestRemove returned no match")
} else if val != 1234 {
t.Fatalf("TestRemove failed. Expected %d, got %v", 1234, val)
}
lru.Remove("myKey")
if _, ok := lru.Get("myKey"); ok {
t.Fatal("TestRemove returned a removed entry")
}
}

View File

@@ -0,0 +1,4 @@
# This is the official list of Protocol Authors for copyright purposes.
Audrius Butkevicius <audrius.butkevicius@gmail.com>
Jakob Borg <jakob@nym.se>

View File

@@ -0,0 +1,76 @@
## Reporting Bugs
Please file bugs in the [Github Issue
Tracker](https://github.com/syncthing/protocol/issues).
## Contributing Code
Every contribution is welcome. Following the points below will make this
a smoother process.
Individuals making significant and valuable contributions are given
commit-access to the project. If you make a significant contribution and
are not considered for commit-access, please contact any of the
Syncthing core team members.
All nontrivial contributions should go through the pull request
mechanism for internal review. Determining what is "nontrivial" is left
at the discretion of the contributor.
### Authorship
All code authors are listed in the AUTHORS file. Commits must be made
with the same name and email as listed in the AUTHORS file. To
accomplish this, ensure that your git configuration is set correctly
prior to making your first commit;
$ git config --global user.name "Jane Doe"
$ git config --global user.email janedoe@example.com
You must be reachable on the given email address. If you do not wish to
use your real name for whatever reason, using a nickname or pseudonym is
perfectly acceptable.
## Coding Style
- Follow the conventions laid out in [Effective Go](https://golang.org/doc/effective_go.html)
as much as makes sense.
- All text files use Unix line endings.
- Each commit should be `go fmt` clean.
- The commit message subject should be a single short sentence
describing the change, starting with a capital letter.
- Commits that resolve an existing issue must include the issue number
as `(fixes #123)` at the end of the commit message subject.
- Imports are grouped per `goimports` standard; that is, standard
library first, then third party libraries after a blank line.
- A contribution solving a single issue or introducing a single new
feature should probably be a single commit based on the current
`master` branch. You may be asked to "rebase" or "squash" your pull
request to make sure this is the case, especially if there have been
amendments during review.
## Licensing
All contributions are made under the same MIT license as the rest of the
project, except documentation, user interface text and translation
strings which are licensed under the Creative Commons Attribution 4.0
International License. You retain the copyright to code you have
written.
When accepting your first contribution, the maintainer of the project
will ensure that you are added to the AUTHORS file. You are welcome to
add yourself as a separate commit in your first pull request.
## Tests
Yes please!
## License
MIT

View File

@@ -0,0 +1,19 @@
Copyright (C) 2014-2015 The Protocol Authors
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,13 @@
The BEPv1 Protocol
==================
[![Latest Build](http://img.shields.io/jenkins/s/http/build.syncthing.net/protocol.svg?style=flat-square)](http://build.syncthing.net/job/protocol/lastBuild/)
[![API Documentation](http://img.shields.io/badge/api-Godoc-blue.svg?style=flat-square)](http://godoc.org/github.com/syncthing/protocol)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat-square)](http://opensource.org/licenses/MIT)
This is the protocol implementation used by Syncthing.
License
=======
MIT

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -0,0 +1,62 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"io"
"sync/atomic"
"time"
)
type countingReader struct {
io.Reader
tot int64 // bytes
last int64 // unix nanos
}
var (
totalIncoming int64
totalOutgoing int64
)
func (c *countingReader) Read(bs []byte) (int, error) {
n, err := c.Reader.Read(bs)
atomic.AddInt64(&c.tot, int64(n))
atomic.AddInt64(&totalIncoming, int64(n))
atomic.StoreInt64(&c.last, time.Now().UnixNano())
return n, err
}
func (c *countingReader) Tot() int64 {
return atomic.LoadInt64(&c.tot)
}
func (c *countingReader) Last() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.last))
}
type countingWriter struct {
io.Writer
tot int64 // bytes
last int64 // unix nanos
}
func (c *countingWriter) Write(bs []byte) (int, error) {
n, err := c.Writer.Write(bs)
atomic.AddInt64(&c.tot, int64(n))
atomic.AddInt64(&totalOutgoing, int64(n))
atomic.StoreInt64(&c.last, time.Now().UnixNano())
return n, err
}
func (c *countingWriter) Tot() int64 {
return atomic.LoadInt64(&c.tot)
}
func (c *countingWriter) Last() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.last))
}
func TotalInOut() (int64, int64) {
return atomic.LoadInt64(&totalIncoming), atomic.LoadInt64(&totalOutgoing)
}
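These wrappers are unexported, so the sketch below only illustrates how the package itself would thread a duplex stream through them; wrapCounting is a hypothetical helper, not code from this changeset.
// wrapCounting wraps both halves of a connection so that per-stream counters
// and the package-wide totals stay current.
func wrapCounting(r io.Reader, w io.Writer) (*countingReader, *countingWriter) {
	return &countingReader{Reader: r}, &countingWriter{Writer: w}
}
Every subsequent Read and Write then updates the per-stream counter and the shared totals with atomic adds, so Tot, Last and TotalInOut can be sampled from any goroutine without additional locking.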

View File

@@ -0,0 +1,15 @@
// Copyright (C) 2014 The Protocol Authors.
package protocol
import (
"os"
"strings"
"github.com/calmh/logger"
)
var (
debug = strings.Contains(os.Getenv("STTRACE"), "protocol") || os.Getenv("STTRACE") == "all"
l = logger.DefaultLogger
)

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -24,7 +11,7 @@ import (
"regexp"
"strings"
"github.com/syncthing/syncthing/internal/luhn"
"github.com/calmh/luhn"
)
type DeviceID [32]byte

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -0,0 +1,4 @@
// Copyright (C) 2014 The Protocol Authors.
// Package protocol implements the Block Exchange Protocol.
package protocol

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -0,0 +1,122 @@
// Copyright (C) 2014 The Protocol Authors.
//go:generate genxdr -o message_xdr.go message.go
package protocol
import "fmt"
type IndexMessage struct {
Folder string // max:64
Files []FileInfo
Flags uint32
Options []Option // max:64
}
type FileInfo struct {
Name string // max:8192
Flags uint32
Modified int64
Version int64
LocalVersion int64
Blocks []BlockInfo
}
func (f FileInfo) String() string {
return fmt.Sprintf("File{Name:%q, Flags:0%o, Modified:%d, Version:%d, Size:%d, Blocks:%v}",
f.Name, f.Flags, f.Modified, f.Version, f.Size(), f.Blocks)
}
func (f FileInfo) Size() (bytes int64) {
if f.IsDeleted() || f.IsDirectory() {
return 128
}
for _, b := range f.Blocks {
bytes += int64(b.Size)
}
return
}
func (f FileInfo) IsDeleted() bool {
return f.Flags&FlagDeleted != 0
}
func (f FileInfo) IsInvalid() bool {
return f.Flags&FlagInvalid != 0
}
func (f FileInfo) IsDirectory() bool {
return f.Flags&FlagDirectory != 0
}
func (f FileInfo) IsSymlink() bool {
return f.Flags&FlagSymlink != 0
}
func (f FileInfo) HasPermissionBits() bool {
return f.Flags&FlagNoPermBits == 0
}
type BlockInfo struct {
Offset int64 // noencode (cache only)
Size int32
Hash []byte // max:64
}
func (b BlockInfo) String() string {
return fmt.Sprintf("Block{%d/%d/%x}", b.Offset, b.Size, b.Hash)
}
type RequestMessage struct {
Folder string // max:64
Name string // max:8192
Offset int64
Size int32
Hash []byte // max:64
Flags uint32
Options []Option // max:64
}
type ResponseMessage struct {
Data []byte
Error int32
}
type ClusterConfigMessage struct {
ClientName string // max:64
ClientVersion string // max:64
Folders []Folder // max:64
Options []Option // max:64
}
func (o *ClusterConfigMessage) GetOption(key string) string {
for _, option := range o.Options {
if option.Key == key {
return option.Value
}
}
return ""
}
type Folder struct {
ID string // max:64
Devices []Device
}
type Device struct {
ID []byte // max:32
Flags uint32
MaxLocalVersion int64
}
type Option struct {
Key string // max:64
Value string // max:1024
}
type CloseMessage struct {
Reason string // max:1024
Code int32
}
type EmptyMessage struct{}
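To make the relationships concrete, here is a hypothetical snippet using only the fields and methods defined above; the literal values, and the note about permission bits living in the low bits of Flags, are my own illustration rather than part of this changeset.
// illustrateFileInfo shows how FileInfo and BlockInfo compose; values invented.
func illustrateFileInfo() {
	fi := FileInfo{
		Name:     "docs/readme.txt",
		Flags:    0644, // assumed here: permission bits occupy the low bits of Flags
		Modified: 1423699200,
		Version:  1,
		Blocks: []BlockInfo{
			{Size: 131072, Hash: make([]byte, 32)},
			{Size: 4096, Hash: make([]byte, 32)},
		},
	}
	fmt.Println(fi.Size())      // 135168, the sum of the block sizes for a regular file
	fmt.Println(fi.IsDeleted()) // false, FlagDeleted is not set
}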

View File

@@ -30,11 +30,21 @@ IndexMessage Structure:
\ Zero or more FileInfo Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct IndexMessage {
string Folder<64>;
FileInfo Files<>;
unsigned int Flags;
Option Options<64>;
}
*/
@@ -75,6 +85,17 @@ func (o IndexMessage) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), err
}
}
xw.WriteUint32(o.Flags)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].encodeXDR(xw)
if err != nil {
return xw.Tot(), err
}
}
return xw.Tot(), xw.Error()
}
@@ -96,6 +117,15 @@ func (o *IndexMessage) decodeXDR(xr *xdr.Reader) error {
for i := range o.Files {
(&o.Files[i]).decodeXDR(xr)
}
o.Flags = xr.ReadUint32()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).decodeXDR(xr)
}
return xr.Error()
}
@@ -138,8 +168,8 @@ struct FileInfo {
string Name<8192>;
unsigned int Flags;
hyper Modified;
unsigned hyper Version;
unsigned hyper LocalVersion;
hyper Version;
hyper LocalVersion;
BlockInfo Blocks<>;
}
@@ -176,8 +206,8 @@ func (o FileInfo) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteString(o.Name)
xw.WriteUint32(o.Flags)
xw.WriteUint64(uint64(o.Modified))
xw.WriteUint64(o.Version)
xw.WriteUint64(o.LocalVersion)
xw.WriteUint64(uint64(o.Version))
xw.WriteUint64(uint64(o.LocalVersion))
xw.WriteUint32(uint32(len(o.Blocks)))
for i := range o.Blocks {
_, err := o.Blocks[i].encodeXDR(xw)
@@ -203,8 +233,8 @@ func (o *FileInfo) decodeXDR(xr *xdr.Reader) error {
o.Name = xr.ReadStringMax(8192)
o.Flags = xr.ReadUint32()
o.Modified = int64(xr.ReadUint64())
o.Version = xr.ReadUint64()
o.LocalVersion = xr.ReadUint64()
o.Version = int64(xr.ReadUint64())
o.LocalVersion = int64(xr.ReadUint64())
_BlocksSize := int(xr.ReadUint32())
o.Blocks = make([]BlockInfo, _BlocksSize)
for i := range o.Blocks {
@@ -215,106 +245,6 @@ func (o *FileInfo) decodeXDR(xr *xdr.Reader) error {
/*
FileInfoTruncated Structure:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Name |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Name (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Modified (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Version (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Local Version (64 bits) +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Num Blocks |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct FileInfoTruncated {
string Name<8192>;
unsigned int Flags;
hyper Modified;
unsigned hyper Version;
unsigned hyper LocalVersion;
unsigned int NumBlocks;
}
*/
func (o FileInfoTruncated) EncodeXDR(w io.Writer) (int, error) {
var xw = xdr.NewWriter(w)
return o.encodeXDR(xw)
}
func (o FileInfoTruncated) MarshalXDR() ([]byte, error) {
return o.AppendXDR(make([]byte, 0, 128))
}
func (o FileInfoTruncated) MustMarshalXDR() []byte {
bs, err := o.MarshalXDR()
if err != nil {
panic(err)
}
return bs
}
func (o FileInfoTruncated) AppendXDR(bs []byte) ([]byte, error) {
var aw = xdr.AppendWriter(bs)
var xw = xdr.NewWriter(&aw)
_, err := o.encodeXDR(xw)
return []byte(aw), err
}
func (o FileInfoTruncated) encodeXDR(xw *xdr.Writer) (int, error) {
if l := len(o.Name); l > 8192 {
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
}
xw.WriteString(o.Name)
xw.WriteUint32(o.Flags)
xw.WriteUint64(uint64(o.Modified))
xw.WriteUint64(o.Version)
xw.WriteUint64(o.LocalVersion)
xw.WriteUint32(o.NumBlocks)
return xw.Tot(), xw.Error()
}
func (o *FileInfoTruncated) DecodeXDR(r io.Reader) error {
xr := xdr.NewReader(r)
return o.decodeXDR(xr)
}
func (o *FileInfoTruncated) UnmarshalXDR(bs []byte) error {
var br = bytes.NewReader(bs)
var xr = xdr.NewReader(br)
return o.decodeXDR(xr)
}
func (o *FileInfoTruncated) decodeXDR(xr *xdr.Reader) error {
o.Name = xr.ReadStringMax(8192)
o.Flags = xr.ReadUint32()
o.Modified = int64(xr.ReadUint64())
o.Version = xr.ReadUint64()
o.LocalVersion = xr.ReadUint64()
o.NumBlocks = xr.ReadUint32()
return xr.Error()
}
/*
BlockInfo Structure:
0 1 2 3
@@ -331,7 +261,7 @@ BlockInfo Structure:
struct BlockInfo {
unsigned int Size;
int Size;
opaque Hash<64>;
}
@@ -362,7 +292,7 @@ func (o BlockInfo) AppendXDR(bs []byte) ([]byte, error) {
}
func (o BlockInfo) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteUint32(o.Size)
xw.WriteUint32(uint32(o.Size))
if l := len(o.Hash); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Hash", l, 64)
}
@@ -382,7 +312,7 @@ func (o *BlockInfo) UnmarshalXDR(bs []byte) error {
}
func (o *BlockInfo) decodeXDR(xr *xdr.Reader) error {
o.Size = xr.ReadUint32()
o.Size = int32(xr.ReadUint32())
o.Hash = xr.ReadBytesMax(64)
return xr.Error()
}
@@ -412,13 +342,30 @@ RequestMessage Structure:
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Hash |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Hash (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Flags |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Number of Options |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/ /
\ Zero or more Option Structures \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct RequestMessage {
string Folder<64>;
string Name<8192>;
unsigned hyper Offset;
unsigned int Size;
hyper Offset;
int Size;
opaque Hash<64>;
unsigned int Flags;
Option Options<64>;
}
*/
@@ -456,8 +403,23 @@ func (o RequestMessage) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("Name", l, 8192)
}
xw.WriteString(o.Name)
xw.WriteUint64(o.Offset)
xw.WriteUint32(o.Size)
xw.WriteUint64(uint64(o.Offset))
xw.WriteUint32(uint32(o.Size))
if l := len(o.Hash); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Hash", l, 64)
}
xw.WriteBytes(o.Hash)
xw.WriteUint32(o.Flags)
if l := len(o.Options); l > 64 {
return xw.Tot(), xdr.ElementSizeExceeded("Options", l, 64)
}
xw.WriteUint32(uint32(len(o.Options)))
for i := range o.Options {
_, err := o.Options[i].encodeXDR(xw)
if err != nil {
return xw.Tot(), err
}
}
return xw.Tot(), xw.Error()
}
@@ -475,8 +437,18 @@ func (o *RequestMessage) UnmarshalXDR(bs []byte) error {
func (o *RequestMessage) decodeXDR(xr *xdr.Reader) error {
o.Folder = xr.ReadStringMax(64)
o.Name = xr.ReadStringMax(8192)
o.Offset = xr.ReadUint64()
o.Size = xr.ReadUint32()
o.Offset = int64(xr.ReadUint64())
o.Size = int32(xr.ReadUint32())
o.Hash = xr.ReadBytesMax(64)
o.Flags = xr.ReadUint32()
_OptionsSize := int(xr.ReadUint32())
if _OptionsSize > 64 {
return xdr.ElementSizeExceeded("Options", _OptionsSize, 64)
}
o.Options = make([]Option, _OptionsSize)
for i := range o.Options {
(&o.Options[i]).decodeXDR(xr)
}
return xr.Error()
}
@@ -493,10 +465,13 @@ ResponseMessage Structure:
\ Data (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Error |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct ResponseMessage {
opaque Data<>;
int Error;
}
*/
@@ -527,6 +502,7 @@ func (o ResponseMessage) AppendXDR(bs []byte) ([]byte, error) {
func (o ResponseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
xw.WriteBytes(o.Data)
xw.WriteUint32(uint32(o.Error))
return xw.Tot(), xw.Error()
}
@@ -543,6 +519,7 @@ func (o *ResponseMessage) UnmarshalXDR(bs []byte) error {
func (o *ResponseMessage) decodeXDR(xr *xdr.Reader) error {
o.Data = xr.ReadBytes()
o.Error = int32(xr.ReadUint32())
return xr.Error()
}
@@ -789,7 +766,7 @@ Device Structure:
struct Device {
opaque ID<32>;
unsigned int Flags;
unsigned hyper MaxLocalVersion;
hyper MaxLocalVersion;
}
*/
@@ -824,7 +801,7 @@ func (o Device) encodeXDR(xw *xdr.Writer) (int, error) {
}
xw.WriteBytes(o.ID)
xw.WriteUint32(o.Flags)
xw.WriteUint64(o.MaxLocalVersion)
xw.WriteUint64(uint64(o.MaxLocalVersion))
return xw.Tot(), xw.Error()
}
@@ -842,7 +819,7 @@ func (o *Device) UnmarshalXDR(bs []byte) error {
func (o *Device) decodeXDR(xr *xdr.Reader) error {
o.ID = xr.ReadBytesMax(32)
o.Flags = xr.ReadUint32()
o.MaxLocalVersion = xr.ReadUint64()
o.MaxLocalVersion = int64(xr.ReadUint64())
return xr.Error()
}
@@ -940,10 +917,13 @@ CloseMessage Structure:
\ Reason (variable length) \
/ /
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Code |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
struct CloseMessage {
string Reason<1024>;
int Code;
}
*/
@@ -977,6 +957,7 @@ func (o CloseMessage) encodeXDR(xw *xdr.Writer) (int, error) {
return xw.Tot(), xdr.ElementSizeExceeded("Reason", l, 1024)
}
xw.WriteString(o.Reason)
xw.WriteUint32(uint32(o.Code))
return xw.Tot(), xw.Error()
}
@@ -993,6 +974,7 @@ func (o *CloseMessage) UnmarshalXDR(bs []byte) error {
func (o *CloseMessage) decodeXDR(xr *xdr.Reader) error {
o.Reason = xr.ReadStringMax(1024)
o.Code = int32(xr.ReadUint32())
return xr.Error()
}
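The pattern running through this generated file is that fields which became signed (Version, LocalVersion, Offset, Size, Error, Code, MaxLocalVersion) are still carried as XDR unsigned integers, with explicit casts on both ends. A hedged sketch of why that round trip is lossless, using only the xdr calls that appear above; roundTripVersion is hypothetical and buffer handling is simplified.
// roundTripVersion writes a signed 64-bit value as an unsigned XDR field and
// reads it back; the two's-complement bits travel unchanged, so -1 survives.
func roundTripVersion(version int64) int64 {
	var buf bytes.Buffer
	xw := xdr.NewWriter(&buf)
	xw.WriteUint64(uint64(version))

	xr := xdr.NewReader(&buf)
	return int64(xr.ReadUint64())
}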

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
// +build darwin

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
// +build !windows,!darwin

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
// +build windows

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -56,6 +43,8 @@ const (
FlagSymlink = 1 << 16
FlagSymlinkMissingTarget = 1 << 17
FlagsAll = (1 << 18) - 1
SymlinkTypeMask = FlagDirectory | FlagSymlinkMissingTarget
)
@@ -71,6 +60,10 @@ var (
ErrClosed = errors.New("connection closed")
)
// Specific variants of empty messages...
type pingMessage struct{ EmptyMessage }
type pongMessage struct{ EmptyMessage }
type Model interface {
// An index was received from the peer device
Index(deviceID DeviceID, folder string, files []FileInfo)
@@ -133,6 +126,10 @@ type encodable interface {
AppendXDR([]byte) ([]byte, error)
}
type isEofer interface {
IsEOF() bool
}
const (
pingTimeout = 30 * time.Second
pingIdleTime = 60 * time.Second
@@ -183,7 +180,10 @@ func (c *rawConnection) Index(folder string, idx []FileInfo) error {
default:
}
c.idxMut.Lock()
c.send(-1, messageTypeIndex, IndexMessage{folder, idx})
c.send(-1, messageTypeIndex, IndexMessage{
Folder: folder,
Files: idx,
})
c.idxMut.Unlock()
return nil
}
@@ -196,7 +196,10 @@ func (c *rawConnection) IndexUpdate(folder string, idx []FileInfo) error {
default:
}
c.idxMut.Lock()
c.send(-1, messageTypeIndexUpdate, IndexMessage{folder, idx})
c.send(-1, messageTypeIndexUpdate, IndexMessage{
Folder: folder,
Files: idx,
})
c.idxMut.Unlock()
return nil
}
@@ -218,7 +221,12 @@ func (c *rawConnection) Request(folder string, name string, offset int64, size i
c.awaiting[id] = rc
c.awaitingMut.Unlock()
ok := c.send(id, messageTypeRequest, RequestMessage{folder, name, uint64(offset), uint32(size)})
ok := c.send(id, messageTypeRequest, RequestMessage{
Folder: folder,
Name: name,
Offset: offset,
Size: int32(size),
})
if !ok {
return nil, ErrClosed
}
@@ -274,48 +282,60 @@ func (c *rawConnection) readerLoop() (err error) {
return err
}
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
switch msg := msg.(type) {
case IndexMessage:
if msg.Flags != 0 {
// We don't currently support or expect any flags.
return fmt.Errorf("protocol error: unknown flags 0x%x in Index(Update) message", msg.Flags)
}
c.handleIndex(msg.(IndexMessage))
c.state = stateIdxRcvd
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
switch hdr.msgType {
case messageTypeIndex:
if c.state < stateCCRcvd {
return fmt.Errorf("protocol error: index message in state %d", c.state)
}
c.handleIndex(msg)
c.state = stateIdxRcvd
case messageTypeIndexUpdate:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: index update message in state %d", c.state)
}
c.handleIndexUpdate(msg)
}
c.handleIndexUpdate(msg.(IndexMessage))
case messageTypeRequest:
case RequestMessage:
if msg.Flags != 0 {
// We don't currently support or expect any flags.
return fmt.Errorf("protocol error: unknown flags 0x%x in Request message", msg.Flags)
}
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: request message in state %d", c.state)
}
// Requests are handled asynchronously
go c.handleRequest(hdr.msgID, msg.(RequestMessage))
go c.handleRequest(hdr.msgID, msg)
case messageTypeResponse:
case ResponseMessage:
if c.state < stateIdxRcvd {
return fmt.Errorf("protocol error: response message in state %d", c.state)
}
c.handleResponse(hdr.msgID, msg.(ResponseMessage))
c.handleResponse(hdr.msgID, msg)
case messageTypePing:
c.send(hdr.msgID, messageTypePong, EmptyMessage{})
case pingMessage:
c.send(hdr.msgID, messageTypePong, pongMessage{})
case messageTypePong:
case pongMessage:
c.handlePong(hdr.msgID)
case messageTypeClusterConfig:
case ClusterConfigMessage:
if c.state != stateInitial {
return fmt.Errorf("protocol error: cluster config message in state %d", c.state)
}
go c.receiver.ClusterConfig(c.id, msg.(ClusterConfigMessage))
go c.receiver.ClusterConfig(c.id, msg)
c.state = stateCCRcvd
case messageTypeClose:
return errors.New(msg.(CloseMessage).Reason)
case CloseMessage:
return errors.New(msg.Reason)
default:
return fmt.Errorf("protocol error: %s: unknown message type %#x", c.id, hdr.msgType)
@@ -341,6 +361,11 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
l.Debugf("read header %v (msglen=%d)", hdr, msglen)
}
if hdr.version != 0 {
err = fmt.Errorf("unknown protocol version 0x%x", hdr.version)
return
}
if cap(c.rdbuf0) < msglen {
c.rdbuf0 = make([]byte, msglen)
} else {
@@ -376,33 +401,58 @@ func (c *rawConnection) readMessage() (hdr header, msg encodable, err error) {
}
}
// We check each returned error for the XDRError.IsEOF() method.
// IsEOF()==true here means that the message contained fewer fields than
// expected. It does not signify an EOF on the socket, because we've
// successfully read a size value and that many bytes already. New fields
// we expected but the other peer didn't send should be interpreted as
// zero/nil, and if that's not valid we'll verify it somewhere else.
switch hdr.msgType {
case messageTypeIndex, messageTypeIndexUpdate:
var idx IndexMessage
err = idx.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = idx
case messageTypeRequest:
var req RequestMessage
err = req.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = req
case messageTypeResponse:
var resp ResponseMessage
err = resp.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = resp
case messageTypePing, messageTypePong:
msg = EmptyMessage{}
case messageTypePing:
msg = pingMessage{}
case messageTypePong:
msg = pongMessage{}
case messageTypeClusterConfig:
var cc ClusterConfigMessage
err = cc.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = cc
case messageTypeClose:
var cm CloseMessage
err = cm.UnmarshalXDR(msgBuf)
if xdrErr, ok := err.(isEofer); ok && xdrErr.IsEOF() {
err = nil
}
msg = cm
default:
@@ -416,20 +466,48 @@ func (c *rawConnection) handleIndex(im IndexMessage) {
if debug {
l.Debugf("Index(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
}
c.receiver.Index(c.id, im.Folder, im.Files)
c.receiver.Index(c.id, im.Folder, filterIndexMessageFiles(im.Files))
}
func (c *rawConnection) handleIndexUpdate(im IndexMessage) {
if debug {
l.Debugf("queueing IndexUpdate(%v, %v, %d files)", c.id, im.Folder, len(im.Files))
}
c.receiver.IndexUpdate(c.id, im.Folder, im.Files)
c.receiver.IndexUpdate(c.id, im.Folder, filterIndexMessageFiles(im.Files))
}
func filterIndexMessageFiles(fs []FileInfo) []FileInfo {
var out []FileInfo
for i, f := range fs {
switch f.Name {
case "", ".", "..", "/": // A few obviously invalid filenames
l.Infof("Dropping invalid filename %q from incoming index", f.Name)
if out == nil {
// Most incoming updates won't contain anything invalid, so we
// delay the allocation and copy to output slice until we
// really need to do it, then copy all the so far valid files
// to it.
out = make([]FileInfo, i, len(fs)-1)
copy(out, fs)
}
default:
if out != nil {
out = append(out, f)
}
}
}
if out != nil {
return out
}
return fs
}
func (c *rawConnection) handleRequest(msgID int, req RequestMessage) {
data, _ := c.receiver.Request(c.id, req.Folder, req.Name, int64(req.Offset), int(req.Size))
c.send(msgID, messageTypeResponse, ResponseMessage{data})
c.send(msgID, messageTypeResponse, ResponseMessage{
Data: data,
})
}
func (c *rawConnection) handleResponse(msgID int, resp ResponseMessage) {
@@ -630,8 +708,8 @@ func (c *rawConnection) pingerLoop() {
type Statistics struct {
At time.Time
InBytesTotal uint64
OutBytesTotal uint64
InBytesTotal int64
OutBytesTotal int64
}
func (c *rawConnection) Statistics() Statistics {

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol
@@ -189,7 +176,7 @@ func TestVersionErr(t *testing.T) {
msgID: 0,
msgType: 0,
}))
w.WriteUint32(0)
w.WriteUint32(0) // Avoids reader closing due to EOF
if !m1.isClosed() {
t.Error("Connection should close due to unknown version")
@@ -212,7 +199,7 @@ func TestTypeErr(t *testing.T) {
msgID: 0,
msgType: 42,
}))
w.WriteUint32(0)
w.WriteUint32(0) // Avoids reader closing due to EOF
if !m1.isClosed() {
t.Error("Connection should close due to unknown message type")

View File

@@ -1,17 +1,4 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// Copyright (C) 2014 The Protocol Authors.
package protocol

View File

@@ -8,152 +8,669 @@
package cache
import (
"sync"
"sync/atomic"
"unsafe"
"github.com/syndtr/goleveldb/leveldb/util"
)
// SetFunc is the function that will be called by Namespace.Get to create
// a cache object, if charge is less than one than the cache object will
// not be registered to cache tree, if value is nil then the cache object
// will not be created.
type SetFunc func() (charge int, value interface{})
// DelFin is the function that will be called as the result of a delete operation.
// Exist == true is indication that the object is exist, and pending == true is
// indication of deletion already happen but haven't done yet (wait for all handles
// to be released). And exist == false means the object doesn't exist.
type DelFin func(exist, pending bool)
// PurgeFin is the function that will be called as the result of a purge operation.
type PurgeFin func(ns, key uint64)
// Cache is a cache tree. A cache instance must be goroutine-safe.
type Cache interface {
// SetCapacity sets cache tree capacity.
SetCapacity(capacity int)
// Capacity returns cache tree capacity.
// Cacher provides an interface for implementing caching functionality.
// An implementation must be goroutine-safe.
type Cacher interface {
// Capacity returns cache capacity.
Capacity() int
// Used returns used cache tree capacity.
Used() int
// SetCapacity sets cache capacity.
SetCapacity(capacity int)
// Size returns entire alive cache objects size.
Size() int
// Promote promotes the 'cache node'.
Promote(n *Node)
// NumObjects returns number of alive objects.
NumObjects() int
// Ban evicts the 'cache node' and prevent subsequent 'promote'.
Ban(n *Node)
// GetNamespace gets cache namespace with the given id.
// GetNamespace is never return nil.
GetNamespace(id uint64) Namespace
// Evict evicts the 'cache node'.
Evict(n *Node)
// PurgeNamespace purges cache namespace with the given id from this cache tree.
// Also read Namespace.Purge.
PurgeNamespace(id uint64, fin PurgeFin)
// EvictNS evicts 'cache node' with the given namespace.
EvictNS(ns uint64)
// ZapNamespace detaches cache namespace with the given id from this cache tree.
// Also read Namespace.Zap.
ZapNamespace(id uint64)
// EvictAll evicts all 'cache node'.
EvictAll()
// Purge purges all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Purge method on all cache namespace.
Purge(fin PurgeFin)
// Zap detaches all cache namespace from this cache tree.
// This is behave the same as calling Namespace.Zap method on all cache namespace.
Zap()
// Close closes the 'cache tree'
Close() error
}
// Namespace is a cache namespace. A namespace instance must be goroutine-safe.
type Namespace interface {
// Get gets cache object with the given key.
// If cache object is not found and setf is not nil, Get will atomically creates
// the cache object by calling setf. Otherwise Get will returns nil.
//
// The returned cache handle should be released after use by calling Release
// method.
Get(key uint64, setf SetFunc) Handle
// Value is a 'cacheable object'. It may implement util.Releaser; if so,
// the Release method will be called once the object is released.
type Value interface{}
// Delete removes cache object with the given key from cache tree.
// A deleted cache object will be released as soon as all of its handles have
// been released.
// Delete only happen once, subsequent delete will consider cache object doesn't
// exist, even if the cache object ins't released yet.
//
// If not nil, fin will be called if the cache object doesn't exist or when
// finally be released.
//
// Delete returns true if such cache object exist and never been deleted.
Delete(key uint64, fin DelFin) bool
// Purge removes all cache objects within this namespace from cache tree.
// This is the same as doing delete on all cache objects.
//
// If not nil, fin will be called on all cache objects when its finally be
// released.
Purge(fin PurgeFin)
// Zap detaches namespace from cache tree and release all its cache objects.
// A zapped namespace can never be filled again.
// Calling Get on zapped namespace will always return nil.
Zap()
type CacheGetter struct {
Cache *Cache
NS uint64
}
// Handle is a cache handle.
type Handle interface {
// Release releases this cache handle. This method can be safely called mutiple
// times.
Release()
// Value returns value of this cache handle.
// Value will returns nil after this cache handle have be released.
Value() interface{}
func (g *CacheGetter) Get(key uint64, setFunc func() (size int, value Value)) *Handle {
return g.Cache.Get(g.NS, key, setFunc)
}
// The hash tables implementation is based on:
// "Dynamic-Sized Nonblocking Hash Tables", by Yujie Liu, Kunlong Zhang, and Michael Spear. ACM Symposium on Principles of Distributed Computing, Jul 2014.
const (
DelNotExist = iota
DelExist
DelPendig
mInitialSize = 1 << 4
mOverflowThreshold = 1 << 5
mOverflowGrowThreshold = 1 << 7
)
// Namespace state.
type nsState int
const (
nsEffective nsState = iota
nsZapped
)
// Node state.
type nodeState int
const (
nodeZero nodeState = iota
nodeEffective
nodeEvicted
nodeDeleted
)
// Fake handle.
type fakeHandle struct {
value interface{}
fin func()
once uint32
type mBucket struct {
mu sync.Mutex
node []*Node
frozen bool
}
func (h *fakeHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.value
func (b *mBucket) freeze() []*Node {
b.mu.Lock()
defer b.mu.Unlock()
if !b.frozen {
b.frozen = true
}
return b.node
}
func (b *mBucket) get(r *Cache, h *mNode, hash uint32, ns, key uint64, noset bool) (done, added bool, n *Node) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
for _, n := range b.node {
if n.hash == hash && n.ns == ns && n.key == key {
atomic.AddInt32(&n.ref, 1)
b.mu.Unlock()
return true, false, n
}
}
// Get only.
if noset {
b.mu.Unlock()
return true, false, nil
}
// Create node.
n = &Node{
r: r,
hash: hash,
ns: ns,
key: key,
ref: 1,
}
// Add node to bucket.
b.node = append(b.node, n)
bLen := len(b.node)
b.mu.Unlock()
// Update counter.
grow := atomic.AddInt32(&r.nodes, 1) >= h.growThreshold
if bLen > mOverflowThreshold {
grow = grow || atomic.AddInt32(&h.overflow, 1) >= mOverflowGrowThreshold
}
// Grow.
if grow && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) << 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
return true, true, n
}
func (b *mBucket) delete(r *Cache, h *mNode, hash uint32, ns, key uint64) (done, deleted bool) {
b.mu.Lock()
if b.frozen {
b.mu.Unlock()
return
}
// Scan the node.
var (
n *Node
bLen int
)
for i := range b.node {
n = b.node[i]
if n.ns == ns && n.key == key {
if atomic.LoadInt32(&n.ref) == 0 {
deleted = true
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Remove node from bucket.
b.node = append(b.node[:i], b.node[i+1:]...)
bLen = len(b.node)
}
break
}
}
b.mu.Unlock()
if deleted {
// Call OnDel.
for _, f := range n.onDel {
f()
}
// Update counter.
atomic.AddInt32(&r.size, int32(n.size)*-1)
shrink := atomic.AddInt32(&r.nodes, -1) < h.shrinkThreshold
if bLen >= mOverflowThreshold {
atomic.AddInt32(&h.overflow, -1)
}
// Shrink.
if shrink && len(h.buckets) > mInitialSize && atomic.CompareAndSwapInt32(&h.resizeInProgess, 0, 1) {
nhLen := len(h.buckets) >> 1
nh := &mNode{
buckets: make([]unsafe.Pointer, nhLen),
mask: uint32(nhLen) - 1,
pred: unsafe.Pointer(h),
growThreshold: int32(nhLen * mOverflowThreshold),
shrinkThreshold: int32(nhLen >> 1),
}
ok := atomic.CompareAndSwapPointer(&r.mHead, unsafe.Pointer(h), unsafe.Pointer(nh))
if !ok {
panic("BUG: failed swapping head")
}
go nh.initBuckets()
}
}
return true, deleted
}
type mNode struct {
buckets []unsafe.Pointer // []*mBucket
mask uint32
pred unsafe.Pointer // *mNode
resizeInProgess int32
overflow int32
growThreshold int32
shrinkThreshold int32
}
func (n *mNode) initBucket(i uint32) *mBucket {
if b := (*mBucket)(atomic.LoadPointer(&n.buckets[i])); b != nil {
return b
}
p := (*mNode)(atomic.LoadPointer(&n.pred))
if p != nil {
var node []*Node
if n.mask > p.mask {
// Grow.
pb := (*mBucket)(atomic.LoadPointer(&p.buckets[i&p.mask]))
if pb == nil {
pb = p.initBucket(i & p.mask)
}
m := pb.freeze()
// Split nodes.
for _, x := range m {
if x.hash&n.mask == i {
node = append(node, x)
}
}
} else {
// Shrink.
pb0 := (*mBucket)(atomic.LoadPointer(&p.buckets[i]))
if pb0 == nil {
pb0 = p.initBucket(i)
}
pb1 := (*mBucket)(atomic.LoadPointer(&p.buckets[i+uint32(len(n.buckets))]))
if pb1 == nil {
pb1 = p.initBucket(i + uint32(len(n.buckets)))
}
m0 := pb0.freeze()
m1 := pb1.freeze()
// Merge nodes.
node = make([]*Node, 0, len(m0)+len(m1))
node = append(node, m0...)
node = append(node, m1...)
}
b := &mBucket{node: node}
if atomic.CompareAndSwapPointer(&n.buckets[i], nil, unsafe.Pointer(b)) {
if len(node) > mOverflowThreshold {
atomic.AddInt32(&n.overflow, int32(len(node)-mOverflowThreshold))
}
return b
}
}
return (*mBucket)(atomic.LoadPointer(&n.buckets[i]))
}
func (n *mNode) initBuckets() {
for i := range n.buckets {
n.initBucket(uint32(i))
}
atomic.StorePointer(&n.pred, nil)
}
// Cache is a 'cache map'.
type Cache struct {
mu sync.RWMutex
mHead unsafe.Pointer // *mNode
nodes int32
size int32
cacher Cacher
closed bool
}
// NewCache creates a new 'cache map'. The cacher is optional and
// may be nil.
func NewCache(cacher Cacher) *Cache {
h := &mNode{
buckets: make([]unsafe.Pointer, mInitialSize),
mask: mInitialSize - 1,
growThreshold: int32(mInitialSize * mOverflowThreshold),
shrinkThreshold: 0,
}
for i := range h.buckets {
h.buckets[i] = unsafe.Pointer(&mBucket{})
}
r := &Cache{
mHead: unsafe.Pointer(h),
cacher: cacher,
}
return r
}
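// Illustrative sketch, not part of the upstream diff: the cacher argument is
// the eviction policy hook. With nil nothing is evicted automatically and a
// node only disappears once its last reference is released; with NewLRU the
// cache evicts once the summed node sizes exceed the capacity. The capacity
// value is hypothetical.
func exampleNewCache() {
	unbounded := NewCache(nil)
	bounded := NewCache(NewLRU(16 << 20))
	_, _ = unbounded, bounded
}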
func (r *Cache) getBucket(hash uint32) (*mNode, *mBucket) {
h := (*mNode)(atomic.LoadPointer(&r.mHead))
i := hash & h.mask
b := (*mBucket)(atomic.LoadPointer(&h.buckets[i]))
if b == nil {
b = h.initBucket(i)
}
return h, b
}
func (r *Cache) delete(n *Node) bool {
for {
h, b := r.getBucket(n.hash)
done, deleted := b.delete(r, h, n.hash, n.ns, n.key)
if done {
return deleted
}
}
return false
}
// Nodes returns the number of 'cache nodes' in the map.
func (r *Cache) Nodes() int {
return int(atomic.LoadInt32(&r.nodes))
}
// Size returns the sum of the 'cache node' sizes in the map.
func (r *Cache) Size() int {
return int(atomic.LoadInt32(&r.size))
}
// Capacity returns cache capacity.
func (r *Cache) Capacity() int {
if r.cacher == nil {
return 0
}
return r.cacher.Capacity()
}
// SetCapacity sets cache capacity.
func (r *Cache) SetCapacity(capacity int) {
if r.cacher != nil {
r.cacher.SetCapacity(capacity)
}
}
// Get gets 'cache node' with the given namespace and key.
// If the 'cache node' is not found and setFunc is not nil, Get will atomically create
// the 'cache node' by calling setFunc. Otherwise Get returns nil.
//
// The returned 'cache handle' should be released after use by calling Release
// method.
func (r *Cache) Get(ns, key uint64, setFunc func() (size int, value Value)) *Handle {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return nil
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, setFunc == nil)
if done {
if n != nil {
n.mu.Lock()
if n.value == nil {
if setFunc == nil {
n.mu.Unlock()
n.unref()
return nil
}
n.size, n.value = setFunc()
if n.value == nil {
n.size = 0
n.mu.Unlock()
n.unref()
return nil
}
atomic.AddInt32(&r.size, int32(n.size))
}
n.mu.Unlock()
if r.cacher != nil {
r.cacher.Promote(n)
}
return &Handle{unsafe.Pointer(n)}
}
break
}
}
return nil
}
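// Illustrative sketch, not part of the upstream diff: the usual Get pattern.
// With a non-nil setFunc the value is built once under the node lock; a nil
// value from setFunc makes Get return a nil handle. With a nil setFunc, Get is
// lookup-only. The namespace, keys and payloads are hypothetical.
func exampleGet(c *Cache) {
	h := c.Get(0, 7, func() (size int, value Value) {
		return 1, "seven"
	})
	if h != nil {
		_ = h.Value()
		h.Release() // every non-nil handle must be released
	}
	if h := c.Get(0, 8, nil); h != nil { // lookup-only: a miss creates no node
		h.Release()
	}
}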
func (h *fakeHandle) Release() {
	if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
		return
	}
	if h.fin != nil {
		h.fin()
		h.fin = nil
	}
}
// Delete removes and bans 'cache node' with the given namespace and key.
// A banned 'cache node' will never be inserted into the 'cache tree'. Ban
// only applies to the particular 'cache node', so when a 'cache node'
// is recreated it will not be banned.
//
// If onDel is not nil, then it will be executed if such 'cache node'
// doesn't exist or once the 'cache node' is released.
//
// Delete returns true if such 'cache node' exists.
func (r *Cache) Delete(ns, key uint64, onDel func()) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if onDel != nil {
n.mu.Lock()
n.onDel = append(n.onDel, onDel)
n.mu.Unlock()
}
if r.cacher != nil {
r.cacher.Ban(n)
}
n.unref()
return true
}
break
}
}
if onDel != nil {
onDel()
}
return false
}
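// Illustrative sketch, not part of the upstream diff: Delete bans the node so
// the cacher drops it, but the value stays readable through handles that are
// already held; onDel runs once the last reference is gone (or immediately if
// the node does not exist). Assumes no other goroutine holds references.
func exampleDelete(c *Cache) {
	h := c.Get(0, 9, func() (size int, value Value) { return 1, "nine" })
	c.Delete(0, 9, func() {
		// invoked after the final Release below
	})
	if h != nil {
		_ = h.Value() // still valid; the ban only prevents re-caching
		h.Release()
	}
}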
// Evict evicts 'cache node' with the given namespace and key. This will
// simply call Cacher.Evict.
//
// Evict returns true if such 'cache node' exists.
func (r *Cache) Evict(ns, key uint64) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return false
}
hash := murmur32(ns, key, 0xf00)
for {
h, b := r.getBucket(hash)
done, _, n := b.get(r, h, hash, ns, key, true)
if done {
if n != nil {
if r.cacher != nil {
r.cacher.Evict(n)
}
n.unref()
return true
}
break
}
}
return false
}
// EvictNS evicts all 'cache nodes' within the given namespace. This will
// simply call Cacher.EvictNS.
func (r *Cache) EvictNS(ns uint64) {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if r.cacher != nil {
r.cacher.EvictNS(ns)
}
}
// EvictAll evicts all 'cache nodes'. This will simply call Cacher.EvictAll.
func (r *Cache) EvictAll() {
r.mu.RLock()
defer r.mu.RUnlock()
if r.closed {
return
}
if r.cacher != nil {
r.cacher.EvictAll()
}
}
// Close closes the 'cache map' and releases all 'cache nodes'.
func (r *Cache) Close() error {
r.mu.Lock()
if !r.closed {
r.closed = true
if r.cacher != nil {
if err := r.cacher.Close(); err != nil {
return err
}
}
h := (*mNode)(r.mHead)
h.initBuckets()
for i := range h.buckets {
b := (*mBucket)(h.buckets[i])
for _, n := range b.node {
// Call releaser.
if n.value != nil {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
n.value = nil
}
// Call OnDel.
for _, f := range n.onDel {
f()
}
}
}
}
r.mu.Unlock()
return nil
}
// Node is a 'cache node'.
type Node struct {
r *Cache
hash uint32
ns, key uint64
mu sync.Mutex
size int
value Value
ref int32
onDel []func()
CacheData unsafe.Pointer
}
// NS returns this 'cache node' namespace.
func (n *Node) NS() uint64 {
return n.ns
}
// Key returns this 'cache node' key.
func (n *Node) Key() uint64 {
return n.key
}
// Size returns this 'cache node' size.
func (n *Node) Size() int {
return n.size
}
// Value returns this 'cache node' value.
func (n *Node) Value() Value {
return n.value
}
// Ref returns this 'cache node' ref counter.
func (n *Node) Ref() int32 {
return atomic.LoadInt32(&n.ref)
}
// GetHandle returns a handle for this 'cache node'.
func (n *Node) GetHandle() *Handle {
if atomic.AddInt32(&n.ref, 1) <= 1 {
panic("BUG: Node.GetHandle on zero ref")
}
return &Handle{unsafe.Pointer(n)}
}
func (n *Node) unref() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.delete(n)
}
}
func (n *Node) unrefLocked() {
if atomic.AddInt32(&n.ref, -1) == 0 {
n.r.mu.RLock()
if !n.r.closed {
n.r.delete(n)
}
n.r.mu.RUnlock()
}
}
type Handle struct {
n unsafe.Pointer // *Node
}
func (h *Handle) Value() Value {
n := (*Node)(atomic.LoadPointer(&h.n))
if n != nil {
return n.value
}
return nil
}
func (h *Handle) Release() {
nPtr := atomic.LoadPointer(&h.n)
if nPtr != nil && atomic.CompareAndSwapPointer(&h.n, nPtr, nil) {
n := (*Node)(nPtr)
n.unrefLocked()
}
}
func murmur32(ns, key uint64, seed uint32) uint32 {
const (
m = uint32(0x5bd1e995)
r = 24
)
k1 := uint32(ns >> 32)
k2 := uint32(ns)
k3 := uint32(key >> 32)
k4 := uint32(key)
k1 *= m
k1 ^= k1 >> r
k1 *= m
k2 *= m
k2 ^= k2 >> r
k2 *= m
k3 *= m
k3 ^= k3 >> r
k3 *= m
k4 *= m
k4 ^= k4 >> r
k4 *= m
h := seed
h *= m
h ^= k1
h *= m
h ^= k2
h *= m
h ^= k3
h *= m
h ^= k4
h ^= h >> 13
h *= m
h ^= h >> 15
return h
}
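// Illustrative sketch, not part of the upstream diff: getBucket masks the
// murmur32 hash with len(buckets)-1, so only the low bits pick a bucket and
// a grow doubles the number of usable bits. 0xf00 is the seed used by Get,
// Delete and Evict above; nBuckets is assumed to be a power of two.
func exampleBucketIndex(ns, key uint64, nBuckets uint32) uint32 {
	return murmur32(ns, key, 0xf00) & (nBuckets - 1)
}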

View File

@@ -13,11 +13,26 @@ import (
"sync/atomic"
"testing"
"time"
"unsafe"
)
type int32o int32
func (o *int32o) acquire() {
if atomic.AddInt32((*int32)(o), 1) != 1 {
panic("BUG: invalid ref")
}
}
func (o *int32o) Release() {
if atomic.AddInt32((*int32)(o), -1) != 0 {
panic("BUG: invalid ref")
}
}
type releaserFunc struct {
fn func()
value interface{}
value Value
}
func (r releaserFunc) Release() {
@@ -26,8 +41,8 @@ func (r releaserFunc) Release() {
}
}
func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) Handle {
return ns.Get(key, func() (int, interface{}) {
func set(c *Cache, ns, key uint64, value Value, charge int, relf func()) *Handle {
return c.Get(ns, key, func() (int, Value) {
if relf != nil {
return charge, releaserFunc{relf, value}
} else {
@@ -36,7 +51,246 @@ func set(ns Namespace, key uint64, value interface{}, charge int, relf func()) H
})
}
func TestCache_HitMiss(t *testing.T) {
func TestCacheMap(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
nsx := []struct {
nobjects, nhandles, concurrent, repeat int
}{
{10000, 400, 50, 3},
{100000, 1000, 100, 10},
}
var (
objects [][]int32o
handles [][]unsafe.Pointer
)
for _, x := range nsx {
objects = append(objects, make([]int32o, x.nobjects))
handles = append(handles, make([]unsafe.Pointer, x.nhandles))
}
c := NewCache(nil)
wg := new(sync.WaitGroup)
var done int32
for ns, x := range nsx {
for i := 0; i < x.concurrent; i++ {
wg.Add(1)
go func(ns, i, repeat int, objects []int32o, handles []unsafe.Pointer) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for j := len(objects) * repeat; j >= 0; j-- {
key := uint64(r.Intn(len(objects)))
h := c.Get(uint64(ns), key, func() (int, Value) {
o := &objects[key]
o.acquire()
return 1, o
})
if v := h.Value().(*int32o); v != &objects[key] {
t.Fatalf("#%d invalid value: want=%p got=%p", ns, &objects[key], v)
}
if objects[key] != 1 {
t.Fatalf("#%d invalid object %d: %d", ns, key, objects[key])
}
if !atomic.CompareAndSwapPointer(&handles[r.Intn(len(handles))], nil, unsafe.Pointer(h)) {
h.Release()
}
}
}(ns, i, x.repeat, objects[ns], handles[ns])
}
go func(handles []unsafe.Pointer) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for atomic.LoadInt32(&done) == 0 {
i := r.Intn(len(handles))
h := (*Handle)(atomic.LoadPointer(&handles[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles[i], unsafe.Pointer(h), nil) {
h.Release()
}
time.Sleep(time.Millisecond)
}
}(handles[ns])
}
go func() {
handles := make([]*Handle, 100000)
for atomic.LoadInt32(&done) == 0 {
for i := range handles {
handles[i] = c.Get(999999999, uint64(i), func() (int, Value) {
return 1, 1
})
}
for _, h := range handles {
h.Release()
}
}
}()
wg.Wait()
atomic.StoreInt32(&done, 1)
for _, handles0 := range handles {
for i := range handles0 {
h := (*Handle)(atomic.LoadPointer(&handles0[i]))
if h != nil && atomic.CompareAndSwapPointer(&handles0[i], unsafe.Pointer(h), nil) {
h.Release()
}
}
}
for ns, objects0 := range objects {
for i, o := range objects0 {
if o != 0 {
t.Fatalf("invalid object #%d.%d: ref=%d", ns, i, o)
}
}
}
}
func TestCacheMap_NodesAndSize(t *testing.T) {
c := NewCache(nil)
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 2, nil)
set(c, 1, 1, 3, 3, nil)
set(c, 2, 1, 4, 1, nil)
if c.Nodes() != 4 {
t.Errorf("invalid nodes counter: want=%d got=%d", 4, c.Nodes())
}
if c.Size() != 7 {
t.Errorf("invalid size counter: want=%d got=%d", 4, c.Size())
}
}
func TestLRUCache_Capacity(t *testing.T) {
c := NewCache(NewLRU(10))
if c.Capacity() != 10 {
t.Errorf("invalid capacity: want=%d got=%d", 10, c.Capacity())
}
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 2, nil).Release()
set(c, 1, 1, 3, 3, nil).Release()
set(c, 2, 1, 4, 1, nil).Release()
set(c, 2, 2, 5, 1, nil).Release()
set(c, 2, 3, 6, 1, nil).Release()
set(c, 2, 4, 7, 1, nil).Release()
set(c, 2, 5, 8, 1, nil).Release()
if c.Nodes() != 7 {
t.Errorf("invalid nodes counter: want=%d got=%d", 7, c.Nodes())
}
if c.Size() != 10 {
t.Errorf("invalid size counter: want=%d got=%d", 10, c.Size())
}
c.SetCapacity(9)
if c.Capacity() != 9 {
t.Errorf("invalid capacity: want=%d got=%d", 9, c.Capacity())
}
if c.Nodes() != 6 {
t.Errorf("invalid nodes counter: want=%d got=%d", 6, c.Nodes())
}
if c.Size() != 8 {
t.Errorf("invalid size counter: want=%d got=%d", 8, c.Size())
}
}
func TestCacheMap_NilValue(t *testing.T) {
c := NewCache(NewLRU(10))
h := c.Get(0, 0, func() (size int, value Value) {
return 1, nil
})
if h != nil {
t.Error("cache handle is non-nil")
}
if c.Nodes() != 0 {
t.Errorf("invalid nodes counter: want=%d got=%d", 0, c.Nodes())
}
if c.Size() != 0 {
t.Errorf("invalid size counter: want=%d got=%d", 0, c.Size())
}
}
func TestLRUCache_GetLatency(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
const (
concurrentSet = 30
concurrentGet = 3
duration = 3 * time.Second
delay = 3 * time.Millisecond
maxkey = 100000
)
var (
set, getHit, getAll int32
getMaxLatency, getDuration int64
)
c := NewCache(NewLRU(5000))
wg := &sync.WaitGroup{}
until := time.Now().Add(duration)
for i := 0; i < concurrentSet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for time.Now().Before(until) {
c.Get(0, uint64(r.Intn(maxkey)), func() (int, Value) {
time.Sleep(delay)
atomic.AddInt32(&set, 1)
return 1, 1
}).Release()
}
}(i)
}
for i := 0; i < concurrentGet; i++ {
wg.Add(1)
go func(i int) {
defer wg.Done()
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for {
mark := time.Now()
if mark.Before(until) {
h := c.Get(0, uint64(r.Intn(maxkey)), nil)
latency := int64(time.Now().Sub(mark))
m := atomic.LoadInt64(&getMaxLatency)
if latency > m {
atomic.CompareAndSwapInt64(&getMaxLatency, m, latency)
}
atomic.AddInt64(&getDuration, latency)
if h != nil {
atomic.AddInt32(&getHit, 1)
h.Release()
}
atomic.AddInt32(&getAll, 1)
} else {
break
}
}
}(i)
}
wg.Wait()
getAvglatency := time.Duration(getDuration) / time.Duration(getAll)
t.Logf("set=%d getHit=%d getAll=%d getMaxLatency=%v getAvgLatency=%v",
set, getHit, getAll, time.Duration(getMaxLatency), getAvglatency)
if getAvglatency > delay/3 {
t.Errorf("get avg latency > %v: got=%v", delay/3, getAvglatency)
}
}
func TestLRUCache_HitMiss(t *testing.T) {
cases := []struct {
key uint64
value string
@@ -54,14 +308,13 @@ func TestCache_HitMiss(t *testing.T) {
}
setfin := 0
c := NewLRUCache(1000)
ns := c.GetNamespace(0)
c := NewCache(NewLRU(1000))
for i, x := range cases {
set(ns, x.key, x.value, len(x.value), func() {
set(c, 0, x.key, x.value, len(x.value), func() {
setfin++
}).Release()
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j <= i {
// should hit
if h == nil {
@@ -85,7 +338,7 @@ func TestCache_HitMiss(t *testing.T) {
for i, x := range cases {
finalizerOk := false
ns.Delete(x.key, func(exist, pending bool) {
c.Delete(0, x.key, func() {
finalizerOk = true
})
@@ -94,7 +347,7 @@ func TestCache_HitMiss(t *testing.T) {
}
for j, y := range cases {
h := ns.Get(y.key, nil)
h := c.Get(0, y.key, nil)
if j > i {
// should hit
if h == nil {
@@ -122,20 +375,19 @@ func TestCache_HitMiss(t *testing.T) {
}
func TestLRUCache_Eviction(t *testing.T) {
c := NewLRUCache(12)
ns := c.GetNamespace(0)
o1 := set(ns, 1, 1, 1, nil)
set(ns, 2, 2, 1, nil).Release()
set(ns, 3, 3, 1, nil).Release()
set(ns, 4, 4, 1, nil).Release()
set(ns, 5, 5, 1, nil).Release()
if h := ns.Get(2, nil); h != nil { // 1,3,4,5,2
c := NewCache(NewLRU(12))
o1 := set(c, 0, 1, 1, 1, nil)
set(c, 0, 2, 2, 1, nil).Release()
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
set(c, 0, 5, 5, 1, nil).Release()
if h := c.Get(0, 2, nil); h != nil { // 1,3,4,5,2
h.Release()
}
set(ns, 9, 9, 10, nil).Release() // 5,2,9
set(c, 0, 9, 9, 10, nil).Release() // 5,2,9
for _, key := range []uint64{9, 2, 5, 1} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -147,7 +399,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
o1.Release()
for _, key := range []uint64{1, 2, 5} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
@@ -158,7 +410,7 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
for _, key := range []uint64{3, 4, 9} {
h := ns.Get(key, nil)
h := c.Get(0, key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
@@ -169,487 +421,150 @@ func TestLRUCache_Eviction(t *testing.T) {
}
}
func TestLRUCache_SetGet(t *testing.T) {
c := NewLRUCache(13)
ns := c.GetNamespace(0)
for i := 0; i < 200; i++ {
n := uint64(rand.Intn(99999) % 20)
set(ns, n, n, 1, nil).Release()
if h := ns.Get(n, nil); h != nil {
if h.Value() == nil {
t.Errorf("key '%d' contains nil value", n)
func TestLRUCache_Evict(t *testing.T) {
c := NewCache(NewLRU(6))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
set(c, 1, 1, 4, 1, nil).Release()
set(c, 1, 2, 5, 1, nil).Release()
set(c, 2, 1, 6, 1, nil).Release()
set(c, 2, 2, 7, 1, nil).Release()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
h.Release()
} else {
if x := h.Value().(uint64); x != n {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", n, n, x)
}
t.Errorf("Cache.Get on #%d.%d return nil", ns, key)
}
}
}
if ok := c.Evict(0, 1); !ok {
t.Error("first Cache.Evict on #0.1 return false")
}
if ok := c.Evict(0, 1); ok {
t.Error("second Cache.Evict on #0.1 return true")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #0.1 return non-nil: %v", h.Value())
}
c.EvictNS(1)
if h := c.Get(1, 1, nil); h != nil {
t.Errorf("Cache.Get on #1.1 return non-nil: %v", h.Value())
}
if h := c.Get(1, 2, nil); h != nil {
t.Errorf("Cache.Get on #1.2 return non-nil: %v", h.Value())
}
c.EvictAll()
for ns := 0; ns < 3; ns++ {
for key := 1; key < 3; key++ {
if h := c.Get(uint64(ns), uint64(key), nil); h != nil {
t.Errorf("Cache.Get on #%d.%d return non-nil: %v", ns, key, h.Value())
}
}
}
}
func TestLRUCache_Delete(t *testing.T) {
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, nil).Release()
set(c, 0, 2, 2, 1, nil).Release()
if ok := c.Delete(0, 1, delFunc); !ok {
t.Error("Cache.Delete on #1 return false")
}
if h := c.Get(0, 1, nil); h != nil {
t.Errorf("Cache.Get on #1 return non-nil: %v", h.Value())
}
if ok := c.Delete(0, 1, delFunc); ok {
t.Error("Cache.Delete on #1 return true")
}
h2 := c.Get(0, 2, nil)
if h2 == nil {
t.Error("Cache.Get on #2 return nil")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(1) Cache.Delete on #2 return false")
}
if ok := c.Delete(0, 2, delFunc); !ok {
t.Error("(2) Cache.Delete on #2 return false")
}
set(c, 0, 3, 3, 1, nil).Release()
set(c, 0, 4, 4, 1, nil).Release()
c.Get(0, 2, nil).Release()
for key := 2; key <= 4; key++ {
if h := c.Get(0, uint64(key), nil); h != nil {
h.Release()
} else {
t.Errorf("key '%d' doesn't exist", n)
}
}
}
func TestLRUCache_Purge(t *testing.T) {
c := NewLRUCache(3)
ns1 := c.GetNamespace(0)
o1 := set(ns1, 1, 1, 1, nil)
o2 := set(ns1, 2, 2, 1, nil)
ns1.Purge(nil)
set(ns1, 3, 3, 1, nil).Release()
for _, key := range []uint64{1, 2, 3} {
h := ns1.Get(key, nil)
if h == nil {
t.Errorf("miss for key '%d'", key)
} else {
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
o1.Release()
o2.Release()
for _, key := range []uint64{1, 2} {
h := ns1.Get(key, nil)
if h != nil {
t.Errorf("hit for key '%d'", key)
if x := h.Value().(int); x != int(key) {
t.Errorf("invalid value for key '%d' want '%d', got '%d'", key, key, x)
}
h.Release()
}
}
}
type testingCacheObjectCounter struct {
created uint
released uint
}
func (c *testingCacheObjectCounter) createOne() {
c.created++
}
func (c *testingCacheObjectCounter) releaseOne() {
c.released++
}
type testingCacheObject struct {
t *testing.T
cnt *testingCacheObjectCounter
ns, key uint64
releaseCalled bool
}
func (x *testingCacheObject) Release() {
if !x.releaseCalled {
x.releaseCalled = true
x.cnt.releaseOne()
} else {
x.t.Errorf("duplicate setfin NS#%d KEY#%d", x.ns, x.key)
}
}
func TestLRUCache_ConcurrentSetGet(t *testing.T) {
runtime.GOMAXPROCS(runtime.NumCPU())
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
const (
N = 2000000
M = 4000
C = 3
)
var set, get uint32
wg := &sync.WaitGroup{}
c := NewLRUCache(M / 4)
for ni := uint64(0); ni < C; ni++ {
r0 := rand.New(rand.NewSource(seed + int64(ni)))
r1 := rand.New(rand.NewSource(seed + int64(ni) + 1))
ns := c.GetNamespace(ni)
wg.Add(2)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, func() (int, interface{}) {
atomic.AddUint32(&set, 1)
return 1, x
})
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
wg.Done()
}(ns, r0)
go func(ns Namespace, r *rand.Rand) {
for i := 0; i < N; i++ {
x := uint64(r.Int63n(M))
o := ns.Get(x, nil)
if o != nil {
atomic.AddUint32(&get, 1)
if v := o.Value().(uint64); v != x {
t.Errorf("#%d invalid value, got=%d", x, v)
}
o.Release()
}
}
wg.Done()
}(ns, r1)
}
wg.Wait()
t.Logf("set=%d get=%d", set, get)
}
func TestLRUCache_Finalizer(t *testing.T) {
const (
capacity = 100
goroutines = 100
iterations = 10000
keymax = 8000
)
cnt := &testingCacheObjectCounter{}
c := NewLRUCache(capacity)
type instance struct {
seed int64
rnd *rand.Rand
nsid uint64
ns Namespace
effective int
handles []Handle
handlesMap map[uint64]int
delete bool
purge bool
zap bool
wantDel int
delfinCalled int
delfinCalledAll int
delfinCalledEff int
purgefinCalled int
}
instanceGet := func(p *instance, key uint64) {
h := p.ns.Get(key, func() (charge int, value interface{}) {
to := &testingCacheObject{
t: t, cnt: cnt,
ns: p.nsid,
key: key,
}
p.effective++
cnt.createOne()
return 1, releaserFunc{func() {
to.Release()
p.effective--
}, to}
})
p.handles = append(p.handles, h)
p.handlesMap[key] = p.handlesMap[key] + 1
}
instanceRelease := func(p *instance, i int) {
h := p.handles[i]
key := h.Value().(releaserFunc).value.(*testingCacheObject).key
if n := p.handlesMap[key]; n == 0 {
t.Fatal("key ref == 0")
} else if n > 1 {
p.handlesMap[key] = n - 1
} else {
delete(p.handlesMap, key)
}
h.Release()
p.handles = append(p.handles[:i], p.handles[i+1:]...)
p.handles[len(p.handles) : len(p.handles)+1][0] = nil
}
seed := time.Now().UnixNano()
t.Logf("seed=%d", seed)
instances := make([]*instance, goroutines)
for i := range instances {
p := &instance{}
p.handlesMap = make(map[uint64]int)
p.seed = seed + int64(i)
p.rnd = rand.New(rand.NewSource(p.seed))
p.nsid = uint64(i)
p.ns = c.GetNamespace(p.nsid)
p.delete = i%6 == 0
p.purge = i%8 == 0
p.zap = i%12 == 0 || i%3 == 0
instances[i] = p
}
runr := rand.New(rand.NewSource(seed - 1))
run := func(rnd *rand.Rand, x []*instance, init func(p *instance) bool, fn func(p *instance, i int) bool) {
var (
rx []*instance
rn []int
)
if init == nil {
rx = append([]*instance{}, x...)
rn = make([]int, len(x))
} else {
for _, p := range x {
if init(p) {
rx = append(rx, p)
rn = append(rn, 0)
}
}
}
for len(rx) > 0 {
i := rand.Intn(len(rx))
if fn(rx[i], rn[i]) {
rn[i]++
} else {
rx = append(rx[:i], rx[i+1:]...)
rn = append(rn[:i], rn[i+1:]...)
}
t.Errorf("Cache.Get on #%d return nil", key)
}
}
// Get and release.
run(runr, instances, nil, func(p *instance, i int) bool {
if i < iterations {
if len(p.handles) == 0 || p.rnd.Int()%2 == 0 {
instanceGet(p, uint64(p.rnd.Intn(keymax)))
} else {
instanceRelease(p, p.rnd.Intn(len(p.handles)))
}
return true
} else {
return false
h2.Release()
if h := c.Get(0, 2, nil); h != nil {
t.Errorf("Cache.Get on #2 return non-nil: %v", h.Value())
}
if delFuncCalled != 4 {
t.Errorf("delFunc isn't called 4 times: got=%d", delFuncCalled)
}
}
func TestLRUCache_Close(t *testing.T) {
relFuncCalled := 0
relFunc := func() {
relFuncCalled++
}
delFuncCalled := 0
delFunc := func() {
delFuncCalled++
}
c := NewCache(NewLRU(2))
set(c, 0, 1, 1, 1, relFunc).Release()
set(c, 0, 2, 2, 1, relFunc).Release()
h3 := set(c, 0, 3, 3, 1, relFunc)
if h3 == nil {
t.Error("Cache.Get on #3 return nil")
}
if ok := c.Delete(0, 3, delFunc); !ok {
t.Error("Cache.Delete on #3 return false")
}
c.Close()
if relFuncCalled != 3 {
t.Errorf("relFunc isn't called 3 times: got=%d", relFuncCalled)
}
if delFuncCalled != 1 {
t.Errorf("delFunc isn't called 1 times: got=%d", delFuncCalled)
}
}
func BenchmarkLRUCache(b *testing.B) {
c := NewCache(NewLRU(10000))
b.SetParallelism(10)
b.RunParallel(func(pb *testing.PB) {
r := rand.New(rand.NewSource(time.Now().UnixNano()))
for pb.Next() {
key := uint64(r.Intn(1000000))
c.Get(0, key, func() (int, Value) {
return 1, key
}).Release()
}
})
if used, cap := c.Used(), c.Capacity(); used > cap {
t.Errorf("Used > capacity, used=%d cap=%d", used, cap)
}
// Check effective objects.
for i, p := range instances {
if int(p.effective) < len(p.handlesMap) {
t.Errorf("#%d effective objects < acquired handle, eo=%d ah=%d", i, p.effective, len(p.handlesMap))
}
}
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// First delete.
run(runr, instances, func(p *instance) bool {
p.wantDel = p.effective
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
_, wantExist := p.handlesMap[key]
gotExist := p.ns.Delete(key, func(exist, pending bool) {
p.delfinCalledAll++
if exist {
p.delfinCalledEff++
}
})
if !gotExist && wantExist {
t.Errorf("delete on NS#%d KEY#%d not found", p.nsid, key)
}
return true
} else {
return false
}
})
// Second delete.
run(runr, instances, func(p *instance) bool {
p.delfinCalled = 0
return p.delete
}, func(p *instance, i int) bool {
key := uint64(i)
if key < keymax {
gotExist := p.ns.Delete(key, func(exist, pending bool) {
if exist && !pending {
t.Errorf("delete fin on NS#%d KEY#%d exist and not pending for deletion", p.nsid, key)
}
p.delfinCalled++
})
if gotExist {
t.Errorf("delete on NS#%d KEY#%d found", p.nsid, key)
}
return true
} else {
if p.delfinCalled != keymax {
t.Errorf("(2) NS#%d not all delete fin called, diff=%d", p.nsid, keymax-p.delfinCalled)
}
return false
}
})
// Purge.
run(runr, instances, func(p *instance) bool {
return p.purge
}, func(p *instance, i int) bool {
p.ns.Purge(func(ns, key uint64) {
p.purgefinCalled++
})
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Release.
run(runr, instances, func(p *instance) bool {
return !p.zap
}, func(p *instance, i int) bool {
if len(p.handles) > 0 {
instanceRelease(p, len(p.handles)-1)
return true
} else {
return false
}
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
// Zap.
run(runr, instances, func(p *instance) bool {
return p.zap
}, func(p *instance, i int) bool {
p.ns.Zap()
p.handles = nil
p.handlesMap = nil
return false
})
if want := int(cnt.created - cnt.released); c.Size() != want {
t.Errorf("Invalid cache size, want=%d got=%d", want, c.Size())
}
if notrel, used := int(cnt.created-cnt.released), c.Used(); notrel != used {
t.Errorf("Invalid used value, want=%d got=%d", notrel, used)
}
c.Purge(nil)
for _, p := range instances {
if p.delete {
if p.delfinCalledAll != keymax {
t.Errorf("#%d not all delete fin called, purge=%v zap=%v diff=%d", p.nsid, p.purge, p.zap, keymax-p.delfinCalledAll)
}
if p.delfinCalledEff != p.wantDel {
t.Errorf("#%d not all effective delete fin called, diff=%d", p.nsid, p.wantDel-p.delfinCalledEff)
}
if p.purge && p.purgefinCalled > 0 {
t.Errorf("#%d some purge fin called, delete=%v zap=%v n=%d", p.nsid, p.delete, p.zap, p.purgefinCalled)
}
} else {
if p.purge {
if p.purgefinCalled != p.wantDel {
t.Errorf("#%d not all purge fin called, delete=%v zap=%v diff=%d", p.nsid, p.delete, p.zap, p.wantDel-p.purgefinCalled)
}
}
}
}
if cnt.created != cnt.released {
t.Errorf("Some cache object weren't released, created=%d released=%d", cnt.created, cnt.released)
}
}
func BenchmarkLRUCache_Set(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
}
func BenchmarkLRUCache_Get(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, nil)
}
}
func BenchmarkLRUCache_Get2(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil)
}
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
ns.Get(i, func() (charge int, value interface{}) {
return 0, nil
})
}
}
func BenchmarkLRUCache_Release(b *testing.B) {
c := NewLRUCache(0)
ns := c.GetNamespace(0)
handles := make([]Handle, b.N)
for i := uint64(0); i < uint64(b.N); i++ {
handles[i] = set(ns, i, "", 1, nil)
}
b.ResetTimer()
for _, h := range handles {
h.Release()
}
}
func BenchmarkLRUCache_SetRelease(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
for i := uint64(0); i < uint64(b.N); i++ {
set(ns, i, "", 1, nil).Release()
}
}
func BenchmarkLRUCache_SetReleaseTwice(b *testing.B) {
capacity := b.N / 100
if capacity <= 0 {
capacity = 10
}
c := NewLRUCache(capacity)
ns := c.GetNamespace(0)
b.ResetTimer()
na := b.N / 2
nb := b.N - na
for i := uint64(0); i < uint64(na); i++ {
set(ns, i, "", 1, nil).Release()
}
for i := uint64(0); i < uint64(nb); i++ {
set(ns, i, "", 1, nil).Release()
}
}

View File

@@ -0,0 +1,195 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"unsafe"
)
type lruNode struct {
n *Node
h *Handle
ban bool
next, prev *lruNode
}
func (n *lruNode) insert(at *lruNode) {
x := at.next
at.next = n
n.prev = at
n.next = x
x.prev = n
}
func (n *lruNode) remove() {
if n.prev != nil {
n.prev.next = n.next
n.next.prev = n.prev
n.prev = nil
n.next = nil
} else {
panic("BUG: removing removed node")
}
}
type lru struct {
mu sync.Mutex
capacity int
used int
recent lruNode
}
func (r *lru) reset() {
r.recent.next = &r.recent
r.recent.prev = &r.recent
r.used = 0
}
func (r *lru) Capacity() int {
r.mu.Lock()
defer r.mu.Unlock()
return r.capacity
}
func (r *lru) SetCapacity(capacity int) {
var evicted []*lruNode
r.mu.Lock()
r.capacity = capacity
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Promote(n *Node) {
var evicted []*lruNode
r.mu.Lock()
if n.CacheData == nil {
if n.Size() <= r.capacity {
rn := &lruNode{n: n, h: n.GetHandle()}
rn.insert(&r.recent)
n.CacheData = unsafe.Pointer(rn)
r.used += n.Size()
for r.used > r.capacity {
rn := r.recent.prev
if rn == nil {
panic("BUG: invalid LRU used or capacity counter")
}
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.insert(&r.recent)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) Ban(n *Node) {
r.mu.Lock()
if n.CacheData == nil {
n.CacheData = unsafe.Pointer(&lruNode{n: n, ban: true})
} else {
rn := (*lruNode)(n.CacheData)
if !rn.ban {
rn.remove()
rn.ban = true
r.used -= rn.n.Size()
r.mu.Unlock()
rn.h.Release()
rn.h = nil
return
}
}
r.mu.Unlock()
}
func (r *lru) Evict(n *Node) {
r.mu.Lock()
rn := (*lruNode)(n.CacheData)
if rn == nil || rn.ban {
r.mu.Unlock()
return
}
n.CacheData = nil
r.mu.Unlock()
rn.h.Release()
}
func (r *lru) EvictNS(ns uint64) {
var evicted []*lruNode
r.mu.Lock()
for e := r.recent.prev; e != &r.recent; {
rn := e
e = e.prev
if rn.n.NS() == ns {
rn.remove()
rn.n.CacheData = nil
r.used -= rn.n.Size()
evicted = append(evicted, rn)
}
}
r.mu.Unlock()
for _, rn := range evicted {
rn.h.Release()
}
}
func (r *lru) EvictAll() {
r.mu.Lock()
back := r.recent.prev
for rn := back; rn != &r.recent; rn = rn.prev {
rn.n.CacheData = nil
}
r.reset()
r.mu.Unlock()
for rn := back; rn != &r.recent; rn = rn.prev {
rn.h.Release()
}
}
func (r *lru) Close() error {
return nil
}
// NewLRU creates a new LRU-cache.
func NewLRU(capacity int) Cacher {
r := &lru{capacity: capacity}
r.reset()
return r
}
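// Illustrative sketch, not part of the upstream diff: NewLRU is the Cacher
// plugged into NewCache; lowering the capacity evicts from the cold end of
// the recent list until the used size fits again. Capacity and keys are
// hypothetical.
func exampleLRUSetCapacity() {
	c := NewCache(NewLRU(3))
	for key := uint64(1); key <= 3; key++ {
		c.Get(0, key, func() (size int, value Value) { return 1, key }).Release()
	}
	c.SetCapacity(1) // drops the two least recently promoted entries
}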

View File

@@ -1,622 +0,0 @@
// Copyright (c) 2012, Suryandaru Triandana <syndtr@gmail.com>
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package cache
import (
"sync"
"sync/atomic"
"github.com/syndtr/goleveldb/leveldb/util"
)
// The LLRB implementation were taken from https://github.com/petar/GoLLRB.
// Which contains the following header:
//
// Copyright 2010 Petar Maymounkov. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// lruCache represent a LRU cache state.
type lruCache struct {
mu sync.Mutex
recent lruNode
table map[uint64]*lruNs
capacity int
used, size, alive int
}
// NewLRUCache creates a new initialized LRU cache with the given capacity.
func NewLRUCache(capacity int) Cache {
c := &lruCache{
table: make(map[uint64]*lruNs),
capacity: capacity,
}
c.recent.rNext = &c.recent
c.recent.rPrev = &c.recent
return c
}
func (c *lruCache) Capacity() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.capacity
}
func (c *lruCache) Used() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.used
}
func (c *lruCache) Size() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.size
}
func (c *lruCache) NumObjects() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.alive
}
// SetCapacity set cache capacity.
func (c *lruCache) SetCapacity(capacity int) {
c.mu.Lock()
c.capacity = capacity
c.evict()
c.mu.Unlock()
}
// GetNamespace return namespace object for given id.
func (c *lruCache) GetNamespace(id uint64) Namespace {
c.mu.Lock()
defer c.mu.Unlock()
if ns, ok := c.table[id]; ok {
return ns
}
ns := &lruNs{lru: c, id: id}
c.table[id] = ns
return ns
}
func (c *lruCache) ZapNamespace(id uint64) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.zapNB()
delete(c.table, id)
}
c.mu.Unlock()
}
func (c *lruCache) PurgeNamespace(id uint64, fin PurgeFin) {
c.mu.Lock()
if ns, exist := c.table[id]; exist {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
// Purge purge entire cache.
func (c *lruCache) Purge(fin PurgeFin) {
c.mu.Lock()
for _, ns := range c.table {
ns.purgeNB(fin)
}
c.mu.Unlock()
}
func (c *lruCache) Zap() {
c.mu.Lock()
for _, ns := range c.table {
ns.zapNB()
}
c.table = make(map[uint64]*lruNs)
c.mu.Unlock()
}
func (c *lruCache) evict() {
top := &c.recent
for n := c.recent.rPrev; c.used > c.capacity && n != top; {
if n.state != nodeEffective {
panic("evicting non effective node")
}
n.state = nodeEvicted
n.rRemove()
n.derefNB()
c.used -= n.charge
n = c.recent.rPrev
}
}
type lruNs struct {
lru *lruCache
id uint64
rbRoot *lruNode
state nsState
}
func (ns *lruNs) rbGetOrCreateNode(h *lruNode, key uint64) (hn, n *lruNode) {
if h == nil {
n = &lruNode{ns: ns, key: key}
return n, n
}
if key < h.key {
hn, n = ns.rbGetOrCreateNode(h.rbLeft, key)
if hn != nil {
h.rbLeft = hn
} else {
return nil, n
}
} else if key > h.key {
hn, n = ns.rbGetOrCreateNode(h.rbRight, key)
if hn != nil {
h.rbRight = hn
} else {
return nil, n
}
} else {
return nil, h
}
if rbIsRed(h.rbRight) && !rbIsRed(h.rbLeft) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h, n
}
func (ns *lruNs) getOrCreateNode(key uint64) *lruNode {
hn, n := ns.rbGetOrCreateNode(ns.rbRoot, key)
if hn != nil {
ns.rbRoot = hn
ns.rbRoot.rbBlack = true
}
return n
}
func (ns *lruNs) rbGetNode(key uint64) *lruNode {
h := ns.rbRoot
for h != nil {
switch {
case key < h.key:
h = h.rbLeft
case key > h.key:
h = h.rbRight
default:
return h
}
}
return nil
}
func (ns *lruNs) getNode(key uint64) *lruNode {
return ns.rbGetNode(key)
}
func (ns *lruNs) rbDeleteNode(h *lruNode, key uint64) *lruNode {
if h == nil {
return nil
}
if key < h.key {
if h.rbLeft == nil { // key not present. Nothing to delete
return h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft = ns.rbDeleteNode(h.rbLeft, key)
} else {
if rbIsRed(h.rbLeft) {
h = rbRotRight(h)
}
// If @key equals @h.key and no right children at @h
if h.key == key && h.rbRight == nil {
return nil
}
if h.rbRight != nil && !rbIsRed(h.rbRight) && !rbIsRed(h.rbRight.rbLeft) {
h = rbMoveRight(h)
}
// If @key equals @h.key, and (from above) 'h.Right != nil'
if h.key == key {
var x *lruNode
h.rbRight, x = rbDeleteMin(h.rbRight)
if x == nil {
panic("logic")
}
x.rbLeft, h.rbLeft = h.rbLeft, nil
x.rbRight, h.rbRight = h.rbRight, nil
x.rbBlack = h.rbBlack
h = x
} else { // Else, @key is bigger than @h.key
h.rbRight = ns.rbDeleteNode(h.rbRight, key)
}
}
return rbFixup(h)
}
func (ns *lruNs) deleteNode(key uint64) {
ns.rbRoot = ns.rbDeleteNode(ns.rbRoot, key)
if ns.rbRoot != nil {
ns.rbRoot.rbBlack = true
}
}
func (ns *lruNs) rbIterateNodes(h *lruNode, pivot uint64, iter func(n *lruNode) bool) bool {
if h == nil {
return true
}
if h.key >= pivot {
if !ns.rbIterateNodes(h.rbLeft, pivot, iter) {
return false
}
if !iter(h) {
return false
}
}
return ns.rbIterateNodes(h.rbRight, pivot, iter)
}
func (ns *lruNs) iterateNodes(iter func(n *lruNode) bool) {
ns.rbIterateNodes(ns.rbRoot, 0, iter)
}
func (ns *lruNs) Get(key uint64, setf SetFunc) Handle {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
return nil
}
var n *lruNode
if setf == nil {
n = ns.getNode(key)
if n == nil {
return nil
}
} else {
n = ns.getOrCreateNode(key)
}
switch n.state {
case nodeZero:
charge, value := setf()
if value == nil {
ns.deleteNode(key)
return nil
}
if charge < 0 {
charge = 0
}
n.value = value
n.charge = charge
n.state = nodeEvicted
ns.lru.size += charge
ns.lru.alive++
fallthrough
case nodeEvicted:
if n.charge == 0 {
break
}
// Insert to recent list.
n.state = nodeEffective
n.ref++
ns.lru.used += n.charge
ns.lru.evict()
fallthrough
case nodeEffective:
// Bump to front.
n.rRemove()
n.rInsert(&ns.lru.recent)
case nodeDeleted:
// Do nothing.
default:
panic("invalid state")
}
n.ref++
return &lruHandle{node: n}
}
func (ns *lruNs) Delete(key uint64, fin DelFin) bool {
ns.lru.mu.Lock()
defer ns.lru.mu.Unlock()
if ns.state != nsEffective {
if fin != nil {
fin(false, false)
}
return false
}
n := ns.getNode(key)
if n == nil {
if fin != nil {
fin(false, false)
}
return false
}
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.delfin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.delfin = fin
case nodeDeleted:
if fin != nil {
fin(true, true)
}
return false
default:
panic("invalid state")
}
return true
}
func (ns *lruNs) purgeNB(fin PurgeFin) {
if ns.state == nsEffective {
var nodes []*lruNode
ns.iterateNodes(func(n *lruNode) bool {
nodes = append(nodes, n)
return true
})
for _, n := range nodes {
switch n.state {
case nodeEffective:
ns.lru.used -= n.charge
n.state = nodeDeleted
n.purgefin = fin
n.rRemove()
n.derefNB()
case nodeEvicted:
n.state = nodeDeleted
n.purgefin = fin
case nodeDeleted:
default:
panic("invalid state")
}
}
}
}
func (ns *lruNs) Purge(fin PurgeFin) {
ns.lru.mu.Lock()
ns.purgeNB(fin)
ns.lru.mu.Unlock()
}
func (ns *lruNs) zapNB() {
if ns.state == nsEffective {
ns.state = nsZapped
ns.iterateNodes(func(n *lruNode) bool {
if n.state == nodeEffective {
ns.lru.used -= n.charge
n.rRemove()
}
ns.lru.size -= n.charge
n.state = nodeDeleted
n.fin()
return true
})
ns.rbRoot = nil
}
}
func (ns *lruNs) Zap() {
ns.lru.mu.Lock()
ns.zapNB()
delete(ns.lru.table, ns.id)
ns.lru.mu.Unlock()
}
type lruNode struct {
ns *lruNs
rNext, rPrev *lruNode
rbLeft, rbRight *lruNode
rbBlack bool
key uint64
value interface{}
charge int
ref int
state nodeState
delfin DelFin
purgefin PurgeFin
}
func (n *lruNode) rInsert(at *lruNode) {
x := at.rNext
at.rNext = n
n.rPrev = at
n.rNext = x
x.rPrev = n
}
func (n *lruNode) rRemove() bool {
if n.rPrev == nil {
return false
}
n.rPrev.rNext = n.rNext
n.rNext.rPrev = n.rPrev
n.rPrev = nil
n.rNext = nil
return true
}
func (n *lruNode) fin() {
if r, ok := n.value.(util.Releaser); ok {
r.Release()
}
if n.purgefin != nil {
if n.delfin != nil {
panic("conflicting delete and purge fin")
}
n.purgefin(n.ns.id, n.key)
n.purgefin = nil
} else if n.delfin != nil {
n.delfin(true, false)
n.delfin = nil
}
}
func (n *lruNode) derefNB() {
n.ref--
if n.ref == 0 {
if n.ns.state == nsEffective {
// Remove elemement.
n.ns.deleteNode(n.key)
n.ns.lru.size -= n.charge
n.ns.lru.alive--
n.fin()
}
n.value = nil
} else if n.ref < 0 {
panic("leveldb/cache: lruCache: negative node reference")
}
}
func (n *lruNode) deref() {
n.ns.lru.mu.Lock()
n.derefNB()
n.ns.lru.mu.Unlock()
}
type lruHandle struct {
node *lruNode
once uint32
}
func (h *lruHandle) Value() interface{} {
if atomic.LoadUint32(&h.once) == 0 {
return h.node.value
}
return nil
}
func (h *lruHandle) Release() {
if !atomic.CompareAndSwapUint32(&h.once, 0, 1) {
return
}
h.node.deref()
h.node = nil
}
func rbIsRed(h *lruNode) bool {
if h == nil {
return false
}
return !h.rbBlack
}
func rbRotLeft(h *lruNode) *lruNode {
x := h.rbRight
if x.rbBlack {
panic("rotating a black link")
}
h.rbRight = x.rbLeft
x.rbLeft = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbRotRight(h *lruNode) *lruNode {
x := h.rbLeft
if x.rbBlack {
panic("rotating a black link")
}
h.rbLeft = x.rbRight
x.rbRight = h
x.rbBlack = h.rbBlack
h.rbBlack = false
return x
}
func rbFlip(h *lruNode) {
h.rbBlack = !h.rbBlack
h.rbLeft.rbBlack = !h.rbLeft.rbBlack
h.rbRight.rbBlack = !h.rbRight.rbBlack
}
func rbMoveLeft(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbRight.rbLeft) {
h.rbRight = rbRotRight(h.rbRight)
h = rbRotLeft(h)
rbFlip(h)
}
return h
}
func rbMoveRight(h *lruNode) *lruNode {
rbFlip(h)
if rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
rbFlip(h)
}
return h
}
func rbFixup(h *lruNode) *lruNode {
if rbIsRed(h.rbRight) {
h = rbRotLeft(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbLeft.rbLeft) {
h = rbRotRight(h)
}
if rbIsRed(h.rbLeft) && rbIsRed(h.rbRight) {
rbFlip(h)
}
return h
}
func rbDeleteMin(h *lruNode) (hn, n *lruNode) {
if h == nil {
return nil, nil
}
if h.rbLeft == nil {
return nil, h
}
if !rbIsRed(h.rbLeft) && !rbIsRed(h.rbLeft.rbLeft) {
h = rbMoveLeft(h)
}
h.rbLeft, n = rbDeleteMin(h.rbLeft)
return rbFixup(h), n
}

View File

@@ -9,14 +9,12 @@ package leveldb
import (
"bytes"
"fmt"
"io"
"math/rand"
"testing"
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/storage"
"io"
"math/rand"
"testing"
)
const ctValSize = 1000
@@ -33,8 +31,8 @@ func newDbCorruptHarnessWopt(t *testing.T, o *opt.Options) *dbCorruptHarness {
func newDbCorruptHarness(t *testing.T) *dbCorruptHarness {
return newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
})
}
@@ -269,9 +267,9 @@ func TestCorruptDB_TableIndex(t *testing.T) {
func TestCorruptDB_MissingManifest(t *testing.T) {
rnd := rand.New(rand.NewSource(0x0badda7a))
h := newDbCorruptHarnessWopt(t, &opt.Options{
BlockCache: cache.NewLRUCache(100),
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
BlockCacheCapacity: 100,
Strict: opt.StrictJournalChecksum,
WriteBuffer: 1000 * 60,
})
h.build(1000)

View File

@@ -823,8 +823,8 @@ func (db *DB) GetProperty(name string) (value string, err error) {
case p == "blockpool":
value = fmt.Sprintf("%v", db.s.tops.bpool)
case p == "cachedblock":
if bc := db.s.o.GetBlockCache(); bc != nil {
value = fmt.Sprintf("%d", bc.Size())
if db.s.tops.bcache != nil {
value = fmt.Sprintf("%d", db.s.tops.bcache.Size())
} else {
value = "<nil>"
}

View File

@@ -8,6 +8,7 @@ package leveldb
import (
"container/list"
"fmt"
"runtime"
"sync"
"sync/atomic"
@@ -89,6 +90,10 @@ func (db *DB) newSnapshot() *Snapshot {
return snap
}
func (snap *Snapshot) String() string {
return fmt.Sprintf("leveldb.Snapshot{%d}", snap.elem.seq)
}
// Get gets the value for the given key. It returns ErrNotFound if
// the DB does not contain the key.
//

View File

@@ -1271,7 +1271,7 @@ func TestDB_DeletionMarkers2(t *testing.T) {
}
func TestDB_CompactionTableOpenError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{CachedOpenFiles: -1})
h := newDbHarnessWopt(t, &opt.Options{OpenFilesCacheCapacity: -1})
defer h.close()
im := 10
@@ -1629,8 +1629,8 @@ func TestDB_ManualCompaction(t *testing.T) {
func TestDB_BloomFilter(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
BlockCache: opt.NoCache,
Filter: filter.NewBloomFilter(10),
DisableBlockCache: true,
Filter: filter.NewBloomFilter(10),
})
defer h.close()
@@ -2066,8 +2066,8 @@ func TestDB_GetProperties(t *testing.T) {
func TestDB_GoleveldbIssue72and83(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 1 * opt.MiB,
CachedOpenFiles: 3,
WriteBuffer: 1 * opt.MiB,
OpenFilesCacheCapacity: 3,
})
defer h.close()
@@ -2200,7 +2200,7 @@ func TestDB_GoleveldbIssue72and83(t *testing.T) {
func TestDB_TransientError(t *testing.T) {
h := newDbHarnessWopt(t, &opt.Options{
WriteBuffer: 128 * opt.KiB,
CachedOpenFiles: 3,
OpenFilesCacheCapacity: 3,
DisableCompactionBackoff: true,
})
defer h.close()
@@ -2410,7 +2410,7 @@ func TestDB_TableCompactionBuilder(t *testing.T) {
CompactionTableSize: 43 * opt.KiB,
CompactionExpandLimitFactor: 1,
CompactionGPOverlapsFactor: 1,
BlockCache: opt.NoCache,
DisableBlockCache: true,
}
s, err := newSession(stor, o)
if err != nil {

View File

@@ -112,9 +112,9 @@ func (db *DB) flush(n int) (mem *memDB, nn int, err error) {
db.writeDelay += time.Since(start)
db.writeDelayN++
} else if db.writeDelayN > 0 {
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
db.writeDelay = 0
db.writeDelayN = 0
db.logf("db@write was delayed N·%d T·%v", db.writeDelayN, db.writeDelay)
}
return
}

View File

@@ -17,14 +17,14 @@ import (
var _ = testutil.Defer(func() {
Describe("Leveldb external", func() {
o := &opt.Options{
BlockCache: opt.NoCache,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
CachedOpenFiles: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
DisableBlockCache: true,
BlockRestartInterval: 5,
BlockSize: 80,
Compression: opt.NoCompression,
OpenFilesCacheCapacity: -1,
Strict: opt.StrictAll,
WriteBuffer: 1000,
CompactionTableSize: 2000,
}
Describe("write test", func() {

View File

@@ -106,7 +106,7 @@ func (ik iKey) assert() {
panic("leveldb: nil iKey")
}
if len(ik) < 8 {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", ik, len(ik)))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid length", []byte(ik), len(ik)))
}
}
@@ -124,7 +124,7 @@ func (ik iKey) parseNum() (seq uint64, kt kType) {
num := ik.num()
seq, kt = uint64(num>>8), kType(num&0xff)
if kt > ktVal {
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", ik, len(ik), kt))
panic(fmt.Sprintf("leveldb: iKey %q, len=%d: invalid type %#x", []byte(ik), len(ik), kt))
}
return
}

View File

@@ -20,8 +20,9 @@ const (
GiB = MiB * 1024
)
const (
DefaultBlockCacheSize = 8 * MiB
var (
DefaultBlockCacher = LRUCacher
DefaultBlockCacheCapacity = 8 * MiB
DefaultBlockRestartInterval = 16
DefaultBlockSize = 4 * KiB
DefaultCompactionExpandLimitFactor = 25
@@ -33,7 +34,8 @@ const (
DefaultCompactionTotalSize = 10 * MiB
DefaultCompactionTotalSizeMultiplier = 10.0
DefaultCompressionType = SnappyCompression
DefaultCachedOpenFiles = 500
DefaultOpenFilesCacher = LRUCacher
DefaultOpenFilesCacheCapacity = 500
DefaultMaxMemCompationLevel = 2
DefaultNumLevel = 7
DefaultWriteBuffer = 4 * MiB
@@ -41,22 +43,33 @@ const (
DefaultWriteL0SlowdownTrigger = 8
)
type noCache struct{}
// Cacher is a caching algorithm.
type Cacher interface {
New(capacity int) cache.Cacher
}
func (noCache) SetCapacity(capacity int) {}
func (noCache) Capacity() int { return 0 }
func (noCache) Used() int { return 0 }
func (noCache) Size() int { return 0 }
func (noCache) NumObjects() int { return 0 }
func (noCache) GetNamespace(id uint64) cache.Namespace { return nil }
func (noCache) PurgeNamespace(id uint64, fin cache.PurgeFin) {}
func (noCache) ZapNamespace(id uint64) {}
func (noCache) Purge(fin cache.PurgeFin) {}
func (noCache) Zap() {}
type CacherFunc struct {
NewFunc func(capacity int) cache.Cacher
}
var NoCache cache.Cache = noCache{}
func (f *CacherFunc) New(capacity int) cache.Cacher {
if f.NewFunc != nil {
return f.NewFunc(capacity)
}
return nil
}
// Compression is the per-block compression algorithm to use.
func noCacher(int) cache.Cacher { return nil }
var (
// LRUCacher is the LRU-cache algorithm.
LRUCacher = &CacherFunc{cache.NewLRU}
// NoCacher is the value used to disable the caching algorithm.
NoCacher = &CacherFunc{}
)
// Compression is the 'sorted table' block compression algorithm to use.
type Compression uint
func (c Compression) String() string {
@@ -133,16 +146,17 @@ type Options struct {
// The default value is nil
AltFilters []filter.Filter
// BlockCache provides per-block caching for LevelDB. Specify NoCache to
// disable block caching.
// BlockCacher provides cache algorithm for LevelDB 'sorted table' block caching.
// Specify NoCacher to disable caching algorithm.
//
// By default LevelDB will create LRU-cache with capacity of BlockCacheSize.
BlockCache cache.Cache
// The default value is LRUCacher.
BlockCacher Cacher
// BlockCacheSize defines the capacity of the default 'block cache'.
// BlockCacheCapacity defines the capacity of the 'sorted table' block caching.
// Use -1 for zero; this has the same effect as specifying NoCacher to BlockCacher.
//
// The default value is 8MiB.
BlockCacheSize int
BlockCacheCapacity int
// BlockRestartInterval is the number of keys between restart points for
// delta encoding of keys.
@@ -156,13 +170,6 @@ type Options struct {
// The default value is 4KiB.
BlockSize int
// CachedOpenFiles defines number of open files to kept around when not
// in-use, the counting includes still in-use files.
// Set this to negative value to disable caching.
//
// The default value is 500.
CachedOpenFiles int
// CompactionExpandLimitFactor limits compaction size after expanded.
// This will be multiplied by table size limit at compaction target level.
//
@@ -237,11 +244,17 @@ type Options struct {
// The default value uses the same ordering as bytes.Compare.
Comparer comparer.Comparer
// Compression defines the per-block compression to use.
// Compression defines the 'sorted table' block compression to use.
//
// The default value (DefaultCompression) uses snappy compression.
Compression Compression
// DisableBlockCache allows disabling the use of cache.Cache functionality on
// 'sorted table' blocks.
//
// The default value is false.
DisableBlockCache bool
// DisableCompactionBackoff allows disabling compaction retry backoff.
//
// The default value is false.
@@ -288,6 +301,18 @@ type Options struct {
// The default is 7.
NumLevel int
// OpenFilesCacher provides cache algorithm for open files caching.
// Specify NoCacher to disable caching algorithm.
//
// The default value is LRUCacher.
OpenFilesCacher Cacher
// OpenFilesCacheCapacity defines the capacity of the open files caching.
// Use -1 for zero; this has the same effect as specifying NoCacher to OpenFilesCacher.
//
// The default value is 500.
OpenFilesCacheCapacity int
// Strict defines the DB strict level.
Strict Strict
@@ -320,18 +345,22 @@ func (o *Options) GetAltFilters() []filter.Filter {
return o.AltFilters
}
func (o *Options) GetBlockCache() cache.Cache {
if o == nil {
func (o *Options) GetBlockCacher() Cacher {
if o == nil || o.BlockCacher == nil {
return DefaultBlockCacher
} else if o.BlockCacher == NoCacher {
return nil
}
return o.BlockCache
return o.BlockCacher
}
func (o *Options) GetBlockCacheSize() int {
if o == nil || o.BlockCacheSize <= 0 {
return DefaultBlockCacheSize
func (o *Options) GetBlockCacheCapacity() int {
if o == nil || o.BlockCacheCapacity <= 0 {
return DefaultBlockCacheCapacity
} else if o.BlockCacheCapacity == -1 {
return 0
}
return o.BlockCacheSize
return o.BlockCacheCapacity
}
func (o *Options) GetBlockRestartInterval() int {
@@ -348,15 +377,6 @@ func (o *Options) GetBlockSize() int {
return o.BlockSize
}
func (o *Options) GetCachedOpenFiles() int {
if o == nil || o.CachedOpenFiles == 0 {
return DefaultCachedOpenFiles
} else if o.CachedOpenFiles < 0 {
return 0
}
return o.CachedOpenFiles
}
func (o *Options) GetCompactionExpandLimit(level int) int {
factor := DefaultCompactionExpandLimitFactor
if o != nil && o.CompactionExpandLimitFactor > 0 {
@@ -494,6 +514,25 @@ func (o *Options) GetNumLevel() int {
return o.NumLevel
}
func (o *Options) GetOpenFilesCacher() Cacher {
if o == nil || o.OpenFilesCacher == nil {
return DefaultOpenFilesCacher
}
if o.OpenFilesCacher == NoCacher {
return nil
}
return o.OpenFilesCacher
}
func (o *Options) GetOpenFilesCacheCapacity() int {
if o == nil || o.OpenFilesCacheCapacity <= 0 {
return DefaultOpenFilesCacheCapacity
} else if o.OpenFilesCacheCapacity == -1 {
return 0
}
return o.OpenFilesCacheCapacity
}
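// Illustrative sketch, not part of the upstream diff: how a caller moves from
// the removed BlockCache/CachedOpenFiles fields to the capacity-based options
// introduced here. The capacities are hypothetical example values.
func exampleCacheOptions() *Options {
	return &Options{
		BlockCacheCapacity:     16 * MiB, // replaces BlockCache: cache.NewLRUCache(...)
		OpenFilesCacheCapacity: 200,      // replaces CachedOpenFiles: 200
		// Set DisableBlockCache: true or BlockCacher: NoCacher to turn block
		// caching off entirely (the old opt.NoCache value is gone).
	}
}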
func (o *Options) GetStrict(strict Strict) bool {
if o == nil || o.Strict == 0 {
return DefaultStrict&strict != 0

View File

@@ -7,7 +7,6 @@
package leveldb
import (
"github.com/syndtr/goleveldb/leveldb/cache"
"github.com/syndtr/goleveldb/leveldb/filter"
"github.com/syndtr/goleveldb/leveldb/opt"
)
@@ -32,13 +31,6 @@ func (s *session) setOptions(o *opt.Options) {
no.AltFilters[i] = &iFilter{filter}
}
}
// Block cache.
switch o.GetBlockCache() {
case nil:
no.BlockCache = cache.NewLRUCache(o.GetBlockCacheSize())
case opt.NoCache:
no.BlockCache = nil
}
// Comparer.
s.icmp = &iComparer{o.GetComparer()}
no.Comparer = s.icmp

View File

@@ -73,7 +73,7 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
stCompPtrs: make([]iKey, o.GetNumLevel()),
}
s.setOptions(o)
s.tops = newTableOps(s, s.o.GetCachedOpenFiles())
s.tops = newTableOps(s)
s.setVersion(newVersion(s))
s.log("log@legend F·NumFile S·FileSize N·Entry C·BadEntry B·BadBlock Ke·KeyError D·DroppedEntry L·Level Q·SeqNum T·TimeElapsed")
return
@@ -82,9 +82,6 @@ func newSession(stor storage.Storage, o *opt.Options) (s *session, err error) {
// Close session.
func (s *session) close() {
s.tops.close()
if bc := s.o.GetBlockCache(); bc != nil {
bc.Purge(nil)
}
if s.manifest != nil {
s.manifest.Close()
}

View File

@@ -221,7 +221,7 @@ func (fs *fileStorage) GetManifest() (f File, err error) {
fs.log(fmt.Sprintf("skipping %s: invalid file name", fn))
continue
}
if _, e1 := strconv.ParseUint(fn[7:], 10, 0); e1 != nil {
if _, e1 := strconv.ParseUint(fn[8:], 10, 0); e1 != nil {
fs.log(fmt.Sprintf("skipping %s: invalid file num: %v", fn, e1))
continue
}

View File

@@ -286,10 +286,10 @@ func (x *tFilesSortByNum) Less(i, j int) bool {
// Table operations.
type tOps struct {
s *session
cache cache.Cache
cacheNS cache.Namespace
bpool *util.BufferPool
s *session
cache *cache.Cache
bcache *cache.Cache
bpool *util.BufferPool
}
// Creates an empty table and returns table writer.
@@ -338,26 +338,28 @@ func (t *tOps) createFrom(src iterator.Iterator) (f *tFile, n int, err error) {
// Opens table. It returns a cache handle, which should
// be released after use.
func (t *tOps) open(f *tFile) (ch cache.Handle, err error) {
func (t *tOps) open(f *tFile) (ch *cache.Handle, err error) {
num := f.file.Num()
ch = t.cacheNS.Get(num, func() (charge int, value interface{}) {
ch = t.cache.Get(0, num, func() (size int, value cache.Value) {
var r storage.Reader
r, err = f.file.Open()
if err != nil {
return 0, nil
}
var bcacheNS cache.Namespace
if bc := t.s.o.GetBlockCache(); bc != nil {
bcacheNS = bc.GetNamespace(num)
var bcache *cache.CacheGetter
if t.bcache != nil {
bcache = &cache.CacheGetter{Cache: t.bcache, NS: num}
}
var tr *table.Reader
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcacheNS, t.bpool, t.s.o.Options)
tr, err = table.NewReader(r, int64(f.size), storage.NewFileInfo(f.file), bcache, t.bpool, t.s.o.Options)
if err != nil {
r.Close()
return 0, nil
}
return 1, tr
})
if ch == nil && err == nil {
err = ErrClosed
@@ -412,16 +414,14 @@ func (t *tOps) newIterator(f *tFile, slice *util.Range, ro *opt.ReadOptions) ite
// no one uses the table.
func (t *tOps) remove(f *tFile) {
num := f.file.Num()
t.cacheNS.Delete(num, func(exist, pending bool) {
if !pending {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if bc := t.s.o.GetBlockCache(); bc != nil {
bc.ZapNamespace(num)
}
t.cache.Delete(0, num, func() {
if err := f.file.Remove(); err != nil {
t.s.logf("table@remove removing @%d %q", num, err)
} else {
t.s.logf("table@remove removed @%d", num)
}
if t.bcache != nil {
t.bcache.EvictNS(num)
}
})
}
@@ -429,18 +429,34 @@ func (t *tOps) remove(f *tFile) {
// Closes the table ops instance. It will close all tables,
// regardless of whether they are still in use.
func (t *tOps) close() {
t.cache.Zap()
t.bpool.Close()
t.cache.Close()
if t.bcache != nil {
t.bcache.Close()
}
}
// Creates new initialized table ops instance.
func newTableOps(s *session, cacheCap int) *tOps {
c := cache.NewLRUCache(cacheCap)
func newTableOps(s *session) *tOps {
var (
cacher cache.Cacher
bcache *cache.Cache
)
if s.o.GetOpenFilesCacheCapacity() > 0 {
cacher = cache.NewLRU(s.o.GetOpenFilesCacheCapacity())
}
if !s.o.DisableBlockCache {
var bcacher cache.Cacher
if s.o.GetBlockCacheCapacity() > 0 {
bcacher = cache.NewLRU(s.o.GetBlockCacheCapacity())
}
bcache = cache.NewCache(bcacher)
}
return &tOps{
s: s,
cache: c,
cacheNS: c.GetNamespace(0),
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
s: s,
cache: cache.NewCache(cacher),
bcache: bcache,
bpool: util.NewBufferPool(s.o.GetBlockSize() + 5),
}
}
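The refactor above swaps the old namespace-based cache for the new *cache.Cache type: Get takes a namespace, a key and a fill function and returns a *cache.Handle that pins the value until it is released, while Delete runs a callback once the entry is actually gone. A minimal sketch of that pattern, assuming only the vendored cache API visible in this diff:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	// An LRU-backed cache, as newTableOps builds for open table files.
	c := cache.NewCache(cache.NewLRU(100))
	defer c.Close()

	// Get fills the entry on a miss; the handle keeps the value alive
	// until released, mirroring tOps.open.
	h := c.Get(0, 42, func() (size int, value cache.Value) {
		return 1, "an opened table reader"
	})
	fmt.Println(h.Value())
	h.Release()

	// Delete drops the entry; the callback fires once it is gone, which is
	// where tOps.remove deletes the file and evicts the per-table
	// block-cache namespace.
	c.Delete(0, 42, func() {
		fmt.Println("table 42 removed")
	})
}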

View File

@@ -509,7 +509,7 @@ type Reader struct {
mu sync.RWMutex
fi *storage.FileInfo
reader io.ReaderAt
cache cache.Namespace
cache *cache.CacheGetter
err error
bpool *util.BufferPool
// Options
@@ -613,18 +613,22 @@ func (r *Reader) readBlock(bh blockHandle, verifyChecksum bool) (*block, error)
func (r *Reader) readBlockCached(bh blockHandle, verifyChecksum, fillCache bool) (*block, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *block
b, err = r.readBlock(bh, verifyChecksum)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*block)
if !ok {
@@ -667,18 +671,22 @@ func (r *Reader) readFilterBlock(bh blockHandle) (*filterBlock, error) {
func (r *Reader) readFilterBlockCached(bh blockHandle, fillCache bool) (*filterBlock, util.Releaser, error) {
if r.cache != nil {
var err error
ch := r.cache.Get(bh.offset, func() (charge int, value interface{}) {
if !fillCache {
return 0, nil
}
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
var (
err error
ch *cache.Handle
)
if fillCache {
ch = r.cache.Get(bh.offset, func() (size int, value cache.Value) {
var b *filterBlock
b, err = r.readFilterBlock(bh)
if err != nil {
return 0, nil
}
return cap(b.data), b
})
} else {
ch = r.cache.Get(bh.offset, nil)
}
if ch != nil {
b, ok := ch.Value().(*filterBlock)
if !ok {
@@ -980,7 +988,7 @@ func (r *Reader) Release() {
// The fi, cache and bpool are optional and can be nil.
//
// The returned table reader instance is goroutine-safe.
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache cache.Namespace, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
func NewReader(f io.ReaderAt, size int64, fi *storage.FileInfo, cache *cache.CacheGetter, bpool *util.BufferPool, o *opt.Options) (*Reader, error) {
if f == nil {
return nil, errors.New("leveldb/table: nil file")
}
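NewReader now takes a *cache.CacheGetter instead of a bare cache namespace: the getter scopes every block lookup to one table's namespace, and a nil fill function (as used when fillCache is false above) turns Get into a pure lookup. A small sketch under those assumptions, reusing only calls shown in this diff:

package main

import (
	"fmt"

	"github.com/syndtr/goleveldb/leveldb/cache"
)

func main() {
	blockCache := cache.NewCache(cache.NewLRU(8 << 20))
	defer blockCache.Close()

	// One getter per table file, namespaced by the file number, as built
	// in tOps.open.
	getter := &cache.CacheGetter{Cache: blockCache, NS: 5}

	// With a fill function the block is loaded and cached on a miss.
	h := getter.Get(1024, func() (size int, value cache.Value) {
		return 16, []byte("decoded block at offset 1024")
	})
	fmt.Printf("%s\n", h.Value().([]byte))
	h.Release()

	// A nil fill function is a lookup only: it returns nil on a miss,
	// which is how the reader honours fillCache == false.
	if h := getter.Get(2048, nil); h == nil {
		fmt.Println("offset 2048 not cached")
	}
}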

View File

@@ -16,7 +16,7 @@ import (
"unicode/utf8"
)
type lowerCaseASCII struct{ transform.NopResetter }
type lowerCaseASCII struct{ NopResetter }
func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
@@ -34,7 +34,7 @@ func (lowerCaseASCII) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, er
var errYouMentionedX = errors.New("you mentioned X")
type dontMentionX struct{ transform.NopResetter }
type dontMentionX struct{ NopResetter }
func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
n := len(src)
@@ -52,7 +52,7 @@ func (dontMentionX) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err
// doublerAtEOF is a strange Transformer that transforms "this" to "tthhiiss",
// but only if atEOF is true.
type doublerAtEOF struct{ transform.NopResetter }
type doublerAtEOF struct{ NopResetter }
func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if !atEOF {
@@ -71,7 +71,7 @@ func (doublerAtEOF) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err
// rleDecode and rleEncode implement a toy run-length encoding: "aabbbbbbbbbb"
// is encoded as "2a10b". The decoded form is assumed to not contain any numbers.
type rleDecode struct{ transform.NopResetter }
type rleDecode struct{ NopResetter }
func (rleDecode) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
loop:
@@ -104,7 +104,7 @@ loop:
}
type rleEncode struct {
transform.NopResetter
NopResetter
// allowStutter means that "xxxxxxxx" can be encoded as "5x3x"
// instead of always as "8x".

NICKS (new file, 44 lines added)
View File

@@ -0,0 +1,44 @@
# This file maps email addresses used in commits to nicks used in the changelog.
AudriusButkevicius <audrius.butkevicius@gmail.com>
Cathryne <cathryne.linenweaver@gmail.com> <Cathryne@users.noreply.github.com>
KayoticSully <kayoticsully@gmail.com>
Nutomic <me@nutomic.com>
Rewt0r <rewt0r@gmx.com> <Rewt0r@users.noreply.github.com>
Vilbrekin <vilbrekin@gmail.com>
Zillode <zillode@zillode.be>
alex2108 <register-github@alex-graf.de>
andrew-d <andrew@du.nham.ca>
asdil12 <dominik@heidler.eu>
bigbear2nd <bigbear2nd@gmail.com>
brendanlong <self@brendanlong.com>
bsidhom <bsidhom@gmail.com>
calmh <jakob@nym.se>
cdata <chris@scriptolo.gy>
ceh <emil@hessman.se>
cqcallaw <enlightened.despot@gmail.com>
facastagnini <federico.castagnini@gmail.com>
filoozoom <philippe@schommers.be>
frioux <frew@afoolishmanifesto.com> <frioux@gmail.com>
gillisig <gilli@vx.is>
jedie <github.com@jensdiemer.de> <git@jensdiemer.de>
jpjp <jamespatterson@operamail.com> <jpjp@users.noreply.github.com>
kozec <kozec@kozec.com>
krozycki <rozycki.karol@gmail.com>
marcindziadus <dziadus.marcin@gmail.com>
marclaporte <marc@marclaporte.com>
mvdan <mvdan@mvdan.cc>
peterhoeg <peter@speartail.com>
philips <brandon@ifup.org>
piobpl <piotrb10@gmail.com>
pluby <phill.luby@newredo.com>
pyfisch <pyfisch@gmail.com>
rumpelsepp <stefan@sevenbyte.org>
qbit <qbit@deftly.net>
seehuhn <voss@seehuhn.de>
snnd <dw@risu.io>
timabell <tim@timwise.co.uk>
tnn2 <tnn@nygren.pp.se>
tojrobinson <tully@tojr.org>
uok <ueomkail@gmail.com> <uok@users.noreply.github.com>
veeti <veeti.paananen@rojekti.fi>

View File

@@ -11,7 +11,7 @@ This is the `syncthing` project. The following are the project goals:
collaborating devices. The protocol should be well defined, unambiguous,
easily understood, free to use, efficient, secure and language neutral.
This is the [Block Exchange
Protocol](https://github.com/syncthing/syncthing/blob/master/protocol/PROTOCOL.md).
Protocol](https://github.com/syncthing/specs/blob/master/BEPv1.md).
2. Provide the reference implementation to demonstrate the usability of
said protocol. This is the `syncthing` utility. It is the hope that
@@ -25,7 +25,8 @@ for incompatible changes.
Getting Started
---------------
Take a look at the [getting started guide](http://discourse.syncthing.net/t/46).
Take a look at the [getting started
guide](https://github.com/syncthing/syncthing/wiki/Getting-Started).
There are a few examples for keeping syncthing running in the background
on your system in [the etc directory](https://github.com/syncthing/syncthing/blob/master/etc).
@@ -37,30 +38,23 @@ Building
--------
Building Syncthing from source is easy, and there's a
[guide](http://discourse.syncthing.net/t/44)
[guide](https://github.com/syncthing/syncthing/wiki/Building).
that describes it for both Unix and Windows.
Signed Releases
---------------
As of v0.7.0 and onwards, git tags and release binaries are GPG signed with
the key BCE524C7 (http://nym.se/gpg.txt). For release binaries, MD5 and
SHA1 checksums are calculated and signed, available in the
md5sum.txt.asc and sha1sum.txt.asc files.
As of v0.10.15 and onwards, git tags and release binaries are GPG signed
with the key D26E6ED000654A3E (see http://syncthing.net/security.html).
For release binaries, MD5 and SHA1 checksums are calculated and signed,
available in the md5sum.txt.asc and sha1sum.txt.asc files.
Documentation
=============
The [syncthing
documentation](http://discourse.syncthing.net/category/documentation) is
on the discourse site.
License
=======
All documentation and protocol specifications are licensed
under the [Creative Commons Attribution 4.0 International
License](http://creativecommons.org/licenses/by/4.0/).
documentation](https://github.com/syncthing/syncthing/wiki/) is on the
Github wiki.
All code is licensed under the
[GPL](https://github.com/syncthing/syncthing/blob/master/LICENSE), v3 or

View File

@@ -22,6 +22,7 @@ import (
"archive/zip"
"bytes"
"compress/gzip"
"crypto/md5"
"flag"
"fmt"
"io"
@@ -72,7 +73,7 @@ func main() {
flag.Parse()
switch goarch {
case "386", "amd64", "arm", "armv5", "armv6", "armv7":
case "386", "amd64", "arm":
break
default:
log.Printf("Unknown goarch %q; proceed with caution!", goarch)
@@ -190,7 +191,12 @@ func install(pkg string, tags []string) {
}
func build(pkg string, tags []string) {
rmr("syncthing", "syncthing.exe")
binary := "syncthing"
if goos == "windows" {
binary += ".exe"
}
rmr(binary, binary+".md5")
args := []string{"build", "-ldflags", ldflags()}
if len(tags) > 0 {
args = append(args, "-tags", strings.Join(tags, ","))
@@ -201,6 +207,13 @@ func build(pkg string, tags []string) {
args = append(args, pkg)
setBuildEnv()
runPrint("go", args...)
// Create an md5 checksum of the binary, to be included in the archive for
// automatic upgrades.
err := md5File(binary)
if err != nil {
log.Fatal(err)
}
}
func buildTar() {
@@ -213,14 +226,20 @@ func buildTar() {
build("./cmd/syncthing", tags)
filename := name + ".tar.gz"
files := []archiveFile{
{"README.md", name + "/README.txt"},
{"LICENSE", name + "/LICENSE.txt"},
{"AUTHORS", name + "/AUTHORS.txt"},
{"syncthing", name + "/syncthing"},
{src: "README.md", dst: name + "/README.txt"},
{src: "LICENSE", dst: name + "/LICENSE.txt"},
{src: "AUTHORS", dst: name + "/AUTHORS.txt"},
{src: "syncthing", dst: name + "/syncthing"},
{src: "syncthing.md5", dst: name + "/syncthing.md5"},
}
for _, file := range listFiles("etc") {
files = append(files, archiveFile{file, name + "/" + file})
files = append(files, archiveFile{src: file, dst: name + "/" + file})
}
for _, file := range listFiles("extra") {
files = append(files, archiveFile{src: file, dst: name + "/" + filepath.Base(file)})
}
tarGz(filename, files)
log.Println(filename)
}
@@ -235,11 +254,17 @@ func buildZip() {
build("./cmd/syncthing", tags)
filename := name + ".zip"
files := []archiveFile{
{"README.md", name + "/README.txt"},
{"LICENSE", name + "/LICENSE.txt"},
{"AUTHORS", name + "/AUTHORS.txt"},
{"syncthing.exe", name + "/syncthing.exe"},
{src: "README.md", dst: name + "/README.txt"},
{src: "LICENSE", dst: name + "/LICENSE.txt"},
{src: "AUTHORS", dst: name + "/AUTHORS.txt"},
{src: "syncthing.exe", dst: name + "/syncthing.exe"},
{src: "syncthing.exe.md5", dst: name + "/syncthing.exe.md5"},
}
for _, file := range listFiles("extra") {
files = append(files, archiveFile{src: file, dst: name + "/" + filepath.Base(file)})
}
zipFile(filename, files)
log.Println(filename)
}
@@ -260,15 +285,7 @@ func listFiles(dir string) []string {
func setBuildEnv() {
os.Setenv("GOOS", goos)
if strings.HasPrefix(goarch, "armv") {
os.Setenv("GOARCH", "arm")
os.Setenv("GOARM", goarch[4:])
} else {
os.Setenv("GOARCH", goarch)
}
if goarch == "386" {
os.Setenv("GO386", "387")
}
os.Setenv("GOARCH", goarch)
wd, err := os.Getwd()
if err != nil {
log.Println("Warning: can't determine current dir:", err)
@@ -284,7 +301,7 @@ func assets() {
}
func xdr() {
runPrint("go", "generate", "./internal/discover", "./internal/files", "./internal/protocol")
runPrint("go", "generate", "./internal/discover", "./internal/db")
}
func translate() {
@@ -554,3 +571,29 @@ func zipFile(out string, files []archiveFile) {
log.Fatal(err)
}
}
func md5File(file string) error {
fd, err := os.Open(file)
if err != nil {
return err
}
defer fd.Close()
h := md5.New()
_, err = io.Copy(h, fd)
if err != nil {
return err
}
out, err := os.Create(file + ".md5")
if err != nil {
return err
}
_, err = fmt.Fprintf(out, "%x\n", h.Sum(nil))
if err != nil {
return err
}
return out.Close()
}
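md5File above writes the hex digest plus a newline to <binary>.md5, which the tar and zip archives now include for automatic upgrades. The matching verification step could look like the following sketch; verifyMD5 is a hypothetical helper, not part of the build script:

package main

import (
	"bytes"
	"crypto/md5"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
)

// verifyMD5 recomputes the checksum of a binary and compares it with the hex
// digest stored in the accompanying .md5 file written by md5File.
// (Hypothetical helper for illustration only.)
func verifyMD5(binary string) error {
	fd, err := os.Open(binary)
	if err != nil {
		return err
	}
	defer fd.Close()

	h := md5.New()
	if _, err := io.Copy(h, fd); err != nil {
		return err
	}

	want, err := ioutil.ReadFile(binary + ".md5")
	if err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != string(bytes.TrimSpace(want)) {
		return fmt.Errorf("%s: checksum mismatch", binary)
	}
	return nil
}

func main() {
	if err := verifyMD5("syncthing"); err != nil {
		log.Fatal(err)
	}
	log.Println("checksum OK")
}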

View File

@@ -2,7 +2,7 @@
set -euo pipefail
IFS=$'\n\t'
DOCKERIMGV=1.4-1
STTRACE=${STTRACE:-}
case "${1:-default}" in
default)
@@ -52,9 +52,7 @@ case "${1:-default}" in
all)
go run build.go -goos linux -goarch amd64 tar
go run build.go -goos linux -goarch 386 tar
go run build.go -goos linux -goarch armv5 tar
go run build.go -goos linux -goarch armv6 tar
go run build.go -goos linux -goarch armv7 tar
go run build.go -goos linux -goarch arm tar
go run build.go -goos freebsd -goarch amd64 tar
go run build.go -goos freebsd -goarch 386 tar
@@ -104,31 +102,31 @@ case "${1:-default}" in
fi
;;
docker-init)
docker build -q -t syncthing/build:$DOCKERIMGV docker >/dev/null
;;
docker-all)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
syncthing/build:$DOCKERIMGV \
sh -c './build.sh clean && ./build.sh all && STTRACE=all ./build.sh test-cov'
-e "STTRACE=$STTRACE" \
syncthing/build:latest \
sh -c './build.sh clean \
&& go vet ./cmd/... ./internal/... \
&& ( golint ./cmd/... ; golint ./internal/... ) | egrep -v "comment on exported|should have comment" \
&& ./build.sh all \
&& STTRACE=all ./build.sh test-cov'
;;
docker-test)
docker run --rm -h syncthing-builder -u $(id -u) -t \
-v $(pwd):/tmp/syncthing \
syncthing/build:$DOCKERIMGV \
sh -euxc 'mkdir -p /go/src/github.com/syncthing \
&& cd /go/src/github.com/syncthing \
&& cp -r /tmp/syncthing syncthing \
&& cd syncthing \
&& ./build.sh clean \
-v $(pwd):/go/src/github.com/syncthing/syncthing \
-w /go/src/github.com/syncthing/syncthing \
-e "STTRACE=$STTRACE" \
syncthing/build:latest \
sh -euxc './build.sh clean \
&& go run build.go -race \
&& export GOPATH=$(pwd)/Godeps/_workspace:$GOPATH \
&& cd test \
&& go test -tags integration -v -timeout 60m -short'
&& go test -tags integration -v -timeout 60m -short \
&& git clean -fxd .'
;;
*)

changelog.sh (new executable file, 9 lines added)
View File

@@ -0,0 +1,9 @@
#!/bin/bash
since="$1"
if [[ -z $since ]] ; then
since="$(git describe --abbrev=0 HEAD^).."
fi
git log --reverse --pretty=format:'* %s, @%aN)' "$since" | egrep 'fixes #\d|ref #\d' | sed 's/),/,/' | sed 's/fixes #/#/g' | sed 's/ref #/#/g'

View File

@@ -1,7 +1,7 @@
#!/bin/bash
missing-authors() {
for email in $(git log --format=%ae master | sort | uniq) ; do
for email in $(git log --format=%ae HEAD | sort | uniq) ; do
grep -q "$email" AUTHORS || echo $email
done
}
@@ -13,7 +13,11 @@ no-docs-typos() {
grep -v f2459ef3319b2f060dbcdacd0c35a1788a94b8bd |\
grep -v b61f418bf2d1f7d5a9d7088a20a2a448e5e66801 |\
grep -v f0621207e3953711f9ab86d99724f1d0faac45b1 |\
grep -v f1120d7aa936c0658429edef0037792520b46334
grep -v f1120d7aa936c0658429edef0037792520b46334 |\
grep -v a9339d0627fff439879d157c75077f02c9fac61b |\
grep -v 254c63763a3ad42fd82259f1767db526cff94a14 |\
grep -v 4b76ec40c07078beaa2c5e250ed7d9bd6276a718 |\
grep -v ffc39dfbcb34eacc3ea12327a02b6e7741a2c207
}
print-missing-authors() {
@@ -23,22 +27,24 @@ print-missing-authors() {
}
print-missing-copyright() {
find . -name \*.go | xargs grep -L 'Copyright (C)' | grep -v Godeps
find . -name \*.go | xargs egrep -L 'Copyright \(C\)|automatically generated' | grep -v Godeps | grep -v internal/auto/
}
print-line-blame() {
for f in $(find . -name \*.go | grep -v Godep) gui/app.js gui/index.html ; do
git blame --line-porcelain $f | grep author-mail
done | sort | uniq -c | sort -n
}
echo Author emails missing in AUTHORS file:
print-missing-authors
echo
authors=$(print-missing-authors)
if [[ ! -z $authors ]] ; then
echo '***'
echo Author emails not in AUTHORS:
echo $authors
echo '***'
exit 1
fi
echo Files missing copyright notice:
print-missing-copyright
echo
echo Blame lines per author:
print-line-blame
copy=$(print-missing-copyright)
if [[ ! -z $copy ]] ; then
echo ***
echo Files missing copyright notice:
echo $copy
echo ***
exit 1
fi

View File

@@ -45,7 +45,7 @@ func Assets() map[string][]byte {
var gr *gzip.Reader
{{range $asset := .assets}}
bs, _ = base64.StdEncoding.DecodeString("{{$asset.Data}}")
gr, _ = gzip.NewReader(bytes.NewBuffer(bs))
gr, _ = gzip.NewReader(bytes.NewReader(bs))
bs, _ = ioutil.ReadAll(gr)
assets["{{$asset.Name}}"] = bs
{{end}}

cmd/stcompdirs/main.go (new file, 177 lines added)
View File

@@ -0,0 +1,177 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/md5"
"errors"
"flag"
"fmt"
"io"
"log"
"os"
"path/filepath"
"github.com/syncthing/syncthing/internal/symlinks"
)
func main() {
flag.Parse()
log.Println(compareDirectories(flag.Args()...))
}
// Compare a number of directories. Returns nil if the contents are identical,
// otherwise an error describing the first found difference.
func compareDirectories(dirs ...string) error {
chans := make([]chan fileInfo, len(dirs))
for i := range chans {
chans[i] = make(chan fileInfo)
}
errcs := make([]chan error, len(dirs))
abort := make(chan struct{})
for i := range dirs {
errcs[i] = startWalker(dirs[i], chans[i], abort)
}
res := make([]fileInfo, len(dirs))
for {
numDone := 0
for i := range chans {
fi, ok := <-chans[i]
if !ok {
err, hasError := <-errcs[i]
if hasError {
close(abort)
return err
}
numDone++
}
res[i] = fi
}
for i := 1; i < len(res); i++ {
if res[i] != res[0] {
close(abort)
if res[i].name < res[0].name {
return fmt.Errorf("%s missing %v (present in %s)", dirs[0], res[i], dirs[i])
} else if res[i].name > res[0].name {
return fmt.Errorf("%s missing %v (present in %s)", dirs[i], res[0], dirs[0])
}
return fmt.Errorf("Mismatch; %v (%s) != %v (%s)", res[i], dirs[i], res[0], dirs[0])
}
}
if numDone == len(dirs) {
return nil
}
}
}
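compareDirectories walks all directories in parallel and compares the resulting streams of fileInfo records in lockstep, failing on the first name, mode, mtime or hash mismatch. A hypothetical test sketch, assuming it lives next to main.go in the same package; modification times are pinned with Chtimes because fileInfo compares them:

package main

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestCompareIdenticalDirs(t *testing.T) {
	when := time.Unix(1423000000, 0)

	mkdir := func() string {
		dir, err := ioutil.TempDir("", "stcompdirs")
		if err != nil {
			t.Fatal(err)
		}
		file := filepath.Join(dir, "a.txt")
		if err := ioutil.WriteFile(file, []byte("hello"), 0644); err != nil {
			t.Fatal(err)
		}
		if err := os.Chtimes(file, when, when); err != nil {
			t.Fatal(err)
		}
		return dir
	}

	dir1, dir2 := mkdir(), mkdir()
	defer os.RemoveAll(dir1)
	defer os.RemoveAll(dir2)

	// Identical names, modes, mtimes and contents: expect no difference.
	if err := compareDirectories(dir1, dir2); err != nil {
		t.Errorf("expected identical directories, got: %v", err)
	}
}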
type fileInfo struct {
name string
mode os.FileMode
mod int64
hash [16]byte
}
func (f fileInfo) String() string {
return fmt.Sprintf("%s %04o %d %x", f.name, f.mode, f.mod, f.hash)
}
func startWalker(dir string, res chan<- fileInfo, abort <-chan struct{}) chan error {
walker := func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
rn, _ := filepath.Rel(dir, path)
if rn == "." || rn == ".stfolder" {
return nil
}
if rn == ".stversions" {
return filepath.SkipDir
}
var f fileInfo
if info.Mode()&os.ModeSymlink != 0 {
f = fileInfo{
name: rn,
mode: os.ModeSymlink,
}
tgt, _, err := symlinks.Read(path)
if err != nil {
return err
}
h := md5.New()
h.Write([]byte(tgt))
hash := h.Sum(nil)
copy(f.hash[:], hash)
} else if info.IsDir() {
f = fileInfo{
name: rn,
mode: info.Mode(),
// hash and modtime zero for directories
}
} else {
f = fileInfo{
name: rn,
mode: info.Mode(),
mod: info.ModTime().Unix(),
}
sum, err := md5file(path)
if err != nil {
return err
}
f.hash = sum
}
select {
case res <- f:
return nil
case <-abort:
return errors.New("abort")
}
}
errc := make(chan error)
go func() {
err := filepath.Walk(dir, walker)
close(res)
if err != nil {
errc <- err
}
close(errc)
}()
return errc
}
func md5file(fname string) (hash [16]byte, err error) {
f, err := os.Open(fname)
if err != nil {
return
}
defer f.Close()
h := md5.New()
io.Copy(h, f)
hb := h.Sum(nil)
copy(hash[:], hb)
return
}

View File

@@ -21,7 +21,7 @@ import (
"os"
"path/filepath"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/scanner"
)

View File

@@ -13,44 +13,40 @@
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
// +build ignore
package main
import (
"encoding/json"
"flag"
"fmt"
"log"
"os"
"strconv"
"strings"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/discover"
)
func main() {
log.SetFlags(0)
log.SetOutput(os.Stdout)
var server string
flag.StringVar(&server, "server", "udp4://announce.syncthing.net:22026", "Announce server")
flag.Parse()
path := strings.Split(flag.Arg(0), "/")
var obj map[string]interface{}
dec := json.NewDecoder(os.Stdin)
dec.UseNumber()
dec.Decode(&obj)
var v interface{} = obj
for _, p := range path {
switch tv := v.(type) {
case map[string]interface{}:
v = tv[p]
case []interface{}:
i, err := strconv.Atoi(p)
if err != nil {
log.Fatal(err)
}
v = tv[i]
default:
return // Silence is golden
}
if len(flag.Args()) != 1 || server == "" {
log.Printf("Usage: %s [-server=\"udp4://announce.syncthing.net:22026\"] <device>", os.Args[0])
os.Exit(64)
}
id, err := protocol.DeviceIDFromString(flag.Args()[0])
if err != nil {
log.Println(err)
os.Exit(1)
}
discoverer := discover.NewDiscoverer(protocol.LocalDeviceID, nil)
discoverer.StartGlobal([]string{server}, 1)
for _, addr := range discoverer.Lookup(id) {
log.Println(addr)
}
fmt.Println(v)
}

View File

@@ -21,8 +21,8 @@ import (
"log"
"os"
"github.com/syncthing/syncthing/internal/files"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/db"
"github.com/syndtr/goleveldb/leveldb"
)
@@ -34,17 +34,17 @@ func main() {
device := flag.String("device", "", "Device ID (blank for global)")
flag.Parse()
db, err := leveldb.OpenFile(flag.Arg(0), nil)
ldb, err := leveldb.OpenFile(flag.Arg(0), nil)
if err != nil {
log.Fatal(err)
}
fs := files.NewSet(*folder, db)
fs := db.NewFileSet(*folder, ldb)
if *device == "" {
log.Printf("*** Global index for folder %q", *folder)
fs.WithGlobalTruncated(func(fi protocol.FileIntf) bool {
f := fi.(protocol.FileInfoTruncated)
fs.WithGlobalTruncated(func(fi db.FileIntf) bool {
f := fi.(db.FileInfoTruncated)
fmt.Println(f)
fmt.Println("\t", fs.Availability(f.Name))
return true
@@ -55,8 +55,8 @@ func main() {
log.Fatal(err)
}
log.Printf("*** Have index for folder %q device %q", *folder, n)
fs.WithHaveTruncated(n, func(fi protocol.FileIntf) bool {
f := fi.(protocol.FileInfoTruncated)
fs.WithHaveTruncated(n, func(fi db.FileIntf) bool {
f := fi.(db.FileInfoTruncated)
fmt.Println(f)
return true
})

View File

@@ -0,0 +1,59 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"fmt"
"os"
"runtime"
"runtime/pprof"
"syscall"
"time"
)
func init() {
if innerProcess && os.Getenv("STBLOCKPROFILE") != "" {
profiler := pprof.Lookup("block")
if profiler == nil {
panic("Couldn't find block profiler")
}
l.Debugln("Starting block profiling")
go saveBlockingProfiles(profiler)
}
}
func saveBlockingProfiles(profiler *pprof.Profile) {
runtime.SetBlockProfileRate(1)
t0 := time.Now()
for t := range time.NewTicker(20 * time.Second).C {
startms := int(t.Sub(t0).Seconds() * 1000)
fd, err := os.Create(fmt.Sprintf("block-%05d-%07d.pprof", syscall.Getpid(), startms))
if err != nil {
panic(err)
}
err = profiler.WriteTo(fd, 0)
if err != nil {
panic(err)
}
err = fd.Close()
if err != nil {
panic(err)
}
}
}

View File

@@ -32,13 +32,14 @@ import (
"time"
"github.com/calmh/logger"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/auto"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/discover"
"github.com/syncthing/syncthing/internal/events"
"github.com/syncthing/syncthing/internal/model"
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/syncthing/internal/upgrade"
"github.com/vitrun/qart/qr"
"golang.org/x/crypto/bcrypt"
@@ -70,7 +71,16 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
if err != nil {
l.Infoln("Loading HTTPS certificate:", err)
l.Infoln("Creating new HTTPS certificate")
newCertificate(confDir, "https-")
// When generating the HTTPS certificate, use the system host name by
// default. If that isn't available, use the "syncthing" default.
var name string
name, err = os.Hostname()
if err != nil {
name = tlsDefaultCommonName
}
newCertificate(confDir, "https-", name)
cert, err = loadCert(confDir, "https-")
}
if err != nil {
@@ -78,7 +88,20 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
}
tlsCfg := &tls.Config{
Certificates: []tls.Certificate{cert},
ServerName: "syncthing",
MinVersion: tls.VersionTLS10, // No SSLv3
CipherSuites: []uint16{
// No RC4
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_RSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
},
}
rawListener, err := net.Listen("tcp", cfg.Address)
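The tls.Config above drops SSLv3 and the RC4 suites from the GUI listener; any client speaking at least TLS 1.0 with one of the listed suites can still reach the REST API. A hedged client sketch: the listen address is an assumption (the GUI default of the era), and certificate verification is skipped only because the certificate is self-signed:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Self-signed GUI certificate, so verification is skipped
			// here; real deployments should pin or trust it instead.
			TLSClientConfig: &tls.Config{
				MinVersion:         tls.VersionTLS10,
				InsecureSkipVerify: true,
			},
		},
	}
	// Assumed address; not taken from this change set.
	resp, err := client.Get("https://127.0.0.1:8080/rest/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Printf("negotiated TLS version: %#04x\n", resp.TLS.Version)
}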
@@ -108,6 +131,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
getRestMux.HandleFunc("/rest/upgrade", restGetUpgrade)
getRestMux.HandleFunc("/rest/version", restGetVersion)
getRestMux.HandleFunc("/rest/stats/device", withModel(m, restGetDeviceStats))
getRestMux.HandleFunc("/rest/stats/folder", withModel(m, restGetFolderStats))
// Debug endpoints, not for general use
getRestMux.HandleFunc("/rest/debug/peerCompletion", withModel(m, restGetPeerCompletion))
@@ -126,6 +150,7 @@ func startGUI(cfg config.GUIConfiguration, assetDir string, m *model.Model) erro
postRestMux.HandleFunc("/rest/shutdown", restPostShutdown)
postRestMux.HandleFunc("/rest/upgrade", restPostUpgrade)
postRestMux.HandleFunc("/rest/scan", withModel(m, restPostScan))
postRestMux.HandleFunc("/rest/bump", withModel(m, restPostBump))
// A handler that splits requests between the two above and disables
// caching
@@ -277,6 +302,15 @@ func restGetModel(m *model.Model, w http.ResponseWriter, r *http.Request) {
res["state"], res["stateChanged"] = m.State(folder)
res["version"] = m.CurrentLocalVersion(folder) + m.RemoteLocalVersion(folder)
ignorePatterns, _, _ := m.GetIgnores(folder)
res["ignorePatterns"] = false
for _, line := range ignorePatterns {
if len(line) > 0 && !strings.HasPrefix(line, "//") {
res["ignorePatterns"] = true
break
}
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
@@ -291,19 +325,12 @@ func restGetNeed(m *model.Model, w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var folder = qs.Get("folder")
files := m.NeedFolderFilesLimited(folder, 100) // max 100 files
progress, queued, rest := m.NeedFolderFiles(folder, 100)
// Convert the struct to a more loose structure, and inject the size.
output := make([]map[string]interface{}, 0, len(files))
for _, file := range files {
output = append(output, map[string]interface{}{
"Name": file.Name,
"Flags": file.Flags,
"Modified": file.Modified,
"Version": file.Version,
"LocalVersion": file.LocalVersion,
"NumBlocks": file.NumBlocks,
"Size": protocol.BlocksToSize(file.NumBlocks),
})
output := map[string][]map[string]interface{}{
"progress": toNeedSlice(progress),
"queued": toNeedSlice(queued),
"rest": toNeedSlice(rest),
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
@@ -322,6 +349,12 @@ func restGetDeviceStats(m *model.Model, w http.ResponseWriter, r *http.Request)
json.NewEncoder(w).Encode(res)
}
func restGetFolderStats(m *model.Model, w http.ResponseWriter, r *http.Request) {
var res = m.FolderStatistics()
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
func restGetConfig(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(cfg.Raw())
@@ -334,44 +367,44 @@ func restPostConfig(m *model.Model, w http.ResponseWriter, r *http.Request) {
l.Warnln("decoding posted config:", err)
http.Error(w, err.Error(), 500)
return
} else {
if newCfg.GUI.Password != cfg.GUI().Password {
if newCfg.GUI.Password != "" {
hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
if err != nil {
l.Warnln("bcrypting password:", err)
http.Error(w, err.Error(), 500)
return
} else {
newCfg.GUI.Password = string(hash)
}
}
}
// Start or stop usage reporting as appropriate
if curAcc := cfg.Options().URAccepted; newCfg.Options.URAccepted > curAcc {
// UR was enabled
newCfg.Options.URAccepted = usageReportVersion
newCfg.Options.URUniqueID = randomString(6)
err := sendUsageReport(m)
if err != nil {
l.Infoln("Usage report:", err)
}
go usageReportingLoop(m)
} else if newCfg.Options.URAccepted < curAcc {
// UR was disabled
newCfg.Options.URAccepted = -1
newCfg.Options.URUniqueID = ""
stopUsageReporting()
}
// Activate and save
configInSync = !config.ChangeRequiresRestart(cfg.Raw(), newCfg)
cfg.Replace(newCfg)
cfg.Save()
}
if newCfg.GUI.Password != cfg.GUI().Password {
if newCfg.GUI.Password != "" {
hash, err := bcrypt.GenerateFromPassword([]byte(newCfg.GUI.Password), 0)
if err != nil {
l.Warnln("bcrypting password:", err)
http.Error(w, err.Error(), 500)
return
}
newCfg.GUI.Password = string(hash)
}
}
// Start or stop usage reporting as appropriate
if curAcc := cfg.Options().URAccepted; newCfg.Options.URAccepted > curAcc {
// UR was enabled
newCfg.Options.URAccepted = usageReportVersion
newCfg.Options.URUniqueID = randomString(8)
err := sendUsageReport(m)
if err != nil {
l.Infoln("Usage report:", err)
}
go usageReportingLoop(m)
} else if newCfg.Options.URAccepted < curAcc {
// UR was disabled
newCfg.Options.URAccepted = -1
newCfg.Options.URUniqueID = ""
stopUsageReporting()
}
// Activate and save
configInSync = !config.ChangeRequiresRestart(cfg.Raw(), newCfg)
cfg.Replace(newCfg)
cfg.Save()
}
func restGetConfigInSync(w http.ResponseWriter, r *http.Request) {
@@ -425,6 +458,7 @@ func restGetSystem(w http.ResponseWriter, r *http.Request) {
}
cpuUsageLock.RUnlock()
res["cpuPercent"] = cpusum / 10
res["pathSeparator"] = string(filepath.Separator)
w.Header().Set("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
@@ -548,6 +582,10 @@ func restGetEvents(w http.ResponseWriter, r *http.Request) {
}
func restGetUpgrade(w http.ResponseWriter, r *http.Request) {
if noUpgrade {
http.Error(w, upgrade.ErrUpgradeUnsupported.Error(), 500)
return
}
rel, err := upgrade.LatestRelease(strings.Contains(Version, "-beta"))
if err != nil {
http.Error(w, err.Error(), 500)
@@ -598,7 +636,7 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
}
if upgrade.CompareVersions(rel.Tag, Version) == 1 {
err = upgrade.UpgradeTo(rel)
err = upgrade.To(rel)
if err != nil {
l.Warnln("upgrading:", err)
http.Error(w, err.Error(), 500)
@@ -614,13 +652,29 @@ func restPostUpgrade(w http.ResponseWriter, r *http.Request) {
func restPostScan(m *model.Model, w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
folder := qs.Get("folder")
sub := qs.Get("sub")
err := m.ScanFolderSub(folder, sub)
if err != nil {
http.Error(w, err.Error(), 500)
if folder != "" {
sub := qs.Get("sub")
err := m.ScanFolderSub(folder, sub)
if err != nil {
http.Error(w, err.Error(), 500)
}
} else {
errors := m.ScanFolders()
if len(errors) > 0 {
http.Error(w, "Error scanning folders", 500)
json.NewEncoder(w).Encode(errors)
}
}
}
func restPostBump(m *model.Model, w http.ResponseWriter, r *http.Request) {
qs := r.URL.Query()
folder := qs.Get("folder")
file := qs.Get("file")
m.BringToFront(folder, file)
restGetNeed(m, w, r)
}
func getQR(w http.ResponseWriter, r *http.Request) {
var qs = r.URL.Query()
var text = qs.Get("text")
@@ -746,3 +800,19 @@ func mimeTypeForFile(file string) string {
return mime.TypeByExtension(ext)
}
}
func toNeedSlice(fs []db.FileInfoTruncated) []map[string]interface{} {
output := make([]map[string]interface{}, len(fs))
for i, file := range fs {
output[i] = map[string]interface{}{
"Name": file.Name,
"Flags": file.Flags,
"Modified": file.Modified,
"Version": file.Version,
"LocalVersion": file.LocalVersion,
"NumBlocks": file.NumBlocks,
"Size": db.BlocksToSize(int(file.NumBlocks)),
}
}
return output
}
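restGetNeed now returns three lists instead of one flat array, keyed progress, queued and rest, each item shaped by toNeedSlice above. A small decoding sketch; the payload is made up for illustration and only the keys and field names come from this diff:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Shape produced by restGetNeed: items carry Name, Flags, Modified,
	// Version, LocalVersion, NumBlocks and Size, per toNeedSlice.
	body := `{
	  "progress": [{"Name": "big.iso", "NumBlocks": 1200, "Size": 157286400}],
	  "queued":   [{"Name": "next.txt"}],
	  "rest":     []
	}`

	var need map[string][]map[string]interface{}
	if err := json.NewDecoder(strings.NewReader(body)).Decode(&need); err != nil {
		log.Fatal(err)
	}
	for _, section := range []string{"progress", "queued", "rest"} {
		fmt.Printf("%-8s %d item(s)\n", section, len(need[section]))
	}
}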

View File

@@ -17,8 +17,6 @@ package main
import (
"bufio"
"crypto/rand"
"encoding/base64"
"fmt"
"net/http"
"os"
@@ -88,7 +86,7 @@ func validCsrfToken(token string) bool {
}
func newCsrfToken() string {
token := randomString(30)
token := randomString(32)
csrfMut.Lock()
csrfTokens = append(csrfTokens, token)
@@ -140,13 +138,3 @@ func loadCsrfTokens() {
csrfTokens = append(csrfTokens, s.Text())
}
}
func randomString(len int) string {
bs := make([]byte, len)
_, err := rand.Reader.Read(bs)
if err != nil {
l.Fatalln(err)
}
return base64.StdEncoding.EncodeToString(bs)
}

View File

@@ -20,27 +20,32 @@ import (
"os"
"runtime"
"runtime/pprof"
"strconv"
"syscall"
"time"
)
func init() {
if innerProcess && os.Getenv("STHEAPPROFILE") != "" {
rate := 1
if i, err := strconv.Atoi(os.Getenv("STHEAPPROFILE")); err == nil {
rate = i
}
l.Debugln("Starting heap profiling")
go saveHeapProfiles()
go saveHeapProfiles(rate)
}
}
func saveHeapProfiles() {
runtime.MemProfileRate = 1
func saveHeapProfiles(rate int) {
runtime.MemProfileRate = rate
var memstats, prevMemstats runtime.MemStats
t0 := time.Now()
for t := range time.NewTicker(250 * time.Millisecond).C {
startms := int(t.Sub(t0).Seconds() * 1000)
name := fmt.Sprintf("heap-%05d.pprof", syscall.Getpid())
for {
runtime.ReadMemStats(&memstats)
if memstats.HeapInuse > prevMemstats.HeapInuse {
fd, err := os.Create(fmt.Sprintf("heap-%05d-%07d.pprof", syscall.Getpid(), startms))
fd, err := os.Create(name + ".tmp")
if err != nil {
panic(err)
}
@@ -52,7 +57,16 @@ func saveHeapProfiles() {
if err != nil {
panic(err)
}
_ = os.Remove(name) // Error deliberately ignored
err = os.Rename(name+".tmp", name)
if err != nil {
panic(err)
}
prevMemstats = memstats
}
time.Sleep(250 * time.Millisecond)
}
}

View File

@@ -21,7 +21,6 @@ import (
"fmt"
"io"
"log"
"math/rand"
"net"
"net/http"
_ "net/http/pprof"
@@ -38,13 +37,13 @@ import (
"github.com/calmh/logger"
"github.com/juju/ratelimit"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/discover"
"github.com/syncthing/syncthing/internal/events"
"github.com/syncthing/syncthing/internal/files"
"github.com/syncthing/syncthing/internal/model"
"github.com/syncthing/syncthing/internal/osutil"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/syncthing/internal/symlinks"
"github.com/syncthing/syncthing/internal/upgrade"
"github.com/syncthing/syncthing/internal/upnp"
@@ -103,10 +102,10 @@ func init() {
}
var (
cfg *config.ConfigWrapper
cfg *config.Wrapper
myID protocol.DeviceID
confDir string
logFlags int = log.Ltime
logFlags = log.Ltime
writeRateLimit *ratelimit.Bucket
readRateLimit *ratelimit.Bucket
stop = make(chan int)
@@ -143,50 +142,50 @@ Development Settings
The following environment variables modify syncthing's behavior in ways that
are mostly useful for developers. Use with care.
STGUIASSETS Directory to load GUI assets from. Overrides compiled in assets.
STGUIASSETS Directory to load GUI assets from. Overrides compiled in assets.
STTRACE A comma separated string of facilities to trace. The valid
facility strings are:
STTRACE A comma separated string of facilities to trace. The valid
facility strings are:
- "beacon" (the beacon package)
- "discover" (the discover package)
- "events" (the events package)
- "files" (the files package)
- "net" (the main package; connections & network messages)
- "model" (the model package)
- "scanner" (the scanner package)
- "stats" (the stats package)
- "upnp" (the upnp package)
- "xdr" (the xdr package)
- "all" (all of the above)
- "beacon" (the beacon package)
- "discover" (the discover package)
- "events" (the events package)
- "files" (the files package)
- "net" (the main package; connections & network messages)
- "model" (the model package)
- "scanner" (the scanner package)
- "stats" (the stats package)
- "upnp" (the upnp package)
- "xdr" (the xdr package)
- "all" (all of the above)
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start the
profiler with HTTP access.
STPROFILER Set to a listen address such as "127.0.0.1:9090" to start the
profiler with HTTP access.
STCPUPROFILE Write a CPU profile to cpu-$pid.pprof on exit.
STCPUPROFILE Write a CPU profile to cpu-$pid.pprof on exit.
STHEAPPROFILE Write heap profiles to heap-$pid-$timestamp.pprof each time
heap usage increases.
STHEAPPROFILE Write heap profiles to heap-$pid-$timestamp.pprof each time
heap usage increases.
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
STBLOCKPROFILE Write block profiles to block-$pid-$timestamp.pprof every 20
seconds.
STNOUPGRADE Disable automatic upgrades.
STPERFSTATS Write running performance statistics to perf-$pid.csv. Not
supported on Windows.
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
available CPU cores.`
STNOUPGRADE Disable automatic upgrades.
GOMAXPROCS Set the maximum number of CPU cores to use. Defaults to all
available CPU cores.`
)
func init() {
rand.Seed(time.Now().UnixNano())
}
// Command line and environment options
var (
reset bool
showVersion bool
doUpgrade bool
doUpgradeCheck bool
upgradeTo string
noBrowser bool
noConsole bool
generateDir string
@@ -232,6 +231,7 @@ func main() {
flag.BoolVar(&doUpgrade, "upgrade", false, "Perform upgrade")
flag.BoolVar(&doUpgradeCheck, "upgrade-check", false, "Check for available upgrade")
flag.BoolVar(&showVersion, "version", false, "Show version")
flag.StringVar(&upgradeTo, "upgrade-to", upgradeTo, "Force upgrade directly from specified URL")
flag.Usage = usageFor(flag.CommandLine, usage, fmt.Sprintf(extraUsage, defConfDir))
flag.Parse()
@@ -266,19 +266,22 @@ func main() {
}
info, err := os.Stat(dir)
if err != nil {
l.Fatalln("generate:", err)
}
if !info.IsDir() {
if err == nil && !info.IsDir() {
l.Fatalln(dir, "is not a directory")
}
if err != nil && os.IsNotExist(err) {
err = os.MkdirAll(dir, 0700)
if err != nil {
l.Fatalln("generate:", err)
}
}
cert, err := loadCert(dir, "")
if err == nil {
l.Warnln("Key exists; will not overwrite.")
l.Infoln("Device ID:", protocol.NewDeviceID(cert.Certificate[0]))
} else {
newCertificate(dir, "")
newCertificate(dir, "", tlsDefaultCommonName)
cert, err = loadCert(dir, "")
myID = protocol.NewDeviceID(cert.Certificate[0])
if err != nil {
@@ -317,6 +320,15 @@ func main() {
// Ensure that our home directory exists.
ensureDir(confDir, 0700)
if upgradeTo != "" {
err := upgrade.ToURL(upgradeTo)
if err != nil {
l.Fatalln("Upgrade:", err) // exits 1
}
l.Okln("Upgraded from", upgradeTo)
return
}
if doUpgrade || doUpgradeCheck {
rel, err := upgrade.LatestRelease(IsBeta)
if err != nil {
@@ -332,20 +344,19 @@ func main() {
if doUpgrade {
// Use leveldb database locks to protect against concurrent upgrades
_, err = leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{CachedOpenFiles: 100})
_, err = leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{OpenFilesCacheCapacity: 100})
if err != nil {
l.Fatalln("Cannot upgrade, database seems to be locked. Is another copy of Syncthing already running?")
}
err = upgrade.UpgradeTo(rel)
err = upgrade.To(rel)
if err != nil {
l.Fatalln("Upgrade:", err) // exits 1
}
l.Okf("Upgraded to %q", rel.Tag)
return
} else {
return
}
return
}
if reset {
@@ -376,13 +387,17 @@ func syncthingMain() {
// Ensure that we have a certificate and key.
cert, err = loadCert(confDir, "")
if err != nil {
newCertificate(confDir, "")
newCertificate(confDir, "", tlsDefaultCommonName)
cert, err = loadCert(confDir, "")
if err != nil {
l.Fatalln("load cert:", err)
}
}
// We reinitialize the predictable RNG with our device ID, to get a
// sequence that is always the same but unique to this syncthing instance.
predictableRandom.Seed(seedFromBytes(cert.Certificate[0]))
myID = protocol.NewDeviceID(cert.Certificate[0])
l.SetPrefix(fmt.Sprintf("[%s] ", myID.String()[:5]))
@@ -477,21 +492,21 @@ func syncthingMain() {
readRateLimit = ratelimit.NewBucketWithRate(float64(1000*opts.MaxRecvKbps), int64(5*1000*opts.MaxRecvKbps))
}
db, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{CachedOpenFiles: 100})
ldb, err := leveldb.OpenFile(filepath.Join(confDir, "index"), &opt.Options{OpenFilesCacheCapacity: 100})
if err != nil {
l.Fatalln("Cannot open database:", err, "- Is another copy of Syncthing already running?")
}
// Remove database entries for folders that no longer exist in the config
folders := cfg.Folders()
for _, folder := range files.ListFolders(db) {
for _, folder := range db.ListFolders(ldb) {
if _, ok := folders[folder]; !ok {
l.Infof("Cleaning data for dropped folder %q", folder)
files.DropFolder(db, folder)
db.DropFolder(ldb, folder)
}
}
m := model.NewModel(cfg, myName, "syncthing", Version, db)
m := model.NewModel(cfg, myName, "syncthing", Version, ldb)
sanityCheckFolders(cfg, m)
@@ -574,7 +589,9 @@ func syncthingMain() {
if opts.URUniqueID == "" {
// Previously the ID was generated from the node ID. We now need
// to generate a new one.
opts.URUniqueID = randomString(6)
opts.URUniqueID = randomString(8)
cfg.SetOptions(opts)
cfg.Save()
}
go usageReportingLoop(m)
go func() {
@@ -596,7 +613,7 @@ func syncthingMain() {
} else if IsRelease {
go autoUpgrade()
} else {
l.Infof("No automatic upgrades; %s is not a relase version.", Version)
l.Infof("No automatic upgrades; %s is not a release version.", Version)
}
}
@@ -609,7 +626,7 @@ func syncthingMain() {
os.Exit(code)
}
func setupGUI(cfg *config.ConfigWrapper, m *model.Model) {
func setupGUI(cfg *config.Wrapper, m *model.Model) {
opts := cfg.Options()
guiCfg := overrideGUIConfig(cfg.GUI(), guiAddress, guiAuthentication, guiAPIKey)
@@ -644,13 +661,15 @@ func setupGUI(cfg *config.ConfigWrapper, m *model.Model) {
}
if opts.StartBrowser && !noBrowser && !stRestarting {
urlOpen := fmt.Sprintf("%s://%s/", proto, net.JoinHostPort(hostOpen, strconv.Itoa(addr.Port)))
openURL(urlOpen)
// Can potentially block if the utility we are invoking doesn't
// fork, and just execs, hence keep it in its own routine.
go openURL(urlOpen)
}
}
}
}
func sanityCheckFolders(cfg *config.ConfigWrapper, m *model.Model) {
func sanityCheckFolders(cfg *config.Wrapper, m *model.Model) {
nextFolder:
for id, folder := range cfg.Folders() {
if folder.Invalid != "" {
@@ -755,7 +774,7 @@ func setupUPnP() {
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = igds[0]
igd = &igds[0]
externalPort = setupExternalPort(igd, port)
if externalPort == 0 {
@@ -779,12 +798,9 @@ func setupExternalPort(igd *upnp.IGD, port int) int {
return 0
}
// We seed the random number generator with the node ID to get a
// repeatable sequence of random external ports.
rnd := rand.NewSource(certSeed(cert.Certificate[0]))
for i := 0; i < 10; i++ {
r := 1024 + int(rnd.Int63()%(65535-1024))
err := igd.AddPortMapping(upnp.TCP, r, port, "syncthing", cfg.Options().UPnPLease*60)
r := 1024 + predictableRandom.Intn(65535-1024)
err := igd.AddPortMapping(upnp.TCP, r, port, fmt.Sprintf("syncthing-%d", r), cfg.Options().UPnPLease*60)
if err == nil {
return r
}
@@ -806,7 +822,7 @@ func renewUPnP(port int) {
if len(igds) > 0 {
// Configure the first discovered IGD only. This is a work-around until we have a better mechanism
// for handling multiple IGDs, which will require changes to the global discovery service
igd = igds[0]
igd = &igds[0]
} else {
if debugNet {
l.Debugln("Failed to discover IGD during UPnP port mapping renewal.")
@@ -849,11 +865,24 @@ func renewUPnP(port int) {
}
func resetFolders() {
confDir, err := osutil.ExpandTilde(confDir)
if err != nil {
log.Fatal(err)
}
cfgFile := filepath.Join(confDir, "config.xml")
cfg, err := config.Load(cfgFile, myID)
if err != nil {
log.Fatal(err)
}
suffix := fmt.Sprintf(".syncthing-reset-%d", time.Now().UnixNano())
for _, folder := range cfg.Folders() {
if _, err := os.Stat(folder.Path); err == nil {
l.Infof("Reset: Moving %s -> %s", folder.Path, folder.Path+suffix)
os.Rename(folder.Path, folder.Path+suffix)
base := filepath.Base(folder.Path)
dir := filepath.Dir(filepath.Join(folder.Path, ".."))
l.Infof("Reset: Moving %s -> %s", folder.Path, filepath.Join(dir, base+suffix))
os.Rename(folder.Path, filepath.Join(dir, base+suffix))
}
}
@@ -912,7 +941,7 @@ next:
// the certificate and used another name.
certName := deviceCfg.CertName
if certName == "" {
certName = "syncthing"
certName = tlsDefaultCommonName
}
err := remoteCert.VerifyHostname(certName)
if err != nil {
@@ -926,12 +955,12 @@ next:
// If rate limiting is set, we wrap the connection in a
// limiter.
var wr io.Writer = conn
wr := io.Writer(conn)
if writeRateLimit != nil {
wr = &limitedWriter{conn, writeRateLimit}
}
var rd io.Reader = conn
rd := io.Reader(conn)
if readRateLimit != nil {
rd = &limitedReader{conn, readRateLimit}
}
@@ -953,11 +982,16 @@ next:
}
}
events.Default.Log(events.DeviceRejected, map[string]string{
"device": remoteID.String(),
"address": conn.RemoteAddr().String(),
})
l.Infof("Connection from %s with unknown device ID %s; ignoring", conn.RemoteAddr(), remoteID)
if !cfg.IgnoredDevice(remoteID) {
events.Default.Log(events.DeviceRejected, map[string]string{
"device": remoteID.String(),
"address": conn.RemoteAddr().String(),
})
l.Infof("Connection from %s with unknown device ID %s", conn.RemoteAddr(), remoteID)
} else {
l.Infof("Connection from %s with ignored device ID %s", conn.RemoteAddr(), remoteID)
}
conn.Close()
}
}
@@ -1004,7 +1038,7 @@ func listenTLS(conns chan *tls.Conn, addr string, tlsCfg *tls.Config) {
}
func dialTLS(m *model.Model, conns chan *tls.Conn, tlsCfg *tls.Config) {
var delay time.Duration = 1 * time.Second
delay := time.Second
for {
nextDevice:
for deviceID, deviceCfg := range cfg.Devices() {
@@ -1143,9 +1177,8 @@ func getDefaultConfDir() (string, error) {
default:
if xdgCfg := os.Getenv("XDG_CONFIG_HOME"); xdgCfg != "" {
return filepath.Join(xdgCfg, "syncthing"), nil
} else {
return osutil.ExpandTilde("~/.config/syncthing")
}
return osutil.ExpandTilde("~/.config/syncthing")
}
}
@@ -1233,36 +1266,46 @@ func standbyMonitor() {
}
func autoUpgrade() {
var skipped bool
interval := time.Duration(cfg.Options().AutoUpgradeIntervalH) * time.Hour
timer := time.NewTimer(0)
sub := events.Default.Subscribe(events.DeviceConnected)
for {
if skipped {
time.Sleep(interval)
} else {
skipped = true
select {
case event := <-sub.C():
data, ok := event.Data.(map[string]string)
if !ok || data["clientName"] != "syncthing" || upgrade.CompareVersions(data["clientVersion"], Version) != upgrade.Newer {
continue
}
l.Infof("Connected to device %s with a newer version (current %q < remote %q). Checking for upgrades.", data["id"], Version, data["clientVersion"])
case <-timer.C:
}
rel, err := upgrade.LatestRelease(IsBeta)
if err == upgrade.ErrUpgradeUnsupported {
events.Default.Unsubscribe(sub)
return
}
if err != nil {
// Don't complain too loudly here; we might simply not have
// internet connectivity, or the upgrade server might be down.
l.Infoln("Automatic upgrade:", err)
timer.Reset(time.Duration(cfg.Options().AutoUpgradeIntervalH) * time.Hour)
continue
}
if upgrade.CompareVersions(rel.Tag, Version) <= 0 {
if upgrade.CompareVersions(rel.Tag, Version) != upgrade.Newer {
// Skip equal, older or majorly newer (incompatible) versions
timer.Reset(time.Duration(cfg.Options().AutoUpgradeIntervalH) * time.Hour)
continue
}
l.Infof("Automatic upgrade (current %q < latest %q)", Version, rel.Tag)
err = upgrade.UpgradeTo(rel)
err = upgrade.To(rel)
if err != nil {
l.Warnln("Automatic upgrade:", err)
timer.Reset(time.Duration(cfg.Options().AutoUpgradeIntervalH) * time.Hour)
continue
}
events.Default.Unsubscribe(sub)
l.Warnf("Automatically upgraded to version %q. Restarting in 1 minute.", rel.Tag)
time.Sleep(time.Minute)
stop <- exitUpgrading

View File

@@ -19,10 +19,10 @@ import (
"os"
"testing"
"github.com/syncthing/protocol"
"github.com/syncthing/syncthing/internal/config"
"github.com/syncthing/syncthing/internal/files"
"github.com/syncthing/syncthing/internal/db"
"github.com/syncthing/syncthing/internal/model"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/storage"
@@ -44,11 +44,11 @@ func TestSanityCheck(t *testing.T) {
}
}
db, _ := leveldb.Open(storage.NewMemStorage(), nil)
ldb, _ := leveldb.Open(storage.NewMemStorage(), nil)
// Case 1 - new folder, directory and marker created
m := model.NewModel(cfg, "device", "syncthing", "dev", db)
m := model.NewModel(cfg, "device", "syncthing", "dev", ldb)
sanityCheckFolders(cfg, m)
if cfg.Folders()["folder"].Invalid != "" {
@@ -75,7 +75,7 @@ func TestSanityCheck(t *testing.T) {
Folders: []config.FolderConfiguration{fcfg},
})
m = model.NewModel(cfg, "device", "syncthing", "dev", db)
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
sanityCheckFolders(cfg, m)
if cfg.Folders()["folder"].Invalid != "" {
@@ -91,12 +91,12 @@ func TestSanityCheck(t *testing.T) {
// Case 3 - marker missing
set := files.NewSet("folder", db)
set := db.NewFileSet("folder", ldb)
set.Update(protocol.LocalDeviceID, []protocol.FileInfo{
{Name: "dummyfile"},
})
m = model.NewModel(cfg, "device", "syncthing", "dev", db)
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
sanityCheckFolders(cfg, m)
if cfg.Folders()["folder"].Invalid != "folder marker missing" {
@@ -110,7 +110,7 @@ func TestSanityCheck(t *testing.T) {
Folders: []config.FolderConfiguration{fcfg},
})
m = model.NewModel(cfg, "device", "syncthing", "dev", db)
m = model.NewModel(cfg, "device", "syncthing", "dev", ldb)
sanityCheckFolders(cfg, m)
if cfg.Folders()["folder"].Invalid != "folder path missing" {

View File

@@ -22,7 +22,7 @@ import (
"strings"
)
func memorySize() (uint64, error) {
func memorySize() (int64, error) {
cmd := exec.Command("sysctl", "hw.memsize")
out, err := cmd.Output()
if err != nil {
@@ -32,7 +32,7 @@ func memorySize() (uint64, error) {
if len(fs) != 2 {
return 0, errors.New("sysctl parse error")
}
bytes, err := strconv.ParseUint(fs[1], 10, 64)
bytes, err := strconv.ParseInt(fs[1], 10, 64)
if err != nil {
return 0, err
}

View File

@@ -23,7 +23,7 @@ import (
"strings"
)
func memorySize() (uint64, error) {
func memorySize() (int64, error) {
f, err := os.Open("/proc/meminfo")
if err != nil {
return 0, err
@@ -40,7 +40,7 @@ func memorySize() (uint64, error) {
return 0, errors.New("/proc/meminfo parse error 2")
}
kb, err := strconv.ParseUint(fs[1], 10, 64)
kb, err := strconv.ParseInt(fs[1], 10, 64)
if err != nil {
return 0, err
}

View File

@@ -13,16 +13,28 @@
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package protocol
package main
import (
"os"
"errors"
"os/exec"
"strconv"
"strings"
"github.com/calmh/logger"
)
var (
debug = strings.Contains(os.Getenv("STTRACE"), "protocol") || os.Getenv("STTRACE") == "all"
l = logger.DefaultLogger
)
func memorySize() (int64, error) {
cmd := exec.Command("/sbin/sysctl", "hw.physmem64")
out, err := cmd.Output()
if err != nil {
return 0, err
}
fs := strings.Fields(string(out))
if len(fs) != 3 {
return 0, errors.New("sysctl parse error")
}
bytes, err := strconv.ParseInt(fs[2], 10, 64)
if err != nil {
return 0, err
}
return bytes, nil
}
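This NetBSD variant shells out to /sbin/sysctl, whose "name = value" output is why the parser expects three fields and reads the third. A short sketch of that parse on an assumed sample line:

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
)

func main() {
	// Assumed sample output of `/sbin/sysctl hw.physmem64` on NetBSD.
	out := "hw.physmem64 = 8589934592\n"

	fs := strings.Fields(out)
	if len(fs) != 3 {
		log.Fatal("sysctl parse error")
	}
	bytes, err := strconv.ParseInt(fs[2], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(bytes>>30, "GiB") // 8 GiB for the sample above
}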

View File

@@ -22,14 +22,14 @@ import (
"strconv"
)
func memorySize() (uint64, error) {
func memorySize() (int64, error) {
cmd := exec.Command("prtconf", "-m")
out, err := cmd.CombinedOutput()
if err != nil {
return 0, err
}
mb, err := strconv.ParseUint(string(out), 10, 64)
mb, err := strconv.ParseInt(string(out), 10, 64)
if err != nil {
return 0, err
}

View File

@@ -19,6 +19,6 @@ package main
import "errors"
func memorySize() (uint64, error) {
func memorySize() (int64, error) {
return 0, errors.New("not implemented")
}

View File

@@ -26,7 +26,7 @@ var (
globalMemoryStatusEx, _ = syscall.GetProcAddress(kernel32, "GlobalMemoryStatusEx")
)
func memorySize() (uint64, error) {
func memorySize() (int64, error) {
var memoryStatusEx [64]byte
binary.LittleEndian.PutUint32(memoryStatusEx[:], 64)
p := uintptr(unsafe.Pointer(&memoryStatusEx[0]))
@@ -36,5 +36,5 @@ func memorySize() (uint64, error) {
return 0, callErr
}
return binary.LittleEndian.Uint64(memoryStatusEx[8:]), nil
return int64(binary.LittleEndian.Uint64(memoryStatusEx[8:])), nil
}

View File

@@ -24,7 +24,7 @@ import (
"syscall"
"time"
"github.com/syncthing/syncthing/internal/protocol"
"github.com/syncthing/protocol"
)
func init() {
@@ -43,7 +43,7 @@ func savePerfStats(file string) {
var prevTime int64
var rusage syscall.Rusage
var memstats runtime.MemStats
var prevIn, prevOut uint64
var prevIn, prevOut int64
t0 := time.Now()
for t := range time.NewTicker(250 * time.Millisecond).C {

cmd/syncthing/random.go (new file, 68 lines added)
View File

@@ -0,0 +1,68 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"crypto/md5"
cryptoRand "crypto/rand"
"encoding/binary"
"io"
mathRand "math/rand"
)
// randomCharset contains the characters that can make up a randomString().
const randomCharset = "01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-"
// predictableRandom is an RNG that will always have the same sequence. It
// will be seeded with the device ID during startup, so that the sequence is
// predictable but varies between instances.
var predictableRandom = mathRand.New(mathRand.NewSource(42))
func init() {
// The default RNG should be seeded with something good.
mathRand.Seed(randomInt64())
}
// randomString returns a string of random characters (taken from
// randomCharset) of the specified length.
func randomString(l int) string {
bs := make([]byte, l)
for i := range bs {
bs[i] = randomCharset[mathRand.Intn(len(randomCharset))]
}
return string(bs)
}
// randomInt64 returns a strongly random int64, slowly
func randomInt64() int64 {
var bs [8]byte
_, err := io.ReadFull(cryptoRand.Reader, bs[:])
if err != nil {
panic("randomness failure: " + err.Error())
}
return seedFromBytes(bs[:])
}
// seedFromBytes calculates a weak 64 bit hash from the given byte slice,
// suitable for use as a predictable random seed.
func seedFromBytes(bs []byte) int64 {
h := md5.New()
h.Write(bs)
s := h.Sum(nil)
// The MD5 hash of the byte slice is 16 bytes long. We interpret it as two
// uint64s and XOR them together.
return int64(binary.BigEndian.Uint64(s[0:]) ^ binary.BigEndian.Uint64(s[8:]))
}
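
A brief usage sketch, assuming it sits in the same package as the file above; only randomString, randomInt64, seedFromBytes and predictableRandom come from the file, the rest is illustrative:

// exampleUsage shows the intended division of labour: strong randomness
// for one-off secrets, the predictable RNG for values that should repeat
// given the same seed material.
func exampleUsage() {
	token := randomString(32) // e.g. an API-key-shaped secret (illustration only)

	// Re-seed the predictable RNG from some stable identifier so that two
	// runs with the same identifier see the same sequence.
	predictableRandom.Seed(seedFromBytes([]byte("some stable identifier")))

	_ = token
	_ = randomInt64() // strong 64-bit value, e.g. for seeding math/rand
}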

View File

@@ -0,0 +1,87 @@
// Copyright (C) 2014 The Syncthing Authors.
//
// This program is free software: you can redistribute it and/or modify it
// under the terms of the GNU General Public License as published by the Free
// Software Foundation, either version 3 of the License, or (at your option)
// any later version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
// more details.
//
// You should have received a copy of the GNU General Public License along
// with this program. If not, see <http://www.gnu.org/licenses/>.
package main
import (
"sync"
"testing"
)
var predictableRandomTest sync.Once
func TestPredictableRandom(t *testing.T) {
predictableRandomTest.Do(func() {
// predictable random sequence is predictable
e := 3440579354231278675
if v := predictableRandom.Int(); v != e {
t.Errorf("Unexpected random value %d != %d", v, e)
}
})
}
func TestSeedFromBytes(t *testing.T) {
// should always return the same seed for the same bytes
tcs := []struct {
bs []byte
v int64
}{
{[]byte("hello world"), -3639725434188061933},
{[]byte("hello worlx"), -2539100776074091088},
}
for _, tc := range tcs {
if v := seedFromBytes(tc.bs); v != tc.v {
t.Errorf("Unexpected seed value %d != %d", v, tc.v)
}
}
}
func TestRandomString(t *testing.T) {
for _, l := range []int{0, 1, 2, 3, 4, 8, 42} {
s := randomString(l)
if len(s) != l {
t.Errorf("Incorrect length %d != %d", len(s), l)
}
}
strings := make([]string, 1000)
for i := range strings {
strings[i] = randomString(8)
for j := range strings {
if i == j {
continue
}
if strings[i] == strings[j] {
t.Errorf("Repeated random string %q", strings[i])
}
}
}
}
func TestRandomInt64(t *testing.T) {
ints := make([]int64, 1000)
for i := range ints {
ints[i] = randomInt64()
for j := range ints {
if i == j {
continue
}
if ints[i] == ints[j] {
t.Errorf("Repeated random int64 %d", ints[i])
}
}
}
}
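
These tests run with the standard tooling, e.g. go test -run 'Random|Seed' ./cmd/syncthing (the package path is assumed from the file header above).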

View File

@@ -19,11 +19,9 @@ import (
"bufio"
"crypto/rand"
"crypto/rsa"
"crypto/sha256"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/binary"
"encoding/pem"
"io"
"math/big"
@@ -35,8 +33,8 @@ import (
)
const (
tlsRSABits = 3072
tlsName = "syncthing"
tlsRSABits = 3072
tlsDefaultCommonName = "syncthing"
)
func loadCert(dir string, prefix string) (tls.Certificate, error) {
@@ -45,15 +43,8 @@ func loadCert(dir string, prefix string) (tls.Certificate, error) {
return tls.LoadX509KeyPair(cf, kf)
}
func certSeed(bs []byte) int64 {
hf := sha256.New()
hf.Write(bs)
id := hf.Sum(nil)
return int64(binary.BigEndian.Uint64(id))
}
func newCertificate(dir string, prefix string) {
l.Infoln("Generating RSA key and certificate...")
func newCertificate(dir, prefix, name string) {
l.Infof("Generating RSA key and certificate for %s...", name)
priv, err := rsa.GenerateKey(rand.Reader, tlsRSABits)
if err != nil {
@@ -66,7 +57,7 @@ func newCertificate(dir string, prefix string) {
template := x509.Certificate{
SerialNumber: new(big.Int).SetInt64(mr.Int63()),
Subject: pkix.Name{
CommonName: tlsName,
CommonName: name,
},
NotBefore: notBefore,
NotAfter: notAfter,
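
The hunk ends inside the certificate template; for context, a sketch of how such a template typically becomes the on-disk key pair using the standard library, continuing inside newCertificate (file names are assumed, os and path/filepath would be needed in addition to the imports shown, and error handling is shortened to the l.Infoln logger already used above):

	der, err := x509.CreateCertificate(rand.Reader, &template, &template,
		&priv.PublicKey, priv)
	if err != nil {
		l.Infoln("create certificate:", err)
		return
	}

	certOut, err := os.Create(filepath.Join(dir, prefix+".pem")) // name assumed
	if err != nil {
		l.Infoln("save certificate:", err)
		return
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, err := os.Create(filepath.Join(dir, prefix+".key")) // name assumed
	if err != nil {
		l.Infoln("save key:", err)
		return
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	keyOut.Close()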

View File

@@ -1,60 +0,0 @@
FROM debian:squeeze
MAINTAINER Jakob Borg <jakob@nym.se>
ENV GOLANG_VERSION 1.4rc2
# SCMs for "go get", gcc for cgo
RUN apt-get update && apt-get install -y \
ca-certificates curl gcc libc6-dev make \
bzr git mercurial unzip \
--no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Get the binary dist of Go to be able to bootstrap gonative.
RUN curl -sSL https://golang.org/dl/go${GOLANG_VERSION}.linux-amd64.tar.gz \
| tar -v -C /usr/local -xz
ENV PATH /usr/local/go/bin:$PATH
RUN mkdir /go
ENV GOPATH /go
ENV PATH /go/bin:$PATH
WORKDIR /go
# Use gonative to install native Go for most arch/OS combos
RUN go get github.com/calmh/gonative \
&& cd /usr/local \
&& rm -rf go \
&& gonative -version $GOLANG_VERSION
# Rebuild the special and missing versions
RUN bash -xec '\
cd /usr/local/go/src; \
for platform in linux/386 freebsd/386 windows/386 linux/arm openbsd/amd64 openbsd/386; do \
GOOS=${platform%/*} \
GOARCH=${platform##*/} \
GOARM=5 \
GO386=387 \
CGO_ENABLED=0 \
./make.bash --no-clean 2>&1; \
done \
&& ./make.bash --no-clean \
'
# Install packages needed for test coverage
RUN go get github.com/tools/godep \
&& go get golang.org/x/tools/cmd/cover \
&& go get github.com/axw/gocov/gocov \
&& go get github.com/AlekSi/gocov-xml
# Build standard library for race
RUN go install -race std
# Random build users need to be able to create stuff in /go
RUN chmod -R 777 /go/bin /go/pkg /go/src

View File

@@ -1,29 +0,0 @@
Docker Build
============
Official builds are produced using a Docker image specified by the
Dockerfile in this directory. The following commands exactly reproduce
the official build process.
Create an image called `syncthing/build` with the build environment.
```
./build.sh docker-init
```
> This is a Debian based image containing the latest stable version of
> Go set up for cross compilation. The cross compilation uses the
> dynamically linked standard libraries and SSE instructions for amd64
> builds, but static linking and minimal instruction set for the 386 and
> arm builds. The command should be run in the main repo directory, as a
> user with permission to perform Docker operations.
Build the full set of supported binaries.
```
./build.sh docker-all
```
> This uses a temporary container with the image from above and a volume
> mapped to the directory containing the source. Tests are run and
> binary packages created.

View File

@@ -0,0 +1,27 @@
This directory contains a configuration for running syncthing under the
"systemd" service manager on Linux, either as a systemd system service or
as a systemd user service.
1. Install systemd.
2. If you are running this as a system level service:
   1. Create the user you will be running the service as (foo in this example).
   2. Copy the syncthing@.service file to /etc/systemd/system.
   3. Enable and start the service:
      systemctl enable syncthing@foo.service
      systemctl start syncthing@foo.service
3. If you are running this as a user level service:
   1. Log in as the user you will be running the service as.
   2. Copy the syncthing.service file to /etc/systemd/user.
   3. Enable and start the service:
      systemctl --user enable syncthing.service
      systemctl --user start syncthing.service
Log output is sent to the journal.
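Since the log output goes to the journal, it can be inspected with journalctl, for example journalctl -u syncthing@foo.service for the system service or journalctl --user -u syncthing.service for the user service (standard systemd tooling, not specific to these unit files).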

View File

@@ -0,0 +1,16 @@
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=https://github.com/syncthing/syncthing/wiki
After=network.target
[Service]
User=%i
Environment=STNORESTART=yes
ExecStart=/usr/bin/syncthing
Restart=on-failure
RestartPreventExitStatus=1
SuccessExitStatus=2
RestartForceExitStatus=3 4
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,15 @@
[Unit]
Description=Syncthing - Open Source Continuous File Synchronization
Documentation=https://github.com/syncthing/syncthing/wiki
After=network.target
[Service]
Environment=STNORESTART=yes
ExecStart=/usr/bin/syncthing
Restart=on-failure
RestartPreventExitStatus=1
SuccessExitStatus=2
RestartForceExitStatus=3 4
[Install]
WantedBy=default.target

View File

@@ -39,6 +39,8 @@ ul+h5 {
.panel-title {
position: relative;
text-overflow: ellipsis;
overflow: hidden;
}
identicon {
@@ -54,6 +56,10 @@ identicon {
margin-top: 0px;
}
.checkbox input[type="checkbox"], .radio input[type="radio"] {
float: none; /* issue #1197 */
}
.popover {
max-width: none;
}
@@ -69,6 +75,10 @@ identicon {
top: 1px;
}
.panel-heading .glyphicon {
margin-right: 15px;
}
.panel-heading {
position: relative;
overflow: hidden;
@@ -195,3 +205,15 @@ ul.three-columns li, ul.two-columns li {
padding-left: 0.5em;
text-indent: -0.5em;
}
/** Footer nav on small devices **/
@media (max-width: 767px) {
body {
padding-bottom: 0;
}
.navbar-fixed-bottom {
position: static;
}
}

View File

@@ -1,71 +1,88 @@
{
"API Key": "Ключ API",
"About": "Аб праграме",
"Add": "Дадаць",
"Add Device": "Дадаць прыладу",
"Add Folder": "Дадаць каталёг",
"Add new folder?": "Дадаць новы каталёг ?",
"Address": "Адрас",
"Addresses": "Адрасы",
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
"Anonymous Usage Reporting": "Anonymous Usage Reporting",
"Any devices configured on an introducer device will be added to this device as well.": "Any devices configured on an introducer device will be added to this device as well.",
"Automatic upgrades": "Automatic upgrades",
"Bugs": "Bugs",
"CPU Utilization": "CPU Utilization",
"Close": "Close",
"Bugs": "Памылкі",
"CPU Utilization": "Выкарыстаньне працэсара",
"Changelog": "Сьпіс зьменаў",
"Close": "Зачыніць",
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
"Compression is recommended in most setups.": "Compression is recommended in most setups.",
"Compression is recommended in most setups.": "Сьцісканьне рэкамэндавана ў большасьці выпадкаў.",
"Connection Error": "Connection Error",
"Copied from elsewhere": "Copied from elsewhere",
"Copied from original": "Copied from original",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg and the following Contributors:",
"Delete": "Delete",
"Device ID": "Device ID",
"Device Identification": "Device Identification",
"Device Name": "Device Name",
"Disconnected": "Disconnected",
"Documentation": "Documentation",
"Download Rate": "Download Rate",
"Edit": "Edit",
"Edit Device": "Edit Device",
"Edit Folder": "Edit Folder",
"Delete": "Выдаліць",
"Device ID": "ID прылады",
"Device Identification": "Ідэнтыфікацыя прылады",
"Device Name": "Назва прылады",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Device {{device}} ({{address}}) wants to connect. Add new device?",
"Devices": "Прылады",
"Disconnected": "Адлучана",
"Documentation": "Дакумэнтацыя",
"Download Rate": "Хуткасьць спампоўваньня",
"Downloaded": "Downloaded",
"Downloading": "Downloading",
"Edit": "Зьмяніць",
"Edit Device": "Зьмяніць прыладу",
"Edit Folder": "Зьмяніць каталёг",
"Editing": "Editing",
"Enable UPnP": "Enable UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.",
"Enter ignore patterns, one per line.": "Enter ignore patterns, one per line.",
"Error": "Error",
"File Versioning": "File Versioning",
"File Versioning": "Вэрсіі файлаў",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "File permission bits are ignored when looking for changes. Use on FAT filesystems.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.",
"Folder ID": "Folder ID",
"Folder ID": "ID каталёгу",
"Folder Master": "Folder Master",
"Folder Path": "Folder Path",
"Folder Path": "Шлях каталёгу",
"Folders": "Каталёгі",
"GUI Authentication Password": "GUI Authentication Password",
"GUI Authentication User": "GUI Authentication User",
"GUI Listen Addresses": "GUI Listen Addresses",
"Generate": "Generate",
"Global Discovery": "Global Discovery",
"Global Discovery Server": "Global Discovery Server",
"Global State": "Global State",
"Generate": "Сгенераваць",
"Global Discovery": "Глябальнае вызначэньне",
"Global Discovery Server": "Сэрвер глябальнага вызначэньня",
"Global State": "Глябальны стан",
"Idle": "Idle",
"Ignore Patterns": "Ignore Patterns",
"Ignore Permissions": "Ignore Permissions",
"Ignore": "Ignore",
"Ignore Patterns": "Ігнараваць шаблёны",
"Ignore Permissions": "Ігнараваць правы",
"Incoming Rate Limit (KiB/s)": "Incoming Rate Limit (KiB/s)",
"Introducer": "Introducer",
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
"Keep Versions": "Keep Versions",
"Last seen": "Last seen",
"Keep Versions": "Трымаць вэрсій",
"Last File Received": "Апошні атрыманы файл",
"Last File Synced": "Апошні сынхранізаваны файл",
"Last seen": "Апошні раз бачылі",
"Later": "Later",
"Latest Release": "Latest Release",
"Local Discovery": "Local Discovery",
"Local State": "Local State",
"Local Discovery": "Лякальнае вызначэньне",
"Local State": "Лякальны стан",
"Maximum Age": "Maximum Age",
"Multi level wildcard (matches multiple directory levels)": "Multi level wildcard (matches multiple directory levels)",
"Never": "Never",
"No": "No",
"No File Versioning": "No File Versioning",
"Never": "Ніколі",
"New Device": "New Device",
"New Folder": "New Folder",
"No": "Не",
"No File Versioning": "Не захоўваць вэрсіі",
"Notice": "Notice",
"OK": "OK",
"OK": "Добра",
"Offline": "Offline",
"Online": "Online",
"Out Of Sync": "Out Of Sync",
"Out Of Sync": "Несынхранізавана",
"Out of Sync Items": "Несынхранізаваныя складнікі",
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
"Override Changes": "Override Changes",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for",
@@ -74,38 +91,47 @@
"Preview": "Preview",
"Preview Usage Report": "Preview Usage Report",
"Quick guide to supported patterns": "Quick guide to supported patterns",
"RAM Utilization": "RAM Utilization",
"Rescan": "Rescan",
"Rescan Interval": "Rescan Interval",
"Restart": "Restart",
"Restart Needed": "Restart Needed",
"Restarting": "Restarting",
"Save": "Save",
"Scanning": "Scanning",
"Select the devices to share this folder with.": "Select the devices to share this folder with.",
"Settings": "Settings",
"Share With Devices": "Share With Devices",
"Shared With": "Shared With",
"RAM Utilization": "Выкарыстаньне памяці",
"Rescan": "Перачытаць",
"Rescan Interval": "Інтэрвал перачытваньня",
"Restart": "Перастартаваць",
"Restart Needed": "Патрэбна перастартоўваньне",
"Restarting": "Перастартоўваньне",
"Reused": "Reused",
"Save": "Захаваць",
"Scanning": "Скануецца",
"Select the devices to share this folder with.": "Пазначце прылады зь якімі трэба абагуліць гэты каталёг.",
"Select the folders to share with this device.": "Пазначце каталёгі якія трэба абагуліць з гэтай прыладай.",
"Settings": "Налады",
"Share": "Абагуліць",
"Share Folder": "Share Folder",
"Share Folders With Device": "Share Folders With Device",
"Share With Devices": "Абагуліць з прыладамі",
"Share this folder?": "Абагуліць гэты каталёг ?",
"Shared With": "Абагульны з",
"Short identifier for the folder. Must be the same on all cluster devices.": "Short identifier for the folder. Must be the same on all cluster devices.",
"Show ID": "Show ID",
"Show ID": "Паказаць ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.",
"Shutdown": "Shutdown",
"Simple File Versioning": "Simple File Versioning",
"Shutdown": "Выключыць",
"Shutdown Complete": "Выключэньне завершанае",
"Simple File Versioning": "Простае захоўваньне вэрсій",
"Single level wildcard (matches within a directory only)": "Single level wildcard (matches within a directory only)",
"Source Code": "Source Code",
"Staggered File Versioning": "Staggered File Versioning",
"Source Code": "Зыходнікі",
"Staggered File Versioning": "Адаптыўнае захоўваньне вэрсій",
"Start Browser": "Start Browser",
"Stopped": "Stopped",
"Support / Forum": "Support / Forum",
"Support": "Падтрымка",
"Support / Forum": "Падтрымка / Форум",
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
"Synchronization": "Synchronization",
"Syncing": "Syncing",
"Synchronization": "Сынхранізацыя",
"Syncing": "Сынхранізуецца",
"Syncthing has been shut down.": "Syncthing has been shut down.",
"Syncthing includes the following software or portions thereof:": "Syncthing includes the following software or portions thereof:",
"Syncthing is restarting.": "Syncthing is restarting.",
"Syncthing is restarting.": "Syncthing перастартоўвае.",
"Syncthing is upgrading.": "Syncthing is upgrading.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
"The device ID cannot be blank.": "The device ID cannot be blank.",
@@ -119,24 +145,27 @@
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.",
"The maximum age must be a number and cannot be blank.": "The maximum age must be a number and cannot be blank.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "The maximum time to keep a version (in days, set to 0 to keep versions forever).",
"The number of old versions to keep, per file.": "The number of old versions to keep, per file.",
"The number of old versions to keep, per file.": "Колькі старых вэрсій трымаць, для кожнага файлу.",
"The number of versions must be a number and cannot be blank.": "The number of versions must be a number and cannot be blank.",
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
"The rescan interval must be at least 5 seconds.": "The rescan interval must be at least 5 seconds.",
"Unknown": "Unknown",
"Up to Date": "Up to Date",
"Unshared": "Unshared",
"Unused": "Unused",
"Up to Date": "Найноўшае",
"Upgrade To {%version%}": "Upgrade To {{version}}",
"Upgrading": "Upgrading",
"Upload Rate": "Upload Rate",
"Use Compression": "Use Compression",
"Upgrading": "Абнаўленьне",
"Upload Rate": "Хуткасьць запампоўваньня",
"Use Compression": "Выкарыстоўваць сьцісканьне",
"Use HTTPS for GUI": "Use HTTPS for GUI",
"Version": "Version",
"Version": "Вэрсія",
"Versions Path": "Versions Path",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "When adding a new device, keep in mind that this device must be added on the other side too.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.",
"Yes": "Yes",
"Yes": "Так",
"You must keep at least one version.": "You must keep at least one version.",
"full documentation": "full documentation",
"items": "items"
"items": "складнікаў",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} wants to share folder \"{{folder}}\"."
}

View File

@@ -1,28 +1,37 @@
{
"API Key": "API Ключ",
"About": "За Програмата",
"Add": "Добави",
"Add Device": "Добави устройство",
"Add Folder": "Добави папка",
"Add new folder?": "Добави нова папка?",
"Address": "Адрес",
"Addresses": "Адреси",
"Allow Anonymous Usage Reporting?": "Разреши анонимен доклад за ползване на програмата?",
"Anonymous Usage Reporting": "Анонимен Доклад",
"Any devices configured on an introducer device will be added to this device as well.": "Устройства настроени на introducer компютъра също ще бъдат добавени към този компютър.",
"Automatic upgrades": "Automatic upgrades",
"Automatic upgrades": "Автоматични ъпдейти",
"Bugs": "Бъгове",
"CPU Utilization": "Натоварване на Процесора",
"Changelog": "Сипъск с промени",
"Close": "Затвори",
"Comment, when used at the start of a line": "Коментар, използван в началото на реда",
"Compression is recommended in most setups.": "Компресията е препоръчителна в повечето конфигурации.",
"Connection Error": "Грешка при Свързването",
"Copied from elsewhere": "Копиране от някъде другаде",
"Copied from original": "Копиран от оригинала",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Правата запазени © 2014 Jakob Borg и следните Сътрудници:",
"Delete": "Изтрий",
"Device ID": "Идентификатор на устройство",
"Device Identification": "Идентификация на устройство",
"Device Name": "Име на устройство",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Устройство {{device}} ({{address}}) желае да се свърже. Добави ново устройство?",
"Devices": "Устройства",
"Disconnected": "Прекрати Връзката",
"Documentation": "Документация",
"Download Rate": "Скорост на Теглене",
"Downloaded": "Изтеглен",
"Downloading": "Изтегляне",
"Edit": "Промени",
"Edit Device": "Промени устройство",
"Edit Folder": "Промени папка",
@@ -34,10 +43,11 @@
"File Versioning": "Файлови Версии",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Битовете за права за достъп са игнорирани, когато се проверява за промени. Използвай с файлови системи тип FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Когато syncthing замени или изтрие файл той се премества в .stversions и преименува с дабавени дата и час.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други устройства, но промени направени на тава устройство ще бъдат синхронизирани до другите устройства.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Файловете са защитени от промени направени на други устройства, но промени направени на това устройство ще бъдат синхронизирани с другите устройства.",
"Folder ID": "Идентификатор на папка",
"Folder Master": "Главна папка",
"Folder Path": "Път до папката",
"Folders": "Папки",
"GUI Authentication Password": "Парола за Потребителския Интерфейс",
"GUI Authentication User": "Потребител за Потребителския Интерфейс",
"GUI Listen Addresses": "Адрес за Свързване с Потребителския Интерфейс",
@@ -46,19 +56,25 @@
"Global Discovery Server": "Сървър за Глобално Откриване",
"Global State": "Глобално състояние",
"Idle": "Без Работа",
"Ignore": "Игнорирай",
"Ignore Patterns": "Шаблони за Игнориране",
"Ignore Permissions": "Игнорирай Права за Достъп",
"Incoming Rate Limit (KiB/s)": "Входящ Лимит на Скороста(KiB/s)",
"Incoming Rate Limit (KiB/s)": "Входящ Лимит на Скоростта (KiB/s)",
"Introducer": "Introducer",
"Inversion of the given condition (i.e. do not exclude)": "Обратното на даденото условие (пр. не изключвай)",
"Keep Versions": "Пази Версии",
"Last File Received": "Последния получен файл",
"Last File Synced": "Последния синхронизиран файл",
"Last seen": "Последно видян",
"Later": "По-късно",
"Latest Release": "Най-новата Версия",
"Local Discovery": "Локално Откриване",
"Local State": "Локално състояние",
"Maximum Age": "Максимална Възраст",
"Multi level wildcard (matches multiple directory levels)": "Маска на много нива (покрива папки с много нива)",
"Never": "Никога",
"New Device": "Ново устройство",
"New Folder": "Нова папка",
"No": "Не",
"No File Versioning": "Няма Файлови Версии",
"Notice": "Известие",
@@ -66,6 +82,7 @@
"Offline": "Не е на линия",
"Online": "На линия",
"Out Of Sync": "Не Синхронизиран",
"Out of Sync Items": "Несинхронизирани елементи",
"Outgoing Rate Limit (KiB/s)": "Лимит на Изходящата Скорост (KiB/s)",
"Override Changes": "Замени Промените",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Пътят до папката на този компютър. Ще бъде създадена ако не съществува. Символът тилда (~) може да бъде използван като заместител на",
@@ -80,32 +97,41 @@
"Restart": "Рестартирай",
"Restart Needed": "Изискава се Рестартиране",
"Restarting": "Рестартиране",
"Reused": "Повторно използван",
"Save": "Запази",
"Scanning": "Сканиране",
"Select the devices to share this folder with.": "Избери устройствата, с които да споделиш тази папка.",
"Select the folders to share with this device.": "Изберете папките за споделяне с това устройство.",
"Settings": "Настройки",
"Share": "Сподели",
"Share Folder": "Сподели папка",
"Share Folders With Device": "Сподели папки с това устройство",
"Share With Devices": "Сподели с устройства",
"Shared With": "Сподел С",
"Share this folder?": "Сподели тази папка?",
"Shared With": "Споделена със",
"Short identifier for the folder. Must be the same on all cluster devices.": "Кратък идентификатор на папката. Трябва да бъде същият на всички компютри.",
"Show ID": "Покажи Идентификатора",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Покажи вместо идентификатор на устройството в статус на клъстъра. Ще бъде предлагано на други комютри като име по подразбиране.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Покажи вместо идентификатор на устройството в статус на клъстъра. Ще бъде обновено с името по подразбиране изпратено от другия компютър.",
"Shutdown": "Спри Програмата",
"Simple File Versioning": "Просто Файлови Версии",
"Shutdown Complete": "Спирането завършено",
"Simple File Versioning": "Опростени Файлови Версии",
"Single level wildcard (matches within a directory only)": "Маска на едно ниво (покрива само в папка)",
"Source Code": "Сорс Код",
"Staggered File Versioning": "Наслагващи се Файлови Версии",
"Start Browser": "Стартирай Браузъра",
"Stopped": "Спряна",
"Support": "Помощ",
"Support / Forum": "Помощ / Форум",
"Sync Protocol Listen Addresses": "Адрес за слушане на синхронизиращия протокол",
"Synchronization": "Синхронизация",
"Syncing": "Синхронизиране",
"Syncthing has been shut down.": "Syncthing е спрян.",
"Syncthing includes the following software or portions thereof:": "Syncthing включва следният софтуер пълно или частично:",
"Syncthing is restarting.": "Syncthing се рестартирва",
"Syncthing is restarting.": "Syncthing се рестартира",
"Syncthing is upgrading.": "Syncthing се обновява.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing изглежда не е включен, или има проблем с интерент връзката. Повторен опит...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing има проблем при обработването на заявката. Моля, презаредете браузъра или рестартирайте Syncthing ако проблемът продължи.",
"The aggregated statistics are publicly available at {%url%}.": "Сумарната статистика е публично достъпна на {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Конфигурацията е запазена, но не е активирана. Syncthing трябва да рестартира, за да се активира новата конфигурация.",
"The device ID cannot be blank.": "Полето идентификатор на устройство не може да бъде празно.",
@@ -121,9 +147,11 @@
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Максималното време да се пазят весрсии (в дни, сложи 0, за да пазиш версии завинаги).",
"The number of old versions to keep, per file.": "Броят стари версии, които да бъдат пазени за всеки файл.",
"The number of versions must be a number and cannot be blank.": "Броят версии трябва да бъде число и не може да бъде празно.",
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
"The rescan interval must be a non-negative number of seconds.": "Интервала на сканиране трябва да бъде не отрицателно число в секунди.",
"The rescan interval must be at least 5 seconds.": "Интервала за повторно сканиране трябва да бъде поне 5 секунди.",
"Unknown": "Неясен",
"Unshared": "Споделянето прекратено",
"Unused": "Неизползван",
"Up to Date": "Актуален",
"Upgrade To {%version%}": "Обновен До {{version}}",
"Upgrading": "Обновяване",
@@ -138,5 +166,6 @@
"Yes": "Да",
"You must keep at least one version.": "Трябва да пазиш поне една версия.",
"full documentation": "пълна документация",
"items": "артикула"
"items": "артикула",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} желае да сподели папка \"{{folder}}\"."
}

View File

@@ -0,0 +1,171 @@
{
"API Key": "Clau API",
"About": "Sobre",
"Add": "Afegir",
"Add Device": "Afegir dispositiu",
"Add Folder": "Afegir carpeta",
"Add new folder?": "Afegir nova carpeta?",
"Address": "Adreça",
"Addresses": "Adreces",
"Allow Anonymous Usage Reporting?": "Permetre l'enviament anònim d'informes d'ús?",
"Anonymous Usage Reporting": "Informe anònim d'ús",
"Any devices configured on an introducer device will be added to this device as well.": "Qualsevol dispositiu configurat com a dispositiu introductor s'afegirà a aquest dispositiu tambè.",
"Automatic upgrades": "Actualitzacions automàtiques",
"Bugs": "Bugs",
"CPU Utilization": "Utilització del CPU",
"Changelog": "Historial de canvis",
"Close": "Tancar",
"Comment, when used at the start of a line": "Comentari quan és usat al principi d'una línia",
"Compression is recommended in most setups.": "Generalment, la compressió és recomanada.",
"Connection Error": "Error de connexió",
"Copied from elsewhere": "Copiat d'un altre lloc",
"Copied from original": "Copiat de l'original",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Drets d'autor © 2014 Jakob Borg i els següents contribuïdors:",
"Delete": "Esborrar",
"Device ID": "ID del dispositiu",
"Device Identification": "Identificació del dispositiu",
"Device Name": "Nom del dispositiu",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "El dispositiu {{device}} ({{address}}) vol conectar-se. Afegir nou dispositiu?",
"Devices": "Dispositius",
"Disconnected": "Desconnectat",
"Documentation": "Documentació",
"Download Rate": "Tasca de descarrega",
"Downloaded": "Descarregat",
"Downloading": "Descarregant",
"Edit": "Editar",
"Edit Device": "Modificar dispositiu",
"Edit Folder": "Modificar carpeta",
"Editing": "Modificant",
"Enable UPnP": "Habilitat UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Introduir, separat per comes, adreces \"ip:port\" o \"dynamic\" per descobrir automàticament les adreces.",
"Enter ignore patterns, one per line.": "Introduïx els patrons d'ignoració, un per línia.",
"Error": "Error",
"File Versioning": "Versionat de Fitxers",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Els bits de permisos dels fitxers son ignorats quan es cerquen canvis. Utilitzar en sistemes de fitxers FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Els fitxers es mouen amb l'estampat de la data a la carpeta .stversions quan son substituïts o esborrats per syncthing.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Els fitxers estan protegits de canvis fets per altres dispositius, però els canvis fets en aquest dispositiu seran enviats a la resta del cluster.",
"Folder ID": "ID de carpeta",
"Folder Master": "Carpeta mestre",
"Folder Path": "Camí de carpeta",
"Folders": "Carpetes",
"GUI Authentication Password": "Contrasenya d'autenticació GUI",
"GUI Authentication User": "Usuari d'autenticació GUI",
"GUI Listen Addresses": "Adreça d'escolta del GUI",
"Generate": "Generar",
"Global Discovery": "Descobriment Global",
"Global Discovery Server": "Servidor de Descobriment Global",
"Global State": "Estat global",
"Idle": "Inactiu",
"Ignore": "Ignorar",
"Ignore Patterns": "Patrons d'ignoració",
"Ignore Permissions": "Ignora Permisos",
"Incoming Rate Limit (KiB/s)": "Tasca Límit d'Entrada (KiB/s)",
"Introducer": "Introductor",
"Inversion of the given condition (i.e. do not exclude)": "Inversió del patrò introduït",
"Keep Versions": "Mantenir Versions",
"Last File Received": "Últim fitxer rebut",
"Last File Synced": "Últim fitxer sincronitzat",
"Last seen": "Vist per última vegada",
"Later": "Després",
"Latest Release": "Última publicació",
"Local Discovery": "Descobriment Local",
"Local State": "Estat local",
"Maximum Age": "Antiguitat Màxima",
"Multi level wildcard (matches multiple directory levels)": "Caràcter comodí de nivell múltiple (aparella en carpetes de nivells múltiples)",
"Never": "Mai",
"New Device": "Nou dispositiu",
"New Folder": "Nova carpeta",
"No": "No",
"No File Versioning": "Sense Versionat de Fitxer",
"Notice": "Avís",
"OK": "OK",
"Offline": "Desconnectat",
"Online": "Connectat",
"Out Of Sync": "Fora de la Sincronització",
"Out of Sync Items": "Arxius encara no sincronitzats",
"Outgoing Rate Limit (KiB/s)": "Tasca Límit de Sortida (KiB/s)",
"Override Changes": "Sobreescriure Canvis",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta de la carpeta a l'equip local. Si no existeix serà creada. El caràcter (~) es pot fer servir com a drecera de",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Ruta on les versions s'haurien de guardar (deixa-ho buit per fer servir el directori .stversions per defecte a la carpeta)",
"Please wait": "Si-us-plau espera",
"Preview": "Vista prèvia",
"Preview Usage Report": "Vista Prèvia de l'Informe d'Ús",
"Quick guide to supported patterns": "Guia ràpida per als possibles patrons",
"RAM Utilization": "Utilització de la RAM",
"Rescan": "Re-escanejar",
"Rescan Interval": "Interval de re-escaneig",
"Restart": "Reiniciar",
"Restart Needed": "És Necessari Reiniciar",
"Restarting": "Reiniciant",
"Reused": "Reutilitzat",
"Save": "Guardar",
"Scanning": "Escanejant",
"Select the devices to share this folder with.": "Selecciona els dispositius en els quals compartir aquesta carpeta.",
"Select the folders to share with this device.": "Selecciona la carpeta per a compartir en aquest dispositiu.",
"Settings": "Preferències",
"Share": "Compartir",
"Share Folder": "Compartir carpeta",
"Share Folders With Device": "Compartir carpetes en dispositiu",
"Share With Devices": "Compartir en dispositius",
"Share this folder?": "Compartir aquesta carpeta?",
"Shared With": "Compartir Amb",
"Short identifier for the folder. Must be the same on all cluster devices.": "Breva identificació per a la carpeta. Té que ser la mateixa per tots els dispositius del cluster.",
"Show ID": "Mostrar ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Mostrat en comptes del ID del Node en l'estat del cluster. Serà advertit als altres dispositius com un nom opcional per defecte.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Mostrat en comptes del ID del Node en l'estat del cluster. S'actualitzarà al nom del dispositiu si es deixa buit.",
"Shutdown": "Apagar",
"Shutdown Complete": "Apagat complet",
"Simple File Versioning": "Versionat de Fitxers Senzill",
"Single level wildcard (matches within a directory only)": "Caràcter comodí de nivell singular (aparella sóls en una carpeta)",
"Source Code": "Codi Font",
"Staggered File Versioning": "Versionat de Fitxers Esglaonat",
"Start Browser": "Arrancar Navegador",
"Stopped": "Aturat",
"Support": "Suport",
"Support / Forum": "Suport / Fòrum",
"Sync Protocol Listen Addresses": "Adreça d'escolta del Protocol Sync",
"Synchronization": "Sincronització",
"Syncing": "Synthing",
"Syncthing has been shut down.": "S'ha aturat el synthing.",
"Syncthing includes the following software or portions thereof:": "Syncthing inclou el següent programari o parts dels mateixos:",
"Syncthing is restarting.": "Reiniciant syncthing.",
"Syncthing is upgrading.": "Actualitzant syncthing.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Synthing sembla parat, o hi ha algun problema amb la connexió a Internet. Reintentant...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing sembla estar experimentant un problema en processar la teva sol·licitud. Si us plau, recarregeu el navegador o reinicieu Syncthing si el problema persisteix.",
"The aggregated statistics are publicly available at {%url%}.": "Les estadístiques agregades estan públicament disponibles a {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuració s'ha guardar però no s'ha activat. S'ha de reiniciar el synthing per activar la nova configuració.",
"The device ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "El ID del dispositiu per introduir ací es pot trobar al diàleg \"Editar > Mostrar ID\" en l'altre dispositiu. Els espais i les barres son opcionals (s'ignoren).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "L'informe d'ús encriptat s'envia diàriament. Es fa servir per rastrejar plataformes habituals, mides de carpetes i versions de l'aplicació. Si es canvia el conjunt de dades reportades es demanarà amb aquest diàleg de nou.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "El ID del dispositiu introduït no sembla vàlid. Hauria de tenir 52 o 56 caràcters amb lletres i números, els espais i les barres son opcionals.",
"The folder ID cannot be blank.": "El ID del dispositiu no pot estar en blanc.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "El ID de la carpeta ha de ser un identificador curt (64 caràcters o menys) format només per lletres, nombres i el punt (.), barra (-) i barra baixa (_).",
"The folder ID must be unique.": "El ID de la carpeta ha de ser únic.",
"The folder path cannot be blank.": "El camí a la carpeta no pot estar en blanc.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es fan servir els següents intervals: per la primera hora es manté una versió cada 30 segons, pel primer dia es manté una versió cada hora, pel primer cada 30 dies es manté una versió cada dia, fins el màxim d'antiguitat es manté una versió cada setmana.",
"The maximum age must be a number and cannot be blank.": "La màxima antiguitat ha de ser un número i no pot estar en blanc.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Temps màxim en mantenir una versió (en dies, si es deixa en 0 es mantenen les versions per sempre).",
"The number of old versions to keep, per file.": "El nombre de versions antigues que es mantenen per fitxer.",
"The number of versions must be a number and cannot be blank.": "El nombre de versions ha de ser un número i no es pot deixar en blanc.",
"The rescan interval must be a non-negative number of seconds.": "El interval de re-escaneig ha der ser un nombre positiu de segons.",
"The rescan interval must be at least 5 seconds.": "El interval de re-escaneig ha de ser com a mínim de 5 segons.",
"Unknown": "Desconegut",
"Unshared": "No compartit",
"Unused": "No usat",
"Up to Date": "Actualitzat",
"Upgrade To {%version%}": "Actualitzar a {{version}}",
"Upgrading": "Actualitzant",
"Upload Rate": "Tasca de Pujada",
"Use Compression": "Utilitza compressió",
"Use HTTPS for GUI": "Utilitzar HTTPS pel GUI",
"Version": "Versió",
"Versions Path": "Carpeta de les Versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions son automàticament eliminades si son més antigues que el màxim d'antiguitat o si excedeixen del nombre de fitxers permesos en un interval.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Quan s'afegeix un nou dispositiu, recorda que aquest dispositiu tambè s'ha d'afegir a l'altre banda.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Quan s'afegeix una nova carpeta recorda que el ID d'aquesta s'utilitza per lligar repositoris entre els dispositius. Es distingeix entre majúscules i minúscules i ha de ser exactament iguals entre tots els dispositius.",
"Yes": "Si",
"You must keep at least one version.": "Has de mantenir com a mínim una versió.",
"full documentation": "documentació sencera",
"items": "Elements",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vol compartir la carpeta \"{{folder}}\"."
}

View File

@@ -1,8 +1,10 @@
{
"API Key": "API klíč",
"About": "O aplikaci",
"Add": "Přidat",
"Add Device": "Přidat přístroj",
"Add Folder": "Přidat adresář",
"Add new folder?": "Přidat nový adresář?",
"Address": "Adresa",
"Addresses": "Adresy",
"Allow Anonymous Usage Reporting?": "Povolit anonymní hlášení o používání?",
@@ -11,18 +13,25 @@
"Automatic upgrades": "Automatický upgrade",
"Bugs": "Chyby",
"CPU Utilization": "Využití CPU",
"Changelog": "Changelog",
"Close": "Zavřít",
"Comment, when used at the start of a line": "Komentář, pokud použito na začátku řádku",
"Compression is recommended in most setups.": "Komprese je doporučena pro většinu nastavení.",
"Connection Error": "Chyba připojení",
"Copied from elsewhere": "Zkopírováno odjinud",
"Copied from original": "Zkopírováno z originálu",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg a následující přispěvatelé:",
"Delete": "Smazat",
"Device ID": "ID přístroje",
"Device Identification": "Identifikace přístroje",
"Device Name": "Jméno přístroje",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Přístroj {{device}} ({{address}}) žádá o připojení. Chcete ho přidat?",
"Devices": "Přístroje",
"Disconnected": "Odpojeno",
"Documentation": "Dokumentace",
"Download Rate": "Rychlost stahování",
"Downloaded": "Staženo",
"Downloading": "Stahuji",
"Edit": "Upravit",
"Edit Device": "Upravit přístroj",
"Edit Folder": "Upravit adresář",
@@ -37,7 +46,8 @@
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Soubory jsou chráněny před změnami na ostatních přístrojích, ale změny provedené z tohoto umístění budou rozeslány na zbytek clusteru.",
"Folder ID": "ID adresáře",
"Folder Master": "Master adresář",
"Folder Path": "Cesta adresáře",
"Folder Path": "Cesta k adresáři",
"Folders": "Adresáře",
"GUI Authentication Password": "Přihlašovací heslo pro GUI",
"GUI Authentication User": "Přihlašovací jméno pro GUI",
"GUI Listen Addresses": "Adresa naslouchání GUI",
@@ -46,19 +56,25 @@
"Global Discovery Server": "Server globálního oznamování",
"Global State": "Všeobecný status",
"Idle": "Nečinný",
"Ignore": "Ignorovat",
"Ignore Patterns": "Ignorované vzory",
"Ignore Permissions": "Ignorovat oprávnění",
"Incoming Rate Limit (KiB/s)": "Omezení příchozí rychlosti (KiB/s)",
"Introducer": "Zavaděč",
"Inversion of the given condition (i.e. do not exclude)": "Prohození zadané podmínky (např. nevynechat)",
"Keep Versions": "Ponechat verze",
"Last File Received": "Poslední soubor přijat",
"Last File Synced": "Poslední soubor synchronizován",
"Last seen": "Naposledy spatřen",
"Later": "Později",
"Latest Release": "Poslední vydání",
"Local Discovery": "Místní oznamování",
"Local State": "Místní status",
"Maximum Age": "Maximální časový limit",
"Multi level wildcard (matches multiple directory levels)": "Víceúrovňový zástupný znak (shoda skrz více úrovní adresářů)",
"Never": "Nikdy",
"New Device": "Nový přístroj",
"New Folder": "Nový adresář",
"No": "Ne",
"No File Versioning": "Bez verzí souborů",
"Notice": "Oznámení",
@@ -66,6 +82,7 @@
"Offline": "Offline",
"Online": "Online",
"Out Of Sync": "Nesesynchronizováno",
"Out of Sync Items": "Nesesynchronizované položky",
"Outgoing Rate Limit (KiB/s)": "Omezení odchozí rychlosti (KiB/s)",
"Override Changes": "Přepsat změny",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Cesta k adresáři na lokálním počítači. Pokud neexistuje, bude vytvořen. Znak vlnovky (~) může být použit jako zkratka pro",
@@ -80,23 +97,31 @@
"Restart": "Restart",
"Restart Needed": "Je nutný restart",
"Restarting": "Restartuji",
"Reused": "Opakovaně použité",
"Save": "Uložit",
"Scanning": "Skenování",
"Select the devices to share this folder with.": "Vybrat přístroje se kterými sdílet tento adresář.",
"Select the folders to share with this device.": "Vybrat adresáře sdílené tomuto přístroji.",
"Settings": "Nastavení",
"Share": "Sdílet",
"Share Folder": "Sdílet adresář",
"Share Folders With Device": "Sdílet adresáře tomuto přístroji",
"Share With Devices": "Sdílet s přístroji",
"Share this folder?": "Sdílet tento adresář?",
"Shared With": "Sdíleno s",
"Short identifier for the folder. Must be the same on all cluster devices.": "Krátký identifikátor tohoto adresáře. Musí být stejný na všech přístrojích v clusteru.",
"Show ID": "Zobrazit ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Zobrazeno místo ID přístroje na náhledu stavu clusteru. Bude odesíláno ostatním přístrojům jako dodatečné výchozí jméno.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Zobrazeno místo ID přístroje na náhledu stavu clusteru. Bude aktualizováno na jméno které přístroj odesílá pokud nebylo vyplněno.",
"Shutdown": "Vypnutí",
"Shutdown": "Vypnout",
"Shutdown Complete": "Vypnutí ukončeno",
"Simple File Versioning": "Jednoduché verze souborů",
"Single level wildcard (matches within a directory only)": "Jednoúrovňový zástupný znak (shody pouze uvnitř adresáře)",
"Source Code": "Zdrojový kód",
"Staggered File Versioning": "Vícenásobné verze souborů",
"Start Browser": "Otevřít prohlížeč",
"Stopped": "Pozastaveno",
"Support": "Podpora",
"Support / Forum": "Podpora / Fórum",
"Sync Protocol Listen Addresses": "Adresa naslouchání synchronizačního protokolu",
"Synchronization": "Synchronizace",
@@ -106,6 +131,7 @@
"Syncthing is restarting.": "Syncthing se restartuje.",
"Syncthing is upgrading.": "Syncthing se aktualizuje.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing se zdá být nefunkční, nebo je problém s připojením k Internetu. Opakuji...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing má nejspíše problém s provedením vašeho požadavku. Pokud problém přetrvává, načtěte znovu data v prohlížeči nebo restartujte Syncthing.",
"The aggregated statistics are publicly available at {%url%}.": "Souhrnné statistiky jsou veřejně dostupné na {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Konfigurace byla uložena, ale není aktivována. Pro aktivaci nové konfigurace je třeba restartovat Syncthing.",
"The device ID cannot be blank.": "ID přístroje nemůže být prázdné.",
@@ -124,6 +150,8 @@
"The rescan interval must be a non-negative number of seconds.": "Interval pro opakování skenování musí být pozitivní číslo.",
"The rescan interval must be at least 5 seconds.": "Interval opakování skenování musí být delší než 5 sekund.",
"Unknown": "Neznámý",
"Unshared": "Nesdílené",
"Unused": "Nepoužité",
"Up to Date": "Aktuální",
"Upgrade To {%version%}": "Aktualizovat na {{version}}",
"Upgrading": "Aktualizuji",
@@ -138,5 +166,6 @@
"Yes": "Ano",
"You must keep at least one version.": "Je třeba ponechat alespoň jednu verzi.",
"full documentation": "plná dokumentace",
"items": "položky"
"items": "položky",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} chce sdílet adresář \"{{folder}}\"."
}

View File

@@ -1,8 +1,10 @@
{
"API Key": "API-Schlüssel",
"About": "Über Syncthing",
"Add": "Hinzufügen",
"Add Device": "Gerät hinzufügen",
"Add Folder": "Verzeichnis hinzufügen",
"Add new folder?": "Neues Verzeichnis hinzufügen?",
"Address": "Adresse",
"Addresses": "Adressen",
"Allow Anonymous Usage Reporting?": "Übertragung von anonymen Nutzungsberichten erlauben?",
@@ -11,18 +13,25 @@
"Automatic upgrades": "Automat. Updates",
"Bugs": "Fehler",
"CPU Utilization": "Prozessorauslastung",
"Changelog": "Versionsinfo",
"Close": "Schließen",
"Comment, when used at the start of a line": "Kommentar, wenn am Anfang der Zeile benutzt.",
"Compression is recommended in most setups.": "Datenkomprimierung ist für die meisten Anwendungen empfohlen",
"Connection Error": "Verbindungsfehler",
"Copied from elsewhere": "Von woanders kopiert",
"Copied from original": "Vom Originial kopiert",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg und folgende Unterstützer:",
"Delete": "Löschen",
"Device ID": "Geräte ID",
"Device Identification": "Gerät Identifikation",
"Device Name": "Gerätename",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Gerät {{device}} ({{address}}) möchte sich verbinden. Gerät hinzufügen?",
"Devices": "Geräte",
"Disconnected": "Getrennt",
"Documentation": "Dokumentation",
"Download Rate": "Downloadgeschwindigkeit",
"Download Rate": "Download",
"Downloaded": "Heruntergeladen",
"Downloading": "Lädt herunter",
"Edit": "Bearbeiten",
"Edit Device": "Gerät bearbeiten",
"Edit Folder": "Verzeichnis bearbeiten",
@@ -33,11 +42,12 @@
"Error": "Fehler",
"File Versioning": "Dateiversionierung",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Dateizugriffsrechte beim Suchen nach Veränderungen ignorieren. Bei FAT-Dateisystemen verwenden.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Dateien werden, bevor Syncthing sie löscht oder ersetzt, als datierte Versionen in einen Ordner names .stversions verschoben.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Dateien werden, bevor Syncthing sie löscht oder ersetzt, als datierte Versionen in ein Verzeichnis names .stversions verschoben.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Dateien sind vor Veränderung durch andere Geräte geschützt, auf diesem Gerät durchgeführte Veränderungen werden aber auf den Rest des Verbunds übertragen.",
"Folder ID": "Verzeichnis ID",
"Folder Master": "Keine Veränderungen zulassen",
"Folder Path": "Verzeichnispfad",
"Folders": "Verzeichnisse",
"GUI Authentication Password": "Passwort für Zugang zur Benutzeroberfläche",
"GUI Authentication User": "Nutzername für Zugang zur Benutzeroberfläche",
"GUI Listen Addresses": "Adresse(n) für die Benutzeroberfläche",
@@ -46,57 +56,72 @@
"Global Discovery Server": "Globaler Auffindungsserver",
"Global State": "Globaler Status",
"Idle": "Untätig",
"Ignore": "Ignorieren",
"Ignore Patterns": "Ignoriermuster",
"Ignore Permissions": "Berechtigungen ignorieren",
"Incoming Rate Limit (KiB/s)": "Eingehendes Datenratelimit (KiB/s)",
"Introducer": "Verteilergerät",
"Inversion of the given condition (i.e. do not exclude)": "Umkehrung der angegebenen Bedingung (z.B. schließe nicht aus)",
"Keep Versions": "Versionen erhalten",
"Last File Received": "Letzte Datei",
"Last File Synced": "Letzte Änderung",
"Last seen": "Zuletzt online",
"Later": "Später",
"Latest Release": "Letzte Veröffentlichung",
"Local Discovery": "Lokale Auffindung",
"Local State": "Lokaler Status",
"Maximum Age": "Höchstalter",
"Multi level wildcard (matches multiple directory levels)": "Verschachteltes Maskenzeichen (wird für verschachtelte Verzeichnisse verwendet)",
"Never": "Nie",
"New Device": "Neues Gerät",
"New Folder": "Neues Verzeichnis",
"No": "Nein",
"No File Versioning": "Keine Dateiversionierung",
"Notice": "Benachrichtigung",
"OK": "Ok",
"Notice": "Hinweis",
"OK": "OK",
"Offline": "Offline",
"Online": "Online",
"Out Of Sync": "Nicht synchronisiert",
"Out of Sync Items": "Nicht synchronisierte Objekte",
"Outgoing Rate Limit (KiB/s)": "Ausgehendes Datenratelimit (KiB/s)",
"Override Changes": "Änderungen überschreiben",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Pfad zum Verzeichnis auf dem lokalen Rechner. Wird erzeugt, wenn es nicht existiert. Das Tilden-Zeichen (~) kann als Abkürzung benutzt werden für",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Pfad in dem die Versionen gespeichert werden sollen (ohne Angabe wird der Ordner .stversions im Verzeichnis verwendet).",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Pfad in dem die Versionen gespeichert werden sollen (ohne Angabe wird das Verzeichnis .stversions im Verzeichnis verwendet).",
"Please wait": "Bitte warten",
"Preview": "Vorschau",
"Preview Usage Report": "Vorschau des Nutzungsberichts",
"Quick guide to supported patterns": "Schnellanleitung zu den unterstützten Suchstrukturen",
"RAM Utilization": "Verwendeter Arbeitsspeicher",
"RAM Utilization": "RAM Auslastung",
"Rescan": "Überprüfen",
"Rescan Interval": "Suchintervall",
"Restart": "Neustart",
"Restart Needed": "Neustart notwendig",
"Restarting": "Wird neu gestartet",
"Reused": "Erneut benutzt",
"Save": "Speichern",
"Scanning": "Suche",
"Select the devices to share this folder with.": "Wähle die Geräte aus, mit denen Du dieses Verzeichnis teilen willst.",
"Select the folders to share with this device.": "Wähle die Verzeichnisse aus, die du mit diesem Gerät teilen möchtest",
"Settings": "Einstellungen",
"Share": "Teilen",
"Share Folder": "Teile Verzeichnis",
"Share Folders With Device": "Teile Verzeichnisse mit diesem Gerät",
"Share With Devices": "Teile mit diesen Geräten",
"Share this folder?": "Dieses Verzeichnis teilen?",
"Shared With": "Geteilt mit",
"Short identifier for the folder. Must be the same on all cluster devices.": "Kurze ID für das Verzeichnis. Muss auf allen Verbunds-Geräten gleich sein.",
"Show ID": "ID anzeigen",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Wird anstatt der Geräte ID im Verbunds-Status angezeigt. Wird als optionaler Standardname an andere Geräte bekannt gegeben.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Wird anstatt der Geräte ID im Verbunds-Status angezeigt. Wird auf den Namen aktualisiert, den das Gerät angibt.",
"Shutdown": "Herunterfahren",
"Shutdown Complete": "Vollständig Heruntergefahren",
"Simple File Versioning": "Einfache Dateiversionierung",
"Single level wildcard (matches within a directory only)": "Einzelnes Maskenzeichen (wird für ein einzelnes Verzeichnis verwendet)",
"Source Code": "Quellcode",
"Staggered File Versioning": "Stufenweise Dateiversionierung",
"Start Browser": "Starte Browser",
"Start Browser": "Browser starten",
"Stopped": "Gestoppt",
"Support": "Support",
"Support / Forum": "Support / Forum",
"Sync Protocol Listen Addresses": "Adresse(n) für das Synchronisierungsprotokoll",
"Synchronization": "Synchronisierung",
@@ -106,6 +131,7 @@
"Syncthing is restarting.": "Syncthing wird neu gestartet",
"Syncthing is upgrading.": "Syncthing wird aktualisiert",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing scheint nicht erreichbar zu sein oder es gibt ein Problem mit Deiner Internetverbindung. Versuche erneut...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Es scheint als ob Syncthing ein Problem mit der Verarbeitung ihrer Eingabe hat. Bitte laden sie die Seite neu oder führen sie einen Neustart von Syncthing durch, falls das Problem weiterhin besteht.",
"The aggregated statistics are publicly available at {%url%}.": "Die gesammelten Statistiken sind öffentlich verfügbar unter {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Die Konfiguration wurde gespeichert, aber nicht aktiviert. Syncthing muss neugestartet werden um die neue Konfiguration zu aktivieren.",
"The device ID cannot be blank.": "Die Geräte ID darf nicht leer sein.",
@@ -113,7 +139,7 @@
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Der verschlüsselte Nutzungsbericht wird täglich gesendet. Er wird benutzt um Statistiken über verwendete Betriebssysteme, Verzeichnis-Größen und Programm-Versionen zu erstellen. Sollte der Bericht in Zukunft weitere Daten erfassen, wird dieses Fenster erneut angezeigt.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Die eingegebene Geräte ID scheint nicht gültig zu sein. Es sollte eine 52 oder 56 stellige Zeichenkette aus Buchstaben und Nummern sein. Leerzeichen und Bindestriche sind optional.",
"The folder ID cannot be blank.": "Die Verzeichnis ID darf nicht leer sein.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Die Verzeichnis ID muss eine kurze Kennung (64 Zeichen oder weniger) sein. Sie kann nur aus Buchstaben, Zahlen und den Punkt- (.), Bindestrich- (-), und Unterstrich- (_) Zeichen bestehen.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Die Verzeichnis ID muss eine kurze Kennung (64 Zeichen oder weniger) sein. Sie kann nur aus Buchstaben, Zahlen und dem Punkt- (.), Bindestrich- (-), und Unterstrich- (_) Zeichen bestehen.",
"The folder ID must be unique.": "Die Verzeichnis ID muss eindeutig sein.",
"The folder path cannot be blank.": "Der Verzeichnispfad kann nicht leer sein",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Es wird in folgenden Abständen versioniert: in der ersten Stunde wird alle 30 Sekunden eine Version behalten, am ersten Tag eine jede Stunde, in den ersten 30 Tagen eine jeden Tag, danach wird bis zum Höchstalter eine Version pro Woche beibehalten.",
@@ -124,12 +150,14 @@
"The rescan interval must be a non-negative number of seconds.": "Das Suchintervall muss eine nicht negative Anzahl von Sekunden sein.",
"The rescan interval must be at least 5 seconds.": "Das Suchintervall muss mindestens 5 Sekunden betragen.",
"Unknown": "Unbekannt",
"Unshared": "Ungeteilt",
"Unused": "Ungenutzt",
"Up to Date": "Aktuell",
"Upgrade To {%version%}": "Update auf {{version}}",
"Upgrading": "Wird aktualisiert",
"Upload Rate": "Uploadgeschwindigkeit",
"Upload Rate": "Upload",
"Use Compression": "Benutze Komprimierung",
"Use HTTPS for GUI": "Benutze HTTPS für Benutzeroberfläche",
"Use HTTPS for GUI": "HTTPS für Benutzeroberfläche benutzen",
"Version": "Version",
"Versions Path": "Versionierungspfad",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Alte Versionen werden automatisch gelöscht wenn sie älter als das angegebene Höchstalter sind oder die Höchstzahl der Dateien pro Zeitabschnitt überschritten wird.",
@@ -138,5 +166,6 @@
"Yes": "Ja",
"You must keep at least one version.": "Du musst mindestens eine Version behalten.",
"full documentation": "Komplette Dokumentation",
"items": "Einträge"
"items": "Einträge",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} möchte das Verzeichnis \"{{folder}}\" teilen."
}
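A pattern worth noting in these language files: the English source keys carry {%name%} placeholders while the translated values use {{name}}, so the GUI presumably looks a string up by its English source text and then substitutes runtime values (device name, folder label, URL, version) into the translation. Below is a minimal sketch of that substitution step in TypeScript; the function name and the inlined table are illustrative assumptions, not code from the Syncthing GUI.

// Hypothetical illustration of looking up a translated string and filling
// {{placeholder}} slots with runtime values. Not Syncthing's actual code.
type Translations = Record<string, string>;

function translate(
  table: Translations,
  key: string,
  params: Record<string, string> = {}
): string {
  // Fall back to the English source text when the locale has no entry.
  const template = table[key] ?? key;
  // Fill each {{name}} slot with the matching runtime value, if provided.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in params ? params[name] : match
  );
}

// Example against the German entry shown above; the table is inlined here
// only for illustration (the real file would be loaded as JSON).
const de: Translations = {
  '{%device%} wants to share folder "{%folder%}".':
    '{{device}} möchte das Verzeichnis "{{folder}}" teilen.',
};

console.log(
  translate(de, '{%device%} wants to share folder "{%folder%}".', {
    device: "laptop",
    folder: "photos",
  })
);
// -> laptop möchte das Verzeichnis "photos" teilen.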


@@ -0,0 +1,171 @@
{
"API Key": "Κλειδί API",
"About": "Σχετικά",
"Add": "Προσθήκη",
"Add Device": "Προσθήκη συσκευής",
"Add Folder": "Προσθήκη φακέλου",
"Add new folder?": "Προσθήκη νέου φακέλου;",
"Address": "Διεύθυνση",
"Addresses": "Διευθύνσεις",
"Allow Anonymous Usage Reporting?": "Να επιτρέπεται η αποστολή ανώνυμων στοιχείων χρήσης;",
"Anonymous Usage Reporting": "Ανώνυμα στοιχεία χρήσης",
"Any devices configured on an introducer device will be added to this device as well.": "Όλες οι συσκευές που είναι δηλωμένες σε μια συσκευή διαμοίρασης θα υπάρχουν και εδώ",
"Automatic upgrades": "Αυτόματη αναβάθμιση",
"Bugs": "Bugs",
"CPU Utilization": "Επιβάρυνση του επεξεργαστή",
"Changelog": "Λίστα αλλαγών",
"Close": "Τέλος",
"Comment, when used at the start of a line": "Σχόλιο, όταν χρησιμοποιείται στην αρχή μιας γραμμής",
"Compression is recommended in most setups.": "Η συμπίεση προτείνεται για τις περισσότερες εγκαταστάσεις",
"Connection Error": "Σφάλμα σύνδεσης",
"Copied from elsewhere": "Έχει αντιγραφεί από κάπου αλλού",
"Copied from original": "Έχει αντιγραφεί από το πρωτότυπο",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 από τον Jakob Borg και τους παρακάτω υποστηρικτές:",
"Delete": "Διαγραφή",
"Device ID": "ID Συσκευής",
"Device Identification": "Ταυτότητα Συσκευής",
"Device Name": "Όνομα Συσκευής",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Η συσκευή {{device}} ({{address}}) επιθυμεί να συνδεθεί. Προσθήκη της νέας συσκευής?",
"Devices": "Συσκευές",
"Disconnected": "Αποσυνδεδεμένος",
"Documentation": "Τεκμηρίωση",
"Download Rate": "Ταχύτητα Λήψης",
"Downloaded": "Έχει ληφθεί",
"Downloading": "Λήψη",
"Edit": "Επεξεργασία",
"Edit Device": "Επεξεργασία Συσκευής",
"Edit Folder": "Επεξεργασία Φακέλου",
"Editing": "Επεξεργασία σε εξέλιξη",
"Enable UPnP": "Ενεργοποίηση UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Γράψε τις διευθύνσεις IP με τη μορφή \"ip:θύρα\", διαχωρισμένες με κόμμα. Αλλιώς γράψε \"dynamic\" για να πραγματοποιηθεί η αυτόματη εξεύρεση διευθύνσεων.",
"Enter ignore patterns, one per line.": "Δώσε τα πρότυπα που θα αγνοηθούν, ένα σε κάθε γραμμή.",
"Error": "Σφάλμα",
"File Versioning": "File Versioning",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Να αγνοούνται τα δικαιώματα των αρχείων όταν γίνεται αναζήτηση για αλλαγές. Χρησιμοποιείται σε συστήματα αρχείων τύπου FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Όταν τα αρχεία αντικατασταθούν ή διαγραφούν από το syncthing, τότε να μεταφέρονται σε φάκελο με το όνομα .stversions με χρονική σήμανση.",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Τα αρχεία προστατεύονται από αλλαγές που γίνονται σε άλλες συσκευές, αλλά όποιες αλλαγές γίνουν σε αυτή τη συσκευή θα αποσταλούν στο όλη τη συστάδα συσκευών.",
"Folder ID": "ID Φακέλου",
"Folder Master": "Να μην επιτρέπονται αλλαγές",
"Folder Path": "Μονοπάτι Φακέλου",
"Folders": "Φάκελοι",
"GUI Authentication Password": "Κωδικός για την πρόσβαση στη διεπαφή",
"GUI Authentication User": "Χρηστώνυμο για την πρόσβαση στη διεπαφή",
"GUI Listen Addresses": "Διευθύνσεις στις οποίες θα αποκρίνεται η διεπαφή",
"Generate": "Δημιουργία",
"Global Discovery": "Να μπορεί να ανευρεθεί καθολικά (από παντού)",
"Global Discovery Server": "Διακομιστής καθολικής ανεύρεσης κόμβου",
"Global State": "Καθολική κατάσταση",
"Idle": "Ανενεργό",
"Ignore": "Αγνόησε",
"Ignore Patterns": "Ignore Patterns",
"Ignore Permissions": "Αγνόησε Δικαιώματα",
"Incoming Rate Limit (KiB/s)": "Περιορισμός ταχύτητας λήψης (KiB/δ)",
"Introducer": "Βασικός κόμβος",
"Inversion of the given condition (i.e. do not exclude)": "Αντιστροφή της δοσμένης συνθήκης (π.χ. να μην εξαιρείς) ",
"Keep Versions": "Διατήρηση εκδόσεων",
"Last File Received": "Πιο πρόσφατο αρχείο",
"Last File Synced": "Τελευταία αλλαγή",
"Last seen": "Τελευταία φορά συνδεδεμένος",
"Later": "Αργότερα",
"Latest Release": "Τελευταία Έκδοση",
"Local Discovery": "Τοπική Ανεύρεση",
"Local State": "Τοπική Κατάσταση",
"Maximum Age": "Μέγιστη ηλικία",
"Multi level wildcard (matches multiple directory levels)": "Τελεστής μπαλαντέρ (*) για πολλά επίπεδα (χρησιμοποιείται για εμφωλευμένους φακέλους)",
"Never": "Ποτέ",
"New Device": "Νέα Συσκευή",
"New Folder": "Νέος Φάκελος",
"No": "Αριθμός",
"No File Versioning": "Να μην τηρούνται παλιότερες εκδόσεις",
"Notice": "Σημείωση",
"OK": "OK",
"Offline": "Εκτός σύνδεσης",
"Online": "Συνδεδεμένο",
"Out Of Sync": "Μη Συγχρονισμένα",
"Out of Sync Items": "Μη συγχρονισμένα αντικείμενα",
"Outgoing Rate Limit (KiB/s)": "Ρυθμός αποστολής (KiB/δ)",
"Override Changes": "Να αντικατασταθούν οι αλλαγές",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Μονοπάτι του φακέλου σε αυτόν τον υπολογιστή. Αν δεν υπάρχει θα δημιουργηθεί. Η περισπωμένη (~) μπορεί να μπει σαν συντόμευση για",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Ο φάκελος στον οποίο θα αποθηκεεύονται οι εκδόσεις των αρχείων (αν δεν οριστεί θα αποθηκεύονται στον υποφάκελο .stversions)",
"Please wait": "Παρακαλώ περιμένετε",
"Preview": "Προεπισκόπηση",
"Preview Usage Report": "Προεπισκόπηση αναφοράς χρήσης",
"Quick guide to supported patterns": "Σύντομη βοήθεια σχετικά με τα πρότυπα αναζήτησης που υποστηρίζονται",
"RAM Utilization": "Επιβάρυνση RAM",
"Rescan": "Έλεγξε για αλλαγές",
"Rescan Interval": "Κάθε πότε θα ελέγχεται για αλλαγές ",
"Restart": "Επανεκκίνηση",
"Restart Needed": "Απαιτείται επανεκκίνηση",
"Restarting": "Επανεκκίνηση",
"Reused": "Χρησιμοποιήθηκε ξανά",
"Save": "Αποθήκευση",
"Scanning": "Έλεγχος για αλλαγές",
"Select the devices to share this folder with.": "Διάλεξε τις συσκευές προς τις οποίες θα διαμοιράζεται αυτός ο φάκελος.",
"Select the folders to share with this device.": "Διάλεξε ποιοι φάκελοι θα διαμοιράζονται προς αυτή τη συσκευή.",
"Settings": "Ρυθμίσεις",
"Share": "Διαμοίραση",
"Share Folder": "Διαμοίραση φακέλου",
"Share Folders With Device": "Διαμοίρασε τους φακέλους προς αυτή τη συσκευή",
"Share With Devices": "Διαμοίρασε με αυτές τις συσκευές",
"Share this folder?": "Να διαμοιραστεί αυτός ο φάκελος;",
"Shared With": "Διαμοιράζεται με",
"Short identifier for the folder. Must be the same on all cluster devices.": "Σύντομη ταυτότητα για το φάκελο. Θα πρέπει να είναι η ίδια σε όλες τις συσκευές με τις οποίες διαμοιράζεται ο φάκελος αυτός.",
"Show ID": "Εμφάνιση ταυτότητας",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Θα φαίνεται αντί για την ταυτότητα της συσκευής στην προβολή της κατάστασης ολόκληρης της συστάδας. Θα γνωστοποιείται σαν το προαιρετικό όνομα της συσκευής.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Θα φαίνεται αντί για την ταυτότητα της συσκευής στην προβολή της κατάστασης ολόκληρης της συστάδας. Θα ενημερώνεται αυτόματα αν αλλάξει το όνομα της συσκευής.",
"Shutdown": "Απενεργοποίηση",
"Shutdown Complete": "Πλήρης απενεργοποίηση",
"Simple File Versioning": "Απλή παρακολούθηση εκδόσεων",
"Single level wildcard (matches within a directory only)": "Τελεστής μπαλαντέρ (*) για ένα επίπεδο (χρησιμοποιείται για έναν φάκελο μόνο)",
"Source Code": "Πηγαίος κώδικας",
"Staggered File Versioning": "Να τηρούνται κλιμακούμενες εκδόσεις",
"Start Browser": "Εκκίνηση φυλλομετρητή",
"Stopped": "Απενεργοποιημένο",
"Support": "Υποστήριξη",
"Support / Forum": "Υποστήριξη / Φόρουμ",
"Sync Protocol Listen Addresses": "Διευθύνσεις για το πρωτόκολλο συγχρονισμού",
"Synchronization": "Συγχρονισμός",
"Syncing": "Συγχρονίζω",
"Syncthing has been shut down.": "Ο συγχρονισμός έχει απενεργοποιηθεί.",
"Syncthing includes the following software or portions thereof:": "Το syncthing περιλαμβάνει τα παρακάτω λογισμικά ή μέρη αυτών:",
"Syncthing is restarting.": "Το syncthing επανεκκινεί.",
"Syncthing is upgrading.": "Το syncthing αναβαθμίζεται.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Το syncthing φαίνεται πως είναι απενεργοποιημένο ή υπάρχει πρόβλημα στη σύνδεσή σας στο διαδίκτυο. Προσπαθώ πάλι…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Φαίνεται πως το syncthing έχει κάποιο πρόβλημα με αυτό που ζήτησες. Αν το πρόβλημα επιμείνει φόρτωσε ξανά τη σελίδα ή επανεκκίνησε το syncthing.",
"The aggregated statistics are publicly available at {%url%}.": "Τα στατιστικά που έχουν συλλεγεί είναι δημόσια διαθέσιμα στο {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "Οι ρυθμίσεις έχουν αποθηκευτεί αλλά δεν έχουν ενεργοποιηθεί. Πρέπει να επανεκκινήσεις το syncthing για να ισχύσουν οι νέες ρυθμίσεις.",
"The device ID cannot be blank.": "Η ταυτότητα της συσκευής δεν μπορεί να είναι κενή",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "Η ταυτότητα της συσκευής που θα μπει εδώ βρίσκεται στο μενού \"Επεξεργασία > Εμφάνιση ταυτότητας\" στην άλλη συσκευή. Κενοί χαρακτήρες και παύλες είναι προαιρετικοί (απλά θα αγνοηθούν).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "Η κρυπτογραφημένη αναφορά χρήσης στέλνεται καθημερινά. Χρησιμοποιείται για να ",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "Η ταυτότητα συσκευής που έδωσες δε φαίνεται έγκυρη. Θα πρέπει να είναι 52 ή 56 χαρακτήρες (γράμματα και αριθμοί). Τα κενά και οι παύλες είναι προαιρετικά (αδιάφορα).",
"The folder ID cannot be blank.": "Η ταυτότητα του φακέλου δεν μπορεί να είναι κενή",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "Η ταυτότητα του φακέλου πρέπει να είναι ένα σύντομο αναγνωριστικό (το πολύ 64 χαρακτήρες). Μπορεί να αποτελείται από γράμματα, αριθμούς, την τελεία (.), την παύλα (-) και την κάτω παύλα (_).",
"The folder ID must be unique.": "Η ταυτότητα του φακέλου πρέπει να είναι μοναδική.",
"The folder path cannot be blank.": "Το μονοπάτι του φακέλου δεν μπορεί να είναι κενό.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Θα χρησιμοποιούνται τα εξής διαστήματα: Την πρώτη ώρα θα τηρείται μια έκδοση κάθε 30 δευτερόλεπτα. Την πρώτη ημέρα μια έκδοση κάθε μια ώρα. Τις πρώτες 30 ημέρες, μία έκδοση κάθε ημέρα. Από εκεί και έπειτα μέχρι τη μέγιστη ηλικία, θα τηρείται μια έκδοση κάθε εβδομάδα.",
"The maximum age must be a number and cannot be blank.": "Η μέγιστη ηλικία πρέπει να είναι αριθμός και δεν μπορεί να μην δοθεί.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Η μέγιστη ηλικία παλιότερων εκδόσεων (σε ημέρες, αν δώσεις 0 οι παλιότερες εκδόσεις θα διατηρούνται για πάντα).",
"The number of old versions to keep, per file.": "Πόσες παλιότερες εκδόσεις θα διατηρούνται, ανά αρχείο.",
"The number of versions must be a number and cannot be blank.": "Ο αριθμός εκδόσεων πρέπει να είναι αριθμός και σίγουρα όχι κενό.",
"The rescan interval must be a non-negative number of seconds.": "Ο χρόνος επανελέγχου για αλλαγές είναι σε δευτερόλεπτα (δηλ. θετικός αριθμός).",
"The rescan interval must be at least 5 seconds.": "Ο χρόνος επανελέγχου πρέπει να είναι πάνω από 5 δευτερόλεπτα.",
"Unknown": "Άγνωστο",
"Unshared": "Δεν μοιράζεται",
"Unused": "Δεν χρησιμοποιείται",
"Up to Date": "Ενημερωμένος",
"Upgrade To {%version%}": "Αναβάθμιση στην έκδοση {{version}}",
"Upgrading": "Αναβάθμιση",
"Upload Rate": "Upload Rate",
"Use Compression": "Χρήση συμπίεσης",
"Use HTTPS for GUI": "Χρήση HTTPS για το GUI",
"Version": "Έκδοση",
"Versions Path": "Φάκελος παλιότερων εκδόσεων",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Οι παλιές εκδόσεις θα σβήνονται αυτόματα όταν ξεπεράσουν τη μέγιστη ηλικία ή όταν ξεπεραστεί ο μέγιστος αριθμός αρχείων ανά περίοδο.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "When adding a new device, keep in mind that this device must be added on the other side too.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.",
"Yes": "Ναι",
"You must keep at least one version.": "You must keep at least one version.",
"full documentation": "πλήρης τεκμηρίωση",
"items": "items",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} θέλει να μοιράσει τον φάκελο \"{{folder}}\"."
}


@@ -1,8 +1,10 @@
{
"API Key": "API Key",
"About": "About",
"Add": "Add",
"Add Device": "Add Device",
"Add Folder": "Add Folder",
"Add new folder?": "Add new folder?",
"Address": "Address",
"Addresses": "Addresses",
"Allow Anonymous Usage Reporting?": "Allow Anonymous Usage Reporting?",
@@ -11,6 +13,7 @@
"Automatic upgrades": "Automatic upgrades",
"Bugs": "Bugs",
"CPU Utilization": "CPU Utilization",
"Changelog": "Changelog",
"Close": "Close",
"Comment, when used at the start of a line": "Comment, when used at the start of a line",
"Compression is recommended in most setups.": "Compression is recommended in most setups.",
@@ -22,6 +25,8 @@
"Device ID": "Device ID",
"Device Identification": "Device Identification",
"Device Name": "Device Name",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Device {{device}} ({{address}}) wants to connect. Add new device?",
"Devices": "Devices",
"Disconnected": "Disconnected",
"Documentation": "Documentation",
"Download Rate": "Download Rate",
@@ -51,20 +56,25 @@
"Global Discovery Server": "Global Discovery Server",
"Global State": "Global State",
"Idle": "Idle",
"Ignore": "Ignore",
"Ignore Patterns": "Ignore Patterns",
"Ignore Permissions": "Ignore Permissions",
"Incoming Rate Limit (KiB/s)": "Incoming Rate Limit (KiB/s)",
"Introducer": "Introducer",
"Inversion of the given condition (i.e. do not exclude)": "Inversion of the given condition (i.e. do not exclude)",
"Keep Versions": "Keep Versions",
"Last File Received": "Last File Received",
"Last File Synced": "Last File Synced",
"Last seen": "Last seen",
"Later": "Later",
"Latest Release": "Latest Release",
"Legend:": "Legend:",
"Local Discovery": "Local Discovery",
"Local State": "Local State",
"Maximum Age": "Maximum Age",
"Multi level wildcard (matches multiple directory levels)": "Multi level wildcard (matches multiple directory levels)",
"Never": "Never",
"New Device": "New Device",
"New Folder": "New Folder",
"No": "No",
"No File Versioning": "No File Versioning",
"Notice": "Notice",
@@ -72,6 +82,7 @@
"Offline": "Offline",
"Online": "Online",
"Out Of Sync": "Out Of Sync",
"Out of Sync Items": "Out of Sync Items",
"Outgoing Rate Limit (KiB/s)": "Outgoing Rate Limit (KiB/s)",
"Override Changes": "Override Changes",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for",
@@ -90,20 +101,27 @@
"Save": "Save",
"Scanning": "Scanning",
"Select the devices to share this folder with.": "Select the devices to share this folder with.",
"Select the folders to share with this device.": "Select the folders to share with this device.",
"Settings": "Settings",
"Share": "Share",
"Share Folder": "Share Folder",
"Share Folders With Device": "Share Folders With Device",
"Share With Devices": "Share With Devices",
"Share this folder?": "Share this folder?",
"Shared With": "Shared With",
"Short identifier for the folder. Must be the same on all cluster devices.": "Short identifier for the folder. Must be the same on all cluster devices.",
"Show ID": "Show ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.",
"Shutdown": "Shutdown",
"Shutdown Complete": "Shutdown Complete",
"Simple File Versioning": "Simple File Versioning",
"Single level wildcard (matches within a directory only)": "Single level wildcard (matches within a directory only)",
"Source Code": "Source Code",
"Staggered File Versioning": "Staggered File Versioning",
"Start Browser": "Start Browser",
"Stopped": "Stopped",
"Support": "Support",
"Support / Forum": "Support / Forum",
"Sync Protocol Listen Addresses": "Sync Protocol Listen Addresses",
"Synchronization": "Synchronization",
@@ -113,6 +131,7 @@
"Syncthing is restarting.": "Syncthing is restarting.",
"Syncthing is upgrading.": "Syncthing is upgrading.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
"The aggregated statistics are publicly available at {%url%}.": "The aggregated statistics are publicly available at {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.",
"The device ID cannot be blank.": "The device ID cannot be blank.",
@@ -147,5 +166,6 @@
"Yes": "Yes",
"You must keep at least one version.": "You must keep at least one version.",
"full documentation": "full documentation",
"items": "items"
"items": "items",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} wants to share folder \"{{folder}}\"."
}
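Several of the locales added in this changeset still contain untranslated entries, visible as values that simply repeat the English key (for example "Ignore Patterns" and "Upload Rate" in the Greek file, or "Share Folder" in the Hungarian one). The sketch below is a quick, hypothetical review aid for spotting such gaps when comparing a locale against the English template; the file names are assumptions for illustration, not paths from this repository.

// Compare a locale file with the English template and report keys that are
// missing or apparently untranslated. Paths are illustrative only.
import { readFileSync } from "fs";

const en: Record<string, string> = JSON.parse(readFileSync("lang-en.json", "utf8"));
const el: Record<string, string> = JSON.parse(readFileSync("lang-el.json", "utf8"));

const missing = Object.keys(en).filter((key) => !(key in el));
// Some entries legitimately match the English ("OK", "Version", ...), so
// treat this list as a review aid rather than a hard failure.
const untranslated = Object.keys(el).filter((key) => el[key] === key);

console.log(`${missing.length} missing keys`, missing);
console.log(`${untranslated.length} possibly untranslated keys`, untranslated);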


@@ -0,0 +1,171 @@
{
"API Key": "Clave API",
"About": "Acerca de",
"Add": "Agregar",
"Add Device": "Agregar dispositivo",
"Add Folder": "Agregar repositorio",
"Add new folder?": "¿Agregar nuevo repositorio?",
"Address": "Dirección",
"Addresses": "Direcciones",
"Allow Anonymous Usage Reporting?": "Permitir reporte anónimo de uso?",
"Anonymous Usage Reporting": "Reporte anónimo de uso",
"Any devices configured on an introducer device will be added to this device as well.": "Cualquier dispositivo configurado en un dispositivo introductor será también agregado a este dispositivo.",
"Automatic upgrades": "Actualizaciones automáticas",
"Bugs": "Errores",
"CPU Utilization": "Uso de la CPU",
"Changelog": "Registro de cambios",
"Close": "Cerrar",
"Comment, when used at the start of a line": "Comentario, cuando es utilizado al inicio de una línea.",
"Compression is recommended in most setups.": "La compresión de datos es recomendada para la mayoría de las configuraciones.",
"Connection Error": "Error de conexión",
"Copied from elsewhere": "Copiado de otra parte.",
"Copied from original": "Copiado del original",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Derechos de autor © 2014 Jakob Borg y los siguientes colaboradores:",
"Delete": "Suprimir",
"Device ID": "ID del dispositivo",
"Device Identification": "Identificación del dispositivo",
"Device Name": "Nombre del dispositivo",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "El dispositivo {{device}} ({{address}}) se quiere conectar. ¿Agregar nuevo dispositivo?",
"Devices": "Dispositivos",
"Disconnected": "Desconectado",
"Documentation": "Documentación",
"Download Rate": "Tasa de descarga",
"Downloaded": "Descargado",
"Downloading": "Descargando",
"Edit": "Editar",
"Edit Device": "Editar dispositivo",
"Edit Folder": "Editar repositorio",
"Editing": "Editando",
"Enable UPnP": "Permitir UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Ingrese las direcciones \"ip:puerto\" separadas por coma, o \"dynamic\" para descubrir automáticamente las direcciones.",
"Enter ignore patterns, one per line.": "Añadir patrones de exclusión, uno por línea.",
"Error": "Error",
"File Versioning": "Control de versiones",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Los permisos de archivo son ignorados al buscar cambios. Usar en sistemas de archivos FAT.",
"Files are moved to date stamped versions in a .stversions folder when replaced or deleted by syncthing.": "Los archivos son movidos al directorio .stversions y renombrados a versiones marcadas por fecha cuando son reemplazados o eliminados por Syncthing,",
"Files are protected from changes made on other devices, but changes made on this device will be sent to the rest of the cluster.": "Los archivos están protegidos frente a los cambios realizados en otros dispositivos, peros los cambios realizados en este dispositivo serán envíados al resto del grupo",
"Folder ID": "ID del repositorio",
"Folder Master": "Repositorio maestro",
"Folder Path": "Ruta del repositorio",
"Folders": "Repositorios",
"GUI Authentication Password": "Contraseña de autenticación de la GUI",
"GUI Authentication User": "Usuario de la GUI",
"GUI Listen Addresses": "Direcciones de escucha para la GUI.",
"Generate": "Generar",
"Global Discovery": "Búsqueda en internet",
"Global Discovery Server": "Servidor global de identificación",
"Global State": "Estado global",
"Idle": "Inactivo",
"Ignore": "Ignorar",
"Ignore Patterns": "Patrones de exclusión",
"Ignore Permissions": "Ignorar permisos",
"Incoming Rate Limit (KiB/s)": "Límite de velocidad de entrada (KiB/s)",
"Introducer": "Introductor",
"Inversion of the given condition (i.e. do not exclude)": "Inversión de la condición dada (es decir, no excluir)",
"Keep Versions": "Conservar versiones",
"Last File Received": "Último archivo recibido",
"Last File Synced": "Último archivo sincronizado.",
"Last seen": "Visto por ultima vez",
"Later": "Más tarde",
"Latest Release": "Última versión",
"Local Discovery": "Búsqueda en red local",
"Local State": "Estado local",
"Maximum Age": "Edad máxima",
"Multi level wildcard (matches multiple directory levels)": "Carácter comodín multinivel (coincide en el directorio y sus subdirectorios)",
"Never": "Nunca",
"New Device": "Nuevo dispositivo",
"New Folder": "Nuevo repositorio",
"No": "No",
"No File Versioning": "Sin control de versiones de archivos",
"Notice": "Aviso",
"OK": "OK",
"Offline": "Fuera de linea",
"Online": "En linea",
"Out Of Sync": "Fuera de sincronización",
"Out of Sync Items": "Ítems no sincronizados",
"Outgoing Rate Limit (KiB/s)": "Tasa máxima de envío (KiB/s)",
"Override Changes": "Reemplazar los cambios",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Ruta del repositorio en el equipo local. Será creado si no existe. El carácter tilde (~) puede ser utilizado como atajo de ",
"Path where versions should be stored (leave empty for the default .stversions folder in the folder).": "Ruta donde serán guardas las versiones (dejar vacío para usar el directorio predifinido \".stversions\" en el repositorio)",
"Please wait": "Aguarde por favor",
"Preview": "Vista previa",
"Preview Usage Report": "Ver reporte de uso",
"Quick guide to supported patterns": "Guía rápida sobre los patrones soportados",
"RAM Utilization": "Utilización de RAM",
"Rescan": "Reescanear",
"Rescan Interval": "Intervalo de reescaneo",
"Restart": "Reiniciar",
"Restart Needed": "Es necesario reiniciar",
"Restarting": "Reiniciando",
"Reused": "Reutilizado",
"Save": "Guardar",
"Scanning": "Actualización",
"Select the devices to share this folder with.": "Seleccione los dispositivos con los cuales compartir este repositorio.",
"Select the folders to share with this device.": "Seleccione los repositorios para compartir con este dispositivo.",
"Settings": "Configuración",
"Share": "Compartir",
"Share Folder": "Compartir repositorio",
"Share Folders With Device": "Compartir repositorios con dispositivo",
"Share With Devices": "Compartir con los dispositivos",
"Share this folder?": "¿Compartir este repositorio?",
"Shared With": "Compartido con",
"Short identifier for the folder. Must be the same on all cluster devices.": "Identificador corto para el repositorio. Debe ser el mismo en todos los dispositivos del grupo.",
"Show ID": "Mostrar ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Mostrado en lugar de la ID del dispositivo en el estado del grupo. Será sugerido a otros dispositivos como nombre predeterminado opcional.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Mostrado en lugar de la ID del dispositivo en el estado del grupo. Si se deja en blanco, será usado el nombre sugerido por el dispositivo.",
"Shutdown": "Apagar",
"Shutdown Complete": "Apagado completado",
"Simple File Versioning": "Versiones simple de archivos",
"Single level wildcard (matches within a directory only)": "Carácter comodín de un solo nivel (coincide sólo dentro de un directorio)",
"Source Code": "Código fuente",
"Staggered File Versioning": "Versiones del archivo escalonado",
"Start Browser": "Iniciar navegador",
"Stopped": "Parado",
"Support": "Soporte",
"Support / Forum": "Soporte / Foro",
"Sync Protocol Listen Addresses": "Dirección de escucha del protocolo de sincronización",
"Synchronization": "Sincronización",
"Syncing": "Sincronización",
"Syncthing has been shut down.": "La sincronización esta apagada",
"Syncthing includes the following software or portions thereof:": "Syncthing incluye los siguientes softwares o partes de ellos:",
"Syncthing is restarting.": "Syncthing está reiniciando.",
"Syncthing is upgrading.": "Syncthing se está actualizando.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing parece estar apagado, o hay un problema con su conexión de Internet. Reintentando...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing parece estar experimentando un problema al procesar su solicitud. Por favor, recargue el navegador o reinicie Syncthing si el problema persiste.",
"The aggregated statistics are publicly available at {%url%}.": "Las estadísticas acumuladas están disponibles públicamente en {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuración ha sido guardada pero no activada.\nSyncthing debe reiniciarse para activar la nueva configuración.",
"The device ID cannot be blank.": "La ID del dispositivo no puede estar en blanco.",
"The device ID to enter here can be found in the \"Edit > Show ID\" dialog on the other device. Spaces and dashes are optional (ignored).": "La ID del dispositivo a introducir se puede encontrar en la opción de menú \"Edición > Mostrar ID\" en el otro dispositivo. Espacios y guiones son opcionales (ignorados).",
"The encrypted usage report is sent daily. It is used to track common platforms, folder sizes and app versions. If the reported data set is changed you will be prompted with this dialog again.": "El informe de uso se envía encriptado diariamente. Se utiliza para hacer un seguimiento de plataformas comunes, tamaño de repositorios y versiones de la aplicación. Si el conjunto de datos cambia será notificado mediante este dialogo nuevamente.",
"The entered device ID does not look valid. It should be a 52 or 56 character string consisting of letters and numbers, with spaces and dashes being optional.": "La ID del dispositivo introducida no es válida. Debe ser una cadena de 52 o 56 caracteres consistente en letras y números, con espacios y guiones opcionales.",
"The folder ID cannot be blank.": "La ID del repositorio no puede estar en blanco.",
"The folder ID must be a short identifier (64 characters or less) consisting of letters, numbers and the dot (.), dash (-) and underscode (_) characters only.": "La ID del repositorio debe ser un identificador corto (64 caracteres o menos) consistente solamente en letras, números, punto (.), guion (-) y guion bajo (_).",
"The folder ID must be unique.": "La ID del repositorio debe ser única.",
"The folder path cannot be blank.": "La ruta del repositorio no puede estar vacía.",
"The following intervals are used: for the first hour a version is kept every 30 seconds, for the first day a version is kept every hour, for the first 30 days a version is kept every day, until the maximum age a version is kept every week.": "Los siguientes intervalos se utilizan: para la primera hora una versión se mantiene cada 30 segundos, para el primer día de una versión se mantiene cada hora, durante los primeros 30 días de la versión se mantiene todos los días, hasta que la edad máxima de una versión se mantiene cada semana.",
"The maximum age must be a number and cannot be blank.": "La edad máxima debe ser un número y no puede estar en blanco.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "El tiempo máximo para mantener una versión (en días, establece en 0 para mantener versiones para siempre).",
"The number of old versions to keep, per file.": "El numero de versiones anteriores a conservar, por archivo.",
"The number of versions must be a number and cannot be blank.": "El número de versiones debe ser un número y no puede estar vacío.",
"The rescan interval must be a non-negative number of seconds.": "El intervalo de reescaneo debe ser un número no negativo de segundos.",
"The rescan interval must be at least 5 seconds.": "El intervalo de reescaneo debe ser al menos de 5 segundos.",
"Unknown": "Desconocido",
"Unshared": "No compartido",
"Unused": "No utilizado",
"Up to Date": "Actualizado",
"Upgrade To {%version%}": "Actualizar a {{version}}",
"Upgrading": "Actualizando",
"Upload Rate": "Tasa de subida",
"Use Compression": "Usar compresión",
"Use HTTPS for GUI": "Usar HTTPS para la GUI",
"Version": "Versión",
"Versions Path": "Ruta de versiones",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Las versiones se eliminan automáticamente si son mayores de la edad máxima o mayor que el número de archivos permitidos en un intervalo.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Al agregar un nuevo dispositivo, tenga en cuenta que este dispositivo se debe agregar en el otro lado también.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Al agregar un nuevo repositorio, tenga en cuenta que la ID del repositorio se utiliza para conectar los repositorios entre dispositivos. Se distingue entre mayúsculas y minúsculas y debe ser exactamente igual en todos los dispositivos.",
"Yes": "Sí",
"You must keep at least one version.": "Debe mantener al menos una versión",
"full documentation": "documentación completa",
"items": "ítems",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} quiere compartir repositorio \"{{folder}}\"."
}


@@ -1,8 +1,10 @@
{
"API Key": "Clé API",
"About": "À propos",
"Add": "Ajouter",
"Add Device": "Ajouter un périphérique",
"Add Folder": "Ajouter un répertoire",
"Add new folder?": "Ajouter un nouveau répertoire ?",
"Address": "Adresse",
"Addresses": "Adresses",
"Allow Anonymous Usage Reporting?": "Autoriser le rapport anonyme de statistiques d'utilisation ?",
@@ -11,25 +13,32 @@
"Automatic upgrades": "Mises à jour automatiques",
"Bugs": "Bugs",
"CPU Utilization": "Utilisation du CPU",
"Changelog": "Nouveautés",
"Close": "Fermer",
"Comment, when used at the start of a line": "Commentaire, lorsque utilisé en début de ligne",
"Compression is recommended in most setups.": "La compression est recommandée pour la plupart des configurations.",
"Connection Error": "Erreur de connexion",
"Copied from elsewhere": "Copié d'ailleurs",
"Copied from original": "Copié de l'original",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg et les contributeurs suivants :",
"Delete": "Supprimer",
"Device ID": "ID du périphérique",
"Device Identification": "Identification de l'appareil",
"Device Name": "Nom du périphérique",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "La machine {{device}} ({{address}}) veut se connecter. Voulez-vous ajouter cette machine ?",
"Devices": "Machines",
"Disconnected": "Déconnecté",
"Documentation": "Documentation",
"Download Rate": "Débit de réception",
"Downloaded": "Téléchargé",
"Downloading": "En cours de téléchargement",
"Edit": "Éditer",
"Edit Device": "Éditer le périphérique",
"Edit Folder": "Éditer le répertoire",
"Editing": "Édition",
"Enable UPnP": "Activer l'UPnP",
"Enter comma separated \"ip:port\" addresses or \"dynamic\" to perform automatic discovery of the address.": "Entrer les adresses \"ip:port\" séparées par une virgule ou \"dynamic\" afin d'activer la recherche automatique de l'adresse.",
"Enter ignore patterns, one per line.": "Entrer les modèles à ignorer, un par ligne.",
"Enter ignore patterns, one per line.": "Entrer les masques de filtrage, un par ligne.",
"Error": "Erreur",
"File Versioning": "Versions de fichier",
"File permission bits are ignored when looking for changes. Use on FAT filesystems.": "Les permissions de fichier sont ignorées lors de la recherche de changements. À utiliser sur les systèmes de fichiers de type FAT.",
@@ -38,27 +47,34 @@
"Folder ID": "ID du répertoire",
"Folder Master": "Répertoire maître",
"Folder Path": "Chemin du répertoire",
"Folders": "Répertoires",
"GUI Authentication Password": "Mot de passe d'authentification GUI",
"GUI Authentication User": "Utilistateur autorisé GUI",
"GUI Authentication User": "Utilisateur autorisé GUI",
"GUI Listen Addresses": "Adresse du GUI",
"Generate": "Générer",
"Global Discovery": "Recherche globale",
"Global Discovery Server": "Serveur global de recherche",
"Global State": "État global",
"Idle": "Au repos",
"Ignore": "Ignorer",
"Ignore Patterns": "Modèles à éviter",
"Ignore Permissions": "Ignorer les permissions",
"Incoming Rate Limit (KiB/s)": "Limite du débit entrant (KiB/s)",
"Introducer": "Initiateur",
"Inversion of the given condition (i.e. do not exclude)": "Inverser la condition donnée (i.e. ne pas exclure)",
"Keep Versions": "Conserver les versions",
"Last File Received": "Dernier fichier reçu",
"Last File Synced": "Dernier fichier synchronisé",
"Last seen": "Dernière apparition",
"Later": "Plus tard",
"Latest Release": "Dernière version",
"Local Discovery": "Recherche locale",
"Local State": "État local",
"Maximum Age": "Ancienneté maximum",
"Multi level wildcard (matches multiple directory levels)": "Astérisque à plusieurs niveaux (correspond aux répertoires et sous-répertoires)",
"Never": "Jamais",
"New Device": "Nouvelle machine",
"New Folder": "Nouveau répertoire",
"No": "Non",
"No File Versioning": "Pas de version de fichier",
"Notice": "Notification",
@@ -66,6 +82,7 @@
"Offline": "Offline",
"Online": "Online",
"Out Of Sync": "Non synchronisé",
"Out of Sync Items": "Objets non synchronisés",
"Outgoing Rate Limit (KiB/s)": "Limite du débit sortant (KiB/s)",
"Override Changes": "Écraser les changements",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Chemin du répertoire sur l'ordinateur local. Il sera créé si il n'existe pas. Le caractère tilde (~) peut être utilisé comme raccourci vers",
@@ -73,30 +90,38 @@
"Please wait": "Merci de patienter",
"Preview": "Aperçu",
"Preview Usage Report": "Aperçu du rapport de statistiques d'utilisation",
"Quick guide to supported patterns": "Guide rapide des modèles supportés",
"Quick guide to supported patterns": "Guide rapide des masques supportés",
"RAM Utilization": "Utilisation de la RAM",
"Rescan": "Rescanner",
"Rescan Interval": "Intervalle de scan",
"Restart": "Redémarrer",
"Restart Needed": "Redémarrage nécessaire",
"Restarting": "Redémarrage",
"Reused": "Réutilisé",
"Save": "Sauver",
"Scanning": "En cours de scan",
"Select the devices to share this folder with.": "Sélectionner les appareils avec qui partager ce répertoire.",
"Select the devices to share this folder with.": "Sélectionner les machines avec qui partager ce répertoire.",
"Select the folders to share with this device.": "Sélectionner les répertoires à partager avec cette machine.",
"Settings": "Configuration",
"Share With Devices": "Partager avec les périphériques",
"Share": "Partager",
"Share Folder": "Partager le répertoire",
"Share Folders With Device": "Partager des répertoires avec des machines",
"Share With Devices": "Partage avec des machines",
"Share this folder?": "Voulez-vous partager ce répertoire ?",
"Shared With": "Partagé avec",
"Short identifier for the folder. Must be the same on all cluster devices.": "Court identifiant du répertoire. Il doit être le même sur l'ensemble des appareils du groupe.",
"Short identifier for the folder. Must be the same on all cluster devices.": "Court identifiant du répertoire. Il doit être le même sur l'ensemble des machines du groupe.",
"Show ID": "Montrer l'ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de l'appareil dans le statut du groupe. Sera proposé aux autres nœuds comme un nom par défaut optionnel.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Affiché à la place de l'ID de l'appareil dans le statut du groupe. Si laissé vide, il sera changé par le nom proposé par l'appareil.",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Affiché à la place de l'ID de la machine dans le groupe. Sera proposé aux autres machines comme nom optionnel par défaut.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Affiché à la place de l'ID de la machine dans le groupe. Si laissé vide, il sera mis à jour par le nom proposé par la machine distante.",
"Shutdown": "Éteindre",
"Simple File Versioning": "Versions simples de fichier",
"Shutdown Complete": "Extinction terminée",
"Simple File Versioning": "Suivi simple des versions de fichier",
"Single level wildcard (matches within a directory only)": "Astérisque à un seul niveau (correspond uniquement à lintérieur du répertoire)",
"Source Code": "Code source",
"Staggered File Versioning": "Versions échelonnées de fichier",
"Start Browser": "Démarrer le navigateur web",
"Stopped": "Arrêté",
"Support": "Aide",
"Support / Forum": "Aide / Forum",
"Sync Protocol Listen Addresses": "Adresse du protocole de synchronisation",
"Synchronization": "Synchronisation",
@@ -106,6 +131,7 @@
"Syncthing is restarting.": "Syncthing est cours de redémarrage.",
"Syncthing is upgrading.": "Syncthing est cours de mise à jour.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing semble être éteint, ou il y a un problème avec votre connexion Internet. Nouvelle tentative ...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing semble éprouver des difficultés à appliquer votre requête. Si le problème persiste, veuillez rafraîchir votre navigateur ou redémarrer Syncthing.",
"The aggregated statistics are publicly available at {%url%}.": "Les statistiques agrégées sont disponibles publiquement à l'adresse {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configuration a été sauvée mais pas activée. Syncthing doit redémarrer afin d'activer la nouvelle configuration.",
"The device ID cannot be blank.": "L'ID de l'appareil ne peut être vide.",
@@ -120,10 +146,12 @@
"The maximum age must be a number and cannot be blank.": "L'ancienneté maximum doit être un nombre et ne peut être vide.",
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "Le temps maximum de conservation d'une version (en jours, mettre à 0 pour conserver les versions pour toujours)",
"The number of old versions to keep, per file.": "Le nombre d'anciennes versions à garder, par fichier.",
"The number of versions must be a number and cannot be blank.": "Le nombre de version doit être un nombre et ne peut pas être vide.",
"The number of versions must be a number and cannot be blank.": "Le nombre de versions doit être numérique, et ne peut pas être vide.",
"The rescan interval must be a non-negative number of seconds.": "L'intervalle de scan ne doit pas être un nombre négatif de secondes.",
"The rescan interval must be at least 5 seconds.": "L'intervalle de scan doit être d'au minimum 5 secondes.",
"Unknown": "Inconnu",
"Unshared": "Non partagé",
"Unused": "Non utilisé",
"Up to Date": "Synchronisation à jour",
"Upgrade To {%version%}": "Mettre à jour vers {{version}}",
"Upgrading": "Mise à jour de Syncthing",
@@ -132,11 +160,12 @@
"Use HTTPS for GUI": "Utiliser l'HTTPS pour le GUI",
"Version": "Version",
"Versions Path": "Emplacement des versions",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions sont supprimées automatiquement si celles-ci sont plus anciennes que l'ancienneté maximum ou que leur nombre est supérieur au nombre autorisé dans une intervale.",
"Versions are automatically deleted if they are older than the maximum age or exceed the number of files allowed in an interval.": "Les versions seront supprimées automatiquement, si elles dépassent la durée maximum de conservation, ou si leur nombre est supérieur à la valeur autorisée dans l'intervalle.",
"When adding a new device, keep in mind that this device must be added on the other side too.": "Lorsqu'un appareil est ajouté, gardez à l'esprit que cet appareil doit aussi être ajouté de l'autre coté.",
"When adding a new folder, keep in mind that the Folder ID is used to tie folders together between devices. They are case sensitive and must match exactly between all devices.": "Lorsqu'un nouveau répertoire est ajouté, gardez à l'esprit que son ID est utilisé pour lier les répertoires à travers les appareils. Les ID sont sensibles à la casse et doivent être identiques à travers tous les nœuds.",
"Yes": "Oui",
"You must keep at least one version.": "Vous devez garder au minimum une version.",
"full documentation": "documentation complète",
"items": "éléments"
"items": "éléments",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} veut partager le répertoire \"{{folder}}\"."
}


@@ -1,8 +1,10 @@
{
"API Key": "API kulcs",
"About": "Névjegy",
"Add": "Add",
"Add Device": "Eszköz hozzáadása",
"Add Folder": "Mappa hozzáadása",
"Add new folder?": "Add new folder?",
"Address": "Cím",
"Addresses": "Címek",
"Allow Anonymous Usage Reporting?": "Engedélyezed a névtelen felhasználási adatok küldését?",
@@ -11,18 +13,25 @@
"Automatic upgrades": "Automatikus frissítés",
"Bugs": "Hibák",
"CPU Utilization": "Processzor használat",
"Changelog": "Changelog",
"Close": "Bezárás",
"Comment, when used at the start of a line": "Megjegyzés, a sor elején használva",
"Compression is recommended in most setups.": "A tömörítés a a legtöbb esetben ajánlott",
"Connection Error": "Kapcsolódási hiba",
"Copied from elsewhere": "Másolva máshonnan",
"Copied from original": "Másolva az eredetiről",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg és az alábbi Közreműködők",
"Delete": "Törlés",
"Device ID": "Eszköz azonosító",
"Device Identification": "Eszköz azonosító",
"Device Name": "Eszköz neve",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Device {{device}} ({{address}}) wants to connect. Add new device?",
"Devices": "Devices",
"Disconnected": "Kapcsolat bontva",
"Documentation": "Dokumentáció",
"Download Rate": "Letöltési sebesség",
"Downloaded": "Letöltve",
"Downloading": "Letöltés",
"Edit": "Szerkesztés",
"Edit Device": "Eszköz szerkesztése",
"Edit Folder": "Mappa szerkesztése",
@@ -38,6 +47,7 @@
"Folder ID": "Mappa azonosító",
"Folder Master": "Központi mappa",
"Folder Path": "Mappa elérési útja",
"Folders": "Mappák",
"GUI Authentication Password": "Grafikus felület jelszava",
"GUI Authentication User": "Grafikus felület felhasználó neve ",
"GUI Listen Addresses": "Grafikus felület címe",
@@ -46,19 +56,25 @@
"Global Discovery Server": "Globális felfedező szerver",
"Global State": "Globális állapot",
"Idle": "Tétlen",
"Ignore": "Ignore",
"Ignore Patterns": "Figyelmen kívül hagyás",
"Ignore Permissions": "Jogosultságok figyelmen kívül hagyása",
"Incoming Rate Limit (KiB/s)": "Bejövő sebesség korlát (KIB/mp)",
"Introducer": "Bevezető",
"Inversion of the given condition (i.e. do not exclude)": "A feltétel ellentéte (pl. ki nem hagyás)",
"Keep Versions": "Megtartott verziók",
"Last File Received": "Last File Received",
"Last File Synced": "Utolsó szinkronizált fájl",
"Last seen": "Utoljára látva",
"Later": "Later",
"Latest Release": "Utolsó kiadás",
"Local Discovery": "Helyi felfedezés",
"Local State": "Helyi állapot",
"Maximum Age": "Maximális kor",
"Multi level wildcard (matches multiple directory levels)": "Több szintű helyettesítő karakter (több könyvtár szintre érvényesül)",
"Never": "Soha",
"New Device": "New Device",
"New Folder": "New Folder",
"No": "Nem",
"No File Versioning": "Nincs fájl verziózás",
"Notice": "Megjegyzés",
@@ -66,6 +82,7 @@
"Offline": "Nincs kapcsolat",
"Online": "Kapcsolódva",
"Out Of Sync": "Nincs szinkronban",
"Out of Sync Items": "Out of Sync Items",
"Outgoing Rate Limit (KiB/s)": "Kimenő sávszélesség (KiB/mp)",
"Override Changes": "Változtatások felülbírálása",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "A mappa elérési útja az eszközön. Amennyiben nem létezik, a program automatikusan létrehozza. A hullámvonal (~) a következő helyettesítésre használható: ",
@@ -80,23 +97,31 @@
"Restart": "Újraindítás",
"Restart Needed": "Újraindítás szükséges",
"Restarting": "Újraindulás",
"Reused": "Újrafelhasználva",
"Save": "Mentés",
"Scanning": "Átnézés",
"Select the devices to share this folder with.": "Válaszd ki az eszközöket amelyekkel meg szeretnéd osztani a mappát",
"Select the folders to share with this device.": "Válaszd ki a mappákat amiket meg szeretnél osztani ezzel az eszközzel",
"Settings": "Beállítások",
"Share": "Share",
"Share Folder": "Share Folder",
"Share Folders With Device": "Mappák megosztása az eszközzel",
"Share With Devices": "Megosztás más eszközzel",
"Share this folder?": "Share this folder?",
"Shared With": "Megosztva ezekkel:",
"Short identifier for the folder. Must be the same on all cluster devices.": "Rövid azonosító. Minden megosztott eszközön azonosnak kell lennie.",
"Show ID": "Azonosító mutatása",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Az eszköz azonosító helyett jelenik meg. A többi eszközön alapértelmezett névként használható. ",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Az eszköz azonosító helyett jelenik meg. Üresen hagyva az eszköz saját neve lesz használva ",
"Shutdown": "Leállítás",
"Shutdown Complete": "Shutdown Complete",
"Simple File Versioning": "Egyszerű fájl verziókövetés",
"Single level wildcard (matches within a directory only)": "Egyszintű helyettesítő karakter (csak egy mappára érvényes)",
"Source Code": "Forráskód",
"Staggered File Versioning": "Többszintű fájl verziókövetés",
"Start Browser": "Böngésző indítása",
"Stopped": "Leállítva",
"Support": "Support",
"Support / Forum": "Támogatás / Fórum",
"Sync Protocol Listen Addresses": "Szinkronizációs protokoll címe",
"Synchronization": "Szinkronizálás",
@@ -106,6 +131,7 @@
"Syncthing is restarting.": "Syncthing újraindul",
"Syncthing is upgrading.": "Syncthing frissül",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Úgy tűnik, hogy a Syncthing nem működik, vagy valami probléma van az hálózati kapcsolattal. Újra próbálom...",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
"The aggregated statistics are publicly available at {%url%}.": "Az összevont statisztikák nyilvánosan elérhetők a {{url}} címen.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "A beállítások elmentésre kerültek, de nem lettek aktiválva. Indítsd újra a Syncthing-et, hogy aktiváld őket.",
"The device ID cannot be blank.": "Az eszköz azonosító nem lehet üres.",
@@ -124,6 +150,8 @@
"The rescan interval must be a non-negative number of seconds.": "Az átnézési intervallum nullánál nagyobb másodperc érték kell legyen",
"The rescan interval must be at least 5 seconds.": "Az átnézési intervallumnak legalább 5 másodpercnek kell lennie.",
"Unknown": "Ismeretlen",
"Unshared": "Nincs megosztva",
"Unused": "Nincs használatban",
"Up to Date": "Friss",
"Upgrade To {%version%}": "Frissítés a {{version}} verzióra",
"Upgrading": "Frissítés",
@@ -138,5 +166,6 @@
"Yes": "Igen",
"You must keep at least one version.": "Legalább egy verziót meg kell tartanod",
"full documentation": "teljes dokumentáció",
"items": "tételek"
"items": "tételek",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} wants to share folder \"{{folder}}\"."
}


@@ -1,8 +1,10 @@
{
"API Key": "Chiave API",
"About": "Informazioni",
"Add": "Aggiungi",
"Add Device": "Aggiungi Dispositivo",
"Add Folder": "Aggiungi Cartella",
"Add new folder?": "Aggiungere una nuova cartella?",
"Address": "Indirizzo",
"Addresses": "Indirizzi",
"Allow Anonymous Usage Reporting?": "Abilitare Statistiche Anonime di Utilizzo?",
@@ -11,18 +13,25 @@
"Automatic upgrades": "Aggiornamenti automatici",
"Bugs": "Bug",
"CPU Utilization": "Utilizzo CPU",
"Changelog": "Changelog",
"Close": "Chiudi",
"Comment, when used at the start of a line": "Per commentare, va inserito all'inizio di una riga",
"Compression is recommended in most setups.": "La compressione è raccomandata nella maggior parte delle configurazioni.",
"Connection Error": "Errore di Connessione",
"Copied from elsewhere": "Copiato da qualche altra parte",
"Copied from original": "Copiato dall'originale",
"Copyright © 2014 Jakob Borg and the following Contributors:": "Copyright © 2014 Jakob Borg e i seguenti Collaboratori:",
"Delete": "Elimina",
"Device ID": "ID Dispositivo",
"Device Identification": "Identificazione Dispositivo",
"Device Name": "Nome Dispositivo",
"Device {%device%} ({%address%}) wants to connect. Add new device?": "Il dispositivo {{device}} ({{address}}) chiede di connettersi. Aggiungere il nuovo dispositivo?",
"Devices": "Dispositivi",
"Disconnected": "Disconnesso",
"Documentation": "Documentazione",
"Download Rate": "Velocità Download",
"Downloaded": "Scaricato",
"Downloading": "Sto scaricando",
"Edit": "Modifica",
"Edit Device": "Modifica Dispositivo",
"Edit Folder": "Modifica Cartella",
@@ -38,6 +47,7 @@
"Folder ID": "ID Cartella",
"Folder Master": "Cartella Principale",
"Folder Path": "Percorso Cartella",
"Folders": "Cartelle",
"GUI Authentication Password": "Password di Autenticazione dell'Utente",
"GUI Authentication User": "Utente dell'Interfaccia Grafica",
"GUI Listen Addresses": "Indirizzi dell'Interfaccia Grafica",
@@ -46,19 +56,25 @@
"Global Discovery Server": "Server di Ricerca Globale",
"Global State": "Stato Globale",
"Idle": "Inattivo",
"Ignore": "Ignora",
"Ignore Patterns": "Schemi Esclusione File",
"Ignore Permissions": "Ignora Permessi",
"Incoming Rate Limit (KiB/s)": "Limite Velocità in Ingresso (KiB/s)",
"Introducer": "Introduttore",
"Inversion of the given condition (i.e. do not exclude)": "Inversione della condizione indicata (ad es. non escludere)",
"Keep Versions": "Versioni Mantenute",
"Last File Received": "Ultimo File Ricevuto",
"Last File Synced": "Ultimo File Sincronizzato",
"Last seen": "Ultima connessione",
"Later": "Più Tardi",
"Latest Release": "Ultima Versione",
"Local Discovery": "Individuazione Locale",
"Local State": "Stato Locale",
"Maximum Age": "Durata Massima",
"Multi level wildcard (matches multiple directory levels)": "Metacarattere multi-livello (corrisponde alle cartelle e alle sotto-cartelle)",
"Never": "Mai",
"New Device": "Nuovo Dispositivo",
"New Folder": "Nuova Cartella",
"No": "No",
"No File Versioning": "Nessun Controllo Versione",
"Notice": "Avviso",
@@ -66,6 +82,7 @@
"Offline": "Non in linea",
"Online": "In linea",
"Out Of Sync": "Non Sincronizzati",
"Out of Sync Items": "Elementi Non Sincronizzati",
"Outgoing Rate Limit (KiB/s)": "Limite Velocità in Uscita (KiB/s)",
"Override Changes": "Ignora Modifiche",
"Path to the folder on the local computer. Will be created if it does not exist. The tilde character (~) can be used as a shortcut for": "Percorso della cartella nel computer locale. Verrà creata se non esiste già. Il carattere tilde (~) può essere utilizzato come scorciatoia per",
@@ -80,23 +97,31 @@
"Restart": "Riavvia",
"Restart Needed": "Riavvio Necessario",
"Restarting": "Riavvio",
"Reused": "Riutilizzato",
"Save": "Salva",
"Scanning": "Scansione in corso",
"Select the devices to share this folder with.": "Seleziona i dispositivi con i quali condividere questa cartella.",
"Select the folders to share with this device.": "Seleziona le cartelle da condividere con questo dispositivo.",
"Settings": "Impostazioni",
"Share": "Condividi",
"Share Folder": "Condividi la Cartella",
"Share Folders With Device": "Condividi Cartelle con il Dispositivo",
"Share With Devices": "Condividi con i Dispositivi",
"Share this folder?": "Vuoi condividere questa cartella?",
"Shared With": "Condiviso Con",
"Short identifier for the folder. Must be the same on all cluster devices.": "Breve identificatore della cartella. Deve essere lo stesso su tutti i dispositivi del cluster.",
"Show ID": "Mostra ID",
"Shown instead of Device ID in the cluster status. Will be advertised to other devices as an optional default name.": "Visibile al posto dell'ID Dispositivo nello stato del cluster. Negli altri dispositivi verrà presentato come nome predefinito opzionale.",
"Shown instead of Device ID in the cluster status. Will be updated to the name the device advertises if left empty.": "Visibile al posto dell'ID Dispositivo nello stato del cluster. Se viene lasciato vuoto, verrà utilizzato il nome proposto dal dispositivo.",
"Shutdown": "Arresta",
"Shutdown Complete": "Arresto Eseguito",
"Simple File Versioning": "Controllo Versione Semplice",
"Single level wildcard (matches within a directory only)": "Metacarattere di singolo livello (corrisponde solo all'interno di una cartella)",
"Source Code": "Codice Sorgente",
"Staggered File Versioning": "Controllo Versione Cadenzato",
"Start Browser": "Avvia Browser",
"Stopped": "Fermato",
"Support": "Supporto",
"Support / Forum": "Supporto / Forum",
"Sync Protocol Listen Addresses": "Indirizzi del Protocollo di Sincronizzazione",
"Synchronization": "Sincronizzazione",
@@ -106,6 +131,7 @@
"Syncthing is restarting.": "Riavvio di Syncthing in corso.",
"Syncthing is upgrading.": "Aggiornamento di Syncthing in corso.",
"Syncthing seems to be down, or there is a problem with your Internet connection. Retrying…": "Syncthing sembra inattivo, oppure c'è un problema con la tua connessione a Internet. Nuovo tentativo…",
"Syncthing seems to be experiencing a problem processing your request. Please refresh the page or restart Syncthing if the problem persists.": "Syncthing seems to be experiencing a problem processing your request. Please reload your browser or restart Syncthing if the problem persists.",
"The aggregated statistics are publicly available at {%url%}.": "Le statistiche aggregate sono disponibili pubblicamente su {{url}}.",
"The configuration has been saved but not activated. Syncthing must restart to activate the new configuration.": "La configurazione è stata salvata ma non attivata. Devi riavviare Syncthing per attivare la nuova configurazione.",
"The device ID cannot be blank.": "L'ID del dispositivo non può essere vuoto.",
@@ -121,9 +147,11 @@
"The maximum time to keep a version (in days, set to 0 to keep versions forever).": "La durata massima di una versione (in giorni, imposta a 0 per mantenere le versioni per sempre).",
"The number of old versions to keep, per file.": "Il numero di vecchie versioni da mantenere, per file.",
"The number of versions must be a number and cannot be blank.": "Il numero di versioni dev'essere un numero e non può essere vuoto.",
"The rescan interval must be a non-negative number of seconds.": "The rescan interval must be a non-negative number of seconds.",
"The rescan interval must be a non-negative number of seconds.": "L'intervallo di scansione deve essere un numero superiore a zero secondi.",
"The rescan interval must be at least 5 seconds.": "L'intervallo di scansione non può essere inferiore a 5 secondi.",
"Unknown": "Sconosciuto",
"Unshared": "Non Condiviso",
"Unused": "Non Utilizzato",
"Up to Date": "Sincronizzato",
"Upgrade To {%version%}": "Aggiorna alla {{version}}",
"Upgrading": "Aggiornamento",
@@ -138,5 +166,6 @@
"Yes": "Sì",
"You must keep at least one version.": "È necessario mantenere almeno una versione.",
"full documentation": "documentazione completa",
"items": "elementi"
"items": "elementi",
"{%device%} wants to share folder \"{%folder%}\".": "{{device}} vuole condividere la cartella \"{{folder}}\"."
}
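For context on the entries above: these files map the English source strings (whose keys carry {%name%} placeholders) to localized strings that use {{name}} interpolation tokens, and entries not yet translated simply repeat the English text. The snippet below is a minimal, illustrative sketch of how such a map could be looked up and interpolated; it is not Syncthing's actual GUI code, and the translations sample and translate() helper are assumptions for demonstration only.

// Illustrative TypeScript sketch, not taken from the Syncthing GUI.
const translations: Record<string, string> = {
  // Sample entries in the same shape as the diff above.
  "Out of Sync Items": "Elementi Non Sincronizzati",
  "{%device%} wants to share folder \"{%folder%}\".":
    "{{device}} vuole condividere la cartella \"{{folder}}\".",
};

function translate(key: string, params: Record<string, string> = {}): string {
  // Fall back to the English key (with its {%name%} placeholders rewritten
  // to {{name}} tokens) when no localized value exists.
  const template =
    translations[key] ?? key.replace(/\{%(\w+)%\}/g, "{{$1}}");
  // Substitute each {{name}} token with the corresponding runtime value.
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => params[name] ?? "");
}

// Example:
// translate('{%device%} wants to share folder "{%folder%}".',
//           { device: "laptop", folder: "Photos" })
// -> 'laptop vuole condividere la cartella "Photos".'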

Some files were not shown because too many files have changed in this diff.